localai.computer


© 2026 localai.computer. Hardware recommendations for running AI models locally.



NVIDIA DGX Station A100

NVIDIA · Tower · 4x NVIDIA A100 80GB · 320GB total VRAM

Complete specifications and purchasing guidance for this pre-configured system.

Quick answer

The NVIDIA DGX Station A100 is best for teams that want a pre-configured path to local AI without building a system from parts.

Price: $149,000
CPU: AMD EPYC 7742
GPU: 4x NVIDIA A100 80GB
Memory: 512GB


What's Inside

Hardware included with this configuration.

CPU: AMD EPYC 7742
GPU: 4x NVIDIA A100 80GB
Memory: 512GB
Storage: 7,680GB (7.68TB)
Power supply: 1,500W PSU
Discrete GPU included: Yes
Specifications

Technical details for deployment planning.

Manufacturer: NVIDIA
Category: Pre-built system
Form factor: Tower
Total VRAM / unified memory: 320GB
GPU cores (aggregate): 27,648
Power / TDP: 1,500W
Noise level: Loud
Dimensions: 25.2 in × 10.4 in × 20.1 in
Warranty: 3 years
Release date: Nov 16, 2020
Where to Buy

Pre-configured systems are available from authorized retailers.

NVIDIA Store (Recommended): Contact Sales

Note: Affiliate links help support LocalAI Computer. Prices may vary.

System decision workflow

  1. Check model requirements
  2. Validate compatibility
  3. Compare GPU options
  4. Review build plans
  5. Open buying guides

Systems FAQ

Is NVIDIA DGX Station A100 good for local AI workloads?

Use this page as a baseline for memory capacity, GPU capability, and power draw, then validate the exact model and quantization fit with compatibility checks before buying.
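A rough way to sanity-check model and quantization fit is to estimate the weights-only footprint from parameter count and bits per weight, padded by an overhead factor for KV cache and runtime buffers. The function name, the 1.2 overhead factor, and the 70B example below are illustrative assumptions, not vendor figures; real memory use depends on context length, batch size, and runtime.

```python
# Sketch of a VRAM fit check against the DGX Station A100's 320GB total VRAM.
# Overhead factor is a rough assumption for KV cache, activations, and buffers.

def estimate_vram_gb(params_billion: float, bits_per_weight: int,
                     overhead: float = 1.2) -> float:
    """Estimate memory needed to serve a model, in GB."""
    weights_gb = params_billion * bits_per_weight / 8  # weights alone
    return weights_gb * overhead

# Hypothetical 70B-parameter model at common quantization levels.
for bits in (16, 8, 4):
    need = estimate_vram_gb(70, bits)
    verdict = "fits" if need <= 320 else "does not fit"
    print(f"70B @ {bits}-bit: ~{need:.0f}GB -> {verdict} in 320GB")
```

Run the same check with your target model's parameter count and quantization before committing to hardware.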

How should I compare this system against other options?

Compare price, available memory, and power draw against other systems and GPU pages, then map that to your target model requirements.
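One simple way to make that comparison concrete is to normalize each candidate to dollars per GB of memory alongside its power draw. The comparison system below is a hypothetical placeholder; only the DGX Station A100 figures come from this page.

```python
# Compare systems on price per GB of VRAM/unified memory and TDP.
# "Example build A" is a made-up placeholder, not a real quote.

systems = {
    "DGX Station A100": {"price": 149_000, "vram_gb": 320, "tdp_w": 1500},
    "Example build A":  {"price": 25_000,  "vram_gb": 96,  "tdp_w": 1200},  # hypothetical
}

for name, s in systems.items():
    dollars_per_gb = s["price"] / s["vram_gb"]
    print(f"{name}: ${dollars_per_gb:,.0f}/GB memory, {s['tdp_w']}W")
```

Dollars per GB is only one axis; weigh it against interconnect bandwidth, support, and noise for your deployment.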

What is the next step after reviewing system specs?

Open model requirements and compatibility routes to confirm whether your target models run at acceptable speed and memory headroom.