© 2026 localai.computer. Hardware recommendations for running AI models locally.


NVIDIA DGX A100

NVIDIA · Rack Mount (6U) · 8x NVIDIA A100 80GB · 640GB total VRAM

Complete specifications and purchasing guidance for this pre-configured system.

Quick answer

NVIDIA DGX A100 is best for teams that want a pre-configured path to local AI without building from parts.

  • Price: $199,000
  • CPU: Dual AMD EPYC 7742
  • GPU: 8x NVIDIA A100 80GB
  • Memory: 2TB (2,048GB)


What's Inside

Hardware included with this configuration.

  • CPU: Dual AMD EPYC 7742
  • GPU: 8x NVIDIA A100 80GB
  • Memory: 2TB (2,048GB)
  • Storage: 30TB (30,720GB SSD)
  • Power supply: 6,500W PSU
  • Discrete GPU included: Yes
Specifications

Technical details for deployment planning.

  • Manufacturer: NVIDIA
  • Category: Pre-built system
  • Form factor: Rack Mount (6U)
  • Total VRAM / unified memory: 640GB
  • GPU cores (aggregate): 55,296
  • Power / TDP: 6,500W
  • Noise level: Loud
  • Dimensions: 6U rack (264 mm × 482 mm × 820 mm)
  • Warranty: 3 years
  • Release date: May 14, 2020
Where to Buy

Pre-configured systems available from authorized retailers.

  • NVIDIA Store (Recommended): Contact Sales for pricing.

Note: Affiliate links help support LocalAI Computer. Prices may vary.

System decision workflow

Check model requirements → Validate compatibility → Compare GPU options → Review build plans → Open buying guides

Systems FAQ

Is NVIDIA DGX A100 good for local AI workloads?

Use this page as a baseline for memory, GPU, and power. Then validate exact model and quantization fit in compatibility checks before buying.
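As a rough sanity check before those compatibility checks, you can estimate whether a model fits in this system's 640GB VRAM pool. The sketch below assumes a weights-only footprint plus a ~20% overhead allowance for KV cache, activations, and framework buffers; real requirements vary with context length and serving stack, so treat it as a first filter, not a guarantee.

```python
# Rough VRAM-fit check against this system's 640GB pool.
# The 20% overhead factor is an assumption, not a measured figure.

def vram_needed_gb(params_billions: float, bits_per_weight: float,
                   overhead: float = 0.20) -> float:
    """Approximate VRAM needed to load a model at a given quantization."""
    weights_gb = params_billions * bits_per_weight / 8  # GB for weights alone
    return weights_gb * (1 + overhead)

TOTAL_VRAM_GB = 8 * 80  # 8x NVIDIA A100 80GB

for params, bits in [(70, 16), (180, 8), (405, 4)]:
    need = vram_needed_gb(params, bits)
    verdict = "fits" if need <= TOTAL_VRAM_GB else "does not fit"
    print(f"{params}B @ {bits}-bit: ~{need:.0f} GB -> {verdict} in {TOTAL_VRAM_GB} GB")
```

A 70B model at 16-bit needs roughly 168GB by this estimate, leaving ample headroom; always confirm against the actual model's requirements page before buying.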

How should I compare this system against other options?

Compare price, available memory, and power draw against other systems and GPU pages, then map that to your target model requirements.
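One way to make that comparison concrete is to normalize each candidate to dollars per GB of VRAM alongside peak power draw. Only this page's numbers are filled in below; the commented-out entry is a placeholder to populate from the other system and GPU pages.

```python
# Comparison metrics derived from this page's figures; add other systems
# from their own spec pages before drawing conclusions.

systems = {
    "NVIDIA DGX A100": {"price_usd": 199_000, "vram_gb": 640, "power_w": 6_500},
    # "Other system": {"price_usd": ..., "vram_gb": ..., "power_w": ...},
}

for name, s in systems.items():
    usd_per_gb = s["price_usd"] / s["vram_gb"]
    print(f"{name}: ~${usd_per_gb:,.0f}/GB VRAM, {s['power_w']:,} W peak draw")
```

For this system that works out to roughly $311 per GB of VRAM; price per GB is only one axis, so weigh it against power, noise, and rack requirements.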

What is the next step after reviewing system specs?

Open model requirements and compatibility routes to confirm whether your target models run at acceptable speed and memory headroom.
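For the speed side of that check, single-stream token generation is roughly memory-bandwidth-bound, so a crude upper bound is aggregate memory bandwidth divided by the bytes read per token (approximately the weight footprint). The sketch below uses the A100 80GB SXM's roughly 2,039 GB/s HBM2e bandwidth and assumes ideal tensor parallelism across all eight GPUs; real throughput will be meaningfully lower.

```python
# Back-of-envelope decode-speed ceiling: tokens/s <= total bandwidth / weight bytes.
# Assumes perfect scaling across GPUs and ignores compute, interconnect,
# and KV-cache traffic, so treat the result as an upper bound only.

HBM_BW_GBPS_PER_GPU = 2_039  # A100 80GB SXM HBM2e bandwidth (approximate)
NUM_GPUS = 8

def max_tokens_per_sec(params_billions: float, bits_per_weight: float) -> float:
    weight_gb = params_billions * bits_per_weight / 8
    return (HBM_BW_GBPS_PER_GPU * NUM_GPUS) / weight_gb

print(f"70B @ 16-bit: ~{max_tokens_per_sec(70, 16):.0f} tok/s upper bound")
```

If a model's ceiling by this estimate is already below your target speed, it will not be acceptable in practice; if it is comfortably above, move on to real compatibility and benchmark checks.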