© 2026 localai.computer. Hardware recommendations for running AI models locally.


NVIDIA DGX H100

NVIDIA · Rack Mount (10U) · 8x NVIDIA H100 80GB SXM5 · 640GB total VRAM

Complete specifications and purchasing guidance for this pre-configured system.

Quick answer

NVIDIA DGX H100 is best for teams that want a pre-configured path to local AI without building from parts.

Price: $300,000
CPU: Dual Intel Xeon Platinum 8480C
GPU: 8x NVIDIA H100 80GB SXM5
Memory: 2TB (2,048GB)


What's Inside

Hardware included with this configuration.

CPU: Dual Intel Xeon Platinum 8480C
GPU: 8x NVIDIA H100 80GB SXM5
Memory: 2TB (2,048GB)
Storage: 30TB (30,720GB SSD)
Power supply: 10,200W PSU
Discrete GPU included: Yes
Specifications

Technical details for deployment planning.

Manufacturer: NVIDIA
Category: Pre-built system
Form factor: Rack Mount (10U)
Total VRAM / unified memory: 640GB
GPU cores (aggregate): 135,168
Power / TDP: 10,200W
Noise level: Loud
Dimensions: 10U rack (442 mm × 482 mm × 891 mm)
Warranty: 3 years
Release date: Sep 21, 2022
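The 10,200W power figure above translates directly into operating cost. A minimal sketch, assuming $0.15/kWh electricity and a caller-supplied average utilization (both assumptions, not figures from this page; real draw varies with workload):

```python
# Rough operating-cost estimate from the DGX H100's 10,200W max power draw.
# The $0.15/kWh rate and the utilization fraction are assumptions; substitute
# your local electricity price and measured average load.

def daily_energy_kwh(tdp_watts: float, utilization: float = 1.0) -> float:
    """Energy consumed over 24 hours at a given average utilization."""
    return tdp_watts / 1000 * 24 * utilization

def daily_cost_usd(tdp_watts: float, price_per_kwh: float = 0.15,
                   utilization: float = 1.0) -> float:
    """Electricity cost for 24 hours of operation."""
    return daily_energy_kwh(tdp_watts, utilization) * price_per_kwh

print(f"{daily_energy_kwh(10_200):.1f} kWh/day")  # 244.8 kWh/day at full load
print(f"${daily_cost_usd(10_200):.2f}/day")       # $36.72/day at $0.15/kWh
```

At sustained full load that is roughly 245 kWh per day, so power and cooling belong in any total-cost comparison against smaller systems.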
Where to Buy

Pre-configured systems available from authorized retailers.

NVIDIA Store (Recommended): Contact Sales

Note: Affiliate links help support LocalAI Computer. Prices may vary.

System decision workflow

Check model requirements → Validate compatibility → Compare GPU options → Review build plans → Open buying guides

Systems FAQ

Is NVIDIA DGX H100 good for local AI workloads?

Use this page as a baseline for memory, GPU capability, and power. Then validate that your exact model and quantization fit within the available VRAM in compatibility checks before buying.
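The quantization fit check can be sketched as simple arithmetic against this system's 640GB of VRAM. The 1.2x overhead factor (KV cache, activations, runtime buffers) is an assumption for illustration; validate against real compatibility checks before buying.

```python
# Minimal sketch of a "does this model fit?" check against 640GB of VRAM.
# The overhead factor is a hypothetical allowance for KV cache, activations,
# and runtime buffers, not a figure from this page.

BYTES_PER_PARAM = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

def fits_in_vram(params_billions: float, quant: str,
                 vram_gb: float = 640, overhead: float = 1.2) -> bool:
    """True if the quantized weights plus overhead fit in the given VRAM."""
    weights_gb = params_billions * BYTES_PER_PARAM[quant]
    return weights_gb * overhead <= vram_gb

# A 405B-parameter model at fp16: 405 * 2.0 * 1.2 = 972GB, does not fit.
# The same model at int4:        405 * 0.5 * 1.2 = 243GB, fits comfortably.
```

The point of the sketch: total VRAM alone is not the answer; bytes per parameter at your chosen quantization, plus headroom for context length, decide the fit.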

How should I compare this system against other options?

Compare price, available memory, and power draw against other systems and GPU pages, then map that to your target model requirements.
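One way to make that comparison concrete is to normalize each candidate by VRAM per dollar and VRAM per watt. In this sketch, only the DGX H100 row uses figures from this page; the competing entry is a hypothetical placeholder, not a real quote.

```python
# Hypothetical comparison helper: rank systems by memory per dollar and per
# watt. The "Example 4x GPU workstation" row is a placeholder for whatever
# alternatives you are actually pricing.

systems = [
    {"name": "NVIDIA DGX H100", "price": 300_000, "vram_gb": 640, "tdp_w": 10_200},
    {"name": "Example 4x GPU workstation", "price": 40_000, "vram_gb": 192, "tdp_w": 2_400},
]

for s in systems:
    s["gb_per_kusd"] = s["vram_gb"] / (s["price"] / 1000)  # GB per $1,000
    s["gb_per_kw"] = s["vram_gb"] / (s["tdp_w"] / 1000)    # GB per kW of draw

for s in sorted(systems, key=lambda s: s["gb_per_kusd"], reverse=True):
    print(f'{s["name"]}: {s["gb_per_kusd"]:.2f} GB/$1k, {s["gb_per_kw"]:.1f} GB/kW')
```

Ratios like these only narrow the field; interconnect bandwidth, support, and whether one large model must fit on a single system still decide the purchase.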

What is the next step after reviewing system specs?

Open the model requirements and compatibility pages to confirm that your target models run with acceptable speed and memory headroom.