
meta-llama/Llama-3.3-70B-Instruct speed on NVIDIA H100 SXM5 80GB

Quantization-specific throughput and VRAM requirements for meta-llama/Llama-3.3-70B-Instruct running on the NVIDIA H100 SXM5 80GB.

Speed Snapshot

Topline estimate from compatibility data:

  • Model: meta-llama/Llama-3.3-70B-Instruct
  • GPU: NVIDIA H100 SXM5 80GB
  • Q4 speed: 186 tok/s
  • Q4 VRAM required: 34GB
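
The VRAM figures follow from simple byte-per-weight arithmetic. Here is a minimal sketch, assuming VRAM needed ≈ weight count × bytes per weight; the 68.5B "effective" weight count is back-derived from this page's 137GB FP16 figure (137 / 2 bytes) and is an assumption, not an official parameter count:

```python
# Byte-per-weight arithmetic behind the VRAM figures on this page.
# Assumption: VRAM needed is roughly weight count x bytes per weight,
# ignoring KV cache and activation overhead.

PARAMS_B = 68.5  # billions of weights implied by the page's numbers (assumed)

BYTES_PER_WEIGHT = {
    "Q4": 0.5,    # 4-bit quantization: half a byte per weight
    "Q8": 1.0,    # 8-bit quantization: one byte per weight
    "FP16": 2.0,  # 16-bit floats: two bytes per weight
}

def estimate_vram_gb(params_b: float, quant: str) -> float:
    """Rough weight-memory estimate in GB for a quantization level."""
    return params_b * BYTES_PER_WEIGHT[quant]

for quant in ("Q4", "Q8", "FP16"):
    print(f"{quant}: ~{estimate_vram_gb(PARAMS_B, quant):.0f} GB")
# -> Q4: ~34 GB, Q8: ~68 GB, FP16: ~137 GB
```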
Data Source
Calculation and benchmark status

Speed values come from the compatibility dataset (`estimatedTokensPerSec`) and are sorted by quantization.
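
As a rough illustration of that pipeline, the sketch below filters a compatibility dataset to this model/GPU pair and sorts by quantization. Only the `estimatedTokensPerSec` field name comes from this page; the file name and the `model`, `gpu`, and `quant` fields are hypothetical:

```python
import json

QUANT_ORDER = {"Q4": 0, "Q8": 1, "FP16": 2}  # smallest footprint first

with open("compatibility.json") as f:  # hypothetical dataset file
    entries = json.load(f)

# Keep rows for this model/GPU pair, then sort by quantization level.
rows = [
    e for e in entries
    if e.get("model") == "meta-llama/Llama-3.3-70B-Instruct"
    and e.get("gpu") == "NVIDIA H100 SXM5 80GB"
]
rows.sort(key=lambda e: QUANT_ORDER.get(e.get("quant"), len(QUANT_ORDER)))

for e in rows:
    print(f'{e["quant"]}: {e["estimatedTokensPerSec"]} tok/s')
```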

For full verdict logic and alternate GPUs, see the canonical compatibility page.

Open full compatibility report

Quantization Speed Table

| Quantization | VRAM needed | VRAM available | Speed | Verdict |
| --- | --- | --- | --- | --- |
| Q4 | 34GB | 80GB | 186 tok/s | ✅ Fits |
| Q8 | 68GB | 80GB | 128 tok/s | ✅ Fits |
| FP16 | 137GB | 80GB | 60 tok/s | ❌ Not recommended |
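
The verdict column reduces to a simple capacity check: a quantization fits when its VRAM requirement is at or under the GPU's 80GB. The canonical compatibility page may apply richer rules, so treat this as a sketch of the basic check only:

```python
# Simplified verdict rule matching the table above: fits if the VRAM
# requirement is at or under the GPU's capacity. The canonical page's
# full verdict logic may consider more factors.

GPU_VRAM_GB = 80  # NVIDIA H100 SXM5 80GB

REQUIRED_GB = {"Q4": 34, "Q8": 68, "FP16": 137}

for quant, needed in REQUIRED_GB.items():
    verdict = "✅ Fits" if needed <= GPU_VRAM_GB else "❌ Not recommended"
    print(f"{quant}: needs {needed}GB of {GPU_VRAM_GB}GB -> {verdict}")
```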
  • Back to meta-llama/Llama-3.3-70B-Instruct
  • Q4 requirement page
  • Full compatibility breakdown