Quantization-specific throughput and VRAM requirements for meta-llama/Llama-3.3-70B-Instruct running on NVIDIA H100 PCIe 80GB.
Speed values come from the compatibility dataset (`estimatedTokensPerSec`) and are sorted by quantization.
For full verdict logic and alternate GPUs, see the canonical compatibility page.
| Quantization | VRAM needed | VRAM available | Speed | Verdict |
|---|---|---|---|---|
| Q4 | 34GB | 80GB | 111 tok/s | ✅ Fits |
| Q8 | 68GB | 80GB | 84 tok/s | ✅ Fits |
| FP16 | 137GB | 80GB | 44 tok/s | ❌ Does not fit |
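
The VRAM figures above track the usual back-of-the-envelope estimate of weights-only memory: parameter count × bits per weight ÷ 8. The sketch below illustrates that arithmetic; the parameter count (70.6B), the zero-overhead assumption, and the 80GB cutoff are illustrative assumptions, and the dataset's own values may additionally account for KV cache or runtime overhead.

```python
# Rough weights-only VRAM estimate per quantization level.
# Illustrative sketch -- the compatibility dataset may use a different formula.

PARAMS_BILLION = 70.6   # approximate parameter count of Llama-3.3-70B-Instruct (assumption)
GPU_VRAM_GB = 80        # NVIDIA H100 PCIe 80GB

def weight_vram_gb(params_billion: float, bits_per_weight: float) -> float:
    """GB needed for weights alone: params * bits / 8 (no KV-cache or overhead)."""
    return params_billion * bits_per_weight / 8

for name, bits in [("Q4", 4), ("Q8", 8), ("FP16", 16)]:
    need = weight_vram_gb(PARAMS_BILLION, bits)
    verdict = "fits" if need <= GPU_VRAM_GB else "does not fit"
    print(f"{name}: ~{need:.0f} GB needed vs {GPU_VRAM_GB} GB available -> {verdict}")
```

Running this gives roughly 35GB (Q4), 71GB (Q8), and 141GB (FP16), in line with the table's 34/68/137GB figures; small differences come from the exact parameter count and per-format packing details.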