This page lists quantization-specific throughput and VRAM requirements for RedHatAI/Meta-Llama-3.1-70B-Instruct-quantized.w4a16 running on an NVIDIA H100 PCIe 80GB.
Speed values come from the compatibility dataset (`estimatedTokensPerSec`) and are sorted by quantization.
For full verdict logic and alternate GPUs, see the canonical compatibility page.
| Quantization | VRAM needed | VRAM available | Speed | Verdict |
|---|---|---|---|---|
| Q4 | 34GB | 80GB | 106 tok/s | ✅ Fits |
| Q8 | 68GB | 80GB | 70 tok/s | ✅ Fits |
| FP16 | 137GB | 80GB | 38 tok/s | ❌ Not recommended |
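The "VRAM needed" column can be approximated from the weight footprint alone: parameter count times bits per weight, divided by 8 to get bytes. A minimal sketch of that arithmetic is below; the 68.5B effective-parameter figure is an assumption inferred from the table values, not something documented in the dataset, and real deployments need extra headroom for KV cache and activations.

```python
def estimate_weight_vram_gb(params_billion: float, bits_per_weight: int) -> float:
    """Estimate the VRAM (GB) occupied by model weights at a given quantization.

    Weights only: bytes per parameter = bits / 8. KV cache, activations,
    and framework overhead are NOT included (assumption: the table reports
    a weights-dominated figure).
    """
    return params_billion * bits_per_weight / 8


# ~68.5B effective parameters reproduces the table's figures (assumption):
for label, bits in [("Q4", 4), ("Q8", 8), ("FP16", 16)]:
    gb = estimate_weight_vram_gb(68.5, bits)
    print(f"{label}: ~{round(gb)}GB")
```

Comparing the estimate against the 80GB available then yields the fits/doesn't-fit verdict shown in the table.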