Quantization-specific throughput and VRAM requirements for RedHatAI/Llama-3.3-70B-Instruct-FP8-dynamic running on NVIDIA H100 SXM5 80GB.
Speed values come from the compatibility dataset (`estimatedTokensPerSec`) and are sorted by quantization.
For full verdict logic and alternate GPUs, see the canonical compatibility page.

| Quantization | VRAM needed | VRAM available | Speed | Verdict |
|---|---|---|---|---|
| Q4 | 34GB | 80GB | 190 tok/s | ✅ Fits |
| Q8 | 68GB | 80GB | 123 tok/s | ✅ Fits |
| FP16 | 137GB | 80GB | 66 tok/s | ❌ Not recommended |
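
The VRAM figures above come from the compatibility dataset, but they track a simple rule of thumb: weights take roughly 0.5, 1, and 2 bytes per parameter at Q4, Q8, and FP16 respectively. The sketch below is a weights-only back-of-envelope estimate, assuming a 70B parameter count and ignoring KV cache and runtime buffers; it is not the dataset's actual sizing logic, only an illustration of why the requirements scale roughly 1:2:4.

```python
# Back-of-envelope, weights-only VRAM estimate (illustrative only; the
# table above reports the compatibility dataset's figures, which also
# account for model-specific parameter counts and overhead).
PARAMS_BILLIONS = 70  # approximate parameter count for Llama-3.3-70B

BYTES_PER_PARAM = {
    "Q4": 0.5,    # ~4 bits per weight
    "Q8": 1.0,    # ~8 bits per weight
    "FP16": 2.0,  # 16 bits per weight
}

GPU_VRAM_GB = 80  # H100 SXM5 80GB


def weights_vram_gb(params_billions: float, quant: str) -> float:
    """Approximate GB needed to hold the weights alone (no KV cache)."""
    return params_billions * BYTES_PER_PARAM[quant]


for quant in ("Q4", "Q8", "FP16"):
    needed = weights_vram_gb(PARAMS_BILLIONS, quant)
    verdict = "fits" if needed <= GPU_VRAM_GB else "does not fit"
    print(f"{quant}: ~{needed:.0f} GB weights -> {verdict} in {GPU_VRAM_GB} GB")
```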