Quantization-specific throughput and VRAM requirements for openai/gpt-oss-120b running on an Apple M2 Max.
Speed values come from the compatibility dataset (`estimatedTokensPerSec`) and are sorted by quantization.
For full verdict logic and alternate GPUs, see the canonical compatibility page.
| Quantization | VRAM needed | VRAM available | Speed | Verdict |
|---|---|---|---|---|
| Q4 | 59GB | 96GB | 11 tok/s | ✅ Fits |
| Q8 | 117GB | 96GB | 7 tok/s | ❌ Not recommended |
| FP16 | 235GB | 96GB | 4 tok/s | ❌ Not recommended |
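The VRAM figures above follow the usual back-of-the-envelope rule: weight memory ≈ parameter count × bits per weight ÷ 8. A minimal sketch of that rule is below; the parameter count (~117.5B, inferred from the FP16 row) and the zero-overhead assumption are assumptions for illustration, not values from the dataset — real usage also includes KV cache and activation overhead.

```python
def estimate_vram_gb(params_billion: float, bits: int) -> float:
    """Rough weight-only VRAM estimate in GB.

    params_billion: model size in billions of parameters (assumed ~117.5
    for gpt-oss-120b, inferred from the FP16 row of the table above).
    bits: bits per weight for the quantization (4, 8, or 16).
    Ignores KV cache and activation overhead.
    """
    return params_billion * bits / 8

for name, bits in [("Q4", 4), ("Q8", 8), ("FP16", 16)]:
    print(f"{name}: ~{estimate_vram_gb(117.5, bits):.0f} GB")
```

This reproduces the table's figures to within a gigabyte, which is why only Q4 (~59GB) fits in the M2 Max's 96GB of unified memory while Q8 and FP16 do not.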