The NVIDIA H100 PCIe 80GB comfortably exceeds the minimum VRAM requirement for Q4 inference of deepseek-ai/deepseek-coder-33b-instruct. At approximately 113 tokens/second, you can expect excellent speed, with conversational response times under one second.
Running at Q4 leaves roughly 63GB of headroom, which is more than enough for system overhead and smooth operation. The quantization breakdown below shows how higher precision settings increase VRAM usage and reduce throughput.
| Quantization | VRAM needed | VRAM available | Estimated speed | Verdict |
|---|---|---|---|---|
| Q4 | 17GB | 80GB | 112.63 tok/s | ✅ Fits comfortably |
| Q8 | 34GB | 80GB | 81.63 tok/s | ✅ Fits comfortably |
| FP16 | 68GB | 80GB | 42.80 tok/s | ✅ Fits comfortably |
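As a rough sanity check on the table above, the VRAM figures follow from multiplying the parameter count by the bytes per weight at each precision, plus a small overhead allowance. The Python sketch below reproduces the estimates; the ~3% overhead factor is an assumption chosen to match the rounded table values, not a measured constant, and real usage also depends on context length and KV cache size.

```python
# Rough VRAM estimator for a dense LLM at different quantization levels.
# The per-parameter byte counts are standard; the overhead factor is an
# assumption, not a measured value.

PARAMS_B = 33      # deepseek-coder-33b-instruct: ~33 billion parameters
GPU_VRAM_GB = 80   # NVIDIA H100 PCIe 80GB

BYTES_PER_PARAM = {
    "Q4": 0.5,    # 4-bit weights
    "Q8": 1.0,    # 8-bit weights
    "FP16": 2.0,  # 16-bit weights
}

for quant, bytes_per_param in BYTES_PER_PARAM.items():
    weights_gb = PARAMS_B * bytes_per_param      # raw weight storage
    needed_gb = weights_gb * 1.03                # assumed ~3% runtime overhead
    headroom_gb = GPU_VRAM_GB - needed_gb
    verdict = "fits" if headroom_gb > 0 else "does not fit"
    print(f"{quant:>4}: ~{needed_gb:.0f}GB needed, "
          f"~{headroom_gb:.0f}GB headroom ({verdict})")
```

For Q4 this yields roughly 17GB needed and 63GB of headroom, matching the table; longer contexts or larger batch sizes will eat into that headroom via the KV cache.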