Quick Answer: The Apple M2 Max offers up to 96GB of unified memory, all of it addressable by the GPU. It delivers approximately 71 tokens/sec on deepseek-ai/deepseek-coder-1.3b-instruct and typically draws around 40W under load.
This GPU offers reliable throughput for local AI workloads. Pair it with the right model quantization to hit your desired tokens/sec, and monitor prices below to catch the best deal.
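As a concrete starting point, here is a minimal sketch using llama-cpp-python, one common way to run GGUF-quantized models on Apple Silicon with Metal acceleration. The model filename is a placeholder for whichever Q4 GGUF you download:

```python
# Minimal sketch: run a 4-bit GGUF model on an M2 Max via Metal.
# Requires: pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="deepseek-coder-1.3b-instruct.Q4_K_M.gguf",  # placeholder filename
    n_gpu_layers=-1,  # offload all layers to the GPU (Metal)
    n_ctx=4096,       # context window
)

out = llm("Write a Python function that reverses a string.", max_tokens=128)
print(out["choices"][0]["text"])
```

Lower-bit quantizations (Q4 vs Q8 vs FP16) trade a little quality for proportionally less memory and higher tokens/sec, which is the main lever for hitting a target speed on fixed hardware.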
With 96GB of unified memory, the Apple M2 Max can hold the weights of models up to roughly 190B parameters at 4-bit quantization, with a practical ceiling closer to 150B once KV cache and activations are accounted for. That covers most popular models, including Llama 3 70B, Mistral 7B, and larger.
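The arithmetic behind that ceiling is simply parameter count times bits per weight. A sketch follows; the 1.2x overhead factor for KV cache and activations is an assumption, not a measured constant:

```python
# Back-of-envelope sizing: weight bytes = params * bits_per_weight / 8.
# The 1.2x overhead for KV cache and activations is an assumed fudge
# factor; longer contexts need more headroom.
def est_vram_gb(params_billions: float, bits: int, overhead: float = 1.2) -> float:
    """Rough memory footprint in GB for a quantized model."""
    return params_billions * bits / 8 * overhead

for params in (7, 70, 150):
    print(f"{params}B at 4-bit: ~{est_vram_gb(params, 4):.0f} GB")
# 7B: ~4 GB, 70B: ~42 GB, 150B: ~90 GB -- all inside a 96GB budget
```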
Consider the H100 or MI300X instead if you need maximum VRAM for enterprise workloads.
Buy directly on Amazon with fast shipping and reliable customer service.
Essential accessories to pair with Apple M2 Max
💡 Not ready to buy? Try cloud GPUs first
Test Apple M2 Max performance in the cloud before investing in hardware. Pay by the hour with no commitment.
| Model | Quantization | Tokens/sec | VRAM used |
|---|---|---|---|
| deepseek-ai/deepseek-coder-1.3b-instruct | Q4 | 70.68 tok/s (estimated) | 2GB |
| ibm-research/PowerMoE-3b | Q4 | 70.49 tok/s (estimated) | 2GB |
| unsloth/Llama-3.2-1B-Instruct | Q4 | 70.21 tok/s (estimated) | 1GB |
| meta-llama/Llama-Guard-3-1B | Q4 | 70.17 tok/s (estimated) | 1GB |
| nineninesix/kani-tts-2-en | Q4 | 70.15 tok/s (estimated) | 1GB |
| tencent/HunyuanOCR | Q4 | 69.97 tok/s (estimated) | 1GB |
| meta-llama/Llama-3.2-1B-Instruct | Q4 | 69.17 tok/s (estimated) | 1GB |
| context-labs/meta-llama-Llama-3.2-3B-Instruct-FP16 | Q4 | 69.11 tok/s (estimated) | 2GB |
| Qwen/Qwen2.5-3B | Q4 | 69.00 tok/s (estimated) | 2GB |
| nari-labs/Dia2-2B | Q4 | 68.41 tok/s (estimated) | 2GB |
| deepseek-ai/DeepSeek-OCR-2 | Q4 | 68.38 tok/s (estimated) | 2GB |
| meta-llama/Llama-3.2-1B | Q4 | 68.12 tok/s (estimated) | 1GB |
| google/embeddinggemma-300m | Q4 | 67.59 tok/s (estimated) | 1GB |
| Qwen/Qwen2.5-3B-Instruct | Q4 | 67.41 tok/s (estimated) | 2GB |
| TinyLlama/TinyLlama-1.1B-Chat-v1.0 | Q4 | 66.76 tok/s (estimated) | 1GB |
| unsloth/Llama-3.2-3B-Instruct | Q4 | 66.68 tok/s (estimated) | 2GB |
| google-bert/bert-base-uncased | Q4 | 66.18 tok/s (estimated) | 1GB |
| bigcode/starcoder2-3b | Q4 | 65.54 tok/s (estimated) | 2GB |
| unsloth/gemma-3-1b-it | Q4 | 65.46 tok/s (estimated) | 1GB |
| facebook/sam3 | Q4 | 65.25 tok/s (estimated) | 1GB |
| inference-net/Schematron-3B | Q4 | 65.06 tok/s (estimated) | 2GB |
| WeiboAI/VibeThinker-1.5B | Q4 | 64.52 tok/s (estimated) | 1GB |
| meta-llama/Llama-3.2-3B | Q4 | 64.08 tok/s (estimated) | 2GB |
| Qwen/Qwen3-ASR-1.7B | Q4 | 63.78 tok/s (estimated) | 2GB |
| meta-llama/Llama-3.2-3B-Instruct | Q4 | 63.51 tok/s (estimated) | 2GB |
| LiquidAI/LFM2-1.2B | Q4 | 63.49 tok/s (estimated) | 1GB |
| apple/OpenELM-1_1B-Instruct | Q4 | 63.12 tok/s (estimated) | 1GB |
| ibm-granite/granite-3.3-2b-instruct | Q4 | 62.08 tok/s (estimated) | 1GB |
| google/gemma-2-2b-it | Q4 | 61.52 tok/s (estimated) | 1GB |
| deepseek-ai/DeepSeek-OCR | Q4 | 60.87 tok/s (estimated) | 2GB |
| allenai/OLMo-2-0425-1B | Q4 | 59.91 tok/s (estimated) | 1GB |
| google/gemma-3-1b-it | Q4 | 59.89 tok/s (estimated) | 1GB |
| google-t5/t5-3b | Q4 | 59.23 tok/s (estimated) | 2GB |
| MiniMaxAI/MiniMax-M2 | Q4 | 58.98 tok/s (estimated) | 4GB |
| meta-llama/Llama-2-7b-hf | Q4 | 58.91 tok/s (estimated) | 4GB |
| deepseek-ai/DeepSeek-V3-0324 | Q4 | 58.83 tok/s (estimated) | 4GB |
| google/gemma-2b | Q4 | 58.77 tok/s (estimated) | 1GB |
| meta-llama/Meta-Llama-3-8B | Q4 | 58.66 tok/s (estimated) | 4GB |
| trl-internal-testing/tiny-random-LlamaForCausalLM | Q4 | 58.59 tok/s (estimated) | 4GB |
| microsoft/Phi-3.5-mini-instruct | Q4 | 58.52 tok/s (estimated) | 4GB |
| Nanbeige/Nanbeige4.1-3B | Q4 | 58.48 tok/s (estimated) | 3GB |
| trl-internal-testing/tiny-LlamaForCausalLM-3.2 | Q4 | 58.35 tok/s (estimated) | 4GB |
| parler-tts/parler-tts-large-v1 | Q4 | 58.29 tok/s (estimated) | 4GB |
| mistralai/Mistral-7B-Instruct-v0.2 | Q4 | 58.29 tok/s (estimated) | 4GB |
| Qwen/Qwen2.5-Coder-1.5B | Q4 | 58.15 tok/s (estimated) | 3GB |
| huggyllama/llama-7b | Q4 | 58.14 tok/s (estimated) | 4GB |
| microsoft/Phi-3-mini-4k-instruct | Q4 | 58.14 tok/s (estimated) | 4GB |
| deepseek-ai/DeepSeek-R1-0528 | Q4 | 58.04 tok/s (estimated) | 4GB |
| microsoft/Phi-3.5-mini-instruct | Q4 | 58.01 tok/s (estimated) | 2GB |
| Qwen/Qwen3-8B-Base | Q4 | 58.00 tok/s (estimated) | 4GB |
Note: these performance figures are calculated estimates from auto-generated benchmarks, not measured results; real-world numbers will vary.
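If you want to replace these estimates with a real number from your own machine, a minimal timing sketch using llama-cpp-python follows; the GGUF path is a placeholder:

```python
# Sketch: measure actual tokens/sec for a local GGUF model.
# Requires: pip install llama-cpp-python
import time
from llama_cpp import Llama

llm = Llama(model_path="model.Q4_K_M.gguf", n_gpu_layers=-1, verbose=False)

start = time.perf_counter()
out = llm("Explain the difference between a list and a tuple.", max_tokens=256)
elapsed = time.perf_counter() - start

generated = out["usage"]["completion_tokens"]  # tokens actually produced
print(f"{generated / elapsed:.1f} tok/s")
```

Run it a few times and discard the first result, since the initial call includes model load and warm-up costs.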
| Model | Quantization | Verdict | Estimated speed | VRAM needed (of 96GB) |
|---|---|---|---|---|
| openai-community/gpt2 | Q8 | Fits comfortably | 35.01 tok/s (estimated) | 7GB |
| openai-community/gpt2 | FP16 | Fits comfortably | 20.64 tok/s (estimated) | 15GB |
| Qwen/Qwen2.5-7B-Instruct | Q4 | Fits comfortably | 56.08 tok/s (estimated) | 4GB |
| Qwen/Qwen2.5-7B-Instruct | Q8 | Fits comfortably | 35.83 tok/s (estimated) | 7GB |
| Qwen/Qwen2.5-7B-Instruct | FP16 | Fits comfortably | 19.87 tok/s (estimated) | 15GB |
| Qwen/Qwen3-0.6B | Q4 | Fits comfortably | 56.11 tok/s (estimated) | 3GB |
| Qwen/Qwen3-0.6B | Q8 | Fits comfortably | 37.69 tok/s (estimated) | 6GB |
| Qwen/Qwen3-0.6B | FP16 | Fits comfortably | 18.82 tok/s (estimated) | 13GB |
| Gensyn/Qwen2.5-0.5B-Instruct | Q4 | Fits comfortably | 53.78 tok/s (estimated) | 3GB |
| Gensyn/Qwen2.5-0.5B-Instruct | Q8 | Fits comfortably | 39.92 tok/s (estimated) | 5GB |
| Gensyn/Qwen2.5-0.5B-Instruct | FP16 | Fits comfortably | 21.28 tok/s (estimated) | 11GB |
| meta-llama/Llama-3.1-8B-Instruct | Q4 | Fits comfortably | 54.08 tok/s (estimated) | 4GB |
| meta-llama/Llama-3.1-8B-Instruct | Q8 | Fits comfortably | 36.41 tok/s (estimated) | 9GB |
| meta-llama/Llama-3.1-8B-Instruct | FP16 | Fits comfortably | 20.40 tok/s (estimated) | 17GB |
| dphn/dolphin-2.9.1-yi-1.5-34b | Q4 | Fits comfortably | 19.89 tok/s (estimated) | 17GB |
| dphn/dolphin-2.9.1-yi-1.5-34b | Q8 | Fits comfortably | 13.72 tok/s (estimated) | 35GB |
| dphn/dolphin-2.9.1-yi-1.5-34b | FP16 | Fits comfortably | 7.48 tok/s (estimated) | 70GB |
| openai/gpt-oss-20b | Q4 | Fits comfortably | 28.14 tok/s (estimated) | 10GB |
| openai/gpt-oss-20b | Q8 | Fits comfortably | 19.81 tok/s (estimated) | 20GB |
| openai/gpt-oss-20b | FP16 | Fits comfortably | 12.07 tok/s (estimated) | 41GB |
| google/gemma-3-1b-it | Q4 | Fits comfortably | 59.89 tok/s (estimated) | 1GB |
| google/gemma-3-1b-it | Q8 | Fits comfortably | 49.59 tok/s (estimated) | 1GB |
| google/gemma-3-1b-it | FP16 | Fits comfortably | 24.33 tok/s (estimated) | 2GB |
| Qwen/Qwen3-Embedding-0.6B | Q4 | Fits comfortably | 55.88 tok/s (estimated) | 3GB |
| Qwen/Qwen3-Embedding-0.6B | Q8 | Fits comfortably | 39.38 tok/s (estimated) | 6GB |
| Qwen/Qwen3-Embedding-0.6B | FP16 | Fits comfortably | 19.56 tok/s (estimated) | 13GB |
| Qwen/Qwen2.5-1.5B-Instruct | Q4 | Fits comfortably | 50.25 tok/s (estimated) | 3GB |
| Qwen/Qwen2.5-1.5B-Instruct | Q8 | Fits comfortably | 34.81 tok/s (estimated) | 5GB |
| Qwen/Qwen2.5-1.5B-Instruct | FP16 | Fits comfortably | 22.20 tok/s (estimated) | 11GB |
| facebook/opt-125m | Q4 | Fits comfortably | 49.55 tok/s (estimated) | 4GB |
| facebook/opt-125m | Q8 | Fits comfortably | 39.30 tok/s (estimated) | 7GB |
| facebook/opt-125m | FP16 | Fits comfortably | 19.63 tok/s (estimated) | 15GB |
| TinyLlama/TinyLlama-1.1B-Chat-v1.0 | Q4 | Fits comfortably | 66.76 tok/s (estimated) | 1GB |
| TinyLlama/TinyLlama-1.1B-Chat-v1.0 | Q8 | Fits comfortably | 48.01 tok/s (estimated) | 1GB |
| TinyLlama/TinyLlama-1.1B-Chat-v1.0 | FP16 | Fits comfortably | 25.17 tok/s (estimated) | 2GB |
| trl-internal-testing/tiny-Qwen2ForCausalLM-2.5 | Q4 | Fits comfortably | 49.72 tok/s (estimated) | 4GB |
| trl-internal-testing/tiny-Qwen2ForCausalLM-2.5 | Q8 | Fits comfortably | 39.08 tok/s (estimated) | 7GB |
| trl-internal-testing/tiny-Qwen2ForCausalLM-2.5 | FP16 | Fits comfortably | 20.59 tok/s (estimated) | 15GB |
| Qwen/Qwen3-4B-Instruct-2507 | Q4 | Fits comfortably | 50.45 tok/s (estimated) | 2GB |
| Qwen/Qwen3-4B-Instruct-2507 | Q8 | Fits comfortably | 34.22 tok/s (estimated) | 4GB |
| Qwen/Qwen3-4B-Instruct-2507 | FP16 | Fits comfortably | 18.59 tok/s (estimated) | 9GB |
| meta-llama/Llama-3.2-1B-Instruct | Q4 | Fits comfortably | 69.17 tok/s (estimated) | 1GB |
| meta-llama/Llama-3.2-1B-Instruct | Q8 | Fits comfortably | 46.78 tok/s (estimated) | 1GB |
| meta-llama/Llama-3.2-1B-Instruct | FP16 | Fits comfortably | 23.77 tok/s (estimated) | 2GB |
| openai/gpt-oss-120b | Q4 | Fits comfortably | 11.03 tok/s (estimated) | 59GB |
| openai/gpt-oss-120b | Q8 | Not supported | 7.03 tok/s (estimated) | 117GB |
| openai/gpt-oss-120b | FP16 | Not supported | 4.00 tok/s (estimated) | 235GB |
| Qwen/Qwen2.5-3B-Instruct | Q4 | Fits comfortably | 67.41 tok/s (estimated) | 2GB |
| Qwen/Qwen2.5-3B-Instruct | Q8 | Fits comfortably | 43.88 tok/s (estimated) | 3GB |
| openai-community/gpt2 | Q4 | Fits comfortably | 55.70 tok/s (estimated) | 4GB |
Note: these performance figures are calculated estimates from auto-generated benchmarks, not measured results; real-world numbers will vary.
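For reference, the verdict column above can be approximated with weights-only arithmetic. The sketch below is an assumption about how such a check works, not the site's actual methodology, and the helper names are hypothetical:

```python
# Sketch: weights-only fit check against a fixed memory budget.
QUANT_BITS = {"Q4": 4, "Q8": 8, "FP16": 16}
VRAM_GB = 96  # M2 Max unified memory

def verdict(params_billions: float, quant: str) -> str:
    needed = params_billions * QUANT_BITS[quant] / 8  # GB, weights only
    if needed > VRAM_GB:
        return f"Not supported ({needed:.0f}GB needed)"
    return f"Fits comfortably ({needed:.0f}GB of {VRAM_GB}GB)"

print(verdict(120, "Q8"))  # a 120B model at Q8 -> Not supported (120GB needed)
print(verdict(8, "FP16"))  # an 8B model at FP16 -> Fits comfortably (16GB of 96GB)
```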
Explore how comparable GPUs stack up for local inference workloads: RTX 5070, RTX 4060 Ti 16GB, RX 6800 XT, RTX 4070 Super, and RTX 3080.