Quick Answer: The Apple M2 Max offers 96GB of unified memory (usable as VRAM) and delivers approximately 71 tokens/sec on deepseek-ai/deepseek-coder-1.3b-instruct at Q4 quantization. It typically draws 40W under load, and M2 Max systems carry an MSRP of $3,199.

Apple M2 Max

By Apple · Released 2023-01 · MSRP $3,199.00 · In stock

This chip offers reliable throughput for local AI workloads. Pair it with the right model quantization to hit your target tokens/sec, and watch the price history below to catch the best deal.

Buy on Amazon · View Benchmarks

Specs snapshot

Key hardware metrics for AI workloads:

  • VRAM (unified memory): 96GB
  • GPU cores: 38
  • TDP: 40W
  • Architecture: Apple Silicon M2
Key Takeaways
  • 96GB unified memory: runs models up to roughly 160B parameters at 4-bit quantization
  • Very efficient (40W): suitable for compact, quiet builds
  • Strong price-to-VRAM value

What this means for you

With 96GB of unified memory, the Apple M2 Max can hold models up to roughly 160B parameters at 4-bit quantization (about 0.5 bytes per parameter, plus overhead for the KV cache and runtime). That covers most popular open models, including Mistral 7B and Llama 3 70B, with room to spare.
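
If you want to size other model/quantization combinations, the arithmetic is simple enough to script. Below is a minimal sketch of that rule of thumb; the 1.2x overhead factor is an assumption to cover the KV cache and runtime, not a figure from this page:

```python
# Rough memory sizing for local LLM inference.
# Bytes per parameter depend on quantization; a ~20% overhead
# factor (assumed) covers the KV cache and runtime allocations.

BYTES_PER_PARAM = {"Q4": 0.5, "Q8": 1.0, "FP16": 2.0}
OVERHEAD = 1.2  # assumption, not a measured value

def vram_needed_gb(params_billions: float, quant: str) -> float:
    """Approximate memory footprint in GB for a model size/quant."""
    return params_billions * BYTES_PER_PARAM[quant] * OVERHEAD

def max_params_billions(vram_gb: float, quant: str) -> float:
    """Largest model (in billions of parameters) that fits."""
    return vram_gb / (BYTES_PER_PARAM[quant] * OVERHEAD)

if __name__ == "__main__":
    for quant in ("Q4", "Q8", "FP16"):
        print(f"{quant}: up to ~{max_params_billions(96, quant):.0f}B params in 96GB")
    print(f"Llama 3 70B @ Q4: ~{vram_needed_gb(70, 'Q4'):.0f}GB needed")
```

Under these assumptions a 96GB budget tops out around 160B parameters at Q4, 80B at Q8, and 40B at FP16, which is where the takeaway figure above comes from.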

Who should buy

  • Professional AI workloads that need a large memory pool in a low-power package
  • Running 70B-class models at Q4/Q8, or models up to roughly 40B parameters at FP16

Looking to upgrade?

Consider the NVIDIA H100 or AMD MI300X for maximum memory capacity in enterprise workloads.

Where to Buy

Buy directly on Amazon with fast shipping and reliable customer service.

💰 Price Analysis

Pricing has been stable over the past 30 days:

  • 30-day low: $462
  • 30-day average: $509
  • 30-day high: $543

Current Amazon listings sit near the 30-day low, which the price tracker flags as a deal. Note that these tracked prices are far below the $3,199 system MSRP, so verify exactly what a listing includes before buying.

Amazon: In stock · Prime shipping available · 30-day returns

Complete Your Build

Note: the M2 Max ships inside complete Apple systems (Mac Studio, MacBook Pro), so unlike a discrete GPU it cannot be paired with PC components. The accessories below apply to the PC builds featured elsewhere on this site:

  • Corsair RM750x (2025) 750W: $119 (750W is the recommended minimum for RTX 40 series builds)
  • Corsair Vengeance 32GB DDR5-6000: $129 (32GB is a solid baseline for AI workloads)
  • Noctua NF-A12x25 PWM: $35 (quiet, efficient cooling)
  • Thermal Grizzly Kryonaut: $15 (premium thermal paste)

Total bundle price: $298 (all items from Amazon)

Find All on Amazon · More GPUs

💡 Not ready to buy? Try cloud GPUs first

Test comparable performance in the cloud before investing in hardware. Pay by the hour with no commitment:

  • Vast.ai: from $0.20/hr
  • RunPod: from $0.30/hr
  • Lambda Labs: enterprise-grade
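
To see when buying beats renting, divide the hardware cost by the hourly rate. Here is a minimal sketch using this page's numbers; note that these providers rent out NVIDIA GPUs rather than Apple silicon, so treat the comparison as indicative only:

```python
# Break-even: hours of cloud rental that add up to the hardware MSRP.
# Rates are the "from" prices quoted above; real pricing varies by
# provider, GPU type, and availability.

MSRP = 3199.00  # M2 Max system MSRP from this page

cloud_rates = {"Vast.ai": 0.20, "RunPod": 0.30}

for provider, rate in cloud_rates.items():
    hours = MSRP / rate
    print(f"{provider}: ${rate:.2f}/hr -> ~{hours:,.0f} hours "
          f"(~{hours / 24:.0f} days of continuous use) to reach MSRP")
```

At $0.20/hr it takes roughly 16,000 rental hours to spend the MSRP, so short evaluation runs are far cheaper than buying outright.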

AI benchmarks

All figures below are auto-generated estimates, not measured results.

Model | Quant | Tokens/sec (est.) | VRAM used
deepseek-ai/deepseek-coder-1.3b-instruct | Q4 | 70.68 | 2GB
ibm-research/PowerMoE-3b | Q4 | 70.49 | 2GB
unsloth/Llama-3.2-1B-Instruct | Q4 | 70.21 | 1GB
meta-llama/Llama-Guard-3-1B | Q4 | 70.17 | 1GB
nineninesix/kani-tts-2-en | Q4 | 70.15 | 1GB
tencent/HunyuanOCR | Q4 | 69.97 | 1GB
meta-llama/Llama-3.2-1B-Instruct | Q4 | 69.17 | 1GB
context-labs/meta-llama-Llama-3.2-3B-Instruct-FP16 | Q4 | 69.11 | 2GB
Qwen/Qwen2.5-3B | Q4 | 69.00 | 2GB
nari-labs/Dia2-2B | Q4 | 68.41 | 2GB
deepseek-ai/DeepSeek-OCR-2 | Q4 | 68.38 | 2GB
meta-llama/Llama-3.2-1B | Q4 | 68.12 | 1GB
google/embeddinggemma-300m | Q4 | 67.59 | 1GB
Qwen/Qwen2.5-3B-Instruct | Q4 | 67.41 | 2GB
TinyLlama/TinyLlama-1.1B-Chat-v1.0 | Q4 | 66.76 | 1GB
unsloth/Llama-3.2-3B-Instruct | Q4 | 66.68 | 2GB
google-bert/bert-base-uncased | Q4 | 66.18 | 1GB
bigcode/starcoder2-3b | Q4 | 65.54 | 2GB
unsloth/gemma-3-1b-it | Q4 | 65.46 | 1GB
facebook/sam3 | Q4 | 65.25 | 1GB
inference-net/Schematron-3B | Q4 | 65.06 | 2GB
WeiboAI/VibeThinker-1.5B | Q4 | 64.52 | 1GB
meta-llama/Llama-3.2-3B | Q4 | 64.08 | 2GB
Qwen/Qwen3-ASR-1.7B | Q4 | 63.78 | 2GB
meta-llama/Llama-3.2-3B-Instruct | Q4 | 63.51 | 2GB
LiquidAI/LFM2-1.2B | Q4 | 63.49 | 1GB
apple/OpenELM-1_1B-Instruct | Q4 | 63.12 | 1GB
ibm-granite/granite-3.3-2b-instruct | Q4 | 62.08 | 1GB
google/gemma-2-2b-it | Q4 | 61.52 | 1GB
deepseek-ai/DeepSeek-OCR | Q4 | 60.87 | 2GB
allenai/OLMo-2-0425-1B | Q4 | 59.91 | 1GB
google/gemma-3-1b-it | Q4 | 59.89 | 1GB
google-t5/t5-3b | Q4 | 59.23 | 2GB
MiniMaxAI/MiniMax-M2 | Q4 | 58.98 | 4GB
meta-llama/Llama-2-7b-hf | Q4 | 58.91 | 4GB
deepseek-ai/DeepSeek-V3-0324 | Q4 | 58.83 | 4GB
google/gemma-2b | Q4 | 58.77 | 1GB
meta-llama/Meta-Llama-3-8B | Q4 | 58.66 | 4GB
trl-internal-testing/tiny-random-LlamaForCausalLM | Q4 | 58.59 | 4GB
microsoft/Phi-3.5-mini-instruct | Q4 | 58.52 | 4GB
Nanbeige/Nanbeige4.1-3B | Q4 | 58.48 | 3GB
trl-internal-testing/tiny-LlamaForCausalLM-3.2 | Q4 | 58.35 | 4GB
parler-tts/parler-tts-large-v1 | Q4 | 58.29 | 4GB
mistralai/Mistral-7B-Instruct-v0.2 | Q4 | 58.29 | 4GB
Qwen/Qwen2.5-Coder-1.5B | Q4 | 58.15 | 3GB
huggyllama/llama-7b | Q4 | 58.14 | 4GB
microsoft/Phi-3-mini-4k-instruct | Q4 | 58.14 | 4GB
deepseek-ai/DeepSeek-R1-0528 | Q4 | 58.04 | 4GB
microsoft/Phi-3.5-mini-instruct | Q4 | 58.01 | 2GB
Qwen/Qwen3-8B-Base | Q4 | 58.00 | 4GB

Note: Performance estimates are calculated. Real results may vary. Methodology · Submit real data
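
Since these estimates are calculated, a measured number from your own machine is more trustworthy. One low-effort way to collect one is to time generation against a local inference server. Below is a minimal sketch against Ollama's HTTP API, assuming Ollama is running locally and the model has been pulled; the model tag is illustrative. The non-streaming /api/generate response reports eval_count (generated tokens) and eval_duration (in nanoseconds):

```python
import json
import urllib.request

# Measure real tokens/sec against a local Ollama server.
# Assumes `ollama serve` is running on the default port and the
# model has already been pulled (e.g. `ollama pull llama3.2:1b`).

def measure_tokens_per_sec(model: str, prompt: str) -> float:
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    # eval_count = tokens generated; eval_duration = generation time in ns
    return result["eval_count"] / (result["eval_duration"] / 1e9)

if __name__ == "__main__":
    tps = measure_tokens_per_sec("llama3.2:1b", "Explain quantization in one paragraph.")
    print(f"{tps:.1f} tok/s")
```

Run it a few times and take the median; generation speed varies between runs, especially the first one after a cold start.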

Model compatibility

Model | Quant | Verdict | Est. speed (tok/s) | VRAM needed (96GB available)
openai-community/gpt2 | Q4 | Fits comfortably | 55.70 | 4GB
openai-community/gpt2 | Q8 | Fits comfortably | 35.01 | 7GB
openai-community/gpt2 | FP16 | Fits comfortably | 20.64 | 15GB
Qwen/Qwen2.5-7B-Instruct | Q4 | Fits comfortably | 56.08 | 4GB
Qwen/Qwen2.5-7B-Instruct | Q8 | Fits comfortably | 35.83 | 7GB
Qwen/Qwen2.5-7B-Instruct | FP16 | Fits comfortably | 19.87 | 15GB
Qwen/Qwen3-0.6B | Q4 | Fits comfortably | 56.11 | 3GB
Qwen/Qwen3-0.6B | Q8 | Fits comfortably | 37.69 | 6GB
Qwen/Qwen3-0.6B | FP16 | Fits comfortably | 18.82 | 13GB
Gensyn/Qwen2.5-0.5B-Instruct | Q4 | Fits comfortably | 53.78 | 3GB
Gensyn/Qwen2.5-0.5B-Instruct | Q8 | Fits comfortably | 39.92 | 5GB
Gensyn/Qwen2.5-0.5B-Instruct | FP16 | Fits comfortably | 21.28 | 11GB
meta-llama/Llama-3.1-8B-Instruct | Q4 | Fits comfortably | 54.08 | 4GB
meta-llama/Llama-3.1-8B-Instruct | Q8 | Fits comfortably | 36.41 | 9GB
meta-llama/Llama-3.1-8B-Instruct | FP16 | Fits comfortably | 20.40 | 17GB
dphn/dolphin-2.9.1-yi-1.5-34b | Q4 | Fits comfortably | 19.89 | 17GB
dphn/dolphin-2.9.1-yi-1.5-34b | Q8 | Fits comfortably | 13.72 | 35GB
dphn/dolphin-2.9.1-yi-1.5-34b | FP16 | Fits comfortably | 7.48 | 70GB
openai/gpt-oss-20b | Q4 | Fits comfortably | 28.14 | 10GB
openai/gpt-oss-20b | Q8 | Fits comfortably | 19.81 | 20GB
openai/gpt-oss-20b | FP16 | Fits comfortably | 12.07 | 41GB
google/gemma-3-1b-it | Q4 | Fits comfortably | 59.89 | 1GB
google/gemma-3-1b-it | Q8 | Fits comfortably | 49.59 | 1GB
google/gemma-3-1b-it | FP16 | Fits comfortably | 24.33 | 2GB
Qwen/Qwen3-Embedding-0.6B | Q4 | Fits comfortably | 55.88 | 3GB
Qwen/Qwen3-Embedding-0.6B | Q8 | Fits comfortably | 39.38 | 6GB
Qwen/Qwen3-Embedding-0.6B | FP16 | Fits comfortably | 19.56 | 13GB
Qwen/Qwen2.5-1.5B-Instruct | Q4 | Fits comfortably | 50.25 | 3GB
Qwen/Qwen2.5-1.5B-Instruct | Q8 | Fits comfortably | 34.81 | 5GB
Qwen/Qwen2.5-1.5B-Instruct | FP16 | Fits comfortably | 22.20 | 11GB
facebook/opt-125m | Q4 | Fits comfortably | 49.55 | 4GB
facebook/opt-125m | Q8 | Fits comfortably | 39.30 | 7GB
facebook/opt-125m | FP16 | Fits comfortably | 19.63 | 15GB
TinyLlama/TinyLlama-1.1B-Chat-v1.0 | Q4 | Fits comfortably | 66.76 | 1GB
TinyLlama/TinyLlama-1.1B-Chat-v1.0 | Q8 | Fits comfortably | 48.01 | 1GB
TinyLlama/TinyLlama-1.1B-Chat-v1.0 | FP16 | Fits comfortably | 25.17 | 2GB
trl-internal-testing/tiny-Qwen2ForCausalLM-2.5 | Q4 | Fits comfortably | 49.72 | 4GB
trl-internal-testing/tiny-Qwen2ForCausalLM-2.5 | Q8 | Fits comfortably | 39.08 | 7GB
trl-internal-testing/tiny-Qwen2ForCausalLM-2.5 | FP16 | Fits comfortably | 20.59 | 15GB
Qwen/Qwen3-4B-Instruct-2507 | Q4 | Fits comfortably | 50.45 | 2GB
Qwen/Qwen3-4B-Instruct-2507 | Q8 | Fits comfortably | 34.22 | 4GB
Qwen/Qwen3-4B-Instruct-2507 | FP16 | Fits comfortably | 18.59 | 9GB
meta-llama/Llama-3.2-1B-Instruct | Q4 | Fits comfortably | 69.17 | 1GB
meta-llama/Llama-3.2-1B-Instruct | Q8 | Fits comfortably | 46.78 | 1GB
meta-llama/Llama-3.2-1B-Instruct | FP16 | Fits comfortably | 23.77 | 2GB
openai/gpt-oss-120b | Q4 | Fits comfortably | 11.03 | 59GB
openai/gpt-oss-120b | Q8 | Not supported | 7.03 | 117GB
openai/gpt-oss-120b | FP16 | Not supported | 4.00 | 235GB
Qwen/Qwen2.5-3B-Instruct | Q4 | Fits comfortably | 67.41 | 2GB
Qwen/Qwen2.5-3B-Instruct | Q8 | Fits comfortably | 43.88 | 3GB

Note: Performance estimates are calculated. Real results may vary. Methodology · Submit real data
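
The verdict column reduces to comparing a model's estimated footprint against the 96GB pool. Here is a minimal sketch of that check; the "Tight fit" threshold is an assumption added for illustration, since the table above only uses "Fits comfortably" and "Not supported":

```python
# Reproduce the compatibility verdict: compare the memory a model
# needs against what the GPU has. The 90% threshold is an assumed
# cutoff for flagging borderline cases.

def verdict(required_gb: float, available_gb: float = 96.0) -> str:
    if required_gb > available_gb:
        return "Not supported"
    if required_gb > 0.9 * available_gb:
        return "Tight fit"  # assumed label, not used on this page
    return "Fits comfortably"

# Examples drawn from the table above:
print(verdict(17))   # dolphin 34B @ Q4   -> Fits comfortably
print(verdict(117))  # gpt-oss-120b @ Q8  -> Not supported
```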

Alternative GPUs

Each of these cards has its own page exploring how it stacks up for local inference workloads:

  • RTX 5070 (12GB)
  • RTX 4060 Ti 16GB (16GB)
  • RX 6800 XT (16GB)
  • RTX 4070 Super (12GB)
  • RTX 3080 (10GB)

Can it play popular games?

VRAM requirements for popular titles:

  • Cyberpunk 2077 (RPG, 2020): 8GB
  • Baldur's Gate 3 (RPG, 2023): 8GB
  • Hogwarts Legacy (Action RPG, 2023): 12GB
  • Starfield (RPG, 2023): 8GB
  • Alan Wake 2 (Survival Horror, 2023): 12GB
  • Elden Ring (Action RPG, 2022): 8GB
  • Black Myth: Wukong (Action RPG, 2024): 12GB
  • Grand Theft Auto VI (Action Adventure, 2025): 12GB
  • Resident Evil 4 Remake (Survival Horror, 2023): 12GB
  • Marvel's Spider-Man Remastered (Action, 2022): 12GB
  • The Last of Us Part I (Action Adventure, 2023): 12GB
  • Red Dead Redemption 2 (Action Adventure, 2019): 8GB

View all 64 compatible games