
Quick Answer: The RTX 4080 offers 16GB of VRAM and starts around $1,496.19. It delivers approximately 162 tokens/sec on ibm-granite/granite-3.3-2b-instruct (Q4, estimated) and typically draws 320W under load.

RTX 4080

In Stock
By NVIDIA · Released 2022-11 · MSRP $1,199.00

RTX 4080 balances throughput and efficiency. It crushes 8B–13B models, handles most 70B work only with aggressive quantization and offloading, and keeps power and thermals manageable (the sketch below the specs snapshot shows the size arithmetic).

Buy on Amazon ($1,496.19) · View Benchmarks
Specs snapshot

Key hardware metrics for AI workloads.

  • VRAM: 16GB
  • Cores: 9,728
  • TDP: 320W
  • Architecture: Ada Lovelace
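Why 16GB handles small models easily but 70B only with tricks comes down to weight size under quantization. Here is a minimal sketch of that arithmetic; the bytes-per-parameter figures are standard approximations for GGUF-style quants, and the 1.2× overhead factor for KV cache and activations is an assumption, not this site's methodology.

```python
# Rough VRAM needed for model weights under common quantization levels.
# Bytes/param are approximations for GGUF-style quants; the 1.2x factor
# covering KV cache and activations is an illustrative assumption.
BYTES_PER_PARAM = {"FP16": 2.0, "Q8": 1.0, "Q4": 0.5, "Q3": 0.375}

def est_vram_gb(params_billion: float, quant: str, overhead: float = 1.2) -> float:
    return params_billion * BYTES_PER_PARAM[quant] * overhead

for size, quant in [(8, "Q4"), (13, "Q4"), (70, "Q4"), (70, "Q3")]:
    need = est_vram_gb(size, quant)
    fits = "fits in 16GB" if need <= 16 else "needs offload on 16GB"
    print(f"{size}B @ {quant}: ~{need:.1f} GB -> {fits}")
```

At Q4, a 70B model's weights alone come to roughly 35GB before overhead, which is why the FAQ below treats 70B on this card as an offloading exercise.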

Where to Buy

Buy directly on Amazon with fast shipping and reliable customer service.

Amazon · In Stock
$1,496.19
Buy on Amazon

💡 Not ready to buy? Try cloud GPUs first

Test RTX 4080 performance in the cloud before investing in hardware. Pay by the hour with no commitment.

  • Vast.ai: from $0.20/hr
  • RunPod: from $0.30/hr
  • Lambda Labs: enterprise-grade
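If you are weighing cloud rental against a purchase, the break-even arithmetic is simple. A rough sketch using the hourly rates above and the current Amazon price; it ignores electricity, resale value, and cloud storage fees.

```python
# Break-even hours: renting a cloud GPU vs. buying the card outright.
# Uses the hourly rates listed above; ignores electricity and resale value.
card_price = 1496.19
for provider, rate in [("Vast.ai", 0.20), ("RunPod", 0.30)]:
    hours = card_price / rate
    print(f"{provider} at ${rate:.2f}/hr: ~{hours:,.0f} rental hours to match ${card_price:,.2f}")
```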

AI benchmarks

All rows are auto-generated benchmarks, not measured results.

Model | Quantization | Tokens/sec (estimated) | VRAM used
ibm-granite/granite-3.3-2b-instruct | Q4 | 161.91 | 1GB
deepseek-ai/deepseek-coder-1.3b-instruct | Q4 | 161.62 | 2GB
meta-llama/Llama-3.2-1B-Instruct | Q4 | 160.52 | 1GB
Qwen/Qwen2.5-3B-Instruct | Q4 | 160.21 | 2GB
nari-labs/Dia2-2B | Q4 | 159.83 | 2GB
deepseek-ai/DeepSeek-OCR | Q4 | 158.59 | 2GB
google/gemma-2b | Q4 | 157.92 | 1GB
apple/OpenELM-1_1B-Instruct | Q4 | 157.43 | 1GB
TinyLlama/TinyLlama-1.1B-Chat-v1.0 | Q4 | 154.93 | 1GB
tencent/HunyuanOCR | Q4 | 153.42 | 1GB
unsloth/Llama-3.2-1B-Instruct | Q4 | 153.14 | 1GB
google/gemma-2-2b-it | Q4 | 152.89 | 1GB
bigcode/starcoder2-3b | Q4 | 152.86 | 2GB
meta-llama/Llama-Guard-3-1B | Q4 | 152.48 | 1GB
LiquidAI/LFM2-1.2B | Q4 | 149.85 | 1GB
Qwen/Qwen2.5-3B | Q4 | 148.23 | 2GB
google/gemma-3-1b-it | Q4 | 148.12 | 1GB
google-bert/bert-base-uncased | Q4 | 145.00 | 1GB
google-t5/t5-3b | Q4 | 145.00 | 2GB
ibm-research/PowerMoE-3b | Q4 | 144.00 | 2GB
google/embeddinggemma-300m | Q4 | 142.71 | 1GB
meta-llama/Llama-3.2-3B | Q4 | 142.19 | 2GB
WeiboAI/VibeThinker-1.5B | Q4 | 142.16 | 1GB
meta-llama/Llama-3.2-3B-Instruct | Q4 | 141.28 | 2GB
meta-llama/Llama-3.2-1B | Q4 | 141.20 | 1GB
allenai/OLMo-2-0425-1B | Q4 | 140.71 | 1GB
inference-net/Schematron-3B | Q4 | 139.36 | 2GB
context-labs/meta-llama-Llama-3.2-3B-Instruct-FP16 | Q4 | 136.30 | 2GB
dicta-il/dictalm2.0-instruct | Q4 | 135.00 | 4GB
deepseek-ai/DeepSeek-R1 | Q4 | 134.84 | 4GB
unsloth/gemma-3-1b-it | Q4 | 134.70 | 1GB
Qwen/Qwen3-4B-Instruct-2507 | Q4 | 134.65 | 2GB
HuggingFaceTB/SmolLM-135M | Q4 | 134.54 | 4GB
facebook/sam3 | Q4 | 134.53 | 1GB
Qwen/Qwen2.5-Coder-7B-Instruct | Q4 | 134.12 | 4GB
openai-community/gpt2-medium | Q4 | 133.69 | 4GB
skt/kogpt2-base-v2 | Q4 | 133.30 | 4GB
deepseek-ai/DeepSeek-V3 | Q4 | 132.96 | 4GB
deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct | Q4 | 132.72 | 4GB
unsloth/Llama-3.2-3B-Instruct | Q4 | 132.64 | 2GB
Gensyn/Qwen2.5-0.5B-Instruct | Q4 | 132.49 | 3GB
Qwen/Qwen2.5-1.5B | Q4 | 132.49 | 3GB
microsoft/DialoGPT-medium | Q4 | 132.18 | 4GB
numind/NuExtract-1.5 | Q4 | 131.85 | 4GB
kaitchup/Phi-3-mini-4k-instruct-gptq-4bit | Q4 | 131.81 | 2GB
Qwen/Qwen2-0.5B | Q4 | 131.72 | 3GB
lmstudio-community/DeepSeek-R1-0528-Qwen3-8B-MLX-4bit | Q4 | 131.45 | 4GB
allenai/Olmo-3-7B-Think | Q4 | 131.45 | 4GB
microsoft/Phi-4-multimodal-instruct | Q4 | 131.42 | 4GB
microsoft/Phi-3.5-vision-instruct | Q4 | 130.87 | 4GB

Note: Performance estimates are calculated. Real results may vary. Methodology · Submit real data
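These estimated figures can be sanity-checked with a memory-bandwidth roofline: during decoding, a dense model reads roughly its whole quantized weight set for each generated token, so tokens/sec is bounded by bandwidth divided by model size in bytes. A sketch below; the RTX 4080's ~717GB/s bandwidth is from its spec sheet, while the 0.6 efficiency factor is purely an assumption, and this is not necessarily how the site computes its estimates.

```python
# Roofline sanity check: decode tok/s ~= effective bandwidth / bytes per token.
# For a dense model, each generated token reads roughly all quantized weights once.
# The 0.6 efficiency factor is an assumed fudge for kernel/cache effects,
# not the site's actual estimation methodology.
BANDWIDTH_GBS = 716.8          # RTX 4080 memory bandwidth (GDDR6X)
EFFICIENCY = 0.6               # assumed fraction of peak bandwidth achieved

def roofline_toks(params_billion: float, bytes_per_param: float = 0.5) -> float:
    bytes_per_token_gb = params_billion * bytes_per_param  # GB read per token
    return BANDWIDTH_GBS * EFFICIENCY / bytes_per_token_gb

print(f"3B @ Q4: ~{roofline_toks(3):.0f} tok/s")  # small models end up compute-bound,
print(f"7B @ Q4: ~{roofline_toks(7):.0f} tok/s")  # which is why the table caps near 160
```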

Model compatibility

Model | Quantization | Verdict | Estimated speed | VRAM needed (card has 16GB)
lmstudio-community/Qwen3-Coder-30B-A3B-Instruct-MLX-8bit | Q8 | Not supported | 45.02 tok/s | 31GB
Qwen/Qwen2.5-14B-Instruct | Q4 | Fits comfortably | 88.01 tok/s | 8GB
lmstudio-community/Qwen3-4B-Thinking-2507-MLX-8bit | Q8 | Fits comfortably | 78.95 tok/s | 4GB
openai-community/gpt2 | Q4 | Fits comfortably | 112.87 tok/s | 4GB
openai-community/gpt2 | Q8 | Fits comfortably | 92.74 tok/s | 7GB
openai-community/gpt2 | FP16 | Fits (tight) | 42.95 tok/s | 15GB
Qwen/Qwen2.5-7B-Instruct | Q4 | Fits comfortably | 122.24 tok/s | 4GB
Qwen/Qwen2.5-7B-Instruct | Q8 | Fits comfortably | 92.91 tok/s | 7GB
Qwen/Qwen2.5-7B-Instruct | FP16 | Fits (tight) | 45.14 tok/s | 15GB
Qwen/Qwen3-0.6B | Q4 | Fits comfortably | 126.54 tok/s | 3GB
Qwen/Qwen3-0.6B | Q8 | Fits comfortably | 78.07 tok/s | 6GB
Qwen/Qwen3-0.6B | FP16 | Fits comfortably | 50.52 tok/s | 13GB
Gensyn/Qwen2.5-0.5B-Instruct | Q4 | Fits comfortably | 132.49 tok/s | 3GB
Gensyn/Qwen2.5-0.5B-Instruct | Q8 | Fits comfortably | 81.36 tok/s | 5GB
Gensyn/Qwen2.5-0.5B-Instruct | FP16 | Fits comfortably | 42.69 tok/s | 11GB
meta-llama/Llama-3.1-8B-Instruct | Q4 | Fits comfortably | 124.44 tok/s | 4GB
meta-llama/Llama-3.1-8B-Instruct | Q8 | Fits comfortably | 92.63 tok/s | 9GB
meta-llama/Llama-3.1-8B-Instruct | FP16 | Not supported | 44.90 tok/s | 17GB
dphn/dolphin-2.9.1-yi-1.5-34b | FP16 | Not supported | 17.42 tok/s | 70GB
openai/gpt-oss-20b | Q4 | Fits comfortably | 61.23 tok/s | 10GB
openai/gpt-oss-20b | Q8 | Not supported | 44.02 tok/s | 20GB
google/gemma-3-1b-it | Q8 | Fits comfortably | 104.58 tok/s | 1GB
facebook/opt-125m | Q8 | Fits comfortably | 85.63 tok/s | 7GB
TinyLlama/TinyLlama-1.1B-Chat-v1.0 | Q8 | Fits comfortably | 102.34 tok/s | 1GB
TinyLlama/TinyLlama-1.1B-Chat-v1.0 | FP16 | Fits comfortably | 51.25 tok/s | 2GB
trl-internal-testing/tiny-Qwen2ForCausalLM-2.5 | Q4 | Fits comfortably | 112.91 tok/s | 4GB
Qwen/Qwen3-4B-Instruct-2507 | Q8 | Fits comfortably | 85.28 tok/s | 4GB
Qwen/Qwen3-4B-Instruct-2507 | FP16 | Fits comfortably | 49.89 tok/s | 9GB
meta-llama/Llama-3.2-1B-Instruct | Q4 | Fits comfortably | 160.52 tok/s | 1GB
meta-llama/Llama-3.2-1B-Instruct | Q8 | Fits comfortably | 97.93 tok/s | 1GB
openai/gpt-oss-120b | Q4 | Not supported | 24.09 tok/s | 59GB
openai/gpt-oss-120b | Q8 | Not supported | 18.52 tok/s | 117GB
openai/gpt-oss-120b | FP16 | Not supported | 9.25 tok/s | 235GB
Qwen/Qwen2.5-3B-Instruct | Q4 | Fits comfortably | 160.21 tok/s | 2GB
Qwen/Qwen2.5-3B-Instruct | Q8 | Fits comfortably | 93.22 tok/s | 3GB
Qwen/Qwen3-32B | Q4 | Fits (tight) | 44.69 tok/s | 16GB
Qwen/Qwen3-32B | Q8 | Not supported | 27.21 tok/s | 33GB
Qwen/Qwen3-32B | FP16 | Not supported | 16.12 tok/s | 66GB
Qwen/Qwen3-Next-80B-A3B-Instruct | Q4 | Not supported | 23.12 tok/s | 39GB
Qwen/Qwen3-Next-80B-A3B-Instruct | Q8 | Not supported | 16.67 tok/s | 78GB
Qwen/Qwen3-Next-80B-A3B-Instruct | FP16 | Not supported | 9.13 tok/s | 156GB
microsoft/Phi-3-mini-4k-instruct | FP16 | Fits (tight) | 51.12 tok/s | 15GB
openai-community/gpt2-large | Q4 | Fits comfortably | 122.31 tok/s | 4GB
openai-community/gpt2-large | Q8 | Fits comfortably | 92.66 tok/s | 7GB
openai-community/gpt2-large | FP16 | Fits (tight) | 44.97 tok/s | 15GB
Qwen/Qwen3-1.7B | Q4 | Fits comfortably | 114.77 tok/s | 4GB
Qwen/Qwen3-1.7B | Q8 | Fits comfortably | 83.15 tok/s | 7GB
Qwen/Qwen3-1.7B | FP16 | Fits (tight) | 46.29 tok/s | 15GB
Qwen/Qwen3-4B | Q4 | Fits comfortably | 128.39 tok/s | 2GB
unsloth/Llama-3.2-1B-Instruct | Q8 | Fits comfortably | 110.31 tok/s | 1GB

Note: Performance estimates are calculated. Real results may vary. Methodology · Submit real data
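The verdict column reduces to comparing the VRAM a quantized model needs against the card's 16GB. Below is a minimal reimplementation of that comparison; the 90% "tight" threshold is my guess reverse-engineered from the rows above (15–16GB required on a 16GB card reads as "Fits (tight)"), not the site's documented rule.

```python
# Reproduce the table's verdict column from required vs. available VRAM.
# The 0.9 "tight" threshold is an assumption inferred from the rows above
# (15-16GB required on a 16GB card shows as "Fits (tight)").
def verdict(required_gb: float, available_gb: float = 16.0) -> str:
    if required_gb > available_gb:
        return "Not supported"
    if required_gb >= 0.9 * available_gb:
        return "Fits (tight)"
    return "Fits comfortably"

print(verdict(8))    # Qwen2.5-14B Q4   -> Fits comfortably
print(verdict(15))   # gpt2 FP16        -> Fits (tight)
print(verdict(16))   # Qwen3-32B Q4     -> Fits (tight)
print(verdict(31))   # 30B MLX 8-bit    -> Not supported
```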

GPU FAQs

Data-backed answers pulled from community benchmarks, manufacturer specs, and live pricing.

How fast is RTX 4080 on Llama 3.3 70B today?

Umbrella's CUDA build runs the 16 GB chat preset for Llama 3.3 70B at roughly 10 tokens/sec on a stock RTX 4080, around 20× faster than older GGUF pipelines on the same card.

Source: Reddit – /r/LocalLLaMA (m7daipg)

Can software tuning double RTX 4080 throughput?

Yes. One builder logged Llama 3.3 70B Q3_s at ~15 tok/s on Windows with Ollama, then jumped to ~30 tok/s after switching to Linux with ExLlama and performance-tuned CUDA kernels.

Source: Reddit – /r/LocalLLaMA (mi1gu0s)
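Given that software choices alone doubled throughput in that report, it is worth measuring your own stack rather than relying on estimates. A minimal sketch against Ollama's local REST API, which returns generated-token counts and decode time per request; it assumes Ollama is running on its default port, and the model tag is a placeholder for whatever you have pulled.

```python
# Measure real decode throughput on your own stack via Ollama's REST API.
# Assumes Ollama is running locally on its default port; the model tag is
# illustrative -- substitute whatever you have pulled.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.2:3b",  # placeholder tag; use your own
        "prompt": "Explain KV caching in two sentences.",
        "stream": False,
    },
    timeout=300,
)
data = resp.json()
# eval_count = generated tokens; eval_duration = decode time in nanoseconds
toks = data["eval_count"] / (data["eval_duration"] / 1e9)
print(f"decode throughput: {toks:.1f} tok/s")
```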

What are the thermal and power requirements?

RTX 4080 carries a 320 W board power rating, ships with 16 GB of GDDR6X, and uses the 16-pin 12VHPWR connector. NVIDIA recommends at least a 750 W PSU.

Source: TechPowerUp – RTX 4080 Specs
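NVIDIA's 750 W figure is a floor for the whole system, and you can sanity-check it for your own build. A rough sketch; every draw besides the card's 320 W rating is an illustrative assumption, and the margin covers the transient spikes 12VHPWR cards are known for.

```python
# Rough PSU sizing check for a 320W card. Component draws other than the
# GPU's rated board power are illustrative assumptions; the margin covers
# transient spikes and keeps the PSU in its efficiency sweet spot.
gpu_w = 320
cpu_w = 150        # assumed mid-range CPU under load
rest_w = 80        # assumed motherboard, RAM, drives, fans
margin = 1.5       # headroom for transients

recommended = (gpu_w + cpu_w + rest_w) * margin
print(f"suggested PSU: ~{recommended:.0f} W (NVIDIA's floor is 750 W)")
```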

Is 16 GB VRAM sufficient for 70B-class models?

Only with heavy offloading. Users experimenting with fast DDR5 system memory and PCIe offload confirm that 70B models can run, but bandwidth limits keep throughput well below what 24 GB cards manage.

Source: Reddit – /r/LocalLLaMA (m76rp0l)
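In practice, "heavy offloading" means pinning only part of the model in VRAM. A sketch with llama-cpp-python, whose n_gpu_layers parameter controls how many transformer layers live on the GPU while the rest run from system RAM; the file path and layer count are placeholders to tune for your setup.

```python
# Partial GPU offload with llama-cpp-python: n_gpu_layers controls how many
# transformer layers live in the 4080's 16GB of VRAM; the rest stay in system
# RAM and run on the CPU, which is what caps 70B throughput on this card.
# Path and layer count are placeholders -- raise layers until VRAM is nearly full.
from llama_cpp import Llama

llm = Llama(
    model_path="./llama-3.3-70b-instruct.Q3_K_S.gguf",  # hypothetical local file
    n_gpu_layers=35,   # assumed value; -1 would try to offload every layer
    n_ctx=4096,
)
out = llm("Summarize the tradeoffs of Q3 quantization.", max_tokens=128)
print(out["choices"][0]["text"])
```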

Where are RTX 4080 prices sitting?

Price snapshot from Nov 2025: Amazon at $1,199 in stock.

Source: Supabase price tracker snapshot – 2025-11-03

Alternative GPUs

Explore how each alternative stacks up for local inference workloads:

  • RTX 4090 · 24GB
  • RTX 4070 Ti · 12GB
  • RTX 4070 · 12GB
  • RTX 3080 · 10GB
  • RX 7900 XTX · 24GB

Compare RTX 4080

Side-by-side VRAM, throughput, efficiency, and pricing benchmarks for each pairing:

  • RTX 4080 vs RTX 4070 Ti
  • RTX 4080 vs RTX 4070
  • RTX 4080 vs RX 7900 XTX