
Quick Answer: The RTX 5070 offers 12GB of VRAM and currently starts around $761.10. It delivers approximately 142 tokens/sec (estimated) on Qwen/Qwen3-ASR-1.7B at Q4 and typically draws 250W under load.

RTX 5070

By NVIDIA · Released 2025-02 · MSRP $599.00

This GPU offers reliable throughput for local AI workloads. Pair it with the right model quantization to hit your desired tokens/sec, and monitor prices below to catch the best deal.

Buy on Amazon - $761.10 · View Benchmarks
Specs snapshot
Key hardware metrics for AI workloads.
  • VRAM: 12GB
  • Cores: 6,400
  • TDP: 250W
  • Architecture: Blackwell
Key Takeaways
  • 12GB VRAM - comfortably runs 7B-13B models at 4-bit; ~30B-class models need partial CPU offload
  • Solid mid-to-high range performance
  • Efficient (250W) - works with standard PSU configurations
  • Strong price-to-VRAM value

What this means for you

With 12GB of VRAM, the RTX 5070 comfortably runs 7B-13B models such as Llama 3 8B, Mistral 7B, and Qwen 7B at 4-bit quantization. Models approaching ~30B parameters need more aggressive quantization or partial CPU offload; the compatibility table below shows a 34B model at Q4 already exceeding 12GB.
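
As a rough rule of thumb, 4-bit weights take about half a byte per parameter, plus overhead for the KV cache and runtime buffers. The sketch below uses my own back-of-the-envelope assumptions (roughly 0.5 bytes per weight and ~15% overhead, no long-context KV cache), not this site's methodology; actual usage depends on the runtime, quant format, and context length.

```python
# Back-of-the-envelope VRAM estimate for a 4-bit quantized model.
# Assumptions (illustrative, not site methodology): ~0.5 bytes per parameter
# at Q4, plus ~15% overhead for KV cache and runtime buffers.

def q4_vram_gb(params_billions: float, overhead: float = 0.15) -> float:
    weight_bytes = params_billions * 1e9 * 0.5   # 4 bits per weight
    return weight_bytes * (1 + overhead) / 1e9   # rough GB

for size_b in (3, 7, 13, 30):
    fits = "fits in 12GB" if q4_vram_gb(size_b) <= 12 else "needs offload or a smaller quant"
    print(f"{size_b:>2}B @ Q4 ≈ {q4_vram_gb(size_b):4.1f} GB  ({fits})")
# 3B ≈ 1.7 GB, 7B ≈ 4.0 GB, 13B ≈ 7.5 GB fit; 30B ≈ 17.3 GB does not fit fully in VRAM
```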

Who should buy

  • Running 7B-13B parameter models
  • Stable Diffusion at standard resolutions
  • Learning and experimentation

Looking to upgrade?

Consider the RTX 4080 Super or RTX 4090 for more VRAM and cores on demanding workloads.

AI benchmarks

All figures below are auto-generated estimates.

Model | Quant | Tokens/sec (est.) | VRAM used
Qwen/Qwen3-ASR-1.7B | Q4 | 141.72 | 2GB
google-bert/bert-base-uncased | Q4 | 140.01 | 1GB
ibm-granite/granite-3.3-2b-instruct | Q4 | 138.92 | 1GB
meta-llama/Llama-3.2-3B | Q4 | 137.40 | 2GB
ibm-research/PowerMoE-3b | Q4 | 136.36 | 2GB
deepseek-ai/DeepSeek-OCR | Q4 | 135.92 | 2GB
deepseek-ai/deepseek-coder-1.3b-instruct | Q4 | 135.19 | 2GB
inference-net/Schematron-3B | Q4 | 134.90 | 2GB
google/gemma-2b | Q4 | 134.86 | 1GB
google/gemma-3-1b-it | Q4 | 134.66 | 1GB
context-labs/meta-llama-Llama-3.2-3B-Instruct-FP16 | Q4 | 133.94 | 2GB
facebook/sam3 | Q4 | 133.32 | 1GB
Qwen/Qwen2.5-3B-Instruct | Q4 | 133.21 | 2GB
bigcode/starcoder2-3b | Q4 | 132.96 | 2GB
unsloth/gemma-3-1b-it | Q4 | 132.93 | 1GB
TinyLlama/TinyLlama-1.1B-Chat-v1.0 | Q4 | 132.92 | 1GB
deepseek-ai/DeepSeek-OCR-2 | Q4 | 132.19 | 2GB
google-t5/t5-3b | Q4 | 130.41 | 2GB
nari-labs/Dia2-2B | Q4 | 128.99 | 2GB
google/gemma-2-2b-it | Q4 | 128.98 | 1GB
unsloth/Llama-3.2-1B-Instruct | Q4 | 126.30 | 1GB
meta-llama/Llama-3.2-3B-Instruct | Q4 | 126.09 | 2GB
meta-llama/Llama-3.2-1B | Q4 | 125.75 | 1GB
nineninesix/kani-tts-2-en | Q4 | 125.46 | 1GB
LiquidAI/LFM2-1.2B | Q4 | 124.86 | 1GB
meta-llama/Llama-Guard-3-1B | Q4 | 123.98 | 1GB
apple/OpenELM-1_1B-Instruct | Q4 | 123.74 | 1GB
tencent/HunyuanOCR | Q4 | 123.41 | 1GB
meta-llama/Llama-3.2-1B-Instruct | Q4 | 122.79 | 1GB
unsloth/Llama-3.2-3B-Instruct | Q4 | 122.20 | 2GB
WeiboAI/VibeThinker-1.5B | Q4 | 121.39 | 1GB
allenai/OLMo-2-0425-1B | Q4 | 120.48 | 1GB
google/embeddinggemma-300m | Q4 | 119.38 | 1GB
Qwen/Qwen2.5-3B | Q4 | 118.90 | 2GB
swiss-ai/Apertus-8B-Instruct-2509 | Q4 | 118.29 | 4GB
microsoft/Phi-3-mini-128k-instruct | Q4 | 118.22 | 4GB
Qwen/Qwen2.5-0.5B | Q4 | 118.02 | 3GB
huggyllama/llama-7b | Q4 | 117.65 | 4GB
FireRedTeam/FireRed-Image-Edit-1.0 | Q4 | 117.33 | 4GB
lmstudio-community/DeepSeek-R1-0528-Qwen3-8B-MLX-8bit | Q4 | 117.29 | 4GB
meta-llama/Meta-Llama-3-8B | Q4 | 117.28 | 4GB
lmstudio-community/Qwen3-4B-Thinking-2507-MLX-8bit | Q4 | 117.18 | 2GB
Qwen/Qwen2.5-Coder-7B-Instruct | Q4 | 116.88 | 4GB
deepseek-ai/DeepSeek-V3-0324 | Q4 | 116.57 | 4GB
Qwen/Qwen3-Embedding-0.6B | Q4 | 116.52 | 3GB
GSAI-ML/LLaDA-8B-Base | Q4 | 116.41 | 4GB
meta-llama/Meta-Llama-3-8B-Instruct | Q4 | 116.11 | 4GB
microsoft/DialoGPT-medium | Q4 | 116.06 | 4GB
deepseek-ai/DeepSeek-R1 | Q4 | 115.89 | 4GB
FireRedTeam/FireRed-Image-Edit-1.0 | Q4 | 115.68 | 4GB
4GB
Note: Performance estimates are calculated. Real results may vary. Methodology · Submit real data
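
If you want to replace these estimates with measured numbers on your own card, one simple approach is to time a generation run locally. Below is a minimal sketch using llama-cpp-python; the model path and prompt are placeholders, and any Q4 GGUF you have downloaded will do.

```python
# Minimal throughput check with llama-cpp-python (sketch; paths are placeholders).
import time
from llama_cpp import Llama

llm = Llama(
    model_path="llama-3.2-3b-instruct.Q4_K_M.gguf",  # placeholder: any local Q4 GGUF
    n_gpu_layers=-1,   # offload all layers to the GPU
    n_ctx=2048,
    verbose=False,
)

start = time.perf_counter()
out = llm("Explain what VRAM is in two sentences.", max_tokens=256)
elapsed = time.perf_counter() - start

generated = out["usage"]["completion_tokens"]
print(f"{generated} tokens in {elapsed:.1f}s -> {generated / elapsed:.1f} tok/s")
```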

Model compatibility

All speeds below are auto-generated estimates.

Model | Quant | Verdict | Est. speed (tok/s) | VRAM needed (12GB available)
openai-community/gpt2 | Q8 | Fits comfortably | 73.80 | 7GB
openai-community/gpt2 | FP16 | Not supported | 44.28 | 15GB
Qwen/Qwen2.5-7B-Instruct | Q4 | Fits comfortably | 105.45 | 4GB
Qwen/Qwen2.5-7B-Instruct | Q8 | Fits comfortably | 77.89 | 7GB
Qwen/Qwen2.5-7B-Instruct | FP16 | Not supported | 41.72 | 15GB
Qwen/Qwen3-0.6B | Q4 | Fits comfortably | 105.76 | 3GB
Qwen/Qwen3-0.6B | Q8 | Fits comfortably | 80.08 | 6GB
Qwen/Qwen3-0.6B | FP16 | Not supported | 40.66 | 13GB
Gensyn/Qwen2.5-0.5B-Instruct | Q4 | Fits comfortably | 113.19 | 3GB
Gensyn/Qwen2.5-0.5B-Instruct | Q8 | Fits comfortably | 80.78 | 5GB
Gensyn/Qwen2.5-0.5B-Instruct | FP16 | Fits (tight) | 42.28 | 11GB
meta-llama/Llama-3.1-8B-Instruct | Q4 | Fits comfortably | 105.69 | 4GB
meta-llama/Llama-3.1-8B-Instruct | Q8 | Fits comfortably | 77.38 | 9GB
meta-llama/Llama-3.1-8B-Instruct | FP16 | Not supported | 44.35 | 17GB
dphn/dolphin-2.9.1-yi-1.5-34b | Q4 | Not supported | 35.11 | 17GB
dphn/dolphin-2.9.1-yi-1.5-34b | Q8 | Not supported | 24.74 | 35GB
dphn/dolphin-2.9.1-yi-1.5-34b | FP16 | Not supported | 15.67 | 70GB
openai/gpt-oss-20b | Q4 | Fits comfortably | 60.23 | 10GB
openai/gpt-oss-20b | Q8 | Not supported | 42.10 | 20GB
openai/gpt-oss-20b | FP16 | Not supported | 21.14 | 41GB
google/gemma-3-1b-it | Q4 | Fits comfortably | 134.66 | 1GB
google/gemma-3-1b-it | Q8 | Fits comfortably | 94.76 | 1GB
google/gemma-3-1b-it | FP16 | Fits comfortably | 45.16 | 2GB
Qwen/Qwen3-Embedding-0.6B | Q4 | Fits comfortably | 116.52 | 3GB
Qwen/Qwen3-Embedding-0.6B | Q8 | Fits comfortably | 79.85 | 6GB
Qwen/Qwen3-Embedding-0.6B | FP16 | Not supported | 44.55 | 13GB
Qwen/Qwen2.5-1.5B-Instruct | Q4 | Fits comfortably | 107.28 | 3GB
Qwen/Qwen2.5-1.5B-Instruct | Q8 | Fits comfortably | 69.54 | 5GB
Qwen/Qwen2.5-1.5B-Instruct | FP16 | Fits (tight) | 38.90 | 11GB
facebook/opt-125m | Q4 | Fits comfortably | 99.36 | 4GB
facebook/opt-125m | Q8 | Fits comfortably | 79.73 | 7GB
facebook/opt-125m | FP16 | Not supported | 43.52 | 15GB
TinyLlama/TinyLlama-1.1B-Chat-v1.0 | Q4 | Fits comfortably | 132.92 | 1GB
TinyLlama/TinyLlama-1.1B-Chat-v1.0 | Q8 | Fits comfortably | 97.71 | 1GB
TinyLlama/TinyLlama-1.1B-Chat-v1.0 | FP16 | Fits comfortably | 50.58 | 2GB
trl-internal-testing/tiny-Qwen2ForCausalLM-2.5 | Q4 | Fits comfortably | 96.87 | 4GB
trl-internal-testing/tiny-Qwen2ForCausalLM-2.5 | Q8 | Fits comfortably | 77.81 | 7GB
trl-internal-testing/tiny-Qwen2ForCausalLM-2.5 | FP16 | Not supported | 38.40 | 15GB
Qwen/Qwen3-4B-Instruct-2507 | Q4 | Fits comfortably | 114.49 | 2GB
Qwen/Qwen3-4B-Instruct-2507 | Q8 | Fits comfortably | 82.11 | 4GB
Qwen/Qwen3-4B-Instruct-2507 | FP16 | Fits comfortably | 43.50 | 9GB
meta-llama/Llama-3.2-1B-Instruct | Q4 | Fits comfortably | 122.79 | 1GB
meta-llama/Llama-3.2-1B-Instruct | Q8 | Fits comfortably | 93.43 | 1GB
meta-llama/Llama-3.2-1B-Instruct | FP16 | Fits comfortably | 44.85 | 2GB
openai/gpt-oss-120b | Q4 | Not supported | 20.74 | 59GB
openai/gpt-oss-120b | Q8 | Not supported | 14.12 | 117GB
openai/gpt-oss-120b | FP16 | Not supported | 8.45 | 235GB
Qwen/Qwen2.5-3B-Instruct | Q4 | Fits comfortably | 133.21 | 2GB
Qwen/Qwen2.5-3B-Instruct | Q8 | Fits comfortably | 88.09 | 3GB
openai-community/gpt2 | Q4 | Fits comfortably | 102.45 | 4GB

Note: Performance estimates are calculated. Real results may vary. Methodology · Submit real data
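
The verdicts in this table follow from comparing each model's estimated VRAM requirement against the card's 12GB. Here is a minimal sketch of that comparison; the 85% "comfortable" cutoff is an illustrative assumption chosen to match the table above, not the site's published methodology.

```python
# Classify whether a model fits in VRAM (illustrative thresholds, not official methodology).
def fit_verdict(required_gb: float, available_gb: float = 12.0) -> str:
    if required_gb > available_gb:
        return "Not supported"        # requirement alone exceeds VRAM
    if required_gb > 0.85 * available_gb:
        return "Fits (tight)"         # fits, but little headroom for KV cache and buffers
    return "Fits comfortably"

print(fit_verdict(7))    # Fits comfortably (e.g. gpt2 Q8)
print(fit_verdict(11))   # Fits (tight)     (e.g. Qwen2.5-1.5B FP16)
print(fit_verdict(17))   # Not supported    (e.g. dolphin-34b Q4)
```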

Where to Buy

Buy directly on Amazon with fast shipping and reliable customer service.

💰 Price Analysis
Good price range.
Current Amazon listing: $761.10 (up 10.6% over the past 30 days)
  • 30-day low: $459
  • 30-day average: $503
  • 30-day high: $531

Prime shipping available • 30-day returns


Complete Your Build

Essential accessories to pair with the RTX 5070:

  • Corsair RM750x (2025) 750W - $119 (750W provides ample headroom for this 250W card)
  • Corsair Vengeance 32GB DDR5-6000 - $129 (32GB is ideal for AI workloads)
  • Noctua NF-A12x25 PWM - $35 (quiet, efficient cooling)
  • Thermal Grizzly Kryonaut - $15 (premium thermal paste)

Total bundle price (all items from Amazon): $306

💡 Not ready to buy? Try cloud GPUs first

Test RTX 5070 performance in the cloud before investing in hardware. Pay by the hour with no commitment.

  • Vast.ai - from $0.20/hr
  • RunPod - from $0.30/hr
  • Lambda Labs - enterprise-grade
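
For a rough sense of when buying beats renting, divide the card's price by the hourly rate. The sketch below uses the current Amazon price and the listed Vast.ai starting rate; real cloud bills also include storage and idle time, so treat the result as a lower bound on rental cost.

```python
# Break-even estimate: purchase price vs. hourly cloud rental (rough arithmetic only).
gpu_price = 761.10          # current Amazon price for the RTX 5070 (USD)
cloud_rate_per_hr = 0.20    # listed Vast.ai starting rate (USD/hr)

break_even_hours = gpu_price / cloud_rate_per_hr
print(f"Break-even after ~{break_even_hours:,.0f} GPU-hours "
      f"(~{break_even_hours / 24:,.0f} days of continuous use)")
# roughly 3,800 hours, i.e. about 160 days running 24/7
```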

Alternative GPUs

  • RTX 4060 Ti 16GB (16GB VRAM)
  • RX 6800 XT (16GB VRAM)
  • RTX 4070 Super (12GB VRAM)
  • RTX 3080 (10GB VRAM)
  • RTX 3090 (24GB VRAM)

Explore how each of these stacks up for local inference workloads.

Can it play popular games?

With 12GB of VRAM, the RTX 5070 meets the listed VRAM requirement for all of these titles:

  • Cyberpunk 2077 (8GB VRAM) - RPG, 2020
  • Baldur's Gate 3 (8GB VRAM) - RPG, 2023
  • Hogwarts Legacy (12GB VRAM) - Action RPG, 2023
  • Starfield (8GB VRAM) - RPG, 2023
  • Alan Wake 2 (12GB VRAM) - Survival Horror, 2023
  • Elden Ring (8GB VRAM) - Action RPG, 2022
  • Black Myth: Wukong (12GB VRAM) - Action RPG, 2024
  • Grand Theft Auto VI (12GB VRAM) - Action Adventure, 2025
  • Resident Evil 4 Remake (12GB VRAM) - Survival Horror, 2023
  • Marvel's Spider-Man Remastered (12GB VRAM) - Action, 2022
  • The Last of Us Part I (12GB VRAM) - Action Adventure, 2023
  • Red Dead Redemption 2 (8GB VRAM) - Action Adventure, 2019

View all 64 compatible games