localai.computer
© 2025 localai.computer. Hardware recommendations for running AI models locally.

ℹ️ We earn from qualifying purchases through affiliate links at no extra cost to you. This supports our free content and research.


Quick Answer: The Intel Arc B580 offers 12GB of VRAM and launched at a $249 MSRP. It delivers an estimated 88 tokens/sec on deepseek-ai/DeepSeek-OCR-2 at Q4 quantization, and it typically draws 190W under load.

Intel Arc B580

Check availability
By Intel · Released 2024-12 · MSRP $249.00

This GPU offers reliable throughput for local AI workloads. Pair it with the right model quantization to hit your desired tokens/sec, and monitor prices below to catch the best deal.
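As a rough rule of thumb, a model's weight footprint is its parameter count times the bytes stored per weight, plus runtime overhead for the KV cache and buffers. The sketch below illustrates that arithmetic; the 20% overhead factor and the bytes-per-weight mapping are simplifying assumptions, not measured values, and real usage varies with context length and runtime.

```python
# Rough VRAM-fit check for a quantized model on a given GPU.
# Bytes-per-weight values are approximate; the 1.2x overhead
# factor is an assumption covering KV cache and runtime buffers.

BYTES_PER_WEIGHT = {"Q4": 0.5, "Q8": 1.0, "FP16": 2.0}

def fits(params_billions: float, quant: str, vram_gb: float = 12.0,
         overhead: float = 1.2) -> bool:
    """Return True if estimated memory use fits in the card's VRAM."""
    needed_gb = params_billions * BYTES_PER_WEIGHT[quant] * overhead
    return needed_gb <= vram_gb

# A 7B model at Q4 needs roughly 7 * 0.5 * 1.2 = 4.2 GB -> fits in 12GB
print(fits(7, "Q4"))    # True
print(fits(7, "FP16"))  # 7 * 2.0 * 1.2 = 16.8 GB -> False
```

This is why the same card can run a 7B model comfortably at Q4 yet fail to load it at FP16, matching the pattern in the benchmark tables below.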

Search on Amazon · View Benchmarks
Specs snapshot
Key hardware metrics for AI workloads.
VRAM: 12GB
Cores: 5,120
TDP: 190W
Architecture: Battlemage Xe2

Where to Buy

Buy directly on Amazon with fast shipping and reliable customer service.

No purchase links available yet. Try the Amazon search results to find this GPU.
Complete Your Build

Essential accessories to pair with Intel Arc B580

Corsair RM750x 750W
750W provides ample headroom for the Arc B580's 190W TDP
$119
Buy
Corsair Vengeance 32GB DDR5
32GB ideal for AI workloads
$129
Buy
Noctua NF-A12x25
Quiet and efficient cooling
$35
Buy

Total Bundle Price

All items from Amazon

$283
Individual: $283
Find All on Amazon · More GPUs

💡 Not ready to buy? Try cloud GPUs first

Test Intel Arc B580 performance in the cloud before investing in hardware. Pay by the hour with no commitment.

  • Vast.ai: from $0.20/hr
  • RunPod: from $0.30/hr
  • Lambda Labs: enterprise-grade
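A quick way to decide between renting and buying is to compare the card's price against the hourly cloud rate. The sketch below uses the $249 MSRP and the $0.20/hr Vast.ai rate quoted above, and deliberately ignores electricity, depreciation, and resale value.

```python
# Break-even point: hours of cloud rental after which buying
# the card outright would have been cheaper.
# Ignores electricity cost, depreciation, and resale value.

def break_even_hours(gpu_price: float, cloud_rate_per_hr: float) -> float:
    return gpu_price / cloud_rate_per_hr

hours = break_even_hours(249.00, 0.20)
print(f"{hours:.0f} hours")  # 1245 hours, roughly 52 days of continuous use
```

If you expect only occasional experiments, cloud hours may never reach that break-even point; for daily local inference, the card pays for itself quickly.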

AI benchmarks

Model | Quantization | Tokens/sec | VRAM used
deepseek-ai/DeepSeek-OCR-2 | Q4 | 87.61 tok/s | 2GB
Qwen/Qwen3-ASR-1.7B | Q4 | 81.24 tok/s | 2GB
zai-org/GLM-OCR | Q4 | 77.49 tok/s | 4GB
moonshotai/Kimi-K2.5 | Q4 | 76.29 tok/s | 4GB
nvidia/personaplex-7b-v1 | Q4 | 72.14 tok/s | 4GB
deepseek-ai/DeepSeek-OCR-2 | Q8 | 66.93 tok/s | 4GB
Qwen/Qwen3-ASR-1.7B | Q8 | 64.83 tok/s | 3GB
moonshotai/Kimi-K2.5 | Q8 | 55.87 tok/s | 8GB
zai-org/GLM-OCR | Q8 | 55.69 tok/s | 8GB
nvidia/personaplex-7b-v1 | Q8 | 51.30 tok/s | 8GB
Qwen/Qwen3-ASR-1.7B | FP16 | 36.60 tok/s | 6GB
deepseek-ai/DeepSeek-OCR-2 | FP16 | 35.14 tok/s | 8GB
moonshotai/Kimi-K2.5 | FP16 | 28.74 tok/s | 16GB
nvidia/personaplex-7b-v1 | FP16 | 28.52 tok/s | 16GB
zai-org/GLM-OCR | FP16 | 27.87 tok/s | 16GB
zai-org/GLM-4.7-Flash | Q4 | 24.80 tok/s | 18GB
zai-org/GLM-4.7-Flash | Q8 | 16.97 tok/s | 35GB
Qwen/Qwen3-Coder-Next | Q4 | 14.28 tok/s | 45GB
zai-org/GLM-4.7-Flash | FP16 | 10.97 tok/s | 70GB
Qwen/Qwen3-Coder-Next | Q8 | 10.53 tok/s | 90GB
stepfun-ai/Step-3.5-Flash | Q4 | 9.40 tok/s | 112GB
stepfun-ai/Step-3.5-Flash | Q8 | 6.28 tok/s | 223GB
Qwen/Qwen3-Coder-Next | FP16 | 5.57 tok/s | 179GB
stepfun-ai/Step-3.5-Flash | FP16 | 3.65 tok/s | 446GB

All figures are auto-generated estimates.

Note: Performance estimates are calculated. Real results may vary. Methodology · Submit real data
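Much of the Q4-versus-FP16 spread in the table follows from memory bandwidth: token generation is typically bandwidth-bound, so throughput scales roughly inversely with the bytes of weights read per token. The sketch below assumes the B580's roughly 456 GB/s memory bandwidth and treats total weight size as the bytes read per token, which gives an idealized ceiling rather than a real-world figure.

```python
# Idealized bandwidth-bound throughput ceiling:
# tokens/sec ~ memory bandwidth / bytes of weights read per token.
# Real throughput is lower due to compute, KV cache traffic,
# and scheduling overhead.

B580_BANDWIDTH_GBS = 456.0  # published spec; treat as an assumption here

def est_tokens_per_sec(params_billions: float, bytes_per_weight: float) -> float:
    weight_gb = params_billions * bytes_per_weight
    return B580_BANDWIDTH_GBS / weight_gb

# A 7B model: Q4 (~0.5 bytes/weight) vs FP16 (2.0 bytes/weight)
print(est_tokens_per_sec(7, 0.5))  # ~130 tok/s ceiling
print(est_tokens_per_sec(7, 2.0))  # ~33 tok/s ceiling, a 4x gap
```

The 4x ratio between Q4 and FP16 ceilings mirrors the roughly 2-3x spread in the estimated table above, with the gap narrowed in practice by per-token overheads that don't shrink with quantization.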

Model compatibility

Model | Quantization | Verdict | Estimated speed | VRAM needed
nvidia/personaplex-7b-v1 | Q4 | Fits comfortably | 72.14 tok/s | 4GB (have 12GB)
moonshotai/Kimi-K2.5 | Q4 | Fits comfortably | 76.29 tok/s | 4GB (have 12GB)
Qwen/Qwen3-Coder-Next | Q4 | Not supported | 14.28 tok/s | 45GB (have 12GB)
Qwen/Qwen3-ASR-1.7B | Q4 | Fits comfortably | 81.24 tok/s | 2GB (have 12GB)
stepfun-ai/Step-3.5-Flash | Q4 | Not supported | 9.40 tok/s | 112GB (have 12GB)
deepseek-ai/DeepSeek-OCR-2 | Q4 | Fits comfortably | 87.61 tok/s | 2GB (have 12GB)
zai-org/GLM-4.7-Flash | Q4 | Not supported | 24.80 tok/s | 18GB (have 12GB)
zai-org/GLM-OCR | Q4 | Fits comfortably | 77.49 tok/s | 4GB (have 12GB)
nvidia/personaplex-7b-v1 | Q8 | Fits comfortably | 51.30 tok/s | 8GB (have 12GB)
moonshotai/Kimi-K2.5 | Q8 | Fits comfortably | 55.87 tok/s | 8GB (have 12GB)
Qwen/Qwen3-Coder-Next | Q8 | Not supported | 10.53 tok/s | 90GB (have 12GB)
Qwen/Qwen3-ASR-1.7B | Q8 | Fits comfortably | 64.83 tok/s | 3GB (have 12GB)
stepfun-ai/Step-3.5-Flash | Q8 | Not supported | 6.28 tok/s | 223GB (have 12GB)
deepseek-ai/DeepSeek-OCR-2 | Q8 | Fits comfortably | 66.93 tok/s | 4GB (have 12GB)
zai-org/GLM-4.7-Flash | Q8 | Not supported | 16.97 tok/s | 35GB (have 12GB)
zai-org/GLM-OCR | Q8 | Fits comfortably | 55.69 tok/s | 8GB (have 12GB)
nvidia/personaplex-7b-v1 | FP16 | Not supported | 28.52 tok/s | 16GB (have 12GB)
moonshotai/Kimi-K2.5 | FP16 | Not supported | 28.74 tok/s | 16GB (have 12GB)
Qwen/Qwen3-Coder-Next | FP16 | Not supported | 5.57 tok/s | 179GB (have 12GB)
Qwen/Qwen3-ASR-1.7B | FP16 | Fits comfortably | 36.60 tok/s | 6GB (have 12GB)
stepfun-ai/Step-3.5-Flash | FP16 | Not supported | 3.65 tok/s | 446GB (have 12GB)
deepseek-ai/DeepSeek-OCR-2 | FP16 | Fits comfortably | 35.14 tok/s | 8GB (have 12GB)
zai-org/GLM-4.7-Flash | FP16 | Not supported | 10.97 tok/s | 70GB (have 12GB)
zai-org/GLM-OCR | FP16 | Not supported | 27.87 tok/s | 16GB (have 12GB)

All speeds are auto-generated estimates.

Note: Performance estimates are calculated. Real results may vary. Methodology · Submit real data
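The verdicts above reduce to a single comparison of the model's estimated VRAM requirement against the B580's 12GB. The sketch below reproduces that logic; treating "fits at all" as "Fits comfortably" is an assumption inferred from the entries shown, since the table uses no intermediate verdict.

```python
# Reproduce the compatibility verdicts from the table:
# a configuration is supported only if its estimated VRAM
# requirement fits within the card's 12GB.

VRAM_GB = 12

def verdict(required_gb: int) -> str:
    return "Fits comfortably" if required_gb <= VRAM_GB else "Not supported"

# Entries from the table above
print(verdict(4))   # personaplex-7b-v1 at Q4 -> Fits comfortably
print(verdict(16))  # same model at FP16   -> Not supported
```

Note that speed and fit are independent: GLM-4.7-Flash at Q4 is estimated at a usable 24.80 tok/s, but its 18GB requirement still rules it out on this card.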

Alternative GPUs

  • RTX 5070 · 12GB
  • RTX 4060 Ti 16GB · 16GB
  • RX 6800 XT · 16GB
  • RTX 4070 Super · 12GB
  • RTX 3080 · 10GB

Explore how each stacks up for local inference workloads.

Can it play popular games?

Game | VRAM required | Genre | Year
Cyberpunk 2077 | 8GB | RPG | 2020
Baldur's Gate 3 | 8GB | RPG | 2023
Hogwarts Legacy | 12GB | Action RPG | 2023
Starfield | 8GB | RPG | 2023
Alan Wake 2 | 12GB | Survival Horror | 2023
Elden Ring | 8GB | Action RPG | 2022
Black Myth: Wukong | 12GB | Action RPG | 2024
Grand Theft Auto VI | 12GB | Action Adventure | 2025
Resident Evil 4 Remake | 12GB | Survival Horror | 2023
Marvel's Spider-Man Remastered | 12GB | Action | 2022
The Last of Us Part I | 12GB | Action Adventure | 2023
Red Dead Redemption 2 | 8GB | Action Adventure | 2019

View all 64 compatible games