© 2025 localai.computer. Hardware recommendations for running AI models locally.

ℹ️We earn from qualifying purchases through affiliate links at no extra cost to you. This supports our free content and research.


Quick Answer: The Intel Arc B570 offers 10GB of VRAM at a $219 MSRP. It delivers approximately 77 tokens/sec on Qwen/Qwen3-ASR-1.7B at Q4 quantization and typically draws 150W under load.

Intel Arc B570

Check availability
By Intel · Released January 2025 · MSRP $219.00

This GPU offers reliable throughput for local AI workloads. Pair it with the right model quantization to hit your desired tokens/sec, and monitor prices below to catch the best deal.
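To see why quantization matters for fit, a back-of-the-envelope VRAM estimate can be computed from parameter count and bits per weight. This is a hypothetical heuristic for illustration, not this site's methodology; the overhead factor is an assumption.

```python
def estimate_vram_gb(params_billion: float, bits_per_weight: float,
                     overhead: float = 1.2) -> float:
    """Rough VRAM estimate: weight storage times an assumed ~20%
    overhead for KV cache and activations. Illustrative only."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return round(weight_bytes * overhead / 1e9, 1)

# A 1.7B-parameter model at Q4 (~4 bits/weight) vs FP16 (16 bits/weight):
print(estimate_vram_gb(1.7, 4))   # 1.0 (GB)
print(estimate_vram_gb(1.7, 16))  # 4.1 (GB)
```

Dropping from FP16 to Q4 cuts the footprint roughly 4x, which is why small models leave plenty of the B570's 10GB free at Q4.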

Search on Amazon · View Benchmarks
Specs snapshot

Key hardware metrics for AI workloads.

  • VRAM: 10GB
  • Cores: 4,096
  • TDP: 150W
  • Architecture: Battlemage Xe2

Where to Buy

Buy directly on Amazon with fast shipping and reliable customer service.

No purchase links available yet. Try the Amazon search results to find this GPU.
Complete Your Build

Essential accessories to pair with Intel Arc B570

Corsair RM750x 750W
750W provides ample headroom for the B570's 150W TDP
$119
Buy
Corsair Vengeance 32GB DDR5
32GB ideal for AI workloads
$129
Buy
Noctua NF-A12x25
Quiet and efficient cooling
$35
Buy

Total Bundle Price

All items from Amazon

$283
Individual: $283
Find All on Amazon · More GPUs

💡 Not ready to buy? Try cloud GPUs first

Test Intel Arc B570 performance in the cloud before investing in hardware. Pay by the hour with no commitment.

  • Vast.ai: from $0.20/hr
  • RunPod: from $0.30/hr
  • Lambda Labs: enterprise-grade
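The quoted hourly rates make the rent-vs-buy arithmetic easy to sketch. Assuming the $219 MSRP and the $0.20/hr starting rate above (real cloud bills add storage and data-transfer costs):

```python
def break_even_hours(gpu_price: float, hourly_rate: float) -> float:
    """Hours of cloud rental that equal the card's purchase price."""
    return gpu_price / hourly_rate

# B570 MSRP vs. the cheapest rate quoted above (Vast.ai, $0.20/hr):
print(round(break_even_hours(219.00, 0.20)))  # 1095
```

Roughly 1,100 hours of rental before buying pays off, so occasional experimentation favors the cloud while sustained daily use favors owning the card.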

AI benchmarks

Model                        Quantization   Tokens/sec (est.)   VRAM used
Qwen/Qwen3-ASR-1.7B          Q4             76.93 tok/s         2GB
deepseek-ai/DeepSeek-OCR-2   Q4             69.95 tok/s         2GB
moonshotai/Kimi-K2.5         Q4             58.76 tok/s         4GB
zai-org/GLM-OCR              Q4             58.23 tok/s         4GB
deepseek-ai/DeepSeek-OCR-2   Q8             57.08 tok/s         4GB
nvidia/personaplex-7b-v1     Q4             56.67 tok/s         4GB
Qwen/Qwen3-ASR-1.7B          Q8             52.89 tok/s         3GB
moonshotai/Kimi-K2.5         Q8             46.76 tok/s         8GB
zai-org/GLM-OCR              Q8             43.54 tok/s         8GB
nvidia/personaplex-7b-v1     Q8             42.33 tok/s         8GB
Qwen/Qwen3-ASR-1.7B          FP16           28.35 tok/s         6GB
deepseek-ai/DeepSeek-OCR-2   FP16           27.66 tok/s         8GB
zai-org/GLM-OCR              FP16           23.96 tok/s         16GB
zai-org/GLM-4.7-Flash        Q4             23.08 tok/s         18GB
moonshotai/Kimi-K2.5         FP16           22.35 tok/s         16GB
nvidia/personaplex-7b-v1     FP16           22.15 tok/s         16GB
zai-org/GLM-4.7-Flash        Q8             15.07 tok/s         35GB
Qwen/Qwen3-Coder-Next        Q4             12.38 tok/s         45GB
zai-org/GLM-4.7-Flash        FP16           8.51 tok/s          70GB
Qwen/Qwen3-Coder-Next        Q8             7.89 tok/s          90GB
stepfun-ai/Step-3.5-Flash    Q4             6.93 tok/s          112GB
stepfun-ai/Step-3.5-Flash    Q8             5.29 tok/s          223GB
Qwen/Qwen3-Coder-Next        FP16           4.47 tok/s          179GB
stepfun-ai/Step-3.5-Flash    FP16           2.95 tok/s          446GB

All speeds are auto-generated estimates.

Note: Performance estimates are calculated. Real results may vary. Methodology · Submit real data
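To turn the tok/s figures into wall-clock time, divide the token count by throughput. For example, using the table's estimated 76.93 tok/s for Qwen3-ASR-1.7B at Q4:

```python
def seconds_for_tokens(tokens: int, tok_per_sec: float) -> float:
    """Wall-clock generation time implied by a throughput figure."""
    return tokens / tok_per_sec

# A 1,000-token response at the Q4 estimate from the table above:
print(round(seconds_for_tokens(1000, 76.93), 1))  # 13.0 (seconds)
```

The same 1,000 tokens at the FP16 estimate of 28.35 tok/s would take about 35 seconds, which is the practical cost of skipping quantization on this card.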

Model compatibility

Model                        Quantization   Verdict            Estimated speed   VRAM needed
nvidia/personaplex-7b-v1     Q4             Fits comfortably   56.67 tok/s       4GB (have 10GB)
moonshotai/Kimi-K2.5         Q4             Fits comfortably   58.76 tok/s       4GB (have 10GB)
Qwen/Qwen3-Coder-Next        Q4             Not supported      12.38 tok/s       45GB (have 10GB)
Qwen/Qwen3-ASR-1.7B          Q4             Fits comfortably   76.93 tok/s       2GB (have 10GB)
stepfun-ai/Step-3.5-Flash    Q4             Not supported      6.93 tok/s        112GB (have 10GB)
deepseek-ai/DeepSeek-OCR-2   Q4             Fits comfortably   69.95 tok/s       2GB (have 10GB)
zai-org/GLM-4.7-Flash        Q4             Not supported      23.08 tok/s       18GB (have 10GB)
zai-org/GLM-OCR              Q4             Fits comfortably   58.23 tok/s       4GB (have 10GB)
nvidia/personaplex-7b-v1     Q8             Fits comfortably   42.33 tok/s       8GB (have 10GB)
moonshotai/Kimi-K2.5         Q8             Fits comfortably   46.76 tok/s       8GB (have 10GB)
Qwen/Qwen3-Coder-Next        Q8             Not supported      7.89 tok/s        90GB (have 10GB)
Qwen/Qwen3-ASR-1.7B          Q8             Fits comfortably   52.89 tok/s       3GB (have 10GB)
stepfun-ai/Step-3.5-Flash    Q8             Not supported      5.29 tok/s        223GB (have 10GB)
deepseek-ai/DeepSeek-OCR-2   Q8             Fits comfortably   57.08 tok/s       4GB (have 10GB)
zai-org/GLM-4.7-Flash        Q8             Not supported      15.07 tok/s       35GB (have 10GB)
zai-org/GLM-OCR              Q8             Fits comfortably   43.54 tok/s       8GB (have 10GB)
nvidia/personaplex-7b-v1     FP16           Not supported      22.15 tok/s       16GB (have 10GB)
moonshotai/Kimi-K2.5         FP16           Not supported      22.35 tok/s       16GB (have 10GB)
Qwen/Qwen3-Coder-Next        FP16           Not supported      4.47 tok/s        179GB (have 10GB)
Qwen/Qwen3-ASR-1.7B          FP16           Fits comfortably   28.35 tok/s       6GB (have 10GB)
stepfun-ai/Step-3.5-Flash    FP16           Not supported      2.95 tok/s        446GB (have 10GB)
deepseek-ai/DeepSeek-OCR-2   FP16           Fits comfortably   27.66 tok/s       8GB (have 10GB)
zai-org/GLM-4.7-Flash        FP16           Not supported      8.51 tok/s        70GB (have 10GB)
zai-org/GLM-OCR              FP16           Not supported      23.96 tok/s       16GB (have 10GB)

Note: Performance estimates are calculated. Real results may vary. Methodology · Submit real data
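The verdicts above appear to follow a simple rule: compare a model's required VRAM against the card's 10GB. A sketch of that comparison follows; the site's exact thresholds are not published, so treating "at most 10GB" as the cutoff is an assumption.

```python
def fit_verdict(required_gb: float, available_gb: float = 10.0) -> str:
    """Assumed rule: a model fits only if its required VRAM is at
    most the card's available VRAM (10GB for the Arc B570).
    The boundary behavior is a guess, not the site's published logic."""
    return "Fits comfortably" if required_gb <= available_gb else "Not supported"

print(fit_verdict(4))   # Fits comfortably (e.g. Kimi-K2.5 at Q4)
print(fit_verdict(18))  # Not supported (e.g. GLM-4.7-Flash at Q4)
```

This matches every row in the table: all models needing 8GB or less fit, and everything from 16GB up is unsupported.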

Alternative GPUs

  • RTX 5070 (12GB)
  • RTX 4060 Ti 16GB
  • RX 6800 XT (16GB)
  • RTX 4070 Super (12GB)
  • RTX 3080 (10GB)

Explore how each stacks up for local inference workloads.

Can it play popular games?

  • Cyberpunk 2077 (RPG, 2020): 8GB VRAM
  • Baldur's Gate 3 (RPG, 2023): 8GB VRAM
  • Starfield (RPG, 2023): 8GB VRAM
  • Elden Ring (Action RPG, 2022): 8GB VRAM
  • Red Dead Redemption 2 (Action Adventure, 2019): 8GB VRAM
  • Grand Theft Auto V (Action Adventure, 2015): 4GB VRAM
  • The Witcher 3: Wild Hunt (RPG, 2015): 6GB VRAM
  • Forza Horizon 5 (Racing, 2021): 8GB VRAM
  • Assassin's Creed Mirage (Action Adventure, 2023): 8GB VRAM
  • Assassin's Creed Valhalla (Action RPG, 2020): 8GB VRAM
  • Path of Exile 2 (Action RPG, 2024): 8GB VRAM
  • Diablo IV (Action RPG, 2023): 8GB VRAM

View all 41 compatible games