
© 2026 localai.computer. Hardware recommendations for running AI models locally.



Redhatai Llama 3.2 90B Vision Instruct FP8 Dynamic

180GB VRAM (FP16)
90B parameters • Released 2025-01 • 8,192 token context

Minimum VRAM

180GB

FP16 (full model) • Q4 option ≈ 45GB

Best Performance

AMD Instinct MI300X

~58 tok/s • FP16

Most Affordable

Apple M2 Ultra

FP16 • ~8 tok/s • From $5,999

Decision actions

  • AMD Instinct MI300X buy options →
  • NVIDIA H200 SXM 141GB buy options →
  • NVIDIA H100 SXM5 80GB buy options →
  • Best GPU guides →
  • Prebuilt systems →
  • Local AI builds →

VRAM requirements at a glance

  • Q4 minimum: 45GB
  • Q4_K_M: 45GB
  • Q5_K_M: 68GB
  • Q8 minimum: 90GB
  • FP16 minimum: 180GB

Quick answer: Redhatai Llama 3.2 90B Vision Instruct FP8 Dynamic needs roughly 45GB VRAM for Q4_K_M and 68GB for Q5_K_M. Use Q8 (90GB) or FP16 (180GB) for higher quality output.

Full-model (FP16) requirements are shown by default. Quantized builds like Q4 trade accuracy for lower VRAM usage.
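The VRAM tiers above follow directly from parameter count times bits per weight. A minimal sketch of that arithmetic (weights only — KV cache and runtime overhead add more on top):

```python
def weights_vram_gb(params_billion: float, bits_per_weight: float) -> float:
    """Weights-only VRAM estimate: 1B params at 8 bits/weight = 1 GB."""
    return params_billion * bits_per_weight / 8

# 90B parameters at each precision:
print(weights_vram_gb(90, 16))  # FP16 -> 180.0 GB
print(weights_vram_gb(90, 8))   # Q8   -> 90.0 GB
print(weights_vram_gb(90, 4))   # Q4   -> 45.0 GB
```

These match the page's 180GB / 90GB / 45GB figures exactly, which is why halving the precision halves the VRAM requirement.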


Compatible GPUs

Filter by quantization, price, and VRAM to compare performance estimates.

ℹ️Speeds are estimates based on hardware specs. Actual performance depends on software configuration. Learn more

Showing FP16 compatibility. Switch tabs to explore other quantizations.
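One common way to estimate speeds like these is a memory-bandwidth model of decoding: each generated token must stream every weight from VRAM once, so tokens/sec is bounded by bandwidth divided by model size. A rough sketch — the 0.5 efficiency factor is an assumption for illustration, not this site's published methodology:

```python
def est_decode_tok_s(bandwidth_gb_s: float, model_size_gb: float,
                     efficiency: float = 0.5) -> float:
    """Bandwidth-bound decode estimate: each token reads all weights once,
    so tok/s ~= (effective bandwidth) / (bytes read per token)."""
    return efficiency * bandwidth_gb_s / model_size_gb

# e.g. a card with 1,000 GB/s memory bandwidth running the 45GB Q4 build:
print(round(est_decode_tok_s(1000, 45), 1))  # ~11.1 tok/s
```

This is why quantizing from FP16 (180GB) to Q4 (45GB) roughly quadruples decode speed on the same card: the per-token memory traffic shrinks by the same factor.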

| GPU | Vendor | Speed (FP16) | VRAM needed / on card | Fits FP16? | Typical price | Estimate status |
|---|---|---|---|---|---|---|
| AMD Instinct MI300X | AMD | ~58 tok/s | 180GB / 192GB | ✓ | $15,000 | Estimated |
| NVIDIA H200 SXM 141GB | NVIDIA | ~52 tok/s | 180GB / 141GB | ⚠ Insufficient | $35,000 | Estimated |
| NVIDIA H100 SXM5 80GB | NVIDIA | ~38 tok/s | 180GB / 80GB | ⚠ Insufficient | $30,000 | Estimated |
| AMD Instinct MI250X | AMD | ~36 tok/s | 180GB / 128GB | ⚠ Insufficient | $11,000 | Estimated |
| NVIDIA H100 PCIe 80GB | NVIDIA | ~24 tok/s | 180GB / 80GB | ⚠ Insufficient | $25,000 | Estimated |
| RTX 5090 | NVIDIA | ~23 tok/s | 180GB / 32GB | ⚠ Insufficient | $1,999 | Data coming soon |
| NVIDIA A100 80GB SXM4 | NVIDIA | ~22 tok/s | 180GB / 80GB | ⚠ Insufficient | $11,000 | Estimated |
| AMD Instinct MI210 | AMD | ~18 tok/s | 180GB / 64GB | ⚠ Insufficient | $6,000 | Estimated |
| NVIDIA A100 40GB PCIe | NVIDIA | ~17 tok/s | 180GB / 40GB | ⚠ Insufficient | $9,000 | Data coming soon |
| RTX 4090 | NVIDIA | ~14 tok/s | 180GB / 24GB | ⚠ Insufficient | $1,599 | Data coming soon |
| NVIDIA RTX 6000 Ada | NVIDIA | ~14 tok/s | 180GB / 48GB | ⚠ Insufficient | $6,999 | Estimated |
| NVIDIA L40 | NVIDIA | ~13 tok/s | 180GB / 48GB | ⚠ Insufficient | $7,999 | Estimated |
| NVIDIA L40S | NVIDIA | ~13 tok/s | 180GB / 48GB | ⚠ Insufficient | $10,000 | Estimated |
| RTX 5080 | NVIDIA | ~12 tok/s | 180GB / 16GB | ⚠ Insufficient | $1,199 | Data coming soon |
| RTX 3090 | NVIDIA | ~12 tok/s | 180GB / 24GB | ⚠ Insufficient | $1,499 | Data coming soon |
| AMD Radeon Pro W7900 | AMD | ~11 tok/s | 180GB / 48GB | ⚠ Insufficient | $3,999 | Estimated |
| RX 7900 XTX | AMD | ~11 tok/s | 180GB / 24GB | ⚠ Insufficient | $999 | Data coming soon |
| RTX 5070 Ti | NVIDIA | ~11 tok/s | 180GB / 16GB | ⚠ Insufficient | $799 | Data coming soon |
| NVIDIA A6000 | NVIDIA | ~10 tok/s | 180GB / 48GB | ⚠ Insufficient | $4,699 | Estimated |
| RTX 4080 Super | NVIDIA | ~10 tok/s | 180GB / 16GB | ⚠ Insufficient | $999 | Data coming soon |
| RTX 3080 | NVIDIA | ~10 tok/s | 180GB / 10GB | ⚠ Insufficient | $699 | Data coming soon |
| NVIDIA A5000 | NVIDIA | ~10 tok/s | 180GB / 24GB | ⚠ Insufficient | $2,399 | Data coming soon |
| RTX 4080 | NVIDIA | ~9 tok/s | 180GB / 16GB | ⚠ Insufficient | $1,199 | Data coming soon |
| RX 7900 XT | AMD | ~9 tok/s | 180GB / 20GB | ⚠ Insufficient | $899 | Data coming soon |
| RTX 4070 Ti Super | NVIDIA | ~9 tok/s | 180GB / 16GB | ⚠ Insufficient | $799 | Data coming soon |
| RTX 5070 | NVIDIA | ~8 tok/s | 180GB / 12GB | ⚠ Insufficient | $599 | Data coming soon |
| Apple M2 Ultra | Apple | ~8 tok/s | 180GB / 192GB | ✓ | $5,999 | Estimated |
| RX 9070 XT | AMD | ~7 tok/s | 180GB / 16GB | ⚠ Insufficient | $599 | Data coming soon |
| RX 7800 XT | AMD | ~7 tok/s | 180GB / 16GB | ⚠ Insufficient | $499 | Data coming soon |
| RX 7900 GRE | AMD | ~7 tok/s | 180GB / 16GB | ⚠ Insufficient | $649 | Data coming soon |
| AMD Radeon Pro W7800 | AMD | ~7 tok/s | 180GB / 32GB | ⚠ Insufficient | $2,499 | Data coming soon |
| RTX 4070 Ti | NVIDIA | ~7 tok/s | 180GB / 12GB | ⚠ Insufficient | $799 | Data coming soon |
| RTX 4070 Super | NVIDIA | ~7 tok/s | 180GB / 12GB | ⚠ Insufficient | $599 | Data coming soon |
| RX 9070 | AMD | ~7 tok/s | 180GB / 16GB | ⚠ Insufficient | $499 | Data coming soon |
| Intel Arc A770 16GB | Intel | ~7 tok/s | 180GB / 16GB | ⚠ Insufficient | $349 | Data coming soon |
| RTX 4070 | NVIDIA | ~6 tok/s | 180GB / 12GB | ⚠ Insufficient | $599 | Data coming soon |
| RX 6900 XT | AMD | ~6 tok/s | 180GB / 16GB | ⚠ Insufficient | $999 | Data coming soon |
| RX 6800 XT | AMD | ~6 tok/s | 180GB / 16GB | ⚠ Insufficient | $649 | Data coming soon |
| Intel Arc A750 | Intel | ~6 tok/s | 180GB / 8GB | ⚠ Insufficient | $289 | Data coming soon |
| NVIDIA A4000 | NVIDIA | ~6 tok/s | 180GB / 16GB | ⚠ Insufficient | $999 | Data coming soon |
| RTX 3070 | NVIDIA | ~6 tok/s | 180GB / 8GB | ⚠ Insufficient | $499 | Data coming soon |
| Intel Arc B580 | Intel | ~6 tok/s | 180GB / 12GB | ⚠ Insufficient | $249 | Data coming soon |
| Apple M4 Max | Apple | ~6 tok/s | 180GB / 128GB | ⚠ Insufficient | $3,999 | Estimated |
| RX 7700 XT | AMD | ~5 tok/s | 180GB / 12GB | ⚠ Insufficient | $449 | Data coming soon |
| Intel Arc B570 | Intel | ~5 tok/s | 180GB / 10GB | ⚠ Insufficient | $219 | Data coming soon |
| Intel Arc Pro A60 | Intel | ~5 tok/s | 180GB / 12GB | ⚠ Insufficient | $599 | Data coming soon |
| NVIDIA L4 | NVIDIA | ~5 tok/s | 180GB / 24GB | ⚠ Insufficient | $5,000 | Data coming soon |
| RTX 3060 12GB | NVIDIA | ~4 tok/s | 180GB / 12GB | ⚠ Insufficient | $329 | Data coming soon |
| Apple M3 Max | Apple | ~4 tok/s | 180GB / 128GB | ⚠ Insufficient | $3,999 | Estimated |
| Apple M2 Max | Apple | ~4 tok/s | 180GB / 96GB | ⚠ Insufficient | $3,199 | Estimated |
| RTX 4060 Ti 16GB | NVIDIA | ~4 tok/s | 180GB / 16GB | ⚠ Insufficient | $499 | Data coming soon |
| RTX 4060 Ti 8GB | NVIDIA | ~4 tok/s | 180GB / 8GB | ⚠ Insufficient | $399 | Data coming soon |
| RTX 4060 | NVIDIA | ~3 tok/s | 180GB / 8GB | ⚠ Insufficient | $299 | Data coming soon |
| RX 7600 XT | AMD | ~3 tok/s | 180GB / 16GB | ⚠ Insufficient | $329 | Data coming soon |
| RX 7600 | AMD | ~3 tok/s | 180GB / 8GB | ⚠ Insufficient | $269 | Data coming soon |
| Intel Arc Pro A40 | Intel | ~3 tok/s | 180GB / 6GB | ⚠ Insufficient | $399 | Data coming soon |
| Apple M4 Pro | Apple | ~3 tok/s | 180GB / 64GB | ⚠ Insufficient | $1,999 | Estimated |
| AMD Ryzen AI Max+ 395 | AMD | ~3 tok/s | 180GB / 128GB | ⚠ Insufficient | Enterprise | Estimated |
| AMD Ryzen AI Max 385 | AMD | ~3 tok/s | 180GB / 128GB | ⚠ Insufficient | Enterprise | Estimated |
| AMD Ryzen AI Max Pro 385 | AMD | ~3 tok/s | 180GB / 128GB | ⚠ Insufficient | Enterprise | Estimated |
| Apple M2 Pro | Apple | ~2 tok/s | 180GB / 32GB | ⚠ Insufficient | $1,999 | Data coming soon |
| Apple M3 Pro | Apple | ~2 tok/s | 180GB / 36GB | ⚠ Insufficient | $1,999 | Data coming soon |
Don't see your GPU? View all compatible hardware →
Best GPU Options for Redhatai Llama 3.2 90B Vision Instruct FP8 Dynamic

Redhatai Llama 3.2 90B Vision Instruct FP8 Dynamic has 90B parameters and requires 45GB VRAM at Q4 — choose the best GPU for your needs.

Minimum (Budget): RTX 5090
  • VRAM: 32GB
  • Price: $1,999
  • View on Amazon

Recommended (Best Value): AMD Instinct MI300X
  • VRAM: 192GB
  • Price: $15,000
  • View on Amazon

For Better Performance

Run Redhatai Llama 3.2 90B Vision Instruct FP8 Dynamic faster with the AMD Instinct MI300X. It costs far more than the RTX 5090 (about $15,000 versus $1,999), but it delivers a large tokens/sec boost and fits the full FP16 model on a single card.

Browse All GPUs
Compare RTX 5090 vs AMD Instinct MI300X
Faster inference speed
Run larger models

Detailed Specifications

Hardware requirements and model sizes at a glance.

Technical details

Parameters
90,000,000,000 (90B)
Architecture
Transformer
Developer
—
Released
January 2025
Context window
8,192 tokens

Quantization support

  • Q4: 45GB VRAM required • 45GB download
  • Q4_K_M: 45GB VRAM required • 45GB download
  • Q5_K_M: 68GB VRAM required • 90GB download
  • Q8: 90GB VRAM required • 90GB download
  • FP16: 180GB VRAM required • 180GB download

Hardware Requirements

| Component | Minimum | Recommended | Optimal |
|---|---|---|---|
| VRAM | 45GB (Q4) | 90GB (Q8) | 180GB (FP16) |
| RAM | 68GB | 135GB | 225GB |
| Disk | 36GB | 72GB | — |
| Model size | 45GB (Q4) | 90GB (Q8) | 180GB (FP16) |
| CPU | Modern CPU (Ryzen 5/Intel i5 or better) | Modern CPU (Ryzen 5/Intel i5 or better) | Modern CPU (Ryzen 5/Intel i5 or better) |

Note: Performance estimates are calculated. Real results may vary. Methodology · Submit real data


Quantization requirement shortcuts
Built for high-intent queries like "Redhatai Llama 3.2 90B Vision Instruct FP8 Dynamic q4 vram requirements".
  • Q4 VRAM usage
  • Q4_K_M VRAM usage
  • Q5_K_M VRAM usage
  • Q8 VRAM usage
  • FP16 VRAM usage
Model speed shortcuts
Direct answers for "Redhatai Llama 3.2 90B Vision Instruct FP8 Dynamic speed on [GPU]" searches.
  • Redhatai Llama 3.2 90B Vision Instruct FP8 Dynamic speed on Apple M4 Max: Q4 • ~15 tok/s
  • Redhatai Llama 3.2 90B Vision Instruct FP8 Dynamic speed on RTX 4090: Q4 • ~36 tok/s
  • Redhatai Llama 3.2 90B Vision Instruct FP8 Dynamic speed on RTX 5090: Q4 • ~60 tok/s
  • Redhatai Llama 3.2 90B Vision Instruct FP8 Dynamic speed on RTX 5080: Q4 • ~32 tok/s
  • Redhatai Llama 3.2 90B Vision Instruct FP8 Dynamic speed on NVIDIA L4: Q4 • ~12 tok/s
Best GPU buying guides →
Compare prebuilt systems →
Local AI build recipes →

Frequently Asked Questions

Common questions about running Redhatai Llama 3.2 90B Vision Instruct FP8 Dynamic locally

What should I know before running Redhatai Llama 3.2 90B Vision Instruct FP8 Dynamic?

This model delivers strong local performance when paired with modern GPUs. Use the hardware guidance below to choose the right quantization tier for your build.

How do I deploy this model locally?

Use runtimes like llama.cpp, text-generation-webui, or vLLM. Download the quantized weights from Hugging Face, ensure you have enough VRAM for your target quantization, and launch with GPU acceleration (CUDA/ROCm/Metal).

Which quantization should I choose?

Start with Q4 for wide GPU compatibility. Upgrade to Q8 if you have spare VRAM and want extra quality. FP16 delivers the highest fidelity but demands workstation or multi-GPU setups.

What is the difference between Q4, Q4_K_M, Q5_K_M, and Q8 quantization for Redhatai Llama 3.2 90B Vision Instruct FP8 Dynamic?

Q4_K_M and Q5_K_M are GGUF quantization formats that balance quality and VRAM usage. Q4_K_M uses about 45GB VRAM. Q5_K_M uses about 68GB VRAM and keeps more accuracy. Q8 (~90GB) offers near-FP16 quality. Standard Q4 is the most memory-efficient option for Redhatai Llama 3.2 90B Vision Instruct FP8 Dynamic.
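Working backwards from the page's own VRAM figures gives the effective bits per weight of each tier. A quick sketch of that arithmetic (note these page figures are rounded — published GGUF builds of Q4_K_M and Q5_K_M typically land near 4.8 and 5.7 bits/weight):

```python
params_b = 90  # billions of parameters
page_vram_gb = {"Q4_K_M": 45, "Q5_K_M": 68, "Q8": 90, "FP16": 180}

for quant, gb in page_vram_gb.items():
    bits = gb * 8 / params_b  # GB -> bits spread across all weights
    print(f"{quant}: ~{bits:.1f} bits/weight")
```

So each step up the ladder (Q4 → Q5 → Q8 → FP16) buys accuracy by spending more bits, and therefore more VRAM, per weight.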

Where can I download Redhatai Llama 3.2 90B Vision Instruct FP8 Dynamic?

Official weights are available via Hugging Face. Quantized builds (Q4, Q8) can be loaded into runtimes like llama.cpp, text-generation-webui, or vLLM. Always verify the publisher before downloading.


Related models

  • Xgen Universe Capybara — params
  • Nineninesix Kani Tts 2 En — params
  • Unsloth Qwen3 5 397B A17b Gguf — 397B params

Compare models

See how Redhatai Llama 3.2 90B Vision Instruct FP8 Dynamic compares to other popular models.

All comparisons →
Redhatai Llama 3.2 90B Vision Instruct FP8 Dynamic vs others