TinyLlama/TinyLlama-1.1B-Chat-v1.0 speed on NVIDIA L40

NVIDIA L40: ~200 tok/s (Q4)

Quantization-specific throughput and VRAM requirements for TinyLlama/TinyLlama-1.1B-Chat-v1.0 running on NVIDIA L40.

Speed Snapshot
Topline estimate from compatibility data

  • Model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
  • GPU: NVIDIA L40
  • Q4 speed: 200 tok/s
  • Q4 VRAM required: 1 GB
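
The 1 GB figure for Q4 is consistent with a simple weights-size rule of thumb: parameter count times bytes per weight, rounded to the nearest whole GB. The sketch below is an illustrative assumption, not the site's published formula, but it reproduces the snapshot's 1 GB figure and the Q8/FP16 values in the table further down for a 1.1B-parameter model.

```typescript
// Illustrative VRAM rule of thumb (an assumption, not the site's exact formula):
// model weights = parameters x bytes per weight, rounded to the nearest whole GB.
const BYTES_PER_WEIGHT = { Q4: 0.5, Q8: 1.0, FP16: 2.0 } as const;

function estimateVramGB(
  paramsBillions: number,
  quant: keyof typeof BYTES_PER_WEIGHT
): number {
  const weightsGB = paramsBillions * BYTES_PER_WEIGHT[quant];
  return Math.max(1, Math.round(weightsGB)); // report at least 1 GB
}

console.log(estimateVramGB(1.1, "Q4"));   // 1  (matches the 1 GB shown above)
console.log(estimateVramGB(1.1, "Q8"));   // 1
console.log(estimateVramGB(1.1, "FP16")); // 2
```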
Data Source
Calculation and benchmark status

Speed values come from the compatibility dataset (`estimatedTokensPerSec`) and are sorted by quantization.
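
As a minimal sketch of how rows like these could be pulled from such a dataset: only the `estimatedTokensPerSec` field is named on this page, so the record shape, the other field names, and the quantization ordering below are assumptions.

```typescript
// Hypothetical record shape; only `estimatedTokensPerSec` is confirmed by this page.
interface CompatRecord {
  model: string;
  gpu: string;
  quantization: "Q4" | "Q8" | "FP16";
  vramRequiredGB: number;
  estimatedTokensPerSec: number;
}

// Assumed display order: lowest-precision quantization first, as in the table below.
const QUANT_ORDER = { Q4: 0, Q8: 1, FP16: 2 } as const;

function speedRows(records: CompatRecord[], model: string, gpu: string): CompatRecord[] {
  return records
    .filter((r) => r.model === model && r.gpu === gpu)
    .sort((a, b) => QUANT_ORDER[a.quantization] - QUANT_ORDER[b.quantization]);
}
```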

For full verdict logic and alternate GPUs, see the canonical compatibility page.

Open full compatibility report

Quantization Speed Table

Quantization    VRAM needed    VRAM available    Speed        Verdict
Q4              1 GB           48 GB             200 tok/s    ✅ Fits
Q8              1 GB           48 GB             145 tok/s    ✅ Fits
FP16            2 GB           48 GB             79 tok/s     ✅ Fits
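
In the simplest reading, the Verdict column reduces to a VRAM comparison: a quantization fits when its required VRAM does not exceed the L40's 48 GB. The canonical compatibility page may weigh more factors; the sketch below only reproduces the column shown here.

```typescript
// Simplest possible verdict check (an assumption; the full verdict logic may use more criteria).
function verdict(vramNeededGB: number, vramAvailableGB: number): string {
  return vramNeededGB <= vramAvailableGB ? "✅ Fits" : "❌ Does not fit";
}

console.log(verdict(1, 48)); // "✅ Fits" — Q4 and Q8
console.log(verdict(2, 48)); // "✅ Fits" — FP16
```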
  • Back to TinyLlama/TinyLlama-1.1B-Chat-v1.0
  • Q4 requirement page
  • Full compatibility breakdown