
How-To Guides

Complete guides to run AI models locally on your own hardware. From beginner-friendly Ollama setup to advanced ComfyUI workflows.

Popular Guides

How to Run ChatGPT Offline
14,000/mo · beginner · 15-30 min
GPT-4 quality AI without internet or subscriptions.

How to Run Llama Locally
12,000/mo · beginner · 15-30 min
Run Llama 3 on your own hardware with Jan.

How to Run DeepSeek Locally
8,400/mo · intermediate · 15-30 min
Run the DeepSeek R1 reasoning model locally.

How to Use ComfyUI
12,000/mo · intermediate · 45-60 min
The most powerful image generation interface.

More Guides

How to Run Stable Diffusion Locally
18,000/mo · beginner
Generate AI images with ComfyUI and SDXL.

How to Run SDXL Locally
8,200/mo · intermediate
High-resolution 1024x1024 image generation.

How to Run Flux Locally
6,800/mo · intermediate
Generate images with Black Forest Labs' Flux.

How to Run Gemma Locally
3,200/mo · beginner
Run Google's open-weight LLM on your hardware.

How to Run Qwen Locally
4,100/mo · beginner
Run Alibaba's powerful multilingual LLM.

How to Run Mistral Locally
4,200/mo · beginner
Run Mistral 7B and Mixtral 8x7B on your PC.

How to Run Phi-4 Locally
2,800/mo · beginner
Microsoft's remarkably capable small LLM.

How to Run CodeLlama Locally
3,600/mo · beginner
Your private coding assistant.

How to Run LLaVA Locally
2,400/mo · intermediate
Understand images with this vision-language model.

How to Run Whisper Locally
5,200/mo · beginner
Transcribe audio with OpenAI's Whisper.

How to Clone Voices Locally
5,400/mo · intermediate
Create AI voice clones on your hardware.

How to Generate AI Video Locally
4,800/mo · advanced
Create AI videos on your own hardware.

How to Fine-tune LLMs Locally
2,200/mo · advanced
Train custom AI models on your own data.

How to Set Up RAG Locally
3,800/mo · intermediate
Give AI access to your documents.

Setup workflow

Model requirements: confirm your VRAM target first
Compatibility checks: validate model + GPU fit
GPU pages: compare VRAM and throughput
Build guides: map guides to full hardware plans
Fundamentals: deepen quantization and runtime basics

Guides FAQ

What is the fastest way to start running AI locally?
Start with a beginner guide for your target model, then validate your hardware with model requirement and compatibility pages before installing additional tooling.
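As a concrete example of that path, here is a minimal quickstart sketch using Ollama (the beginner-friendly runtime mentioned above) with Llama 3; adjust the model name to whichever guide you follow:

```shell
# Install Ollama (official install script for Linux/macOS)
curl -fsSL https://ollama.com/install.sh | sh

# Download Llama 3 weights, then start an interactive local chat.
# After the initial download, no internet connection is required.
ollama pull llama3
ollama run llama3
```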
How do I pick the right guide for my hardware?
Check your GPU VRAM first, then choose guides that match your model size and workflow goals. Use compatibility checks when you need exact GPU-model fit confirmation.
What should I do if a guide fails on my setup?
Move to a lighter model, drop to a lower-bit quantization tier, or verify your GPU-model fit on the /can and /models pages before retrying the same runtime stack.
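The last two answers both come down to fitting the model into VRAM. A rough back-of-the-envelope sketch of that check (the bytes-per-parameter figures and the ~20% overhead factor are common rules of thumb, not numbers from these guides):

```python
# Rule of thumb: weights take params x bytes-per-parameter at the chosen
# quantization, plus roughly 20% overhead for KV cache and runtime buffers.
BYTES_PER_PARAM = {
    "fp16": 2.0,   # full half-precision weights
    "q8": 1.0,     # 8-bit quantization
    "q5": 0.625,   # 5-bit quantization
    "q4": 0.5,     # 4-bit quantization (a common default)
}

def estimated_vram_gb(params_billions: float, quant: str = "q4",
                      overhead: float = 1.2) -> float:
    """Back-of-the-envelope VRAM need in GB for a dense model."""
    weights_gb = params_billions * BYTES_PER_PARAM[quant]
    return round(weights_gb * overhead, 1)

def fits(vram_gb: float, params_billions: float, quant: str = "q4") -> bool:
    """True if the model is likely to fit entirely on the GPU."""
    return estimated_vram_gb(params_billions, quant) <= vram_gb

# Example: an 8B model at 4-bit on an 8 GB GPU
print(estimated_vram_gb(8, "q4"))   # 4.8
print(fits(8.0, 8, "q4"))           # True: fits with headroom
print(fits(8.0, 70, "q4"))          # False: a 70B model needs ~42 GB
```

If a model fails this check at your current quantization tier, trying the next-lower tier (q5 to q4, q4 to a smaller model) is usually cheaper than changing runtimes.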

Need Hardware First?

Check our GPU buying guides to find the right hardware for your use case.