Train custom AI models on your own data
Fine-tuning lets you train an AI model on your own data: company documents, your writing style, specialized domain knowledge. LoRA (Low-Rank Adaptation) trains a small set of adapter weights instead of updating the full model, which is what makes fine-tuning feasible on a consumer GPU.
Use Unsloth for efficient LoRA training.
pip install unsloth
pip install transformers datasets accelerate peft
# Unsloth is 2-5x faster than standard training

Format your data as instruction/response pairs.
# dataset.jsonl format:
{"instruction": "Summarize this email", "input": "...", "output": "..."}
{"instruction": "Reply to customer", "input": "...", "output": "..."}
# Aim for 1,000-10,000 examples for good results
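
Rather than writing thousands of lines by hand, script the file. A minimal sketch of producing dataset.jsonl from your own records; the example records below are placeholders, not real data:

import json

# Placeholder records: swap in your own instruction/input/output triples
examples = [
    {"instruction": "Summarize this email", "input": "Hi team, the launch moved to Friday.", "output": "The launch date has moved to Friday."},
    {"instruction": "Reply to customer", "input": "My order hasn't arrived yet.", "output": "Sorry for the delay. Here is what we'll do next."},
]

with open("dataset.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")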
Set up LoRA hyperparameters.

from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Add LoRA adapters
model = FastLanguageModel.get_peft_model(
    model,
    r=16,  # LoRA rank
    lora_alpha=16,
    target_modules=["q_proj", "v_proj", "k_proj", "o_proj"],
)

Run training and export your model.
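
The trainer below expects a dataset object with a single text column. A minimal sketch of loading the dataset.jsonl file from the earlier step and collapsing each example into one prompt string; the prompt template here is an assumption, so adjust it to match how you plan to prompt the model:

from datasets import load_dataset

dataset = load_dataset("json", data_files="dataset.jsonl", split="train")

def to_text(example):
    # Assumed prompt template; any consistent format works
    return {
        "text": f"### Instruction:\n{example['instruction']}\n\n"
                f"### Input:\n{example['input']}\n\n"
                f"### Response:\n{example['output']}"
    }

dataset = dataset.map(to_text)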
from trl import SFTTrainer
from transformers import TrainingArguments

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # the column built in the loading sketch above
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        num_train_epochs=3,
        output_dir="./output",
    ),
)
trainer.train()
model.save_pretrained("my-finetuned-model")
tokenizer.save_pretrained("my-finetuned-model")
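
To sanity-check the result, reload the saved adapter and generate from it. A rough sketch, assuming Unsloth's loader accepts the adapter directory saved above and the same prompt template used for training:

from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="my-finetuned-model",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to inference mode

prompt = "### Instruction:\nSummarize this email\n\n### Input:\nHi team, the launch moved to Friday.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

❓ Out of memory during training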
✅ Reduce the batch size, enable gradient checkpointing, or accumulate gradients over several small steps. The setup above already uses QLoRA, since the base model is loaded in 4-bit.
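
For example, cutting the per-device batch size while accumulating gradients preserves a reasonable effective batch size, and gradient checkpointing trades compute for memory. One possible configuration, not the only one:

from transformers import TrainingArguments

args = TrainingArguments(
    per_device_train_batch_size=1,   # smaller per-step memory footprint
    gradient_accumulation_steps=4,   # effective batch size of 4
    gradient_checkpointing=True,     # recompute activations instead of storing them
    num_train_epochs=3,
    output_dir="./output",
)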
❓ Model overfits
✅ Use more diverse training data, train for fewer epochs, or add regularization such as LoRA dropout.
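
As a concrete example, the two cheapest levers are LoRA dropout in the adapter config and fewer epochs in the training arguments; the values here are illustrative, not tuned:

from unsloth import FastLanguageModel

# model comes from FastLanguageModel.from_pretrained(...) as shown earlier
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0.1,  # dropout on the LoRA layers as regularization
    target_modules=["q_proj", "v_proj", "k_proj", "o_proj"],
)

# ...then train with num_train_epochs=1 instead of 3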