📄️ 9.1 Fine-Tune vs Prompt
Decision framework and cost/benefit analysis for fine-tuning vs prompting LLMs.
📄️ 9.2 Dataset Preparation
Instruction-response pairs, data quality, and formatting (Alpaca, ShareGPT) for fine-tuning.
📄️ 9.3 LoRA & QLoRA
Parameter-efficient fine-tuning, adapter layers, and memory savings with LoRA/QLoRA.
📄️ 9.4 Fine-Tuning with Unsloth
Hands-on: fine-tune a 7B model on Google Colab using the Unsloth framework.
📄️ 9.5 Post-Fine-Tune Eval
Before/after model comparison, detecting catastrophic forgetting, and spotting overfitting signals.