9.1 When to Fine-Tune vs Prompt
Decision framework and cost/benefit analysis for fine-tuning vs prompting LLMs.
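One way to make the cost/benefit trade-off concrete is a rough break-even heuristic. Everything below (the 500-example floor, the $200 fine-tuning budget, the three-month payback window) is an illustrative assumption, not a recommendation from this section:

```python
def should_finetune(num_examples: int, monthly_requests: int,
                    prompt_tokens_saved: int, cost_per_1k_tokens: float) -> bool:
    """Illustrative heuristic: fine-tune when a long few-shot prompt
    can be amortized away at scale. All thresholds are assumptions."""
    # Need enough data for fine-tuning to generalize (assumed floor).
    if num_examples < 500:
        return False
    # Monthly savings from dropping few-shot examples out of every prompt.
    monthly_savings = monthly_requests * prompt_tokens_saved / 1000 * cost_per_1k_tokens
    # Assume a one-off fine-tuning + evaluation budget of ~$200;
    # fine-tune if it pays for itself within three months.
    return monthly_savings * 3 > 200.0

print(should_finetune(2000, 100_000, 1500, 0.002))  # high volume → True
```

At 100k requests/month, stripping 1,500 few-shot tokens from each prompt saves about $300/month at $0.002 per 1k tokens, so fine-tuning pays for itself quickly; at low volume the same heuristic says prompt instead.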
Instruction-response pairs, data quality, and formatting (Alpaca, ShareGPT) for fine-tuning.
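The two JSON layouts named above look roughly like this; the field names follow the public Alpaca and ShareGPT conventions, while the record contents are made-up examples:

```python
import json

# Alpaca format: flat instruction/input/output records
# ("input" may be an empty string for context-free instructions).
alpaca_example = {
    "instruction": "Summarize the following text.",
    "input": "Fine-tuning adapts a pretrained model to a narrow task.",
    "output": "Fine-tuning specializes a pretrained model.",
}

# ShareGPT format: multi-turn conversations with role tags,
# which preserves dialogue structure that the flat format loses.
sharegpt_example = {
    "conversations": [
        {"from": "human", "value": "What does LoRA stand for?"},
        {"from": "gpt", "value": "Low-Rank Adaptation."},
    ]
}

print(json.dumps(alpaca_example, indent=2))
```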
Parameter-efficient fine-tuning, adapter layers, and memory savings with LoRA/QLoRA.
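The memory savings come from training only low-rank adapter matrices while the base weights stay frozen. A back-of-the-envelope count for a single weight matrix (the 4096 hidden size and rank 16 are illustrative, though typical for a 7B model):

```python
def lora_trainable_params(d_in: int, d_out: int, r: int) -> int:
    # LoRA represents the weight update as B @ A, where
    # A is (r x d_in) and B is (d_out x r), with r << d.
    return r * d_in + d_out * r

d = 4096                                    # assumed hidden size
full = d * d                                # params updated by full fine-tuning
lora = lora_trainable_params(d, d, r=16)    # params updated by LoRA at rank 16
print(f"trainable fraction: {lora / full:.4%}")
```

Under one percent of the matrix's parameters are trainable, which is why optimizer state and gradients shrink accordingly; QLoRA adds 4-bit quantization of the frozen base weights on top of this.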
Hands-on: fine-tune a 7B model on Google Colab using the Unsloth framework.
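A rough memory estimate shows why the hands-on exercise relies on 4-bit quantization to fit a 7B model on a free Colab GPU; the 16 GB T4 card and the overhead left for gradients and activations are assumptions about the Colab free tier:

```python
def weight_memory_gb(n_params: float, bits_per_param: int) -> float:
    # Bytes for the weights alone, ignoring activations and optimizer state.
    return n_params * bits_per_param / 8 / 1024**3

n = 7e9
fp16 = weight_memory_gb(n, 16)   # ~13 GB: weights alone nearly fill a 16 GB T4
int4 = weight_memory_gb(n, 4)    # ~3.3 GB: leaves headroom for LoRA gradients,
                                 # optimizer state, and activations
print(f"fp16: {fp16:.1f} GB, 4-bit: {int4:.1f} GB")
```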
Before/after comparison, catastrophic forgetting detection, and overfitting signals.
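A minimal sketch of one overfitting signal from the list above: validation loss rising while training loss keeps falling. The window size and comparison rule are illustrative choices, and the loss curves are made-up data:

```python
def overfitting_detected(train_losses, val_losses, window: int = 3) -> bool:
    """Flag when train loss still improves but val loss has worsened
    over the last `window` evaluations (illustrative thresholds)."""
    if len(val_losses) < window + 1:
        return False
    train_improving = train_losses[-1] < train_losses[-window - 1]
    val_worsening = val_losses[-1] > min(val_losses[-window - 1:])
    return train_improving and val_worsening

train = [2.1, 1.7, 1.4, 1.2, 1.0, 0.9]
val   = [2.0, 1.8, 1.7, 1.8, 1.9, 2.0]
print(overfitting_detected(train, val))  # → True
```

Catastrophic forgetting needs a different probe: the same comparison, but with the validation set drawn from general-domain tasks the base model used to handle well rather than from the fine-tuning distribution.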