A technique for transferring capabilities from a large "teacher" model to a smaller "student" model, typically by having the teacher generate a synthetic dataset on which the student is then fine-tuned. Distilled models can match or even exceed the teacher's performance on specific tasks while being far cheaper to deploy, which made distillation a common approach in 2024-2025 for building efficient, specialized models.
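A minimal sketch of the two-step recipe described above, using a toy numeric "teacher" and a linear "student" so it runs with only the standard library; the function names (`teacher`, `generate_synthetic_dataset`, `fine_tune_student`) are hypothetical stand-ins, not a real API:

```python
import random

def teacher(x):
    # Stand-in for a large, expensive model; here just a fixed function.
    return 3.0 * x + 1.0

def generate_synthetic_dataset(n=100, seed=0):
    # Step 1: the teacher labels inputs to produce a synthetic training set.
    rng = random.Random(seed)
    xs = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    return [(x, teacher(x)) for x in xs]

def fine_tune_student(dataset, lr=0.1, epochs=200):
    # Step 2: fit a small student (y = w*x + b) to the teacher's labels
    # by stochastic gradient descent on squared error.
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in dataset:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return w, b

dataset = generate_synthetic_dataset()
w, b = fine_tune_student(dataset)
# The student's parameters should closely recover the teacher's behavior.
```

In a real LLM setting the "synthetic dataset" would be teacher-generated prompt/response pairs and the "fine-tuning" would be standard supervised training of the student on those pairs; the shape of the pipeline is the same.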