
Knowledge distillation

Definition

Knowledge distillation trains a smaller student model to match the outputs (and sometimes intermediate representations) of a larger teacher. The student learns from the teacher’s soft labels and can run with less compute.

It is a model compression technique that preserves more of the teacher’s behavior than training the student on hard labels alone. It is used for BERT → DistilBERT, large LLMs → smaller variants, and for transferring knowledge from ensembles.
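A common way to write the combined training objective, in the spirit of Hinton et al.’s formulation (a sketch; z_s and z_t denote student and teacher logits, y the ground-truth label, and the temperature T and mixing weight \alpha are hyperparameters):

    L = (1 - \alpha)\,\mathrm{CE}\big(y,\ \mathrm{softmax}(z_s)\big) + \alpha\, T^{2}\,\mathrm{KL}\big(\mathrm{softmax}(z_t/T)\ \|\ \mathrm{softmax}(z_s/T)\big)

The T^2 factor keeps gradient magnitudes comparable as the temperature varies; the temperature and the matching terms are described in the next section.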

How it works

The teacher (a large model) produces logits (or embeddings) on the training data. The student (a smaller model) is trained to match the teacher’s logits (e.g. via KL divergence with temperature scaling) in addition to, or instead of, the hard labels (ground truth). The temperature softens the teacher’s distribution so the student learns from dark knowledge (the relative scores across classes). Optionally, intermediate layers or attention can also be matched. The student is trained with a mix of distillation loss and task loss; after training, it runs with the student’s own capacity and latency.
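A minimal sketch of such a distillation loss, assuming PyTorch and a classification task (the temperature T=2.0 and mixing weight alpha=0.5 are illustrative defaults, not values prescribed by the technique):

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
        # Teacher distribution softened by temperature T (the soft labels).
        soft_targets = F.softmax(teacher_logits / T, dim=-1)
        # Student log-probabilities at the same temperature.
        student_log_probs = F.log_softmax(student_logits / T, dim=-1)
        # KL divergence between teacher and student; the T^2 factor keeps
        # gradient magnitudes comparable across temperatures.
        kd = F.kl_div(student_log_probs, soft_targets, reduction="batchmean") * (T * T)
        # Standard task loss on the hard labels (ground truth).
        ce = F.cross_entropy(student_logits, labels)
        # Mix distillation loss and task loss.
        return alpha * kd + (1 - alpha) * ce

    # Illustrative shapes: a batch of 8 examples, 10 classes.
    student_logits = torch.randn(8, 10, requires_grad=True)
    teacher_logits = torch.randn(8, 10)   # in practice, computed under torch.no_grad()
    labels = torch.randint(0, 10, (8,))
    loss = distillation_loss(student_logits, teacher_logits, labels)
    loss.backward()

In practice the teacher’s logits come from a frozen forward pass, and only the student’s parameters receive gradients.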

Use cases

Knowledge distillation fits when you want a small, fast student that approximates a large teacher for deployment.

  • Training smaller, faster models that approximate large ones (e.g. BERT → DistilBERT)
  • Enabling deployment when the teacher is too heavy for production
  • Transferring knowledge from ensembles or from multiple teachers

External documentation

See also