Few-shot learning
Definition
Few-shot learning aims to adapt quickly to a new task from a small number of labeled examples (e.g. 1–5 per class).
It sits between transfer learning (which assumes more target data) and zero-shot learning (no target examples). Classical few-shot methods use episodic meta-training (e.g. MAML), so the model learns to adapt from a support set; LLMs do few-shot implicitly, by conditioning on in-context examples in the prompt.
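As a toy illustration of the MAML-style "few gradient steps" adaptation, here is a minimal sketch on a 1-D linear regression task. The model, learning rate, step count, and data are all illustrative assumptions for this sketch, not part of MAML itself (which also meta-learns the initialization across many tasks):

```python
import numpy as np

# Sketch of the inner adaptation loop: starting from (meta-learned)
# initial weights, take a few gradient steps on a tiny support set.
# Here the "model" is just y = w * x with a scalar weight.
def adapt(w_init, x_support, y_support, lr=0.1, steps=3):
    w = w_init
    for _ in range(steps):
        pred = w * x_support
        grad = np.mean(2 * (pred - y_support) * x_support)  # d(MSE)/dw
        w = w - lr * grad
    return w

x_s = np.array([1.0, 2.0, 3.0])   # support inputs (only 3 examples)
y_s = 2.0 * x_s                   # hypothetical task: y = 2x
w_adapted = adapt(w_init=0.5, x_support=x_s, y_support=y_s)
# After a few steps, w_adapted is close to the task's true slope 2.0.
```

In full MAML, the outer loop would then evaluate the adapted weights on the query set and backpropagate through the adaptation to improve the initialization.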
How it works
Each task has a support set (a few labeled examples, e.g. 1–5 per class) and a query set (examples to predict).
- Adapt: the model uses the support set to adapt, e.g. by computing class prototypes or taking a few gradient steps (MAML).
- Predict: the adapted model predicts labels for the query set.
- Episodic training: sample many few-shot tasks from a meta-train set; for each, adapt on its support set and optimize so that predictions on its query set improve.
At test time, the model receives a new task's support set and predicts on its query set. For LLMs, "adapt" is simply conditioning on the support examples in the prompt (in-context few-shot).
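The adapt-then-predict cycle above can be sketched with the "compute prototypes" variant, where adaptation is just averaging support embeddings per class and prediction is nearest-prototype lookup. The embeddings below are random stand-ins for the output of some assumed encoder; the task shape (3-way, 2-shot) is also illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_way, k_shot, dim = 3, 2, 8  # hypothetical 3-way 2-shot task

# Support set: k_shot precomputed embeddings per class
# (random vectors stand in for real encoder outputs).
support = rng.normal(size=(n_way, k_shot, dim))

# Adapt: average each class's support embeddings into one prototype.
prototypes = support.mean(axis=1)            # shape (n_way, dim)

# Predict: classify a query embedding by nearest prototype
# (here the query is built near class 1, so class 1 should win).
query = support[1].mean(axis=0) + 0.01 * rng.normal(size=dim)
dists = np.linalg.norm(prototypes - query, axis=1)
pred = int(np.argmin(dists))                 # -> 1
```

Episodic training repeats this over many sampled tasks, updating the encoder so that query predictions improve; at test time the same adapt-then-predict cycle runs on an unseen task.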
Use cases
Few-shot learning applies when you have only a handful of examples per class or task (including in-context LLM prompts).
- Classifying rare classes with only a few labeled examples
- LLM in-context learning (e.g. 1–5 examples in the prompt)
- Rapid adaptation in robotics or personalization with minimal data
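For the LLM in-context case, the "support set" is just labeled examples placed in the prompt, with no weight updates. A minimal sketch of such a prompt (the sentiment task and example reviews are made up for illustration):

```python
# Hypothetical few-shot prompt: two in-context "support" examples
# followed by the query the model should complete.
prompt = (
    "Classify the sentiment as positive or negative.\n\n"
    "Review: The film was a delight. Sentiment: positive\n"
    "Review: A tedious, joyless slog. Sentiment: negative\n"
    "Review: I left the theater smiling. Sentiment:"
)
```

The model is conditioned on the two labeled examples and is expected to continue the pattern for the final review.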