Large language models (LLMs)

Definition

Large language models are transformer-based models trained on massive text (and sometimes multimodal) data. When scaled up and aligned (e.g. via RLHF), they exhibit emergent abilities such as few-shot learning, reasoning, and tool use.

A useful mental model: pretraining learns next-token prediction on huge corpora and gives the model broad knowledge and language ability. Instruction tuning (and similar) trains the model to follow user instructions and formats. Alignment (e.g. RLHF, DPO) shapes behavior to be helpful, honest, and safe. At inference time you can use the model zero-shot, few-shot, or augment it with retrieval (RAG) or tools (agents).
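The zero-shot versus few-shot distinction above is purely a matter of how the prompt is built. A minimal sketch (the `build_prompt` helper and the sentiment task are illustrative, not from any particular API):

```python
def build_prompt(task, examples, query):
    """Build a few-shot prompt: task description, worked examples, then the query.
    With an empty examples list this degenerates to a zero-shot prompt."""
    parts = [task]
    for inp, out in examples:
        parts.append(f"Input: {inp}\nOutput: {out}")
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)

task = "Classify the sentiment as positive or negative."

# Zero-shot: no examples, the model relies on instructions alone.
zero_shot = build_prompt(task, [], "I loved this film")

# Few-shot: in-context examples show the expected format and labels.
few_shot = build_prompt(
    task,
    [("What a waste of time", "negative"),
     ("Best purchase I ever made", "positive")],
    "I loved this film",
)
```

Either string would be sent to the model as-is; the model completes the final `Output:` line.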

How it works

Pretraining learns next-token prediction on large corpora and produces a base model. Optional fine-tuning (e.g. supervised instruction tuning) adapts it to tasks or instruction formats; alignment (e.g. RLHF, DPO) optimizes for human preference and safety. The deployed model is then used at inference time: you can call it zero-shot (no examples), few-shot (with in-context examples), or augment it with RAG (retrieval as context) or agents (tools and loops). The diagram summarizes the training pipeline and the two main inference augmentations.
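"Retrieval as context" can be sketched in a few lines. This toy version scores documents by word overlap and pastes the top hits into the prompt; a real RAG system would use dense embeddings and a vector index, and the function names here (`retrieve`, `rag_prompt`) are illustrative:

```python
def retrieve(query, docs, k=2):
    """Rank documents by word overlap with the query and return the top k.
    Stand-in for embedding similarity search in a real system."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def rag_prompt(query, docs):
    """Insert the retrieved passages as context ahead of the question."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return (
        "Answer using the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )

docs = [
    "The Eiffel Tower is in Paris and was completed in 1889.",
    "Transformers use self-attention over token sequences.",
    "RLHF fine-tunes a model from human preference data.",
]
prompt = rag_prompt("When was the Eiffel Tower completed?", docs)
```

The resulting prompt grounds the model's answer in retrieved text instead of relying on parametric memory, which is the main lever RAG offers against hallucination.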

Use cases

LLMs are used wherever you need flexible language understanding or generation, from chat to code to analysis.

  • Chat, summarization, and translation
  • Code assistance and generation
  • Question answering and research assistance (often with RAG or tools)

Pros and cons

Pros:

  • Flexible, one model for many tasks
  • Strong few-shot performance
  • Enables agents and tool use

Cons:

  • Cost and latency
  • Hallucination, bias
  • Requires careful evaluation
