Large language models (LLMs)

Definition

Large language models are transformer-based models trained on massive text (and sometimes multimodal) data. When scaled and aligned (e.g. via RLHF), they exhibit emergent abilities such as few-shot learning, reasoning, and tool use.

A useful mental model: pretraining learns next-token prediction over enormous corpora, giving the model broad knowledge and language ability. Instruction tuning (and similar methods) trains the model to follow user instructions and formats. Alignment (e.g. RLHF, DPO) shapes behavior to be helpful, honest, and safe. At inference time you can use the model zero-shot, few-shot, or augment it with retrieval (RAG) or tools (agents).
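The zero-shot vs. few-shot distinction above comes down to how the prompt is assembled. A minimal sketch, assuming a hypothetical sentiment-labeling task (the instruction, examples, and labels are illustrative, not from any particular API):

```python
# Build a few-shot prompt: an instruction, worked examples, then the query.
# Passing an empty example list yields a zero-shot prompt.

def build_prompt(task, examples, query):
    """Assemble a prompt from an instruction, (text, label) examples, and a query."""
    lines = [task]
    for text, label in examples:
        lines.append(f"Text: {text}\nLabel: {label}")
    lines.append(f"Text: {query}\nLabel:")  # model is expected to complete the label
    return "\n\n".join(lines)

examples = [
    ("The food was great", "positive"),
    ("Terrible service", "negative"),
]
prompt = build_prompt("Classify the sentiment of each text.", examples, "Loved it")
print(prompt)
```

The resulting string would be sent to an LLM as-is; the in-context examples are what enable few-shot learning without any weight updates.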

How it works

Pretraining learns next-token prediction on large corpora and produces a base model. Optional fine-tuning adapts it to specific tasks or instruction formats; alignment (e.g. RLHF, DPO) optimizes for human preferences and safety. The deployed model is then used at inference time: you can call it zero-shot (no examples), few-shot (with in-context examples and prompt engineering), or augment it with RAG (retrieved documents added as context) or agents (tools and loops). The diagram summarizes the training pipeline and the two main inference augmentations.
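The RAG augmentation mentioned above can be sketched in a few lines. This is a toy illustration, not a production retriever: word overlap stands in for embedding similarity, and the corpus and query are made up for the example.

```python
# Minimal RAG sketch: rank documents by word overlap with the query
# (a stand-in for embedding similarity), then prepend the top hits as context.

def retrieve(query, corpus, k=2):
    """Return the k corpus documents sharing the most words with the query."""
    q = set(query.lower().split())
    scored = sorted(corpus, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_rag_prompt(query, corpus):
    """Assemble a prompt with retrieved context followed by the question."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "The Eiffel Tower is in Paris.",
    "Photosynthesis converts light into chemical energy.",
    "Paris is the capital of France.",
]
print(build_rag_prompt("What city is the Eiffel Tower in?", corpus))
```

The prompt grounds the model's answer in retrieved text, which is the main lever RAG offers against hallucination.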

Use cases

LLMs are used wherever you need flexible language understanding or generation, from chat to code to analysis.

  • Chat, summarization, and translation
  • Code assistance and generation
  • Question answering and research assistance (often with RAG or tools)
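The "with tools" variant of question answering above follows an agent loop: the model either emits a tool call or a final answer. A minimal sketch, where `fake_model` is a hypothetical stub standing in for a real LLM API and `calculator` is the only available tool:

```python
import re

def fake_model(prompt):
    # Stand-in for an LLM: requests the calculator tool for any "a + b" it sees,
    # otherwise gives a final answer directly.
    m = re.search(r"(\d+)\s*\+\s*(\d+)", prompt)
    if m:
        return f"CALL calculator {m.group(0)}"
    return "FINAL I don't know."

def calculator(expr):
    """Toy tool: evaluate a single 'a + b' expression."""
    a, b = (int(x) for x in expr.split("+"))
    return str(a + b)

def agent(question, max_steps=3):
    """Loop: ask the model, dispatch tool calls, stop on a final answer."""
    prompt = question
    for _ in range(max_steps):
        reply = fake_model(prompt)
        if reply.startswith("CALL calculator "):
            result = calculator(reply[len("CALL calculator "):])
            return f"The answer is {result}."
        if reply.startswith("FINAL "):
            return reply[len("FINAL "):]
    return "Gave up."

print(agent("What is 17 + 25?"))  # prints: The answer is 42.
```

Real agent frameworks replace the stub with an LLM call and the string protocol with structured tool-call messages, but the dispatch loop has this shape.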

Advantages and disadvantages

Pros                               | Cons
Flexible, one model for many tasks | Cost and latency
Strong few-shot performance        | Hallucination, bias
Enables agents and tool use        | Requires careful evaluation

External documentation

See also