
Prompt engineering

Definition

Prompt engineering is the practice of designing input text (prompts) to elicit the desired behavior from LLMs: task format, few-shot examples, chain-of-thought, role-playing, and constraints.

It is the primary way to steer LLMs without fine-tuning: you control the context, format, and examples in the prompt. Combined with RAG, prompts often include retrieved passages; with agents, they define tool use and reasoning style.

How it works

You compose a prompt (system message, task description, constraints) and optionally examples (few-shot). The LLM takes this as input and produces an output. Zero-shot uses only instructions; few-shot adds example input-output pairs so the model infers the task. Chain-of-thought (see CoT) asks the model to “think step by step” to improve reasoning. Structured output (e.g., “respond in JSON”) can be enforced via parsing or API options. Iterate on prompt wording and examples, and evaluate on a dev set to improve reliability.
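The composition step above can be sketched as plain message construction, independent of any provider. This is a minimal illustration: the classification task, the example pairs, and the chat-message dict format are assumptions chosen for the sketch, not part of any specific API.

```python
def build_prompt(system, task, examples, query):
    """Compose a system message, few-shot input-output pairs, and the final query.

    `examples` is a list of (input, output) tuples; an empty list yields
    a zero-shot prompt (instructions only).
    """
    messages = [{"role": "system", "content": system}]
    for example_input, example_output in examples:
        # Each few-shot pair is rendered as a user turn plus an assistant turn,
        # so the model can infer the task from the demonstrated mapping.
        messages.append({"role": "user", "content": f"{task}\n\n{example_input}"})
        messages.append({"role": "assistant", "content": example_output})
    # The real query uses the same format as the examples.
    messages.append({"role": "user", "content": f"{task}\n\n{query}"})
    return messages


messages = build_prompt(
    system="You are a concise sentiment classifier.",
    task="Classify the sentiment as positive or negative.",
    examples=[
        ("I loved this movie.", "positive"),
        ("The food was terrible.", "negative"),
    ],
    query="The service was quick and friendly.",
)
```

The resulting list (one system message, two example pairs, one query) can be sent to any chat-style LLM endpoint; iterating on the task wording or swapping example pairs changes behavior without touching model weights.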

Use cases

Prompt engineering matters whenever you call an LLM: it shapes behavior, format, and reasoning without changing the model's weights.

  • Steering chat and task completion (role, format, examples)
  • Eliciting reasoning (chain-of-thought) for math or logic
  • Constraining outputs (JSON, length, tone) for APIs or UX
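For the last use case, constrained outputs are typically enforced by parsing and validating the model's reply before it reaches downstream code. A minimal sketch, assuming a hypothetical reply that should be a JSON object with "label" and "confidence" keys (the schema is illustrative, not from any specific API):

```python
import json

# Illustrative schema assumption: the prompt instructed the model to
# respond with a JSON object containing exactly these keys.
REQUIRED_KEYS = {"label", "confidence"}


def parse_structured_reply(reply: str) -> dict:
    """Parse and validate a model reply expected to be a JSON object.

    Raises ValueError on malformed JSON or missing keys, so the caller
    can retry the request or fall back.
    """
    data = json.loads(reply)  # json.JSONDecodeError is a ValueError subclass
    if not isinstance(data, dict):
        raise ValueError("expected a JSON object")
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    return data


# A well-formed (hypothetical) reply passes validation:
ok = parse_structured_reply('{"label": "positive", "confidence": 0.93}')
```

Validation failures are a natural signal for prompt iteration: if the model frequently omits keys or wraps JSON in prose, tighten the format instruction or add an example, and re-measure on a dev set.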

External documentation

See also