Prompt engineering
Definition
Prompt engineering is the practice of designing input text (prompts) to elicit the desired behavior from LLMs: task format, few-shot examples, chain-of-thought, role-playing, and constraints.
It is the primary way to steer LLMs without fine-tuning: you control the context, format, and examples in the prompt. Combined with RAG, prompts often include retrieved passages; with agents, they define tool use and reasoning style.
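The steering levers above (role, context, format, constraints) come together as an assembled input; a minimal sketch in Python, where the role/content message structure follows the common chat-API convention and `build_prompt` is a hypothetical helper, not a library function:

```python
# Sketch: steering an LLM via prompt components rather than weights.
# The role/content dict structure follows the common chat-API convention;
# build_prompt is a hypothetical helper, not a library function.

def build_prompt(role_description, task, constraints, user_input):
    """Assemble a chat-style prompt: the system message sets the role and
    constraints, the user message carries the actual task input."""
    system = (
        f"{role_description}\n\n"
        f"Task: {task}\n"
        "Constraints: " + "; ".join(constraints)
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_input},
    ]

messages = build_prompt(
    role_description="You are a concise technical summarizer.",
    task="Summarize the passage in one sentence.",
    constraints=["plain English", "no more than 25 words"],
    user_input="Prompt engineering steers LLM behavior through input text alone.",
)
```

Everything the model sees is in `messages`; changing behavior means editing these strings, not the model.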
How it works
You compose a prompt (system message, task description, constraints) and optionally examples (few-shot). The LLM takes this as input and produces an output. Zero-shot uses only instructions; few-shot adds example input-output pairs so the model infers the task. Chain-of-thought (see CoT) asks the model to "think step by step" to improve reasoning. Structured output (e.g., "respond in JSON") can be enforced via parsing or API options. Iterate on prompt wording and examples, and evaluate on a dev set to improve reliability.
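The zero-shot versus few-shot distinction comes down to how the prompt string is assembled; a sketch (the `Input:`/`Output:` labels and the step-by-step trigger phrase are illustrative conventions, not a standard):

```python
# Sketch: zero-shot vs. few-shot prompt assembly. The Input/Output labels
# and the chain-of-thought trigger phrase are illustrative conventions.

def zero_shot(instruction, query):
    """Instructions only: the model must infer the task from the text alone."""
    return f"{instruction}\n\nInput: {query}\nOutput:"

def few_shot(instruction, examples, query, chain_of_thought=False):
    """Prepend example input-output pairs so the model infers the task
    format; optionally add a chain-of-thought trigger."""
    parts = [instruction]
    for x, y in examples:
        parts.append(f"Input: {x}\nOutput: {y}")
    if chain_of_thought:
        parts.append("Think step by step before giving the final answer.")
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)

examples = [("2 + 2", "4"), ("7 - 3", "4")]
prompt = few_shot("Solve the arithmetic problem.", examples, "5 + 8",
                  chain_of_thought=True)
```

Iterating on the instruction wording, the choice of examples, and the trigger phrase, then re-scoring on a dev set, is the core loop described above.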
Use cases
Prompt engineering matters whenever you call an LLM: it shapes behavior, format, and reasoning without changing weights.
- Steering chat and task completion (role, format, examples)
- Eliciting reasoning (chain-of-thought) for math or logic
- Constraining outputs (JSON, length, tone) for APIs or UX
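Constraining outputs for APIs (the last bullet) typically pairs an instruction like "respond only with JSON" with validation on the caller's side, re-prompting on failure; a sketch using only the standard library, where `parse_json_reply` is a hypothetical helper and the retry policy is an assumption:

```python
import json

# Sketch: enforcing a JSON output constraint by validating the model's
# reply. parse_json_reply is a hypothetical helper; the caller's retry
# policy (re-prompt on None) is an assumption, not a fixed API.

def parse_json_reply(reply, required_keys):
    """Return the parsed dict if the reply is valid JSON containing the
    required keys, else None so the caller can re-prompt the model."""
    try:
        data = json.loads(reply)
    except json.JSONDecodeError:
        return None
    if not isinstance(data, dict) or not required_keys.issubset(data):
        return None
    return data

ok = parse_json_reply('{"sentiment": "positive", "score": 0.9}',
                      {"sentiment", "score"})
bad = parse_json_reply("Sure! Here is the JSON you asked for:",
                       {"sentiment", "score"})
```

Some APIs offer structured-output options that enforce the schema server-side; when they are unavailable, this parse-and-retry pattern is a common fallback.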