
RAG Architecture

Definition

RAG architecture covers how you chunk documents, choose embeddings and vector stores, run retrieval (dense, sparse, or hybrid), and combine the retrieved context with the LLM (prompt design, reranking).

Design choices here directly affect RAG quality and latency. Trade-offs include chunk size (larger chunks give more context each but less retrieval precision), embedding model (quality vs. cost), and whether to add a reranker or hybrid search. See vector databases for indexing options.
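The chunk-size trade-off above can be made concrete with a minimal chunker. This is an illustrative sketch, not a library API: the function name and the character-based `chunk_size`/`overlap` defaults are assumptions, and production systems often split on sentence or paragraph boundaries instead.

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character chunks with overlap.

    Overlapping the chunk boundaries reduces the chance that a fact is
    cut in half and lost to retrieval; larger chunk_size means more
    context per chunk but less precise matches.
    """
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break
    return chunks
```

Tuning these two numbers is exactly the precision-vs-context trade-off the paragraph describes: a smaller `chunk_size` retrieves more precisely but may miss surrounding context.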

How it works

  • Chunking: documents are split into segments (by paragraph, sentence, or fixed size); overlap and metadata can be added.
  • Embed and index: chunks are turned into vectors via an embedding model and stored in a vector database.
  • Query: at query time the query is embedded; retrieval fetches the top-k similar chunks (dense search), optionally combined with keyword (sparse) search for hybrid retrieval.
  • Rank: an optional reranker (e.g. a cross-encoder) rescores the top candidates. The chosen chunks are then formatted into the LLM prompt.

Advanced setups add query rewriting, multi-hop retrieval, and citation extraction.
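The query-time steps can be sketched end to end. This is a toy illustration under stated assumptions: an embedding model is represented by plain vectors already computed elsewhere, the in-memory `index` stands in for a vector database, and the prompt template is hypothetical.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_vec: list[float], index: list[tuple[str, list[float]]],
             k: int = 3) -> list[str]:
    """Dense retrieval: return the top-k chunks by cosine similarity."""
    scored = sorted(index, key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [chunk for chunk, _ in scored[:k]]

def build_prompt(question: str, chunks: list[str]) -> str:
    """Format the retrieved chunks into the LLM prompt."""
    context = "\n\n".join(chunks)
    return (f"Answer using only the context below.\n\n"
            f"Context:\n{context}\n\nQuestion: {question}")
```

In a real system the query vector would come from the same embedding model used at indexing time, and `retrieve` would be a call into the vector database's top-k search rather than a full sort.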

Use cases

Architecture choices (chunking, retrieval, reranking) directly affect answer quality and latency in production RAG.

  • Designing chunking and indexing for long documents or codebases
  • Choosing dense vs. sparse or hybrid retrieval for domain data
  • Adding reranking and citation for production RAG systems
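For the hybrid-retrieval case above, one common way to merge dense and sparse result lists is reciprocal rank fusion (RRF). The sketch below assumes each retriever returns an ordered list of document ids; the constant `k=60` is the value commonly cited for RRF, used here illustratively.

```python
def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Reciprocal Rank Fusion: merge several ranked lists of doc ids.

    Each document scores 1 / (k + rank) in every list it appears in
    (rank is 1-based), so documents ranked highly by multiple
    retrievers rise to the top of the fused list.
    """
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

A document that appears near the top of both the dense and the sparse ranking outranks one that tops only a single list, which is why RRF is a popular default for hybrid search before any reranker runs.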

External documentation

See also