# RAG Pipeline

Ingest documents, embed them, and retrieve relevant context during chat. `@agentskit/rag` works with any embedder and vector store.
## Setup

```ts
import { createRAG } from '@agentskit/rag'
import { openaiEmbedder } from '@agentskit/adapters'

const rag = createRAG({
  embed: openaiEmbedder({ apiKey: process.env.OPENAI_API_KEY! }),
  store: yourVectorStore, // SQLite, Redis, or in-memory
  chunkSize: 512,
  chunkOverlap: 50,
})
```
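To build intuition for the `chunkSize` and `chunkOverlap` options, here is a minimal sketch of fixed-size chunking with overlap. The `chunk` helper is hypothetical and for illustration only, not part of `@agentskit/rag`:

```ts
// Fixed-size chunking with overlap: each chunk shares `overlap`
// characters with the previous one, so a sentence split at a chunk
// boundary still appears whole in one of the two chunks.
function chunk(text: string, size = 512, overlap = 50): string[] {
  const chunks: string[] = []
  const step = size - overlap // advance by size minus overlap each time
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + size))
    if (start + size >= text.length) break // last chunk reached the end
  }
  return chunks
}
```

Larger overlap improves recall near chunk boundaries at the cost of storing and embedding more redundant text.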
## Ingest Documents

```ts
import { readFileSync } from 'node:fs'

await rag.ingest([
  { id: 'readme', content: readFileSync('README.md', 'utf-8'), source: 'README.md' },
  { id: 'guide', content: readFileSync('docs/guide.md', 'utf-8'), source: 'guide.md' },
])
```
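Conceptually, ingestion splits each document into chunks, embeds each chunk, and writes the vectors to the store. The sketch below shows that pipeline with assumed signatures; the real `@agentskit/rag` internals, the `embedFn` shape, and the `store.add` method name are all illustrative assumptions:

```ts
// Sketch of an ingest pipeline: split -> embed -> store.
// All interfaces here are assumptions for illustration.
async function ingest(
  docs: { id: string; content: string; source: string }[],
  embedFn: (text: string) => Promise<number[]>,
  store: { add: (e: { id: string; vector: number[]; content: string; source: string }) => void },
  split: (text: string) => string[],
): Promise<void> {
  for (const doc of docs) {
    const chunks = split(doc.content)
    for (let i = 0; i < chunks.length; i++) {
      store.add({
        id: `${doc.id}#${i}`, // chunk ids derived from the document id
        vector: await embedFn(chunks[i]),
        content: chunks[i],
        source: doc.source, // kept so results can cite their origin
      })
    }
  }
}
```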
## Search Directly

```ts
const results = await rag.search('how to configure tools', { topK: 3 })

results.forEach(doc => {
  console.log(`[${doc.source}] ${doc.content.slice(0, 100)}...`)
})
```
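Under the hood, a `topK` search typically embeds the query and ranks stored vectors by cosine similarity. A minimal in-memory store sketch (the `add`/`query` method names and `Entry` shape are assumptions, not the `@agentskit/rag` store interface):

```ts
type Entry = { id: string; vector: number[]; content: string; source: string }

// In-memory vector store sketch: linear scan ranked by cosine similarity.
class InMemoryStore {
  private entries: Entry[] = []

  add(entry: Entry): void {
    this.entries.push(entry)
  }

  // Return the topK entries most similar to the query vector.
  query(vector: number[], topK: number): Entry[] {
    const cos = (a: number[], b: number[]) => {
      let dot = 0, na = 0, nb = 0
      for (let i = 0; i < a.length; i++) {
        dot += a[i] * b[i]
        na += a[i] * a[i]
        nb += b[i] * b[i]
      }
      return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1) // guard zero vectors
    }
    return [...this.entries]
      .sort((x, y) => cos(vector, y.vector) - cos(vector, x.vector))
      .slice(0, topK)
  }
}
```

A linear scan is fine for thousands of chunks; dedicated stores (SQLite with a vector extension, Redis, etc.) add indexing for larger corpora.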
## Use with Chat

`createRAG` returns a `Retriever`; pass it directly to `useChat` or the runtime:
```tsx
import { useChat } from '@agentskit/react'

function RAGChat() {
  const chat = useChat({
    adapter: yourAdapter,
    retriever: rag, // retrieved context auto-injected into system prompt
  })
  // ... render chat UI
}
```
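"Auto-injected into the system prompt" usually means the retrieved chunks are folded into the prompt before each model call, along these lines. The template below is a sketch; the exact format `@agentskit/rag` uses is an assumption:

```ts
// Sketch: fold retrieved chunks into the system prompt.
// The wording of the template is illustrative, not the library's.
function buildSystemPrompt(
  base: string,
  docs: { source: string; content: string }[],
): string {
  if (docs.length === 0) return base // nothing retrieved: prompt unchanged
  const context = docs
    .map((d) => `[${d.source}]\n${d.content}`) // cite each chunk's source
    .join('\n\n')
  return `${base}\n\nUse the following context to answer:\n\n${context}`
}
```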
## Custom Chunking

```ts
const rag = createRAG({
  embed: yourEmbedder,
  store: yourStore,
  split: (text) => text.split('\n\n'), // paragraph-based chunking
})
```
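A bare `text.split('\n\n')` can emit empty strings and many tiny chunks. A slightly more robust paragraph splitter, sketched here as a hypothetical `split` candidate, drops blanks and merges short paragraphs up to a length budget:

```ts
// Paragraph splitter sketch: trims and drops empty paragraphs, then
// greedily merges adjacent paragraphs until a chunk would exceed maxLen.
function paragraphSplit(text: string, maxLen = 512): string[] {
  const paras = text.split(/\n{2,}/).map((p) => p.trim()).filter(Boolean)
  const chunks: string[] = []
  let current = ''
  for (const p of paras) {
    // +2 accounts for the '\n\n' separator re-inserted between paragraphs
    if (current && current.length + p.length + 2 > maxLen) {
      chunks.push(current)
      current = p
    } else {
      current = current ? `${current}\n\n${p}` : p
    }
  }
  if (current) chunks.push(current)
  return chunks
}
```

Usage would be `split: (text) => paragraphSplit(text, 512)` in the config above.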