AgentsKit API

    Module @agentskit/rag
    Plug-and-play retrieval-augmented generation: chunk documents, embed them, and retrieve the right context at query time.

    npm install @agentskit/rag @agentskit/memory @agentskit/adapters
    
    import { createRAG } from '@agentskit/rag'
    import { openaiEmbedder } from '@agentskit/adapters'
    import { fileVectorMemory } from '@agentskit/memory'

    const rag = createRAG({
      embed: openaiEmbedder({ apiKey: process.env.OPENAI_API_KEY! }),
      store: fileVectorMemory({ path: './vectors' }),
    })

    await rag.ingest([
      { id: 'doc-1', content: 'AgentsKit is a JavaScript agent toolkit...' },
    ])

    const docs = await rag.search('How does AgentsKit work?', { topK: 5 })

    Pass the RAG instance as retriever so the runtime injects retrieved context into the task:

    import { createRuntime } from '@agentskit/runtime'
    import { openai } from '@agentskit/adapters'

    const runtime = createRuntime({
      adapter: openai({ apiKey: process.env.OPENAI_API_KEY!, model: 'gpt-4o' }),
      retriever: rag,
    })

    const result = await runtime.run('Explain the AgentsKit architecture based on ingested docs')
    console.log(result.content)

    You can also call rag.retrieve({ query, messages }) to satisfy the core Retriever contract (for example from a custom controller).
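    Based on the call shape shown above, the Retriever contract can be sketched roughly as follows. The interface and field types here are assumptions for illustration only, not the actual definitions from @agentskit/core:

    ```typescript
    // Hypothetical sketch of the core Retriever contract. The exact types are
    // not shown on this page; the shapes below are assumed from rag.retrieve's
    // documented call signature: retrieve({ query, messages }).
    interface ChatMessage {
      role: 'user' | 'assistant' | 'system'
      content: string
    }

    interface RetrievedDoc {
      id: string
      content: string
      score?: number
    }

    interface Retriever {
      retrieve(input: { query: string; messages?: ChatMessage[] }): Promise<RetrievedDoc[]>
    }

    // A minimal in-memory implementation, e.g. for a custom controller or tests.
    const staticRetriever: Retriever = {
      async retrieve({ query }) {
        const corpus: RetrievedDoc[] = [
          { id: 'doc-1', content: 'AgentsKit is a JavaScript agent toolkit...' },
        ]
        // Naive substring match instead of vector search, just to satisfy the contract.
        return corpus.filter((d) =>
          d.content.toLowerCase().includes(query.toLowerCase()),
        )
      },
    }
    ```

    Anything satisfying this shape can stand in for a full RAG pipeline wherever a retriever is expected, which is handy for unit tests and custom controllers.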

    • Tune chunking with chunkSize, chunkOverlap, or a custom split function on createRAG
    • Swap fileVectorMemory for redisVectorMemory or a custom VectorMemory for production
    • Use geminiEmbedder, ollamaEmbedder, or any (text) => Promise<number[]> function as embed
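    Since embed accepts any (text) => Promise<number[]> function, a minimal sketch of a custom embedder might look like this. It is a toy hash-based embedder, useful only for offline testing, not a substitute for a real embedding model:

    ```typescript
    // Toy deterministic embedder: hashes character trigrams into a fixed-size
    // vector. For local testing only; use openaiEmbedder etc. in production.
    const DIM = 64

    async function toyEmbedder(text: string): Promise<number[]> {
      const vec = new Array<number>(DIM).fill(0)
      const t = text.toLowerCase()
      for (let i = 0; i < t.length - 2; i++) {
        let h = 0
        for (const ch of t.slice(i, i + 3)) h = (h * 31 + ch.charCodeAt(0)) >>> 0
        vec[h % DIM] += 1
      }
      // L2-normalize so cosine similarity reduces to a plain dot product.
      const norm = Math.hypot(...vec) || 1
      return vec.map((v) => v / norm)
    }

    // Usage with createRAG would then be (per the embed option shown above):
    // const rag = createRAG({ embed: toyEmbedder, store: fileVectorMemory({ path: './vectors' }) })
    ```

    Because the output is deterministic and normalized, it is convenient for asserting on ingest/search behavior in tests without network calls.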
    Package              Role
    @agentskit/core      Retriever, VectorMemory, types
    @agentskit/memory    Vector backends (fileVectorMemory, etc.)
    @agentskit/adapters  openaiEmbedder and other embedders
    @agentskit/runtime   retriever integration for agents
    @agentskit/react     useChat + chat UI with the same core types

    Full documentation