Persist conversations and add vector search to your agents — swap backends without changing agent code.
- `fileVectorMemory` (pure JS, no native deps) or Redis vector search for scale
- `VectorStore` interface is 3 methods; bring LanceDB, Pinecone, or any custom store in minutes

```sh
npm install @agentskit/memory better-sqlite3
# For production: npm install redis
# For vectors: npm install vectra
```
```ts
import { createRuntime } from '@agentskit/runtime'
import { anthropic } from '@agentskit/adapters'
import { sqliteChatMemory, fileVectorMemory } from '@agentskit/memory'

const runtime = createRuntime({
  adapter: anthropic({ apiKey: process.env.ANTHROPIC_API_KEY, model: 'claude-sonnet-4-6' }),
  memory: sqliteChatMemory({ path: './chat.db' }),
})

// The agent now remembers previous conversations across process restarts
const result = await runtime.run('What did we discuss yesterday?')
console.log(result.content)
```
Use a vector backend with `@agentskit/rag`'s `createRAG({ embed, store })` — `fileVectorMemory` and `redisVectorMemory` implement `VectorMemory` for chunk storage and search.
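To make the chunk-storage step concrete, here is a minimal sketch of the kind of fixed-size, overlapping chunking a RAG layer performs before embedding and storing text. `chunkText`, its `size`, and its `overlap` parameters are hypothetical illustrations, not part of the `@agentskit/rag` API — check `createRAG`'s options for the real knobs.

```typescript
// Hypothetical helper: split text into fixed-size chunks with overlap,
// so that a sentence cut at a chunk boundary still appears whole in the
// neighboring chunk. Real RAG libraries expose similar tuning options.
function chunkText(text: string, size = 200, overlap = 40): string[] {
  const chunks: string[] = []
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size))
    if (start + size >= text.length) break
  }
  return chunks
}
```

Overlap trades a little storage for recall: facts that straddle a boundary remain retrievable from at least one chunk.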
- Swap `sqliteChatMemory` for Redis or in-memory variants from the same package for different deployment targets
- Pair `@agentskit/adapters` with RAG — see `@agentskit/rag`

| Package | Role |
|---|---|
| @agentskit/core | Memory, VectorMemory types |
| @agentskit/rag | Chunking + retrieval on top of vector memory |
| @agentskit/runtime | memory / retriever options |
| @agentskit/adapters | Embeddings for RAG |
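The "3 methods" claim above can be sketched as follows. The method names (`upsert`, `query`, `delete`) and the in-memory cosine-similarity implementation are assumptions for illustration — consult the package's exported `VectorStore` type for the actual signatures before wiring in LanceDB, Pinecone, or your own store.

```typescript
// Assumed shape of the 3-method interface — not the package's real type.
interface VectorStore {
  upsert(id: string, vector: number[], text: string): Promise<void>
  query(vector: number[], k: number): Promise<{ id: string; text: string; score: number }[]>
  delete(id: string): Promise<void>
}

// Toy in-memory implementation ranking results by cosine similarity.
class MemoryVectorStore implements VectorStore {
  private rows = new Map<string, { vector: number[]; text: string }>()

  async upsert(id: string, vector: number[], text: string): Promise<void> {
    this.rows.set(id, { vector, text })
  }

  async query(vector: number[], k: number) {
    const cosine = (a: number[], b: number[]) => {
      let dot = 0, na = 0, nb = 0
      for (let i = 0; i < a.length; i++) {
        dot += a[i] * b[i]
        na += a[i] * a[i]
        nb += b[i] * b[i]
      }
      return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1)
    }
    return Array.from(this.rows.entries())
      .map(([id, row]) => ({ id, text: row.text, score: cosine(vector, row.vector) }))
      .sort((a, b) => b.score - a.score)
      .slice(0, k)
  }

  async delete(id: string): Promise<void> {
    this.rows.delete(id)
  }
}
```

A real backend replaces the `Map` with persistent storage and the linear scan with an index, but the surface an agent sees stays this small.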