
Memory

@agentskit/memory provides pluggable backends for chat history (ChatMemory) and semantic vector search (VectorMemory). All backends use lazy imports — the underlying driver is loaded only when the memory is first used, so unused backends add no runtime cost.

When to use

  • Persist chat transcripts across reloads or server restarts (sqliteChatMemory, redisChatMemory).
  • Store embeddings for semantic search, RAG, or custom retrieval (fileVectorMemory, redisVectorMemory, or a custom VectorStore).

For quick tests without persistence, prefer createInMemoryMemory from @agentskit/core (no extra drivers).

Install

npm install @agentskit/memory

The ChatMemory and VectorMemory interfaces themselves are defined in @agentskit/core, which the UI, runtime, and RAG packages already pull in.

Contract overview

ChatMemory (conceptual): loads and saves the conversation Message[] for a session; how a session is identified (a conversation id or equivalent) is backend-specific.

VectorMemory: upserts searchable documents with embeddings, queries by vector (cosine similarity), and deletes by id. Use it directly or via createRAG.
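A minimal TypeScript sketch of the ChatMemory contract, based on the description above. The method and field names beyond those documented here are assumptions, not the exact @agentskit/core definitions:

```typescript
// Hedged sketch of the chat-memory contract: load restores the transcript
// for a session, save persists it. Shapes are illustrative only.
interface Message {
  role: 'user' | 'assistant' | 'system'
  content: string
}

interface ChatMemory {
  load(): Promise<Message[]>
  save(messages: Message[]): Promise<void>
}

// A trivial in-process implementation, useful as a mental model.
// (The real no-dependency option is createInMemoryMemory from @agentskit/core.)
function inMemoryChatMemory(): ChatMemory {
  let messages: Message[] = []
  return {
    async load() { return messages },
    async save(next) { messages = next },
  }
}
```

Every persistent backend in this package fills in the same two operations against SQLite or Redis instead of a local variable.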

Public exports

  • Chat persistence: sqliteChatMemory, redisChatMemory
  • Vector persistence: fileVectorMemory, redisVectorMemory
  • Configuration and extension points (types): SqliteChatMemoryConfig, RedisChatMemoryConfig, FileVectorMemoryConfig, RedisVectorMemoryConfig, VectorStore, VectorStoreDocument, VectorStoreResult, RedisClientAdapter, RedisConnectionConfig

Backend Comparison

Backend             Type     Persistence              Extra dependency   Best for
sqliteChatMemory    Chat     File (SQLite)            better-sqlite3     Single-server, local dev
redisChatMemory     Chat     Remote (Redis)           redis              Multi-instance, production
redisVectorMemory   Vector   Remote (Redis Stack)     redis              Production semantic search
fileVectorMemory    Vector   File (JSON via vectra)   vectra             Local dev, prototyping

Chat Memory

Chat memory persists conversation history across sessions. Pass it to useChat via the memory option.

SQLite

npm install better-sqlite3

import { sqliteChatMemory } from '@agentskit/memory'

const memory = sqliteChatMemory({
  path: './chat.db',
  conversationId: 'user-123', // optional, default: 'default'
})

The database and table are created automatically on first use.

Redis

npm install redis

import { redisChatMemory } from '@agentskit/memory'

const memory = redisChatMemory({
  url: process.env.REDIS_URL!, // e.g. redis://localhost:6379
  conversationId: 'user-123', // optional
  keyPrefix: 'myapp:chat', // optional, default: 'agentskit:chat'
})

Using chat memory with useChat

import { useChat } from '@agentskit/react'
import { anthropic } from '@agentskit/adapters'
import { sqliteChatMemory } from '@agentskit/memory'

const memory = sqliteChatMemory({ path: './chat.db', conversationId: 'session-1' })

function Chat() {
  const chat = useChat({
    adapter: anthropic({ apiKey: process.env.ANTHROPIC_API_KEY!, model: 'claude-sonnet-4-6' }),
    memory,
  })
  // ...
}

Vector Memory

Vector memory stores embeddings for semantic search. It is used by @agentskit/rag but can also be queried directly.

File-based (vectra)

npm install vectra

import { fileVectorMemory } from '@agentskit/memory'

const store = fileVectorMemory({
  path: './vector-index', // directory where the index files are stored
})

Redis Vector (Redis Stack / Redis Cloud)

Requires a Redis instance with the RediSearch module enabled (Redis Stack, Redis Cloud, Upstash with Search).

npm install redis

import { redisVectorMemory } from '@agentskit/memory'

const store = redisVectorMemory({
  url: process.env.REDIS_URL!,
  indexName: 'myapp:docs:idx', // optional
  keyPrefix: 'myapp:vec', // optional
  dimensions: 1536, // optional — auto-detected from first insert
})

The HNSW index is created automatically on first write.

Storing and searching manually

import { openaiEmbedder } from '@agentskit/adapters'

const embed = openaiEmbedder({ apiKey: process.env.OPENAI_API_KEY! })

// Store
await store.store([{
  id: 'doc-1',
  content: 'AgentsKit makes AI chat easy.',
  embedding: await embed('AgentsKit makes AI chat easy.'),
  metadata: { source: 'readme' },
}])

// Search
const queryEmbedding = await embed('how do I build a chatbot?')
const results = await store.search(queryEmbedding, { topK: 3, threshold: 0.7 })
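Results are ranked by cosine similarity between the query vector and each stored embedding, and threshold drops matches below that score. For reference, the metric itself is small enough to write out (a standalone sketch, not the backend's internal code):

```typescript
// Cosine similarity: dot(a, b) / (|a| * |b|), in [-1, 1] for real vectors.
// A threshold of 0.7 keeps only documents whose embeddings point in
// roughly the same direction as the query embedding.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error('dimension mismatch')
  let dot = 0
  let normA = 0
  let normB = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    normA += a[i] * a[i]
    normB += b[i] * b[i]
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB))
}
```

This is also why the same embedding model must be used for both ingest and query: vectors from different models live in different spaces, so their similarity scores are meaningless.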

Custom VectorStore

Provide your own storage backend by implementing the VectorStore interface. Pass it to fileVectorMemory via the store option.

import type { VectorStore, VectorStoreDocument, VectorStoreResult } from '@agentskit/memory'
import { fileVectorMemory } from '@agentskit/memory'

const myStore: VectorStore = {
  async upsert(docs: VectorStoreDocument[]): Promise<void> {
    // persist docs to your database
  },
  async query(vector: number[], topK: number): Promise<VectorStoreResult[]> {
    // return nearest neighbours
    return []
  },
  async delete(ids: string[]): Promise<void> {
    // remove by id
  },
}

const memory = fileVectorMemory({ path: '', store: myStore }) // path is unused when a custom store is provided

RedisClientAdapter for library portability

If you already have a Redis client (e.g., ioredis), wrap it with RedisClientAdapter instead of letting the library create its own connection.

import type { RedisClientAdapter } from '@agentskit/memory'
import { redisChatMemory } from '@agentskit/memory'
import IORedis from 'ioredis'

const ioredis = new IORedis(process.env.REDIS_URL!)

const clientAdapter: RedisClientAdapter = {
  get: (key) => ioredis.get(key),
  set: (key, value) => ioredis.set(key, value).then(() => undefined),
  del: (key) => ioredis.del(Array.isArray(key) ? key : [key]).then(() => undefined),
  keys: (pattern) => ioredis.keys(pattern),
  disconnect: () => ioredis.quit().then(() => undefined),
  call: (cmd, ...args) => ioredis.call(cmd, ...args.map(String)),
}

const memory = redisChatMemory({
  url: '', // ignored when client is provided
  client: clientAdapter,
  conversationId: 'session-1',
})

Lazy imports pattern

All backends load their drivers with a dynamic import() or require() on first use. This means you only pay the cost of better-sqlite3, redis, or vectra when that backend is actually instantiated — not at module load time.
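The pattern is roughly the following (a generic sketch, not the library's actual loader; the lazy helper and driver names are illustrative):

```typescript
// Generic lazy-import sketch: the factory returns immediately, the
// expensive import runs on first use, and the resulting module promise
// is cached so every later call reuses it.
function lazy<T>(importer: () => Promise<T>): () => Promise<T> {
  let cached: Promise<T> | undefined
  return () => (cached ??= importer())
}

// Usage shape inside a backend:
//   const getSqlite = lazy(() => import('better-sqlite3'))
//   ...later, on the first load()/save(): const sqlite = await getSqlite()
```

Caching the promise (rather than the resolved module) also means concurrent first calls share one in-flight import instead of triggering it twice.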

Troubleshooting

Issue                                    What to check
better-sqlite3 install fails             Native addon; use Node LTS and matching architecture, or switch to Redis.
Redis vector errors                      Ensure RediSearch / vector module; dimensions matches embedder output.
Empty search results                     Threshold too high; wrong embedding model between ingest and query.
Multiple users seeing same history       Set distinct conversationId per user/session for chat memory.

See also

Start here · Packages · TypeDoc (@agentskit/memory) · Adapters · RAG · useChat · @agentskit/core