Adapters
`@agentskit/adapters` normalizes every supported AI provider into a single streaming interface. Swap providers by changing one line; the rest of your app stays the same.
When to use
- You need a drop-in `AdapterFactory` for `useChat`, `createRuntime`, or `createChatController`.
- You need embedders for `@agentskit/rag` or vector memory.

If you only use a hosted route (e.g. a Vercel AI SDK handler), `vercelAI` (below) may be enough on its own, without the other provider packages.
Install
```bash
npm install @agentskit/adapters
```
Peer: `@agentskit/core` (pulled in by UI/runtime packages).
Public surface (summary)
| Category | Exports |
|---|---|
| Chat adapters | `anthropic`, `openai`, `gemini`, `ollama`, `deepseek`, `grok`, `kimi`, `langchain`, `langgraph`, `vercelAI`, `generic`, `createAdapter` |
| Embedders | `openaiEmbedder`, `geminiEmbedder`, `ollamaEmbedder`, `deepseekEmbedder`, `grokEmbedder`, `kimiEmbedder`, `createOpenAICompatibleEmbedder` |
| Types | `CreateAdapterConfig`, `GenericAdapterConfig`, provider-specific `*Config`, embedder configs |
Built-in Providers
Anthropic
```ts
import { anthropic } from '@agentskit/adapters'

const adapter = anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY!,
  model: 'claude-sonnet-4-6',
  maxTokens: 4096, // optional, default 4096
  baseUrl: 'https://api.anthropic.com', // optional
})
```
OpenAI
```ts
import { openai } from '@agentskit/adapters'

const adapter = openai({
  apiKey: process.env.OPENAI_API_KEY!,
  model: 'gpt-4o',
  baseUrl: 'https://api.openai.com', // optional
})
```
Gemini
```ts
import { gemini } from '@agentskit/adapters'

const adapter = gemini({
  apiKey: process.env.GOOGLE_API_KEY!,
  model: 'gemini-2.0-flash',
})
```
Ollama (local)
```ts
import { ollama } from '@agentskit/adapters'

const adapter = ollama({
  model: 'llama3.2',
  baseUrl: 'http://localhost:11434', // optional
})
```
DeepSeek
```ts
import { deepseek } from '@agentskit/adapters'

const adapter = deepseek({ apiKey: process.env.DEEPSEEK_API_KEY!, model: 'deepseek-chat' })
```
Grok
```ts
import { grok } from '@agentskit/adapters'

const adapter = grok({ apiKey: process.env.XAI_API_KEY!, model: 'grok-3' })
```
Kimi
```ts
import { kimi } from '@agentskit/adapters'

const adapter = kimi({ apiKey: process.env.KIMI_API_KEY!, model: 'moonshot-v1-8k' })
```
LangChain / LangGraph
```ts
import { langchain, langgraph } from '@agentskit/adapters'
import { ChatOpenAI } from '@langchain/openai'

// Wrap any LangChain runnable
const adapter = langchain({
  runnable: new ChatOpenAI({ model: 'gpt-4o' }),
  mode: 'stream', // or 'events' for streamEvents()
})

// LangGraph: uses streamEvents under the hood
const graphAdapter = langgraph({ graph: myCompiledGraph }) // your compiled graph
```
Vercel AI SDK
```ts
import { vercelAI } from '@agentskit/adapters'

// Points at a Next.js / Vercel AI route handler
const adapter = vercelAI({
  api: '/api/chat',
  headers: { 'X-Custom-Header': 'value' }, // optional
})
```
One-line provider swap
```ts
// Before
const adapter = anthropic({ apiKey, model: 'claude-sonnet-4-6' })

// After — nothing else changes
const adapter = openai({ apiKey, model: 'gpt-4o' })
```
Custom Adapter with createAdapter
Use `createAdapter` when you need a provider not listed above. Provide a `send` function that returns a `Response` or `ReadableStream`, and a `parse` generator that yields `StreamChunk` values.
```ts
import { createAdapter } from '@agentskit/adapters'
import type { AdapterRequest, StreamChunk } from '@agentskit/core'

const adapter = createAdapter({
  send: async (request: AdapterRequest) => {
    return fetch('https://my-llm.example.com/v1/chat', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ messages: request.messages }),
    })
  },
  parse: async function* (stream: ReadableStream<Uint8Array>): AsyncIterableIterator<StreamChunk> {
    const reader = stream.getReader()
    const decoder = new TextDecoder()
    while (true) {
      const { done, value } = await reader.read()
      if (done) break
      // stream: true keeps multi-byte characters intact across chunk boundaries
      yield { type: 'text', content: decoder.decode(value, { stream: true }) }
    }
    yield { type: 'done' }
  },
  abort: () => { /* optional cancel logic */ },
})
```
For the simplest case (a stream that emits raw text), use `generic` instead:
```ts
import { generic } from '@agentskit/adapters'

const adapter = generic({
  send: async (request) => {
    const res = await fetch('/api/my-llm', {
      method: 'POST',
      body: JSON.stringify({ messages: request.messages }),
    })
    return res.body!
  },
})
```
Embedder Functions
Embedders return an `EmbedFn` (an async `(text: string) => number[]`) used by `@agentskit/rag` and `@agentskit/memory`.
```ts
import {
  openaiEmbedder,
  geminiEmbedder,
  ollamaEmbedder,
  deepseekEmbedder,
  grokEmbedder,
  kimiEmbedder,
  createOpenAICompatibleEmbedder,
} from '@agentskit/adapters'

// OpenAI (default model: text-embedding-3-small)
const openaiEmbed = openaiEmbedder({ apiKey: process.env.OPENAI_API_KEY! })

// Gemini
const geminiEmbed = geminiEmbedder({ apiKey: process.env.GOOGLE_API_KEY!, model: 'text-embedding-004' })

// Ollama (local)
const ollamaEmbed = ollamaEmbedder({ model: 'nomic-embed-text' })

// Any OpenAI-compatible endpoint (Cohere, Voyage, etc.)
const cohereEmbed = createOpenAICompatibleEmbedder({
  apiKey: process.env.COHERE_API_KEY!,
  model: 'embed-english-v3.0',
  baseUrl: 'https://api.cohere.com',
})
```
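Every embedder is just an async function, so you can sanity-check it directly. A minimal sketch (the vector length depends on the model):

```ts
const vector = await openaiEmbed('What is retrieval-augmented generation?')
console.log(vector.length) // e.g. 1536 for text-embedding-3-small
```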
Pass any embedder directly to `createRAG`; see RAG.
`createAdapter` pitfalls
- Your `parse` generator must eventually yield `{ type: 'done' }` (and tool chunks if the provider streams tool calls), or consumers will hang in `streaming`.
- `abort` should cancel the underlying HTTP request or reader so that `stop()` in the UI works; see the sketch after this list.
- Reuse the `AdapterRequest` shape: models expect OpenAI-style `messages` plus tool definitions when tools are enabled.
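A minimal way to satisfy the `abort` contract is to route `fetch` through an `AbortController`. A sketch based on the `createAdapter` example above (the module-level `controller` variable is our own addition, not part of the API):

```ts
import { createAdapter } from '@agentskit/adapters'
import type { AdapterRequest, StreamChunk } from '@agentskit/core'

let controller: AbortController | null = null

const adapter = createAdapter({
  send: async (request: AdapterRequest) => {
    controller = new AbortController()
    return fetch('https://my-llm.example.com/v1/chat', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ messages: request.messages }),
      signal: controller.signal, // ties the request lifetime to abort()
    })
  },
  parse: async function* (stream: ReadableStream<Uint8Array>): AsyncIterableIterator<StreamChunk> {
    const reader = stream.getReader()
    const decoder = new TextDecoder()
    while (true) {
      const { done, value } = await reader.read()
      if (done) break
      yield { type: 'text', content: decoder.decode(value, { stream: true }) }
    }
    yield { type: 'done' }
  },
  // Cancelling the controller rejects the pending fetch/read, so stop() resolves promptly
  abort: () => controller?.abort(),
})
```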
Troubleshooting
| Issue | What to check |
|---|---|
| 401 / 403 | API key env vars and `baseUrl` for self-hosted gateways. |
| Empty stream | `parse` not decoding SSE or NDJSON; compare with `generic` + a known-good route. |
| Tool JSON errors | Provider-specific tool schema limits; trim description length or simplify the schema. |
| Embedder dimension mismatch | Vector index dimensions must match the model (e.g. 1536 for many OpenAI embeddings). |
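If the empty-stream row applies and your provider speaks SSE, `parse` has to split on event boundaries and decode the `data:` payloads rather than yield raw bytes. A hedged sketch (the `data: [DONE]` sentinel and the `delta` field are assumptions about your provider's wire format, not part of `@agentskit/adapters`):

```ts
import type { StreamChunk } from '@agentskit/core'

async function* parseSSE(stream: ReadableStream<Uint8Array>): AsyncIterableIterator<StreamChunk> {
  const reader = stream.getReader()
  const decoder = new TextDecoder()
  let buffer = ''
  while (true) {
    const { done, value } = await reader.read()
    if (done) break
    buffer += decoder.decode(value, { stream: true })
    const events = buffer.split('\n\n') // SSE events end with a blank line
    buffer = events.pop() ?? '' // keep the trailing partial event in the buffer
    for (const event of events) {
      for (const line of event.split('\n')) {
        if (!line.startsWith('data:')) continue
        const data = line.slice(5).trim()
        if (data === '[DONE]') continue // sentinel used by some providers
        const payload = JSON.parse(data)
        yield { type: 'text', content: payload.delta ?? '' } // field name is an assumption
      }
    }
  }
  yield { type: 'done' }
}
```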
See also
Start here · Packages · TypeDoc (@agentskit/adapters) · Memory · RAG · useChat · @agentskit/core