Connect to any LLM provider — and swap between them — without touching your app code.
```bash
npm install @agentskit/adapters
```
```ts
import { anthropic, openai, ollama } from '@agentskit/adapters'
import { createRuntime } from '@agentskit/runtime'

// Switch providers by swapping this one line
const adapter = anthropic({ apiKey: process.env.ANTHROPIC_API_KEY, model: 'claude-sonnet-4-6' })
// const adapter = openai({ apiKey: process.env.OPENAI_API_KEY, model: 'gpt-4o' })
// const adapter = ollama({ model: 'llama3.1' })

const runtime = createRuntime({ adapter })
const result = await runtime.run('Summarize the latest AI news')
console.log(result.content)
```
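Because every adapter satisfies the same interface, provider selection can live entirely in configuration. Here is a minimal sketch; the `LLM_PROVIDER` environment variable and the fallback choice are illustrative, not part of the library:

```ts
import { anthropic, openai, ollama } from '@agentskit/adapters'

// Pick a provider at startup from an environment variable.
// LLM_PROVIDER is a made-up variable name for this example.
function adapterFromEnv() {
  switch (process.env.LLM_PROVIDER) {
    case 'openai':
      return openai({ apiKey: process.env.OPENAI_API_KEY, model: 'gpt-4o' })
    case 'ollama':
      return ollama({ model: 'llama3.1' })
    default:
      return anthropic({ apiKey: process.env.ANTHROPIC_API_KEY, model: 'claude-sonnet-4-6' })
  }
}
```

The rest of the app only ever sees the returned adapter, so switching providers becomes a deploy-time configuration change rather than a code change.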
The same package also covers vector embeddings. Wire `openaiEmbedder`, `geminiEmbedder`, or `ollamaEmbedder` into `@agentskit/rag`:
```ts
import { openaiEmbedder } from '@agentskit/adapters'
import { createRAG } from '@agentskit/rag'
import { fileVectorMemory } from '@agentskit/memory'

const rag = createRAG({
  embed: openaiEmbedder({ apiKey: process.env.OPENAI_API_KEY! }),
  store: fileVectorMemory({ path: './vectors' }),
})
```
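The `embed` slot accepts anything matching `EmbedFn` from `@agentskit/core`, so a custom embedder can stand in for the built-ins. The `EmbedFn` shape sketched below (a batch function from texts to vectors) is an assumption for illustration, not the published type:

```ts
// Assumed shape of EmbedFn: a batch of texts in, one vector per text out.
type EmbedFn = (texts: string[]) => Promise<number[][]>

// A toy deterministic embedder, handy for offline tests: folds each
// text's character codes into an 8-dimensional normalized vector.
// Not semantically meaningful; swap in a real model for production.
const toyEmbedder: EmbedFn = async (texts) =>
  texts.map((text) => {
    const dims = 8
    const vec = new Array(dims).fill(0)
    for (let i = 0; i < text.length; i++) {
      vec[i % dims] += text.charCodeAt(i)
    }
    const norm = Math.hypot(...vec) || 1
    return vec.map((v) => v / norm)
  })
```

Assuming `EmbedFn` really has this shape, anything like `toyEmbedder` should slot into `createRAG({ embed, store })` alongside the built-in embedders.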
Pass the adapter to `@agentskit/runtime`, `@agentskit/react`, or `@agentskit/ink`; the adapter instance is the only provider-specific piece.

| Package | Role |
|---|---|
| `@agentskit/core` | `Adapter`, `EmbedFn`, shared types |
| `@agentskit/runtime` | Headless `createRuntime` |
| `@agentskit/rag` | `createRAG` + embedders |
| `@agentskit/memory` | Vector + chat memory backends |
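Since `@agentskit/core` exports the `Adapter` type, a provider that the package does not ship can in principle be written by hand. The sketch below is a guess at a minimal contract, for illustration only; check the actual `Adapter` type in `@agentskit/core` before relying on it:

```ts
// Hypothetical minimal adapter for testing. The field names ('name',
// 'complete') and the response shape are assumptions about the
// Adapter contract, not the published interface.
const echoAdapter = {
  name: 'echo',
  async complete(prompt: string) {
    // Echo the prompt back instead of calling a provider, which is
    // useful for offline tests of code built on top of the runtime.
    return { content: `echo: ${prompt}` }
  },
}
```

Passing it as `createRuntime({ adapter: echoAdapter })` would let TypeScript flag any mismatch with the real `Adapter` contract at the call site.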