DSPy for TypeScript

Declare signatures, not prompts. Ax compiles type-safe inputs and outputs into optimized LLM calls — then chains them into agents, flows, and self-improving pipelines.

$ npm install @ax-llm/ax
15+ LLM Providers
End-to-end Streaming
Auto Prompt Tuning
Auto-installs Claude & Codex skills

Why teams choose Ax

Built for production from day one

Battle-tested

Production-Ready

Streaming, validation with auto-retry, OpenTelemetry observability, structured error handling.

Streaming
Validation
Telemetry
Auto-retry
0

Dependencies

Only 2 optional peer deps (OpenTelemetry + dayjs). Your bundle stays lean.

3 Runtimes

Universal Runtime

Works in Node.js, Deno, and browsers. Web Workers for sandboxed execution.

Node.js · Deno · Browser
Multi-Objective

GEPA Optimizer

Returns Pareto frontiers — balance accuracy vs speed vs cost. Pick the optimal trade-off.

Deep Context

Recursive Language Model

Long-context analysis with persistent sessions and iterative refinement. Keeps long context out of the root prompt.

Autonomous

AxAgent

ReAct loops, tool calling, child agents, context policies, dynamic function discovery.

Simple & Powerful

Define what you want, not how to prompt for it

classify.ts
import { ax, ai } from '@ax-llm/ax'

// Create an AI instance
const llm = ai({ name: 'openai' })

// Define a classifier with a signature
const classify = ax(
  'review:string -> sentiment:class "positive, negative, neutral"'
)

// Run it
const result = await classify.forward(llm, {
  review: 'This product is amazing!'
})
output
{
  sentiment: 'positive'
}
Type-safe, validated, auto-retried on failure

Two ways to define signatures

Quick string syntax for simple tasks. Fluent builder for complex structured outputs with validation.

analyze.ts
import { f, ax, ai } from '@ax-llm/ax'

const sig = f()
  .input('document', f.string().min(10))
  .output('summary', f.string().max(500))
  .output('entities', f.object({
    name: f.string().min(1),
    type: f.class(['person', 'org', 'place']),
    confidence: f.number().min(0).max(1),
  }).array())
  .output('contact', f.object({
    email: f.email(),
    website: f.url().optional(),
  }))
  .output('tags', f.string().array())
  .build()

const gen = ax(sig)
const result = await gen.forward(llm, {
  document: contractText
})
output
Typed Output
{
  summary: 'Service agreement between...',
  entities: [
    { name: 'Acme Corp', type: 'org', confidence: 0.95 },
    { name: 'Jane Smith', type: 'person', confidence: 0.88 },
  ],
  contact: { email: 'jane@acme.com', website: 'https://acme.com' },
  tags: ['contract', 'legal', 'NDA']
}
Nested objects & typed arrays
Email & URL format validated
Auto-retry on validation failure

Autonomous agents, built in

AxAgent combines ReAct reasoning with a recursive language model — a persistent JavaScript sandbox that keeps long context out of the root prompt.

agent.ts
// Define an autonomous research agent
const researcher = agent({
  name: 'researcher',
  description: 'Deep research agent',
  signature: 'query -> report',
  functions: [search, scrape, summarize],
  agents: [factChecker, writer],
  contextPolicy: 'adaptive',
})

// Agent runs autonomously with RLM
const result = await researcher.forward(llm, {
  query: 'Compare React vs Vue in 2025'
})

ReAct Loops & RLM

Multi-turn autonomous reasoning with a persistent JavaScript sandbox. State survives across turns — long context stays out of the root prompt.

Multi-turn · Persistent state · Sandboxed JS

Hierarchical Agents

Delegate subtasks to child agents with shared state and namespaced functions. Discover tools at runtime — the agent picks what it needs.

Child agents · Shared state · Namespaces

Adaptive Context

Choose full, adaptive, or lean memory policies. Old context compresses into checkpoint summaries automatically, keeping prompts focused.

3 policies · Auto-compress · Checkpoints

Production-ready from day one

Extensive test coverage, full OpenTelemetry integration, cost tracking, and enterprise-grade error handling — built in, not bolted on.

1000+
Tests
40+
OTel Metrics
15+
LLM Providers
3
Runtimes

OpenTelemetry

Full distributed tracing with spans per LLM call, function invocation, and agent turn. Drop-in Jaeger, Prometheus, or cloud exporters.

Detailed Metrics

Token usage, latency histograms, error rates, context window utilization, and thinking budget tracking — all as OpenTelemetry metrics.

Streaming & Validation

End-to-end streaming with structured output validation. Auto-retries on schema failures with error correction built in.

Cost Tracking

Per-request cost estimation across all providers. Budget monitoring, optimization insights, and cost allocation labels.

Multi-Runtime

Same code runs in Node.js, Deno, and browsers. Web Workers for sandboxed execution — deploy anywhere.

Enterprise Ready

Rate limiting, configurable sampling, content redaction, error handling with hindsight evaluation, and custom metric creation.

What's in the box

Everything you need to build production AI applications

Declare capabilities, not prompts

Define your inputs and outputs with type-safe signatures. Ax generates the optimal prompt automatically.

Classification

Categorize text into predefined classes

'text:string -> category:class "spam, ham, promo"'

Extraction

Pull structured data from unstructured text

'document:string -> names:string[], dates:date[], amounts:number[]'

Question Answering

Answer questions given context

'context:string, question:string -> answer:string'

Multi-Modal

Process images and audio alongside text

'photo:image, question:string -> answer:string'

Validation

Auto-validate outputs with built-in constraints

f.string().email()
f.number().min(0).max(100)

Streaming

Get results as they generate in real-time

await gen.forward(llm, input, { stream: true })

Translation

Translate between any languages

'text:string, targetLanguage:string -> translation:string'

Complex Workflows

Multiple typed outputs from a single call

'doc:string -> summary:string, keyPoints:string[], sentiment:class "pos, neg"'

One interface, every LLM

Switch providers with a single line. Your signatures work everywhere.

OpenAI
ai({ name: 'openai' })
Anthropic
ai({ name: 'anthropic' })
Google Gemini
ai({ name: 'google-gemini' })
Ollama
ai({ name: 'ollama' })
Cohere
ai({ name: 'cohere' })
DeepSeek
ai({ name: 'deepseek' })
Groq
ai({ name: 'groq' })
Together
ai({ name: 'together' })
Mistral
ai({ name: 'mistral' })
HuggingFace
ai({ name: 'huggingface' })
Reka
ai({ name: 'reka' })
AWS Bedrock
new AxAIBedrock({ region: 'us-east-2' })

Rich type system

Type-safe signatures with automatic validation and retry on failure.

string     name:string        Text
number     score:number       Numeric
boolean    valid:boolean      True/false
class      cat:class "a,b"    Enum
string[]   tags:string[]      Array
json       data:json          Object
image      photo:image        Image
audio      clip:audio         Audio
date       due:date           Date
?          notes?:string      Optional
Also Checkout

Connect AI to your database

GraphJin compiles GraphQL to efficient SQL and doubles as an MCP server — giving Claude Desktop and Ax agents direct, safe access to your data.

ax-agent.ts
import {
  AxAI, AxAgent, AxMCPClient,
  AxMCPHTTPSSETransport
} from '@ax-llm/ax';

// Connect to GraphJin's MCP server
const transport = new AxMCPHTTPSSETransport(
  'http://localhost:8080/api/v1/mcp'
);
const mcp = new AxMCPClient(transport);
await mcp.init();

// Use GraphJin tools in an Ax agent
const agent = new AxAgent({
  name: 'data-analyst',
  description: 'Queries databases',
  signature: 'question:string -> answer:string',
  functions: mcp.toFunction(),
});

Connect GraphJin as an MCP tool inside Ax agents

Works with PostgreSQL, MySQL, SQLite, MongoDB, Oracle, MSSQL, and Snowflake