Ax: Build Reliable AI Apps in TypeScript
Stop wrestling with prompts. Start shipping AI features.
Ax brings DSPy's approach to TypeScript: describe what you want, and let the framework handle the rest. Production-ready, type-safe, works with all major LLMs.
The Problem
Building with LLMs is painful. You write prompts, test them, and they break. You switch providers, and everything needs rewriting. You add validation, error handling, and retries, and suddenly you're maintaining infrastructure instead of shipping features.
The Solution
Define what goes in and what comes out. Ax handles the rest.
import { ai, ax } from "@ax-llm/ax";
const llm = ai({ name: "openai", apiKey: process.env.OPENAI_APIKEY });
const classifier = ax(
'review:string -> sentiment:class "positive, negative, neutral"',
);
const result = await classifier.forward(llm, {
review: "This product is amazing!",
});
console.log(result.sentiment); // "positive"
No prompt engineering. No trial and error. Works with GPT-4, Claude, Gemini, or any LLM.
Why Ax
Write once, run anywhere. Switch between OpenAI, Anthropic, Google, or 15+ providers with one line. No rewrites.
Ship faster. Stop tweaking prompts. Define inputs and outputs. The framework generates optimal prompts automatically.
Production-ready. Built-in streaming, validation, error handling, observability. Used in production handling millions of requests.
Gets smarter. Train your programs with examples. Watch accuracy improve automatically. No ML expertise needed.
Examples
Extract structured data
const extractor = ax(`
customerEmail:string, currentDate:datetime ->
priority:class "high, normal, low",
sentiment:class "positive, negative, neutral",
ticketNumber?:number,
nextSteps:string[],
estimatedResponseTime:string
`);
const result = await extractor.forward(llm, {
customerEmail: "Order #12345 hasn't arrived. Need this resolved immediately!",
currentDate: new Date(),
});
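For reference, the signature above resolves to a result type along these lines (a sketch of the inferred shape; `class` fields become string unions and `?` fields become optional):

```typescript
// Approximate result shape inferred from the signature above.
type ExtractionResult = {
  priority: "high" | "normal" | "low";
  sentiment: "positive" | "negative" | "neutral";
  ticketNumber?: number;
  nextSteps: string[];
  estimatedResponseTime: string;
};

// Example value matching the shape (illustrative, not real model output).
const sample: ExtractionResult = {
  priority: "high",
  sentiment: "negative",
  ticketNumber: 12345,
  nextSteps: ["Locate order #12345", "Send a status update to the customer"],
  estimatedResponseTime: "24 hours",
};
```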
Complex nested objects
import { f, ax } from "@ax-llm/ax";
const productExtractor = f()
.input("productPage", f.string())
.output("product", f.object({
name: f.string(),
price: f.number(),
specs: f.object({
dimensions: f.object({
width: f.number(),
height: f.number()
}),
materials: f.array(f.string())
}),
reviews: f.array(f.object({
rating: f.number(),
comment: f.string()
}))
}))
.build();
const generator = ax(productExtractor);
const result = await generator.forward(llm, { productPage: "..." });
// Full TypeScript inference
console.log(result.product.specs.dimensions.width);
console.log(result.product.reviews[0].comment);
Validation and constraints
const userRegistration = f()
.input("userData", f.string())
.output("user", f.object({
username: f.string().min(3).max(20),
email: f.string().email(),
age: f.number().min(18).max(120),
password: f.string().min(8).regex("^(?=.*[A-Za-z])(?=.*\\d)", "Must contain letter and digit"),
bio: f.string().max(500).optional(),
website: f.string().url().optional(),
}))
.build();
Available constraints: .min(n), .max(n), .email(), .url(), .date(), .datetime(), .regex(pattern, description), .optional()
Validation runs on both inputs and outputs. On a validation error, the framework retries automatically with corrections.
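As an illustration of the regex constraint above, here is the same password rule checked in plain TypeScript (a standalone sketch, not the library's validator):

```typescript
// The pattern from the example: lookaheads require at least one
// letter and at least one digit; .min(8) enforces the length.
const letterAndDigit = /^(?=.*[A-Za-z])(?=.*\d)/;

function meetsPasswordRule(pw: string): boolean {
  return pw.length >= 8 && letterAndDigit.test(pw);
}
```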
Agents with tools (ReAct pattern)
const assistant = ax(
"question:string -> answer:string",
{
functions: [
{ name: "getCurrentWeather", func: weatherAPI },
{ name: "searchNews", func: newsAPI },
],
},
);
const result = await assistant.forward(llm, {
question: "What's the weather in Tokyo and any news about it?",
});
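The `weatherAPI` referenced above is just an async function; here is a hypothetical stub (a real tool would call an actual weather service), showing the shape Ax passes through: the model's chosen arguments go in, and the returned string is fed back into the ReAct loop:

```typescript
// Hypothetical tool implementation: canned data stands in for a
// real weather service call.
async function weatherAPI(args: { city: string }): Promise<string> {
  const canned: Record<string, string> = { Tokyo: "22°C, clear skies" };
  return canned[args.city] ?? `no weather data for ${args.city}`;
}
```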
Multi-modal (images, audio)
const analyzer = ax(`
image:image, question:string ->
description:string,
mainColors:string[],
category:class "electronics, clothing, food, other",
estimatedPrice:string
`);
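Image inputs are passed as base64 data with a MIME type. A small helper for loading a local file (a sketch under that assumption; check the provider docs for accepted formats):

```typescript
import { readFileSync } from "node:fs";

// Encode a local file for an image:image input field.
function toImageInput(path: string, mimeType: string) {
  return { mimeType, data: readFileSync(path).toString("base64") };
}

// Usage with the analyzer above (hypothetical file path):
// const result = await analyzer.forward(llm, {
//   image: toImageInput("./product.jpg", "image/jpeg"),
//   question: "What is this product?",
// });
```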
Install
npm install @ax-llm/ax
Additional packages:
# AWS Bedrock provider
npm install @ax-llm/ax-ai-aws-bedrock
# Vercel AI SDK v5 integration
npm install @ax-llm/ax-ai-sdk-provider
# Tools: MCP stdio transport, JS interpreter
npm install @ax-llm/ax-tools
Features
- 15+ LLM Providers: OpenAI, Anthropic, Google, Mistral, Ollama, and more
- Type-safe: Full TypeScript support with auto-completion
- Streaming: Real-time responses with validation
- Multi-modal: Images, audio, and text in the same signature
- Optimization: Automatic prompt tuning with MiPRO, ACE, and GEPA
- Observability: OpenTelemetry tracing built in
- Workflows: Compose complex pipelines with AxFlow
- RAG: Multi-hop retrieval with quality loops
- Agents: Tools and multi-agent collaboration
- Zero dependencies: Lightweight, fast, reliable
Documentation
Get Started
- Quick Start Guide: Set up in 5 minutes
- Examples Guide: Comprehensive examples
- DSPy Concepts: Understanding the approach
- Signatures Guide: Type-safe signature design
Deep Dives
- AI Providers: All providers, AWS Bedrock, Vercel AI SDK
- AxFlow Workflows: Build complex AI systems
- Optimization (MiPRO, ACE, GEPA): Make programs smarter
- Advanced RAG: Production search and retrieval
Run Examples
OPENAI_APIKEY=your-key npm run tsx ./src/examples/[example-name].ts
Core examples: extract.ts, react.ts, agent.ts, streaming1.ts, multi-modal.ts
Production patterns: customer-support.ts, food-search.ts, ace-train-inference.ts, ax-flow-enhanced-demo.ts
Community
- Twitter: Updates
- Discord: Help and discussion
- GitHub: Star the project
- DeepWiki: AI-powered docs
Production Ready
- Battle-tested in production
- Stable minor versions
- Comprehensive test coverage
- OpenTelemetry built-in
- TypeScript first
Contributors
- Author: @dosco
- GEPA and ACE optimizers: @monotykamary
License
Apache 2.0