Documentation

Build LLM-powered agents
with production-ready TypeScript

DSPy for TypeScript. Working with LLMs is hard: they don't always do what you want. DSPy makes it easier to build reliable systems with LLMs. Define your inputs and outputs (a signature), and an efficient prompt is generated and used for you. Compose signatures to build complex LLM-powered systems and workflows.

15+ LLM Providers
End-to-end Streaming
Auto Prompt Tuning

DSPy in TypeScript: The Future of Building with LLMs

The Problem: LLMs Are Powerful but Unpredictable

Working with LLMs today feels like herding cats. You write prompts, tweak them endlessly, and still get inconsistent results. When you switch models or providers, everything breaks. Sound familiar?

What if you could just describe what you want, and let the system figure out the best way to get it?

Enter DSPy: A Revolutionary Approach

DSPy (Demonstrate–Search–Predict) changes everything. Instead of writing prompts, you write signatures – simple declarations of what goes in and what comes out. The framework handles the rest.

Think of it like this: you describe the shape of the task, not the wording of the prompt.

That’s it. The system generates an optimized prompt, validates outputs against your declared types, and even improves itself over time.

See It in Action (30 Seconds)

import { ai, ax } from "@ax-llm/ax";

// 1. Pick your LLM
const llm = ai({ name: "openai", apiKey: process.env.OPENAI_APIKEY! });

// 2. Declare what you want
const classifier = ax('reviewText:string -> sentiment:class "positive, negative, neutral"');

// 3. Just use it
const result = await classifier.forward(llm, { 
  reviewText: "This product exceeded my expectations!" 
});
console.log(result.sentiment); // "positive"

That’s a complete, production-ready sentiment analyzer. No prompt engineering. No trial and error.

Why DSPy Will Change How You Build

1. 🎯 Write Once, Run Anywhere

Your code works with OpenAI, Google, Anthropic, or any LLM. Switch providers with one line. No rewrites.
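
A minimal sketch of the switch. Provider configs are plain objects, so only the config changes; note that only "openai" appears elsewhere in this guide, and the "anthropic" name string is an assumption to verify against the Ax provider list.

```typescript
// Standalone sketch: swapping the backend is a one-line config change.
// The "anthropic" provider name string is an assumption.
declare const process: { env: Record<string, string | undefined> };

type LLMConfig = { name: string; apiKey: string };

const providers: Record<string, LLMConfig> = {
  openai: { name: "openai", apiKey: process.env.OPENAI_APIKEY ?? "" },
  anthropic: { name: "anthropic", apiKey: process.env.ANTHROPIC_APIKEY ?? "" },
};

// const llm = ai(providers.openai);    // today
// const llm = ai(providers.anthropic); // tomorrow: same signatures, same code
```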

2. ⚡ Stream Everything

Get results as they are generated. Validate on the fly. Fail fast. Ship faster.

const gen = ax("question:string -> answer:string");
// Stream responses in real-time
await gen.forward(llm, { question: "Hello" }, { stream: true });
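
The fail-fast idea can be sketched without any Ax-specific API: validate partial output as chunks arrive instead of waiting for the full response. The async generator below stands in for a model stream and is purely illustrative.

```typescript
// A stand-in for a token stream; not the Ax streaming API.
async function* modelStream(): AsyncGenerator<string> {
  for (const chunk of ["The answer ", "is 42."]) {
    yield chunk;
  }
}

// Accumulate chunks, aborting the moment a constraint is violated.
async function collectWithGuard(
  stream: AsyncIterable<string>,
  maxLen: number
): Promise<string> {
  let out = "";
  for await (const chunk of stream) {
    out += chunk;
    if (out.length > maxLen) {
      throw new Error("output exceeded limit: failing fast mid-stream");
    }
  }
  return out;
}
```

With a generous limit the full text comes back; with a tight one the call rejects after the first chunk, before more tokens are paid for.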

3. 🛡️ Built-in Quality Control

Add assertions that run during generation. Catch issues before they reach users.

gen.addAssert(
  ({ answer }) => answer.length > 10,
  "Answer must be detailed"
);

4. 🚀 Automatic Optimization

Train your programs with examples. Watch them improve automatically.

const optimizer = new AxMiPRO({ studentAI: llm, examples: trainingData });
const improved = await optimizer.compile(classifier, trainingData, metric);
// Your classifier just got measurably more accurate!
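
A hedged sketch of what `trainingData` and `metric` above might look like. The metric shape, a function from `{ prediction, example }` to a score, is an assumption; check the Ax optimizer docs for the exact signature your version expects.

```typescript
// Labeled examples for the sentiment classifier defined earlier.
type Labeled = { reviewText: string; sentiment: string };

const trainingData: Labeled[] = [
  { reviewText: "Absolutely loved it!", sentiment: "positive" },
  { reviewText: "Broke after one day.", sentiment: "negative" },
  { reviewText: "It arrived on time.", sentiment: "neutral" },
];

// Score 1 when the predicted label matches the labeled example, else 0.
const metric = ({
  prediction,
  example,
}: {
  prediction: { sentiment: string };
  example: Labeled;
}) => (prediction.sentiment === example.sentiment ? 1 : 0);
```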

5. 🎨 Multi-Modal Native

Images, audio, text – all in the same signature. It just works.

const vision = ax("photo:image, question:string -> description:string");

Real-World Power: Build Complex Systems Simply

Smart Customer Support in 5 Lines

const supportBot = ax(`
  customerMessage:string -> 
  category:class "billing, technical, general",
  priority:class "high, medium, low",
  suggestedResponse:string
`);

// That's it. You have intelligent ticket routing and response generation.
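
Because the outputs are structured fields rather than free text, downstream routing is plain typed code. The field names below mirror the signature above; the routing rules themselves are illustrative, not part of Ax.

```typescript
// Shape of one supportBot result, mirroring the signature's output fields.
type Ticket = {
  category: "billing" | "technical" | "general";
  priority: "high" | "medium" | "low";
  suggestedResponse: string;
};

// Illustrative routing: escalate urgent tickets, then route by category.
function route(ticket: Ticket): string {
  if (ticket.priority === "high") return "on-call";
  if (ticket.category === "billing") return "billing-queue";
  return "general-queue";
}

// const ticket = (await supportBot.forward(llm, { customerMessage })) as Ticket;
```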

Multi-Step Reasoning? Trivial.

const researcher = ax(`
  question:string -> 
  searchQueries:string[] "3-5 queries",
  analysis:string,
  confidence:number "0-1"
`);
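
Because `searchQueries` comes back as a real string array and `confidence` as a number, orchestrating the next step is ordinary typed code. A sketch assuming the field names above; the search function and the 0.5 cutoff are illustrative.

```typescript
// Shape of one researcher result, mirroring the signature's output fields.
type ResearchPlan = {
  searchQueries: string[];
  analysis: string;
  confidence: number;
};

// Fan the generated queries out to any search backend you plug in.
async function runPlan(
  plan: ResearchPlan,
  search: (q: string) => Promise<string[]>
): Promise<string[]> {
  if (plan.confidence < 0.5) return []; // skip low-confidence plans
  const resultsPerQuery = await Promise.all(plan.searchQueries.map(search));
  return resultsPerQuery.flat();
}
```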

Beyond Simple Generation: Production Features

Complete Observability

Trace every call end to end: Ax supports OpenTelemetry-based tracing, so generations, token usage, and latency show up in the tooling you already use.

Enterprise-Ready Workflows

AxFlow lets you compose signatures into complex pipelines with automatic parallelization:

const flow = new AxFlow()
  .node("analyzer", "text:string -> sentiment:string")
  .node("summarizer", "text:string -> summary:string")
  .execute("analyzer", (state) => ({ text: state.text }))
  .execute("summarizer", (state) => ({ text: state.text }));
// Both run in parallel automatically!

Advanced RAG Out of the Box

const rag = axRAG(vectorDB, {
  maxHops: 3,           // Multi-hop retrieval
  qualityTarget: 0.85,  // Self-healing quality loops
});
// Enterprise RAG in 3 lines

Start Now: From Zero to Production

Install (30 seconds)

npm install @ax-llm/ax

Your First Intelligent App (2 minutes)

import { ai, ax } from "@ax-llm/ax";

const llm = ai({ name: "openai", apiKey: process.env.OPENAI_APIKEY! });

// Create any AI capability with a signature
const translator = ax(`
  text:string, 
  targetLanguage:string -> 
  translation:string,
  confidence:number "0-1"
`);

const result = await translator.forward(llm, {
  text: "Hello world",
  targetLanguage: "French"
});
// { translation: "Bonjour le monde", confidence: 0.95 }
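
Since `confidence` is a typed number field, gating on it needs no parsing. A sketch assuming the result shape from the signature above; the 0.8 threshold is illustrative.

```typescript
// Shape of one translator result, mirroring the signature's output fields.
type TranslationResult = { translation: string; confidence: number };

// Accept only sufficiently confident translations.
const accept = (r: TranslationResult, threshold = 0.8): boolean =>
  r.confidence >= threshold;
```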

The Bottom Line

Stop fighting with prompts. Start building with signatures.

DSPy isn’t just another LLM library. It’s a fundamental shift in how we build AI systems.

Ready to Build the Future?


Remember: Every prompt you write today is technical debt. Every signature you write is an asset that gets better over time.

Welcome to the future of building with LLMs. Welcome to DSPy with Ax.