
# Build LLM-powered agents with production-ready TypeScript

DSPy for TypeScript. Working with LLMs is complex; they don't always do what you want. DSPy makes it easier to build reliable software with LLMs: define your inputs and outputs (a signature), and an efficient prompt is auto-generated and used. Connect signatures together to build complex systems and workflows on top of LLMs.

- 15+ LLM providers
- End-to-end streaming
- Auto prompt tuning

## Getting Started with Ax AI Providers and Models

This guide helps beginners get productive with Ax quickly: pick a provider, choose a model, and send a request. You’ll also learn how to define model presets and common options.

### 1. Install and set up

```sh
npm i @ax-llm/ax
```

Set your API keys as environment variables:
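For example, in your shell (the `GOOGLE_APIKEY` name matches the snippets in this guide; use whichever variables your providers expect):

```sh
export GOOGLE_APIKEY="your-google-api-key"
```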

### 2. Create an AI instance

Use the `ai()` factory with a provider name and your API key.

```typescript
import { ai } from "@ax-llm/ax";

const llm = ai({
  name: "google-gemini",
  apiKey: process.env.GOOGLE_APIKEY!,
  config: {
    model: "gemini-2.0-flash",
  },
});
```

Supported providers include: `openai`, `anthropic`, `google-gemini`, `mistral`, `groq`, `cohere`, `together`, `deepseek`, `ollama`, `huggingface`, `openrouter`, `azure-openai`, `reka`, and `x-grok`.
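The same factory shape works across providers; only the `name`, API key, and model change. A minimal sketch (the `OPENAI_APIKEY` variable and `gpt-4o-mini` model are illustrative assumptions; use whatever your account provides):

```typescript
import { ai } from "@ax-llm/ax";

// Same call as above, different provider
const openai = ai({
  name: "openai",
  apiKey: process.env.OPENAI_APIKEY!, // assumed env var name
  config: { model: "gpt-4o-mini" },   // assumed model id
});
```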

### 3. Define model presets

Define a `models` list with user-friendly keys. Each item describes a preset and can include provider-specific settings. When you use a key in `model`, Ax maps it to the right backend model and merges the preset config.

```typescript
import { ai } from "@ax-llm/ax";

const gemini = ai({
  name: "google-gemini",
  apiKey: process.env.GOOGLE_APIKEY!,
  config: { model: "simple" },
  models: [
    {
      key: "tiny",
      model: "gemini-2.0-flash-lite",
      description: "Fast + cheap",
      // Provider config merged automatically
      config: { maxTokens: 1024, temperature: 0.3 },
    },
    {
      key: "simple",
      model: "gemini-2.0-flash",
      description: "Balanced general-purpose",
      config: { temperature: 0.6 },
    },
  ],
});

// Use a preset by key
await gemini.chat({
  model: "tiny",
  chatPrompt: [{ role: "user", content: "Summarize this:" }],
});
```

What gets merged when you pick a key:

- `model` is swapped for the preset's backend model (e.g. `"tiny"` → `"gemini-2.0-flash-lite"`)
- The preset's `config` (like `maxTokens` and `temperature`) is merged over the instance `config`
- Per-request options still take precedence, as shown below

You can still override per-request:

```typescript
await gemini.chat(
  { model: "simple", chatPrompt: [{ role: "user", content: "Hi" }] },
  { stream: false, thinkingTokenBudget: "medium" },
);
```

### 4. Send your first chat

```typescript
const res = await gemini.chat({
  chatPrompt: [
    { role: "system", content: "You are concise." },
    { role: "user", content: "Write a haiku about the ocean." },
  ],
});

console.log(res.results[0]?.content);
```

### 5. Common options

Options go in the second argument to `chat`. The ones used in this guide:

- `stream`: set to `false` to get a single complete response
- `thinkingTokenBudget`: how much internal reasoning to allow (e.g. `"medium"`, `"high"`)
- `showThoughts`: include the model's thoughts in the response

Example with overrides:

```typescript
await gemini.chat(
  { chatPrompt: [{ role: "user", content: "Plan a weekend trip" }] },
  { stream: false, thinkingTokenBudget: "high", showThoughts: true },
);
```
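The examples above pass `stream: false` to get one complete response; with `stream: true` the call yields incremental output instead. A minimal consumption sketch, assuming the streaming result is a web `ReadableStream` of partial responses whose chunks mirror the `results[0].content` shape (the exact chunk type may vary by version):

```typescript
const stream = await gemini.chat(
  { chatPrompt: [{ role: "user", content: "Tell me a story" }] },
  { stream: true },
);

// Assumption: a web ReadableStream of partial chat responses
const reader = (stream as ReadableStream<any>).getReader();
for (;;) {
  const { done, value } = await reader.read();
  if (done) break;
  process.stdout.write(value?.results?.[0]?.content ?? "");
}
```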

### 6. Embeddings (if supported)

```typescript
const { embeddings } = await gemini.embed({
  texts: ["hello", "world"],
  embedModel: "text-embedding-005",
});
```

### 7. Tips

- Prefer presets: they give friendly names and consistent tuning across your app
- Start with fast/cheap models for iteration; switch keys later without code changes
- Use `stream: false` in tests for simpler assertions (see the sketch below)
- In the browser, set `corsProxy` if needed
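For the testing tip above, `stream: false` lets a test await one complete response and assert on the final text (a sketch; swap in your test framework's assertions):

```typescript
// With stream: false, chat resolves to a single complete response
const res = await gemini.chat(
  { chatPrompt: [{ role: "user", content: "Reply with OK" }] },
  { stream: false },
);

// Assert on the final text instead of collecting stream chunks
if (!res.results[0]?.content?.length) {
  throw new Error("expected a non-empty response");
}
```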

For more examples, see the examples directory and provider-specific docs.