# AxAgentConfig
Defined in: https://github.com/ax-llm/ax/blob/71ea5064d766efdc031d375243a8e525911833e7/src/ax/prompts/agent.ts#L1301
Configuration options for creating an agent using the agent() factory function.
## Extends

- AxAgentOptions
## Type Parameters

| Type Parameter |
|---|
| _IN extends AxGenIn |
| _OUT extends AxGenOut |
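The two type parameters constrain the agent's input and output shapes. A minimal sketch of how such a constraint works, using local stand-in aliases for AxGenIn/AxGenOut (the real library types are assumed here to be record-like; `MyConfig`, `ResearchIn`, and `ResearchOut` are hypothetical names for illustration):

```typescript
// Local stand-ins for the library's AxGenIn/AxGenOut (assumed record-like).
type AxGenIn = Record<string, unknown>;
type AxGenOut = Record<string, unknown>;

// A config type constrained the same way AxAgentConfig<_IN, _OUT> is.
interface MyConfig<IN extends AxGenIn, OUT extends AxGenOut> {
  agentIdentity?: { name: string; description: string };
  // A result-picker-style hook typed against the output shape.
  resultPicker?: (results: OUT[]) => OUT;
}

interface ResearchIn extends AxGenIn { question: string }
interface ResearchOut extends AxGenOut { answer: string }

const cfg: MyConfig<ResearchIn, ResearchOut> = {
  agentIdentity: { name: "researcher", description: "Answers research questions" },
  resultPicker: (rs) => rs[0], // picks the first candidate output
};

console.log(cfg.agentIdentity?.name); // prints "researcher"
```

Because the hook is typed against `OUT`, a picker returning an object without an `answer` field would fail to compile.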
## Properties
| Property | Type | Description | Overrides | Inherited from |
|---|---|---|---|---|
| abortSignal? | AbortSignal | AbortSignal for cancelling in-flight requests. | - | AxAgentOptions.abortSignal |
| actorCallback? | (result: Record<string, unknown>) => void \| Promise<void> | Called after each Actor turn with the full actor result. | - | AxAgentOptions.actorCallback |
| actorFields? | string[] | Output field names the Actor should produce (in addition to javascriptCode). | - | AxAgentOptions.actorFields |
| actorOptions? | Partial<Omit<AxProgramForwardOptions<string>, "functions"> & object> | Default forward options for the Actor sub-program. | - | AxAgentOptions.actorOptions |
| agentIdentity? | object | - | - | - |
| agentIdentity.description | string | - | - | - |
| agentIdentity.name | string | - | - | - |
| agents? | AxAnyAgentic[] | - | - | - |
| ai? | AxAIService<unknown, unknown, string> | - | AxAgentOptions.ai | - |
| asserts? | AxAssertion<any>[] | - | - | AxAgentOptions.asserts |
| cachingFunction? | (key: string, value?: AxGenOut) => undefined \| AxGenOut \| Promise<undefined \| AxGenOut> | - | - | AxAgentOptions.cachingFunction |
| compressLog? | boolean | If true, the Actor must return actionDescription, and action logs will store short descriptions. | - | AxAgentOptions.compressLog |
| contextCache? | AxContextCacheOptions | Context caching options for large prompt prefixes. When enabled, these prefixes are cached for cost savings and lower latency on subsequent requests. Currently supported by: Google Gemini/Vertex AI. | - | AxAgentOptions.contextCache |
| contextFields | string[] | Input fields holding long context (will be removed from the LLM prompt). | - | AxAgentOptions.contextFields |
| corsProxy? | string | CORS proxy URL for browser environments. When running in a browser, API calls may be blocked by CORS; specify a proxy URL to route requests through. Example: 'https://cors-anywhere.herokuapp.com/' | - | AxAgentOptions.corsProxy |
| customLabels? | Record<string, string> | Custom labels for OpenTelemetry metrics, merged with axGlobals.customLabels (service-level options override global settings). Example: { environment: 'production', feature: 'search' } | - | AxAgentOptions.customLabels |
| debug? | boolean | Enable debug logging for troubleshooting. When true, logs detailed information about prompts, responses, and the generation pipeline. Useful for understanding AI behavior. | - | AxAgentOptions.debug |
| debugHideSystemPrompt? | boolean | Hide the system prompt in debug output (for cleaner logs). | - | AxAgentOptions.debugHideSystemPrompt |
| disableMemoryCleanup? | boolean | - | - | AxAgentOptions.disableMemoryCleanup |
| examplesInSystem? | boolean | Render examples/demos in the system prompt instead of as message pairs. false (default): examples rendered as alternating user/assistant messages; true: examples embedded in the system prompt (legacy behavior). Message-pair rendering generally produces better results. | - | AxAgentOptions.examplesInSystem |
| excludeContentFromTrace? | boolean | Exclude message content from OpenTelemetry traces (for privacy). | - | AxAgentOptions.excludeContentFromTrace |
| fastFail? | boolean | - | - | AxAgentOptions.fastFail |
| fetch? | { (input: URL \| RequestInfo, init?: RequestInit): Promise<Response>; (input: string \| URL \| Request, init?: RequestInit): Promise<Response>; } | Custom fetch implementation (useful for proxies or custom HTTP handling). | - | AxAgentOptions.fetch |
| functionCall? | "auto" \| "none" \| "required" \| { function: { name: string; }; type: "function"; } | - | - | AxAgentOptions.functionCall |
| functionCallMode? | "auto" \| "native" \| "prompt" | How to handle function/tool calling. 'auto' (default): let the provider decide the best approach. 'native': use the provider's native function calling API; fails if the model doesn't support it. 'prompt': simulate function calling via prompt engineering; works with any model but may be less reliable. Default: 'auto' | - | AxAgentOptions.functionCallMode |
| functionResultFormatter? | (result: unknown) => string | - | - | AxAgentOptions.functionResultFormatter |
| functions? | AxInputFunctionType | - | - | - |
| logger? | AxLoggerFunction | Custom logger function for debug output. | - | AxAgentOptions.logger |
| maxBatchedLlmQueryConcurrency? | number | Maximum parallel llmQuery calls in batched mode (default: 8). | - | AxAgentOptions.maxBatchedLlmQueryConcurrency |
| maxLlmCalls? | number | Cap on recursive sub-LM calls (default: 50). | - | AxAgentOptions.maxLlmCalls |
| maxRetries? | number | - | - | AxAgentOptions.maxRetries |
| maxRuntimeChars? | number | Maximum characters for RLM runtime payloads (default: 5000). | - | AxAgentOptions.maxRuntimeChars |
| maxSteps? | number | - | - | AxAgentOptions.maxSteps |
| maxTurns? | number | Maximum Actor turns before forcing the Responder (default: 10). | - | AxAgentOptions.maxTurns |
| mem? | AxAIMemory | - | - | AxAgentOptions.mem |
| meter? | Meter | OpenTelemetry meter for metrics collection. | - | AxAgentOptions.meter |
| mode? | "simple" \| "advanced" | Sub-query execution mode (default: 'simple'). | - | AxAgentOptions.mode |
| model? | string | - | - | AxAgentOptions.model |
| modelConfig? | AxModelConfig | - | - | AxAgentOptions.modelConfig |
| promptTemplate? | typeof AxPromptTemplate | - | - | AxAgentOptions.promptTemplate |
| rateLimiter? | AxRateLimiterFunction | Custom rate limiter function to control request throughput. | - | AxAgentOptions.rateLimiter |
| recursionOptions? | AxAgentRecursionOptions | Default forward options for recursive llmQuery sub-agent calls. | - | AxAgentOptions.recursionOptions |
| responderOptions? | Partial<Omit<AxProgramForwardOptions<string>, "functions"> & object> | Default forward options for the Responder sub-program. | - | AxAgentOptions.responderOptions |
| resultPicker? | AxResultPickerFunction<AxGenOut> | - | - | AxAgentOptions.resultPicker |
| retry? | Partial<RetryConfig> | Retry configuration for failed requests. Controls automatic retry behavior for transient errors (rate limits, timeouts, server errors). | - | AxAgentOptions.retry |
| runtime? | AxCodeRuntime | Code runtime for the REPL loop (default: AxJSRuntime). | - | AxAgentOptions.runtime |
| sampleCount? | number | - | - | AxAgentOptions.sampleCount |
| selfTuning? | boolean \| AxSelfTuningConfig | - | - | AxAgentOptions.selfTuning |
| sessionId? | string | Session identifier for conversation tracking and memory isolation. | - | AxAgentOptions.sessionId |
| showThoughts? | boolean | Include the model's thinking/reasoning in the output. When true and thinkingTokenBudget is set, the model's internal reasoning is included in the response. Useful for debugging and understanding AI behavior. Default: false | - | AxAgentOptions.showThoughts |
| stepHooks? | AxStepHooks | - | - | AxAgentOptions.stepHooks |
| stepIndex? | number | Internal: current step index for multi-step operations. | - | AxAgentOptions.stepIndex |
| stopFunction? | string \| string[] | - | - | AxAgentOptions.stopFunction |
| stream? | boolean | Enable streaming responses. When true, the AI returns responses as a stream of chunks, enabling real-time display of generated text. | - | AxAgentOptions.stream |
| streamingAsserts? | AxStreamingAssertion[] | - | - | AxAgentOptions.streamingAsserts |
| strictMode? | boolean | - | - | AxAgentOptions.strictMode |
| structuredOutputMode? | "function" \| "auto" \| "native" | - | - | AxAgentOptions.structuredOutputMode |
| thinkingTokenBudget? | "high" \| "low" \| "minimal" \| "medium" \| "highest" \| "none" | Token budget for extended thinking (chain-of-thought reasoning). Extended thinking allows models to "think through" complex problems before responding; higher budgets allow deeper reasoning but cost more. Approximate token allocations: 'none': disabled (default); 'minimal': ~1,000 tokens (~750 words of thinking); 'low': ~4,000 tokens; 'medium': ~10,000 tokens; 'high': ~20,000 tokens; 'highest': ~32,000+ tokens (provider maximum). Provider support: Anthropic Claude (full support with claude-sonnet-4 and above); OpenAI (o1/o3 models, uses reasoning_effort); Google (Gemini 2.0 Flash Thinking); DeepSeek (DeepSeek-R1). Example: await gen.forward(ai, values, { thinkingTokenBudget: 'medium' }) | - | AxAgentOptions.thinkingTokenBudget |
| thoughtFieldName? | string | - | - | AxAgentOptions.thoughtFieldName |
| timeout? | number | Request timeout in milliseconds. Default: 300000 (5 minutes) | - | AxAgentOptions.timeout |
| traceContext? | Context | OpenTelemetry trace context for distributed tracing. | - | AxAgentOptions.traceContext |
| traceLabel? | string | - | - | AxAgentOptions.traceLabel |
| tracer? | Tracer | OpenTelemetry tracer for distributed tracing. | - | AxAgentOptions.tracer |
| useExpensiveModel? | "yes" | Hint to use a more capable (and expensive) model for complex tasks. Some providers offer tiered models; setting this to 'yes' requests the higher-capability tier when available. | - | AxAgentOptions.useExpensiveModel |
| verbose? | boolean | Enable low-level HTTP request/response logging. More verbose than debug; shows raw HTTP traffic including headers. Useful for debugging API issues. | - | AxAgentOptions.verbose |
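As an illustrative sketch only, a config literal shaped like this table might look as follows. Field names come from the Properties table above; the chosen values (other than the documented defaults noted in comments) are assumptions, and the surrounding agent() call is omitted because its exact signature is not reproduced here:

```typescript
// Hypothetical AxAgentConfig-shaped literal; values are illustrative only.
const config = {
  agentIdentity: {
    name: "support-triage",                 // assumed example identity
    description: "Routes incoming support tickets",
  },
  debug: true,                              // log prompts/responses for troubleshooting
  stream: false,                            // return the full response at once
  maxTurns: 10,                             // documented default: 10 Actor turns
  thinkingTokenBudget: "medium" as const,   // ~10,000 tokens of reasoning per the table
  timeout: 300_000,                         // documented default: 5 minutes
};

console.log(config.agentIdentity.name); // prints "support-triage"
```

Such an object would then be passed to the agent() factory mentioned at the top of this page.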