AxModelConfig
type AxModelConfig = object;
Defined in: https://github.com/ax-llm/ax/blob/05ff5bd88d050f7ba85a3fcc6eb0ed2975ad7d51/src/ax/ai/types.ts#L75
Configuration options for AI model behavior.
These settings control how the model generates responses. They can be set as defaults when creating an AI instance, or overridden per-request.
Example
const config: AxModelConfig = {
maxTokens: 2000,
temperature: 0.7,
topP: 0.9
};
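A minimal sketch of the defaults-plus-override pattern, assuming the package is imported as @ax-llm/ax (the import path is an assumption, and the plain object spread below only illustrates the idea that per-request values take precedence; it is not necessarily how the library merges the two internally):
import type { AxModelConfig } from '@ax-llm/ax';

// Defaults you might attach to an AI instance at construction time.
const defaults: AxModelConfig = {
  maxTokens: 2000,
  temperature: 0.7
};

// Per-request override: later values win when the two are merged.
const perRequest: AxModelConfig = { temperature: 0 };

const effective: AxModelConfig = { ...defaults, ...perRequest };
// effective is { maxTokens: 2000, temperature: 0 }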
Properties
endSequences?
optional endSequences: string[];
Defined in: https://github.com/ax-llm/ax/blob/05ff5bd88d050f7ba85a3fcc6eb0ed2975ad7d51/src/ax/ai/types.ts#L171
Similar to stopSequences, but the sequence IS included in the output.
Example
['</answer>'] to include closing tag in output
frequencyPenalty?
optional frequencyPenalty: number;
Defined in: https://github.com/ax-llm/ax/blob/05ff5bd88d050f7ba85a3fcc6eb0ed2975ad7d51/src/ax/ai/types.ts#L154
Penalizes tokens based on how frequently they’ve appeared. Range: -2.0 to 2.0.
Unlike presencePenalty, this scales with frequency: tokens that appear many times get penalized more. Useful for preventing the model from repeating the same phrases verbatim.
Example
0.5 to discourage word/phrase repetition
maxTokens?
optional maxTokens: number;
Defined in: https://github.com/ax-llm/ax/blob/05ff5bd88d050f7ba85a3fcc6eb0ed2975ad7d51/src/ax/ai/types.ts#L89
Maximum number of tokens to generate in the response.
Token estimation guide:
- ~750 tokens ≈ 1 page of English text
- ~100 tokens ≈ 75 words
- ~4 characters ≈ 1 token (English)
Set higher for long-form content (articles, code), lower for concise responses (classifications, short answers).
Example
500 for short responses, 2000 for detailed explanations, 4000+ for long-form content
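To make the estimation guide concrete, here is a rough character-based heuristic (the ~4 characters per token figure above is an approximation for English text, not an exact tokenizer):
// Rough token estimate for English text: ~4 characters per token.
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

// A one-page (~3000 character) article comes out to roughly 750 tokens,
// so maxTokens: 1000 would leave some headroom for it.
const pageTokens = estimateTokens('x'.repeat(3000)); // ≈ 750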
n?
optional n: number;
Defined in: https://github.com/ax-llm/ax/blob/05ff5bd88d050f7ba85a3fcc6eb0ed2975ad7d51/src/ax/ai/types.ts#L189
Number of completions to generate for each prompt.
Generates multiple independent responses. Useful with result pickers to select the best response. Increases cost proportionally.
Example
3 to generate three alternatives and pick the best
presencePenalty?
optional presencePenalty: number;
Defined in: https://github.com/ax-llm/ax/blob/05ff5bd88d050f7ba85a3fcc6eb0ed2975ad7d51/src/ax/ai/types.ts#L142
Penalizes tokens that have already appeared in the output. Range: -2.0 to 2.0.
Positive values reduce repetition by penalizing tokens that have appeared at all, regardless of frequency. Useful for encouraging diverse vocabulary.
- 0 - No penalty (default)
- 0.5-1.0 - Mild penalty, reduces obvious repetition
- 1.5-2.0 - Strong penalty, may hurt coherence
Example
0.6 to reduce repetitive phrasing
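A small sketch contrasting presencePenalty with frequencyPenalty (the values are illustrative, not recommendations from the library; import path assumed as above):
import type { AxModelConfig } from '@ax-llm/ax';

// presencePenalty: flat penalty once a token has appeared at all —
// nudges the model toward new vocabulary.
const diverseVocabulary: AxModelConfig = { presencePenalty: 0.6 };

// frequencyPenalty: penalty grows with how often a token repeats —
// discourages echoing the same phrase verbatim.
const lessRepetition: AxModelConfig = { frequencyPenalty: 0.5 };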
stopSequences?
optional stopSequences: string[];
Defined in: https://github.com/ax-llm/ax/blob/05ff5bd88d050f7ba85a3fcc6eb0ed2975ad7d51/src/ax/ai/types.ts#L164
Sequences that will stop generation when encountered.
The model stops generating as soon as any stop sequence is produced. The stop sequence itself is NOT included in the output.
Example
['\n\n', 'END', '---'] to stop at double newlines or markers
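A sketch contrasting stopSequences with endSequences from earlier on this page (whether a given provider honors both fields is provider-specific; import path assumed as above):
import type { AxModelConfig } from '@ax-llm/ax';

const config: AxModelConfig = {
  // Generation halts at '</answer>' and the tag is kept in the output,
  // so downstream parsing still sees a well-formed closing tag.
  endSequences: ['</answer>'],
  // Generation halts at '---' and the marker is dropped from the output.
  stopSequences: ['---']
};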
stream?
optional stream: boolean;
Defined in: https://github.com/ax-llm/ax/blob/05ff5bd88d050f7ba85a3fcc6eb0ed2975ad7d51/src/ax/ai/types.ts#L179
Enable streaming responses for real-time output.
When true, the response is returned as a stream of chunks, allowing you to display partial results as they’re generated.
temperature?
optional temperature: number;
Defined in: https://github.com/ax-llm/ax/blob/05ff5bd88d050f7ba85a3fcc6eb0ed2975ad7d51/src/ax/ai/types.ts#L104
Controls randomness in generation. Range: 0 to 2.
Use case guide:
- 0 - Deterministic, always picks the most likely token. Best for factual Q&A, classification, and code generation where consistency matters.
- 0.3-0.5 - Low creativity. Good for structured outputs, summaries.
- 0.7 - Balanced (default for most models). Good for general conversation.
- 1.0 - High creativity. Good for brainstorming, creative writing.
- 1.5-2.0 - Very high randomness. Often produces incoherent output.
Default
Varies by provider, typically 0.7-1.0
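Illustrative configs for two ends of the guide above (the values are examples, not library defaults; import path assumed as above):
import type { AxModelConfig } from '@ax-llm/ax';

// Classification / extraction: favour consistency over variety.
const deterministic: AxModelConfig = { temperature: 0, maxTokens: 200 };

// Brainstorming / creative writing: allow more exploration.
const creative: AxModelConfig = { temperature: 1.0, maxTokens: 2000 };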
topK?
optional topK: number;
Defined in: https://github.com/ax-llm/ax/blob/05ff5bd88d050f7ba85a3fcc6eb0ed2975ad7d51/src/ax/ai/types.ts#L127
Only consider the top K most likely tokens at each step.
Lower values (e.g., 10-40) make output more focused. Not supported by all providers (OpenAI doesn't support it; Anthropic and Google do).
Example
40 for focused output, 100 for more variety
topP?
optional topP: number;
Defined in: https://github.com/ax-llm/ax/blob/05ff5bd88d050f7ba85a3fcc6eb0ed2975ad7d51/src/ax/ai/types.ts#L117
Nucleus sampling: only consider tokens with cumulative probability >= topP. Range: 0 to 1.
Lower values make output more focused and deterministic. Alternative to temperature for controlling randomness.
Recommendation: Adjust either temperature OR topP, not both.
Example
0.1 for focused output, 0.9 for diverse output
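Following the recommendation above to tune one knob at a time, a sketch of two alternative configs (either approach works on its own; combining both makes the sampling behavior harder to reason about; import path assumed as above):
import type { AxModelConfig } from '@ax-llm/ax';

// Option A: leave topP at its default and lower temperature for focus.
const viaTemperature: AxModelConfig = { temperature: 0.2 };

// Option B: leave temperature at its default and tighten the nucleus instead.
const viaTopP: AxModelConfig = { topP: 0.1 };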