Unique identifier of the target model (e.g., "gpt-4o", "ollama/llama3").
Identifier of the provider hosting the model (e.g., "openai", "ollama").
Maximum context length in tokens supported by the model.
Optional. Optimal context length for best performance/cost-efficiency, if different from max. Prompts may be targeted to this length.
A list of functional capabilities of the model (e.g., 'tool_use', 'vision_input', 'json_mode').
The type of prompt format the model expects. 'openai_chat': Standard OpenAI chat completion format (array of messages). 'anthropic_messages': Anthropic Messages API format (messages array + optional system prompt). 'generic_completion': A single string prompt for older completion-style models. 'custom': A custom format handled by a specific template.
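The prompt-format variants above can be sketched as a small TypeScript helper. This is an illustrative sketch, not the source's implementation; the `renderPrompt` helper and message shapes are assumptions based on the formats named above.

```typescript
// Hypothetical sketch of how each promptFormat changes the payload shape.
type PromptFormat = "openai_chat" | "anthropic_messages" | "generic_completion" | "custom";

interface ChatMessage { role: "system" | "user" | "assistant"; content: string; }

// Render a (system, user) prompt pair into the wire shape each format expects.
function renderPrompt(format: PromptFormat, system: string, user: string): unknown {
  switch (format) {
    case "openai_chat":
      // OpenAI chat completions: system prompt is the first message in the array.
      return [
        { role: "system", content: system },
        { role: "user", content: user },
      ] as ChatMessage[];
    case "anthropic_messages":
      // Anthropic Messages API: system prompt is a separate top-level field.
      return { system, messages: [{ role: "user", content: user }] };
    case "generic_completion":
      // Older completion-style models take a single flat string.
      return `${system}\n\n${user}`;
    case "custom":
      // Custom formats are delegated to a model-specific template.
      throw new Error("custom format requires a model-specific template");
  }
}
```

The same logical prompt thus serializes three different ways depending on the target model's declared format.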
Configuration for tool/function calling support.
Format expected by the model for tool definitions and calls.
Optional. Maximum number of tools that can be defined or called in a single interaction.
Optional. Vision input support configuration.
Optional. Audio input support configuration (more likely relevant for pre-processing than direct model input).
Optional. Model-specific hints for optimizing prompt construction or token budgeting.
ModelTargetInfo: Information about the target AI model that affects prompt construction. This guides template selection, token limits, and capability-specific formatting.
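The fields described above can be assembled into one TypeScript interface. This is a plausible reconstruction: the field names (`modelId`, `optimalContextTokens`, `toolSupport`, etc.) are assumptions inferred from the descriptions, not confirmed names from the source.

```typescript
// Hypothetical shape for ModelTargetInfo; field names are assumptions.
type PromptFormat = "openai_chat" | "anthropic_messages" | "generic_completion" | "custom";

interface ModelTargetInfo {
  modelId: string;                // e.g. "gpt-4o", "ollama/llama3"
  provider: string;               // e.g. "openai", "ollama"
  maxContextTokens: number;       // hard context window supported by the model
  optimalContextTokens?: number;  // target length for cost/performance, if different from max
  capabilities: string[];         // e.g. ["tool_use", "vision_input", "json_mode"]
  promptFormat: PromptFormat;     // which prompt serialization the model expects
  toolSupport?: {
    format: string;               // format expected for tool definitions and calls
    maxTools?: number;            // cap on tools per interaction
  };
  visionSupport?: Record<string, unknown>;     // vision input configuration
  audioSupport?: Record<string, unknown>;      // audio input (likely pre-processing)
  optimizationHints?: Record<string, unknown>; // prompt construction / token budgeting hints
}

// A sample target showing the shape in use (values are illustrative).
const target: ModelTargetInfo = {
  modelId: "gpt-4o",
  provider: "openai",
  maxContextTokens: 128000,
  capabilities: ["tool_use", "vision_input", "json_mode"],
  promptFormat: "openai_chat",
  toolSupport: { format: "openai_function", maxTools: 128 },
};
```

Optional fields are marked with `?` so a minimal target only needs identity, context size, capabilities, and prompt format.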