Interface: BuildLlmCallerOptions
Defined in: packages/agentos/src/orchestration/planning/buildLlmCaller.ts:26
Options for building an LLM caller function.
Provide at least one of provider or model (or both).
If neither is given, the provider and model are auto-detected from environment variables.
Properties
apiKey?
optional apiKey: string
Defined in: packages/agentos/src/orchestration/planning/buildLlmCaller.ts:32
API key override (not needed for CLI providers).
baseUrl?
optional baseUrl: string
Defined in: packages/agentos/src/orchestration/planning/buildLlmCaller.ts:34
Base URL override (e.g. for OpenRouter, Ollama).
maxTokens?
optional maxTokens: number
Defined in: packages/agentos/src/orchestration/planning/buildLlmCaller.ts:38
Max tokens for planning calls. Default: 4096.
model?
optional model: string
Defined in: packages/agentos/src/orchestration/planning/buildLlmCaller.ts:30
Model ID: 'gpt-4o', 'claude-opus-4-6', 'gemini-2.5-flash', etc.
provider?
optional provider: string
Defined in: packages/agentos/src/orchestration/planning/buildLlmCaller.ts:28
Provider ID: 'openai', 'anthropic', 'claude-code-cli', 'gemini-cli', etc.
temperature?
optional temperature: number
Defined in: packages/agentos/src/orchestration/planning/buildLlmCaller.ts:36
Temperature for planning calls. Default: 0.3.
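The options above can be combined as sketched below. The interface is restated inline so the snippet stands alone; the baseUrl value and the environment variable name are illustrative, not part of the documented API.

```typescript
// Restatement of the documented BuildLlmCallerOptions fields.
interface BuildLlmCallerOptions {
  provider?: string;    // e.g. 'openai', 'anthropic', 'claude-code-cli'
  model?: string;       // e.g. 'gpt-4o', 'gemini-2.5-flash'
  apiKey?: string;      // not needed for CLI providers
  baseUrl?: string;     // e.g. for OpenRouter, Ollama
  temperature?: number; // default 0.3
  maxTokens?: number;   // default 4096
}

// Minimal case: model only — the provider is inferred.
const fromModel: BuildLlmCallerOptions = {
  model: "gpt-4o",
};

// Fully specified case, with a local Ollama-style base URL (illustrative).
const fullySpecified: BuildLlmCallerOptions = {
  provider: "openai",
  model: "gpt-4o",
  baseUrl: "http://localhost:11434/v1",
  apiKey: process.env.OPENAI_API_KEY, // env var name is an assumption
  temperature: 0.3, // matches the documented default
  maxTokens: 4096,  // matches the documented default
};
```

Passing an empty object is also valid per the description above: with neither provider nor model set, auto-detection from environment variables applies.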