Interface: ParacosmClientOptions
Defined in: apps/paracosm/src/runtime/client.ts:37
Options passed to createParacosmClient. Every field is optional; any field left unset falls back to its environment variable.
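A minimal sketch of the option shape, assuming the field list documented below. The type here is a local stand-in for ParacosmClientOptions, not the real import; the union members of LlmProvider and CostPreset are inferred from the env-var values mentioned in this page.

```typescript
// Local stand-in types, inferred from the docs on this page (assumptions).
type LlmProvider = 'openai' | 'anthropic';
type CostPreset = 'quality' | 'economy';

interface ParacosmClientOptionsSketch {
  provider?: LlmProvider;
  compilerProvider?: LlmProvider;
  compilerModel?: string;
  costPreset?: CostPreset;
  models?: Partial<{
    commander: string;
    departments: string;
    judge: string;
    director: string;
    agentReactions: string;
  }>;
}

// Example: configure one provider, an economy preset, and one per-role pin.
const options: ParacosmClientOptionsSketch = {
  provider: 'anthropic',
  costPreset: 'economy',
  models: { departments: 'gpt-5.4' }, // other roles keep their preset values
};
```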
Properties
compilerModel?
optional compilerModel: string
Defined in: apps/paracosm/src/runtime/client.ts:72
Model to use for compile-time LLM calls. Env fallback:
PARACOSM_COMPILER_MODEL. If omitted, the compiler picks a
provider default (gpt-5.4-mini on OpenAI, claude-sonnet-4-6 on
Anthropic).
compilerProvider?
optional compilerProvider: LlmProvider
Defined in: apps/paracosm/src/runtime/client.ts:65
Provider to use for compile-time LLM calls in client.compileScenario.
Defaults to provider when unset so most users only configure one
provider. Env fallback: PARACOSM_COMPILER_PROVIDER.
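A hedged sketch of this cross-field default: the compiler provider falls back through the env var to the main provider setting. The function name is illustrative, and the ordering of the env fallback relative to the provider default is an assumption, not confirmed by the source.

```typescript
type LlmProvider = 'openai' | 'anthropic';

// Assumed precedence: explicit compilerProvider option, then
// PARACOSM_COMPILER_PROVIDER, then the main provider setting.
function resolveCompilerProvider(
  compilerProvider: LlmProvider | undefined,
  env: Record<string, string | undefined>,
  provider: LlmProvider,
): LlmProvider {
  return (
    compilerProvider ??
    (env['PARACOSM_COMPILER_PROVIDER'] as LlmProvider | undefined) ??
    provider
  );
}
```

This is why most users only configure one provider: leaving compilerProvider unset reuses the main one.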
costPreset?
optional costPreset: CostPreset
Defined in: apps/paracosm/src/runtime/client.ts:48
Default cost preset. Env fallback: PARACOSM_COST_PRESET=quality
or PARACOSM_COST_PRESET=economy.
models?
optional models: Partial&lt;SimulationModelConfig&gt;
Defined in: apps/paracosm/src/runtime/client.ts:59
Per-role model pins. Env fallbacks:
PARACOSM_MODEL_COMMANDER, PARACOSM_MODEL_DEPARTMENTS,
PARACOSM_MODEL_JUDGE, PARACOSM_MODEL_DIRECTOR,
PARACOSM_MODEL_AGENT_REACTIONS.
Merged at the per-role level, not as a whole object: setting
models: { departments: 'gpt-5.4' } pins only departments;
commander, director, judge, and agentReactions still flow from
the preset as before.
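The per-role merge described above behaves like a field-by-field object spread. A minimal sketch, assuming the role names from the env-var list; the function name and preset values are made up for illustration.

```typescript
// Role names taken from the env-var list above.
type SimulationModelConfig = {
  commander: string;
  departments: string;
  judge: string;
  director: string;
  agentReactions: string;
};

// Illustrative merge: user pins override individual roles, leaving the
// rest of the preset untouched (per-role, not whole-object replacement).
function mergeModels(
  preset: SimulationModelConfig,
  pins: Partial<SimulationModelConfig> = {},
): SimulationModelConfig {
  return { ...preset, ...pins };
}

// Hypothetical preset values for demonstration.
const preset: SimulationModelConfig = {
  commander: 'preset-commander',
  departments: 'preset-departments',
  judge: 'preset-judge',
  director: 'preset-director',
  agentReactions: 'preset-agent-reactions',
};

const merged = mergeModels(preset, { departments: 'gpt-5.4' });
// merged.departments is pinned; all other roles keep their preset values.
```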
provider?
optional provider: LlmProvider
Defined in: apps/paracosm/src/runtime/client.ts:43
Default provider for runSimulation / runBatch. Env fallback:
PARACOSM_PROVIDER=openai or PARACOSM_PROVIDER=anthropic. A
per-call opts.provider still wins.
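The precedence above can be sketched as a resolver: per-call opts.provider beats the client option, which beats the env var. The function name is illustrative, and returning undefined when nothing is set is an assumption; the docs do not say what happens in that case.

```typescript
type LlmProvider = 'openai' | 'anthropic';

// Illustrative precedence: per-call value > client option > PARACOSM_PROVIDER.
function resolveProvider(
  perCall: LlmProvider | undefined,
  clientOption: LlmProvider | undefined,
  env: Record<string, string | undefined>,
): LlmProvider | undefined {
  return perCall ?? clientOption ?? (env['PARACOSM_PROVIDER'] as LlmProvider | undefined);
}
```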