Interface: ParacosmClientOptions

Defined in: apps/paracosm/src/runtime/client.ts:37

Options passed to createParacosmClient. Every field is optional; any field left unset falls back to the corresponding environment variable.
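A usage sketch showing all fields together (the values and the import path are illustrative, not defaults):

```typescript
import { createParacosmClient } from "paracosm"; // hypothetical import path

const client = createParacosmClient({
  provider: "openai",                 // or env: PARACOSM_PROVIDER
  costPreset: "quality",              // or env: PARACOSM_COST_PRESET
  models: { departments: "gpt-5.4" }, // or env: PARACOSM_MODEL_DEPARTMENTS
  compilerProvider: "anthropic",      // or env: PARACOSM_COMPILER_PROVIDER
  compilerModel: "claude-sonnet-4-6", // or env: PARACOSM_COMPILER_MODEL
});
```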

Properties

compilerModel?

optional compilerModel: string

Defined in: apps/paracosm/src/runtime/client.ts:72

Model to use for compile-time LLM calls. Env fallback: PARACOSM_COMPILER_MODEL. If omitted the compiler picks a provider-default (gpt-5.4-mini on OpenAI, claude-sonnet-4-6 on Anthropic).


compilerProvider?

optional compilerProvider: LlmProvider

Defined in: apps/paracosm/src/runtime/client.ts:65

Provider to use for compile-time LLM calls in client.compileScenario. Defaults to provider when unset so most users only configure one provider. Env fallback: PARACOSM_COMPILER_PROVIDER.
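One plausible resolution order for compilerProvider, sketched below under two assumptions not stated in this doc: an explicitly passed option wins over the env var, and the env var wins over the cross-field fallback to provider. resolveCompilerProvider is a hypothetical helper, not library code.

```typescript
type LlmProvider = "openai" | "anthropic";

interface ProviderOptions {
  provider?: LlmProvider;
  compilerProvider?: LlmProvider;
}

// Hypothetical sketch: explicit option > PARACOSM_COMPILER_PROVIDER > provider.
function resolveCompilerProvider(
  opts: ProviderOptions,
  env: Record<string, string | undefined> = {},
): LlmProvider | undefined {
  return (
    opts.compilerProvider ??
    (env.PARACOSM_COMPILER_PROVIDER as LlmProvider | undefined) ??
    opts.provider
  );
}
```

With only provider set, compile-time calls use the same provider, so most users configure just one.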


costPreset?

optional costPreset: CostPreset

Defined in: apps/paracosm/src/runtime/client.ts:48

Default cost preset. Env fallback: PARACOSM_COST_PRESET=quality or =economy.


models?

optional models: Partial<SimulationModelConfig>

Defined in: apps/paracosm/src/runtime/client.ts:59

Per-role model pins. Env fallbacks: PARACOSM_MODEL_COMMANDER, PARACOSM_MODEL_DEPARTMENTS, PARACOSM_MODEL_JUDGE, PARACOSM_MODEL_DIRECTOR, PARACOSM_MODEL_AGENT_REACTIONS. Merged at the per-role level, not as a whole object: setting models: { departments: 'gpt-5.4' } pins departments but leaves commander, director, judge, and agentReactions flowing from the preset as before.
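The per-role merge described above behaves like a shallow spread over the preset's role map. The sketch below is illustrative, not the library's code; the SimulationModelConfig shape and the preset model names are assumptions.

```typescript
// Assumed shape: one model id per simulation role.
type SimulationModelConfig = {
  commander: string;
  departments: string;
  judge: string;
  director: string;
  agentReactions: string;
};

// Placeholder preset values for illustration only.
const presetDefaults: SimulationModelConfig = {
  commander: "preset-commander",
  departments: "preset-departments",
  judge: "preset-judge",
  director: "preset-director",
  agentReactions: "preset-agent-reactions",
};

// Per-role merge: each explicitly pinned role wins; unpinned roles flow from the preset.
function mergeModels(
  preset: SimulationModelConfig,
  pins?: Partial<SimulationModelConfig>,
): SimulationModelConfig {
  return { ...preset, ...pins };
}

const merged = mergeModels(presetDefaults, { departments: "gpt-5.4" });
// merged.departments is pinned; merged.commander still comes from the preset.
```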


provider?

optional provider: LlmProvider

Defined in: apps/paracosm/src/runtime/client.ts:43

Default provider for runSimulation / runBatch. Env fallback: PARACOSM_PROVIDER=openai or =anthropic. Per-call opts.provider still wins.
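The precedence above (per-call opts.provider beats the client option, which beats PARACOSM_PROVIDER) can be sketched as a simple coalescing chain. resolveProvider is a hypothetical helper for illustration, not the library's implementation.

```typescript
type LlmProvider = "openai" | "anthropic";

// Hypothetical sketch: per-call override > client-level option > env var.
function resolveProvider(
  perCall: LlmProvider | undefined,
  clientOption: LlmProvider | undefined,
  env: Record<string, string | undefined> = {},
): LlmProvider | undefined {
  return (
    perCall ??
    clientOption ??
    (env.PARACOSM_PROVIDER as LlmProvider | undefined)
  );
}
```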