Interface: LoopContext
Defined in: packages/agentos/src/orchestration/runtime/LoopController.ts:65
Execution context provided to the LoopController by the caller. Abstracts away the underlying LLM/GMI implementation so the loop logic remains provider-agnostic.
Properties
addToolResults()
addToolResults: (results) => void
Defined in: packages/agentos/src/orchestration/runtime/LoopController.ts:84
Feed tool results back into the conversation so the next generateStream
call has access to them. Typically appends tool messages to the message list.
Parameters
results
Returns
void
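One plausible implementation appends each result as a tool-role message on a shared message list, so the next generateStream call sees it. The ToolMessage and ToolResult shapes below are illustrative assumptions, not the real types from LoopController.ts.

```typescript
// Hypothetical message and result shapes; the actual types live in
// packages/agentos/src/orchestration/runtime/LoopController.ts.
interface ToolMessage {
  role: "tool";
  toolCallId: string;
  content: string;
}

interface ToolResult {
  toolCallId: string;
  success: boolean;
  output?: string;
  error?: string;
}

const messages: ToolMessage[] = [];

// Sketch of addToolResults: append each result as a tool message so the
// next inference pass has access to it.
function addToolResults(results: ToolResult[]): void {
  for (const r of results) {
    messages.push({
      role: "tool",
      toolCallId: r.toolCallId,
      content: r.success ? (r.output ?? "") : `Error: ${r.error}`,
    });
  }
}

addToolResults([{ toolCallId: "call_1", success: true, output: "42" }]);
```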
executeTool()
executeTool: (toolCall) => Promise<LoopToolCallResult>
Defined in: packages/agentos/src/orchestration/runtime/LoopController.ts:78
Execute a single tool call and return its result.
Implementations should never throw; instead they should return a result with
success: false and a populated error field.
Parameters
toolCall
Returns
Promise<LoopToolCallResult>
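A minimal sketch of the never-throw contract: every failure path, including an unknown tool name and a tool that throws, is converted into a failed LoopToolCallResult. The tool registry and field names here are assumptions for illustration.

```typescript
// Assumed shapes for illustration only.
interface LoopToolCall {
  name: string;
  arguments: unknown;
}
interface LoopToolCallResult {
  success: boolean;
  output?: string;
  error?: string;
}

type ToolFn = (args: unknown) => Promise<string>;

// Hypothetical tool registry.
const tools: Record<string, ToolFn> = {
  echo: async (args) => JSON.stringify(args),
};

async function executeTool(toolCall: LoopToolCall): Promise<LoopToolCallResult> {
  const fn = tools[toolCall.name];
  if (!fn) {
    // Unknown tool: report failure rather than throwing.
    return { success: false, error: `Unknown tool: ${toolCall.name}` };
  }
  try {
    return { success: true, output: await fn(toolCall.arguments) };
  } catch (err) {
    // Convert thrown errors into a failed result instead of propagating.
    return {
      success: false,
      error: err instanceof Error ? err.message : String(err),
    };
  }
}
```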
generateStream()
generateStream: () => AsyncGenerator<LoopChunk, LoopOutput, undefined>
Defined in: packages/agentos/src/orchestration/runtime/LoopController.ts:71
Async generator that streams chunks during a single LLM inference pass.
Must return a LoopOutput as the generator's return value (the value
attached to the final done: true result from .next()).
Returns
AsyncGenerator<LoopChunk, LoopOutput, undefined>