Workflow Execution Model
Overview
AgentOS workflows are task graphs structured as directed acyclic graphs (DAGs), where each task can depend on zero or more upstream tasks. The execution engine resolves the graph and schedules tasks in rounds:
- Tasks with no dependencies run in parallel automatically.
- Tasks with dependencies wait until all upstream tasks complete.
- Outputs from completed tasks are available to dependent tasks as context, enabling typed data flow through the graph.
This model allows complex multi-agent pipelines to be declared as simple dependency lists, with the engine handling scheduling, parallelism, error propagation, and output passing.
Task Definition
Each task in a workflow is defined with the WorkflowTaskDefinition interface:
interface WorkflowTaskDefinition {
/** Unique identifier for this task within the workflow */
id: string;
/** Human-readable task name */
name: string;
/** Description of what this task does */
description?: string;
/**
* IDs of tasks that must complete before this task can start.
* An empty array (or omitted) means the task has no dependencies
* and is eligible to run immediately.
*/
dependsOn?: string[];
/** Which agent/role executes this task */
executor: {
type: 'agent' | 'function' | 'human';
roleId?: string;
personaId?: string;
instructions: string;
};
/** Static or templated inputs for the executor; template expressions resolve at execution time */
inputs?: Record<string, unknown>;
/** JSON Schema for the task's expected input */
inputSchema?: Record<string, unknown>;
/** JSON Schema for the task's expected output */
outputSchema?: Record<string, unknown>;
/** Retry behavior on failure */
retryPolicy?: {
maxAttempts: number;
backoffMs: number;
backoffMultiplier?: number;
};
/** If true, failure of this task does not block downstream dependents */
skippable?: boolean;
/** Error callback */
onError?: (error: Error, context: TaskContext) => void;
/** Arbitrary key-value metadata */
metadata?: Record<string, unknown>;
}
The dependsOn array is the core mechanism that creates the DAG. Each entry references the id of another task in the same workflow.
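For instance, a strictly sequential chain is declared by having each task reference the previous task's `id` (executor fields elided for brevity):

```typescript
// A three-task chain: each task's dependsOn references the previous
// task's id, so each execution round contains exactly one task.
const chain = [
  { id: 'draft', dependsOn: [] },
  { id: 'review', dependsOn: ['draft'] },
  { id: 'publish', dependsOn: ['review'] },
];
```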
Execution Model
The engine resolves the DAG into execution rounds. In each round, every task whose dependencies have all completed becomes READY and executes concurrently with other READY tasks in the same round.
Scheduling rules
- Round-based scheduling — each round, the engine scans all PENDING tasks and promotes those whose dependencies are all COMPLETED to READY.
- Concurrent execution — multiple READY tasks in the same round execute concurrently (bounded by the configured concurrency limit).
- Output availability — a task moves to COMPLETED when its executor finishes. Its output is then stored and made available to downstream tasks.
- Cycle detection — before execution begins, a DFS-based cycle detector validates the graph. Cyclic graphs are rejected at definition time.
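The promotion rule can be sketched as a pure function over the dependency map and the set of completed task IDs (a minimal sketch; the types and function name here are illustrative, not the engine's actual internals):

```typescript
// Minimal sketch of round-based promotion: given each task's dependencies
// and the set of COMPLETED task IDs, return the IDs that become READY.
type DependencyMap = Record<string, string[]>;

function readyTasks(deps: DependencyMap, completed: Set<string>): string[] {
  return Object.entries(deps)
    .filter(([id]) => !completed.has(id)) // still PENDING
    .filter(([, upstream]) => upstream.every((d) => completed.has(d)))
    .map(([id]) => id);
}

// Diamond graph: B and C depend on A; D depends on both B and C.
const deps: DependencyMap = { A: [], B: ['A'], C: ['A'], D: ['B', 'C'] };
```

With an empty completed set, only `A` is promoted; once `A` completes, `B` and `C` become READY in the same round.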
Task lifecycle
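The lifecycle can be summarized as a small state machine. This is a sketch inferred from the states named elsewhere in this section (PENDING, READY, COMPLETED, SKIPPED, BLOCKED); the RUNNING and FAILED intermediate states are assumed names, not confirmed by this document:

```typescript
// Task lifecycle sketch inferred from the states used in this document.
// RUNNING and FAILED are assumed intermediate states, not confirmed names.
const TASK_TRANSITIONS: Record<string, string[]> = {
  PENDING: ['READY', 'BLOCKED'],    // promoted per round, or upstream failed
  READY: ['RUNNING'],               // picked up by the executor
  RUNNING: ['COMPLETED', 'FAILED'], // output validated, or retries exhausted
  FAILED: ['SKIPPED'],              // only when skippable: true
  COMPLETED: [],                    // terminal
  SKIPPED: [],                      // terminal
  BLOCKED: [],                      // terminal
};

function canTransition(from: string, to: string): boolean {
  return (TASK_TRANSITIONS[from] ?? []).includes(to);
}
```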
Output Passing Between Tasks
When a task completes, its output is stored on the WorkflowTaskInstance. Dependent tasks can access upstream outputs in two ways:
Direct context access
Inside a task executor, upstream results are available on the context object:
// Inside Task D's executor (depends on Task A and Task B)
const researchFindings = context.results['task-a'].output;
const marketData = context.results['task-b'].output;
Template syntax
In inputs fields, you can use template expressions that resolve at execution time:
{
id: 'strategy',
dependsOn: ['research', 'data'],
executor: {
type: 'agent',
roleId: 'strategist',
instructions: 'Synthesize findings into a strategy.',
},
inputs: {
findings: '{{results.research.output}}',
metrics: '{{results.data.output.metrics}}',
},
}
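The resolution of these expressions could be sketched as a simple path walk over the context object (a simplified sketch; the engine's actual escaping and error-handling rules are not specified here):

```typescript
// Simplified sketch of {{results.<taskId>.output...}} resolution: extract
// the dot-separated path and walk it through the context object.
function resolveTemplate(
  template: string,
  context: { results: Record<string, { output: unknown }> },
): unknown {
  const match = /^\{\{(.+?)\}\}$/.exec(template.trim());
  if (!match) return template; // not a template: pass through verbatim
  return match[1].split('.').reduce<any>((obj, key) => obj?.[key], context);
}
```

For example, `resolveTemplate('{{results.data.output.metrics}}', context)` walks `results` → `data` → `output` → `metrics`.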
Type safety
The inputSchema and outputSchema fields provide runtime validation. When an outputSchema is defined, the engine validates the task's output before marking it COMPLETED. When an inputSchema is defined, the resolved inputs are validated before the executor runs.
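The validation step can be illustrated with a toy checker covering only the schema subset used in this document's examples (`object` with typed `properties`, and `array`). This is a sketch for intuition only; the engine presumably uses a full JSON Schema validator:

```typescript
// Toy validator for the JSON Schema subset used in this document's
// examples. Not a full JSON Schema implementation.
function validatesAgainst(schema: Record<string, any>, value: unknown): boolean {
  if (schema.type === 'object') {
    if (typeof value !== 'object' || value === null || Array.isArray(value)) return false;
    // Per JSON Schema semantics, absent properties are valid unless required.
    return Object.entries(schema.properties ?? {}).every(([key, sub]) =>
      key in (value as any) ? validatesAgainst(sub as any, (value as any)[key]) : true,
    );
  }
  if (schema.type === 'array') return Array.isArray(value);
  return typeof value === schema.type; // 'string' | 'number' | 'boolean'
}
```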
Parallel vs Sequential — Decision Rules
The execution behavior is entirely determined by the dependsOn array:
| Scenario | dependsOn | Execution |
|---|---|---|
| Independent tasks | [] or omitted | Run in PARALLEL |
| Fan-out | All share same dependency | Run in PARALLEL after dependency completes |
| Fan-in | One task depends on many | Waits for ALL dependencies |
| Chain | Each depends on previous | Strictly SEQUENTIAL |
| Diamond | A→B, A→C, B→D, C→D | A first, then B∥C parallel, then D |
Diamond pattern
The diamond is worth illustrating because it demonstrates both fan-out and fan-in:
- Round 1: A runs alone.
- Round 2: B and C run in parallel (both depend only on A).
- Round 3: D runs after both B and C complete.
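The rounds above follow directly from the `dependsOn` declarations. A task's round is one more than the maximum round of its dependencies, which can be sketched as:

```typescript
// The diamond as dependsOn declarations (executor fields elided), plus a
// helper deriving each task's round: 1 + the max round of its dependencies.
interface Node { id: string; dependsOn: string[]; }

const diamond: Node[] = [
  { id: 'a', dependsOn: [] },
  { id: 'b', dependsOn: ['a'] },
  { id: 'c', dependsOn: ['a'] },
  { id: 'd', dependsOn: ['b', 'c'] },
];

function roundOf(id: string, tasks: Node[]): number {
  const task = tasks.find((t) => t.id === id)!;
  if (task.dependsOn.length === 0) return 1;
  return 1 + Math.max(...task.dependsOn.map((d) => roundOf(d, tasks)));
}
```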
Agency Integration
Workflows integrate with the AgentOS agency system to assign tasks to specialized agents:
Role-based execution
Each task's executor.roleId maps to an agency seat — a named role within a multi-agent agency. The AgencyRegistry manages which GMI (General Machine Intelligence) instance fills each role.
{
id: 'research',
executor: {
type: 'agent',
roleId: 'researcher', // Maps to an agency seat
personaId: 'market-analyst', // Optional persona overlay
instructions: 'Research current market trends for the given sector.',
},
}
Inter-task communication
Beyond the DAG's data flow, agents can communicate during execution via the AgentCommunicationBus. This enables:
- Clarification requests — a downstream agent can ask an upstream agent for elaboration.
- Progress updates — long-running tasks can broadcast status.
- Coordination signals — agents can negotiate shared resources.
Shared memory
The AgencyMemoryManager provides shared memory across tasks with role-based read/write permissions. This allows agents to build on a common knowledge base throughout the workflow without requiring explicit output passing for every piece of data.
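A hypothetical usage sketch follows. The class, the `write`/`read` method names, and the permission model shown here are illustrative assumptions, not the actual AgencyMemoryManager API:

```typescript
// Illustrative stand-in for role-scoped shared memory. The real
// AgencyMemoryManager API is not documented here; all names are assumptions.
class SharedMemorySketch {
  private store = new Map<string, unknown>();
  constructor(private writers: Set<string>) {}

  write(roleId: string, key: string, value: unknown): void {
    if (!this.writers.has(roleId)) throw new Error(`role ${roleId} cannot write`);
    this.store.set(key, value);
  }

  read(key: string): unknown {
    return this.store.get(key);
  }
}
```

The point of the sketch: a `researcher` role can deposit findings once, and later tasks read them without threading the data through every `dependsOn` edge.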
Real-World Example
A 4-agent market analysis workflow:
const marketAnalysisWorkflow = {
id: 'market-analysis',
name: 'Market Analysis Pipeline',
tasks: [
{
id: 'research',
name: 'Market Research',
dependsOn: [],
executor: {
type: 'agent',
roleId: 'researcher',
instructions: 'Research current market trends, competitors, and opportunities in the target sector.',
},
outputSchema: {
type: 'object',
properties: {
trends: { type: 'array' },
competitors: { type: 'array' },
opportunities: { type: 'array' },
},
},
},
{
id: 'data',
name: 'Data Collection & Analysis',
dependsOn: [],
executor: {
type: 'agent',
roleId: 'analyst',
instructions: 'Collect and analyze quantitative market data, pricing trends, and volume metrics.',
},
outputSchema: {
type: 'object',
properties: {
metrics: { type: 'object' },
charts: { type: 'array' },
},
},
},
{
id: 'strategy',
name: 'Strategy Synthesis',
dependsOn: ['research', 'data'],
executor: {
type: 'agent',
roleId: 'strategist',
instructions: 'Synthesize research findings and data analysis into actionable strategic recommendations.',
},
inputs: {
findings: '{{results.research.output}}',
metrics: '{{results.data.output.metrics}}',
},
},
{
id: 'report',
name: 'Final Report',
dependsOn: ['strategy'],
executor: {
type: 'agent',
roleId: 'writer',
instructions: 'Compile the strategy into a polished executive report with visualizations.',
},
},
],
};
Execution flow
Timeline
Round 1: research ∥ data (parallel — no dependencies)
Round 2: strategy (after both research and data complete)
Round 3: report (after strategy completes)
Error Handling
Retry policy
Each task can define a retryPolicy that controls automatic retry behavior:
{
retryPolicy: {
maxAttempts: 3, // Total attempts (including the first)
backoffMs: 1000, // Initial delay between retries
backoffMultiplier: 2, // Exponential backoff factor
},
}
With the configuration above, the task is attempted at most three times in total: the first retry occurs after a 1s delay and the second after 2s, each delay multiplied by the backoff factor.
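The delay schedule is geometric in the retry number, which can be computed as (a sketch; the function name is illustrative):

```typescript
// Delay before the n-th retry (1-based), per the retryPolicy fields above:
// backoffMs scaled by backoffMultiplier^(n - 1). A missing multiplier
// means a constant delay.
function retryDelayMs(
  n: number,
  policy: { backoffMs: number; backoffMultiplier?: number },
): number {
  return policy.backoffMs * Math.pow(policy.backoffMultiplier ?? 1, n - 1);
}
```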
Skippable tasks
When skippable: true, a failed task (after exhausting retries) transitions to SKIPPED instead of blocking the pipeline. Downstream tasks still execute but receive null for the skipped task's output in their context.
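Downstream executors should therefore guard against a missing upstream output whenever any dependency is skippable. A minimal guard, assuming the `context.results` shape shown earlier (the helper name is illustrative):

```typescript
// Guard for upstream outputs that may be null because the task was SKIPPED.
// The results shape mirrors the context.results access shown earlier.
function outputOrFallback<T>(
  results: Record<string, { output: T | null }>,
  taskId: string,
  fallback: T,
): T {
  return results[taskId]?.output ?? fallback;
}
```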
Error callbacks
The onError callback fires on each failure (before retry). It receives the error and the current task context, allowing logging, alerting, or compensating actions.
Downstream impact
- Non-skippable failure: all downstream dependents are marked BLOCKED and will not execute.
- Skippable failure: downstream dependents proceed with null in place of the skipped task's output.
Validation
The engine validates the workflow definition before execution begins:
Cycle detection
A DFS-based algorithm traverses the dependency graph. If a back-edge is detected (a node is visited while still on the recursion stack), the workflow is rejected with an error identifying the cycle.
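The back-edge test described above can be sketched as follows (a minimal sketch; the function name and map-based graph representation are illustrative):

```typescript
// DFS cycle detection over dependsOn edges: a node revisited while still
// on the recursion stack indicates a back-edge, i.e. a cycle.
function hasCycle(deps: Record<string, string[]>): boolean {
  const visited = new Set<string>(); // fully explored nodes
  const onStack = new Set<string>(); // current recursion stack

  const visit = (id: string): boolean => {
    if (onStack.has(id)) return true; // back-edge: cycle found
    if (visited.has(id)) return false;
    visited.add(id);
    onStack.add(id);
    const found = (deps[id] ?? []).some((d) => visit(d));
    onStack.delete(id);
    return found;
  };

  return Object.keys(deps).some((id) => visit(id));
}
```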
Missing dependency references
If a task's dependsOn array references an id that does not exist in the workflow, the engine throws a validation error at definition time — not at runtime.
Duplicate task IDs
Each task id must be unique within a workflow. Duplicates are detected during validation and rejected immediately.
// All three validations run before any task executes
WorkflowValidator.validate(workflow);
// Possible errors:
//   "Cycle detected: research → strategy → research"
//   "Task 'report' depends on unknown task 'nonexistent'"
//   "Duplicate task ID: 'research'"