Workflow Execution Model

Overview

AgentOS workflows are DAG-based task graphs where each task can depend on zero or more upstream tasks. The execution engine resolves the graph and schedules tasks in rounds:

  • Tasks with no dependencies run in parallel automatically.
  • Tasks with dependencies wait until all upstream tasks complete.
  • Outputs from completed tasks are available to dependent tasks as context, enabling typed data flow through the graph.

This model allows complex multi-agent pipelines to be declared as simple dependency lists, with the engine handling scheduling, parallelism, error propagation, and output passing.
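In essence, the engine repeatedly promotes every task whose upstream tasks have all finished and runs that batch concurrently. A minimal sketch of this round-planning logic (a hypothetical helper for illustration, not the engine's actual API) might look like:

```typescript
// Minimal round-based DAG planner sketch (illustrative, not the real engine).
type Task = { id: string; dependsOn: string[] };

// Groups task IDs into execution rounds: every task in a round has all of its
// dependencies satisfied by earlier rounds and may run in parallel.
function planRounds(tasks: Task[]): string[][] {
  const done = new Set<string>();
  const pending = new Map(tasks.map((t) => [t.id, t]));
  const rounds: string[][] = [];
  while (pending.size > 0) {
    // A task is READY when every upstream ID is already completed.
    const ready = [...pending.values()]
      .filter((t) => t.dependsOn.every((d) => done.has(d)))
      .map((t) => t.id);
    if (ready.length === 0) throw new Error('Cycle or missing dependency');
    for (const id of ready) {
      pending.delete(id);
      done.add(id);
    }
    rounds.push(ready);
  }
  return rounds;
}
```

The same dependency lists therefore describe both the data flow and the parallelism: no separate scheduling configuration is needed.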

Task Definition

Each task in a workflow is defined with the WorkflowTaskDefinition interface:

interface WorkflowTaskDefinition {
  /** Unique identifier for this task within the workflow */
  id: string;

  /** Human-readable task name */
  name: string;

  /** Description of what this task does */
  description?: string;

  /**
   * IDs of tasks that must complete before this task can start.
   * An empty array (or omitted) means the task has no dependencies
   * and is eligible to run immediately.
   */
  dependsOn?: string[];

  /** Which agent/role executes this task */
  executor: {
    type: 'agent' | 'function' | 'human';
    roleId?: string;
    personaId?: string;
    instructions: string;
  };

  /** Input values for the executor; string values may contain {{...}} template expressions */
  inputs?: Record<string, unknown>;

  /** JSON Schema for the task's expected input */
  inputSchema?: Record<string, unknown>;

  /** JSON Schema for the task's expected output */
  outputSchema?: Record<string, unknown>;

  /** Retry behavior on failure */
  retryPolicy?: {
    maxAttempts: number;
    backoffMs: number;
    backoffMultiplier?: number;
  };

  /** If true, failure of this task does not block downstream dependents */
  skippable?: boolean;

  /** Error callback */
  onError?: (error: Error, context: TaskContext) => void;

  /** Arbitrary key-value metadata */
  metadata?: Record<string, unknown>;
}

The dependsOn array is the core mechanism that creates the DAG. Each entry references the id of another task in the same workflow.

Execution Model

The engine resolves the DAG into execution rounds. In each round, every task whose dependencies have all completed becomes READY and executes concurrently with other READY tasks in the same round.

Scheduling rules

  1. Round-based scheduling — each round, the engine scans all PENDING tasks and promotes those whose dependencies are all COMPLETED to READY.
  2. Concurrent execution — multiple READY tasks in the same round execute concurrently (bounded by the configured concurrency limit).
  3. Output availability — a task moves to COMPLETED when its executor finishes. Its output is then stored and made available to downstream tasks.
  4. Cycle detection — before execution begins, a DFS-based cycle detector validates the graph. Cyclic graphs are rejected at definition time.

Task lifecycle

Output Passing Between Tasks

When a task completes, its output is stored on the WorkflowTaskInstance. Dependent tasks can access upstream outputs in two ways:

Direct context access

Inside a task executor, upstream results are available on the context object:

// Inside Task D's executor (depends on Task A and Task B)
const researchFindings = context.results['task-a'].output;
const marketData = context.results['task-b'].output;

Template syntax

In a task's inputs field, you can use template expressions that resolve at execution time:

{
  id: 'strategy',
  dependsOn: ['research', 'data'],
  executor: {
    type: 'agent',
    roleId: 'strategist',
    instructions: 'Synthesize findings into a strategy.',
  },
  inputs: {
    findings: '{{results.research.output}}',
    metrics: '{{results.data.output.metrics}}',
  },
}
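Conceptually, resolving such a template is a walk along a dotted path through the stored results. The sketch below shows one plausible resolution strategy (an assumption for illustration; the engine's actual template grammar may be richer):

```typescript
// Hypothetical sketch of resolving '{{results.<taskId>.output...}}' templates.
type Results = Record<string, { output: unknown }>;

function resolveTemplate(value: string, results: Results): unknown {
  const match = value.match(/^\{\{results\.([^}]+)\}\}$/);
  if (!match) return value; // Not a template: pass the literal value through.
  // Walk the dotted path, e.g. 'data.output.metrics'.
  const [taskId, ...path] = match[1].split('.');
  let current: any = results[taskId];
  for (const key of path) current = current?.[key];
  return current;
}
```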

Type safety

The inputSchema and outputSchema fields provide runtime validation. When an outputSchema is defined, the engine validates the task's output before marking it COMPLETED. When an inputSchema is defined, the resolved inputs are validated before the executor runs.
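A production engine would presumably delegate this to a full JSON Schema validator; the toy check below only illustrates the idea by comparing top-level property types against a schema fragment:

```typescript
// Toy sketch of validating an output against a JSON Schema fragment.
// Only top-level property types are checked, for illustration.
function validateShape(
  output: Record<string, unknown>,
  schema: { type: string; properties?: Record<string, { type: string }> },
): string[] {
  const errors: string[] = [];
  for (const [key, spec] of Object.entries(schema.properties ?? {})) {
    const value = output[key];
    const actual = Array.isArray(value) ? 'array' : typeof value;
    if (actual !== spec.type) errors.push(`${key}: expected ${spec.type}, got ${actual}`);
  }
  return errors;
}
```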

Parallel vs Sequential — Decision Rules

The execution behavior is entirely determined by the dependsOn array:

Scenario            dependsOn                        Execution
Independent tasks   [] or omitted                    Run in PARALLEL
Fan-out             All share the same dependency    Run in PARALLEL after the dependency completes
Fan-in              One task depends on many         Waits for ALL dependencies
Chain               Each depends on the previous     Strictly SEQUENTIAL
Diamond             A→B, A→C, B→D, C→D               A first, then B ∥ C in parallel, then D

Diamond pattern

The diamond is worth illustrating because it demonstrates both fan-out and fan-in:

  • Round 1: A runs alone.
  • Round 2: B and C run in parallel (both depend only on A).
  • Round 3: D runs after both B and C complete.
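Expressed as dependency lists, the diamond is just four entries (a minimal illustration; real tasks would also carry name, executor, and so on):

```typescript
// The diamond A→B, A→C, B→D, C→D expressed as dependsOn lists.
const diamond = [
  { id: 'A', dependsOn: [] },          // Round 1
  { id: 'B', dependsOn: ['A'] },       // Round 2 (parallel with C)
  { id: 'C', dependsOn: ['A'] },       // Round 2 (parallel with B)
  { id: 'D', dependsOn: ['B', 'C'] },  // Round 3 (fan-in: waits for both)
];
```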

Agency Integration

Workflows integrate with the AgentOS agency system to assign tasks to specialized agents:

Role-based execution

Each task's executor.roleId maps to an agency seat — a named role within a multi-agent agency. The AgencyRegistry manages which GMI (General Machine Intelligence) instance fills each role.

{
  id: 'research',
  executor: {
    type: 'agent',
    roleId: 'researcher',        // Maps to an agency seat
    personaId: 'market-analyst', // Optional persona overlay
    instructions: 'Research current market trends for the given sector.',
  },
}

Inter-task communication

Beyond the DAG's data flow, agents can communicate during execution via the AgentCommunicationBus. This enables:

  • Clarification requests — a downstream agent can ask an upstream agent for elaboration.
  • Progress updates — long-running tasks can broadcast status.
  • Coordination signals — agents can negotiate shared resources.
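The AgentCommunicationBus interface itself is not shown in this section, so the following is only a guess at its shape: a minimal in-memory publish/subscribe bus where messages carry one of the three kinds listed above. Every name here is an assumption.

```typescript
// Hypothetical message shape and bus sketch (assumed API; the real
// AgentCommunicationBus may differ).
interface AgentMessage {
  from: string;       // sender role ID
  to: string | '*';   // recipient role ID, or '*' to broadcast
  kind: 'clarification' | 'progress' | 'coordination';
  payload: unknown;
}

class SimpleBus {
  private handlers = new Map<string, ((m: AgentMessage) => void)[]>();

  // Register a handler for messages addressed to a given role.
  subscribe(roleId: string, handler: (m: AgentMessage) => void): void {
    const list = this.handlers.get(roleId) ?? [];
    list.push(handler);
    this.handlers.set(roleId, list);
  }

  // Deliver a message to its addressee, or to every subscriber on broadcast.
  publish(message: AgentMessage): void {
    for (const [roleId, list] of this.handlers) {
      if (message.to === '*' || message.to === roleId) list.forEach((h) => h(message));
    }
  }
}
```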

Shared memory

The AgencyMemoryManager provides shared memory across tasks with role-based read/write permissions. This allows agents to build on a common knowledge base throughout the workflow without requiring explicit output passing for every piece of data.
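The AgencyMemoryManager API is not documented here, so the sketch below only illustrates the permission idea with invented names: a key-value store that checks a role's read/write grants before every access.

```typescript
// Hypothetical sketch of shared memory with role-based permissions.
// All names below are assumptions, not the real AgencyMemoryManager API.
class SharedMemory {
  private store = new Map<string, unknown>();

  constructor(private permissions: Record<string, { read: boolean; write: boolean }>) {}

  write(roleId: string, key: string, value: unknown): void {
    if (!this.permissions[roleId]?.write) throw new Error(`${roleId} has no write permission`);
    this.store.set(key, value);
  }

  read(roleId: string, key: string): unknown {
    if (!this.permissions[roleId]?.read) throw new Error(`${roleId} has no read permission`);
    return this.store.get(key);
  }
}
```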

Real-World Example

A 4-agent market analysis workflow:

const marketAnalysisWorkflow = {
  id: 'market-analysis',
  name: 'Market Analysis Pipeline',
  tasks: [
    {
      id: 'research',
      name: 'Market Research',
      dependsOn: [],
      executor: {
        type: 'agent',
        roleId: 'researcher',
        instructions: 'Research current market trends, competitors, and opportunities in the target sector.',
      },
      outputSchema: {
        type: 'object',
        properties: {
          trends: { type: 'array' },
          competitors: { type: 'array' },
          opportunities: { type: 'array' },
        },
      },
    },
    {
      id: 'data',
      name: 'Data Collection & Analysis',
      dependsOn: [],
      executor: {
        type: 'agent',
        roleId: 'analyst',
        instructions: 'Collect and analyze quantitative market data, pricing trends, and volume metrics.',
      },
      outputSchema: {
        type: 'object',
        properties: {
          metrics: { type: 'object' },
          charts: { type: 'array' },
        },
      },
    },
    {
      id: 'strategy',
      name: 'Strategy Synthesis',
      dependsOn: ['research', 'data'],
      executor: {
        type: 'agent',
        roleId: 'strategist',
        instructions: 'Synthesize research findings and data analysis into actionable strategic recommendations.',
      },
      inputs: {
        findings: '{{results.research.output}}',
        metrics: '{{results.data.output.metrics}}',
      },
    },
    {
      id: 'report',
      name: 'Final Report',
      dependsOn: ['strategy'],
      executor: {
        type: 'agent',
        roleId: 'writer',
        instructions: 'Compile the strategy into a polished executive report with visualizations.',
      },
    },
  ],
};

Execution flow

Timeline

Round 1: research ∥ data     (parallel — no dependencies)
Round 2: strategy (after both research and data complete)
Round 3: report (after strategy completes)

Error Handling

Retry policy

Each task can define a retryPolicy that controls automatic retry behavior:

{
  retryPolicy: {
    maxAttempts: 3,       // Total attempts (including the first)
    backoffMs: 1000,      // Initial delay between retries
    backoffMultiplier: 2, // Exponential backoff factor
  },
}

With the configuration above, maxAttempts: 3 allows two retries beyond the initial attempt: the first retry after a 1s delay and the second after 2s.
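The delay before retry n follows backoffMs × backoffMultiplier^(n − 1). A small helper makes the schedule explicit (an illustrative sketch; the engine's internal name for this computation is not documented here):

```typescript
// Computes the delay (ms) before each retry implied by a retryPolicy.
// maxAttempts counts the first attempt, so there are maxAttempts - 1 retries.
function retryDelays(policy: {
  maxAttempts: number;
  backoffMs: number;
  backoffMultiplier?: number;
}): number[] {
  const factor = policy.backoffMultiplier ?? 1;
  const delays: number[] = [];
  for (let n = 1; n < policy.maxAttempts; n++) {
    delays.push(policy.backoffMs * factor ** (n - 1));
  }
  return delays;
}
```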

Skippable tasks

When skippable: true, a failed task (after exhausting retries) transitions to SKIPPED instead of blocking the pipeline. Downstream tasks still execute but receive null for the skipped task's output in their context.

Error callbacks

The onError callback fires on each failure (before retry). It receives the error and the current task context, allowing logging, alerting, or compensating actions.

Downstream impact

  • Non-skippable failure: all downstream dependents are marked BLOCKED and will not execute.
  • Skippable failure: downstream dependents proceed with null in place of the skipped task's output.

Validation

The engine validates the workflow definition before execution begins:

Cycle detection

A DFS-based algorithm traverses the dependency graph. If a back-edge is detected (a node is visited while still on the recursion stack), the workflow is rejected with an error identifying the cycle.
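A sketch of such a detector (illustrative only; the engine's actual implementation and error format may differ) keeps a recursion stack alongside the visited set and reports the offending path when a back-edge appears:

```typescript
// DFS cycle detection sketch: a node revisited while still on the recursion
// stack indicates a back-edge, i.e. a cycle.
type TaskDef = { id: string; dependsOn: string[] };

function findCycle(tasks: TaskDef[]): string[] | null {
  const deps = new Map(tasks.map((t) => [t.id, t.dependsOn]));
  const visited = new Set<string>();
  const onStack = new Set<string>();
  const path: string[] = [];

  function dfs(id: string): string[] | null {
    if (onStack.has(id)) return [...path.slice(path.indexOf(id)), id]; // back-edge
    if (visited.has(id)) return null; // already fully explored, no cycle here
    visited.add(id);
    onStack.add(id);
    path.push(id);
    for (const dep of deps.get(id) ?? []) {
      const cycle = dfs(dep);
      if (cycle) return cycle;
    }
    onStack.delete(id);
    path.pop();
    return null;
  }

  for (const t of tasks) {
    const cycle = dfs(t.id);
    if (cycle) return cycle;
  }
  return null;
}
```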

Missing dependency references

If a task's dependsOn array references an id that does not exist in the workflow, the engine throws a validation error at definition time — not at runtime.

Duplicate task IDs

Each task id must be unique within a workflow. Duplicates are detected during validation and rejected immediately.

// All three validations run before any task executes
WorkflowValidator.validate(workflow);
// Throws: "Cycle detected: research → strategy → research"
// Throws: "Task 'report' depends on unknown task 'nonexistent'"
// Throws: "Duplicate task ID: 'research'"
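The two reference checks (duplicate IDs and unknown dependencies) amount to two passes over the task list. A sketch, with the caveat that the engine's actual error wording and return style may differ:

```typescript
// Sketch of the definition-time reference checks: duplicate task IDs and
// dependsOn entries that point at nonexistent tasks.
function checkReferences(tasks: { id: string; dependsOn: string[] }[]): string[] {
  const errors: string[] = [];
  const ids = new Set<string>();
  // Pass 1: every id must be unique within the workflow.
  for (const t of tasks) {
    if (ids.has(t.id)) errors.push(`Duplicate task ID: '${t.id}'`);
    ids.add(t.id);
  }
  // Pass 2: every dependency must reference a declared task.
  for (const t of tasks) {
    for (const dep of t.dependsOn) {
      if (!ids.has(dep)) errors.push(`Task '${t.id}' depends on unknown task '${dep}'`);
    }
  }
  return errors;
}
```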