AgentOS is a TypeScript-first orchestration runtime for building adaptive, emergent AI agents. Unlike traditional agent frameworks that treat agents as stateless functions, AgentOS introduces Generalized Mind Instances (GMIs) — context-aware entities that learn, evolve, and maintain coherent personalities across interactions.
```shell
npm install @framers/agentos
```
- **GMI Architecture** — Persistent agent identities with working memory
- **Dynamic Personas** — Contextual personality adaptation
- **Multi-model Support** — OpenAI, Anthropic, local models
- **Token-level streaming** — Real-time response delivery
- **Async generators** — Native TypeScript patterns
- **WebSocket & SSE** — Multiple transport protocols
- **Permission management** — Fine-grained access control
- **Dynamic registration** — Runtime tool discovery
- **Guardrails** — Safety constraints and validation
- **Vector storage** — Semantic memory retrieval
- **SQL adapters** — SQLite, PostgreSQL support
- **Context optimization** — Automatic window management
- **Agency system** — Agent hierarchies and teams
- **Message bus** — Inter-agent communication
- **Handoffs** — Context transfer between agents
- **Approval workflows** — High-risk action gates
- **Clarification requests** — Ambiguity resolution
- **Escalation handling** — Human takeover paths
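The streaming and async-generator features above all surface through the same consumption pattern: iterate chunks with `for await` and branch on `chunk.type`. Here is a self-contained sketch of that pattern, with a mock generator standing in for `agent.processRequest` (the `Chunk` type, the mock, and the `collectContent` helper are illustrative, not exports of the package):

```typescript
// Chunk shapes mirror those used in the examples below ('content', 'tool_call').
type Chunk =
  | { type: 'content'; content: string }
  | { type: 'tool_call'; tool: string };

// Mock stand-in for agent.processRequest: yields a tool call, then content.
async function* mockProcessRequest(): AsyncGenerator<Chunk> {
  yield { type: 'tool_call', tool: 'get_weather' };
  yield { type: 'content', content: 'Sunny, ' };
  yield { type: 'content', content: '22°C' };
}

// Consume the stream, keeping only content chunks.
async function collectContent(stream: AsyncGenerator<Chunk>): Promise<string> {
  const parts: string[] = [];
  for await (const chunk of stream) {
    if (chunk.type === 'content') parts.push(chunk.content);
  }
  return parts.join('');
}

collectContent(mockProcessRequest()).then(text => console.log(text)); // prints "Sunny, 22°C"
```

The same loop works unchanged against a real agent instance, since `processRequest` is consumed as an async generator.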
```shell
# npm
npm install @framers/agentos

# pnpm
pnpm add @framers/agentos

# yarn
yarn add @framers/agentos
```
Requirements: Node.js 18+ · TypeScript 5.0+
```typescript
import { AgentOS } from '@framers/agentos';

// Initialize
const agent = new AgentOS();
await agent.initialize({
  llmProvider: {
    provider: 'openai',
    apiKey: process.env.OPENAI_API_KEY,
    model: 'gpt-4o'
  }
});

// Process requests with streaming
for await (const chunk of agent.processRequest({
  message: 'Help me analyze this data',
  context: { userId: 'user-123' }
})) {
  if (chunk.type === 'content') {
    process.stdout.write(chunk.content);
  }
}
```
```typescript
import { AgentOS } from '@framers/agentos';

const agent = new AgentOS();
await agent.initialize({
  llmProvider: {
    provider: 'openai',
    apiKey: process.env.OPENAI_API_KEY,
    model: 'gpt-4o'
  },
  tools: [{
    name: 'get_weather',
    description: 'Get current weather for a city',
    parameters: {
      type: 'object',
      properties: { city: { type: 'string' } },
      required: ['city']
    },
    execute: async ({ city }) => {
      const res = await fetch(`https://api.weather.com/${city}`);
      return res.json();
    }
  }]
});

// Tools are called automatically when the model decides to use them
for await (const chunk of agent.processRequest({ message: 'Weather in Tokyo?' })) {
  if (chunk.type === 'tool_call') console.log('Calling:', chunk.tool);
  if (chunk.type === 'content') process.stdout.write(chunk.content);
}
```
```typescript
// OpenRouter for multi-model access
await agent.initialize({
  llmProvider: {
    provider: 'openrouter',
    apiKey: process.env.OPENROUTER_API_KEY,
    model: 'anthropic/claude-3.5-sonnet'
  }
});

// Local Ollama
await agent.initialize({
  llmProvider: {
    provider: 'ollama',
    baseUrl: 'http://localhost:11434',
    model: 'llama3'
  }
});
```
```typescript
import { AgentOS } from '@framers/agentos';

const agent = new AgentOS();
await agent.initialize({
  llmProvider: {
    provider: 'openai',
    apiKey: process.env.OPENAI_API_KEY,
    model: 'gpt-4o'
  },
  memory: {
    vectorStore: 'memory', // or 'sqlite', 'postgres'
    embeddingModel: 'text-embedding-3-small'
  }
});

// Ingest documents
await agent.memory.ingest([
  { content: 'AgentOS supports streaming responses...', metadata: { source: 'docs' } },
  { content: 'GMIs maintain context across sessions...', metadata: { source: 'docs' } }
]);

// Queries automatically retrieve relevant context
for await (const chunk of agent.processRequest({ message: 'How does streaming work?' })) {
  process.stdout.write(chunk.content);
}
```
```
┌─────────────────────────────────────────────────────────────────┐
│                         AgentOS Runtime                         │
├─────────────────────────────────────────────────────────────────┤
│  ┌─────────────┐    ┌─────────────┐    ┌─────────────┐          │
│  │   Request   │    │   Prompt    │    │  Streaming  │          │
│  │   Router    │ →  │   Engine    │ →  │   Manager   │          │
│  └─────────────┘    └─────────────┘    └─────────────┘          │
│         ↓                  ↓                  ↓                 │
│  ┌─────────────────────────────────────────────────────────┐    │
│  │                       GMI Manager                       │    │
│  │  ┌─────────┐  ┌─────────┐  ┌─────────┐  ┌─────────┐     │    │
│  │  │ Working │  │ Context │  │ Persona │  │Learning │     │    │
│  │  │ Memory  │  │ Manager │  │ Overlay │  │ Module  │     │    │
│  │  └─────────┘  └─────────┘  └─────────┘  └─────────┘     │    │
│  └─────────────────────────────────────────────────────────┘    │
│         ↓                  ↓                  ↓                 │
│  ┌─────────────┐    ┌─────────────┐    ┌─────────────┐          │
│  │    Tool     │    │     RAG     │    │  Planning   │          │
│  │Orchestrator │    │   Memory    │    │   Engine    │          │
│  └─────────────┘    └─────────────┘    └─────────────┘          │
│         ↓                  ↓                  ↓                 │
│  ┌─────────────────────────────────────────────────────────┐    │
│  │                   LLM Provider Manager                  │    │
│  │        OpenAI │ Anthropic │ Azure │ Local Models        │    │
│  └─────────────────────────────────────────────────────────┘    │
└─────────────────────────────────────────────────────────────────┘
```
```typescript
import { AgentOS, StructuredOutputManager } from '@framers/agentos';

const agent = new AgentOS();
await agent.initialize({
  llmProvider: {
    provider: 'openai',
    apiKey: process.env.OPENAI_API_KEY,
    model: 'gpt-4o'
  }
});

// Extract typed data from unstructured text
const structured = new StructuredOutputManager({
  llmProviderManager: agent.llmProviderManager
});

const contact = await structured.generate({
  prompt: 'Extract: "Meeting with Sarah Chen (sarah@startup.io) on Jan 15 re: Series A"',
  schema: {
    type: 'object',
    properties: {
      name: { type: 'string' },
      email: { type: 'string', format: 'email' },
      date: { type: 'string' },
      topic: { type: 'string' }
    },
    required: ['name', 'email']
  },
  schemaName: 'ContactInfo'
});
// → { name: 'Sarah Chen', email: 'sarah@startup.io', date: 'Jan 15', topic: 'Series A' }
```
```typescript
import { HumanInteractionManager } from '@framers/agentos';

const hitl = new HumanInteractionManager({ defaultTimeoutMs: 300000 });

// Gate high-risk operations with human approval
const decision = await hitl.requestApproval({
  action: {
    type: 'database_mutation',
    description: 'Archive 50K inactive accounts older than 2 years',
    severity: 'high',
    metadata: { affectedRows: 50000, table: 'users' }
  },
  alternatives: [
    { action: 'soft_delete', description: 'Mark as inactive instead of archiving' },
    { action: 'export_first', description: 'Export to CSV before archiving' }
  ]
});

if (decision.approved) {
  await executeArchive();
} else if (decision.selectedAlternative) {
  await executeAlternative(decision.selectedAlternative);
}
```
```typescript
import { AgentOS, PlanningEngine } from '@framers/agentos';

const agent = new AgentOS();
await agent.initialize({
  llmProvider: {
    provider: 'openai',
    apiKey: process.env.OPENAI_API_KEY,
    model: 'gpt-4o'
  }
});

const planner = new PlanningEngine({
  llmProvider: agent.llmProviderManager,
  strategy: 'react'
});

// Decompose complex goals into executable steps with ReAct reasoning
const plan = await planner.generatePlan({
  goal: 'Migrate authentication from sessions to JWT',
  constraints: [
    'Zero downtime',
    'Backwards compatible for 30 days',
    'Audit logging required'
  ],
  context: { currentStack: 'Express + Redis sessions', userCount: '50K' }
});

for await (const step of planner.executePlan(plan.id)) {
  console.log(`[${step.status}] ${step.action}`);
  if (step.requiresHumanApproval) {
    const approved = await promptUser(step.description);
    if (!approved) break;
  }
}
```
```typescript
import { AgentOS, AgencyRegistry, AgentCommunicationBus } from '@framers/agentos';

// Create specialized agents (llmConfig: the same llmProvider config as in Quick Start)
const researcher = new AgentOS();
await researcher.initialize({ llmProvider: llmConfig, persona: 'Research analyst' });

const writer = new AgentOS();
await writer.initialize({ llmProvider: llmConfig, persona: 'Technical writer' });

// Register in agency with shared communication
const agency = new AgencyRegistry();
const bus = new AgentCommunicationBus();
agency.register('researcher', researcher, { bus });
agency.register('writer', writer, { bus });

// Agents coordinate via message passing
bus.on('research:complete', async ({ findings }) => {
  await writer.processRequest({
    message: `Write documentation based on: ${JSON.stringify(findings)}`
  });
});

await researcher.processRequest({ message: 'Analyze the authentication module' });
```
```typescript
import { AgentOS } from '@framers/agentos';

const agent = new AgentOS();
await agent.initialize({
  llmProvider: {
    provider: 'openai',
    apiKey: process.env.OPENAI_API_KEY,
    model: 'gpt-4o'
  }
});

// Collect full response without streaming
const chunks = [];
for await (const chunk of agent.processRequest({ message: 'Explain OAuth 2.0 briefly' })) {
  if (chunk.type === 'content') {
    chunks.push(chunk.content);
  }
}
const fullResponse = chunks.join('');
```
```typescript
import { AgentOS, GMIMood } from '@framers/agentos';

const agent = new AgentOS();
await agent.initialize({
  llmProvider: {
    provider: 'openai',
    apiKey: process.env.OPENAI_API_KEY,
    model: 'gpt-4o'
  },
  persona: {
    name: 'Support Agent',
    moodAdaptation: {
      enabled: true,
      defaultMood: GMIMood.EMPATHETIC,
      allowedMoods: [GMIMood.EMPATHETIC, GMIMood.FOCUSED, GMIMood.ANALYTICAL],
      sensitivityFactor: 0.7,
      // Mood-specific prompt modifiers
      moodPrompts: {
        [GMIMood.EMPATHETIC]: 'Prioritize understanding and emotional support.',
        [GMIMood.FRUSTRATED]: 'Acknowledge difficulty, offer step-by-step guidance.',
        [GMIMood.ANALYTICAL]: 'Provide detailed technical explanations with examples.'
      }
    }
  }
});

// Agent automatically adapts tone based on conversation context
for await (const chunk of agent.processRequest({
  message: 'This is so frustrating, nothing works!'
})) {
  // Response adapts with empathetic tone, mood shifts to EMPATHETIC
}
```
```typescript
import { AgentOS } from '@framers/agentos';

// llmConfig: the same llmProvider config as in Quick Start
const agent = new AgentOS();
await agent.initialize({
  llmProvider: llmConfig,
  persona: {
    name: 'Adaptive Tutor',
    // Dynamic prompt elements injected based on runtime context
    contextualPromptElements: [
      {
        id: 'beginner-guidance',
        type: 'SYSTEM_INSTRUCTION_ADDON',
        content: 'Explain concepts simply, avoid jargon, use analogies.',
        criteria: { userSkillLevel: ['novice', 'beginner'] },
        priority: 10
      },
      {
        id: 'expert-mode',
        type: 'SYSTEM_INSTRUCTION_ADDON',
        content: 'Assume deep technical knowledge, be concise, skip basics.',
        criteria: { userSkillLevel: ['expert', 'advanced'] },
        priority: 10
      },
      {
        id: 'debugging-context',
        type: 'FEW_SHOT_EXAMPLE',
        content: { role: 'assistant', content: "Let's trace through step by step..." },
        criteria: { taskHint: ['debugging', 'troubleshooting'] }
      }
    ],
    // Meta-prompts for self-reflection and planning
    metaPrompts: [
      {
        id: 'mid-conversation-check',
        trigger: 'every_n_turns',
        triggerConfig: { n: 5 },
        prompt: 'Assess: Is the user making progress? Should I adjust my approach?'
      }
    ]
  }
});

// Prompts automatically adapt based on user context and task
await agent.updateUserContext({ skillLevel: 'expert' });
for await (const chunk of agent.processRequest({ message: 'Explain monads' })) {
  // Uses expert-mode prompt element, skips beginner explanations
}
```
| Version | Status | Features |
|---------|--------|----------|
| 0.1 | ✓ | Core runtime, GMI, streaming, tools, RAG |
| 0.2 | → | Knowledge graphs, marketplace, visual planning |
| 0.3 | ○ | Distributed agents, edge deployment |
| 1.0 | ○ | Production hardening, enterprise features |
See CHANGELOG.md for release history.
We welcome contributions. See our Contributing Guide for details.
```shell
# Clone and setup
git clone https://github.com/framersai/agentos.git
cd agentos
pnpm install

# Development
pnpm run build   # Build the package
pnpm run test    # Run tests
pnpm run docs    # Generate documentation
```
We use Conventional Commits:
- `feat`: New features → minor version bump
- `fix`: Bug fixes → patch version bump
- `docs`: Documentation only
- `BREAKING CHANGE`: → major version bump
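The commit-type-to-bump mapping above can be sketched as a small helper. This is purely illustrative, not part of AgentOS or its release tooling, and `bumpFor` is a hypothetical name:

```typescript
// Map a Conventional Commits subject line to the semver bump it implies.
// Breaking changes (a "!" before the colon, or a BREAKING CHANGE marker)
// are checked first, since "feat!: ..." would otherwise match "feat".
function bumpFor(subject: string): 'major' | 'minor' | 'patch' | 'none' {
  if (subject.includes('!:') || subject.includes('BREAKING CHANGE')) return 'major';
  if (subject.startsWith('feat')) return 'minor';
  if (subject.startsWith('fix')) return 'patch';
  return 'none'; // docs, chore, etc. do not bump the version
}

console.log(bumpFor('feat(streaming): add SSE transport')); // prints "minor"
```

In practice the bump is computed by release tooling over the full commit history; this only illustrates the per-commit rule.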
Apache 2.0 © Framers