Cognitive Memory System

Personality-modulated, decay-aware memory grounded in cognitive science — replacing flat key-value memory with Ebbinghaus forgetting curves, Baddeley's working memory, spreading activation, and HEXACO-driven encoding biases.

Overview

Traditional agent memory systems treat memory as a flat store: ingest text, embed it, retrieve by similarity. This ignores decades of cognitive science on how biological memory actually works.

The Cognitive Memory System models memory as a dynamic, personality-modulated process:

  • Encoding is shaped by the agent's HEXACO personality traits and current emotional state (PAD model)
  • Forgetting follows the Ebbinghaus exponential decay curve with spaced repetition reinforcement
  • Retrieval combines six signals (strength, similarity, recency, emotional congruence, graph activation, importance)
  • Working memory enforces Baddeley's slot-based capacity limits (7 ± 2, personality-modulated)
  • Consolidation runs periodically to prune weak traces, merge clusters into schemas, and resolve conflicts

Core features (Batch 1) work with zero LLM calls. Advanced features (Batch 2 — observer, reflector, graph, consolidation) activate when configured and degrade gracefully when absent.

Cognitive Science Foundations

| Model | Application in AgentOS |
| --- | --- |
| Atkinson-Shiffrin | Sensory input → working memory → long-term memory pipeline |
| Baddeley's working memory | Slot-based capacity limits with activation levels |
| Tulving's LTM taxonomy | Episodic, semantic, procedural, prospective memory types |
| Ebbinghaus forgetting curve | Exponential strength decay over time |
| Yerkes-Dodson law | Encoding quality peaks at moderate arousal (inverted U) |
| Brown & Kulik flashbulb memories | High-emotion events create vivid, persistent traces |
| Anderson's ACT-R | Spreading activation through associative memory graph |
| HEXACO personality model | Trait-driven attention weights and memory capacity modulation |

Memory Types

Based on Tulving's long-term memory taxonomy:

| Type | Description | Example |
| --- | --- | --- |
| episodic | Autobiographical events | "User asked about deployment on Tuesday" |
| semantic | General knowledge/facts | "User prefers TypeScript over Python" |
| procedural | Skills and how-to | "To deploy, run wunderland deploy" |
| prospective | Future intentions | "Remind user about the PR review" |

Memory Scopes

| Scope | Visibility | Use Case |
| --- | --- | --- |
| thread | Single conversation | In-conversation working context |
| user | All conversations with a user | User preferences, facts, history |
| persona | All users of a persona | Persona's learned knowledge |
| organization | All agents in an org | Shared organizational knowledge |

Encoding

Encoding determines how strongly a new input is committed to memory. Four cognitive mechanisms combine:

1. HEXACO Personality → Encoding Weights

Each HEXACO trait modulates attention to specific content features:

| Trait | Effect |
| --- | --- |
| Openness | High O notices novel, creative content |
| Conscientiousness | High C notices procedures, structure |
| Emotionality | High E amplifies emotional content |
| Extraversion | High X notices social dynamics |
| Agreeableness | High A notices cooperation cues |
| Honesty-Humility | High H notices ethical/moral content |
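A minimal sketch of how trait-driven attention weighting could work, assuming a keyword-style feature detector has already tagged the content; the feature-to-trait mapping and the +0.5 scaling here are illustrative assumptions, not the actual AgentOS implementation:

```typescript
// Hypothetical mapping from detected content features to the HEXACO
// trait that boosts attention to them (illustrative, not the real table).
const TRAIT_FOR_FEATURE: Record<string, string> = {
  novelty: 'openness',
  procedure: 'conscientiousness',
  emotion: 'emotionality',
  social: 'extraversion',
  cooperation: 'agreeableness',
  ethical: 'honesty',
};

function attentionMultiplier(
  features: string[],
  traits: Record<string, number>, // each trait in [0, 1]
): number {
  // Baseline 1.0; each matched feature adds up to +0.5, scaled by the trait.
  return features.reduce((m, f) => {
    const trait = TRAIT_FOR_FEATURE[f];
    return trait ? m + 0.5 * (traits[trait] ?? 0.5) : m;
  }, 1.0);
}
```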

2. Yerkes-Dodson Arousal Curve

Encoding quality peaks at moderate arousal (inverted U). Very low (bored) and very high (panicked) arousal both impair encoding.
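The inverted U can be sketched with a simple quadratic; the exact curve AgentOS uses is not specified here, so the shape below (peak 1.0 at arousal 0.5, falling to 0.5 at the extremes) is an assumption:

```typescript
// Hypothetical inverted-U arousal boost: maximal at moderate arousal,
// impaired at both extremes (bored or panicked).
function arousalBoost(arousal: number): number {
  const distance = Math.abs(arousal - 0.5);
  return 1.0 - distance * distance * 2; // 1.0 at 0.5, 0.5 at 0 or 1
}
```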

3. Mood-Congruent Encoding

Content whose emotional valence matches the current mood is encoded more strongly.

4. Flashbulb Memories

When emotional intensity exceeds a threshold (default: 0.8), the memory becomes a flashbulb memory with 2x strength and 5x stability.

Composite Formula

S₀ = min(1.0, base × arousalBoost × emotionalBoost × attentionMultiplier × congruenceBoost × flashbulbBoost)
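The composite formula translates directly to code. The min-capped product mirrors the formula above; the interface field names are illustrative, not the actual AgentOS types:

```typescript
// Factors feeding initial encoding strength S₀ (names are assumptions).
interface EncodingFactors {
  base: number;
  arousalBoost: number;
  emotionalBoost: number;
  attentionMultiplier: number;
  congruenceBoost: number;
  flashbulbBoost: number; // e.g. 2.0 when emotional intensity > 0.8, else 1.0
}

function initialStrength(f: EncodingFactors): number {
  const product =
    f.base * f.arousalBoost * f.emotionalBoost *
    f.attentionMultiplier * f.congruenceBoost * f.flashbulbBoost;
  return Math.min(1.0, product); // strength is capped at 1.0
}
```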

Forgetting & Decay

Memory strength decays exponentially (Ebbinghaus curve):

S(t) = S₀ × e^(-dt / stability)

Spaced repetition: Each retrieval increases stability with diminishing returns, doubling the reinforcement interval.

Interference: New memories that overlap with existing ones cause retroactive interference (weakening old traces) and proactive interference (impairing new encoding).

Pruning: Traces below the pruning threshold (default: 0.05) are soft-deleted during consolidation, unless emotionally significant.
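Decay and reinforcement can be sketched as follows. The decay function follows the Ebbinghaus formula above; the 1.5× stability growth factor in `reinforce` is an assumption standing in for "diminishing returns", while the interval doubling matches the text:

```typescript
// Ebbinghaus decay: strength falls exponentially with elapsed time,
// moderated by the trace's stability.
function decayedStrength(s0: number, dt: number, stability: number): number {
  return s0 * Math.exp(-dt / stability);
}

// Spaced-repetition step: each retrieval raises stability (growth factor
// is a hypothetical value) and doubles the reinforcement interval.
function reinforce(stability: number, interval: number): { stability: number; interval: number } {
  return { stability: stability * 1.5, interval: interval * 2 };
}
```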

Retrieval

Six signals combine into a composite priority score:

| Signal | Weight | Description |
| --- | --- | --- |
| Strength | 0.25 | Current decay-adjusted strength |
| Similarity | 0.35 | Cosine similarity from vector search |
| Recency | 0.10 | Exponential recency boost |
| Emotional congruence | 0.15 | Mood-matching bias |
| Graph activation | 0.10 | Spreading activation score |
| Importance | 0.05 | Confidence-weighted importance |

Tip-of-the-tongue: Traces with high similarity but low strength are returned as partially retrieved, with suggested cues to help recovery.
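The weighted combination is a plain dot product over the six signals. A sketch using the default weights from the table, assuming each signal is normalized to [0, 1]:

```typescript
// The six retrieval signals, each normalized to [0, 1].
interface RetrievalSignals {
  strength: number;
  similarity: number;
  recency: number;
  emotionalCongruence: number;
  graphActivation: number;
  importance: number;
}

// Default weights from the table above (they sum to 1.0).
const WEIGHTS: RetrievalSignals = {
  strength: 0.25,
  similarity: 0.35,
  recency: 0.10,
  emotionalCongruence: 0.15,
  graphActivation: 0.10,
  importance: 0.05,
};

function priorityScore(s: RetrievalSignals): number {
  return (Object.keys(WEIGHTS) as (keyof RetrievalSignals)[])
    .reduce((sum, k) => sum + WEIGHTS[k] * s[k], 0);
}
```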

Working Memory

Slot-based capacity following Miller's number (7 ± 2), modulated by personality:

  • High openness: +1 slot (broader attention)
  • High conscientiousness: -1 slot (deeper focus)
  • Result clamped to [5, 9]

Slots track activation levels that decay each turn. Items below minimum activation are evicted (and may be encoded to long-term memory).
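The capacity rule above can be sketched directly; the 0.7 trait thresholds are assumptions (the text only says "high"):

```typescript
// Personality-modulated slot capacity: start from Miller's 7,
// adjust for openness/conscientiousness, clamp to [5, 9].
function slotCapacity(openness: number, conscientiousness: number): number {
  let capacity = 7;
  if (openness > 0.7) capacity += 1;          // broader attention
  if (conscientiousness > 0.7) capacity -= 1; // deeper focus
  return Math.min(9, Math.max(5, capacity));
}
```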

Prospective Memory

Future intentions with three trigger types:

| Trigger | Fires When | Example |
| --- | --- | --- |
| time_based | Current time ≥ trigger time | "Remind me at 3pm" |
| event_based | Named event occurs | "When user mentions deployment" |
| context_based | Semantic similarity exceeds threshold | "When we discuss pricing" |
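A trigger check can be modeled as a discriminated union over the three types; the field names and evaluation logic below are illustrative assumptions, not the actual AgentOS types:

```typescript
// Hypothetical prospective-memory trigger shapes.
type Trigger =
  | { kind: 'time_based'; at: number }               // epoch millis
  | { kind: 'event_based'; event: string }           // named event
  | { kind: 'context_based'; threshold: number };    // similarity cutoff

function shouldFire(
  t: Trigger,
  now: number,
  event: string | null,
  similarity: number,
): boolean {
  switch (t.kind) {
    case 'time_based':
      return now >= t.at;
    case 'event_based':
      return event === t.event;
    case 'context_based':
      return similarity >= t.threshold;
  }
}
```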

Memory Graph

An associative graph connects related memories with typed edges:

| Edge Type | Meaning |
| --- | --- |
| SHARED_ENTITY | Same entity mentioned |
| TEMPORAL_SEQUENCE | Created within 5 minutes |
| SAME_TOPIC | Shared topic cluster |
| CONTRADICTS | Conflicting information |
| CO_ACTIVATED | Retrieved together (Hebbian learning) |
| SCHEMA_INSTANCE | Episodic instance of semantic schema |

Spreading activation (Anderson's ACT-R): Given seed nodes, activation propagates through edges to surface associated memories.
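A minimal spreading-activation sketch in the ACT-R spirit: activation radiates from seed nodes one hop at a time, attenuated by a decay factor. The adjacency representation, decay value, and hop limit are illustrative assumptions:

```typescript
// Propagate activation from seed memories through an adjacency map.
// Each hop passes on (current activation × decay) to neighbors.
function spreadActivation(
  edges: Map<string, string[]>,
  seeds: string[],
  decay = 0.5,
  hops = 2,
): Map<string, number> {
  const activation = new Map<string, number>(seeds.map((s) => [s, 1.0]));
  let frontier = seeds;
  for (let hop = 0; hop < hops; hop++) {
    const next: string[] = [];
    for (const node of frontier) {
      const energy = (activation.get(node) ?? 0) * decay;
      for (const neighbor of edges.get(node) ?? []) {
        // Accumulate: nodes reachable by several paths gain more activation.
        activation.set(neighbor, (activation.get(neighbor) ?? 0) + energy);
        next.push(neighbor);
      }
    }
    frontier = next;
  }
  return activation;
}
```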

Consolidation Pipeline

Runs periodically (default: hourly) with five steps:

  1. Decay sweep — Apply Ebbinghaus curve, soft-delete weak traces
  2. Co-activation replay — Create SHARED_ENTITY and TEMPORAL_SEQUENCE edges
  3. Schema integration — Cluster detection → LLM summarization into semantic schemas
  4. Conflict resolution — Resolve CONTRADICTS edges (personality-driven)
  5. Spaced repetition — Reinforce traces past their reinforcement interval
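The five steps above run strictly in sequence; a hypothetical orchestration sketch (the step method names here are assumptions, not the actual manager API):

```typescript
// Illustrative consolidation loop mirroring the five steps above.
interface ConsolidationSteps {
  decaySweep(): Promise<void>;          // 1. Ebbinghaus decay + pruning
  replayCoActivations(): Promise<void>; // 2. edge creation
  integrateSchemas(): Promise<void>;    // 3. cluster → schema (LLM)
  resolveConflicts(): Promise<void>;    // 4. CONTRADICTS resolution
  applySpacedRepetition(): Promise<void>; // 5. reinforcement
}

async function runConsolidation(m: ConsolidationSteps): Promise<void> {
  await m.decaySweep();
  await m.replayCoActivations();
  await m.integrateSchemas();
  await m.resolveConflicts();
  await m.applySpacedRepetition();
}
```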

Observer / Reflector

Observer: Monitors conversation tokens. When threshold is reached (default: 30K tokens), extracts observation notes via LLM, biased by personality traits.

Reflector: Consolidates accumulated observation notes into long-term memory traces. Merges redundant observations, detects conflicts, and resolves based on personality.

Quick Start

import { CognitiveMemoryManager } from '@framers/agentos/memory';

const memory = new CognitiveMemoryManager();

await memory.initialize({
  workingMemory: existingWorkingMemory,
  knowledgeGraph: existingKnowledgeGraph,
  vectorStore: existingVectorStore,
  embeddingManager: existingEmbeddingManager,
  agentId: 'my-agent',
  traits: { openness: 0.7, conscientiousness: 0.8, emotionality: 0.5 },
  moodProvider: () => ({ valence: 0, arousal: 0.3, dominance: 0 }),
  featureDetectionStrategy: 'keyword',
});

// Current PAD mood state, used by the calls below
const mood = { valence: 0, arousal: 0.3, dominance: 0 };

// Encode
const trace = await memory.encode('I prefer deploying with Docker Compose', mood, 'content', {
  type: 'semantic',
  scope: 'user',
  tags: ['deployment', 'docker'],
});

// Retrieve
const result = await memory.retrieve('How should I deploy?', mood, { topK: 5 });

// Assemble for prompt injection
const context = await memory.assembleForPrompt('How should I deploy?', 1000, mood);

Prompt Assembly

Token-budgeted context assembly across six sections with overflow redistribution:

| Section | Budget % | Content |
| --- | --- | --- |
| Working Memory | 15% | Active context from slot buffer |
| Semantic Recall | 45% | Retrieved semantic/procedural traces |
| Recent Episodic | 25% | Retrieved episodic traces |
| Prospective Alerts | 5% | Triggered reminders |
| Graph Associations | 5% | Spreading activation context |
| Observation Notes | 5% | Recent observer notes |
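The per-section split can be sketched as a simple proportional allocation; the section keys are taken from the table, while the `Math.floor` rounding (and omitting the overflow-redistribution step) is a simplification:

```typescript
// Budget shares from the table above; they sum to 1.0.
const BUDGET_SHARES: Record<string, number> = {
  workingMemory: 0.15,
  semanticRecall: 0.45,
  recentEpisodic: 0.25,
  prospectiveAlerts: 0.05,
  graphAssociations: 0.05,
  observationNotes: 0.05,
};

// Split a total token budget across sections (no overflow handling here).
function allocateBudget(totalTokens: number): Record<string, number> {
  const out: Record<string, number> = {};
  for (const [section, share] of Object.entries(BUDGET_SHARES)) {
    out[section] = Math.floor(totalTokens * share);
  }
  return out;
}
```

In the real pipeline, tokens left unused by a sparse section (e.g. no triggered prospective alerts) are redistributed to the others.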

Source Files

All source lives in packages/agentos/src/memory/. See the full technical specification in packages/agentos/docs/COGNITIVE_MEMORY.md.