Memory
Reactive Agents provides a four-tier memory architecture inspired by cognitive science.
Memory Types
Working Memory
Short-term, capacity-limited (default 7 items). Automatically evicts based on a FIFO or importance policy.
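The eviction behavior above can be sketched as follows. This is a minimal stand-in, not the library's implementation; `WorkingMemorySketch` and its fields are illustrative names.

```typescript
// Capacity-limited store: when full, evict either the oldest item (FIFO)
// or the least important one, mirroring the two policies described above.
type WorkingItem = { content: string; importance: number };

class WorkingMemorySketch {
  private items: WorkingItem[] = [];
  constructor(
    private capacity = 7, // default capacity from the docs
    private policy: "fifo" | "importance" = "fifo",
  ) {}

  add(content: string, importance = 0.5): void {
    this.items.push({ content, importance });
    if (this.items.length > this.capacity) {
      if (this.policy === "fifo") {
        this.items.shift(); // evict the oldest entry
      } else {
        // evict the lowest-importance entry
        let min = 0;
        for (let i = 1; i < this.items.length; i++) {
          if (this.items[i].importance < this.items[min].importance) min = i;
        }
        this.items.splice(min, 1);
      }
    }
  }

  contents(): string[] {
    return this.items.map((i) => i.content);
  }
}
```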
```typescript
// Items are automatically managed during agent execution.
// Working memory holds the current conversation context,
// recent tool results, and active reasoning state.
```
Semantic Memory
Long-term factual knowledge stored in SQLite with FTS5 full-text search.
```typescript
// Semantic entries have importance scores, access counts,
// and support Zettelkasten-style linking between concepts.
```
Episodic Memory
Event log of agent actions and experiences. Supports session snapshots for conversation continuity.
Procedural Memory
Stored workflows and learned procedures with success rate tracking. Agents improve their strategies over time.
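The success rate tracking mentioned above can be sketched as a running tally per stored procedure. The `Procedure` shape and function names here are illustrative assumptions, not the library's API.

```typescript
// Each recorded run updates a per-procedure tally; the derived success
// rate can then be used to rank or refine stored procedures over time.
type Procedure = { name: string; runs: number; successes: number };

function recordRun(p: Procedure, succeeded: boolean): Procedure {
  return {
    ...p,
    runs: p.runs + 1,
    successes: p.successes + (succeeded ? 1 : 0),
  };
}

function successRate(p: Procedure): number {
  return p.runs === 0 ? 0 : p.successes / p.runs;
}
```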
Memory Tiers
The runtime still labels tiers internally as "1" and "2", but the builder API prefers:
| User-facing | Builder call | Storage / search | Use case |
|---|---|---|---|
| Default | .withMemory() or { tier: "standard" } | bun:sqlite WAL, FTS5 full-text | Most applications (no embedding API required) |
| Enhanced | { tier: "enhanced" } | WAL + sqlite-vec | FTS5 + KNN vector similarity |
Passing .withMemory("1") or .withMemory("2") still works but logs a deprecation warning; use the forms above.
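The legacy-to-named mapping can be sketched like this. It is illustrative only; `resolveTier` and the exact warning text are assumptions, not the library's internal code.

```typescript
// Resolve a user-supplied tier to one of the two named tiers.
// Legacy "1"/"2" still work but trigger a deprecation warning.
type Tier = "standard" | "enhanced";

function resolveTier(input?: string | { tier?: string }): Tier {
  const raw = typeof input === "string" ? input : input?.tier ?? "standard";
  if (raw === "1" || raw === "2") {
    const named: Tier = raw === "1" ? "standard" : "enhanced";
    console.warn(`withMemory("${raw}") is deprecated; use { tier: "${named}" }`);
    return named;
  }
  if (raw === "standard" || raw === "enhanced") return raw;
  throw new Error(`Unknown memory tier: ${raw}`);
}
```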
Default tier
```typescript
const agent = await ReactiveAgents.create()
  .withMemory() // Same internal tier as legacy "1" — FTS5 search, no embeddings required
  .build();
```
Enhanced tier (vector search)
Requires an embedding provider:
```sh
EMBEDDING_PROVIDER=openai
EMBEDDING_MODEL=text-embedding-3-small
```

```typescript
const agent = await ReactiveAgents.create()
  .withMemory({ tier: "enhanced" }) // FTS5 + KNN vector search (legacy: "2")
  .build();
```
Memory Bootstrap
At the start of each task, the memory layer bootstraps context:
- Loads recent semantic entries for the agent
- Retrieves the last session snapshot
- Generates a markdown projection of relevant knowledge
- Injects this into the agent’s system prompt
This gives agents continuity across conversations without explicit context management.
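The bootstrap steps above can be sketched as a small projection function. The entry shape, `projectContext`, and the markdown layout are assumptions for illustration, not the real API.

```typescript
// Render the most important semantic entries plus the last session
// snapshot as a markdown block suitable for system-prompt injection.
type SemanticEntry = { concept: string; content: string; importance: number };

function projectContext(
  entries: SemanticEntry[],
  snapshot: string | null,
  limit = 5,
): string {
  const top = [...entries]
    .sort((a, b) => b.importance - a.importance)
    .slice(0, limit);
  const lines = ["## Known context", ...top.map((e) => `- **${e.concept}**: ${e.content}`)];
  if (snapshot) lines.push("## Last session", snapshot);
  return lines.join("\n");
}
```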
ExperienceStore — Cross-Agent Learning
The ExperienceStore records tool usage patterns and error recovery hints across all runs, then injects relevant tips at bootstrap time. This lets agents benefit from what previous agents (or previous runs of the same agent) learned.
Enabling
```typescript
const agent = await ReactiveAgents.create()
  .withProvider("anthropic")
  .withMemory({ tier: "standard", dbPath: "./memory-db" })
  .withExperienceLearning() // Enable ExperienceStore
  .withReasoning()
  .withTools()
  .build();
```
How It Works
Section titled “How It Works”- After each task, the execution engine records: which tools were used, whether the run succeeded, step count, and token count — keyed by
(taskType, toolPattern). - At the next bootstrap, patterns with ≥ 2 occurrences and ≥ 50% success rate are loaded and converted to natural-language tips injected into the agent’s context.
- Error recoveries are tracked separately: when a tool fails and the agent recovers, the recovery strategy is stored and suggested on future similar errors.
```
◉ [experience] 1 tip(s) from prior runs
```
The tip in context looks like:
```
For query tasks, use [file-write] — 100% success rate over 3 runs (avg 4 steps, 1,190 tokens)
```
What Gets Recorded
Section titled “What Gets Recorded”| Field | Description |
|---|---|
| Tool pattern | Ordered unique list of tools called in the run |
| Success / failure | Whether the task completed without errors |
| Avg steps | Running average across all occurrences |
| Avg tokens | Running average token usage |
| Error recoveries | (tool, errorPattern) → recovery mappings |
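The running averages in the table above can be maintained incrementally, without storing every run. This sketch uses assumed field and function names; only the thresholds (≥ 2 occurrences, ≥ 50% success) come from the docs.

```typescript
// Fold each new run into stored running averages using
// the standard incremental form: avg += (x - avg) / n.
type ToolPattern = {
  occurrences: number;
  successes: number;
  avgSteps: number;
  avgTokens: number;
};

function recordOccurrence(
  p: ToolPattern,
  run: { success: boolean; steps: number; tokens: number },
): ToolPattern {
  const n = p.occurrences + 1;
  return {
    occurrences: n,
    successes: p.successes + (run.success ? 1 : 0),
    avgSteps: p.avgSteps + (run.steps - p.avgSteps) / n,
    avgTokens: p.avgTokens + (run.tokens - p.avgTokens) / n,
  };
}

// A pattern becomes tip-eligible per the thresholds described earlier.
function tipEligible(p: ToolPattern): boolean {
  return p.occurrences >= 2 && p.successes / p.occurrences >= 0.5;
}
```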
Inspecting the Database
Section titled “Inspecting the Database”Experience is stored in the same SQLite database as memory:
```sh
bun -e "
import { Database } from 'bun:sqlite';
const db = new Database('./memory-db');
const patterns = db.query('SELECT * FROM experience_tool_patterns').all();
console.log(patterns);
"
```
SessionStoreService — Persistent Chat Sessions
Section titled “SessionStoreService — Persistent Chat Sessions”SessionStoreService persists conversation history to SQLite so sessions survive process restarts and can be resumed later. Enable it via agent.session({ persist: true }).
Enabling
```typescript
const agent = await ReactiveAgents.create()
  .withProvider("anthropic")
  .withMemory({ tier: "standard", dbPath: "./memory-db" })
  .withReasoning()
  .build();

// Start a named session — persisted to SQLite
const session = agent.session({ persist: true, id: "my-project-session" });
await session.chat("What are the main risks in this architecture?");
await session.chat("How would you mitigate the top one?");

// On next process start, restore by ID
const restoredSession = agent.session({ persist: true, id: "my-project-session" });
const reply = await restoredSession.chat("Continue from where we left off");
// The agent has full history of the previous conversation
```
How It Works
Each session is stored as a row in the agent_sessions SQLite table (in the same database as memory). The session record contains:
- Session ID (user-provided or auto-generated UUID)
- Agent ID and provider
- Full message history as JSON
- Created/updated timestamps
When persist: true is passed and an id is provided, the session is loaded from the database at construction time. Each new message is written back immediately.
Sessions are cleaned up by calling session.end(), which removes the database record.
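The load-on-construct, write-back-per-message, and end-deletes-record lifecycle can be sketched with a Map standing in for the agent_sessions table. All names here are illustrative, not the service's real internals.

```typescript
// In-memory stand-in for the agent_sessions table described above.
type Message = { role: "user" | "assistant"; content: string };
type SessionRow = {
  id: string;
  agentId: string;
  messages: Message[];
  createdAt: number;
  updatedAt: number;
};

const table = new Map<string, SessionRow>();

// persist: true with an id → load the existing row, else create one.
function openSession(id: string, agentId: string): SessionRow {
  const existing = table.get(id);
  if (existing) return existing;
  const now = Date.now();
  const row: SessionRow = { id, agentId, messages: [], createdAt: now, updatedAt: now };
  table.set(id, row);
  return row;
}

// Each new message is written back immediately.
function appendMessage(row: SessionRow, msg: Message): void {
  row.messages.push(msg);
  row.updatedAt = Date.now();
  table.set(row.id, row);
}

// session.end() removes the database record.
function endSession(id: string): void {
  table.delete(id);
}
```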
Inspecting Sessions
```sh
bun -e "
import { Database } from 'bun:sqlite';
const db = new Database('./memory-db');
const sessions = db.query('SELECT id, agent_id, created_at, json_array_length(messages) as msg_count FROM agent_sessions').all();
console.table(sessions);
"
```
MemoryConsolidatorService — Background Memory Intelligence
The MemoryConsolidatorService runs background maintenance cycles on episodic memory: decaying stale entries, pruning noise, and replaying recent experience for potential semantic promotion.
Enabling
```typescript
const agent = await ReactiveAgents.create()
  .withMemory({ tier: "standard", dbPath: "./memory-db" })
  .withMemoryConsolidation({
    threshold: 10,        // Trigger consolidation after 10 new episodic entries
    decayFactor: 0.95,    // Multiply importance × 0.95 each cycle
    pruneThreshold: 0.1,  // Remove entries with importance < 0.1
  })
  .build();
```
All config fields are optional — defaults are threshold: 10, decayFactor: 0.95, pruneThreshold: 0.1.
Consolidation Cycle
Each cycle runs two phases:
- COMPRESS — All episodic entries have their `importance` multiplied by `decayFactor`. Entries that fall below `pruneThreshold` are deleted, keeping the episodic log focused on recent, high-signal events.
- REPLAY — Counts episodic entries added since the last consolidation run. This count can drive future LLM-based semantic extraction (connecting episodic → semantic memory).
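The COMPRESS phase reduces to a map-and-filter over the episodic log. This sketch uses assumed types; with the default values, an entry starting at importance 1.0 survives roughly log(0.1)/log(0.95) ≈ 45 cycles before being pruned.

```typescript
// One COMPRESS pass: decay every entry's importance, then drop
// entries that have fallen below the prune threshold.
type EpisodicEntry = { event: string; importance: number };

function compress(
  entries: EpisodicEntry[],
  decayFactor = 0.95,
  pruneThreshold = 0.1,
): EpisodicEntry[] {
  return entries
    .map((e) => ({ ...e, importance: e.importance * decayFactor }))
    .filter((e) => e.importance >= pruneThreshold);
}
```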
The cycle is triggered automatically when the agent has accumulated threshold new episodic entries since the last run. You can also trigger it manually via the Effect API:
```typescript
import { MemoryConsolidatorService } from "@reactive-agents/memory";
import { Effect } from "effect";

// Trigger a consolidation cycle for a specific agent
// (yield* must run inside an Effect generator)
const program = Effect.gen(function* () {
  yield* MemoryConsolidatorService.consolidate("my-agent-id");
});