
Memory

Reactive Agents provides a four-tier memory architecture inspired by cognitive science.

Working memory: short-term and capacity-limited (default 7 items). Items are evicted automatically under a FIFO or importance-based policy.

// Items are automatically managed during agent execution.
// Working memory holds the current conversation context,
// recent tool results, and active reasoning state.
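The eviction behavior can be pictured with a small sketch. The class and method names here are illustrative, not the library's actual API:

```typescript
interface MemoryItem {
  content: string;
  importance: number; // 0..1
}

// Hypothetical capacity-limited buffer with two eviction policies.
class WorkingMemory {
  private items: MemoryItem[] = [];

  constructor(
    private capacity = 7,
    private policy: "fifo" | "importance" = "fifo",
  ) {}

  add(item: MemoryItem): void {
    if (this.items.length >= this.capacity) {
      if (this.policy === "fifo") {
        this.items.shift(); // evict the oldest item
      } else {
        // evict the least important item
        const idx = this.items.reduce(
          (min, it, i) => (it.importance < this.items[min].importance ? i : min),
          0,
        );
        this.items.splice(idx, 1);
      }
    }
    this.items.push(item);
  }

  size(): number {
    return this.items.length;
  }
}
```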

Semantic memory: long-term factual knowledge stored in SQLite with FTS5 full-text search.

// Semantic entries have importance scores, access counts,
// and support Zettelkasten-style linking between concepts.
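Based on the fields above, a semantic entry might have a shape like this. The property names are assumptions, not the library's real schema:

```typescript
// Hypothetical shape of a semantic memory entry.
interface SemanticEntry {
  id: string;
  content: string; // indexed by FTS5 for full-text search
  importance: number; // 0..1 importance score
  accessCount: number; // incremented on each retrieval
  links: string[]; // Zettelkasten-style links to related entry IDs
}
```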

Episodic memory: an event log of agent actions and experiences. Supports session snapshots for conversation continuity.

Procedural memory: stored workflows and learned procedures with success-rate tracking, letting agents improve their strategies over time.

The runtime still labels tiers internally as "1" and "2", but the builder API prefers:

| User-facing | Builder call | Storage / search | Use case |
| --- | --- | --- | --- |
| Default | `.withMemory()` or `{ tier: "standard" }` | bun:sqlite WAL, FTS5 full-text | Most applications (no embedding API required) |
| Enhanced | `{ tier: "enhanced" }` | WAL + sqlite-vec | FTS5 + KNN vector similarity |

Passing .withMemory("1") or .withMemory("2") still works but logs a deprecation warning; use the forms above.

const agent = await ReactiveAgents.create()
  .withMemory() // Same internal tier as legacy "1" — FTS5 search, no embeddings required
  .build();

The enhanced tier requires an embedding provider:

EMBEDDING_PROVIDER=openai
EMBEDDING_MODEL=text-embedding-3-small

const agent = await ReactiveAgents.create()
  .withMemory({ tier: "enhanced" }) // FTS5 + KNN vector search (legacy: "2")
  .build();

At the start of each task, the memory layer bootstraps context:

  1. Loads recent semantic entries for the agent
  2. Retrieves the last session snapshot
  3. Generates a markdown projection of relevant knowledge
  4. Injects this into the agent’s system prompt

This gives agents continuity across conversations without explicit context management.
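The bootstrap steps above can be sketched roughly as follows. The function and field names are hypothetical, not the library's real API:

```typescript
// Rough sketch of memory bootstrap: project loaded entries and the last
// session snapshot into a markdown block for the system prompt.
function buildBootstrapPrompt(
  recentEntries: { content: string; importance: number }[], // 1. recent semantic entries
  lastSnapshot: string | null, // 2. last session snapshot
): string {
  // 3. Generate a markdown projection of relevant knowledge
  const knowledge = recentEntries
    .map((e) => `- (importance ${e.importance.toFixed(2)}) ${e.content}`)
    .join("\n");
  const session = lastSnapshot ? `\n\n## Last session\n${lastSnapshot}` : "";
  // 4. The caller injects this string into the agent's system prompt
  return `## Known facts\n${knowledge}${session}`;
}
```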

The ExperienceStore records tool usage patterns and error recovery hints across all runs, then injects relevant tips at bootstrap time. This lets agents benefit from what previous agents (or previous runs of the same agent) learned.

const agent = await ReactiveAgents.create()
  .withProvider("anthropic")
  .withMemory({ tier: "standard", dbPath: "./memory-db" })
  .withExperienceLearning() // Enable the ExperienceStore
  .withReasoning()
  .withTools()
  .build();
How it works:

  1. After each task, the execution engine records which tools were used, whether the run succeeded, the step count, and the token count — keyed by (taskType, toolPattern).
  2. At the next bootstrap, patterns with ≥ 2 occurrences and ≥ 50% success rate are loaded and converted to natural-language tips injected into the agent’s context.
  3. Error recoveries are tracked separately: when a tool fails and the agent recovers, the recovery strategy is stored and suggested on future similar errors.
When prior experience applies, the bootstrap log shows a line like:

◉ [experience] 1 tip(s) from prior runs

The tip in context looks like:

For query tasks, use [file-write] — 100% success rate over 3 runs (avg 4 steps, 1,190 tokens)
Each stored pattern tracks:

| Field | Description |
| --- | --- |
| Tool pattern | Ordered unique list of tools called in the run |
| Success / failure | Whether the task completed without errors |
| Avg steps | Running average across all occurrences |
| Avg tokens | Running average token usage |
| Error recoveries | (tool, errorPattern) → recovery mappings |
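A minimal sketch of how these running statistics could be maintained per (taskType, toolPattern) key. The names and shapes here are assumptions, not the ExperienceStore's actual internals:

```typescript
interface PatternStats {
  occurrences: number;
  successes: number;
  avgSteps: number;
  avgTokens: number;
}

// Update running averages incrementally after each run: avg += (x - avg) / n
function recordRun(
  stats: PatternStats,
  run: { success: boolean; steps: number; tokens: number },
): PatternStats {
  const n = stats.occurrences + 1;
  return {
    occurrences: n,
    successes: stats.successes + (run.success ? 1 : 0),
    avgSteps: stats.avgSteps + (run.steps - stats.avgSteps) / n,
    avgTokens: stats.avgTokens + (run.tokens - stats.avgTokens) / n,
  };
}

// A pattern becomes a tip once it has ≥ 2 occurrences and ≥ 50% success rate.
function qualifiesAsTip(stats: PatternStats): boolean {
  return stats.occurrences >= 2 && stats.successes / stats.occurrences >= 0.5;
}
```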

Experience is stored in the same SQLite database as memory:

bun -e "
import { Database } from 'bun:sqlite';
const db = new Database('./memory-db');
const patterns = db.query('SELECT * FROM experience_tool_patterns').all();
console.log(patterns);
"

SessionStoreService — Persistent Chat Sessions


SessionStoreService persists conversation history to SQLite so sessions survive process restarts and can be resumed later. Enable it via agent.session({ persist: true }).

const agent = await ReactiveAgents.create()
  .withProvider("anthropic")
  .withMemory({ tier: "standard", dbPath: "./memory-db" })
  .withReasoning()
  .build();

// Start a named session — persisted to SQLite
const session = agent.session({ persist: true, id: "my-project-session" });
await session.chat("What are the main risks in this architecture?");
await session.chat("How would you mitigate the top one?");

// On the next process start, restore by ID
const restoredSession = agent.session({ persist: true, id: "my-project-session" });
const reply = await restoredSession.chat("Continue from where we left off");
// The agent now has the full history of the previous conversation

Each session is stored as a row in the agent_sessions SQLite table (in the same database as memory). The session record contains:

  • Session ID (user-provided or auto-generated UUID)
  • Agent ID and provider
  • Full message history as JSON
  • Created/updated timestamps
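Inferring from the fields listed above, a session row might deserialize to something like this. The property names are assumptions, not the real column names:

```typescript
// Plausible TypeScript shape for an agent_sessions row.
interface SessionRecord {
  id: string; // user-provided or auto-generated UUID
  agentId: string;
  provider: string;
  messages: { role: "user" | "assistant" | "system"; content: string }[]; // stored as JSON
  createdAt: string; // created/updated timestamps
  updatedAt: string;
}
```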

When persist: true is passed and an id is provided, the session is loaded from the database at construction time. Each new message is written back immediately.

Sessions are cleaned up by calling session.end(), which removes the database record.

bun -e "
import { Database } from 'bun:sqlite';
const db = new Database('./memory-db');
const sessions = db.query('SELECT id, agent_id, created_at, json_array_length(messages) as msg_count FROM agent_sessions').all();
console.table(sessions);
"

MemoryConsolidatorService — Background Memory Intelligence


The MemoryConsolidatorService runs background maintenance cycles on episodic memory: decaying stale entries, pruning noise, and replaying recent experience for potential semantic promotion.

const agent = await ReactiveAgents.create()
  .withMemory({ tier: "standard", dbPath: "./memory-db" })
  .withMemoryConsolidation({
    threshold: 10, // Trigger consolidation after 10 new episodic entries
    decayFactor: 0.95, // Multiply importance by 0.95 each cycle
    pruneThreshold: 0.1, // Remove entries with importance < 0.1
  })
  .build();

All config fields are optional — defaults are threshold: 10, decayFactor: 0.95, pruneThreshold: 0.1.

Each cycle runs two phases:

  1. COMPRESS — All episodic entries have their importance multiplied by decayFactor. Entries that fall below pruneThreshold are deleted, keeping the episodic log focused on recent, high-signal events.
  2. REPLAY — Counts episodic entries added since the last consolidation run. This count can drive future LLM-based semantic extraction (connecting episodic → semantic memory).
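The COMPRESS phase can be sketched as a pure function over episodic entries. This is a simplified illustration under the defaults above, not the service's actual code:

```typescript
interface EpisodicEntry {
  id: number;
  importance: number;
}

// Decay every entry's importance, then prune entries that fall below the threshold.
function compress(
  entries: EpisodicEntry[],
  decayFactor = 0.95,
  pruneThreshold = 0.1,
): EpisodicEntry[] {
  return entries
    .map((e) => ({ ...e, importance: e.importance * decayFactor }))
    .filter((e) => e.importance >= pruneThreshold);
}
```

One consequence of these defaults: an entry that starts at importance 1.0 and is never reinforced falls below the 0.1 prune threshold after roughly 45 cycles.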

The cycle is triggered automatically when the agent has accumulated threshold new episodic entries since the last run. You can also trigger it manually via the Effect API:

import { MemoryConsolidatorService } from "@reactive-agents/memory";
import { Effect } from "effect";

// Trigger a consolidation cycle for a specific agent.
// `yield*` must run inside an Effect generator:
await Effect.runPromise(
  Effect.gen(function* () {
    yield* MemoryConsolidatorService.consolidate("my-agent-id");
  }),
);