
Memory

Reactive Agents provides a four-tier memory architecture inspired by cognitive science.

Working Memory: short-term and capacity-limited (default 7 items). Items are automatically evicted using a FIFO or importance-based policy.

// Items are automatically managed during agent execution.
// Working memory holds the current conversation context,
// recent tool results, and active reasoning state.
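To make the eviction behavior concrete, here is an illustrative sketch of a capacity-limited buffer with the two policies described above. This is not the framework's internal code; the class and field names are assumptions for illustration only.

```typescript
// Illustrative sketch only — not the library's actual implementation.
type WorkingItem = { content: string; importance: number };

class WorkingMemorySketch {
  private items: WorkingItem[] = [];

  constructor(
    private capacity = 7,
    private policy: "fifo" | "importance" = "fifo",
  ) {}

  add(item: WorkingItem): void {
    if (this.items.length >= this.capacity) {
      if (this.policy === "fifo") {
        // Evict the oldest item
        this.items.shift();
      } else {
        // Evict the least important item
        let min = 0;
        this.items.forEach((it, i) => {
          if (it.importance < this.items[min].importance) min = i;
        });
        this.items.splice(min, 1);
      }
    }
    this.items.push(item);
  }

  contents(): string[] {
    return this.items.map((it) => it.content);
  }
}
```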

Semantic Memory: long-term factual knowledge stored in SQLite with FTS5 full-text search.

// Semantic entries have importance scores, access counts,
// and support Zettelkasten-style linking between concepts.
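The shape described in those comments can be sketched as follows. The interface and field names are assumptions for illustration, not the actual storage schema.

```typescript
// Illustrative sketch — field names are assumptions, not the real schema.
interface SemanticEntry {
  id: string;
  content: string;
  importance: number;  // relevance weight used for ranking and decay
  accessCount: number; // bumped on each retrieval
  links: string[];     // ids of related entries (Zettelkasten-style)
}

// Retrieval bumps the access count, so frequently used
// knowledge can rank higher over time.
function touch(entry: SemanticEntry): SemanticEntry {
  return { ...entry, accessCount: entry.accessCount + 1 };
}
```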

Episodic Memory: an event log of agent actions and experiences. Supports session snapshots for conversation continuity.

Procedural Memory: stored workflows and learned procedures with success-rate tracking, so agents improve their strategies over time.
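Success-rate tracking can be modeled as a running ratio over recorded runs. This is a hedged sketch under assumed names, not the framework's actual bookkeeping:

```typescript
// Illustrative sketch — type and function names are assumptions.
interface Procedure {
  name: string;
  runs: number;
  successes: number;
}

// Record the outcome of one execution of a stored procedure.
function recordRun(p: Procedure, succeeded: boolean): Procedure {
  return {
    ...p,
    runs: p.runs + 1,
    successes: p.successes + (succeeded ? 1 : 0),
  };
}

function successRate(p: Procedure): number {
  return p.runs === 0 ? 0 : p.successes / p.runs;
}
```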

| Tier | Storage | Search | Use Case |
| --- | --- | --- | --- |
| 1 | bun:sqlite (WAL) | FTS5 full-text | Most applications |
| 2 | bun:sqlite (WAL) + sqlite-vec | FTS5 + KNN vector | Semantic similarity |
const agent = await ReactiveAgents.create()
  .withMemory("1") // FTS5 search, no embeddings needed
  .build();

Tier 2 requires an embedding provider:

EMBEDDING_PROVIDER=openai
EMBEDDING_MODEL=text-embedding-3-small
const agent = await ReactiveAgents.create()
  .withMemory("2") // FTS5 + KNN vector search
  .build();

At the start of each task, the memory layer bootstraps context:

  1. Loads recent semantic entries for the agent
  2. Retrieves the last session snapshot
  3. Generates a markdown projection of relevant knowledge
  4. Injects this into the agent’s system prompt

This gives agents continuity across conversations without explicit context management.
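Steps 3 and 4 above can be sketched roughly as follows. The function and field names here are illustrative assumptions, not the framework's API:

```typescript
// Illustrative sketch of the bootstrap projection — not the real API.
interface BootstrapEntry {
  content: string;
  importance: number;
}

// Step 3: project recent semantic entries and the last session
// snapshot into a markdown block.
function projectToMarkdown(
  entries: BootstrapEntry[],
  snapshot: string | null,
): string {
  const parts = ["## Relevant knowledge", ...entries.map((e) => `- ${e.content}`)];
  if (snapshot) parts.push("## Last session", snapshot);
  return parts.join("\n");
}

// Step 4: inject the projection into the agent's system prompt.
function injectContext(systemPrompt: string, projection: string): string {
  return `${systemPrompt}\n\n${projection}`;
}
```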

The ExperienceStore records tool usage patterns and error recovery hints across all runs, then injects relevant tips at bootstrap time. This lets agents benefit from what previous agents (or previous runs of the same agent) learned.

const agent = await ReactiveAgents.create()
  .withProvider("anthropic")
  .withMemory({ tier: "1", dbPath: "./memory-db" })
  .withExperienceLearning() // Enable ExperienceStore
  .withReasoning()
  .withTools()
  .build();
  1. After each task, the execution engine records: which tools were used, whether the run succeeded, step count, and token count — keyed by (taskType, toolPattern).
  2. At the next bootstrap, patterns with ≥ 2 occurrences and ≥ 50% success rate are loaded and converted to natural-language tips injected into the agent’s context.
  3. Error recoveries are tracked separately: when a tool fails and the agent recovers, the recovery strategy is stored and suggested on future similar errors.
At bootstrap, the agent logs a line such as:

◉ [experience] 1 tip(s) from prior runs

The tip in context looks like:

For query tasks, use [file-write] — 100% success rate over 3 runs (avg 4 steps, 1,190 tokens)
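The selection rule from step 2 and the tip format above can be sketched as follows. The types and helper names are assumptions for illustration:

```typescript
// Illustrative sketch — not the ExperienceStore's actual code.
interface ToolPatternRecord {
  taskType: string;
  tools: string[];
  occurrences: number;
  successes: number;
  avgSteps: number;
  avgTokens: number;
}

// Keep patterns with >= 2 occurrences and >= 50% success rate,
// and render each as a natural-language tip.
function toTips(patterns: ToolPatternRecord[]): string[] {
  return patterns
    .filter((p) => p.occurrences >= 2 && p.successes / p.occurrences >= 0.5)
    .map((p) => {
      const rate = Math.round((p.successes / p.occurrences) * 100);
      return (
        `For ${p.taskType} tasks, use [${p.tools.join(", ")}] — ` +
        `${rate}% success rate over ${p.occurrences} runs ` +
        `(avg ${p.avgSteps} steps, ${p.avgTokens.toLocaleString("en-US")} tokens)`
      );
    });
}
```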
| Field | Description |
| --- | --- |
| Tool pattern | Ordered unique list of tools called in the run |
| Success / failure | Whether the task completed without errors |
| Avg steps | Running average across all occurrences |
| Avg tokens | Running average token usage |
| Error recoveries | (tool, errorPattern) → recovery mappings |

Experience is stored in the same SQLite database as memory:

bun -e "
import { Database } from 'bun:sqlite';
const db = new Database('./memory-db');
const patterns = db.query('SELECT * FROM experience_tool_patterns').all();
console.log(patterns);
"

MemoryConsolidatorService — Background Memory Intelligence

The MemoryConsolidatorService runs background maintenance cycles on episodic memory: decaying stale entries, pruning noise, and replaying recent experience for potential semantic promotion.

const agent = await ReactiveAgents.create()
  .withMemory({ tier: "1", dbPath: "./memory-db" })
  .withMemoryConsolidation({
    threshold: 10, // Trigger consolidation after 10 new episodic entries
    decayFactor: 0.95, // Multiply importance by 0.95 each cycle
    pruneThreshold: 0.1, // Remove entries with importance < 0.1
  })
  .build();

All config fields are optional — defaults are threshold: 10, decayFactor: 0.95, pruneThreshold: 0.1.

Each cycle runs two phases:

  1. COMPRESS — All episodic entries have their importance multiplied by decayFactor. Entries that fall below pruneThreshold are deleted, keeping the episodic log focused on recent, high-signal events.
  2. REPLAY — Counts episodic entries added since the last consolidation run. This count can drive future LLM-based semantic extraction (connecting episodic → semantic memory).
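The COMPRESS phase amounts to a decay-then-prune pass. A minimal sketch, assuming a simplified entry shape (not the service's actual code):

```typescript
// Illustrative sketch of the COMPRESS phase.
interface EpisodicEntry {
  id: string;
  importance: number;
}

// Decay every entry's importance by decayFactor, then drop
// entries that fall below pruneThreshold.
function compress(
  entries: EpisodicEntry[],
  decayFactor = 0.95,
  pruneThreshold = 0.1,
): EpisodicEntry[] {
  return entries
    .map((e) => ({ ...e, importance: e.importance * decayFactor }))
    .filter((e) => e.importance >= pruneThreshold);
}
```

With the defaults, an entry starting at importance 0.5 survives many cycles, while one already at 0.1 decays to 0.095 and is pruned on the next pass.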

The cycle is triggered automatically when the agent has accumulated threshold new episodic entries since the last run. You can also trigger it manually via the Effect API:

import { MemoryConsolidatorService } from "@reactive-agents/memory";
import { Effect } from "effect";

// Trigger a consolidation cycle for a specific agent
// (yield* must run inside an Effect generator)
const program = Effect.gen(function* () {
  yield* MemoryConsolidatorService.consolidate("my-agent-id");
});