Type-Safe from End to End
Every agent, tool, memory entry, and LLM call is validated by Effect-TS schemas. Catch errors at compile time. Runtime validation at every service boundary. Typed errors mean failures are explicit, not surprises.
Fluent Builder API
Chain capabilities like a sentence — readable and naturally discoverable
6 LLM Providers
Anthropic, OpenAI, Gemini, Ollama, LiteLLM (40+ models) — one unified interface
5 Reasoning Strategies
ReAct, Reflexion, Plan-Execute, Tree-of-Thought, Adaptive
Built-in Tool Suite
web-search, file-read, code-execute, http-get, calculator
4-Tier Memory
Working, Semantic, Episodic, Procedural — all composable layers
Web Framework Hooks
React, Vue & Svelte — useAgentStream, useAgent, createAgentStream out of the box
Effect-TS Type Safety
RuntimeErrors union, typed hooks, zero runtime surprises
const agent = await ReactiveAgents
  .create()
  .withProvider("anthropic")
  .withReasoning()                  // ReAct
  .withTools()                      // Built-ins
  .withMemory({ tier: "enhanced" })
  .withObservability()
  .build();

const result = await agent.run(task);
// .output  .metadata  .debrief
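The typed-error guarantee above can be illustrated without the framework at all. A minimal sketch of the pattern, using hypothetical error variants (the real RuntimeErrors union differs):

```typescript
// Hypothetical error union: illustrates the tagged-union pattern,
// not the framework's actual RuntimeErrors type.
type AgentError =
  | { _tag: "ProviderError"; status: number }
  | { _tag: "ToolError"; tool: string }
  | { _tag: "BudgetExceeded"; spentUSD: number };

function describe(err: AgentError): string {
  // Exhaustive switch: the compiler flags any unhandled variant,
  // so failures are explicit instead of surprises.
  switch (err._tag) {
    case "ProviderError":
      return `provider failed with HTTP ${err.status}`;
    case "ToolError":
      return `tool "${err.tool}" failed`;
    case "BudgetExceeded":
      return `budget exceeded at $${err.spentUSD.toFixed(2)}`;
  }
}
```

Adding a new variant to the union makes every non-exhaustive `switch` a compile error, which is the "catch errors at compile time" property in miniature.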
5 Entropy Sources
Token, structural, semantic, behavioral, context pressure — real-time reasoning quality
Early Stop
Detect convergence and stop early — save tokens and time automatically
Strategy Switching
Auto-switch reasoning strategy when entropy shows the agent is stuck
Trajectory Analysis
Track entropy over time: converging, flat, diverging, oscillating
Per-Model Calibration
Conformal thresholds adapt to each model's characteristics over time
Local Learning
Thompson Sampling bandit learns optimal strategies per task category
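As an illustration of what trajectory classification involves, here is a self-contained sketch (not the library's implementation): a least-squares slope over recent entropy samples, plus a sign-flip count to catch oscillation. The thresholds are arbitrary choices for this sketch.

```typescript
type Signal = "converging" | "flat" | "diverging" | "oscillating";

// Sketch of an entropy-trajectory classifier. Assumes at least
// three samples; slope thresholds are illustrative.
function classify(entropy: number[]): Signal {
  const n = entropy.length;
  const meanX = (n - 1) / 2;
  const meanY = entropy.reduce((a, b) => a + b, 0) / n;
  let num = 0;
  let den = 0;
  for (let i = 0; i < n; i++) {
    num += (i - meanX) * (entropy[i] - meanY);
    den += (i - meanX) ** 2;
  }
  const slope = num / den; // least-squares trend of the series
  let flips = 0;
  for (let i = 2; i < n; i++) {
    const d1 = entropy[i - 1] - entropy[i - 2];
    const d2 = entropy[i] - entropy[i - 1];
    if (d1 * d2 < 0) flips++; // direction reversal
  }
  if (flips >= (n - 2) / 2) return "oscillating";
  if (slope < -0.02) return "converging";
  if (slope > 0.02) return "diverging";
  return "flat";
}
```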
.withReactiveIntelligence({
  controller: {
    earlyStop: true,
    contextCompression: true,
    strategySwitch: true,
  },
})

// Dashboard output:
🧠 Reasoning Signal
├─ Grade: B   Signal: converging ↘
├─ Trace: ████▓▒░ 0.65→0.25
└─ Tip: Enable earlyStop
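Thompson Sampling itself is compact enough to sketch. This standalone version (an illustration, not the framework's learner) keeps a Beta posterior per strategy and plays the strategy with the highest sampled success rate:

```typescript
type Arm = { name: string; successes: number; failures: number };

// Beta(a, b) with integer a, b: the a-th smallest of (a + b - 1)
// uniform draws is Beta(a, b)-distributed (order-statistic trick).
function sampleBeta(a: number, b: number): number {
  const u = Array.from({ length: a + b - 1 }, () => Math.random());
  u.sort((x, y) => x - y);
  return u[a - 1];
}

// Thompson Sampling: draw from each arm's posterior, play the best draw.
function pick(arms: Arm[]): Arm {
  let best = arms[0];
  let bestDraw = -1;
  for (const arm of arms) {
    const draw = sampleBeta(arm.successes + 1, arm.failures + 1);
    if (draw > bestDraw) {
      bestDraw = draw;
      best = arm;
    }
  }
  return best;
}

// After a run, fold the outcome back into the posterior.
function update(arm: Arm, succeeded: boolean): void {
  if (succeeded) arm.successes++;
  else arm.failures++;
}
```

Arms with little data get wide posteriors and are still explored; proven arms get played more often, which is the "learns optimal strategies per task category" behavior in its simplest form.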
10-Phase Execution Engine
bootstrap → guardrail → think → act → observe → complete
EventBus Auto-Wiring
Zero manual instrumentation — MetricsCollector subscribes automatically
Live Log Streaming
Real-time phase events at 4 verbosity levels: minimal → debug
Distributed Tracing
OpenTelemetry spans with correlation IDs across every phase
Smart Alerts
Bottleneck detection, budget warnings, optimization suggestions
Cost Metrics
Token count and USD estimate tracked and reported per run
┌──────────────────────────────┐
│ ✅ Execution Summary         │
├──────────────────────────────┤
│ Duration: 13.9s   Steps: 7   │
│ Tokens: 1,963   Cost: ~$0.003│
└──────────────────────────────┘

📊 Execution Timeline
├─ [bootstrap]  100ms     ✅
├─ [guardrail]  50ms      ✅
├─ [think]      10,001ms  ⚠️ 7 iter
├─ [act]        1,000ms   ✅ 2 tools
└─ [complete]   28ms      ✅
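The cost figure in the summary is simple arithmetic once token counts are known. A sketch with illustrative per-token prices (real provider pricing differs, varies by model, and is usually split between input and output tokens):

```typescript
// Illustrative prices in USD per token; not real provider pricing.
function estimateCostUSD(
  inputTokens: number,
  outputTokens: number,
  inputPricePerToken: number,
  outputPricePerToken: number,
): number {
  return (
    inputTokens * inputPricePerToken + outputTokens * outputPricePerToken
  );
}

// e.g. 1,500 input + 463 output tokens at hypothetical rates
// lands in the same ballpark as the ~$0.003 shown above.
const cost = estimateCostUSD(1_500, 463, 1e-6, 3e-6);
```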
Prompt Injection Detection
Blocks injection attacks with configurable threshold scoring
PII & Toxicity Scrubbing
Auto-detects sensitive data and toxic content before LLM ingestion
Kill Switch
Pause, resume, or terminate any running agent with zero state corruption
Behavioral Contracts
Tool deny lists, iteration caps, and output pattern enforcement
Budget Enforcement
Per-request, daily, monthly cost caps — auto-halts before overspend
Approval Gates
Human-in-the-loop confirmation for high-risk tool execution
.withGuardrails({
  injectionThreshold: 0.8,
  piiThreshold: 0.9,
  toxicityThreshold: 0.7,
})
.withKillSwitch()
.withBehavioralContracts({
  toolDenyList: ["shell-execute"],
  maxIterations: 20,
})
.withCostTracking({
  budget: { perRequest: 0.10 },
})
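The cap logic behind withCostTracking can be pictured as a pre-flight check before each spend. A hypothetical sketch, not the framework's actual enforcer:

```typescript
type Budget = { perRequest?: number; daily?: number; monthly?: number };
type Spend = { requestUSD: number; dailyUSD: number; monthlyUSD: number };

// Returns the first violated cap, or null if the run may proceed.
// Checking before the LLM call is what makes it halt *before* overspend.
function checkBudget(budget: Budget, spend: Spend): string | null {
  if (budget.perRequest !== undefined && spend.requestUSD >= budget.perRequest)
    return "perRequest";
  if (budget.daily !== undefined && spend.dailyUSD >= budget.daily)
    return "daily";
  if (budget.monthly !== undefined && spend.monthlyUSD >= budget.monthly)
    return "monthly";
  return null;
}
```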
Token Streaming
AsyncGenerator with TextDelta, IterationProgress, and SSE adapter
Persistent Gateway
24/7 agent harness with crons, webhooks, adaptive heartbeats
A2A Protocol
Agent-to-agent JSON-RPC 2.0 with SSE streaming and Agent Cards
Hallucination Detection
Semantic entropy + fact decomposition verification layer
Chat Sessions
Multi-turn conversation with adaptive routing and persistent memory
Error Recovery
Retry policies, global error handler, clean FiberFailure unwrapping
for await (const e of agent.runStream(task, {
  signal: ctrl.signal,
})) {
  if (e._tag === "TextDelta") write(e.text);
  if (e._tag === "IterationProgress")
    log(e.iteration, e.maxIterations);
}
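For reference, an SSE adapter ultimately emits plain text frames. A minimal encoder for the wire format (one JSON payload per frame is an assumption for this sketch):

```typescript
// Minimal Server-Sent Events frame: a "data:" line with the payload,
// terminated by a blank line, per the SSE wire format.
function toSSEFrame(event: unknown): string {
  return `data: ${JSON.stringify(event)}\n\n`;
}
```

A server would write one such frame per stream event; browsers consume them via EventSource or fetch-based readers.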
React Hooks
useAgentStream + useAgent — token streaming and one-shot calls from any React component
Vue Composables
useAgentStream + useAgent with reactive refs — drop into any Vue 3 component
Svelte Stores
createAgentStream writable store — reactive $agent.text, $agent.status out of the box
One-Line SSE Endpoint
AgentStream.toSSE() returns a standard Response — works with Next.js App Router, SvelteKit, Nuxt, Bun
60s to First Agent
One install, three lines, full observability dashboard — then layer in capabilities as you need them
rax CLI + 3,472 Tests
Scaffold, run, inspect — 25 modular packages, battle-tested across 409 test files
# scaffold a new project
$ rax init my-agent \
    --template standard

# run with cloud provider
$ rax run "Analyze codebase" \
    --provider anthropic

# run local — zero API cost
$ rax run "Summarize logs" \
    --provider ollama \
    --model qwen3:14b
Beacon Agent Grid
Live grid of all connected agents with real-time cognitive state and entropy status
Entropy Signal Charts
D3-powered entropy trajectory: watch reasoning quality converge, plateau, or diverge in real time
Step-by-Step Trace Panel
Full Thought → Action → Observation breakdown per iteration, live-streamed or replayed from SQLite
Debrief Summaries
Structured post-run cards: task, plan, outcome, sources, confidence score, and agent self-critique
Interactive Chat
Multi-turn conversational sessions tied to agent runs — same context, persistent history
Lab: Visual Builder
Configure and launch agents without code — skills browser, tool workshop, gateway agent manager
# Terminal 1: start studio
$ rax cortex --dev
UI → http://localhost:5173

# Terminal 2: connect agent
$ rax run "Analyze codebase" \
    --provider anthropic \
    --cortex

// or in code:
.withCortex()  // one line
// URL: CORTEX_URL env → localhost:4321
Not sure where to begin? Pick the path that matches where you are.
bun add reactive-agents

import { ReactiveAgents } from 'reactive-agents'

const agent = await ReactiveAgents.create()
  .withProvider('anthropic')
  .withReasoning()        // ReAct loop: Think → Act → Observe
  .withTools()            // Built-in: web-search, file-read, code-execute
  .withObservability()
  .build()

const result = await agent.run('Find the top 3 TypeScript testing frameworks')
console.log(result.output)

One package. Composable layers. Enable exactly what you need — skip everything you don’t.
Type-Safe from End to End
Every agent, tool, memory entry, and LLM call is validated by Effect-TS schemas. Catch errors at compile time. Runtime validation at every service boundary. Typed errors mean failures are explicit, not surprises.
Composable Layer Architecture
Enable exactly the capabilities you need. Memory without guardrails? Just reasoning and tools? Full production stack? Each layer is an independent Effect Layer with explicit dependencies — no hidden coupling, no wasted resources.
Observable Execution Engine
Every agent task flows through a deterministic 10-phase lifecycle with before/after/error hooks. Every phase emits spans, metrics, and EventBus events. You see exactly what your agent decided, why, and how long it took.
5 Reasoning Strategies
ReAct for tool use. Reflexion for self-improvement. Plan-Execute for structured work. Tree-of-Thought for creative exploration. Adaptive to auto-select the best strategy. Register your own strategies too.
Model-Adaptive Intelligence
Context profiles tune prompts, budgets, and tool strategies per model tier. LLM-based tool classification, completion gap detection, and circuit breakers help smaller models punch above their weight — same code, better results across the full model spectrum.
Great DX
60 seconds to your first agent. Progressive disclosure — start with 3 lines, add reasoning, memory, guardrails, and observability as you need them. The builder API reads like a sentence. rax CLI scaffolds, runs, and inspects.
Cortex Local Studio
A full-featured companion web app — live agent grid, entropy signal charts, step-by-step reasoning traces, debrief summaries, interactive chat, and a visual agent builder. One flag away: .withCortex() or rax run --cortex.
vs. LangChain / LlamaIndex
Python-first, dynamically typed, monolithic. Reactive Agents is TypeScript-native with Effect-TS type safety, fully modular layers, and built-in observability — you see every decision your agent makes, not just the final output.
vs. Vercel AI SDK
Great for streaming and tool calling, but stops there. Reactive Agents adds 5 reasoning strategies, persistent 4-tier memory, guardrails, verification, cost routing, and a 10-phase execution engine with full observability.
vs. AutoGen / CrewAI
Multi-agent frameworks without type safety, composable architecture, or model-adaptive intelligence. Reactive Agents gives you all three — plus model-adaptive context profiles that help local models perform far beyond naive prompting.
vs. Building From Scratch
25 production-ready packages with 3,879 tests covering memory, reasoning, tools, A2A protocol, gateway, reactive intelligence, safety, cost, identity, and orchestration. Focus on your agent’s logic, not infrastructure.
// Token-by-token streaming via AsyncGenerator
for await (const event of agent.runStream("Write a haiku about TypeScript")) {
  if (event._tag === "TextDelta") process.stdout.write(event.text);
  if (event._tag === "IterationProgress")
    console.log(`Step ${event.iteration}/${event.maxIterations}`);
  if (event._tag === "StreamCompleted") console.log("\nDone!");
}

// One-liner SSE endpoint
Bun.serve({ fetch: (req) => AgentStream.toSSE(agent.runStream("Hello")) });

// Multi-turn conversation with memory
const session = agent.session();

await session.chat("What's the capital of France?");
// → "Paris is the capital of France."

await session.chat("What's the population?");
// → "Paris has approximately 2.1 million residents..."
// (remembers context from previous turn)

// Autonomous agent that runs 24/7
const agent = await ReactiveAgents.create()
  .withProvider("anthropic")
  .withReasoning()
  .withTools()
  .withGateway({
    heartbeat: { intervalMs: 3_600_000, policy: "adaptive" },
    crons: [{ schedule: "0 9 * * MON", instruction: "Weekly report" }],
    webhooks: [{ path: "/github", adapter: "github" }],
    policies: { dailyTokenBudget: 50_000 },
  })
  .build();

agent.start(); // Runs forever, Ctrl+C to stop

ReactiveAgentBuilder
→ CoreServices (EventBus, AgentService, TaskService)
→ LLMProvider (Anthropic, OpenAI, Gemini, Ollama, LiteLLM 40+)
→ Memory (Working, Semantic, Episodic, Procedural)
→ Reasoning (ReAct, Reflexion, Plan-Execute, ToT, Adaptive)
→ Tools (Registry, Sandbox, MCP Client)
→ A2A (Agent Cards, JSON-RPC, SSE Streaming)
→ Guardrails (Injection, PII, Toxicity, Behavioral Contracts)
→ Verification (Semantic Entropy, Fact Decomposition, Hallucination Detection)
→ Cost (Complexity Router, Budget Enforcer)
→ Identity (Certificates, RBAC)
→ Observability (Tracing, Metrics, Structured Logging)
→ Interaction (5 Modes, Checkpoints, Preference Learning)
→ Gateway (Heartbeats, Crons, Webhooks, Policy Engine)
→ ExecutionEngine (10-phase lifecycle with hooks)