# Agent Skills

Reactive Agents has two distinct skill systems that serve different audiences:

|  | Developer Skills | Living Skills |
| --- | --- | --- |
| Audience | Coding agents (Cursor, Copilot, Claude Code) building with the framework | Agents running inside the framework |
| Purpose | Implementation playbooks for developers | Runtime behavior guidance for agents |
| Format | `SKILL.md` published at `/.well-known/skills/` | `SKILL.md` loaded from filesystem or SQLite |
| Consumed by | External coding tools via HTTP discovery | The framework's `SkillResolverService` at bootstrap |
| Evolves? | No — static reference docs | Yes — LLM-refined over time based on agent performance |

## Part 1: Developer Skills (for coding agents)


This docs site publishes Developer Skills so coding agents can discover reusable implementation playbooks directly from your docs URL.

At build time, the docs generate:

  • /.well-known/skills/index.json — skill index
  • /.well-known/skills/<skill-name>/SKILL.md — canonical skill file
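Given those two endpoints, a coding agent needs only the origin of the docs site to discover every skill. A minimal TypeScript sketch of the URL construction (the origin and the `{ skills: [{ name }] }` index shape are illustrative assumptions, not a documented contract):

```ts
// Build the well-known discovery URLs for a docs origin.
// ASSUMPTION: index.json has the shape { skills: [{ name }] }.
type SkillIndex = { skills: { name: string }[] };

function indexUrl(origin: string): string {
  return new URL("/.well-known/skills/index.json", origin).toString();
}

function skillUrl(origin: string, name: string): string {
  return new URL(`/.well-known/skills/${name}/SKILL.md`, origin).toString();
}

// Usage (network calls omitted): fetch(indexUrl(origin)), parse as SkillIndex,
// then fetch(skillUrl(origin, entry.name)) for each entry.
```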

The list below is generated from the live skills collection at build time, and each link points to the published markdown endpoint:

  • a2a-agent-networking — Expose agents as A2A JSON-RPC servers discoverable via Agent Cards, and connect agents to remote A2A agents using the client discovery and capability-matching APIs.
  • builder-api-reference — Configure a ReactiveAgentBuilder with the correct layer composition for any agent use case.
  • context-and-continuity — Manage context pressure, configure message windowing, and use checkpoint tools to preserve critical findings across context compaction.
  • cost-budget-enforcement — Set per-request, per-session, daily, and monthly spend limits, configure rate limiting and circuit breakers, and isolate costs per user or tenant.
  • gateway-persistent-agents — Build always-on agents with heartbeats, cron scheduling, webhook triggers, and a persistent policy engine using the Gateway layer.
  • identity-and-guardrails — Enable prompt injection detection, PII masking, behavioral contracts, kill switch controls, and agent identity for safe production deployments.
  • interaction-autonomy — Configure one of 5 human-agent interaction modes (autonomous through interrogative) and implement mode-switching, approval gates, and collaborative workflows.
  • mcp-tool-integration — Connect agents to MCP servers using stdio or HTTP transport, with automatic Docker lifecycle management and transport auto-detection.
  • memory-patterns — Configure the 4-layer memory system with SQLite/FTS5/vec storage for persistent agent knowledge that survives sessions.
  • multi-agent-orchestration — Compose multiple agents as callable tools, spawn dynamic sub-agents at runtime, and wire remote A2A agents into a coordinated pipeline.
  • observability-instrumentation — Configure verbosity levels, live log streaming, JSONL file export, model I/O logging, and audit trails for monitoring agent execution.
  • provider-patterns — Configure per-provider behavior, understand streaming quirks, and use the 7-hook adapter system for optimal performance across LLM providers.
  • quality-assurance — Enable output verification (hallucination detection, semantic entropy, self-consistency), add post-run verification steps, and run LLM-scored evals across 5 quality dimensions.
  • reactive-agents — Orient to the Reactive Agents framework, understand the builder API shape, and select the right capability skills for your task.
  • reasoning-strategy-selection — Select and configure the right reasoning strategy, native FC behavior, and output quality pipeline for any task type.
  • recipe-code-assistant — Full recipe for a code assistant with shell execution, file read/write, git integration, and sandboxed code running.
  • recipe-embedded-app-agent — Full recipe for embedding an agent in a Next.js app with streaming API routes, React hooks, progressive disclosure of reasoning steps, and error handling.
  • recipe-orchestrated-workflow — Full recipe for a 3-agent pipeline (researcher → writer → reviewer) coordinated by a lead orchestrator agent using withAgentTool() and withOrchestration().
  • recipe-persistent-monitor — Full recipe for a persistent monitoring agent with heartbeats, daily cron reports, webhook triggers, daily token budgets, and graceful shutdown.
  • recipe-research-agent — Full recipe for a web research agent with memory, semantic search, hallucination verification, and source-cited synthesis.
  • recipe-saas-agent — Full recipe for a production-ready SaaS agent with guardrails, per-user cost isolation, rate limiting, A2A exposure, audit logging, and graceful error handling.
  • shell-execution-sandbox — Enable and configure the sandboxed shell execution tool with command allowlists, Docker isolation, and audit logging for agents that run terminal commands.
  • tool-creation — Create custom tools with defineTool() or tool(), register them with the agent, and configure required-tools gates and per-tool call budgets.
  • ui-integration — Wire agents into React, Vue, and Svelte frontends with streaming hooks, and set up server-side Next.js App Router or Express API routes using AgentStream.toSSE().

Skills are stored in:

  • apps/docs/skills/<skill-name>/SKILL.md

Current example:

  • apps/docs/skills/reactive-agents-framework/SKILL.md

Each SKILL.md must include frontmatter fields:

  • name (string)
  • description (string)

Example:

```md
---
name: reactive-agents-framework
description: Design and implement production-grade TypeScript AI agents using Reactive Agents.
---
# Reactive Agents Framework Skill
...
```
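Because both frontmatter fields are required, a build step can sanity-check each skill file before publishing. A sketch (hand-rolled parsing for flat `key: value` frontmatter only; a real build would use a YAML parser):

```ts
// Extract flat `key: value` frontmatter fields from a SKILL.md string.
function parseFrontmatter(md: string): Record<string, string> {
  const match = md.match(/^---\n([\s\S]*?)\n---/);
  if (!match) return {};
  const fields: Record<string, string> = {};
  for (const line of match[1].split("\n")) {
    const idx = line.indexOf(":");
    if (idx > 0) fields[line.slice(0, idx).trim()] = line.slice(idx + 1).trim();
  }
  return fields;
}

// A skill file is valid only if both required fields are present and non-empty.
function isValidSkill(md: string): boolean {
  const fm = parseFrontmatter(md);
  return Boolean(fm.name && fm.description);
}
```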

The docs app uses:

  • astro-skills integration for discovery routes
  • A Starlight-safe custom content loader for the skills collection

Key files:

  • apps/docs/astro.config.mjs
  • apps/docs/src/content.config.ts
  • apps/docs/src/content/skills-loader.ts

Build docs:

```sh
bun run docs:build
```

Verify generated outputs:

```sh
cd apps/docs
find dist -maxdepth 8 -type f | grep '.well-known/skills' | sort
cat dist/.well-known/skills/index.json
```

You should see entries like:

  • dist/.well-known/skills/index.json
  • dist/.well-known/skills/reactive-agents-framework/SKILL.md
To add a new skill:

  1. Create a new folder under apps/docs/skills/ using kebab-case (for example, reasoning-optimization).
  2. Add SKILL.md with valid name + description frontmatter.
  3. Rebuild docs.
  4. Confirm the new skill appears in dist/.well-known/skills/index.json.
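Step 4 can be scripted: parse the generated index and assert that the new skill name is present. A sketch, assuming the index carries entries with a `name` field (the exact shape is not specified here):

```ts
// Check whether a skill name appears in a generated index.json payload.
// ASSUMPTION: entries carry a `name` field; adjust to the real index shape.
function hasSkill(indexJson: string, name: string): boolean {
  const parsed = JSON.parse(indexJson);
  const entries: { name?: string }[] = Array.isArray(parsed) ? parsed : parsed.skills ?? [];
  return entries.some((e) => e.name === name);
}

// Usage: hasSkill(readFileSync("dist/.well-known/skills/index.json", "utf8"), "my-new-skill")
```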

This lets external coding agents consume implementation guidance that matches Reactive Agents architecture and conventions, directly from the public docs. These skills help developers build with the framework — they are not consumed by agents running inside it.


## Part 2: Living Skills (for agents running inside the framework)


The Living Skills System is a runtime capability that discovers, loads, evolves, and manages skills for agents built with Reactive Agents. Unlike Developer Skills above, Living Skills are consumed by the agent itself during execution — they guide the agent’s behavior, not the developer’s.

Skills are the actionable distillation of agent memory — what an agent has learned to do well, refined over time.

```ts
const agent = await ReactiveAgents.create()
  .withProvider("anthropic")
  .withReasoning()
  .withSkills({
    paths: ["./my-skills/"], // Additional scan paths
    evolution: { mode: "suggest" }, // "auto" | "suggest" | "locked"
    overrides: { "critical-skill": { evolutionMode: "locked" } },
  })
  .withReactiveIntelligence() // Enables entropy-driven skill activation
  .build();
```

Skills are discovered from three sources, merged with precedence:

| Source | Path | Default Mode |
| --- | --- | --- |
| Learned | SQLite (`skills` table) | `auto` |
| Project-level | `./<agentId>/skills/`, `./.agents/skills/` | `locked` |
| User-level | `~/.agents/skills/`, `~/.reactive-agents/skills/` | `locked` |

On name collision, learned skills always win over installed.
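The collision rule amounts to a precedence-aware merge: installed (project/user) skills load first, then learned skills overwrite any entry with the same name. A sketch (the types and shapes here are illustrative, not the framework's internal API):

```ts
type Skill = { name: string; source: "learned" | "project" | "user" };

// Merge skills so that learned entries win on name collision.
function mergeSkills(installed: Skill[], learned: Skill[]): Map<string, Skill> {
  const byName = new Map<string, Skill>();
  for (const s of installed) byName.set(s.name, s); // project/user level first
  for (const s of learned) byName.set(s.name, s);   // learned overwrites on collision
  return byName;
}
```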

Skills follow the agentskills.io open standard:

```md
---
name: github-review
description: Review GitHub PRs for correctness, style, and security.
metadata:
  requires: web-search citation-formatter
  allowed-tools: gh-api file-read
---
## Steps
1. Fetch the PR diff
2. Review each changed file...
## Examples
...
```
The skill lifecycle runs Bootstrap → Catalog → Activation → Post-Run Learning → Background Refinement:

  1. Bootstrap: SkillResolver combines SQLite + filesystem skills, ranks by confidence
  2. Catalog: Skills appear in `<available_skills>` XML in the system prompt
  3. Activation: Model calls `activate_skill` or the controller pre-activates on entropy match
  4. Post-Run: LearningEngine updates skill config (strategy, temperature, success rate)
  5. Refinement: MemoryConsolidator CONNECT phase triggers LLM refinement of instructions
Skills are promoted through confidence tiers based on usage:

| Tier | Threshold | Behavior |
| --- | --- | --- |
| tentative | < 5 uses or < 80% success | Catalog only — model decides when to activate |
| trusted | 5-20 uses, >= 80% success | Controller may pre-activate on entropy match |
| expert | > 20 uses, >= 90% success | Auto-injected at bootstrap |
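The tier thresholds above translate directly into a classification function. A sketch (only the thresholds come from the table; the field names and the expert-before-trusted check order are assumptions):

```ts
type Tier = "tentative" | "trusted" | "expert";

// Classify a skill by the usage/success thresholds in the tier table.
function skillTier(uses: number, successRate: number): Tier {
  if (uses > 20 && successRate >= 0.9) return "expert";
  if (uses >= 5 && successRate >= 0.8) return "trusted"; // the table's 5-20 band, checked after expert
  return "tentative";
}
```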

Skill content is budget-aware — smaller models get compressed skill bodies:

| Tier | Budget | Default Verbosity |
| --- | --- | --- |
| local | 512 tokens | condensed |
| mid | 1,500 tokens | summary |
| large | 4,000 tokens | full |
| frontier | 8,000 tokens | full |

When a skill is too large, the injection guard degrades through modes: full → summary → condensed → catalog-only. The `get_skill_section` meta-tool (auto-included for local/mid tiers) lets agents fetch specific sections on demand without expanding base context.
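The degradation ladder can be sketched as picking the richest mode whose rendered size fits the tier budget (the per-mode token estimates here are hypothetical inputs, not framework values):

```ts
type Mode = "full" | "summary" | "condensed" | "catalog-only";

// Pick the richest injection mode whose token cost fits the budget,
// falling through full → summary → condensed → catalog-only.
function pickMode(
  tokensByMode: Record<Exclude<Mode, "catalog-only">, number>,
  budget: number,
): Mode {
  const ladder: Exclude<Mode, "catalog-only">[] = ["full", "summary", "condensed"];
  for (const mode of ladder) {
    if (tokensByMode[mode] <= budget) return mode;
  }
  return "catalog-only"; // nothing fits: list the skill in the catalog only
}
```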

```ts
// List all loaded skills
const skills = await agent.skills();

// Export a skill to SKILL.md format
await agent.exportSkill("data-analysis", "./exported-skills/");

// Load a skill at runtime
await agent.loadSkill("./new-skill/");

// Trigger manual refinement pass
await agent.refineSkills();
```
Skill meta-tools:

| Tool | When Available | Purpose |
| --- | --- | --- |
| `activate_skill` | Always (when skills enabled) | Inject skill instructions into context |
| `get_skill_section` | Local/mid tiers only | Fetch a specific section without expanding base context |