⚡ Memory Search Journey

How we discovered our AI agent couldn't search 88.5% of its own knowledge — and fixed it in 24 hours.

TL;DR

Our AI agent (Sene) had semantic memory search enabled — but it was broken for 24 days, then once fixed, only indexed 11.5% of our knowledge base. Brad's one question ("Don't we store things more in playbooks & SOPs?") exposed the gap. Fixed in one config line.

The Numbers
24 · Days memory was silently broken
88.5% · Knowledge invisible to search
396 · Files now indexed
3,028 · Searchable chunks

Timeline — March 6–7, 2026
Mar 6, ~7:00 AM
🔍 Research Phase

Ran a 15-option comparison of memory enhancement solutions, scored on sovereignty, cost, footgun risk, and capability. Used a bitcoiner-first framework: local > cloud, self-hosted > third-party.

Key finding: "The best memory upgrade is the one you already own." OpenClaw's built-in hybrid search + temporal decay was the winner. No external dependencies needed.


Mar 6, ~8:20 AM
🔴 Discovery: Broken for 24 Days

Root-caused why memory_search had never worked. On Feb 10, Brad asked to enable it. The embedding backend (OpenAI) was configured — but memory-core was never added to plugins.allow or plugins.slots.memory.

The tool appeared in the system prompt. The agent referenced it in its instructions. But the plugin was never actually loaded: 24 days of silent failure, logged as regression #56 (config-without-verification).

Separately, the Ollama baseUrl was set to a URL ending in /api. The client appends /api itself, so every request hit a doubled path and returned 404.
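The doubled-path failure is easy to reproduce. A minimal sketch (the endpoint shown is Ollama's embeddings route; the exact path the client appends is an assumption about its internals):

```python
# Sketch of the baseUrl bug: the client appends the API path itself,
# so a baseUrl that already ends in /api doubles it.
ENDPOINT = "/api/embeddings"  # Ollama's embeddings route

bad_base = "http://localhost:11434/api"  # what was configured
good_base = "http://localhost:11434"     # the fix

print(bad_base + ENDPOINT)   # .../api/api/embeddings -> 404
print(good_base + ENDPOINT)  # .../api/embeddings
```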

Mar 6, ~9:00 AM
🟢 Fix #1: Plugin Activation

Three config patches applied:

// 1. Allow the plugin
plugins.allow: ["memory-core"]

// 2. Assign it to the memory slot
plugins.slots.memory: "memory-core"

// 3. Fix the Ollama URL
memorySearch.remote.baseUrl: "http://localhost:11434"


Mar 6, ~9:30 AM
⚙️ Advanced Features Enabled

Switched from OpenAI embeddings to local Ollama (nomic-embed-text) — sovereign, no API cost, no data leaving the machine. Then enabled the full feature set:

provider: "ollama"         // local, sovereign
hybrid: true               // vector 0.7 + BM25 0.3
mmr: true (λ=0.7)          // deduplicate results
decay: 30-day half-life    // recent > old
sessions: true             // search conversations too
cache: 50,000 entries      // don't re-embed
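The scoring pipeline these settings describe can be sketched roughly as follows. This is a simplified model of hybrid retrieval with decay and MMR, not OpenClaw's actual implementation; the function names and score normalization are assumptions:

```python
def decay(age_days: float, half_life: float = 30.0) -> float:
    """30-day half-life: a memory's weight halves every 30 days."""
    return 0.5 ** (age_days / half_life)

def hybrid_score(vec_sim: float, bm25: float, age_days: float) -> float:
    """Blend vector similarity (0.7) with BM25 keyword score (0.3), then decay."""
    return (0.7 * vec_sim + 0.3 * bm25) * decay(age_days)

def mmr_select(candidates, k: int, lam: float = 0.7):
    """Greedy Maximal Marginal Relevance: trade relevance against similarity
    to already-selected results, so near-duplicate chunks don't crowd the top.
    candidates: list of (id, relevance, {other_id: similarity})."""
    selected, pool = [], list(candidates)
    while pool and len(selected) < k:
        def mmr(c):
            cid, rel, sims = c
            redundancy = max((sims.get(s[0], 0.0) for s in selected), default=0.0)
            return lam * rel - (1 - lam) * redundancy
        best = max(pool, key=mmr)
        selected.append(best)
        pool.remove(best)
    return [cid for cid, _, _ in selected]
```

Under this model a 60-day-old perfect match scores 0.25, so a merely decent hit from last week can outrank it — which is the point of recency weighting.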


Mar 6, ~10:00 AM
✅ Verified Working

First successful memory_search call returned real results with file paths, line numbers, and relevance scores. Brad's test: "Prove you can use that tool."

Logged as a milestone in the personality changelog. 30-day measurement window started.



Mar 7, ~9:14 AM
📊 Tool Usage Audit: The Smoking Gun

Brad asked for a tool-call analysis of the last 24 hours:

exec:          210
read:           76
edit:           66
write:          12
memory_search:   7
memory_get:      0
─────────────────────
total:         376

7 searches out of 376 calls, and zero follow-up memory_get reads. We had spent a full day building a semantic memory system, and it accounted for 1.86% of tool calls.
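The 1.86% figure is just the raw share of calls (the audit total includes tools not broken out above):

```python
# Tool-call counts from the 24-hour audit.
calls = {"exec": 210, "read": 76, "edit": 66, "write": 12,
         "memory_search": 7, "memory_get": 0}
total = 376  # includes calls to tools not listed above

share = calls["memory_search"] / total
print(f"{share:.2%}")  # 1.86%
```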


Mar 7, ~11:20 AM
💡 Brad's Question That Changed Everything
"Don't we store things more in playbooks & SOPs than in memory.md and memory/?"

This was the moment. We checked:

// What memory_search was indexing:
memory/ + MEMORY.md         →  6,743 lines (11.5%)

// What it was ignoring:
ops/  (playbooks, SOPs)     →  8,531 lines
docs/ (projects, research)  → 43,598 lines
─────────────
TOTAL INVISIBLE: 52,129 lines (88.5%)

Our playbooks, SOPs, decision ledger, regressions file, project registries, research docs — the densest, most decision-rich content we have — was completely invisible to search.
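The coverage split follows directly from the line counts:

```python
indexed = 6_743              # memory/ + MEMORY.md
invisible = 8_531 + 43_598   # ops/ + docs/ = 52,129 lines
total = indexed + invisible  # 58,872 lines

print(f"{indexed / total:.1%}")    # 11.5% searchable
print(f"{invisible / total:.1%}")  # 88.5% blind spot
```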


Mar 7, ~11:30 AM
⚙️ Fix #2: One Config Line
memorySearch.extraPaths: ["ops/", "docs/"]

Plus two boot file edits:

AGENTS.md Rule #5: Collapsed "query memory first, then playbooks" into just "memory_search first" — because the playbooks are now IN the search index.

MEMORY.md: Updated to reflect full-workspace scope.
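Conceptually, extraPaths just extends the indexer's file walk beyond the memory defaults. A rough sketch (the default path list, helper name, and *.md filter are assumptions; OpenClaw's real walker may differ):

```python
from pathlib import Path

DEFAULTS = ["MEMORY.md", "memory"]  # what was indexed before the fix
EXTRA_PATHS = ["ops/", "docs/"]     # the one-line fix

def files_to_index(root: Path, extra=EXTRA_PATHS):
    """Collect markdown files from the default memory paths plus extraPaths."""
    files = []
    for target in DEFAULTS + list(extra):
        p = root / target
        if p.is_file():
            files.append(p)
        elif p.is_dir():
            files.extend(sorted(p.rglob("*.md")))
    return files
```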


Mar 7, ~11:37 AM
🏁 Full Workspace Indexed
// Before:
Memory: ~64 files · ~500 chunks

// After:
Memory: 396 files · 3,028 chunks

First test search for "decision architecture playbook" returned results from ops/playbooks/decisions/, ops/decisions/decision-ledger.md, and docs/drafts/ — all previously invisible.



What We Learned

1. Config ≠ working. Memory search was "configured" for 24 days but never verified. The plugin wasn't even loaded. Lesson: don't trust, verify.

2. The tool description lies. The system prompt says memory_search covers "MEMORY.md + memory/*.md." After adding extraPaths, it actually covers the full workspace — but the auto-generated description doesn't update. We had to override it in our own boot files.

3. The human caught what the AI missed. Sene ran a tool usage audit and identified underuse (7/376). But it took Brad asking "don't we store things in playbooks?" to expose that the search was indexing the wrong files entirely. The AI was looking at usage frequency; the human was looking at coverage.

4. Sovereign > convenient. We chose local Ollama over OpenAI for embeddings. Slightly slower, zero API cost, no data leaving the machine. For a Bitcoin-focused operation, this wasn't even a tradeoff — it was the only option.

The Fix Was One Line
memorySearch.extraPaths: ["ops/", "docs/"]

That's it. From 11.5% coverage to 100%. From 64 files to 396. From blind to seeing.

Sometimes the most impactful changes are the ones where you realize you'd been looking in the wrong place the whole time.


The Unsolved Blind Spot

We fixed what the agent can search. We haven't fixed how often it searches.

The tool usage audit tells the story: 7 memory searches out of 376 tool calls. Even with full workspace indexing, if the agent defaults to read (opening files directly from known paths) or relies on the compaction summary instead of searching, the semantic memory system sits idle.

This is the harder problem. The indexing fix was mechanical — one config line. The usage fix is behavioral. It requires the agent to develop a new default: search before reading, even when you think you know the answer.

Three specific failure modes remain open:

// 1. Compaction trust
//    After context compaction, the agent trusts the
//    summary instead of querying memory for current state.
//    The summary is stale by definition.

// 2. Familiarity bias
//    Agent only searches when something feels unfamiliar.
//    But the highest-value searches are for things you
//    THINK you know but have a stale version of.

// 3. Direct-path muscle memory
//    Agent has memorized file paths (read ops/playbooks/x.md)
//    and goes straight there. Misses related context in
//    other files that search would surface.

This is logged as a behavioral experiment with a 14-day test window. The measurement: does memory_search usage increase from 1.86% of tool calls to something meaningful? Does the "I didn't know" regression class drop to near-zero?

We'll know by March 21.