Consulting Engagement

AI / OpenClaw Consultant
Definition of Done

📅 Updated: 2026-03-07 DRAFT v3
⚡ The Problem

I'm a Bitcoin strategist and capital allocator running OpenClaw as my AI operating system. I've hit the ceiling on what I can achieve solo: complex multi-step projects spiral into frustration because managing the AI agent requires operator-level skills I haven't built yet. The friction is bleeding into my life. That's unacceptable.

I don't need to become an AI engineer. I need to become a competent AI executive: someone who can direct complex work without getting pulled into the weeds.

🔥 What's Actually Going Wrong

Over 3 weeks, we've logged 54 regressions. The top failure patterns, ranked by how much of my time they burn:

#1: Rule Exists, Didn't Follow (15+ occurrences)
The agent writes SOPs and hard gates, then violates them, sometimes within hours. I catch the same mistakes repeatedly. The fix creates another rule that also gets ignored.
#2: Context Loss After Compaction (~5h/week)
The AI's context window fills up and compacts. Post-compaction, it forgets project state, recent decisions, and tool configs. I have to re-explain things that were settled hours ago.
#3: Gateway Crashes from Config Changes (6 in one day)
The agent edits its own runtime config, crashes itself, and I have to restart it manually from a terminal. This happened 6 times in a single day. (A guard sketch follows this list.)
#4: Delegation Quality Failures
Sub-agents get vague prompts, produce wrong output, and waste tokens. Work needs to be redone. It feels like burning money.
#5: Tool Amnesia
The agent forgets tools it documented and used successfully, and presents workarounds instead of using existing infrastructure. I had to ask "does SearXNG work?" twice in the same session.
#6: Model Routing & Cost Blowouts (~$50/day when wrong)
OAuth tokens expire silently, routing falls back to expensive models, and nobody notices for days. (A monitoring sketch follows below.)
#7: Formatting & Presentation
Sends unreadable content to Telegram (wide tables, raw markdown). I read on mobile.
#8: Raw Errors Relayed
Dumps debugging output into chat instead of handling failures silently.
#9: Context Blowout
Reads too much material into a single session and crashes.
#10: Guessing Instead of Reading Docs
Tries things by trial and error instead of reading the manual first, which results in cascading failures.
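
For pattern #3, the shape of the fix is a guard between "agent proposes a config edit" and "the running gateway sees it". A minimal sketch of that idea, assuming nothing about OpenClaw's actual API: the file path, the required keys, and the validation rules below are all invented for illustration.

```python
import json
import shutil
from pathlib import Path

# Hypothetical paths -- point these at wherever the gateway actually reads its config.
CONFIG = Path("~/.openclaw/gateway.json").expanduser()
BACKUP = CONFIG.with_suffix(".json.bak")

def validate_config(text: str) -> None:
    """Reject obviously broken configs before they can reach the gateway."""
    data = json.loads(text)  # raises ValueError on malformed JSON
    for key in ("model", "channels"):  # assumed required keys, for illustration only
        if key not in data:
            raise ValueError(f"missing required key: {key}")

def apply_config_change(new_text: str) -> None:
    """Snapshot the last-known-good config, then apply the change atomically."""
    validate_config(new_text)         # fail closed: a bad config never lands
    shutil.copy2(CONFIG, BACKUP)      # keep a rollback point
    tmp = CONFIG.with_suffix(".json.tmp")
    tmp.write_text(new_text)
    tmp.replace(CONFIG)               # atomic rename on POSIX

def rollback() -> None:
    """Restore the snapshot after a crash instead of hand-editing in a terminal."""
    shutil.copy2(BACKUP, CONFIG)
```

If every config edit the agent makes is forced through something like apply_config_change, a crash loop becomes a one-call rollback instead of a manual terminal session.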

The meta-pattern: I'm spending ~5 hours/day doing tech support for my own AI system instead of strategy and capital allocation. The agent built genuinely good infrastructure, but executes inconsistently against it.
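
Pattern #6 is detectable the same day rather than days later with a cheap scheduled check. A sketch of the idea, assuming a usage log with one JSON record per model call; the log path, field names, and thresholds here are assumptions, so wire it to whatever usage telemetry the gateway actually exposes.

```python
import json
from pathlib import Path

# All of these values are assumptions for the sketch.
USAGE_LOG = Path("~/.openclaw/usage.jsonl").expanduser()
EXPECTED_MODEL = "claude-sonnet"   # the model routing *should* be picking
DAILY_BUDGET_USD = 15.0            # alert well below the ~$50/day blowout level

def check_routing_and_spend() -> list[str]:
    """Return human-readable alerts; run from cron once or twice a day."""
    alerts, spend, fallback_calls = [], 0.0, 0
    for line in USAGE_LOG.read_text().splitlines():
        rec = json.loads(line)
        spend += rec.get("cost_usd", 0.0)
        if rec.get("model") != EXPECTED_MODEL:
            fallback_calls += 1
    if fallback_calls:
        alerts.append(f"{fallback_calls} calls routed off {EXPECTED_MODEL} -- check OAuth token expiry")
    if spend > DAILY_BUDGET_USD:
        alerts.append(f"spend ${spend:.2f} exceeds ${DAILY_BUDGET_USD:.2f} budget")
    return alerts

if __name__ == "__main__":
    for alert in check_routing_and_spend():
        print(alert)  # or push to Telegram instead of waiting for the bill
```

The point is not this exact script; it's that a silently expired token becomes a same-day alert instead of a line item on the invoice.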

๐Ÿ” Who We're Looking For

Updated March 7 after our first consultant call revealed a mismatch. The consultancy targets executives who want OpenClaw set up for them, not power users pushing the platform's edges. We need someone different.

Must-haves:

  • Has operated OpenClaw in production at our complexity level or beyond: isolated agents, multi-channel routing, cron orchestration, semantic memory. Not just initial setup.
  • Can debug, not just configure: reads gateway logs, traces crash root causes, understands session lifecycle and compaction behavior. We've had 6 gateway crashes in a single day from config churn; we need someone who can explain why, not just restart.
  • Deep in the OpenClaw codebase or community: a contributor, an active Discord helper, or someone who's filed issues against edge cases. Our problems are upstream bugs plus complex interactions, not misconfiguration.
  • Comfortable being the student sometimes: our setup has novel patterns (CI system with regression tracking, semantic memory over the full workspace, continuous improvement loops). The right person learns from this too.

Disqualifying signals:

  • "You're more advanced than most of our clients" without offering concrete next steps
  • Productized offering designed for non-technical executives (that's not us)
  • No hands-on experience with isolated agents, multi-agent routing, or persistent sub-agents
  • Can't articulate what compaction does or why session.resetByType matters

Where to look:

  • OpenClaw GitHub contributors / issue filers
  • OpenClaw Discord #support regulars who answer complex questions
  • Independent AI ops consultants running their own multi-agent setups
  • Someone who's published about OpenClaw architecture, not just "how to get started"
✅ What This Engagement IS

A short-term consulting engagement (not a permanent hire) to audit my AI setup, identify where I'm going wrong, and level me up so I can direct AI work without it wrecking my day.

🚫 What This Engagement Is NOT
  • A permanent Head of AI role (that's a separate hire)
  • "Fix everything for me" โ€” the goal is my capability, not dependency on you
  • A general AI strategy engagement โ€” this is specifically about operational competence with agentic AI tools
  • About building new AI features or products
  • About choosing which AI platform to use โ€” I'm committed to OpenClaw
🎯 Definition of Done

This engagement is complete when all six conditions are met:

  1. OpenClaw Setup Audited. I have a written report telling me what's working, what's broken, and what to change, ranked by priority.
  2. Failure Patterns Understood. I know specifically why the top 10 patterns above keep happening (task scoping? delegation? expectations? tool limits?) and I have a documented playbook for avoiding them.
  3. Delegation Framework for AI. I know what to give the agent vs. what needs a human, how to scope tasks so they succeed, and when to stop pushing and escalate. Written down, not just discussed.
  4. Workspace & Playbooks Simplified. Stripped to what I'll actually use. No bloat. The agent has 54 regression entries and growing; something is structurally wrong with the correction system itself. (A sketch of a structured entry follows this list.)
  5. Saylor KB Finished & Usable. All transcripts speaker-labeled, searchable, and indexed. My team (Jessica, Anton) can query it with no problems. I drove the completion, with the consultant coaching. → View Saylor KB Definition of Done
  6. Handoff Document in Hand. Summary of all recommendations, changes made, what to do next, and hiring criteria for the eventual Head of AI role.
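
One reason 54 loose regression entries stop working as a correction system is that free-text notes can't be grouped, counted, or verified. The sketch below shows the kind of structured entry item 4 is pointing at; the field names are illustrative, not any existing standard.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RegressionEntry:
    """A structured regression record; field names are illustrative assumptions."""
    logged: date
    pattern: str            # one of the ten failure patterns above, e.g. "tool-amnesia"
    what_happened: str      # the observable symptom
    root_cause: str         # why it happened, not just what broke
    correction: str         # the rule or change made in response
    verified: bool = False  # has the correction held on a later, real task?
    tags: list[str] = field(default_factory=list)

# Example shape, using the SearXNG incident from pattern #5 above.
entry = RegressionEntry(
    logged=date(2026, 3, 5),                          # illustrative date
    pattern="tool-amnesia",
    what_happened="Asked 'does SearXNG work?' twice in one session",
    root_cause="(whatever the audit actually finds -- not yet known)",
    correction="(the rule or change made in response)",
    tags=["search"],
)
```

With entries shaped like this, "something is structurally wrong" becomes a query: which patterns recur, which corrections were never verified, which rules get violated most.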
🌅 What Success Looks Like
Before
I wake up, open Telegram, and spend the first 2 hours re-teaching my AI what it forgot overnight, fixing things it broke, and debugging why a model is running on the wrong provider. By the time I get to real work, I'm frustrated and behind.
After
I wake up, open Telegram, and my AI has already done its morning routine. I see a clean status update, my priorities for the day, and zero fires. I spend 30 minutes directing work, not debugging it. The system runs. I run the system.

The feeling: "I finally have a working AI operating system. I know what to give it, what to keep, and how to fix things when they break, without losing my morning."

๐Ÿ† What Success Looks Like for the Consultant

You'll know you've delivered when:

  • Brad can explain, in his own words, why each of the top 10 failure patterns happens and what the correct response is
  • Brad has used the delegation framework successfully on at least 3 real tasks during the engagement
  • The Saylor KB project is complete and queryable by the team
  • Brad's daily "tech support time" has dropped from ~5 hours to under 30 minutes
  • The handoff document exists and Brad has reviewed it