Michael Saylor Knowledge Base — Definition of Done

v1.2 — 2026-03-03


Context

Michael Saylor has done thousands of hours of public speaking — podcasts, news interviews, conference keynotes, corporate earnings calls, and Bitcoin-focused presentations. He's one of the most quotable, framework-heavy thinkers in the Bitcoin space, but his knowledge is scattered across hundreds of YouTube videos, podcast feeds, and corporate archives. You can't search it, cross-reference it, or ask it a question.

What makes Saylor unusual isn't just the volume — it's the discipline. When asked about Dogecoin, he stays in his lane. When markets crash, he's cheerful and constructive. He's built a library of mental models and repeatable frameworks for thinking about money, energy, and civilization — and he deploys them with laser focus on Bitcoin. That signal-to-noise ratio is what makes this corpus worth building into a knowledge base.

I want to scrape every Michael Saylor interview and public appearance from the internet and transform it into a semantic search knowledge base — fully transcribed, speaker-identified, categorized, and queryable by natural language.

The Outcome

I ask "How would Saylor respond to someone who says Bitcoin is too volatile to be a treasury asset?" — and I get his actual answer, synthesized from dozens of appearances where he addressed that exact objection, with his signature analogies intact. Cheerful, constructive, laser-focused on Bitcoin. Sourced, dated, and weighted toward his most recent thinking.

It feels like having Saylor in the room — his discipline, his mental models, his refusal to get pulled off-topic — except he's searchable, organized, and you can cross-reference what he said on Lex Fridman against what he told the SEC.

The Experience

I'm preparing for a debate about Bitcoin as a corporate treasury strategy. I ask the KB: "What's Saylor's steel-man case for why Bitcoin beats bonds, gold, and real estate as a store of value?" Within a minute I get his full framework — the thermodynamic argument, the digital energy analogy, the comparison to Manhattan real estate — sourced from 8 different appearances across 3 years, with the most recent arguments weighted highest. Each claim links back to the specific interview and timestamp.
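The recency weighting described here could be sketched as a decay applied to vector-similarity scores. This is a minimal illustration, not the implementation: the half-life value, the per-claim "timeless" flag, and all example data are assumptions, and the similarity scores would come from the semantic index.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    source: str          # interview or appearance title
    timestamp: str       # position within the appearance, e.g. "01:14:32"
    age_years: float     # years since the appearance aired
    similarity: float    # cosine similarity to the query, in [0, 1]
    timeless: bool = False  # flagged claims skip recency decay entirely

HALF_LIFE_YEARS = 2.0  # assumed: a claim's weight halves every two years

def ranking_score(c: Claim) -> float:
    """Similarity discounted by age, unless the claim is marked timeless."""
    if c.timeless:
        return c.similarity
    decay = 0.5 ** (c.age_years / HALF_LIFE_YEARS)
    return c.similarity * decay

# Illustrative data only; a fresher, slightly less similar claim outranks
# an older, more similar one.
claims = [
    Claim("Bitcoin is digital energy", "Lex Fridman #276", "01:14:32", 3.0, 0.82),
    Claim("Volatility is vitality", "MicroStrategy World 2024", "00:22:10", 1.0, 0.78),
]
ranked = sorted(claims, key=ranking_score, reverse=True)
for c in ranked:
    print(f"{ranking_score(c):.3f}  {c.source} @ {c.timestamp}  {c.text}")
```

The "timeless override" in criterion 5 maps to the boolean flag: some frameworks (the thermodynamic argument, say) should not lose weight just because the clip is old.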

A teammate asks: "What analogies does Saylor use to explain Bitcoin to normies?" They get a ranked list of his most-used analogies — digital gold, cyber Manhattan, melting ice cube, mosquito net over your wealth — with the exact quotes and which interviews he deployed them in.
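The analogy-ranking query reduces to counting tagged segments across the corpus. A sketch with hypothetical data, assuming each transcript segment has already been tagged with the analogy it uses (the tagging step itself is out of frame here):

```python
from collections import Counter

# Assumed shape: segments pre-tagged with the analogy they contain.
segments = [
    {"interview": "Lex Fridman #276", "analogy": "digital gold"},
    {"interview": "Saylor Series Ep. 3", "analogy": "cyber Manhattan"},
    {"interview": "CNBC 2023-06", "analogy": "digital gold"},
    {"interview": "What Is Money? Show", "analogy": "melting ice cube"},
    {"interview": "CNBC 2023-06", "analogy": "melting ice cube"},
    {"interview": "PBD Podcast", "analogy": "digital gold"},
]

counts = Counter(s["analogy"] for s in segments)
for analogy, n in counts.most_common():
    where = sorted({s["interview"] for s in segments if s["analogy"] == analogy})
    print(f"{analogy}: {n} uses  ({', '.join(where)})")
```

Attaching the exact quote and timestamp per use is the same lookup with the segment's text and offset carried along.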

Done When

1. Every public appearance transcribed & speaker-labeled

2. Categorized by appearance type

3. All participants identified & tracked

4. Knowledge distilled into structured frameworks

5. Timestamped & recency-weighted

6. Saylor-voice query responses with analogies

7. Semantic search with citations

8. Automated ingestion pipeline

Stretch Goal: Reusable across public figures
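Criteria 1, 2, and 8 suggest a staged ingestion skeleton. The sketch below uses stub stages with canned data; a real pipeline would call an ASR model for `transcribe` and a diarization service for `diarize`, and the episode metadata shown is illustrative, not verified. The `kind` field is what would make the stretch goal's config swap possible: nothing below is Saylor-specific except the data.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    speaker: str   # resolved speaker name (criterion 1)
    start: float   # seconds into the appearance (criterion 5)
    text: str

@dataclass
class Appearance:
    title: str
    kind: str      # podcast / keynote / earnings call / ... (criterion 2)
    date: str      # ISO date; feeds recency weighting downstream
    segments: list

# Stub stages so the skeleton runs end to end.
def transcribe(url: str) -> list:
    return [(0.0, "We believe Bitcoin is digital property."),
            (4.2, "How do you answer the volatility objection?")]

def diarize(timed_text: list) -> list:
    speakers = ["Michael Saylor", "Host"]  # stub: real diarization goes here
    return [Segment(speakers[i % 2], t, txt)
            for i, (t, txt) in enumerate(timed_text)]

def ingest(url: str, title: str, kind: str, date: str) -> Appearance:
    return Appearance(title, kind, date, diarize(transcribe(url)))

ep = ingest("https://example.com/ep.mp3", "Example podcast", "podcast", "2022-04-14")
```

The "fully auto" bar in criterion 8 would wrap `ingest` in a scheduler watching feeds for new appearances, with alerts on any stage failure.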

Feedback Form

Criterion | 1 (Poor) | 3 (Acceptable) | 5 (Excellent) | Score
1. Transcription & speakers | Gaps, errors, unresolved tags | All done, some noise | Clean, every speaker named | /5
2. Categorization | Missing tags | Tagged, basic filtering | Filtered by type + show name | /5
3. Participant tracking | Names only | Names + counts | Full query: who, how often, what about | /5
4. Structured frameworks | Mixed together | Separated into categories | Sourced, cross-referenced, mental models tagged | /5
5. Timestamps & recency | No dates | Dated, no weighting | Recency-weighted with timeless override | /5
6. Saylor-voice responses | Generic summaries | Uses some analogies | Reads like Saylor wrote it, analogies catalogued | /5
7. Semantic search | Keyword only | Vector, decent answers | Natural language, sourced citations | /5
8. Automated pipeline | Manual, every step | Scripted, some manual | Fully auto, failure alerts | /5
Stretch: Reusable | Hardcoded to Saylor | Config swap, some rework | Israetel KB working, no rebuild | /5

Not Done Until Brad Says So

Brad tests the KB with 5 real questions he actually cares about. If at least 4 of 5 return useful, sourced, Saylor-voice answers, it ships.
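The acceptance gate is simple enough to state as code; a trivial sketch, with the 4-of-5 threshold taken directly from the rule above:

```python
def ships(results: list) -> bool:
    """Brad's gate: exactly 5 test questions, at least 4 must pass."""
    return len(results) == 5 and sum(results) >= 4

# Each entry: did that question return a useful, sourced, Saylor-voice answer?
print(ships([True, True, True, True, False]))   # passes the gate
print(ships([True, True, True, False, False]))  # does not
```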

Not In Scope

Changelog

Version | Date | Change
1.2 | 2026-03-03 | Finalized per DoD SOP: added Feedback Form and the "Not Done Until Brad Says So" gate. Criterion #9 moved to Stretch Goal.
1.1 | 2026-03-03 | Added Saylor's mental models (cheerful & constructive, staying in lane, laser focus) to Context, Outcome, and criterion #4. Criterion #9 test updated: re-deploy to Mike Israetel/bodybuilding as proof of reusability.
1.0 | 2026-03-03 | Initial draft. 9 criteria.