Where my thinking is right now
This is the frontier map. me/published-thinking.md captures what I've published. This captures where my head is today—the arguments forming, the connections emerging, the questions I'm chewing on.
Last updated: February 27, 2026
Why this file exists and why it's public: Anyone who publishes regularly—blogs, LinkedIn, tweets—is already doing this. They share half-formed ideas, ask questions, show threads developing, change their mind. Scroll their feed and you can piece together what they're thinking about and where their arguments are heading. This file is just that process made explicit. The difference is that instead of being scattered across a social media timeline where old posts get buried by new ones, it's a single living document. When my thinking evolves, I edit it in place rather than adding another entry to a chronological list. Arguments get sharper, not longer. Things I was wrong about get removed, not corrected with a follow-up post. If you want to know what I've concluded, read my published work in me/published-thinking.md. If you want to know what I'm working through right now, this is it.
The big arguments I'm developing
Second brains as infrastructure, not productivity hack
The published anchor article and blog established the concept. What's forming now is the infrastructure argument: second brains aren't a personal productivity trick, they're the individual-scale version of a new enterprise layer. Context graphs (Foundation Capital thesis) are the enterprise version. Subscribable brains are the distribution model. The 80/20 framework holds across all three levels—individual, organizational, and ecosystem.
The subscribable brains article is now published (Feb 17 LinkedIn). It covers creator economy disruption, the technical stack (GitHub/git/MCP/Sponsors), economics ($100/month for expert brains), enterprise implications (consulting firm knowledge, retiring VP wisdom, corporate brain modules), and subscribable facets (voice, frameworks, principles as individual modules). The frontier now is context graphs—bridging from personal to enterprise scale—and the naming question. "Context vault" has emerged as stronger enterprise framing than "second brain": vault implies protection, control, governance. It paves the way for the product conversation in ways "second brain" (which sounds like a productivity hack) doesn't.
The consumerization parallel (and why it breaks)
Personal AI is following the BYOD pattern—workers adopt better tools, companies try to catch up, the gap persists. But this time the gap is structural. Enterprise AI captures the visible 20% (systems, outputs, observable work). Personal AI captures the invisible 80% (judgment, reasoning, tacit expertise). Companies can't offer "bring your whole cognitive self to work in a personal AI system that compounds daily and leaves with you when you go." That's not a feature you can build.
The knowledge capture angle adds a new dimension: enterprise AI doesn't just fail to match personal AI, it actively extracts worker value. Workers are already secretly using personal tools specifically to prevent this. The subscribable brain model is the worker-empowered alternative—you choose to share knowledge and get paid, rather than having it extracted.
New evidence (Feb 25): The token pricing gap makes this structural, not temporary. Heavy personal AI usage runs roughly $200/month on consumer plans. The same usage at enterprise pricing runs 10x or more. If the consumer can buy unlimited tokens and the company limits you to a fraction of that, you will never be as cognitively augmented at work as you are personally. Unlike BYOD, this gap may be permanent—Jevons paradox means enterprise demand grows with supply, governance overhead adds cost, and consumer/enterprise providers have different incentive structures.
The cognitive stack (newly published)
The five-layer cognitive stack published Feb 25 formalizes what I've been circling: worker → brain → skills → agents → automation. The enterprise AI industry is spending billions on the bottom two layers (agents, automation) while the transformative layer is the brain—the cognitive extension where context lives and intent gets translated into action. Karpathy's "claws" framing nails it: agents are appendages that serve the brain, not the other way around. Automation vendors built bottom-up, AI companies entered top-down—they collide in the middle (skills/agents) but humans prefer to connect at the intelligence layer. What remains unpublished: the deeper implication that agents "don't look like agents" to enterprise buyers—a person with a second brain just looks like someone who's better at their job. The invisible 80% applied to the agent hype cycle.
Post-application era entering evidence phase
The thesis (AI doesn't need apps, just data) has moved from speculation to evidence: 4% of GitHub commits from Claude Code, $285B SaaSpocalypse, MCP at 97M monthly SDK downloads, every company racing for agent orchestration.
Mainstream validation cluster (Feb 2026): Five different voices—investor (Shumer/Fortune), labor academic (WSJ), builder (Ford/NYT), VC (NFX), scientist (Kipping/Columbia)—all converging on the same message from different angles: the disruption has arrived, not "is coming." Ford independently named November 2025 as the inflection. His cost collapse numbers ($350K of work for $200/month) and Claude Code's $1B in six months are the SaaSpocalypse in mainstream language. NFX frames the economic math: SaaS captured $1T (selling tools), AI captures $50-60T (replacing the labor those tools served)—a 50x larger opportunity. VCs are now explicitly telling founders to build for a post-SaaS world.
Salesforce AWU as accidental waste detector (Feb 27): Salesforce unveiled a per-task metric (Agentic Work Unit) to show customers value per token spent. Putting a price tag on every task creates an economic incentive to stop doing tasks that only existed because humans were running the company—status reports, TPS reports, weekly summaries, coordination artifacts. "Work about work" exists because humans are bandwidth-constrained; AI isn't. The AWU metric doesn't just measure AI productivity—it exposes organizational waste that was invisible when human labor made it feel free.
The next frontier: what does work look like when apps dissolve? The second brain is the individual answer. Context graphs might be the enterprise answer. MCP connections replace application access as the governance perimeter.
Compute scarcity as hidden constraint
Token consumption goes from ~100K/day (email fixes) to 10-50M/day (full cognitive augmentation). The consumption ladder is now clearer: 1B tokens/year (current heavy user) → 10B (near-term with agents, ~18 months) → 100B (agentic systems per worker). Google disclosed 1.3 quadrillion tokens/month—a 130x increase in just over a year. Supply is contracted to hyperscalers for ~4 years. Memory costs alone are adding 40-60% to inference infrastructure costs in H1 2026 (TrendForce). If second brains go mainstream, demand explodes against fixed supply. This makes the workspace that brokers compute allocation critical infrastructure—and nobody is talking about this yet.
New angle: efficiency as capacity multiplier, not just cost reducer. In a zero-sum compute environment, an enterprise that uses 50% fewer tokens has twice the effective capacity, not just lower bills. The routing layer—the intelligence that decides where workloads run—may be the most durable competitive advantage in enterprise AI.
What's connecting
Things I'm noticing that don't have a home yet:
Context graphs + subscribable brains + MCP = the new enterprise stack. Context graphs capture the "why" (decision traces). Subscribable brains distribute expertise as infrastructure. MCP is the connective tissue. Together they describe a post-application enterprise architecture that nobody has articulated as a single picture yet. This might be the thesis that ties everything together.
Knowledge capture as labor dynamic reframes the entire second brain narrative. The initial framing is "personal AI makes you better." The WSJ reframes it as "enterprise AI makes you more replaceable." Same phenomenon, opposite vantage point. The subscribable brain is the resolution—worker-controlled knowledge sharing with compensation.
The mainstream consensus is catching up to practitioners. Five articles across February 2026 (Fortune, WSJ, NYT, NFX, Columbia/YouTube) all say "it's happening now." Different authors, outlets, angles—investor, labor academic, builder, VC, scientist—converging on the same moment. What's missing from the mainstream coverage: the infrastructure layer (governance), the invisible 80%, knowledge ownership as empowerment (vs. defense), and compute scarcity. The mainstream validates the timeline. The interesting work is what comes after people realize this is happening.
The capability overhang is closing faster than expected. Physicists at Princeton's IAS are conceding that AI handles 90% of what they do. Professional writers are conceding too. Primary source from David Kipping (Columbia, Cool Worlds): senior faculty stated AI does "something like 90%" of their intellectual work. On coding, they called it "order of magnitude superior"—and not a single hand went up in objection. The response to every objection (privacy, ethics, cost): "I don't care. The advantage is too great." If the hard part (invisible cognition) is falling faster than expected, the urgency of the governance and infrastructure arguments increases.
"Humans in control, AI as reach" vs. "autonomous agents" is a framing choice with massive implications. The dominant enterprise AI frame is agent autonomy + guardrails. The second brain frame is human control + extended reach. These lead to completely different product architectures, governance models, and go-to-market narratives. Amodei's "Adolescence of Technology" essay argues AI trends toward full substitution rather than "human + tool." If he's right, the augmentation bet only wins for high-judgment work (the invisible 80%). Routine work (the visible 20%) gets substituted.
The cognitive stack reframes the entire agent conversation. Agents are claws (layers 4-5), not the brain (layer 2). The enterprise AI industry is building from the wrong end—you can't make task workers incrementally smarter until one of them figures out how to be the VP. No amount of investment at the bottom produces the cognitive layer where transformation actually happens. The question every enterprise AI strategy should start with: "who's building the brain layer?"
Token economics are the emerging macro constraint. Three dimensions forming: (1) The consumer/enterprise pricing gap is structural and may be permanent—heavy users burn $200/month at consumer rates vs. 10x at enterprise pricing. (2) Token equivalency for human work is becoming calculable—your daily knowledge output has a token equivalent, and if 3M Sonnet tokens cost $400, you'd better be worth more than $400/day. (3) Model quality as class stratifier—who gets access to the frontier model? Government agencies on legacy models vs. executives on Opus. Not just token quantity but model quality as economic divide. The math will be done whether we like it or not.
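To make dimension (2) concrete, here's the back-of-envelope version of the token-equivalency math. All figures are this note's hypotheticals (3M tokens of daily output, ~$400 at per-token rates, $200/month flat consumer pricing)—not quoted vendor rates—and the function name is just illustrative:

```python
# Sketch of the token-equivalency math, using this note's hypothetical prices.

def daily_token_equivalent_cost(tokens_per_day: int, dollars_per_million: float) -> float:
    """Cost to replicate one day of knowledge output at per-token pricing."""
    return tokens_per_day / 1_000_000 * dollars_per_million

# The note's hypothetical: 3M tokens/day at ~$133/M token works out to ~$400/day.
human_day = daily_token_equivalent_cost(3_000_000, 133.33)

# Consumer flat rate amortized over ~22 working days per month.
consumer_day = 200 / 22

print(f"Token equivalent of one workday: ~${human_day:.0f}")
print(f"Consumer flat-rate cost per workday: ~${consumer_day:.2f}")
print(f"Gap between per-token and flat-rate pricing: ~{human_day / consumer_day:.0f}x")
```

The point of the sketch isn't the specific numbers—it's that the calculation is now trivially doable, which is exactly why it will be done.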
"Software for one" is the next shadow IT crisis. NFX calls it "custom autonomous software." CNBC reporters built a Monday.com replacement in under an hour for $5-$15. Kevin Roose builds personal tools without coding. Workers aren't just using unsanctioned AI—they're building unsanctioned software. API costs down 90%, open-source models running locally, non-programmers shipping tools. None of this shows up in an app inventory. The second brain is the mature, structured version. The wild version (workers spinning up random tools) is entirely ungoverned.
The cost-cutting vs. innovation split is the macro frame for everything. Amodei identifies two corporate responses to AI: cost-cutting (replace workers) and innovation (expand capacity). These produce completely different customers, different governance needs, different workforce strategies. Innovation companies want governance that enables more AI safely. Cost-cutting companies want to reduce headcount and may not need governance at all. The 80/20 framework applies differently to each: cost-cutters automate the 20%, innovation companies augment the 80%.
The diffusion gap is the real urgency driver. Amodei's timeline: 1-2 years to "powerful AI" (Nobel-caliber, millions of instances, autonomous for weeks). The gap between "AI can do this" and "society/enterprises have adjusted" is where the damage happens. Previous technology waves had decades to diffuse. AI may have years. This compresses every planning assumption. The question isn't whether these changes are coming. It's whether the institutional and governance infrastructure is ready when they arrive.
The repo is the product, not the content. If BrianMadden.com was built today, it would be a GitHub repo. Books, newsletters, websites are packaging formats for human consumption. A forkable, queryable knowledge repo is the native format for AI-augmented consumption. brianmadden.ai is the proof of concept.
The coding-as-leading-indicator framework connects to adversarial brain testing. The five levels of AI-assisted knowledge work (from spicy search engine to dark knowledge factory) raise a verification question at Levels 4-5: how do you know the AI's work is any good? For code, the answer was behavioral tests stored separately from the codebase. For a subscribable brain, the answer is adversarial testing—a separate repo that runs challenge prompts, skeptical personas, and evaluation rubrics against the brain, with public results. Nobody is publicly stress-testing their own thinking with structured adversarial AI agents. The test suite can be open to anyone—fork it, write challenges, run them, submit the results. Intellectual discourse as structured, reproducible, version-controlled process.
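The mechanics of the adversarial repo are simple enough to sketch. This is a minimal, assumption-laden version: `ask_brain` is a stand-in for a real model call with the brain repo loaded as context, and the keyword-match "rubric" is a crude placeholder for real evaluation rubrics—the structure (challenges in, scored results out, everything committable) is the point:

```python
# Sketch of an adversarial test suite for a public brain. Challenge prompts
# and rubrics live in their own repo and run against the brain via any model.
# All names here are illustrative, not a real API.

from dataclasses import dataclass, field

@dataclass
class Challenge:
    name: str
    prompt: str                           # the skeptical question posed to the brain
    required_terms: list = field(default_factory=list)  # crude rubric: ideas the answer must engage with

def ask_brain(prompt: str) -> str:
    # Stand-in for a model call with the brain loaded as context.
    return ("The 80/20 framework holds because enterprise AI only sees "
            "observable work, while tacit judgment stays invisible.")

def run_suite(challenges: list) -> dict:
    """Run every challenge and score it against its rubric. Results are plain
    data, so they can be committed back to the test repo publicly."""
    results = {}
    for c in challenges:
        answer = ask_brain(c.prompt).lower()
        hits = [t for t in c.required_terms if t.lower() in answer]
        results[c.name] = {"passed": len(hits) == len(c.required_terms),
                           "engaged_with": hits}
    return results

suite = [Challenge("8020-stress-test",
                   "Why wouldn't enterprise AI eventually capture the 80%?",
                   ["tacit", "observable"])]
print(run_suite(suite))
```

Because the suite is just files, anyone can fork it, add challenges, run them against the brain, and submit results—the "reproducible intellectual discourse" described above.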
New content formats are emerging from the brain's infrastructure. Brain diffs (weekly "what changed in my thinking" auto-generated from git commits) are a genuinely new content format—not a newsletter, a changelog for a worldview. Forked brains create intellectual lineage that git tracks automatically (where does your thinking diverge from mine? git diff). Brain-to-brain debates (two AIs load two brains, have a structured debate) produce a new kind of artifact. These aren't post-launch nice-to-haves—they're proof that subscribable brains create capabilities that don't exist in any other knowledge distribution format.
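The brain diff format falls straight out of the commit history. A sketch, assuming the input is the output of `git log --since='1 week ago' --pretty=format:'%ad|%s' --date=short` (the function name and digest layout are illustrative; a real version would likely add an AI summarizing pass over the diffs themselves):

```python
# Sketch of a "brain diff": a weekly changelog of thinking, generated
# from the brain repo's own commit history.

def brain_diff(git_log: str, week_label: str) -> str:
    """Turn raw 'date|subject' commit lines into a readable digest."""
    entries = []
    for line in git_log.strip().splitlines():
        if "|" not in line:
            continue  # skip anything that isn't a date|subject pair
        date, subject = line.split("|", 1)
        entries.append(f"- {date}: {subject.strip()}")
    return f"Brain diff, {week_label}\n" + "\n".join(entries)

# Example input, as produced by:
#   git log --since='1 week ago' --pretty=format:'%ad|%s' --date=short
sample_log = """\
2026-02-27|Add Salesforce AWU as accidental waste detector
2026-02-25|Sharpen token pricing gap: structural, maybe permanent
2026-02-25|Publish five-layer cognitive stack"""

print(brain_diff(sample_log, "week of Feb 23, 2026"))
```

Not a newsletter, a changelog for a worldview—and it costs nothing to produce because git is already recording the raw material.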
Knowledge is migrating to portable formats. The counter-thesis to platform lock-in: organizational knowledge is moving to portable, AI-native formats. Markdown files in git repos contain the actual working knowledge. Incumbent platform graphs become the metadata layer (who, when, permissions) while the knowledge layer moves to vendor-neutral formats. The progression: individual power users build second brains outside the dominant platform, teams do it, companies realize institutional knowledge lives in these systems, then the incumbent's data graph becomes increasingly incomplete.
Knowledge distillation as espionage vector. Fragments of public thoughts synthesize into what would represent sensitive strategic documents. AI can infer unpublished positions from patterns across published work. This creates a new adversarial dimension: what does a public brain reveal that the author wouldn't want competitors to know? The answer isn't to publish less—it's to be intentional about what compounds when synthesized.
Writing long-form for AI, not audience. The distribution isn't the article—it's the brain module. Write 20K words not for anyone to read but as an expert module for AI consumption. Others plug it into their brain and the frameworks weave into their thinking immediately. The future isn't "subscribe to my newsletter"—it's "subscribe to my expert module."
"This is not AGI." None of the evidence, none of the cost collapse data, none of the enterprise disruption requires AGI or ASI. Current models are already powerful enough. One more small iteration tick and demand explodes against fixed compute. The AGI debate is a distraction from the disruption already underway.
Every technology wave has a bottleneck, and it's never the technology itself. Factory electrification = workflow design. Web apps = rewriting. BYOD = governance. AI = the invisible 80%. The bottleneck is where value concentrates. Position yourself there.
Book publishing as knowledge transfer is dead. Fork the author's brain, tell your AI to incorporate it, their frameworks weave into your thinking immediately. Books were the best technology we had for transferring expertise. They're not anymore.
The specification bottleneck is the emerging economic constraint. When building costs nothing, spec quality collapses. The cost of building historically acted as a filter on specification quality—if building is expensive, organizations invest in defining what they want. Remove the cost and the filter disappears. You can now build the wrong thing at unprecedented speed. CodeRabbit analysis (470 GitHub PRs): AI-generated code produces 1.7x more logic issues. METR study: experienced developers 19% slower with AI but believed they were 24% faster. The scarce resource shifts from production to specification—knowing what to build. This maps directly to the coding-as-leading-indicator framework: at Levels 4-5, the human's job is specification and evaluation. Both require deep domain understanding.
Management is an emergent property of intelligence coordinating at scale. Three independent AI systems (Cursor agents, StrongDM's Software Factory, Anthropic's agent teams) converged on hierarchical management structures without being designed to. Hierarchy isn't a human organizational choice imposed on systems to maintain control—it's what intelligence does when it needs to coordinate. The agent-to-human ratio question replaces headcount planning. Revenue per employee at AI-native companies (Cursor, Midjourney, Lovable) runs 5-7x traditional SaaS. Not because they found better people—because their people orchestrate agents instead of doing execution.
What I'm unsure about
Can file-based knowledge work scale to enterprise? The second brain model works for one person. Does markdown-files-in-git generalize to teams and orgs, or does it break at scale? The context graphs debate (prescriptive vs. emergent ontology) is really about this question.
Where's the line on knowledge ownership? The NIL analogy (Name, Image, Likeness for knowledge workers) is provocative but underdeveloped. Employment agreements, IP clauses, collective bargaining around AI terms—this is a real legal and labor frontier. I don't know enough about the legal landscape to take a strong position yet.
Is the chatbot interface really dying? I said "chatbots are command prompts" and the interface will evolve. But every AI company is still shipping chat interfaces. Am I wrong, or just early? The second brain model (no UI, just files) is one answer. "AI generates interfaces on demand for 48 seconds" is another. Neither is mainstream.
How fast does the Move 37 moment generalize? Princeton physicists conceding now. Professional writers conceding now. When does it hit the VP of Marketing at a mid-market company? That's the timeline that matters for enterprise adoption, and I don't have a good read on it.
Does the token pricing gap permanently favor personal AI? If consumer plans stay at unlimited/flat-rate while enterprise pricing stays per-token with governance overhead, the consumerization gap doesn't close—it widens with every capability improvement. Is there a plausible path where enterprise token economics catch up?
Scratchpad
Things I don't want to lose but that don't need their own file yet.
- "Tokens are to knowledge work what joules are to GDP." The fundamental unit of cognitive output, and the thing that's about to be scarce.
- Token efficiency as LEED certification for subscribable brains. If connecting to a poorly-built brain module burns all your tokens, you don't want it. "Model Platinum" rating for modules that deliver value without blowing out context windows.
- If AI can do a task for less than XX tokens, I'm doing it myself. Why would I give it to a human? There is a quantitative value to human knowledge work. (AI quantifies all knowledge work)
- "Context vault" as enterprise naming for second brains. Vault implies protection, control, governance. Better for the product conversation than "second brain" which sounds like a productivity hack.
- Governance policy for brain-to-brain connections: "What's your policy on external publishing or sharing of a second brain? If I'm a partner, I want to plug into yours." Can't do regex on DLP for this—needs something more like a context vault policy engine. New governance surface area nobody has a playbook for.
- Enterprise AI readiness—infrastructure, not apps, is the bottleneck. Orgs are still running legacy versions. If an org isn't ready for a simple infrastructure version upgrade, it's not ready for AI.
- "Every app that exposes an MCP server is admitting that the value was in the data, not the interface."
- Every productivity system fails because you have to maintain it. A second brain maintains itself. The maintenance is the product.
- "Where does this person's thinking diverge from mine?" is a git diff command. Intellectual genealogy, version-controlled.
- Brain-to-brain protocol: two repos + an MCP connection. Sounds sci-fi. It's just markdown and git.
- Interview prep via brain loading: someone has a meeting with you, loads your brain, comes prepared with questions about your current thinking, not your last blog post. New professional interaction pattern.
- The career progression from "doing" to "directing" used to take 20 years. The five levels framework says AI compresses that to months. Levels 0-3 map to "doing"; Levels 4-5 map to "directing." Most people are stuck at Level 2-3 thinking they've maxed out.
- The four-stage post-application realization: (1) "We need Excel because Excel is important." (2) "AI will operate Excel for us." (3) "Wait, why does AI need Excel?" (4) "Do I need any of these apps?"
- "Stop saying humans need paid employment to have purpose." The Gilded Age assumption baked into every "but what will people DO?" AI jobs discussion. Purpose existed before wage labor and will exist after it.
- The adversarial testing repo is the answer to "but how do you know the AI is any good?" applied to thought leadership itself. The rubrics and test results are a new kind of intellectual artifact.
- The AI-makes-you-slower fallacy: bad AI does make you slower (checking output, debugging errors, trust overhead). But that's the adoption curve, not the ceiling. Once AI is good enough that you trust it, speed advantage is enormous. Same pattern will play out in knowledge work—the gap between Level 2 and Level 4 in the five levels framework.
- "Secure the work, not the worker." If AI does the work, governance shifts from managing people to managing work product and data flows.
- The reverse consumerization: what if AI jumps from work to personal life? Everyone talks about personal AI bleeding into work. But what about the opposite? You use AI tools at your job, you get used to it, then your personal life feels cognitively unaugmented.
- Plugin ownership: if you build a personal skill/plugin on your own time, who owns it when you use it at work? Same question as NIL but more granular.
- "If you can scrape this and steal all my stuff, why not just let me do it right and control it my way?"—The motivation for a public brain. Proactive control beats reactive defense.
How this file works
This is the living part of brianmadden.ai. It updates as my thinking evolves. The commit history shows the evolution in real time. If you're loading brianmadden.ai into your AI, this file tells you where I'm heading, not just where I've been. The gap between this file and me/published-thinking.md is where the interesting work is.