Frameworks

Tags: enterprise-ai, worker-led-adoption, governance, shadow-ai

The bitter lesson of workplace AI

Simple, worker-driven AI adoption beats elaborate, IT-engineered solutions. Every time. And the endgame is more radical than "stop engineering"—AI doesn't need the cognitive scaffolding humans built over centuries. AI routes around that scaffolding entirely.

Published: September 17, 2025 — Original post

The original bitter lesson (Rich Sutton)

Throughout AI research history, researchers kept learning the same painful truth: simple methods that leverage computation always beat sophisticated algorithms designed by clever humans. Chess engines that searched more positions beat ones with hand-crafted strategies. Neural networks that processed more data beat systems with carefully engineered features. The lesson: stop trying to be clever. Scale and simplicity win.

The workplace version

The same pattern is playing out in enterprise AI right now: workers quietly getting value from simple consumer AI tools are outpacing the elaborate platforms IT built for them.

This isn't workers being lazy or IT being incompetent. It's a fundamental truth about how technology spreads. The simplest path to value wins.

The prescription: Stop engineering a better AI tool. Start enabling the AI tools workers already found. Apply governance at the workspace layer, not the tool layer.
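To make the workspace-layer idea concrete, here is a minimal sketch in Python. This is a hypothetical illustration, not any real product's API: instead of allowlisting approved tools one by one (tool-layer governance), a single policy checkpoint redacts sensitive data and logs usage for whatever AI tool the worker happens to have chosen. All names and patterns below are invented for illustration.

```python
import re

# Hypothetical workspace-layer governance checkpoint.
# The policy sits between the worker and ANY AI tool, so it does not
# care which tool the worker found -- it governs the data, not the app.

SENSITIVE_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),
    (re.compile(r"\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,}\b", re.I),
     "[REDACTED-EMAIL]"),
]

audit_log = []  # in practice this would go to a real audit sink

def workspace_policy(user: str, tool: str, prompt: str) -> str:
    """Redact sensitive data and log the request, regardless of tool."""
    for pattern, replacement in SENSITIVE_PATTERNS:
        prompt = pattern.sub(replacement, prompt)
    audit_log.append({"user": user, "tool": tool, "chars": len(prompt)})
    return prompt  # forwarded to whichever AI tool the worker picked

safe = workspace_policy(
    "alice", "some-chat-tool",
    "Summarize: contact jane@corp.com, SSN 123-45-6789")
print(safe)
```

The design choice this illustrates: the policy layer stays constant while the set of tools churns, which is exactly the "enable, don't engineer" posture the prescription describes.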

The radical extension: the invisible 80% is human legacy infrastructure

The invisible 80% framework established that 80% of knowledge work is invisible—judgment, tacit expertise, pattern recognition, reasoning. Enterprise AI can only see the 20% (outputs, documents, observable actions) and fails because it can't reach the rest.

The bitter lesson reframes this: AI doesn't need the 80%. That invisible cognitive scaffolding was human infrastructure—the way human brains process information, build intuition, develop expertise over decades. AI doesn't need to replicate it. It can go from inputs to outputs without traversing the same cognitive path humans require.

Think of it this way: if you're getting SVP-level strategic support from AI, how much of the invisible work that humans do beneath that level—the years of pattern recognition, the accumulated judgment, the institutional memory—did the AI need? It didn't build those layers. It just... arrived at the output.

This doesn't mean the 80% doesn't exist or isn't real. Humans still need it. But the bitter lesson says: stop trying to engineer AI that replicates human cognitive processes. Stop trying to "capture" tacit knowledge and feed it into enterprise systems. AI will route around the 80% the same way it routes around applications—by finding a simpler path to the same destination.

The invisible 80% explains why current enterprise AI fails (it can't see what humans do). The bitter lesson explains the endgame: AI won't need to see it. It dissolves.

The connection to factory electrification

This is the Phase 2 → Phase 4 jump from factory electrification. Phase 2 companies are engineering AI into existing human workflows (new tech, old processes). The bitter lesson says: stop. You're engineering around human constraints that AI doesn't share. Phase 4 is redesigning work around what AI makes possible—which means a lot of the invisible cognitive infrastructure humans needed simply isn't part of the new design.

Using this framework

Deploy this framework when an organization keeps engineering elaborate AI tooling while workers quietly get value from simpler tools they found on their own.

The prescription is always the same: stop engineering, start enabling. And the philosophical anchor: don't assume AI needs to work the way humans work to achieve what humans achieve.

This content is from brianmadden.ai, Brian's AI-native knowledge module.