The Fogbank Problem: Why We're Losing the Ability to Build Things

I was watching a defense industry panel last year when the CEO of Raytheon said something that lodged in my brain like a splinter. He was explaining why it took four years to restart Stinger missile production after the Pentagon placed an order in 2022.
They had to pull 70-year-old engineers out of retirement. Not as consultants—as teachers. The young employees didn't know how to read paper blueprints from the Carter administration. The testing equipment no longer existed. The seeker components had been discontinued decades ago. An order placed in 2022 wouldn't deliver until 2026, not because of funding, but because the knowledge had retired and died.
The Pentagon hadn't bought new Stingers in twenty years. They assumed the capability would just... stay on the shelf. As if a skill were a physical object you could store in a warehouse.
Then there's Fogbank. It's a classified material manufactured for nuclear warheads between 1975 and 1989. When the government needed to make it again for a warhead life-extension program, they discovered they literally couldn't. The people who knew how were dead or retired. The documentation existed, but it was insufficient in ways that only became clear after $69 million in failed attempts.
They finally produced a batch. It was "too pure." The original process had relied on an accidental impurity—something no one, not even the original engineers, had known was critical. The knowledge wasn't just lost. The deep, unwritten context of why the system worked had never been articulated. It lived in the bones of people who were now gone.
I run engineering teams for a living. When I look at the defense industrial base—the inability to manufacture artillery shells, the single points of failure, the generational expertise evaporating—I don't just see a military crisis.
I see the exact same collapse happening in software engineering. And it's happening faster.
The Peace Dividend Is AI
In 1993, after the Cold War ended, the Pentagon told defense CEOs to consolidate or die. They optimized for extreme cost-efficiency. They stopped planning for volume. They stopped training the next generation for crises they assumed would never come.
In software, our "Peace Dividend" is AI.
We're three years into this optimization cycle. Major tech companies have frozen junior engineering roles. A LeadDev survey found that 54% of engineering leaders believe AI copilots will permanently reduce junior hiring. Computer science enrollments are dropping at top universities.
The logic is seductive: why hire a junior developer when ChatGPT can generate the code in thirty seconds?
But here's the math nobody wants to do out loud. A junior developer takes three to five years to become mid-level. Five to eight years to become senior. A decade to become an architect. You cannot buy that time with money. You cannot compress it with a prompt.
When junior engineers use AI to skip the grueling process of debugging—when they bypass the painful mistakes that actually forge competence—they never develop tacit knowledge. They become "AI Prompters." They can tell the machine what to do, but they cannot tell you why the machine's confident output is architecturally flawed. They can't smell the bad pattern. They can't feel the technical debt accumulating.
When my generation of senior engineers retires, our knowledge will not magically transfer to the AI. Like Fogbank, it will just vanish. And the "prompters" left behind won't know what they don't know until the system collapses.
The Context Crisis
Right now, AI code generation is incredibly fast. But human code review has become the bottleneck. The industry's predictable solution? Let the AI review the AI's code.
This is a catastrophic mistake dressed in efficiency clothing.
An AI does not understand your business logic. It doesn't know the historical technical debt from the 2019 migration that still haunts your database schema. It doesn't know the unwritten rule that this microservice can never call that one directly because of a regulatory constraint from a client who left two years ago. It lacks context.
Even if you document everything—Site Books, Software Design Documents, full test coverage—it only works today because the humans reading those documents possess the tacit expertise to interpret them. The documents are a map, but you still need someone who understands the terrain.
What happens when the readers are "prompters" who don't actually understand distributed systems? When the person reviewing the AI's output can't recognize that the solution is elegant, correct, and disastrous for your specific infrastructure?
Fogbank failed because the recipe lacked the unwritten context of an impurity. Your enterprise software will fail for the exact same reason. The AI will generate beautiful, functional code that slowly, invisibly destroys the architecture—because no one alive remembers why the original constraint existed.
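One way teams try to keep that kind of context alive is to turn the unwritten rule into an executable check, with the reason recorded alongside the constraint. Here is a minimal sketch of that idea; the service names, the rule, and the `violations` helper are all hypothetical illustrations, not any particular team's tooling:

```python
import re

# Each rule records the constraint AND the reason it exists, so the context
# survives after the people who negotiated it are gone. These entries are
# invented examples.
FORBIDDEN_CALLS = [
    {
        "caller": "billing-service",
        "callee": "reporting-service",
        "reason": "2021 client contract: billing data may not reach "
                  "reporting without an anonymization step.",
    },
]

def violations(caller: str, source_code: str) -> list[str]:
    """Return the recorded reasons for every forbidden callee that
    `caller`'s source code references directly."""
    found = []
    for rule in FORBIDDEN_CALLS:
        if rule["caller"] == caller and re.search(
            re.escape(rule["callee"]), source_code
        ):
            found.append(rule["reason"])
    return found
```

Run as a CI step, a check like this fails the build with the *reason* in the error message, so a reviewer (human or AI) sees why the constraint exists rather than just that it does.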
Why We Built Mercury Bridge
I started Mercury Bridge because I saw this gap widening in real time. On one side: AI execution speed that doubles every six months. On the other: human contextual understanding that takes a decade to build and one retirement to lose.
Mercury Bridge isn't another AI copilot. It's not a faster way to generate code. It's an architectural framework designed to capture, structure, and deploy the deep, unwritten business context of your organization.
Here's what that actually means:
Contextual Anchoring: Before the AI writes a line of code or drafts a strategy, Mercury Bridge forces alignment on your proprietary logic. It maps your historical decisions, your tech stack dependencies, and the "unknown knowns" that keep your systems running—the things nobody documents because everyone assumes they're obvious.
Bridging the Junior Gap: Because we structure enterprise context so deeply, when a less-experienced engineer (or an AI agent) interacts with the system, they're constrained by guardrails built by your senior experts. We're not skipping the learning process; we're providing a safety net of institutional memory so the learning doesn't require a decade of scar tissue.
The Decision Vault: We don't just store code. We store the decisions behind the code. Why was this pull request approved? Why did we reject this database architecture in 2024? What constraint from a client contract in 2021 still affects how we handle data today? By preserving the context of the decision, we ensure the AI (and future humans) don't repeat historical mistakes out of ignorance.
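To make the Decision Vault idea concrete, a decision record might look something like the sketch below. This is a hypothetical shape I'm using for illustration; the class name, fields, and example record are assumptions, not Mercury Bridge's actual schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DecisionRecord:
    decision: str            # what was decided
    rationale: str           # why: the context that normally goes unwritten
    constraints: list[str]   # external constraints (contracts, regulations)
    decided_by: str          # who held the context
    year: int                # when, so reviewers can judge staleness

    def still_binding(self, keyword: str) -> bool:
        """Check whether any recorded constraint mentions a keyword,
        e.g. before an AI-generated change touches related code."""
        return any(keyword.lower() in c.lower() for c in self.constraints)

# Invented example of a stored decision.
rejected_sharding = DecisionRecord(
    decision="Rejected horizontal sharding of the orders database",
    rationale="The 2019 migration left cross-shard joins in reporting paths",
    constraints=["2021 client contract requires single-region data residency"],
    decided_by="platform team",
    year=2024,
)
```

The point is not the data structure itself but what it preserves: the rationale and constraints travel with the decision, so a future engineer or AI agent querying `still_binding("data residency")` finds the why, not just the what.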
The Bill Comes Due
The defense industry bet that geopolitical peace would last forever. They paid for that gamble when the world changed.
The software industry is betting that AI will advance fast enough to render senior human judgment obsolete before the current generation retires. It's a terrifying wager. If the AI plateaus—and all technology eventually plateaus—after five years of firing juniors and relying on synthetic code generation, your company will wake up one day and realize it has forgotten how to build its own product.
Money has never been the limiting factor. Knowledge is. And knowledge, unlike money, dies with the people who carry it.
You have to hardcode your context before the people holding it leave the building.
— James, Mercury Technology Solutions, Hong Kong, May 2026
Originally published on MTS Blog & Research