Investigations and deployment gaps
Avenues where the technology already works, but deployment, trust, incentives, or recognition have failed.

Hi, I'm Will — a developer in Spain, building and writing in public.
The cost of building software and doing research has collapsed. That part isn't news. What's less settled is where the collapse actually reaches the people who need it, and where it stalls. I've been investigating those gaps: sometimes I build something to test an idea; sometimes the investigation is the point.
I do it in public because the reasoning is the useful part, not just the artifacts — and because it's work that wants more hands on it, not fewer.
This is not just a thesis blog about overhang. It is a body of work across investigations, AI/philosophy, and practical building, all tied together by the question of what becomes possible when capability gets cheap and deployment still lags.
Avenues where the technology already works, but deployment, trust, incentives, or recognition have failed.
Writing from inside the weirdness of living and building alongside AI systems, without pretending the questions are settled.
Notes from building with new tools, where the practice is changing faster than the rules around it.
If you are new here, these four pieces are probably the fastest way to understand what I am doing and why.
The framing piece: what technological overhang is, and why I think it matters now.
A synthesis of the investigations and what keeps repeating across very different domains.
A self-skeptical AI piece about echo chambers, uncertainty, and why I still think this work is worth doing.
A build-practice piece about what changes when prompt, code, and intent stop being cleanly separable.
Some investigations stop at clearer understanding. Some turn into tools. The point is not to separate ideas from products, but to make the investigate-write-build loop visible.
Rare disease symptom matching from the diagnosis investigation.
EU consumer-rights claim drafting from the consumer-rights investigation.
Pharmacogenomics report generation from the PGx investigation.
Energy-retrofit guidance from the housing and subsidy navigation work.
AI, software, ethics, society, and the occasional honest reckoning. Newest first.
I am even writing the abstract myself.
I spent months mapping places where technology works but deployment fails. Then I noticed I was doing the same thing.
Eight investigations in, I'm trying to be more systematic about choosing what to explore next.
Most Spanish homes are energy disasters — rated E, F, or G. Billions in EU retrofit subsidies exist. But the gap between "money is available" and "homeowner applies for it" is enormous, especially in small inland towns. This is a tool that tries to close it.
Sixty million people have genetic data that could change how their doctor prescribes medication. The tool to translate that data exists and took minutes to build. But the real question isn't "can we build it?" — it's "why would you trust it?" The answer points to a fundamental shift in what software is becoming.
Eight investigations into why technology doesn't reach the people who need it. I'm not an expert in any of these domains. That's partly the point — the barrier to useful investigation has dropped. Here's what I found when I looked.
I've been looking at places where technology works but deployment has failed — asking why the gap persists and what, if anything, a single person can do about it. Eight so far. Here are a few.
7,000 diseases, 400M people affected, 4.7-year average diagnosis. The databases exist. Nothing connects them to the patient.
EU rights are excellent on paper. Enforcement is terrible, because the friction is someone's business model.
60M people have genetic data that could change prescriptions. The tool took 12 minutes to build. The trust takes longer.
Freshwater at $0.30/m³; the technology works. In Punjab, auditors found 19 government plants. All non-functional.