Eight investigations into why technology doesn't reach the people who need it. I'm not an expert in any of these domains. That's partly the point — the barrier to useful investigation has dropped. Here's what I found when I looked.
I started this project because I kept noticing the same thing: problems that seemed hard aren't hard anymore. The cost of building software collapsed. The cost of researching a problem domain collapsed alongside it. Things that needed teams and months now take an afternoon.
I'm a software developer who uses AI daily and keeps seeing the same gap: the capability exists, it's published, it works — and it isn't reaching the people who need it. So I started mapping where the gaps are, asking: what technology exists, where isn't it deployed, and why not?
Eight avenues in, here's what I've found.
What I keep finding
The technology works. That observation is itself the signal: because technology is no longer the problem, the bottlenecks have moved.
Historically, building custom software to personalise daily activities in an elderly care centre would have been intractably expensive. The technology was the problem. Now, with AI, that execution is effectively free. The barrier has shifted from "can we build this?" to "who has the agency to connect this free capability to the people who need it?"
Consider three examples:
- Reverse osmosis membranes can desalinate seawater for $0.30 per cubic metre, yet two billion people lack reliable water access.
- Algorithms match rare disease phenotypes with 87% accuracy, yet patients wait an average of 4.7 years for diagnosis.
- Pharmacogenomic guidelines to prevent adverse drug reactions are public, yet sixty million people's genetic data sits unused.
If the technology works, why doesn't it land?
Eight different answers
There's no single answer. Each domain fails for its own specific reason:
- Desalination: Operational failure. Plants are built but fail in month fourteen when a part breaks or the village council can't pay the electricity bill.
- Landfill Robotics: Economic failure. The tech to sort waste exists, but extracting value is unprofitable once you account for the 80% that is worthless.
- Rare Disease Diagnosis: Knowledge routing. Databases require standardised phenotype terms, but patients speak in natural language. The connection is missing.
- Hearing Aids: Human infrastructure. $100 OTC devices work perfectly, but stigma and gradual onset mean algorithms alone won't drive adoption.
- Soil: Trust failure. Free soil data and SMS alerts can't compete with a supply chain corrupted by fake fertilisers.
- Consumer Rights: Friction is load-bearing. Bureaucracy is intentionally obtuse to protect profits.
- Pharmacogenomics: Verification failure. Code generation is now trivial, so verification becomes the new bottleneck. How do we build systems that let doctors quickly validate tools, and teach non-technical people to safely run local repos with their own health data?
- Energy Retrofit: Bureaucratic bridge failure. Billions in EU subsidies exist, but citizens can't navigate the application process.
What the pattern actually is
As I explored in Scaling Down the Ambition, the real pattern is uncomfortable: the problems that persist are the ones nobody prestigious wants to work on.
Village water kiosks. Soil chemistry. Elderly care. Benefits navigation. There's no venture-scale return, no academic prestige, and no glamour. So nobody does them.
AI hasn't changed human incentives, but it has changed who can show up. A solo developer can now do the reconnaissance that used to require a research team, and build the prototype that used to require a funded startup.
Treading on Toes
I have perhaps been naive.
The initial version of the overhang thesis says: technology exists, it should reach people, let's remove the barriers. The more honest version asks: what is the barrier doing? Who depends on it? What breaks when you remove it?
A tool that automates EU flight delay claims serves consumers, but it also threatens the livelihoods of people (gestores) who navigate the system professionally. A recommendation engine for farmers provides information, but it could displace extension agents.
There are second-order effects to closing gaps, and there is a dubious morality in shrugging and saying, "Well, if I don't build it, someone else will." Not all gaps should be naively closed. We must build with our eyes open.
What I've actually built
I've built four prototype tools as middleware—small things that sit in the gap between capability and deployment:
- Zebra Scout: Translates plain-language symptoms into standardised terms for rare disease databases.
- Rightsclaim: Generates correctly-cited EU consumer rights claims.
- RxLens: Turns raw genetic data into a clinician-ready pharmacogenomics report (client-side).
- Retrofit: Walks Spanish homeowners through energy renovation options and subsidies.
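To make the "middleware" idea concrete, here is a minimal sketch of what the Zebra Scout step looks like: mapping a plain-language symptom onto standardised phenotype terms. The tiny term dictionary and the fuzzy-matching approach are simplifications for this post, not the tool's actual implementation; the real HPO vocabulary has tens of thousands of entries, and a production tool would use an ontology search or an LLM rather than string similarity.

```python
import difflib

# An illustrative subset of standardised phenotype terms.
# (The real Human Phenotype Ontology has tens of thousands.)
PHENOTYPE_TERMS = {
    "HP:0001263": "global developmental delay",
    "HP:0001250": "seizure",
    "HP:0000365": "hearing impairment",
    "HP:0001324": "muscle weakness",
}

def match_symptom(plain_text: str, cutoff: float = 0.5) -> list[tuple[str, str]]:
    """Return (id, label) pairs whose standard label resembles the input."""
    labels = {label: hpo_id for hpo_id, label in PHENOTYPE_TERMS.items()}
    hits = difflib.get_close_matches(plain_text.lower(), labels, n=3, cutoff=cutoff)
    return [(labels[label], label) for label in hits]

print(match_symptom("hearing impairment"))
```

The gap this closes is exactly the one described above: databases speak in controlled vocabulary, patients speak in natural language, and a small piece of glue code sits between them.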
The most tractable gaps might be smaller than I first imagined. I recently built a complete website for a friend who is a music tutor in an afternoon. The ability to help individual people with custom digital infrastructure is now widely available.
The Epistemic Danger
I built these four tools. They function. But I am constantly wrestling with the fear of LLM Psychosis.
When the cost of intelligence collapses and execution drops to near-zero, it becomes terrifyingly easy to build a detailed map of a territory that doesn't actually exist. It is easy to sit in an isolated room generating "solutions" that are nothing more than sophisticated procrastination dressed up as civilisational cartography.
This doubt is why the deployed tools matter. But intellectual honesty requires me to acknowledge a hard truth: sticking an app on a subdomain isn't surviving contact with the real world. As far as I am aware, literally zero people have actually used these tools.
I have been incredibly hesitant to even share what I've built on Twitter or LinkedIn. I am upfront about this hesitation because it is the next actual bottleneck I have to face.
Why I'm documenting this
I shouldn't be disheartened that I haven't found some amazing, world-changing way to help yet. We should expect this to take time.
But there must be opportunities out there to help people if I keep looking, building, and documenting what I'm doing. You don't need to be an expert or have venture funding. You just need curiosity, AI tools, and a willingness to look at problems that aren't glamorous.
I am writing this from a position of privilege. I have a job, a roof, and my family is healthy. The fact that I get to sit here and think about purpose and what to do with AI is itself a luxury that sits near the top of Maslow's hierarchy.
That privilege comes with a heuristic I can't ignore: Use the collapsed cost of intelligence to help other people climb the hierarchy. Raise the floor. Make the privilege of direction less exclusive.
I don't have this entirely figured out. I'm just looking at the gaps, documenting what I see, and trying to figure out what to do next.
If you want to explore the specific gaps I've mapped so far against Maslow's hierarchy, I built an interactive Maslow Heuristic Explorer to help visualise them.