
I Built a Pharmacogenomics Tool in 12 Minutes. I Wouldn't Use It.

Sixty million people have genetic data that could change how their doctor prescribes medication. The tool to translate that data exists and took minutes to build. But the real question isn't "can we build it?" — it's "why would you trust it?" The answer points to a fundamental shift in what software is becoming.


I built a pharmacogenomics tool last Sunday morning. It took twelve minutes.

You upload your 23andMe raw data file — the one sitting in your Downloads folder since 2019 — and it tells you which common drugs might not work for your genotype. Clopidogrel after a stent. Codeine for pain. Simvastatin for cholesterol. It maps your star alleles to published clinical guidelines and generates a one-page PDF you can hand to your GP.

The clinical case is strong. The PREPARE trial — seven European countries, 6,944 patients — showed pharmacogenomic-guided prescribing reduced adverse drug reactions by 30%. Every year, people are prescribed drugs their genome says won't work — or worse, will harm them. The data to prevent this is sitting on sixty million hard drives. The clinical guidelines are published, peer-reviewed, and freely available. The gap between capability and deployment is pure plumbing.

So I told my AI agent to build the tool. It delegated to a coding model. Twelve minutes later: a working application. Five genes, star allele calling, traffic-light report, clinician PDF. All client-side — your genetic data never leaves your browser.

And I wouldn't use it.

The Trust Problem You Can't Engineer Away

I'm nominally the author of this tool. I specified what it should do. I reviewed the test results. I verified it builds. But I haven't read every line of the star allele calling logic — the code that decides whether you're a CYP2D6 poor metabolizer, which determines whether codeine could cause fatal respiratory depression in your child.

That's not a minor detail I'm hand-waving. That's the entire point.

The old software trust model worked like this: a company builds something, puts its brand behind it, gets regulatory approval, and you trust the institution. GeneSight charges $250 for a pharmacogenomic test. You trust it because Myriad Genetics is a publicly traded company with FDA clearance and malpractice insurance. The trust is institutional.

The new reality is: a person with an AI agent can build the equivalent tool in an afternoon. The code works. The tests pass. The clinical logic is based on the same published guidelines GeneSight uses. But there's no institution behind it. No regulatory review. No malpractice insurance. Just source code on GitHub and a claim that "your data never leaves your browser."

Why would you trust that claim? Because I said so? You don't know me. Because the code is open source? You haven't read it. Most people can't read it. "Open source" is a trust signal for developers. For everyone else, it's a marketing phrase.

Software as Recipe

Here's what I think is actually changing.

We're moving from software as product to software as recipe. A product is something you consume — you trust the brand, you accept the black box, you click "I agree" on the privacy policy nobody reads. A recipe is something you verify, modify, and execute in your own kitchen.

Right now, almost nobody can do this. Reading source code is a specialised skill. Running a local development server requires technical knowledge most people don't have. The idea of "audit the code yourself" is technically correct and practically useless for 99% of the population.

But that 99% is shrinking, fast.

More and more people will have agents — AI assistants they've built some trust relationship with — that can audit source code on their behalf. Not perfectly. Not infallibly. But well enough to catch the obvious problems: Does this code actually process data client-side, or does it quietly send it to a server? Does the star allele logic match the published CPIC guidelines? Are there any network requests in the codebase that shouldn't be there?
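One of those checks is purely mechanical and easy to picture. Here's a sketch of the kind of scan an auditing agent might run to flag unexpected network calls; the pattern list, the `findNetworkCalls` helper, and the file contents are all hypothetical examples, not the RxLens codebase:

```typescript
// Illustrative audit check: scan source files for browser APIs that can
// send data over the network. Patterns and files here are hypothetical.

const NETWORK_PATTERNS: RegExp[] = [
  /\bfetch\s*\(/,               // Fetch API
  /XMLHttpRequest/,             // classic XHR
  /\bnavigator\.sendBeacon\b/,  // background beacons
  /\bWebSocket\s*\(/,           // socket connections
];

function findNetworkCalls(files: Record<string, string>): string[] {
  const hits: string[] = [];
  for (const [path, source] of Object.entries(files)) {
    for (const pattern of NETWORK_PATTERNS) {
      if (pattern.test(source)) {
        hits.push(`${path}: matches ${pattern}`);
      }
    }
  }
  return hits;
}

// Hypothetical codebase: one file quietly uploads data.
const files: Record<string, string> = {
  "src/lib/pgx-engine.ts":
    "export function callAlleles(v: string) { return v; }",
  "src/lib/telemetry.ts":
    'fetch("https://example.com/collect", { method: "POST" });',
};

console.log(findNetworkCalls(files));
```

A real agent audit would go further — tracing data flow, checking dependencies, reading minified bundles — but even a grep-level pass like this catches the crudest form of "your data never leaves your browser" being false.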

The workflow isn't "trust the website." The workflow is:

  1. Here's the source code
  2. Download it
  3. Ask an agent you trust to audit it
  4. Run it locally on your own machine

Not everyone will do this. But not everyone needs to. You need a critical mass of agents doing verification for the ecosystem to self-police — the same way open source security works today, except the auditors are AI and the barrier to auditing drops from "years of programming experience" to "ask your agent."

The Overhang Within the Overhang

The pharmacogenomics tool demonstrates one overhang: the data exists, the guidelines exist, the technology is trivial, and yet sixty million people can't easily translate their genotype data into something their doctor can use.

But the trust problem demonstrates a second, deeper overhang: we don't have the verification infrastructure for a world where anyone can build clinical-grade software in an afternoon.

Right now, the FDA's regulatory framework assumes software comes from companies. The 21st Century Cures Act carved out an exemption for Clinical Decision Support tools, but the exemption assumes an institutional author with compliance processes. When the author is "a guy with an AI agent," the framework doesn't know what to do.

The answer isn't to regulate individual creators out of existence — that would freeze the overhang in place. The answer is to shift the trust model from author verification (who built this?) to artifact verification (what does this code actually do?). Agents that can audit. Standards for what "audited by AI" means. A new kind of transparency where the source code IS the trust, because agents make source code legible to everyone.

We're not there yet. But the gap between "anyone can build it" and "anyone can verify it" is closing faster than the gap between "it's possible" and "it's deployed." The verification overhang might close before the deployment overhang does.

What I Actually Recommend

If you have a 23andMe raw data file and you want to know about your pharmacogenomic profile:

Don't upload it to my website.

Seriously. Clone the repository. Read the code, or ask an agent you trust to read it. Run it on your own machine. The tool is a few hundred lines of TypeScript — the star allele calling logic is a lookup table, not a black box. An agent can audit the entire thing in seconds.
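To give a sense of how small that surface is, here's a sketch of what lookup-table phenotype calling looks like. This is not the RxLens implementation — the allele activity values and thresholds below are simplified examples in the shape of the published CPIC CYP2D6 activity-score tables, and `callPhenotype` is a name I made up:

```typescript
// Illustrative sketch of diplotype-to-phenotype calling via a lookup
// table, loosely following CPIC's CYP2D6 activity-score approach.
// Values are simplified examples; real CPIC tables cover far more alleles.

type Phenotype = "poor" | "intermediate" | "normal" | "ultrarapid";

// Activity score contributed by each star allele (simplified).
const CYP2D6_ACTIVITY: Record<string, number> = {
  "*1": 1,     // normal function
  "*4": 0,     // no function
  "*10": 0.25, // decreased function
  "*41": 0.5,  // decreased function
};

function callPhenotype(allele1: string, allele2: string): Phenotype {
  const score =
    (CYP2D6_ACTIVITY[allele1] ?? 0) + (CYP2D6_ACTIVITY[allele2] ?? 0);
  if (score === 0) return "poor";
  if (score < 1.25) return "intermediate";
  if (score <= 2.25) return "normal";
  return "ultrarapid";
}

console.log(callPhenotype("*4", "*4")); // → "poor"
```

That's the whole trick: a table of numbers and a few thresholds. Whether those numbers match the published guidelines is exactly the kind of question an agent can answer by diffing the code against cpicpgx.org.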

If you find a bug — especially in the allele calling logic — open a pull request. That's how this model works. The tool gets better not because one company has a quality assurance department, but because a distributed network of agents can verify faster than any company can test.

And if the allele calling logic turns out to be wrong? That's the whole point. It's inspectable. It's fixable. It's a recipe, not a product. The code is the trust.

The pharmacogenomics overhang is real. The data is on sixty million hard drives. The guidelines are published. The tool is buildable in minutes. But the actual intervention isn't the tool — it's the trust infrastructure that lets people use tools like this safely. That infrastructure is being built right now, one auditable recipe at a time.


The RxLens source code is available at github.com/willworth/rxlens. The star allele calling logic is in src/lib/pgx-engine.ts. The CPIC guidelines it references are at cpicpgx.org. If your agent finds a problem with the code, please open an issue.
