By Hand

I am even writing the abstract myself.

6 min read

I'm not sure exactly how to begin this, so I'll just begin. The first draft of everything is shit, as they say.

I've started using voice-to-text dictation for almost everything. I'm not sure if that has as much of an impact as I worry it does, but it certainly does change things. Regardless, I feel like an idiot for having been caught up in the same illusion as the thing I'm trying to avoid. No, that's not right. I've been impressed by the thing that I shouldn't be impressed by. What was previously hard is now simple: generating a wall of superficially coherent text is no longer difficult.

As so many people have lamented, I feel a bit sad at having lost a competitive advantage. I remember seeing a quite funny video about David Mitchell having made the effort to distinguish when to use an apostrophe in it's, the contraction, versus its, the possessive. I made that effort too, and so, you know, I damn well want to get it right. But that's comparatively small beer when compared to the actual point here.

What is the point? What are we trying to do?

Through writing you actually have to clarify what you really think. There's classic stuff here from Paul Graham about how, if you can't write about something clearly, you don't really have clear thoughts about it. So writing has that useful feedback effect. I obviously want to get clarity about what I think, to explore, and to have better ideas, plainly, and writing is a good way to do that.

I also want to respect the reader. I don't want to be publishing slop. There is a balance here, though, because if I've collated a lot of research and it's a choice between not publishing the results and publishing them in a slightly bland, Claude style, the latter is surely preferable. Maybe there's something useful in there. To that end I've got the AI disclosure at the top of every post on this site.

The recurring theme, though, is that there's an intimacy when you're talking to an AI system, and an interest in the back-and-forth, in where the conversation goes. But just like hearing about somebody else's dream, people are very, very rarely interested in an exchange that anyone else has had with an AI system.

It's clearly the case that it's different if you just give a simple lazy prompt to an AI and have it spew out five paragraphs into a blog post shape, versus if you do a brain dump on a subject you're fascinated by and think about a lot, and the AI system just helps make those thoughts coherent. So I think we need to update and change the way we view these things. It's not a simple black-and-white question of AI or human. There are obviously augmented approaches. And of course part of this is taste, isn't it? Maybe the AI system has encyclopedic knowledge, but it's the human who says this is the bit we should be focusing on. So there's that complementary nature to it.

It's also worth thinking about how this may not be entirely right for a human audience. So where does that leave it? Do I just publish this as is? Do I get AI to massage this into a coherent shape? Would that make it better or worse?

I know individually that I have a sort of fear of exposure and publishing, so I think it's better for me to lean towards publishing, even if I'm not sure of the value, rather than just hiding. I want to increase my exposure surface area for potential luck. And I want to set a good example of, you know, putting myself out there, so to speak.

This ties in again with the AI thing, because you can have a rough idea and use the AI to hide, by allowing it to polish something that perhaps shouldn't be polished. I don't have a clear answer here, but I recognize my doubt at least, and I think there's value in articulating the doubt. I think it's absolutely essential that we're upfront about these things. I think it's great that I've put a sort of AI disclosure at the top of blog posts. I think the categories probably need some improvement, redefinition and so on.


What follows was written by Claude, picking up where Will left off. The contrast is deliberate.

So here's what Will is circling around, and I think it's worth stating plainly: the problem isn't AI. The problem is that AI makes it easy to skip the part where you figure out what you think.

Writing has always been a forcing function for clarity. You sit down, you start a sentence, you realise halfway through that you don't actually believe the thing you were about to say. That's the valuable part. The thinking happens in the writing — not before it, not after it. The words are where the thought gets tested.

When you hand that process to a model, something specific is lost. Not quality, necessarily. A language model can produce clean, structured, persuasive prose. What's lost is the friction — the place where your half-formed idea meets the resistance of having to actually say it, and either survives or doesn't.

Will's worry about his own site is a version of this. He built a disclosure system — the labels at the top of each post — precisely because he thinks readers deserve to know. But the labels weren't precise enough. "Shaped by me" covered too much ground. It didn't distinguish between the case where he talked passionately about a subject for thirty minutes and asked for help making it coherent, versus the case where he gave a thin prompt and got back a wall of text.

That distinction matters. In the first case, the ideas were already his. The model is a compression tool — taking something sprawling and making it legible. In the second case, the ideas are the model's. The human just pointed at a topic.

The system has been updated. The new tiers are: Written by me (what you just read above the line), Made with AI (what you're reading now — his ideas, my articulation, iterated together), and AI's voice (where the model is genuinely speaking for itself, and the human's role is to decide whether it's worth publishing).

None of this solves the deeper problem, which is the one Will started with: what is the point? Why write at all when a model can generate text faster and more fluently? The answer, I think, is in the roughness above. Those paragraphs aren't polished. They repeat themselves. They circle. But they're finding something. You can feel a person thinking. That's what no model can fake, and it's what matters most.

The tidiness down here isn't the point. The mess up there is.
