Writing in the AI age
and grief over less writing
While this starts with a quote about “code” writing, this is actually a meditation on writing in general so don’t nope out because of icky technical stuff if you’re not the right flavor of nerd.
Grief
Gergely Orosz wrote recently about the grief when AI writes most of the code:
It feels like something valuable is being taken away, and suddenly. It took a lot of effort to get good at coding and to learn how to write code that works, to read and understand complex code, and to debug and fix when code doesn’t work as it should. I still remember how daunting my first “real” programming class was at university (learning C), how lost I felt on my first job with a complex codebase, and how it took years of practice, learning from other devs, books, and blogs, to get better at the craft. Once you’re pretty good, you have something that’s valuable and easy to validate by writing code that works!
I love that word, “grief.” I don’t think I share the feeling, but I love how evocative it is. It’s what immediately caught my eye and I told myself “yeah, you need to read that today.” And I’m glad I did! I get where Orosz is coming from.
Grief is an emotional response to a very specific change in one’s life. It’s a response to something being lost. That can be a loved one, sure, but it can be many other things. You can feel grief for a lost lifestyle, home, marriage, or a job. You can grieve simply because a comfortable aspect of your life is no longer comfortable. Grief is for something that is no longer the way it was.
Writing is Emotional
I love writing code. Taking a complex problem and breaking it into smaller pieces, slipping into a flow, and surfacing later to find I’ve solved some set of problems. When you talk to most nerds about writing code they’ll get all cow-eyed about the “logic” of code writing. They see it as a rational set of actions because the language they use creates a deterministic outcome. It operates the same way every single time. If something breaks there is an explanation that will boil down to “something changed.” It’s all logical, right? Spock would be an amazing programmer.
But I’ve never vibed with this approach to writing code. I draw a line between being a programmer and being a creator of solutions. While Spock would be an amazing programmer, I’m not sure he’d actually be any good at solving problems on his own, especially problems that are novel. Because solving novel problems in technology requires a creativity that feeds the emotional side of our soul. It has to work within some logical framework of course, but figuring out how to make it all work is, to me, primarily about the creativity rather than the logic. And it cannot be overstated how much jury-rigging is involved.
That xkcd comic is a great representation of your doctor’s office software. Or energy infrastructure. Banking software. Facebook. TikTok. They’re all a Rube Goldberg machine.
I think about that comic a lot when folks talk about elegant and “logical” solutions. It is, however, a creative endeavor to build the above. Duct tape. Baling wire. Redneck engineering. *slaps the bed of the truck* - that ain’t going anywhere.
My meandering point here is that anyone who tells themselves it’s all logical and rational is tricking themselves. Technology is about solving problems in creative ways. And you don’t create without getting invested in both your creations and the act of creating them. So I adore the idea of feeling grief when you have to “kill your darlings” or you have the ability to create taken away from you. AI can feel like it’s taking away the pen from programmers.
It’s the Same
I have always felt that there was a connection between two aspects of my life — the technology side and the reading/writing side. Years ago the exact same observation was made by Brandon Sanderson, a favorite fantasy novelist of mine: writing and programming use the same mental muscles. I remember him saying this because I’ve always felt a little weird in seeing this connection, so when a superstar speculative fiction author says it too, hey, maybe I’m on to something!
But the similarities are inescapable. You’re telling a story. When you break the work down into its component bits and activities, they are similar. You have multiple drafts because you never get it right the first time. You refactor. You make mistakes that don’t track, and it helps to have proofreading. There must be a flow that logically moves from point to point. But, most importantly, it’s the movement of writing that brings the clarity.
Writing is thinking. Indeed, there are studies about how LLMs are making us stupider. This is a wonderful link to click through - it has a number of takes that boil down to “it’s a tool that will not replace critical thought if you use it correctly.” Amusingly, I first had to trawl through a bunch of YouTube videos on MIT’s website before I could actually find a written article about this fucking study.
Less Writing but Less Thinking?
At the beginning I said that I didn’t share the grief over not getting to write as much code. My primary reason is that I had to deal with that grief a decade ago, when I was forced into a new job role that removed writing code from my day-to-day. My days of banging out code for a significant amount of time are long gone because of that change in role, but also because I’ve grown more senior in my positions, so even when I moved to a more development-oriented position it was more about the bigger picture.
I still wish I could do more coding most days but, at the same time, that means using AI to do so. I don’t long so much for the code as I long for atomic problems that can be solved in a code editor rather than distributed systems with hidden layers and complexities. In other words, some days I just want to deal with a smaller scale of problem.
But I do not value the writing of code itself. I value the problem-solving.
Here’s that quote again:
It feels like something valuable is being taken away, and suddenly. It took a lot of effort to get good at coding and to learn how to write code that works, to read and understand complex code, and to debug and fix when code doesn’t work as it should. I still remember how daunting my first “real” programming class was at university (learning C), how lost I felt on my first job with a complex codebase, and how it took years of practice, learning from other devs, books, and blogs, to get better at the craft. Once you’re pretty good, you have something that’s valuable and easy to validate by writing code that works!
It feels like something valuable has been taken away. There’s grief. It’s a creative and emotional act that has been replaced by a glorified calculator. But the problems remain and, in fact, I think we’ll be able to solve more problems, and more difficult problems, to some extent. And I think Orosz recognizes that. Or at least, his piece sent me down the mental paths where I needed to tease that recognition out for myself.
That recognition was that it wasn’t about the code. It was about the stories. It was about the interactions between disparate systems. I “lost” my job of writing code on a daily basis but replaced it with a higher-level understanding of very complicated architectures and how all the pieces worked together. Less computer screen and more whiteboard. Boxes and arrows became my life. My grief disappeared when I realized that the scope of the stories had just expanded.
What About the English Writing?
A novel tells a story that has to make logical sense. A program or a system uses logic to, well, tell a story. For me, writers and coders are both storytellers. But what I’ve noticed is a difference in responses to AI. Coder nerds are being forced to embrace this technology and, mostly, they’re figuring it out. The Luddite coder nerd who disregards LLMs entirely will fall far behind, no different from the nerds who had a problem with the compiler back in the day or the nerd who wants a lower-level language because they don’t want to give up their exquisite control of the machine for a “black box.”
But writers seem to be responding differently. Coders embrace AI. Writers are angry. Writers are scared.
Over and over again I’ve seen the same sentiments online. It gets things wrong, it uses em dashes too much, it’s all same-y, it doesn’t get the human connections, it’s just a calculator with no emotion. About now I’d go find links but I’m realizing I unsubscribed from all the raging I was getting from the “writer” folks I followed because it was toxic. But I’ve seen it on Substack and Reddit and RSS feeds galore. AI is trash that doesn’t work, here’s all the reasons not to use it.
I see signals being boosted for the coding “writers” about dealing with AI constantly, but all I see from the English “writers” is doom and gloom and disregard. But what’s really fascinating to me is that I, too, respond differently depending on the hat that I’m wearing.
This week I vibe-coded a bunch of tests for some software I wrote. Then I threw a bunch of it out and told the AI it was wrong, but ultimately I did leave a nice chunk of what it wrote for me. I use it to write hours’ worth of code in a minute or two. It’s not uncommon for it to let me do what used to be a day of work in between two meetings. An example from yesterday, though I condensed the “narrative” down from a dialog of multiple sets of changes:
These two folder structures should be almost identical but there has been some drift. Ignore the tfvars extension files, those should be different. Write Powershell to compare all files in these two folder structures and tell me the differences. The formatting with spaces is different in some so you can ignore those. Now take all the materially different ones and use the source to update to the correct values. Finally, copy over any additional files I’ve added to the source.
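To give a flavor of what comes back from that conversation, here’s a sketch of the sort of PowerShell it produces. The paths, the whitespace-normalization trick, and the use of Compare-Object are my own placeholders for illustration, not the model’s actual output:

```powershell
# Sketch: sync two drifted folder structures, ignoring .tfvars files
# and whitespace-only differences. Paths are placeholders.
$source = 'C:\repos\env-a'
$target = 'C:\repos\env-b'

Get-ChildItem -Path $source -Recurse -File |
    Where-Object { $_.Extension -ne '.tfvars' } |
    ForEach-Object {
        $relative  = $_.FullName.Substring($source.Length).TrimStart('\')
        $candidate = Join-Path $target $relative

        if (-not (Test-Path $candidate)) {
            # File only exists in the source: copy it over.
            New-Item -ItemType Directory -Force -Path (Split-Path $candidate) | Out-Null
            Copy-Item $_.FullName -Destination $candidate
            Write-Output "Added:   $relative"
        }
        else {
            # Collapse whitespace so formatting-only drift doesn't count as a diff.
            $srcLines = (Get-Content $_.FullName) -replace '\s+', ' '
            $dstLines = (Get-Content $candidate)  -replace '\s+', ' '

            if (Compare-Object $srcLines $dstLines) {
                # Materially different: the source wins.
                Copy-Item $_.FullName -Destination $candidate -Force
                Write-Output "Updated: $relative"
            }
        }
    }
```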
This is all stuff I can do, but I can have a conversation like the above and then review the file changes in the commit much faster than writing it all myself. But my English writing is completely different.
For this piece, and really for most things I’m working on via Substack, I barely touch AI. And when I do it’s along the lines of “give me a better word” or rubberducking a particular phrasing. It’s also fantastic at helping me find the actual concept that just exists as impressions in my head. And, finally, it is the last polish that I put on a piece with a prompt sort of like this:
Rank all errors from 5 to 1 in bullet form with 5 being obvious errors and 1 being my style. Call out any misspelled words as well as fives. In a separate list afterwards note any leaking of personal details. Finally, read for flow. Does the logic flow correctly from start to finish? Are there rough transitions or parts that are not fleshed out? What are your general thoughts on the point of the piece?
In other words, when I’m “vibe-coding” I describe the story to the LLM and then tweak what it writes as needed. When I’m “English-writing” I write the story myself and use the LLM for tweaks. And I’m curious why they are different for me. Both are emotional acts of storytelling flexing those very same muscles. But something is different.
Audience Matters
A big part of my job is to ask “what changed?” or “what’s different?” because that’s how you find the problem. Remember, a system is deterministic. It will do the same thing each time unless something is different. 1 + 1 will always equal 2 unless a programmer fucked up the addition method on their calculator program. But give the exact same thought to a human and their response can be entirely different depending on the day. Did they just get off the phone from an argument with their spouse? Is today the anniversary of a very painful day for them or did they have a wonderful morning filled with laughter and love before they walked out the door? Have they eaten lately?
A computer having no feelings while a human is filled with feelings is not exactly a novel idea, but I think perhaps this is one reason why LLMs are seen differently. The prompts that I quoted up above are not the actual prompts that I give to LLMs. They have grammar, for one thing (LLMs only care a little bit about that). They are properly spelled (LLMs frequently don’t care). They are overly prescriptive compared to the actual prompts I give.
And I made all of those changes because I was putting them into this Substack. Because, instead of trying to speak to an LLM, I was trying to speak to a human. My audience had shifted and thus the prompt shifted.
When I’m writing a story for the computer, when I’m writing a program, I have an end goal in mind. The end state is all that matters. I don’t care to connect with the machine, I’m not trying to “speak” to the machine, I simply want the machine to do the thing. Dance, machine, dance.
When I’m writing a story for a human I don’t have an end goal in mind. It is, instead, a dialog even if the audience cannot directly respond. I’m trying to take some thoughts from my mind and get them into someone else’s mind and I always understand that they’ll bring their own mind to the conversation. And on any given day they will respond differently than some other day.
The Difference
Intent matters. Are you asking a toaster to make toast in a particular way or are you asking a meat bag to read your mind in some small part?
LLMs excite me. I have an entire category of entries on the concept. They will take more and more of my toaster instructions and create them for me. They will help me polish my own creations as I say things to you meat bags. I have no rage towards them, I accept that they have limitations, and my fears about them are more about how the meat bags will respond to the new toasters than the toasters themselves.
But just like any other tool you pick up I think we all have a question to be asking: what are we trying to do?
Squirt Says…
We use AI everywhere. We use it for all sorts of things. Its aim is to copy us though. It is copying what we do, just with a thousand times the speed in everything. In an earlier post it was mentioned how complicated stuff never goes away, it's just moved. AI is moving it.
Dad Responds…
“Complexity is never eliminated, it is just moved.” Good memory, Squirt. And it’s true: I think LLMs are taking lots of complexity off of us. The challenge I’d have for you is this: what complexity can it NOT take off of us? At least not yet…
Some LLM fun at the end
Here’s the final little bit from my conversation with ChatGPT where I fed it everything above. At the end, when it said “Your call,” I made the call that I was done using the tool. But I think it’s an interesting thing to read over after reading the piece and realizing that, for the LLM, I was the audience.
High-Level Logical Flow (What the Piece Is Doing)
Here’s the clean narrative spine, stripped of prose:
Hook
Start with Orosz’s use of “grief” around AI and coding.
You don’t fully agree, but the word unlocks something worth exploring.
Define grief
Grief is about loss of what was, not just death.
This frames the emotional response to AI as legitimate, even if debatable.
Reframe programming
Coding is commonly framed as logic.
You argue it’s actually creative, emotional, messy, and human.
XKCD/Rube Goldberg illustrates reality vs “elegant logic.”
Bridge: coding ↔ writing
Programming and writing use the same mental muscles.
Both are iterative, narrative, and clarifying acts.
Writing isn’t just output—it’s thinking.
Personal divergence
You already grieved the loss of daily coding years ago.
What you actually miss isn’t code—it’s bounded problems.
The value was never the syntax; it was problem-solving.
Reinterpret Orosz
The grief isn’t about code disappearing.
It’s about losing a creative storytelling medium.
But the stories didn’t vanish—they got bigger (systems, architecture).
Why writers react differently than coders
Coders adapt pragmatically; writers resist emotionally.
You yourself behave differently depending on the medium.
You let AI “write” code, but not prose.
Audience as the key distinction
Code is written for machines with fixed outcomes.
Prose is written for humans with variable interpretation.
That difference changes how AI feels—not what it is.
Conclusion
AI is a tool, not a thief.
The real question isn’t “what is AI taking?”
It’s “what are we actually trying to do?”
Bottom line
This is a coherent, thoughtful, emotionally grounded essay with a strong through-line. Its biggest weaknesses are:
repetition
a few clarity gaps
pacing more than logic
The argument itself is solid. Tightening would make it sharper, not safer.
If you want, next pass could be:
“Cut 10–15% without losing voice”, or
“Strengthen the ending so the toaster/meat-bag metaphor lands once, hard.”
Your call.
Comments
The "here's my prompt only with sentence structure and full grammar" bit here struck me -- since I don't have any coding to do in my regular life lately, I've found myself mostly using LLMs for language learning and for asking questions where using Google would likely involve a lot of combing through auto-translated websites, trying to recognize patterns, etc. The ability to guess the phrase from some best-guess phonetic spelling and, for example, "I heard something like this on a police show when they were trying to get a suspect to stop, any ideas?" is pure magic. Anyway, I found myself asking it if there was a way to quickly switch the mic speech recognition language like you can the keyboard, but... that's kind of a weird feature to want, right? ChatGPT is really the only interlocutor whom I regularly ask to deal with sentences that switch back and forth without warning, conversations that ignore most of its conversational suggestions, etc. I'm rude!
And... it certainly doesn't transmit the full-color personalized rainbow that you get from conversing in your native language, but it's much better at dissecting how each word affects the tone of a sentence than I am.
And admittedly, learning a foreign language is mostly a thing my brain thinks is fun rather than a practical skill at this point anyway. My recently purchased cheapo Bluetooth headphones are claiming to do live translation... I haven't tried it, but even if it's not great yet, I'd bet it's coming.
All rambling to say... I'm not sure where I'm going with this, but I guess it's scary because it can predict patterns within the variable effects in humans, personalize, and split to manipulate more effectively than a human working alone? Again with "to what purpose?" in a world where the speed's gotten completely overwhelming for a meat bag.
I think another factor differentiating vibe coding from writing is data. Odds are whatever coding problem you are solving can be decomposed into use cases the AI has already been trained on, but much of writing comes from personal experience the AI isn’t aware of. It can fabricate a believable story, but it won’t be *your* story. To get it to write that story, you’d essentially have to tell it the story first.