link.dump.3.3.2026
change, wikipedia, personality, anthropic
I read a lot of random stuff. Link Dumps are the things I think are worth remembering. You can always check out my Link Dump Tag if you’re looking for even more to read, and please forward this to anyone else you think might find these interesting.
Interesting, and long, read from Hugh Howey on his upbringing, atheism, and how Change Is Okay.
I think it’s incredibly healthy to be reading folks who think the AI hype is…not going to go nearly as far as the advocates say. I agree with David William Silwa to some extent, but I do think this time is fundamentally a bit different. I suspect we’ll land somewhere in the middle.
Chaser to the shot: Noah Smith gives a laundry list of just how far current technology has supplemented and surpassed us. Lost in all the noise about AGI and superintelligence is just what has happened in my lifetime alone. Fun to compare that to the claims Silwa makes above and then look at the current state.
Nate Silver on whether Trump has gone to “war” with Iran and what it might mean. On the one hand, I’m always up for some semantics/linguistics comments. On the other, yeah we’re in a fucking war.
On the problem of “reliable sources” on Wikipedia. Incredibly long and detailed but, in the end, something worth reading to understand how Wikipedia is being abused.
Sean Goedecke on why giving LLMs personalities is just good practice. I am fascinated by this topic because I already see heavy differentiation between the “personalities” of the models. That is, they are being raised in different nurture environments. Even having been trained on mostly the same stuff, they are fundamentally different because of these personalities. This will only grow, and people already show attachment to particular models because of it.
Fetal surgery with stem cells to repair spina bifida is safe. This is just fucking cool.
I already love Go so when I saw Go is the Best Language for AI Agents I was sold before I clicked the link.
I pulled out all of the Anthropic / Department of Defense drama because, frankly, this conversation is deeply complex and will have very far-reaching consequences.
(Note: the arbitrary renaming of the “Department of Defense” to the “Department of War,” done only kinda-sorta officially since it’s Congress’ job to pick that name, is why I continue to refer to it as DoD. I’m not interested in normalizing more erosion of Congressional power, even if only with my language.)
A friend sent me this experiment with LLMs and The Prisoner’s Dilemma and it was a riveting read right after my piece that used the same concept.
The Department of Defense labelling Anthropic a supply chain risk in a fit of pique is a hell of a thing.
Ben Thompson from Stratechery, however, has a fantastic write-up on the very complex nuances and wrinkles of the showdown between Anthropic and the Department of Defense. It echoes some arguments I made, some a friend has made, and overall it’s just an amazing piece. I think these questions will matter deeply to all Americans.
Yet another take on the Anthropic/Department of Defense situation, this one a pretty dark read on the future. Not sure I disagree with it either, but I’ll have to mull it over.
One more take from The Dispatch in their The Morning Dispatch where you can scroll down and read a just-the-facts summary.