link.dump.2.28.2026
Bubbles, Hydrogen Technicolor, Scaling Laws, and Age Verification
I read a lot of random stuff. Link Dumps are the things I think are worth remembering. You can always check out my Link Dump Tag if you’re looking for even more to read, and please forward these to anyone else you think might find them interesting.
“Optimization costs me joy.” Truer words. Voltaire’s quote about perfect being the enemy of good is one of my all-time favorites, and this dude’s take on it was well worth a read.
Brink Lindsey has some compelling thoughts on his “permanent problem” of human existence and how AI may very well make us face it. All sorts of interesting thoughts sprang from reading this one, but the biggest question of them all is exactly that: what is the point of a human being?
Noah Smith asks “Does anyone know why we’re still doing tariffs?” And I think his answer for the “why” is spot-on.
David William Silwa has a fiery take aimed at bursting the AI superintelligence bubble. An interesting chaser after Lindsey’s shot above. Also a great read.
An old one, but something I was reminded of today - Charity Majors on Friday Deploy Freezes Are Exactly Like Murdering Puppies.
I think this is something everyone needs to understand: YOU CANNOT IDENTIFY AI WRITTEN STUFF. There are caveats, but they are diminishing. As a general rule, when you build a Chinese Room whose entire purpose is to produce writing like a human’s, you are mathematically creating something you cannot separate from human writing. This has happened, we are doing it, let’s figure out what it means.
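A rough sketch of why I say “mathematically” (my framing, not necessarily the linked piece’s): for any detector, the best achievable accuracy at telling human text from model text is capped by the total variation distance between the two distributions,

\[
\mathrm{Acc}^{*} = \frac{1}{2}\bigl(1 + \mathrm{TV}(P_{\text{human}}, P_{\text{model}})\bigr)
\]

so as training pushes \(\mathrm{TV} \to 0\), the best possible detector converges to a 50/50 coin flip. The caveats live in whatever gap remains between the two distributions, and that gap keeps shrinking.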
Microsoft has a new way of storing data that they say will last for 10,000 years - glass.
I did not know that bubbles were such a big problem in manufacturing. Bubbles.
An analysis of durable software companies in the age of AI. Fascinating, and quite a bit less breathless than much of the “future talk” I’ve been reading. And, by the same guy (so he doesn’t get another bullet here), When to Join a Startup.
A deeply crunchy analysis of The Fickleness of Scaling Laws as it relates to LLM training in different domains. I’m not going to pretend to have grokked all of it, but the look into different domains and some of the training data was cool. I also wish we could measure this type of thing in humans - I feel like the smooth “drop” in a training session is analogous to what happens with a kid growing up. Something rolling around in my head right now is the gap between LLM training (glorified calculator) and human mind training (actual consciousness).
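For reference, the baseline these fickleness results poke at is the standard Chinchilla-style loss law (the canonical form; the post may parameterize it differently):

\[
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
\]

where \(N\) is parameter count, \(D\) is training tokens, and \(E\) is the irreducible loss. The “fickle” part is that the fitted exponents \(\alpha\) and \(\beta\) - and sometimes whether a clean power law holds at all - shift from domain to domain.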
A well-considered explanation of the Age Verification Trap.
In energy research, apparently, hydrogen comes in different colors. So to speak.


