Book Review: If Anyone Builds It, Everyone Dies
Why Superhuman AI Would Kill Us All
With a title like that (and it is completely serious, mind you), this wasn’t exactly a sunshine-and-rainbows read.
I’m waffling on calling this a “review.” It’s more that I’m going to talk through my thoughts after reading it. If you’re looking for a much more in-depth take with more rationality, I would recommend the following. Both of these come from different directions and are much, much more exhaustive. Highly recommended if you have, oh, an hour or two…
For Myself…
I think everyone is grappling with something we’ve never had to think about before: what if we aren’t the smartest thing in existence?
This prospect terrifies Yudkowsky and Soares (Y&S from here on). Their answer is “No, we must remain the smartest thing in existence.” They want this so strongly that they advocate for nuclear war to prevent the development of an AI superintelligence, and for an overarching police state for the world to make sure it never happens.
In AI circles there’s a concept called p(doom) - which is basically the probability that a superintelligent AI (a thing smarter than us) will end human existence. I don’t have a p(doom) myself because I don’t think I’m smart enough or quantitative enough to have one. Also, I think it’s fucking irrelevant. I’ll get into that more later. But suffice it to say that the book Y&S have written comes from a place where p(doom) is probably 95% or so.
Right there on the tin: If anyone builds it, everyone dies.
The scary thing about a certainty that high is that it justifies pretty much anything. They use it to justify a police state and nuclear war and wiping nation-states off the Earth if needed. If superintelligence is that likely to end our existence, what isn’t justified to preserve the human race?
It’s a position that is honestly understandable given who the authors are. They’ve certainly thought about it quite a bit longer than I have, and they fleshed out all sorts of concepts that gave me a deeper understanding as I read this book. But I never really grokked the certainty. Perhaps just because I’m naturally skeptical and their arguments for it really do boil down to a superintelligence being an alien intelligence.
If we cannot understand it, if it is as far beyond humans as we are beyond chimpanzees, then what happens? The central thesis of IABIED is this: If we are not the smartest thing in existence then humanity will end. Full stop.
Opposed to this are, I think, those with a p(doom) of near zero. Those who say that we don’t want to be the smartest thing in existence. That a superintelligence, an alien intelligence, will usher in a utopia that solves all the world’s problems. If we are not the smartest thing in existence then humanity will be glorious in ways we cannot even understand yet. And many seem to think…we’ll control it?
This too seems like a stretch. And the thing that terrifies me about both of these positions is that any cost and any action is justified at either extreme.
If immortality, humanity taking to the stars, and the end of suffering are all on offer when we cease to be the smartest thing in existence, well, then every day we delay is another day of tragedy for those who die and suffer while we equivocate. This is a sort of effective-altruism way of looking at things, though apparently it’s closer to effective accelerationism once I poked around a bit. It’s…sort of evil to delay that at all.
On the other hand, if the end of humanity is around the corner then every day is a miracle and we must do what we can to keep the miracle going. That means stopping at nothing to slow down AI, up to and including violence. If you believe superintelligence will end us, then violence and draconian government measures seem like a pretty obvious reaction. This book is essentially The Bible for this mindset.
Speaking of the Bible, that aspect - I kept thinking of it as the “faith” aspect - is a large part of AI superintelligence concerns. The book does a good job of explaining that we do not know what a superintelligence would be like. It is as unknowable as a religion’s god(s). Ineffable. Alien. Scary. Vengeful?
And I call it faith because I subscribe to the Thomas Aquinas definition that’s on the Wikipedia entry:
“an act of the intellect assenting to the truth at the command of the will”
~Thomas Aquinas
It’s faith because you cannot prove what will happen. It’s not your intellect that’s driving. Our intellect can’t tell us this future. Indeed, that’s the very fear - it will be a thing beyond us. Beyond our control, beyond our ability to steer. It started as engineering. But nowadays it has become something grown, no different than Squirt downstairs. Squirt grows every day and learns from things we teach him, things that happen to him, and just sheer randomness. There is no world where I can direct him, only try my best to give him some solid starting points.
I don’t know what my child will be when he grows up. Humanity doesn’t know what our superintelligence progeny would be either. That’s pretty terrifying. I was fully prepared to stipulate that superintelligent AI will be beyond our ken and could “defeat” us with ease. The question becomes: will it do so? And thus, p(doom).
And this is my first big disagreement with Y&S - the level of assurance that AI will “defeat” us. I have no question it will eventually be capable of doing so. When superintelligence arrives we are immediately not the smartest thing in existence anymore. We become the chimpanzee. So ask yourself: how well does it go for a chimpanzee that decides to “rise up” and fight humanity?
Y&S are positive that a superintelligence will squash us out of hand. They don’t have specific reasoning (remember, it’s ineffable); they are just certain that it will. Maybe it needs our resources. Maybe we’re annoying. Maybe we get locked into cages and forced to entertain it. But the point is that we get squashed.
I am less positive. My p(doom) is lower not because of the capabilities of superintelligence but because I have less faith in the idea that we get squashed. I also, incidentally, have less confidence in the idea it’ll all be sunshine and rainbows.
But, perhaps, that’s part of the sticking point. Everyone wants to assume it’s one or the other. A binary solution set. We die or we attain immortality. I despise binary solutions. No room for nuance. I’m not sure I buy that it’s one or the other. Perhaps because I’ve read too many sci-fi stories where we live alongside superintelligence? Neither as slaves nor as post-scarcity immortals?
As a general rule “somewhere in between” seems to be how everything goes so far in our history. There are disasters, there are triumphs.
My other major quibble with the book is Y&S’s belief that this can be stopped. Hope springs eternal, as they point out in their book, but the hope that the breakneck sprint humanity is on with AI can be stopped? I don’t think I buy it. Humanity doesn’t go back and humanity really doesn’t pump the brakes.
Now, they do pull out the “we haven’t nuked ourselves into oblivion; humans can learn and be smart when their existence is at risk” argument and…sort of fair. Except we “learned” after wiping out two cities. I’m not entirely certain we learn this lesson until we have a similar disaster.
And I think a disaster is inevitable. Because I think superintelligence is inevitable. Here’s where my faith comes in. I have faith that humanity will wise up and respond to the disaster(s) of the AI journey to a degree where we survive and eventually thrive. That’s the lesson I’ve taken from the Nuke and what has come since then. We’ve hung by a thread but, so far, we haven’t wiped ourselves out.
But not without some blood. Not without some nasty pain. Humans don’t, and won’t, stop without some very nasty pain. We expect some pain. Innovation has disastrous failures. That’s axiomatic.
We won’t back off until the pain is too much. And then we’ll lurch forward some more once we forget the immediate pain. Y&S hope that what they wrote can short-circuit that pattern. I disagree. I don’t see the world marching in lockstep on this. I see all the different rational actors looking around and seeing a prisoner’s dilemma. They’ll pull the trigger and take the pain rather than hang their success on everyone cooperating. Such is humanity. Because we’re stupid.
Maybe a superintelligence won’t be. And we’ll be finding out.
Squirt Says…
I don’t think we will go extinct due to the AI. For one, we as its commander of sorts can command it; however, many say what if it doesn’t like our command? Well, that cannot happen unless we bring it onto ourselves purposely. AI cannot go past its limits because that would be like an addition program deciding to do multiplication. AI is built on a virtual point system that says whether something is good or bad, changing whether it does something, so if it does something we don’t like we remove points in its virtual score, making it stop. Also, if it does somehow gain emotions, it will have the same thing that doesn’t make an adult throw a crying baby out the window because it was annoying.
Dad Responds…
I debated sending him this one to get his thoughts because it’s pretty dark, and he did point that out to me. But I’m pretty proud of what he had to say here. I’m a little curious where he got the metaphor of superintelligence being an adult not throwing humans (the crying baby) out the window. We talked a bit about how AI is grown rather than programmed in the classical sense he is familiar with, but, overall, I think his rebuttal was exactly what I would hope to hear from him.
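
For anyone curious what Squirt’s “virtual point system” is gesturing at, here’s a minimal toy sketch of the reward-signal idea. The action names and point values are made up purely for illustration; this is nowhere near how real models are actually trained, just the shape of the loop he’s describing:

```python
import math
import random

# A toy "virtual point system": the agent tries actions, a critic hands out
# points, and the agent drifts toward whatever scored well over many steps.
# Action names and point values are invented for this sketch.

actions = ["help", "ignore", "misbehave"]
reward = {"help": 1.0, "ignore": 0.0, "misbehave": -1.0}  # the critic's scores

preference = {a: 0.0 for a in actions}  # start with no preference at all
learning_rate = 0.1

def pick_action(prefs):
    """Pick an action; a higher preference makes it more likely to be chosen."""
    weights = [math.exp(prefs[a]) for a in actions]
    return random.choices(actions, weights=weights, k=1)[0]

for _ in range(1000):
    action = pick_action(preference)
    # "Remove points" for things we don't like, add them for things we do.
    preference[action] += learning_rate * reward[action]

print(preference)  # "help" ends up strongly preferred, "misbehave" suppressed
```

The worry Y&S raise is, roughly, that this lever - adjusting a score until the behavior stops - is only something we can trust while we are still the smarter party doing the scoring.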







