I think a major divergence I have with the edifice of AI apocalypse is that, basically, the entire structure depends on a giant missing step.
In short, my objection can be summarized as “The universe isn’t convenient enough for AI to kill all of us.” Everything has trade-offs; the idea that AI could doom us all starts with a weird assumption that AI can even be that great in the first place.
That is - in order to get to the kind of fantastic pessimism about how AI can kill us all, you kind of have to start with a fantastic optimism that AI can save us all. And I think if you notice all the ways “AI can save us all” can go terribly wrong, and kill us all instead, you should pay some attention to that inflection point - that little node that means the difference between an AI-administered utopia where all our desires are finally satisfied, and an AI-run universe tiling everything in whatever value it is maximizing.
It’s close to “value maximization”. Adjacent to it, even.
It’s “value coherency”. Notice how hard it is to specify a value - how easily you can screw it up - and then notice your confusion about the specification process. You’ve already noticed how difficult the problem is - now pay attention to the fact that you don’t know how to solve it. Try to break down what makes it so difficult to solve.
Yes, there’s an is-ought gap there. We can hack around that. Ignore it and keep poking. What -is- a value? How would you go about specifying a value?
Start with something easy: Try to specify - actually specify, in complete detail - what it would mean to, say, value maximizing paperclips. If you’re using a language, notice that you have to begin by specifying the language.
That is, a completed specification is self-descriptive: anybody, regardless of their state of knowledge, can understand it. You aren’t allowed to lean on any mathematics not included in the specification itself.
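To make the regress concrete, here’s a toy sketch in Python - purely my own illustration, with every name in it hypothetical - of what a naive attempt at the paperclip value looks like, and where it bottoms out:

```python
# A toy sketch of where the regress bites: a naive "maximize paperclips"
# value, written as if it were a specification. Each function bottoms out
# in a definition the specification itself was supposed to supply.

def is_paperclip(obj) -> bool:
    # What counts? A bent wire? A 3D render? A clip one molecule wide?
    # Any predicate we pick smuggles in an entire ontology of objects.
    raise NotImplementedError("the specification bottoms out here")

def count_paperclips(world) -> int:
    # Even "enumerate the objects in the world" presumes we've already
    # carved the world into objects - another unstated theory.
    return sum(1 for obj in world if is_paperclip(obj))

def value(world) -> int:
    # And "maximize" presumes we've pinned down a timescale, a risk
    # tolerance, and a rate of exchange against everything else.
    # Note, too, that we leaned on Python and on arithmetic - neither
    # of which this "specification" defines.
    return count_paperclips(world)
```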
This is, frankly, impossible. But, alas for our argument here, it proves too much: It argues we can’t communicate anything at all. (This is closer to accurate than the typical assumption, but still wildly inaccurate.)
But it starts to get at the broad kind of issue here: The universe isn’t convenient about letting us specify coherent values in an objective manner.
It also isn’t convenient in other ways. If intelligence is infinitely useful, why isn’t everything maximally intelligent? Why do so many species evolve in ways that aren’t prioritizing the single most useful thing?
There are several plausible answers here, but they all kind of amount to the same thing: Intelligence is not always the most useful thing. Indeed, given the various costs of intelligence, it is often, for a particular individual in particular circumstances, a net negative - energy spent fueling your brain is energy not available for running away.
I think a lot of people kind of assume a cut-off point; like, intelligence below this point isn’t as useful as, say, spending that energy to build fat reserves instead. But once you reach a critical threshold, suddenly intelligence becomes supremely useful.
In a sense it’s hard to argue against this - look around at modern society.
But in another sense it is actually very easy to argue against this - it took us a long time, with basically modern levels of intelligence, to get to this point. Intelligence alone is insufficient; you also need time, and you need various other conditions to line up. (I’ve encountered arguments that industrialization - the thing that got us to a recognizably modern state of affairs - was a product of the Black Death killing off so many people that society was forced to reorganize around lower labor inputs.)
Intelligence alone is not enough!
I’d say the most intelligent person in known history is probably Leonardo da Vinci.
He didn’t give us modern society. He arguably invented things that wouldn’t exist until modern society - but we didn’t actually get them from him. Lots of other contingent factors came into play - such as the state of engineering as a field.
Intelligence alone is not enough.
It’s useful - massively useful. And we’ve reorganized our society in substantial ways that make it much more useful. (And in so doing kind of given the shaft to many other people, but, well, that is a separate topic.) But we must observe that it is insufficient, in and of itself, to achieve things.
It’s insufficient to survive; you need fat reserves, you need running ability, you need the ability to fit a head through a birth canal. Intelligence alone won’t get you through the winter.
It’s insufficient to prosper; you need organization of agents whose capacities can be tapped in order to turn it into something useful. Intelligence alone can’t turn a dysfunctional society into a functional society.
It’s insufficient to advance; you need observational data. Intelligence alone can’t turn sparse data into rich findings.
I don’t think intelligence alone can bootstrap us into utopian conditions; I don’t think a superintelligence can just cut through all the problems and solve everything forever. Neither for good, nor for ill.
If you expect AI to kill us all, I think at some level this must imply that you think that, if we could somehow properly leash it to our purposes, we could get AI that can save us all.
But I think the converse must also be true, to some extent: If you expect we can get AI that can save us all, you must also, if you are doing any proper accounting, assign -some- probability to the possibility that it will kill us all instead.
So, let’s group people into two camps: those who think that superintelligence can completely rewrite the world, for good or ill, and those who don’t.
If you do - alright, great, we disagree, but we have a legible disagreement. The thing is: The facts are not, in fact, convenient to your case. The logic behind your case may be, in a particular sense, valid - but only because it takes as axiomatic the very thing you ought to be proving. The problem, of course, is that a sound argument for why superintelligence can rewrite the world depends on a factual basis that cannot exist without superintelligence.
That is, if you could prove that superintelligence could rewrite the world, you would already have discovered how to rewrite the world, and could do so yourself. Which a lot of us skeptics of the whole “superintelligence rewrites the world” position charitably agree to overlook; you necessarily can’t prove your case, and so we kind of agree to work under the assumption that it is possible.
Except, honestly, I’m tired of overlooking it, because it’s kind of central to the disagreement here: I don’t think the universe is convenient enough for superintelligence to be doing this, in the same sense that I don’t think the universe is convenient enough for a general proof that P=NP. (Or even convenient enough for a general disproof - because my suspicion is that the situation is actually a lot more complicated than “P does or does not equal NP”.)
Intelligence looks like a superpower from a modern perspective because centuries of effort have gone into the project of usefully exploiting it. It is a hell of a lot less of a superpower when you’re trying to run away from a predator - sometimes it might be helpful there, but oftentimes, what you actually want is just to run faster, or to have seen the predator sooner. Intelligence can’t correct bad sensory data - and if you can’t sense the predator, all intelligence will afford you is some paranoia for the entire time you aren’t actively being eaten, at the expense of calories that would be better employed doing virtually anything else.
The advantage of intelligence doesn’t scale with intelligence, either: Being an extremely smart deer doesn’t afford you much more than being a slightly smart deer.
The advantage of intelligence scales, not with intelligence, but with the environmental context in which that intelligence exists. To the extent that the context advantages intelligence, intelligence will be better - but only to that extent. If you’re on an island alone, intelligence doesn’t afford you many additional options; if you’re in a preindustrial society, it affords you some, but the advantage doesn’t keep increasing with your intelligence.
And in a modern society - the sky is not yet the limit! Lots of intelligent people end up doing tasks that less intelligent people are perfectly capable of doing.
The advantage to intelligence is contextually determined. Which is to say: In order for a superintelligence to rewrite the world, first, it must rewrite the world, so as to make the world one in which superintelligence is sufficiently advantageous that it can rewrite the world in the first place. Which is to say: There’s a bootstrapping problem here that mere intelligence cannot trivially overcome.
There may be reasons to expect us humans to help superintelligence overcome these problems, of course - in a sense that’s been our project for our entire history as a species. But we’re not even capable of fully deploying our own intelligence yet - which implies superintelligence doesn’t get huge additional advantages merely for existing. Triple the IQ of everybody in the world, and it will be decades, at least, before we begin to see any effect - and mostly the effects will revolve around reorganizing society to better exploit the new resource.
Exploitation, perhaps, is exactly the right framing in which to think about intelligence: It is a resource. Valuable in much the way iron ore is valuable - not in and of itself, but for what it can be tapped to accomplish, with sufficient processing and refining and reshaping. And valuable only insofar as you have organized things to make it valuable: On its own, iron ore is basically just a slightly heavier rock. You need to do a lot to it before it becomes something useful.
The universe isn’t convenient. Iron ore is just a rock; petroleum is just a vile-smelling muck. We have to act on these things in order to make them into something that does anything. And we can employ them to both good and bad purposes; steel makes good shovels, but it also makes good rifles. And petroleum is just as good at powering tanks as it is at powering farm tractors.
You see intelligence as the thing which turns useless things (well, low-use things; iron ore still works as a paperweight or a bludgeon) into useful things - but intelligence is itself one of those things that starts off useless and must be converted into something useful. It is not the entirety of the process of conversion - it is one of countless inputs.
So - I don’t think superintelligence will be that useful. Useful, yes. But useful-to-the-point-of-rewriting-the-world? No. That would be far too convenient.