Nick Bostrom has a plan for humanity’s “big retirement.”


Philosopher Nick Bostrom recently published a paper in which he posited that the small chance of AI annihilating all humans might be worth the risk, because advanced AI might relieve humanity of a “global death penalty.” This optimistic gamble is a big leap from his earlier dark musings about artificial intelligence, which made him a godfather of AI doom. His 2014 book Superintelligence was an early examination of the existential risks of AI. One memorable thought experiment: an AI tasked with making paperclips ends up destroying humanity, because all those resource-hungry humans are an obstacle to paperclip production. His most recent book, Deep Utopia, reflects a shift in his focus. Bostrom, who leads the Future of Humanity Institute at Oxford, dwells on the “solved world” that could come if we get AI right.

Steven Levy: Deep Utopia is more optimistic than your previous book. What changed for you?

Nick Bostrom: I call myself a nervous optimist. I’m very excited about the possibility of radically improving human life and opening up possibilities for our civilization. That sits alongside a real probability of things going wrong.

You’ve written a paper with a startling argument: since we’re all going to die anyway, the worst that could happen with AI is that we die sooner. But if AI succeeds, it could extend our lives, perhaps indefinitely.

The paper explicitly addresses only one aspect of this. In an academic paper, you can’t address life, the universe, and the meaning of everything. So let’s take this one small question and try to answer it.

This is no small issue.

I think I’ve been bothered by some of the arguments made by doomers who say that if we build AI, it will kill us and our children, and how dare you. Like that recent book, If Anyone Builds It, Everyone Dies. The more likely truth is that if nobody builds it, everyone dies! That has been the experience for the past 100,000 years.

But in the doom scenario, everyone dies and no more people are born. Big difference.

Obviously I’m very worried about that. But in this paper I’m looking at a different question: what is best for the existing population, like you and me and our families and people in Bangladesh? It seems our life expectancy would increase if we develop artificial intelligence, even though it is risky.

In Deep Utopia you speculate that artificial intelligence could create such incredible abundance that humanity might struggle to find purpose. I live in the United States. We are a very rich country, but our government, ostensibly with the support of the people, has policies that deprive the poor of services and distribute rewards to the rich. I suspect that even if AI can provide abundance for everyone, we won’t provide it to everyone.

You may be right. Deep Utopia takes as its starting point the idea that everything goes well. If we do a reasonably good job of governance, everyone will get a share. Then there is a deep philosophical question about what a good human life might look like under those ideal conditions.

The meaning of life is something you hear a lot about in Woody Allen films, and perhaps in the community of philosophers. I’m more concerned about the means to support myself and to have a share in this abundance.

The book is not just about meaning. Meaning is one of a range of different values it takes into account. AI could be a wonderful release from the drudgery to which humans have been subjected. If you have to give up, say, half your waking hours as an adult just to make ends meet, doing work you don’t enjoy and don’t believe in, that’s a sad state of affairs. Society is so used to it that we invent all kinds of justifications for it. It’s like a partial form of slavery.
