I'm going to talk about a failure of intuition that many of us suffer from. It's really a failure to detect a certain kind of danger.
I'm going to describe a scenario
that I think is both terrifying and likely to occur, and that's not a good combination.
And yet rather than be scared, most of you will feel that what I'm talking about is kind of cool.
I'm going to describe how the gains we make in artificial intelligence could ultimately destroy us. And in fact, I think it's very difficult to see how they won't destroy us or inspire us to destroy ourselves.
And yet if you're anything like me, you'll find that it's fun to think about these things.
And that response is part of the problem.
OK? That response should worry you.
And if I were to convince you in this talk that we were likely to suffer a global famine, either because of climate change or some other catastrophe, and that your grandchildren, or their grandchildren, are very likely to live like this, you wouldn't think, "Interesting. I like this TED Talk." Famine isn't fun. Death by science fiction, on the other hand, is fun,
and one of the things that worries me most about the development of AI at this point is that we seem unable to marshal an appropriate emotional response to the dangers that lie ahead. I am unable to marshal this response, and I'm giving this talk.
It's as though we stand before two doors. Behind door number one, we stop making progress in building intelligent machines. Our computer hardware and software just stops getting better for some reason. Now take a moment to consider why this might happen.
I mean, given how valuable intelligence and automation are, we will continue to improve our technology if we are at all able to. What could stop us from doing this? A full-scale nuclear war? A global pandemic? An asteroid impact? Justin Bieber becoming president of the United States? The point is, something would have to destroy civilization as we know it.
You have to imagine how bad it would have to be to prevent us from making improvements in our technology permanently, generation after generation. Almost by definition, this is the worst thing that's ever happened in human history.
So the only alternative, and this is what lies behind door number two, is that we continue to improve our intelligent machines year after year after year. At a certain point, we will build machines that are smarter than we are, and once we have machines that are smarter than we are, they will begin to improve themselves.
And then we risk what the mathematician IJ Good called an "intelligence explosion," that the process could get away from us.
Now, this is often caricatured, as I have here, as a fear that armies of malicious robots will attack us. But that isn't the most likely scenario.
It's not that our machines will become spontaneously malevolent.
The concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us.
Just think about how we relate to ants.
We don't go out of our way to harm them.
In fact, sometimes we take pains not to harm them.
We step over them on the sidewalk. But whenever their presence seriously conflicts with one of our goals, let's say when constructing a building like this one, we annihilate them without a qualm.
The concern is that we will one day build machines that, whether they're conscious or not, could treat us with similar disregard.
Now, I suspect this seems far-fetched to many of you.
I bet there are those of you who doubt that superintelligent AI is possible, much less inevitable. But then you must find something wrong with one of the following assumptions.
And there are only three of them.
Intelligence is a matter of information processing in physical systems.
Actually, this is a little bit more than an assumption. We have already built narrow intelligence into our machines, and many of these machines perform at a level of superhuman intelligence already.
And we know that mere matter can give rise to what is called "general intelligence," an ability to think flexibly across multiple domains, because our brains have managed it. Right?
I mean, there are just atoms in here, and as long as we continue to build systems of atoms that display more and more intelligent behavior, we will eventually, unless we are interrupted, build general intelligence into our machines.
It's crucial to realize that the rate of progress doesn't matter, because any progress is enough to get us into the end zone. We don't need Moore's law to continue. We don't need exponential progress. We just need to keep going.
The second assumption\nis that we will keep going.
We will continue to improve\nour intelligent machines.
And given the value of intelligence --
I mean, intelligence is either the source of everything we value, or we need it to safeguard everything we value.
It is our most valuable resource.
We have problems that we desperately need to solve. We want to cure diseases like Alzheimer's and cancer. We want to understand economic systems. We want to improve our climate science.
So we will do this, if we can.
The train is already out of the station, and there's no brake to pull.
Finally, we don't stand on a peak of intelligence, or anywhere near it, likely.
And this really is the crucial insight.
This is what makes our situation so precarious, and this is what makes our intuitions about risk so unreliable.
Now, just consider the smartest person who has ever lived. On almost everyone's shortlist here is John von Neumann.
I mean, the impression that von Neumann made on the people around him, and this included the greatest mathematicians and physicists of his time, is fairly well-documented. If only half the stories about him are half true, there's no question he's one of the smartest people who has ever lived.
So consider the spectrum of intelligence.
Here we have John von Neumann. Then we have you and me. And then we have a chicken. There's no reason for me to make this talk more depressing than it needs to be.
It seems overwhelmingly likely, however, that the spectrum of intelligence extends much further than we currently conceive,
and if we build machines that are more intelligent than we are, they will very likely explore this spectrum in ways that we can't imagine, and exceed us in ways that we can't imagine.
And it's important to recognize that this is true by virtue of speed alone. Right? So imagine if we just built a superintelligent AI that was no smarter than your average team of researchers at Stanford or MIT.
Well, electronic circuits function about a million times faster than biochemical ones, so this machine should think about a million times faster than the minds that built it.
So you set it running for a week, and it will perform 20,000 years of human-level intellectual work, week after week after week.
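The "20,000 years" figure is just the million-fold speed ratio applied to one week of wall-clock time; a quick back-of-the-envelope check (the million-fold speedup is the talk's own assumption, not a measured number):

```python
# Rough check of the "20,000 years per week" claim:
# a machine thinking ~1,000,000x faster than humans, run for one week,
# does the equivalent of 1,000,000 human-weeks of intellectual work.
SPEEDUP = 1_000_000          # assumed electronic-vs-biochemical speed ratio
human_weeks = 1 * SPEEDUP    # one wall-clock week of machine time
years = human_weeks * 7 / 365.25
print(round(years))          # ~19,165 -- roughly the 20,000 years quoted
```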
How could we even understand, much less constrain, a mind making this sort of progress?
The other thing that's worrying, frankly, is this: imagine the best-case scenario.
So imagine we hit upon a design of superintelligent AI that has no safety concerns. We have the perfect design the first time around.
It's as though we've been handed an oracle that behaves exactly as intended.
Well, this machine would be the perfect labor-saving device. It can design the machine that can build the machine that can do any physical work, more or less for the cost of raw materials.
So we're talking about the end of human drudgery. We're also talking about the end of most intellectual work.
So what would apes like ourselves do in this circumstance? Well, we'd be free to play Frisbee and give each other massages. Add some LSD and some questionable wardrobe choices, and the whole world could be like Burning Man.
Now, that might sound pretty good, but ask yourself what would happen under our current economic and political order.
It seems likely that we would witness a level of wealth inequality and unemployment that we have never seen before.
Absent a willingness to immediately put this new wealth to the service of all humanity, a few trillionaires could grace the covers of our business magazines while the rest of the world would be free to starve.
And what would the Russians or the Chinese do if they heard that some company in Silicon Valley was about to deploy a superintelligent AI?
This machine would be capable of waging war, whether terrestrial or cyber, with unprecedented power.
This is a winner-take-all scenario.
To be six months ahead of the competition here is to be 500,000 years ahead, at a minimum.
So it seems that even mere rumors of this kind of breakthrough could cause our species to go berserk.
Now, one of the most frightening things, in my view, at this moment, are the kinds of things that AI researchers say when they want to be reassuring.
And the most common reason we're told not to worry is time. This is all a long way off, don't you know. This is probably 50 or 100 years away. One researcher has said, "Worrying about AI safety is like worrying about overpopulation on Mars."
This is the Silicon Valley version of "don't worry your pretty little head about it." No one seems to notice that referencing the time horizon is a total non sequitur.
If intelligence is just a matter of information processing, and we continue to improve our machines, we will produce some form of superintelligence.
And we have no idea how long it will take us to create the conditions to do that safely. Let me say that again. We have no idea how long it will take us to create the conditions to do that safely.
And if you haven't noticed, 50 years is not what it used to be.
This is how long we've had the iPhone.
This is how long "The Simpsons" has been on television.
Fifty years is not that much time to meet one of the greatest challenges our species will ever face.
Once again, we seem to be failing to have an appropriate emotional response to what we have every reason to believe is coming.
The computer scientist Stuart Russell has a nice analogy here.
He said, imagine that we received a message from an alien civilization, which read: "People of Earth, we will arrive on your planet in 50 years. Get ready." And now we're just counting down the months until the mothership lands? We would feel a little more urgency than we do.
Another reason we're told not to worry
is that these machines can't help but share our values, because they will be literally extensions of ourselves. They'll be grafted onto our brains, and we'll essentially become their limbic systems.
Now take a moment to consider that the safest and only prudent path forward, we are told, is to implant this technology directly into our brains.
Now, this may in fact be the safest and only prudent path forward, but usually one's safety concerns about a technology have to be pretty much worked out before you stick it inside your head.
The deeper problem is that building superintelligent AI on its own seems likely to be easier than building superintelligent AI and having the completed neuroscience that allows us to seamlessly integrate our minds with it.
And given that the companies and governments doing this work are likely to perceive themselves as being in a race against all others, given that to win this race is to win the world, provided you don't destroy it in the next moment, then it seems likely that whatever is easier to do will get done first.
Now, unfortunately, I don't have a solution to this problem, apart from recommending that more of us think about it.
I think we need something like a Manhattan Project on the topic of artificial intelligence.
Not to build it, because I think we'll inevitably do that, but to understand how to avoid an arms race, and to build it in a way that is aligned with our interests.
When you're talking about superintelligent AI that can make changes to itself, it seems that we only have one chance to get the initial conditions right, and even then we will need to absorb the economic and political consequences of getting them right.
Once we admit that information processing is the source of intelligence, that some appropriate computational system is what the basis of intelligence is, and we admit that we will improve these systems continuously, and we admit that the horizon of cognition very likely far exceeds what we currently know, then we have to admit that we are in the process of building some sort of god. Now would be a good time to make sure it's a god we can live with.