
Can we build AI without losing control over it? Sam Harris (English subtitles)

I'm going to talk about a failure of intuition that many of us suffer from. It's really a failure to detect a certain kind of danger.

I'm going to describe a scenario

that I think is both terrifying and likely to occur, and that's not a good combination.

And yet rather than be scared, most of you will feel that what I'm talking about is kind of cool.

I'm going to describe how the gains we make in artificial intelligence could ultimately destroy us. And in fact, I think it's very difficult to see how they won't destroy us or inspire us to destroy ourselves.

And yet if you're anything like me, you'll find that it's fun to think about these things.

And that response is part of the problem.

OK? That response should worry you.

And if I were to convince you in this talk that we were likely to suffer a global famine, either because of climate change or some other catastrophe, and that your grandchildren, or their grandchildren, are very likely to live like this, you wouldn't think, "Interesting. I like this TED Talk." Famine isn't fun.

Death by science fiction, on the other hand, is fun, and one of the things that worries me most about the development of AI at this point is that we seem unable to marshal an appropriate emotional response to the dangers that lie ahead. I am unable to marshal this response, and I'm giving this talk.

It's as though we stand before two doors.

Behind door number one, we stop making progress in building intelligent machines. Our computer hardware and software just stops getting better for some reason.

Now take a moment to consider why this might happen.

I mean, given how valuable intelligence and automation are, we will continue to improve our technology if we are at all able to.

What could stop us from doing this?

Justin Bieber becoming president of the United States?

The point is, something would have to destroy civilization as we know it. You have to imagine how bad it would have to be to prevent us from making improvements in our technology permanently, generation after generation.

Almost by definition, this is the worst thing that's ever happened in human history.

So the only alternative, and this is what lies behind door number two, is that we continue to improve our intelligent machines year after year after year.

At a certain point, we will build machines that are smarter than we are, and once we have machines that are smarter than we are, they will begin to improve themselves.

And then we risk what the mathematician I. J. Good called an "intelligence explosion": that the process could get away from us.

Now, this is often caricatured, as I have here, as a fear that armies of malicious robots will attack us. But that isn't the most likely scenario.

It's not that our machines will become spontaneously malevolent.

The concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us.

Just think about how we relate to ants.

We don't go out of our way to harm them.

In fact, sometimes we take pains not to harm them.

We step over them on the sidewalk.

But whenever their presence seriously conflicts with one of our goals, let's say when constructing a building like this one, we annihilate them without a qualm.

The concern is that we will one day build machines that, whether they're conscious or not, could treat us with similar disregard.

Now, I suspect this seems far-fetched to many of you.

I bet there are those of you who doubt that superintelligent AI is possible, much less inevitable. But then you must find something wrong with one of the following assumptions. And there are only three of them.

The first assumption is that intelligence is a matter of information processing in physical systems. Actually, this is a little bit more than an assumption.

We have already built narrow intelligence into our machines, and many of these machines perform at a level of superhuman intelligence already.

And we know that mere matter can give rise to what is called "general intelligence," an ability to think flexibly across multiple domains, because our brains have managed it. Right?

I mean, there's just atoms in here, and as long as we continue to build systems of atoms that display more and more intelligent behavior, we will eventually, unless we are interrupted, we will eventually build general intelligence into our machines.

It's crucial to realize that the rate of progress doesn't matter, because any progress is enough to get us into the end zone. We don't need Moore's law to continue. We don't need exponential progress. We just need to keep going.

The second assumption is that we will keep going. We will continue to improve our intelligent machines.

And given the value of intelligence: intelligence is either the source of everything we value, or we need it to safeguard everything we value. It is our most valuable resource. So we want to do this.

We have problems that we desperately need to solve. We want to cure diseases like Alzheimer's and cancer. We want to understand economic systems. We want to improve our climate science. So we will do this, if we can.

The train is already out of the station, and there's no brake to pull.

Finally, we don't stand on a peak of intelligence, or anywhere near it, likely.

And this really is the crucial insight.

This is what makes our situation so precarious, and this is what makes our intuitions about risk so unreliable.

Now, just consider the smartest person who has ever lived. On almost everyone's shortlist here is John von Neumann.

I mean, the impression that von Neumann made on the people around him, and this included the greatest mathematicians and physicists of his time, is fairly well documented. If only half the stories about him are half true, there's no question he's one of the smartest people who has ever lived.

So consider the spectrum of intelligence.

Here we have John von Neumann.

There's no reason for me to make this talk more depressing than it needs to be.

It seems overwhelmingly likely, however, that the spectrum of intelligence extends much further than we currently conceive, and if we build machines that are more intelligent than we are, they will very likely explore this spectrum in ways that we can't imagine, and exceed us in ways that we can't imagine.

And it's important to recognize that this is true by virtue of speed alone.

Right? So imagine if we just built a superintelligent AI that was no smarter than your average team of researchers at Stanford or MIT.

Well, electronic circuits function about a million times faster than biochemical ones, so this machine should think about a million times faster than the minds that built it.

So you set it running for a week, and it will perform 20,000 years of human-level intellectual work, week after week after week.

How could we even understand, much less constrain, a mind making this sort of progress?
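The arithmetic behind that 20,000-year figure is easy to check. A quick sketch (the million-fold speed-up is the talk's own rough estimate, not a measured constant):

```python
# Sanity-check the talk's speed-up arithmetic.
SPEEDUP = 1_000_000          # electronic vs. biochemical circuits (rough estimate)
WEEKS_PER_YEAR = 365.25 / 7  # about 52.18

# One wall-clock week of machine time = SPEEDUP weeks of human-level work.
years_of_work = SPEEDUP / WEEKS_PER_YEAR
print(round(years_of_work))  # about 19,165, which the talk rounds to 20,000
```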

The other thing that's worrying, frankly, is this: imagine the best-case scenario.

So imagine we hit upon a design of superintelligent AI that has no safety concerns. We have the perfect design the first time around.

It's as though we've been handed an oracle that behaves exactly as intended.

Well, this machine would be the perfect labor-saving device.

It can design the machine that can build the machine that can do any physical work, powered by sunlight, more or less for the cost of raw materials.

So we're talking about the end of human drudgery. We're also talking about the end of most intellectual work.

So what would apes like ourselves do in this circumstance?

Well, we'd be free to play Frisbee and give each other massages. Add some LSD and some questionable wardrobe choices, and the whole world could be like Burning Man.

Now, that might sound pretty good, but ask yourself what would happen under our current economic and political order?

It seems likely that we would witness a level of wealth inequality and unemployment that we have never seen before.

Absent a willingness to immediately put this new wealth to the service of all humanity, a few trillionaires could grace the covers of our business magazines while the rest of the world would be free to starve.

And what would the Russians or the Chinese do if they heard that some company in Silicon Valley was about to deploy a superintelligent AI?

This machine would be capable of waging war, whether terrestrial or cyber, with unprecedented power.

This is a winner-tak­e-all scenario.

To be six months ahead of the competition here is to be 500,000 years ahead, at a minimum.

So it seems that even mere rumors of this kind of breakthrough could cause our species to go berserk.

Now, one of the most frightening things, in my view, at this moment, is the kind of thing that AI researchers say when they want to be reassuring.

And the most common reason we're told not to worry is time. This is all a long way off, don't you know. This is probably 50 or 100 years away.

Worrying about AI safety, we are told, is like worrying about overpopulation on Mars.

This is the Silicon Valley version of "don't worry your pretty little head about it."

No one seems to notice that referencing the time horizon is a total non sequitur.

If intelligence is just a matter of information processing, and we continue to improve our machines, we will produce some form of superintelligence.

And we have no idea how long it will take us to create the conditions to do that safely. Let me say that again: we have no idea how long it will take us to create the conditions to do that safely.

And if you haven't noticed, 50 years is not what it used to be.

This is how long we've had the iPhone.

This is how long "The Simpsons" has been on television.

Fifty years is not that much time to meet one of the greatest challenges our species will ever face.

Once again, we seem to be failing to have an appropriate emotional response to what we have every reason to believe is coming.

The computer scientist Stuart Russell has a nice analogy here.

He said, imagine that we received a message from an alien civilization, which read: "People of Earth, we will arrive on your planet in 50 years. Get ready." And now we're just counting down the months until the mothership lands? We would feel a little more urgency than we do.

Another reason we're told not to worry is that these machines can't help but share our values, because they will be literally extensions of ourselves.

They'll be grafted onto our brains, and we'll essentially become their limbic systems.

Now, take a moment to consider that the safest and only prudent path forward, we are told, is to implant this technology directly into our brains.

Now, this may in fact be the safest and only prudent path forward, but usually one's safety concerns about a technology have to be pretty much worked out before you stick it inside your head.

The deeper problem is that building superintelligent AI on its own seems likely to be easier than building superintelligent AI and having the completed neuroscience that allows us to seamlessly integrate our minds with it.

And given that the companies and governments doing this work are likely to perceive themselves as being in a race against all others, given that to win this race is to win the world, provided you don't destroy it in the next moment, then it seems likely that whatever is easier to do will get done first.

Now, unfortunately, I don't have a solution to this problem, apart from recommending that more of us think about it.

I think we need something like a Manhattan Project on the topic of artificial intelligence.

Not to build it, because I think we'll inevitably do that, but to understand how to avoid an arms race and to build it in a way that is aligned with our interests.

When you're talking about superintelligent AI that can make changes to itself, it seems that we only have one chance to get the initial conditions right, and even then we will need to absorb the economic and political consequences of getting them right.

But the moment we admit that information processing is the source of intelligence, that some appropriate computational system is the basis of intelligence, and we admit that we will improve these systems continuously, and we admit that the horizon of cognition very likely far exceeds what we currently know, then we have to admit that we are in the process of building some sort of god. Now would be a good time to make sure it's a god we can live with.

