
TED2018

Max Tegmark: How to get empowered, not overpowered, by AI

699,884 views

Many artificial intelligence researchers expect AI to outsmart humans at all tasks and jobs within decades, enabling a future where we're restricted only by the laws of physics, not the limits of our intelligence. MIT physicist and AI researcher Max Tegmark separates the real opportunities and threats from the myths, describing the concrete steps we should take today to ensure that AI ends up being the best -- rather than worst -- thing to ever happen to humanity.

- Scientist, author
Max Tegmark is driven by curiosity, both about how our universe works and about how we can use the science and technology we discover to help humanity flourish rather than flounder.

After 13.8 billion years
of cosmic history,
00:12
our universe has woken up
00:17
and become aware of itself.
00:19
From a small blue planet,
00:21
tiny, conscious parts of our universe
have begun gazing out into the cosmos
00:23
with telescopes,
00:27
discovering something humbling.
00:29
We've discovered that our universe
is vastly grander
00:31
than our ancestors imagined
00:34
and that life seems to be an almost
imperceptibly small perturbation
00:35
on an otherwise dead universe.
00:39
But we've also discovered
something inspiring,
00:42
which is that the technology
we're developing has the potential
00:45
to help life flourish like never before,
00:48
not just for centuries
but for billions of years,
00:51
and not just on earth but throughout
much of this amazing cosmos.
00:54
I think of the earliest life as "Life 1.0"
00:59
because it was really dumb,
01:03
like bacteria, unable to learn
anything during its lifetime.
01:04
I think of us humans as "Life 2.0"
because we can learn,
01:08
which we, in nerdy geek speak,
01:12
might think of as installing
new software into our brains,
01:13
like languages and job skills.
01:16
"Life 3.0," which can design not only
its software but also its hardware,
01:19
of course doesn't exist yet.
01:24
But perhaps our technology
has already made us "Life 2.1,"
01:25
with our artificial knees,
pacemakers and cochlear implants.
01:29
So let's take a closer look
at our relationship with technology, OK?
01:33
As an example,
01:38
the Apollo 11 moon mission
was both successful and inspiring,
01:40
showing that when we humans
use technology wisely,
01:45
we can accomplish things
that our ancestors could only dream of.
01:48
But there's an even more inspiring journey
01:52
propelled by something
more powerful than rocket engines,
01:55
where the passengers
aren't just three astronauts
01:59
but all of humanity.
02:01
Let's talk about our collective
journey into the future
02:03
with artificial intelligence.
02:06
My friend Jaan Tallinn likes to point out
that just as with rocketry,
02:08
it's not enough to make
our technology powerful.
02:13
We also have to figure out,
if we're going to be really ambitious,
02:17
how to steer it
02:20
and where we want to go with it.
02:22
So let's talk about all three
for artificial intelligence:
02:24
the power, the steering
and the destination.
02:28
Let's start with the power.
02:31
I define intelligence very inclusively --
02:33
simply as our ability
to accomplish complex goals,
02:36
because I want to include both
biological and artificial intelligence.
02:41
And I want to avoid
the silly carbon-chauvinism idea
02:44
that you can only be smart
if you're made of meat.
02:48
It's really amazing how the power
of AI has grown recently.
02:52
Just think about it.
02:57
Not long ago, robots couldn't walk.
02:58
Now, they can do backflips.
03:03
Not long ago,
03:06
we didn't have self-driving cars.
03:07
Now, we have self-flying rockets.
03:10
Not long ago,
03:15
AI couldn't do face recognition.
03:17
Now, AI can generate fake faces
03:20
and simulate your face
saying stuff that you never said.
03:23
Not long ago,
03:28
AI couldn't beat us at the game of Go.
03:30
Then, Google DeepMind's AlphaZero AI
took 3,000 years of human Go games
03:32
and Go wisdom,
03:37
ignored it all and became the world's best
player by just playing against itself.
03:38
And the most impressive feat here
wasn't that it crushed human gamers,
03:43
but that it crushed human AI researchers
03:47
who had spent decades
handcrafting game-playing software.
03:50
And AlphaZero crushed human AI researchers
not just in Go but even at chess,
03:54
which we have been working on since 1950.
03:58
So all this amazing recent progress in AI
really begs the question:
04:02
How far will it go?
04:07
I like to think about this question
04:09
in terms of this abstract
landscape of tasks,
04:11
where the elevation represents
how hard it is for AI to do each task
04:14
at human level,
04:18
and the sea level represents
what AI can do today.
04:19
The sea level is rising
as AI improves,
04:23
so there's a kind of global warming
going on here in the task landscape.
04:25
And the obvious takeaway
is to avoid careers at the waterfront --
04:30
(Laughter)
04:33
which will soon be
automated and disrupted.
04:34
But there's a much
bigger question as well.
04:37
How high will the water end up rising?
04:40
Will it eventually rise
to flood everything,
04:43
matching human intelligence at all tasks?
04:47
This is the definition
of artificial general intelligence --
04:50
AGI,
04:54
which has been the holy grail
of AI research since its inception.
04:55
By this definition, people who say,
04:59
"Ah, there will always be jobs
that humans can do better than machines,"
05:00
are simply saying
that we'll never get AGI.
05:04
Sure, we might still choose
to have some human jobs
05:07
or to give humans income
and purpose with our jobs,
05:11
but AGI will in any case
transform life as we know it
05:14
with humans no longer being
the most intelligent.
05:18
Now, if the water level does reach AGI,
05:20
then further AI progress will be driven
mainly not by humans but by AI,
05:24
which means that there's a possibility
05:29
that further AI progress
could be way faster
05:31
than the typical human research
and development timescale of years,
05:34
raising the controversial possibility
of an intelligence explosion
05:37
where recursively self-improving AI
05:41
rapidly leaves human
intelligence far behind,
05:43
creating what's known
as superintelligence.
05:47
Alright, reality check:
05:51
Are we going to get AGI any time soon?
05:55
Some famous AI researchers,
like Rodney Brooks,
05:58
think it won't happen
for hundreds of years.
06:01
But others, like Google DeepMind
founder Demis Hassabis,
06:03
are more optimistic
06:07
and are working to try to make
it happen much sooner.
06:08
And recent surveys have shown
that most AI researchers
06:11
actually share Demis's optimism,
06:14
expecting that we will
get AGI within decades,
06:17
so within the lifetime of many of us,
06:21
which begs the question -- and then what?
06:23
What do we want the role of humans to be
06:27
if machines can do everything better
and cheaper than us?
06:29
The way I see it, we face a choice.
06:35
One option is to be complacent.
06:38
We can say, "Oh, let's just build machines
that can do everything we can do
06:39
and not worry about the consequences.
06:43
Come on, if we build technology
that makes all humans obsolete,
06:45
what could possibly go wrong?"
06:48
(Laughter)
06:50
But I think that would be
embarrassingly lame.
06:52
I think we should be more ambitious --
in the spirit of TED.
06:56
Let's envision a truly inspiring
high-tech future
06:59
and try to steer towards it.
07:03
This brings us to the second part
of our rocket metaphor: the steering.
07:05
We're making AI more powerful,
07:09
but how can we steer towards a future
07:11
where AI helps humanity flourish
rather than flounder?
07:15
To help with this,
07:18
I cofounded the Future of Life Institute.
07:20
It's a small nonprofit promoting
beneficial technology use,
07:22
and our goal is simply
for the future of life to exist
07:24
and to be as inspiring as possible.
07:27
You know, I love technology.
07:29
Technology is why today
is better than the Stone Age.
07:32
And I'm optimistic that we can create
a really inspiring high-tech future ...
07:36
if -- and this is a big if --
07:41
if we win the wisdom race --
07:43
the race between the growing
power of our technology
07:45
and the growing wisdom
with which we manage it.
07:48
But this is going to require
a change of strategy
07:51
because our old strategy
has been learning from mistakes.
07:53
We invented fire,
07:57
screwed up a bunch of times --
07:58
invented the fire extinguisher.
08:00
(Laughter)
08:02
We invented the car,
screwed up a bunch of times --
08:03
invented the traffic light,
the seat belt and the airbag,
08:06
but with more powerful technology
like nuclear weapons and AGI,
08:08
learning from mistakes
is a lousy strategy,
08:12
don't you think?
08:16
(Laughter)
08:17
It's much better to be proactive
rather than reactive;
08:18
plan ahead and get things
right the first time
08:20
because that might be
the only time we'll get.
08:23
But it is funny because
sometimes people tell me,
08:25
"Max, shhh, don't talk like that.
08:28
That's Luddite scaremongering."
08:30
But it's not scaremongering.
08:34
It's what we at MIT
call safety engineering.
08:35
Think about it:
08:39
before NASA launched
the Apollo 11 mission,
08:40
they systematically thought through
everything that could go wrong
08:42
when you put people
on top of explosive fuel tanks
08:45
and launch them somewhere
where no one could help them.
08:48
And there was a lot that could go wrong.
08:50
Was that scaremongering?
08:52
No.
08:55
That was precisely
the safety engineering
08:56
that ensured the success of the mission,
08:58
and that is precisely the strategy
I think we should take with AGI.
09:00
Think through what can go wrong
to make sure it goes right.
09:04
So in this spirit,
we've organized conferences,
09:08
bringing together leading
AI researchers and other thinkers
09:11
to discuss how to grow this wisdom
we need to keep AI beneficial.
09:14
Our last conference
was in Asilomar, California, last year
09:17
and produced this list of 23 principles
09:21
which have since been signed
by over 1,000 AI researchers
09:24
and key industry leaders,
09:27
and I want to tell you
about three of these principles.
09:28
One is that we should avoid an arms race
and lethal autonomous weapons.
09:31
The idea here is that any science
can be used for new ways of helping people
09:37
or new ways of harming people.
09:41
For example, biology and chemistry
are much more likely to be used
09:42
for new medicines or new cures
than for new ways of killing people,
09:46
because biologists
and chemists pushed hard --
09:51
and successfully --
09:53
for bans on biological
and chemical weapons.
09:55
And in the same spirit,
09:57
most AI researchers want to stigmatize
and ban lethal autonomous weapons.
09:58
Another Asilomar AI principle
10:03
is that we should mitigate
AI-fueled income inequality.
10:05
I think that if we can grow
the economic pie dramatically with AI
10:09
and we still can't figure out
how to divide this pie
10:13
so that everyone is better off,
10:16
then shame on us.
10:17
(Applause)
10:19
Alright, now raise your hand
if your computer has ever crashed.
10:23
(Laughter)
10:27
Wow, that's a lot of hands.
10:28
Well, then you'll appreciate
this principle
10:30
that we should invest much more
in AI safety research,
10:32
because as we put AI in charge
of even more decisions and infrastructure,
10:35
we need to figure out how to transform
today's buggy and hackable computers
10:39
into robust AI systems
that we can really trust,
10:43
because otherwise,
10:45
all this awesome new technology
can malfunction and harm us,
10:46
or get hacked and be turned against us.
10:49
And this AI safety work
has to include work on AI value alignment,
10:51
because the real threat
from AGI isn't malice,
10:57
like in silly Hollywood movies,
11:00
but competence --
11:01
AGI accomplishing goals
that just aren't aligned with ours.
11:03
For example, when we humans drove
the West African black rhino extinct,
11:07
we didn't do it because we were a bunch
of evil rhinoceros haters, did we?
11:11
We did it because
we were smarter than them
11:15
and our goals weren't aligned with theirs.
11:17
But AGI is by definition smarter than us,
11:20
so to make sure that we don't put
ourselves in the position of those rhinos
11:23
if we create AGI,
11:26
we need to figure out how
to make machines understand our goals,
11:28
adopt our goals and retain our goals.
11:32
And whose goals should these be, anyway?
11:37
Which goals should they be?
11:40
This brings us to the third part
of our rocket metaphor: the destination.
11:42
We're making AI more powerful,
11:47
trying to figure out how to steer it,
11:49
but where do we want to go with it?
11:50
This is the elephant in the room
that almost nobody talks about --
11:53
not even here at TED --
11:57
because we're so fixated
on short-term AI challenges.
11:59
Look, our species is trying to build AGI,
12:04
motivated by curiosity and economics,
12:08
but what sort of future society
are we hoping for if we succeed?
12:12
We did an opinion poll on this recently,
12:16
and I was struck to see
12:18
that most people actually
want us to build superintelligence:
12:19
AI that's vastly smarter
than us in all ways.
12:22
What there was the greatest agreement on
was that we should be ambitious
12:27
and help life spread into the cosmos,
12:30
but there was much less agreement
about who or what should be in charge.
12:32
And I was actually quite amused
12:37
to see that there are some people
who want it to be just machines.
12:38
(Laughter)
12:42
And there was total disagreement
about what the role of humans should be,
12:44
even at the most basic level,
12:47
so let's take a closer look
at possible futures
12:49
that we might choose
to steer toward, alright?
12:52
So don't get me wrong here.
12:55
I'm not talking about space travel,
12:56
merely about humanity's
metaphorical journey into the future.
12:59
So one option that some
of my AI colleagues like
13:02
is to build superintelligence
and keep it under human control,
13:06
like an enslaved god,
13:10
disconnected from the internet
13:11
and used to create unimaginable
technology and wealth
13:13
for whoever controls it.
13:16
But Lord Acton warned us
13:18
that power corrupts,
and absolute power corrupts absolutely,
13:20
so you might worry that maybe
we humans just aren't smart enough,
13:23
or wise enough rather,
13:28
to handle this much power.
13:29
Also, aside from any
moral qualms you might have
13:31
about enslaving superior minds,
13:34
you might worry that maybe
the superintelligence could outsmart us,
13:36
break out and take over.
13:40
But I also have colleagues
who are fine with AI taking over
13:43
and even causing human extinction,
13:47
as long as we feel the AIs
are our worthy descendants,
13:49
like our children.
13:52
But how would we know that the AIs
have adopted our best values
13:54
and aren't just unconscious zombies
tricking us into anthropomorphizing them?
14:00
Also, shouldn't those people
who don't want human extinction
14:04
have a say in the matter, too?
14:07
Now, if you didn't like either
of those two high-tech options,
14:10
it's important to remember
that low-tech is suicide
14:13
from a cosmic perspective,
14:16
because if we don't go far
beyond today's technology,
14:18
the question isn't whether humanity
is going to go extinct,
14:20
merely whether
we're going to get taken out
14:23
by the next killer asteroid, supervolcano
14:25
or some other problem
that better technology could have solved.
14:27
So, how about having
our cake and eating it ...
14:30
with AGI that's not enslaved
14:34
but treats us well because its values
are aligned with ours?
14:37
This is the gist of what Eliezer Yudkowsky
has called "friendly AI,"
14:40
and if we can do this,
it could be awesome.
14:44
It could not only eliminate negative
experiences like disease, poverty,
14:47
crime and other suffering,
14:52
but it could also give us
the freedom to choose
14:54
from a fantastic new diversity
of positive experiences --
14:57
basically making us
the masters of our own destiny.
15:01
So in summary,
15:06
our situation with technology
is complicated,
15:07
but the big picture is rather simple.
15:10
Most AI researchers
expect AGI within decades,
15:13
and if we just bumble
into this unprepared,
15:16
it will probably be
the biggest mistake in human history --
15:19
let's face it.
15:23
It could enable brutal,
global dictatorship
15:24
with unprecedented inequality,
surveillance and suffering,
15:27
and maybe even human extinction.
15:30
But if we steer carefully,
15:32
we could end up in a fantastic future
where everybody's better off:
15:36
the poor are richer, the rich are richer,
15:39
everybody is healthy
and free to live out their dreams.
15:42
Now, hang on.
15:47
Do you folks want the future
that's politically right or left?
15:48
Do you want the pious society
with strict moral rules,
15:53
or do you want a hedonistic free-for-all,
15:56
more like Burning Man 24/7?
15:57
Do you want beautiful beaches,
forests and lakes,
16:00
or would you prefer to rearrange
some of those atoms with the computers,
16:02
enabling virtual experiences?
16:06
With friendly AI, we could simply
build all of these societies
16:07
and give people the freedom
to choose which one they want to live in
16:10
because we would no longer
be limited by our intelligence,
16:14
merely by the laws of physics.
16:17
So the resources and space
for this would be astronomical --
16:18
literally.
16:23
So here's our choice.
16:25
We can either be complacent
about our future,
16:27
taking as an article of blind faith
16:31
that any new technology
is guaranteed to be beneficial,
16:34
and just repeat that to ourselves
as a mantra over and over and over again
16:38
as we drift like a rudderless ship
towards our own obsolescence.
16:42
Or we can be ambitious --
16:46
thinking hard about how
to steer our technology
16:49
and where we want to go with it
16:52
to create the age of amazement.
16:54
We're all here to celebrate
the age of amazement,
16:57
and I feel that its essence should lie
in becoming not overpowered
16:59
but empowered by our technology.
17:05
Thank you.
17:07
(Applause)
17:09


About the speaker:

Max Tegmark - Scientist, author

Why you should listen

Max Tegmark is an MIT professor who loves thinking about life's big questions. He's written two popular books, Our Mathematical Universe: My Quest for the Ultimate Nature of Reality and the recently published Life 3.0: Being Human in the Age of Artificial Intelligence, as well as more than 200 nerdy technical papers on topics from cosmology to AI.

He writes: "In my spare time, I'm president of the Future of Life Institute, which aims to ensure that we develop not only technology but also the wisdom required to use it beneficially."
