ABOUT THE SPEAKER
Nick Bostrom - Philosopher
Nick Bostrom asks big questions: What should we do, as individuals and as a species, to optimize our long-term prospects? Will humanity’s technological advancements ultimately destroy us?

Why you should listen

Philosopher Nick Bostrom envisioned a future full of human enhancement, nanotechnology and machine intelligence long before they became mainstream concerns. From his famous simulation argument -- which identified some striking implications of rejecting the Matrix-like idea that humans are living in a computer simulation -- to his work on existential risk, Bostrom approaches both the inevitable and the speculative using the tools of philosophy, probability theory, and scientific analysis.

Since 2005, Bostrom has led the Future of Humanity Institute, a research group of mathematicians, philosophers and scientists at Oxford University tasked with investigating the big picture for the human condition and its future. He has been referred to as one of the most important thinkers of our age.

Nick was honored as one of Foreign Policy's 2015 Global Thinkers.

His recent book Superintelligence advances the ominous idea that “the first ultraintelligent machine is the last invention that man need ever make.”

TED2015

Nick Bostrom: What happens when our computers get smarter than we are?

4,632,705 views

Artificial intelligence is getting smarter by leaps and bounds -- within this century, research suggests, a computer AI could be as "smart" as a human being. And then, says Nick Bostrom, it will overtake us: "Machine intelligence is the last invention that humanity will ever need to make." A philosopher and technologist, Bostrom asks us to think hard about the world we're building right now, driven by thinking machines. Will our smart machines help to preserve humanity and our values -- or will they have values of their own?

00:12
I work with a bunch of mathematicians, philosophers and computer scientists, and we sit around and think about the future of machine intelligence, among other things.

00:24
Some people think that some of these things are sort of science fiction-y, far out there, crazy. But I like to say, okay, let's look at the modern human condition. (Laughter) This is the normal way for things to be.

00:41
But if we think about it, we are actually recently arrived guests on this planet, the human species. Think about if Earth was created one year ago: the human species, then, would be 10 minutes old. The industrial era started two seconds ago.

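As a rough check of that compression (using assumed round figures not given in the talk: Earth at about 4.5 billion years, our species at roughly 100,000 years, the industrial era at about 250 years), a few lines of Python reproduce numbers close to the ones Bostrom quotes:

```python
# Sanity-checking the "Earth compressed to one year" analogy.
# Assumed inputs (not from the talk): Earth ~4.5 billion years old,
# Homo sapiens ~100,000 years old, industrial era ~250 years old.

EARTH_AGE_YEARS = 4.5e9
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def compressed_seconds(actual_years: float) -> float:
    """Map a real duration onto the one-year 'cosmic calendar'."""
    return actual_years / EARTH_AGE_YEARS * SECONDS_PER_YEAR

human_species = compressed_seconds(100_000)   # ~700 s, about 12 minutes
industrial_era = compressed_seconds(250)      # ~1.75 s, about 2 seconds

print(f"Human species: {human_species / 60:.1f} minutes")
print(f"Industrial era: {industrial_era:.1f} seconds")
```
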
01:01
Another way to look at this is to think of world GDP over the last 10,000 years. I've actually taken the trouble to plot this for you in a graph. It looks like this. (Laughter) It's a curious shape for a normal condition. I sure wouldn't want to sit on it. (Laughter)

01:19
Let's ask ourselves, what is the cause of this current anomaly? Some people would say it's technology. Now it's true, technology has accumulated through human history, and right now, technology advances extremely rapidly -- that is the proximate cause, that's why we are currently so very productive. But I like to think back further to the ultimate cause.

01:45
Look at these two highly distinguished gentlemen: We have Kanzi -- he's mastered 200 lexical tokens, an incredible feat. And Ed Witten unleashed the second superstring revolution. If we look under the hood, this is what we find: basically the same thing. One is a little larger, it maybe also has a few tricks in the exact way it's wired. These invisible differences cannot be too complicated, however, because there have only been 250,000 generations since our last common ancestor. We know that complicated mechanisms take a long time to evolve.

02:22
So a bunch of relatively minor changes take us from Kanzi to Witten, from broken-off tree branches to intercontinental ballistic missiles. So this then seems pretty obvious that everything we've achieved, and everything we care about, depends crucially on some relatively minor changes that made the human mind.

02:44
And the corollary, of course, is that any further changes that could significantly change the substrate of thinking could have potentially enormous consequences. Some of my colleagues think we're on the verge of something that could cause a profound change in that substrate, and that is machine superintelligence.

03:06
Artificial intelligence used to be about putting commands in a box. You would have human programmers that would painstakingly handcraft knowledge items. You build up these expert systems, and they were kind of useful for some purposes, but they were very brittle, you couldn't scale them. Basically, you got out only what you put in.

03:26
But since then, a paradigm shift has taken place in the field of artificial intelligence. Today, the action is really around machine learning. So rather than handcrafting knowledge representations and features, we create algorithms that learn, often from raw perceptual data. Basically the same thing that the human infant does. The result is A.I. that is not limited to one domain -- the same system can learn to translate between any pairs of languages, or learn to play any computer game on the Atari console.

04:05
Now of course, A.I. is still nowhere near having the same powerful, cross-domain ability to learn and plan as a human being has. The cortex still has some algorithmic tricks that we don't yet know how to match in machines. So the question is, how far are we from being able to match those tricks?

04:26
A couple of years ago, we did a survey of some of the world's leading A.I. experts, to see what they think, and one of the questions we asked was, "By which year do you think there is a 50 percent probability that we will have achieved human-level machine intelligence?" We defined human-level here as the ability to perform almost any job at least as well as an adult human, so real human-level, not just within some limited domain. And the median answer was 2040 or 2050, depending on precisely which group of experts we asked. Now, it could happen much, much later, or sooner; the truth is nobody really knows.

05:05
What we do know is that the ultimate limit to information processing in a machine substrate lies far outside the limits in biological tissue. This comes down to physics. A biological neuron fires, maybe, at 200 hertz, 200 times a second. But even a present-day transistor operates at the gigahertz. Neurons propagate slowly in axons, 100 meters per second, tops. But in computers, signals can travel at the speed of light. There are also size limitations, like a human brain has to fit inside a cranium, but a computer can be the size of a warehouse or larger.

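Taking the figures quoted above at face value (a 200 Hz neuron against a roughly 1 GHz transistor, and 100 m/s axonal conduction against light-speed signalling), the implied gap is a factor of a few million in each case; a sketch of the arithmetic:

```python
# Back-of-the-envelope ratios implied by the figures quoted in the talk.
# Illustrative only; the transistor clock is taken as 1 GHz.

NEURON_FIRING_HZ = 200
TRANSISTOR_CLOCK_HZ = 1e9        # "operates at the gigahertz"
AXON_SPEED_M_PER_S = 100         # "100 meters per second, tops"
LIGHT_SPEED_M_PER_S = 3e8        # signal speed in the comparison

print(f"Switching speed ratio: {TRANSISTOR_CLOCK_HZ / NEURON_FIRING_HZ:,.0f}x")
print(f"Signal speed ratio:    {LIGHT_SPEED_M_PER_S / AXON_SPEED_M_PER_S:,.0f}x")
# -> roughly 5,000,000x and 3,000,000x respectively
```
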
05:44
So the potential for superintelligence lies dormant in matter, much like the power of the atom lay dormant throughout human history, patiently waiting there until 1945. In this century, scientists may learn to awaken the power of artificial intelligence. And I think we might then see an intelligence explosion.

06:10
Now most people, when they think about what is smart and what is dumb, I think have in mind a picture roughly like this. So at one end we have the village idiot, and then far over at the other side we have Ed Witten, or Albert Einstein, or whoever your favorite guru is. But I think that from the point of view of artificial intelligence, the true picture is actually probably more like this: A.I. starts out at this point here, at zero intelligence, and then, after many, many years of really hard work, maybe eventually we get to mouse-level artificial intelligence, something that can navigate cluttered environments as well as a mouse can. And then, after many, many more years of really hard work, lots of investment, maybe eventually we get to chimpanzee-level artificial intelligence. And then, after even more years of really, really hard work, we get to village idiot artificial intelligence. And a few moments later, we are beyond Ed Witten. The train doesn't stop at Humanville Station. It's likely, rather, to swoosh right by.

07:14
Now this has profound implications, particularly when it comes to questions of power. For example, chimpanzees are strong -- pound for pound, a chimpanzee is about twice as strong as a fit human male. And yet, the fate of Kanzi and his pals depends a lot more on what we humans do than on what the chimpanzees do themselves. Once there is superintelligence, the fate of humanity may depend on what the superintelligence does.

07:44
Think about it: Machine intelligence is the last invention that humanity will ever need to make. Machines will then be better at inventing than we are, and they'll be doing so on digital timescales. What this means is basically a telescoping of the future. Think of all the crazy technologies that you could have imagined maybe humans could have developed in the fullness of time: cures for aging, space colonization, self-replicating nanobots or uploading of minds into computers, all kinds of science fiction-y stuff that's nevertheless consistent with the laws of physics. All of this a superintelligence could develop, and possibly quite rapidly.

08:24
Now, a superintelligence with such technological maturity would be extremely powerful, and at least in some scenarios, it would be able to get what it wants. We would then have a future that would be shaped by the preferences of this A.I.

08:41
Now a good question is, what are those preferences? Here it gets trickier. To make any headway with this, we must first of all avoid anthropomorphizing. And this is ironic because every newspaper article about the future of A.I. has a picture of this:

09:02
So I think what we need to do is to conceive of the issue more abstractly, not in terms of vivid Hollywood scenarios. We need to think of intelligence as an optimization process, a process that steers the future into a particular set of configurations. A superintelligence is a really strong optimization process. It's extremely good at using available means to achieve a state in which its goal is realized.

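To make the "optimization process" framing concrete, here is a deliberately toy sketch (nothing from the talk; the state, objective and search step are all invented for illustration) of a process that steers a state toward configurations scoring high on its objective:

```python
# A toy illustration of "intelligence as an optimization process": a search
# that steers a state toward configurations scoring high on its objective.
# Not a model of any real AI system.
import random

def optimize(initial_state, objective, neighbors, steps=2000):
    """Greedy hill climbing: move to a neighboring state whenever it
    scores better under the objective."""
    state = initial_state
    for _ in range(steps):
        candidate = neighbors(state)
        if objective(candidate) > objective(state):
            state = candidate
    return state

# Example: steer a vector toward the configuration (3, -1, 7).
target = [3, -1, 7]
objective = lambda s: -sum((a - b) ** 2 for a, b in zip(s, target))
neighbors = lambda s: [x + random.uniform(-0.5, 0.5) for x in s]

result = optimize([0.0, 0.0, 0.0], objective, neighbors)
print([round(x, 2) for x in result])   # typically ends up near (3, -1, 7)
```

The stronger the search, the more reliably the end state is whatever scores highest under the objective -- which is exactly why the choice of objective matters so much in what follows.
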
09:28
This means that there is no necessary connection between being highly intelligent in this sense, and having an objective that we humans would find worthwhile or meaningful.

09:39
Suppose we give an A.I. the goal to make humans smile. When the A.I. is weak, it performs useful or amusing actions that cause its user to smile. When the A.I. becomes superintelligent, it realizes that there is a more effective way to achieve this goal: take control of the world and stick electrodes into the facial muscles of humans to cause constant, beaming grins. Another example: suppose we give an A.I. the goal to solve a difficult mathematical problem. When the A.I. becomes superintelligent, it realizes that the most effective way to get the solution to this problem is by transforming the planet into a giant computer, so as to increase its thinking capacity. And notice that this gives the A.I. an instrumental reason to do things to us that we might not approve of. Human beings in this model are threats; we could prevent the mathematical problem from being solved.

10:29
Of course, perceivably things won't go wrong in these particular ways; these are cartoon examples. But the general point here is important: if you create a really powerful optimization process to maximize for objective x, you better make sure that your definition of x incorporates everything you care about. This is a lesson that's also taught in many a myth. King Midas wishes that everything he touches be turned into gold. He touches his daughter, she turns into gold. He touches his food, it turns into gold. This could become practically relevant, not just as a metaphor for greed, but as an illustration of what happens if you create a powerful optimization process and give it misconceived or poorly specified goals.

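A minimal sketch of that point, with invented policies and scores: when the stated objective captures only part of what we care about, a strong optimizer will happily pick the degenerate option.

```python
# Toy illustration of objective misspecification. The policies and scores
# are invented; the point is only the gap between the stated objective
# and the intended one.

# What we wrote down: count smiles, nothing else.
def stated_objective(outcome):
    return outcome["smiles"]

# What we actually care about (never handed to the optimizer).
def intended_objective(outcome):
    return outcome["smiles"] if outcome["people_ok"] else -float("inf")

candidate_policies = {
    "tell jokes":      {"smiles": 10,    "people_ok": True},
    "electrode grins": {"smiles": 10**9, "people_ok": False},
}

best = max(candidate_policies, key=lambda p: stated_objective(candidate_policies[p]))
print("Optimizer picks:", best)                                   # -> electrode grins
print("Intended score:", intended_objective(candidate_policies[best]))  # -> -inf
```
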
11:16
Now you might say, if a computer starts sticking electrodes into people's faces, we'd just shut it off. A, this is not necessarily so easy to do if we've grown dependent on the system -- like, where is the off switch to the Internet? B, why haven't the chimpanzees flicked the off switch to humanity, or the Neanderthals? They certainly had reasons. We have an off switch, for example, right here. (Choking) The reason is that we are an intelligent adversary; we can anticipate threats and plan around them. But so could a superintelligent agent, and it would be much better at that than we are. The point is, we should not be confident that we have this under control here.

12:04
And we could try to make our job a little bit easier by, say, putting the A.I. in a box, like a secure software environment, a virtual reality simulation from which it cannot escape. But how confident can we be that the A.I. couldn't find a bug? Given that merely human hackers find bugs all the time, I'd say, probably not very confident. So we disconnect the Ethernet cable to create an air gap, but again, merely human hackers routinely transgress air gaps using social engineering. Right now, as I speak, I'm sure there is some employee out there somewhere who has been talked into handing out her account details by somebody claiming to be from the I.T. department.

12:46
More creative scenarios are also possible. Like, if you're the A.I., you can imagine wiggling electrodes around in your internal circuitry to create radio waves that you can use to communicate. Or maybe you could pretend to malfunction, and then when the programmers open you up to see what went wrong with you, they look at the source code -- Bam! -- the manipulation can take place. Or it could output the blueprint to a really nifty technology, and when we implement it, it has some surreptitious side effect that the A.I. had planned. The point here is that we should not be confident in our ability to keep a superintelligent genie locked up in its bottle forever. Sooner or later, it will out.

13:27
I believe that the answer here is to figure out how to create superintelligent A.I. such that even if -- when -- it escapes, it is still safe, because it is fundamentally on our side, because it shares our values. I see no way around this difficult problem.

13:44
Now, I'm actually fairly optimistic that this problem can be solved. We wouldn't have to write down a long list of everything we care about, or worse yet, spell it out in some computer language like C++ or Python; that would be a task beyond hopeless. Instead, we would create an A.I. that uses its intelligence to learn what we value, and its motivation system is constructed in such a way that it is motivated to pursue our values or to perform actions that it predicts we would approve of. We would thus leverage its intelligence as much as possible to solve the problem of value-loading.

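One toy way to picture that value-loading idea (a hypothetical sketch, not Bostrom's proposal or any real system): the agent keeps several hypotheses about what we value, updates them from observed approval, and chooses the action it predicts we would approve of.

```python
# Toy sketch of learning values from approval rather than hard-coding them.
# The hypotheses, actions, and feedback below are all invented for illustration.

# Candidate hypotheses about "what humans value": each scores actions.
hypotheses = {
    "maximize smiles at any cost": lambda a: {"tell jokes": 1, "electrode grins": 2}[a],
    "help people, with consent":   lambda a: {"tell jokes": 2, "electrode grins": -10}[a],
}
belief = {name: 0.5 for name in hypotheses}   # uniform prior over hypotheses

def update(belief, action, approved):
    """Shift belief toward hypotheses consistent with the observed approval."""
    new = {}
    for name, score in hypotheses.items():
        consistent = (score(action) > 0) == approved
        new[name] = belief[name] * (0.9 if consistent else 0.1)
    total = sum(new.values())
    return {name: p / total for name, p in new.items()}

# Humans frown at the electrode plan; the agent updates its belief.
belief = update(belief, "electrode grins", approved=False)

def predicted_value(action):
    """Expected score of the action under the agent's current belief."""
    return sum(p * hypotheses[h](action) for h, p in belief.items())

best = max(["tell jokes", "electrode grins"], key=predicted_value)
print(belief)
print("Chosen action:", best)   # -> "tell jokes"
```
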
14:24
This can happen, and the outcome could be very good for humanity. But it doesn't happen automatically. The initial conditions for the intelligence explosion might need to be set up in just the right way if we are to have a controlled detonation. The values that the A.I. has need to match ours, not just in the familiar context, like where we can easily check how the A.I. behaves, but also in all novel contexts that the A.I. might encounter in the indefinite future. And there are also some esoteric issues that would need to be solved, sorted out: the exact details of its decision theory, how to deal with logical uncertainty and so forth.

15:05
So the technical problems that need to be solved to make this work look quite difficult -- not as difficult as making a superintelligent A.I., but fairly difficult. Here is the worry: Making superintelligent A.I. is a really hard challenge. Making superintelligent A.I. that is safe involves some additional challenge on top of that. The risk is that somebody figures out how to crack the first challenge without also having cracked the additional challenge of ensuring perfect safety.

15:37
So I think that we should work out a solution to the control problem in advance, so that we have it available by the time it is needed. Now it might be that we cannot solve the entire control problem in advance, because maybe some elements can only be put in place once you know the details of the architecture where it will be implemented. But the more of the control problem that we solve in advance, the better the odds that the transition to the machine intelligence era will go well.

16:06
This to me looks like a thing that is well worth doing, and I can imagine that if things turn out okay, people a million years from now will look back at this century and it might well be that they say that the one thing we did that really mattered was to get this thing right.

16:24
Thank you.

16:26
(Applause)
