ABOUT THE SPEAKERS
Nick Bostrom - Philosopher
Nick Bostrom asks big questions: What should we do, as individuals and as a species, to optimize our long-term prospects? Will humanity’s technological advancements ultimately destroy us?

Why you should listen

Philosopher Nick Bostrom envisioned a future full of human enhancement, nanotechnology and machine intelligence long before they became mainstream concerns. From his famous simulation argument -- which identified some striking implications of rejecting the Matrix-like idea that humans are living in a computer simulation -- to his work on existential risk, Bostrom approaches both the inevitable and the speculative using the tools of philosophy, probability theory, and scientific analysis.

Since 2005, Bostrom has led the Future of Humanity Institute, a research group of mathematicians, philosophers and scientists at Oxford University tasked with investigating the big picture for the human condition and its future. He has been referred to as one of the most important thinkers of our age.

Nick was honored as one of Foreign Policy's 2015 Global Thinkers.

His recent book Superintelligence advances the ominous idea that “the first ultraintelligent machine is the last invention that man need ever make.”

Chris Anderson - TED Curator
After a long career in journalism and publishing, Chris Anderson became the curator of the TED Conference in 2002 and has developed it as a platform for identifying and disseminating ideas worth spreading.

Why you should listen

Chris Anderson is the Curator of TED, a nonprofit devoted to sharing valuable ideas, primarily through the medium of 'TED Talks' -- short talks that are offered free online to a global audience.

Chris was born in a remote village in Pakistan in 1957. He spent his early years in India, Pakistan and Afghanistan, where his parents worked as medical missionaries, and he attended an American school in the Himalayas for his early education. After boarding school in Bath, England, he went on to Oxford University, graduating in 1978 with a degree in philosophy, politics and economics.

Chris then trained as a journalist, working in newspapers and radio, including two years producing a world news service in the Seychelles Islands.

Back in the UK in 1984, Chris was captivated by the personal computer revolution and became an editor at one of the UK's early computer magazines. A year later he founded Future Publishing with a $25,000 bank loan. The new company initially focused on specialist computer publications but eventually expanded into other areas such as cycling, music, video games, technology and design, doubling in size every year for seven years. In 1994, Chris moved to the United States where he built Imagine Media, publisher of Business 2.0 magazine and creator of the popular video game users website IGN. Chris eventually merged Imagine and Future, taking the combined entity public in London in 1999, under the Future name. At its peak, it published 150 magazines and websites and employed 2,000 people.

This success allowed Chris to create a private nonprofit organization, the Sapling Foundation, with the hope of finding new ways to tackle tough global issues through media, technology, entrepreneurship and, most of all, ideas. In 2001, the foundation acquired the TED Conference, then an annual meeting of luminaries in the fields of Technology, Entertainment and Design held in Monterey, California, and Chris left Future to work full time on TED.

He expanded the conference's remit to cover all topics, including science, business and key global issues, while adding a Fellows program, which now has some 300 alumni, and the TED Prize, which grants its recipients "one wish to change the world." The TED stage has become a place for thinkers and doers from all fields to share their ideas and their work, capturing imaginations, sparking conversation and encouraging discovery along the way.

In 2006, TED experimented with posting some of its talks on the Internet. Their viral success encouraged Chris to begin positioning the organization as a global media initiative devoted to 'ideas worth spreading,' part of a new era of information dissemination using the power of online video. In June 2015, the organization posted its 2,000th talk online. The talks are free to view, and they have been translated into more than 100 languages with the help of volunteers from around the world. Viewership has grown to approximately one billion views per year.

Continuing a strategy of 'radical openness,' in 2009 Chris introduced the TEDx initiative, allowing free licenses to local organizers who wished to organize their own TED-like events. More than 8,000 such events have been held, generating an archive of 60,000 TEDx talks. And three years later, the TED-Ed program was launched, offering free educational videos and tools to students and teachers.

TED2019

Nick Bostrom: How civilization could destroy itself -- and 4 ways we could prevent it

1,537,627 views

Humanity is on its way to creating a "black ball": a technological breakthrough that could destroy us all, says philosopher Nick Bostrom. In this incisive, surprisingly light-hearted conversation with Head of TED Chris Anderson, Bostrom outlines the vulnerabilities we could face if (or when) our inventions spiral beyond our control -- and explores how we can prevent our future demise.


00:13
Chris Anderson: Nick Bostrom. So, you have already given us so many crazy ideas out there. I think a couple of decades ago, you made the case that we might all be living in a simulation, or perhaps probably were. More recently, you've painted the most vivid examples of how artificial general intelligence could go horribly wrong. And now this year, you're about to publish a paper that presents something called the vulnerable world hypothesis. And our job this evening is to give the illustrated guide to that. So let's do that. What is that hypothesis?
00:52
Nick Bostrom: It's trying to think about a sort of structural feature of the current human condition. You like the urn metaphor, so I'm going to use that to explain it. So picture a big urn filled with balls representing ideas, methods, possible technologies. You can think of the history of human creativity as the process of reaching into this urn and pulling out one ball after another, and the net effect so far has been hugely beneficial, right? We've extracted a great many white balls, some various shades of gray, mixed blessings. We haven't so far pulled out the black ball -- a technology that invariably destroys the civilization that discovers it. So the paper tries to think about what could such a black ball be.

01:43
CA: So you define that ball as one that would inevitably bring about civilizational destruction.

01:48
NB: Unless we exit what I call the semi-anarchic default condition. But sort of, by default.
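Bostrom's urn metaphor can be turned into a toy calculation. The probability figure below (`p_black = 0.001`) is a purely illustrative assumption -- nothing in the conversation assigns a number to the risk:

```python
import random

def draw_history(num_draws, p_black=0.001, seed=None):
    """Simulate reaching into Bostrom's urn of possible technologies.

    Each draw is a new invention; with probability p_black it is the
    black ball. Once drawn, it cannot be put back: we can invent,
    but we can't un-invent.
    """
    rng = random.Random(seed)
    for draw in range(1, num_draws + 1):
        if rng.random() < p_black:
            return draw  # index of the civilization-ending invention
    return None  # only white and gray balls so far

# Even a tiny per-invention risk compounds: with p_black = 0.001,
# the chance of surviving 1,000 draws is 0.999**1000, about 37%.
print(round(0.999 ** 1000, 2))  # 0.37
```

The point of the sketch is only that a fixed, irreversible per-draw risk makes eventual extraction a near-certainty as the number of inventions grows.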
01:56
CA: So, you make the case compelling by showing some sort of counterexamples where you believe that so far we've actually got lucky, that we might have pulled out that death ball without even knowing it. So there's this quote, what's this quote?

02:12
NB: Well, I guess it's just meant to illustrate the difficulty of foreseeing what basic discoveries will lead to. We just don't have that capability. Because we have become quite good at pulling out balls, but we don't really have the ability to put the ball back into the urn, right. We can invent, but we can't un-invent. So our strategy, such as it is, is to hope that there is no black ball in the urn.

02:38
CA: So once it's out, it's out, and you can't put it back in, and you think we've been lucky. So talk through a couple of these examples. You talk about different types of vulnerability.
02:49
NB: So the easiest type to understand is a technology that just makes it very easy to cause massive amounts of destruction. Synthetic biology might be a fecund source of that kind of black ball, but many other possible things we could -- think of geoengineering, really great, right? We could combat global warming, but you don't want it to get too easy either, you don't want any random person and his grandmother to have the ability to radically alter the earth's climate. Or maybe lethal autonomous drones, mass-produced, mosquito-sized killer bot swarms. Nanotechnology, artificial general intelligence.

03:29
CA: You argue in the paper that it's a matter of luck that when we discovered that nuclear power could create a bomb, it might have been the case that you could have created a bomb with much easier resources, accessible to anyone.
03:43
NB: Yeah, so think back to the 1930s where for the first time we make some breakthroughs in nuclear physics, some genius figures out that it's possible to create a nuclear chain reaction and then realizes that this could lead to the bomb. And we do some more work, it turns out that what you require to make a nuclear bomb is highly enriched uranium or plutonium, which are very difficult materials to get. You need ultracentrifuges, you need reactors, like, massive amounts of energy. But suppose it had turned out instead there had been an easy way to unlock the energy of the atom. That maybe by baking sand in the microwave oven or something like that you could have created a nuclear detonation. So we know that that's physically impossible. But before you did the relevant physics, how could you have known how it would turn out?

04:32
CA: Although, couldn't you argue that for life to evolve on Earth that implied sort of stable environment, that if it was possible to create massive nuclear reactions relatively easily, the Earth would never have been stable, that we wouldn't be here at all.

04:47
NB: Yeah, unless there were something that is easy to do on purpose but that wouldn't happen by random chance. So, like things we can easily do, we can stack 10 blocks on top of one another, but in nature, you're not going to find, like, a stack of 10 blocks.
05:00
CA: OK, so this is probably the one that many of us worry about most, and yes, synthetic biology is perhaps the quickest route that we can foresee in our near future to get us here.

05:10
NB: Yeah, and so think about what that would have meant if, say, anybody by working in their kitchen for an afternoon could destroy a city. It's hard to see how modern civilization as we know it could have survived that. Because in any population of a million people, there will always be some who would, for whatever reason, choose to use that destructive power. So if that apocalyptic residual would choose to destroy a city, or worse, then cities would get destroyed.

05:38
CA: So here's another type of vulnerability. Talk about this.
05:42
NB: Yeah, so in addition to these kind of obvious types of black balls that would just make it possible to blow up a lot of things, other types would act by creating bad incentives for humans to do things that are harmful. So, the Type-2a, we might call it that, is to think about some technology that incentivizes great powers to use their massive amounts of force to create destruction. So, nuclear weapons were actually very close to this, right? What we did, we spent over 10 trillion dollars to build 70,000 nuclear warheads and put them on hair-trigger alert. And there were several times during the Cold War we almost blew each other up. It's not because a lot of people felt this would be a great idea, let's all spend 10 trillion dollars to blow ourselves up, but the incentives were such that we were finding ourselves -- this could have been worse. Imagine if there had been a safe first strike. Then it might have been very tricky, in a crisis situation, to refrain from launching all their nuclear missiles. If nothing else, because you would fear that the other side might do it.

06:48
CA: Right, mutual assured destruction kept the Cold War relatively stable, without that, we might not be here now.

06:54
NB: It could have been more unstable than it was. And there could be other properties of technology. It could have been harder to have arms treaties, if instead of nuclear weapons there had been some smaller thing or something less distinctive.

07:06
CA: And as well as bad incentives for powerful actors, you also worry about bad incentives for all of us, in Type-2b here.
07:12
NB: Yeah, so, here we might take the case of global warming. There are a lot of little conveniences that cause each one of us to do things that individually have no significant effect, right? But if billions of people do it, cumulatively, it has a damaging effect. Now, global warming could have been a lot worse than it is. So we have the climate sensitivity parameter, right. It's a parameter that says how much warmer does it get if you emit a certain amount of greenhouse gases. But, suppose that it had been the case that with the amount of greenhouse gases we emitted, instead of the temperature rising by, say, between 3 and 4.5 degrees by 2100, suppose it had been 15 degrees or 20 degrees. Like, then we might have been in a very bad situation. Or suppose that renewable energy had just been a lot harder to do. Or that there had been more fossil fuels in the ground.
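The climate sensitivity parameter Bostrom describes can be sketched numerically. The standard formulation scales equilibrium warming with the logarithm of the CO2 concentration ratio; the specific numbers below (3 degrees per doubling, 280 ppm pre-industrial baseline) are illustrative assumptions, not figures from the talk:

```python
import math

def warming(co2_ppm, sensitivity_per_doubling=3.0, baseline_ppm=280.0):
    """Equilibrium warming (deg C) for a given CO2 concentration,
    using the standard logarithmic forcing approximation.

    sensitivity_per_doubling is the climate sensitivity parameter:
    how much warmer it gets per doubling of atmospheric CO2.
    """
    return sensitivity_per_doubling * math.log2(co2_ppm / baseline_ppm)

# Doubling CO2 (280 -> 560 ppm) yields exactly the sensitivity value:
print(warming(560.0))  # 3.0
# Bostrom's counterfactual: had the parameter been, say, 15 degrees
# per doubling, the same emissions would have been catastrophic:
print(warming(560.0, sensitivity_per_doubling=15.0))  # 15.0
```

The sketch shows why the same emissions path leads to mild or catastrophic outcomes depending on a single physical parameter we did not choose.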
08:04
CA: Couldn't you argue that if in that case of -- if what we are doing today had resulted in 10 degrees difference in the time period that we could see, actually humanity would have got off its ass and done something about it. We're stupid, but we're not maybe that stupid. Or maybe we are.
08:21
NB: I wouldn't bet on it.

08:22
(Laughter)

08:25
You could imagine other features. So, right now, it's a little bit difficult to switch to renewables and stuff, right, but it can be done. But it might just have been, with slightly different physics, it could have been much more expensive to do these things.

08:40
CA: And what's your view, Nick? Do you think, putting these possibilities together, that this earth, humanity that we are, we count as a vulnerable world? That there is a death ball in our future?

08:55
NB: It's hard to say. I mean, I think there might well be various black balls in the urn, that's what it looks like. There might also be some golden balls that would help us protect against black balls. And I don't know which order they will come out.

09:12
CA: I mean, one possible philosophical critique of this idea is that it implies a view that the future is essentially settled. That there either is that ball there or it's not. And in a way, that's not a view of the future that I want to believe. I want to believe that the future is undetermined, that our decisions today will determine what kind of balls we pull out of that urn.
09:37
NB: I mean, if we just keep inventing, like, eventually we will pull out all the balls. I mean, I think there's a kind of weak form of technological determinism that is quite plausible, like, you're unlikely to encounter a society that uses flint axes and jet planes. But you can almost think of a technology as a set of affordances. So technology is the thing that enables us to do various things and achieve various effects in the world. How we'd then use that, of course depends on human choice. But if we think about these three types of vulnerability, they make quite weak assumptions about how we would choose to use them. So a Type-1 vulnerability, again, this massive, destructive power, it's a fairly weak assumption to think that in a population of millions of people there would be some that would choose to use it destructively.

10:24
CA: For me, the single most disturbing argument is that we actually might have some kind of view into the urn that makes it actually very likely that we're doomed. Namely, if you believe in accelerating power, that technology inherently accelerates, that we build the tools that make us more powerful, then at some point you get to a stage where a single individual can take us all down, and then it looks like we're screwed. Isn't that argument quite alarming?
10:56
NB: Ah, yeah.

10:58
(Laughter)

11:00
I think -- Yeah, we get more and more power, and it's easier and easier to use those powers, but we can also invent technologies that kind of help us control how people use those powers.

11:14
CA: So let's talk about that, let's talk about the response. Suppose that thinking about all the possibilities that are out there now -- it's not just synbio, it's things like cyberwarfare, artificial intelligence, etc., etc. -- that there might be serious doom in our future. What are the possible responses? And you've talked about four possible responses as well.

11:39
NB: Restricting technological development doesn't seem promising, if we are talking about a general halt to technological progress. I think it's neither feasible, nor would it be desirable even if we could do it. I think there might be very limited areas where maybe you would want slower technological progress. You don't, I think, want faster progress in bioweapons, or in, say, isotope separation, that would make it easier to create nukes.
12:04
CA: I mean, I used to be fully on board with that. But I would like to actually push back on that for a minute. Just because, first of all, if you look at the history of the last couple of decades, you know, it's always been push forward at full speed, it's OK, that's our only choice. But if you look at globalization and the rapid acceleration of that, if you look at the strategy of "move fast and break things" and what happened with that, and then you look at the potential for synthetic biology, I don't know that we should move forward rapidly or without any kind of restriction to a world where you could have a DNA printer in every home and high school lab. There are some restrictions, right?
12:46
NB: Possibly, there is the first part, the not feasible. If you think it would be desirable to stop it, there's the problem of feasibility. So it doesn't really help if one nation kind of --

12:56
CA: No, it doesn't help if one nation does, but we've had treaties before. That's really how we survived the nuclear threat, was by going out there and going through the painful process of negotiating. I just wonder whether the logic isn't that we, as a matter of global priority, we shouldn't go out there and try, like, now start negotiating really strict rules on where synthetic bioresearch is done, that it's not something that you want to democratize, no?
13:24
NB: I totally agree with that -- that it would be desirable, for example, maybe to have DNA synthesis machines, not as a product where each lab has their own device, but maybe as a service. Maybe there could be four or five places in the world where you send in your digital blueprint and the DNA comes back, right? And then, you would have the ability, if one day it really looked like it was necessary, we would have like, a finite set of choke points. So I think you want to look for kind of special opportunities, where you could have tighter control.

13:57
CA: Your belief is, fundamentally, we are not going to be successful in just holding back. Someone, somewhere -- North Korea, you know -- someone is going to go there and discover this knowledge, if it's there to be found.

14:09
NB: That looks plausible under current conditions. It's not just synthetic biology, either. I mean, any kind of profound, new change in the world could turn out to be a black ball.

14:17
CA: Let's look at another possible response.
14:19
NB: This also, I think, has only limited potential. So, with the Type-1 vulnerability again, I mean, if you could reduce the number of people who are incentivized to destroy the world, if only they could get access and the means, that would be good.

14:34
CA: In this image that you asked us to do, you're imagining these drones flying around the world with facial recognition. When they spot someone showing signs of sociopathic behavior, they shower them with love, they fix them.

14:45
NB: I think it's like a hybrid picture. Eliminate can either mean, like, incarcerate or kill, or it can mean persuade them to a better view of the world. But the point is that, suppose you were extremely successful in this, and you reduced the number of such individuals by half. And if you want to do it by persuasion, you are competing against all other powerful forces that are trying to persuade people, parties, religion, education system. But suppose you could reduce it by half, I don't think the risk would be reduced by half. Maybe by five or 10 percent.
15:15
CA: You're not recommending that we gamble humanity's future on response two.

15:20
NB: I think it's all good to try to deter and persuade people, but we shouldn't rely on that as our only safeguard.

15:26
CA: How about three?

15:27
NB: I think there are two general methods that we could use to achieve the ability to stabilize the world against the whole spectrum of possible vulnerabilities. And we probably would need both. So, one is an extremely effective ability to do preventive policing. Such that you could intercept. If anybody started to do this dangerous thing, you could intercept them in real time, and stop them. So this would require ubiquitous surveillance, everybody would be monitored all the time.

15:58
CA: This is "Minority Report," essentially, a form of.

16:00
NB: You would have maybe AI algorithms, big freedom centers that were reviewing this, etc., etc.

16:08
CA: You know that mass surveillance is not a very popular term right now?

16:13
(Laughter)
16:15
NB: Yeah, so this little device there, imagine that kind of necklace that you would have to wear at all times with multidirectional cameras. But, to make it go down better, just call it the "freedom tag" or something like that.

16:28
(Laughter)

16:30
CA: OK. I mean, this is the conversation, friends, this is why this is such a mind-blowing conversation.

16:37
NB: Actually, there's a whole big conversation on this on its own, obviously. There are huge problems and risks with that, right? We may come back to that. So the other, the final, the other general stabilization capability is kind of plugging another governance gap. So the surveillance would be kind of governance gap at the microlevel, like, preventing anybody from ever doing something highly illegal. Then, there's a corresponding governance gap at the macro level, at the global level. You would need the ability, reliably, to prevent the worst kinds of global coordination failures, to avoid wars between great powers, arms races, cataclysmic commons problems, in order to deal with the Type-2a vulnerabilities.
17:23
CA: Global governance is a term that's definitely way out of fashion right now, but could you make the case that throughout history, the history of humanity is that at every stage of technological power increase, people have reorganized and sort of centralized the power. So, for example, when a roving band of criminals could take over a society, the response was, well, you have a nation-state and you centralize force, a police force or an army, so, "No, you can't do that." The logic, perhaps, of having a single person or a single group able to take out humanity means at some point we're going to have to go this route, at least in some form, no?

18:02
NB: It's certainly true that the scale of political organization has increased over the course of human history. It used to be hunter-gatherer band, right, and then chiefdom, city-states, nations, now there are international organizations and so on and so forth. Again, I just want to make sure I get the chance to stress that obviously there are huge downsides and indeed, massive risks, both to mass surveillance and to global governance. I'm just pointing out that if we are lucky, the world could be such that these would be the only ways you could survive a black ball.
18:34
CA: The logic of this theory,
397
1102458
2518
18:37
it seems to me,
398
1105000
1268
18:38
is that we've got to recognize
we can't have it all.
399
1106292
3601
18:41
That the sort of,
400
1109917
1833
18:45
I would say, naive dream
that many of us had
401
1113500
2976
18:48
that technology is always
going to be a force for good,
402
1116500
3351
18:51
keep going, don't stop,
go as fast as you can
403
1119875
2976
18:54
and not pay attention
to some of the consequences,
404
1122875
2351
18:57
that's actually just not an option.
405
1125250
1684
18:58
We can have that.
406
1126958
1935
19:00
If we have that,
407
1128917
1267
19:02
we're going to have to accept
408
1130208
1435
19:03
some of these other
very uncomfortable things with it,
409
1131667
2559
19:06
and kind of be in this
arms race with ourselves
410
1134250
2226
19:08
of, you want the power,
you better limit it,
411
1136500
2268
19:10
you better figure out how to limit it.
412
1138792
2142
19:12
NB: I think it is an option,
413
1140958
3476
19:16
a very tempting option,
it's in a sense the easiest option
414
1144458
2768
19:19
and it might work,
415
1147250
1268
19:20
but it means we are fundamentally
vulnerable to extracting a black ball.
416
1148542
4809
19:25
Now, I think with a bit of coordination,
417
1153375
2143
19:27
like, if you did solve this
macrogovernance problem,
418
1155542
2726
19:30
and the microgovernance problem,
419
1158292
1601
19:31
then we could extract
all the balls from the urn
420
1159917
2309
19:34
and we'd benefit greatly.
421
1162250
2268
19:36
CA: I mean, if we're living
in a simulation, does it matter?
422
1164542
3434
19:40
We just reboot.
423
1168000
1309
19:41
(Laughter)
424
1169333
1268
19:42
NB: Then ... I ...
425
1170625
1643
19:44
(Laughter)
426
1172292
2476
19:46
I didn't see that one coming.
427
1174792
1416
19:50
CA: So what's your view?
428
1178125
1268
19:51
Putting all the pieces together,
how likely is it that we're doomed?
429
1179417
4809
19:56
(Laughter)
430
1184250
1958
19:59
I love how people laugh
when you ask that question.
431
1187042
2392
20:01
NB: On an individual level,
432
1189458
1351
20:02
we seem to kind of be doomed anyway,
just with the time line,
433
1190833
3851
20:06
we're rotting and aging
and all kinds of things, right?
434
1194708
2601
20:09
(Laughter)
435
1197333
1601
20:10
It's actually a little bit tricky.
436
1198958
1685
20:12
If you want to set up
so that you can attach a probability,
437
1200667
2767
20:15
first, who are we?
438
1203458
1268
20:16
If you're very old,
probably you'll die of natural causes,
439
1204750
2726
20:19
if you're very young,
you might have a 100-year --
440
1207500
2351
20:21
the probability might depend
on who you ask.
441
1209875
2143
20:24
Then the threshold, like, what counts
as civilizational devastation?
442
1212042
4226
20:28
In the paper I don't require
an existential catastrophe
443
1216292
5642
20:33
in order for it to count.
444
1221958
1435
20:35
This is just a definitional matter,
445
1223417
1684
20:37
I say a billion dead,
446
1225125
1309
20:38
or a reduction of world GDP by 50 percent,
447
1226458
2060
20:40
but depending on what
you say the threshold is,
448
1228542
2226
20:42
you get a different probability estimate.
449
1230792
1976
20:44
But I guess you could
put me down as a frightened optimist.
450
1232792
4517
20:49
(Laughter)
451
1237333
1101
20:50
CA: You're a frightened optimist,
452
1238458
1643
20:52
and I think you've just created
a large number of other frightened ...
453
1240125
4268
20:56
people.
454
1244417
1267
20:57
(Laughter)
455
1245708
1060
20:58
NB: In the simulation.
456
1246792
1267
21:00
CA: In a simulation.
457
1248083
1268
21:01
Nick Bostrom, your mind amazes me,
458
1249375
1684
21:03
thank you so much for scaring
the living daylights out of us.
459
1251083
2893
21:06
(Applause)
460
1254000
2375


ABOUT THE SPEAKERS
Nick Bostrom - Philosopher
Nick Bostrom asks big questions: What should we do, as individuals and as a species, to optimize our long-term prospects? Will humanity’s technological advancements ultimately destroy us?

Why you should listen

Philosopher Nick Bostrom envisioned a future full of human enhancement, nanotechnology and machine intelligence long before they became mainstream concerns. From his famous simulation argument -- which identified some striking implications of rejecting the Matrix-like idea that humans are living in a computer simulation -- to his work on existential risk, Bostrom approaches both the inevitable and the speculative using the tools of philosophy, probability theory, and scientific analysis.

Since 2005, Bostrom has led the Future of Humanity Institute, a research group of mathematicians, philosophers and scientists at Oxford University tasked with investigating the big picture for the human condition and its future. He has been referred to as one of the most important thinkers of our age.

Nick was honored as one of Foreign Policy's 2015 Global Thinkers.

His recent book Superintelligence advances the ominous idea that “the first ultraintelligent machine is the last invention that man need ever make.”

Nick Bostrom | Speaker | TED.com
Chris Anderson - TED Curator
After a long career in journalism and publishing, Chris Anderson became the curator of the TED Conference in 2002 and has developed it as a platform for identifying and disseminating ideas worth spreading.

Why you should listen

Chris Anderson is the Curator of TED, a nonprofit devoted to sharing valuable ideas, primarily through the medium of 'TED Talks' -- short talks that are offered free online to a global audience.

Chris was born in a remote village in Pakistan in 1957. He spent his early years in India, Pakistan and Afghanistan, where his parents worked as medical missionaries, and he attended an American school in the Himalayas for his early education. After boarding school in Bath, England, he went on to Oxford University, graduating in 1978 with a degree in philosophy, politics and economics.

Chris then trained as a journalist, working in newspapers and radio, including two years producing a world news service in the Seychelles Islands.

Back in the UK in 1984, Chris was captivated by the personal computer revolution and became an editor at one of the UK's early computer magazines. A year later he founded Future Publishing with a $25,000 bank loan. The new company initially focused on specialist computer publications but eventually expanded into other areas such as cycling, music, video games, technology and design, doubling in size every year for seven years. In 1994, Chris moved to the United States where he built Imagine Media, publisher of Business 2.0 magazine and creator of the popular video game users website IGN. Chris eventually merged Imagine and Future, taking the combined entity public in London in 1999, under the Future name. At its peak, it published 150 magazines and websites and employed 2,000 people.

This success allowed Chris to create a private nonprofit organization, the Sapling Foundation, with the hope of finding new ways to tackle tough global issues through media, technology, entrepreneurship and, most of all, ideas. In 2001, the foundation acquired the TED Conference, then an annual meeting of luminaries in the fields of Technology, Entertainment and Design held in Monterey, California, and Chris left Future to work full time on TED.

He expanded the conference's remit to cover all topics, including science, business and key global issues, while adding a Fellows program, which now has some 300 alumni, and the TED Prize, which grants its recipients "one wish to change the world." The TED stage has become a place for thinkers and doers from all fields to share their ideas and their work, capturing imaginations, sparking conversation and encouraging discovery along the way.

In 2006, TED experimented with posting some of its talks on the Internet. Their viral success encouraged Chris to begin positioning the organization as a global media initiative devoted to 'ideas worth spreading,' part of a new era of information dissemination using the power of online video. In June 2015, the organization posted its 2,000th talk online. The talks are free to view, and they have been translated into more than 100 languages with the help of volunteers from around the world. Viewership has grown to approximately one billion views per year.

Continuing a strategy of 'radical openness,' in 2009 Chris introduced the TEDx initiative, allowing free licenses to local organizers who wished to organize their own TED-like events. More than 8,000 such events have been held, generating an archive of 60,000 TEDx talks. And three years later, the TED-Ed program was launched, offering free educational videos and tools to students and teachers.

Chris Anderson | Speaker | TED.com