ABOUT THE SPEAKER
Sam Harris - Neuroscientist, philosopher
Sam Harris's work focuses on how our growing understanding of ourselves and the world is changing our sense of how we should live.

Why you should listen

Sam Harris is the author of five New York Times bestsellers. His books include The End of Faith, Letter to a Christian Nation, The Moral Landscape, Free Will, Lying, Waking Up and Islam and the Future of Tolerance (with Maajid Nawaz). The End of Faith won the 2005 PEN Award for Nonfiction. Harris's writing and public lectures cover a wide range of topics -- neuroscience, moral philosophy, religion, spirituality, violence, human reasoning -- but generally focus on how a growing understanding of ourselves and the world is changing our sense of how we should live.

Harris's work has been published in more than 20 languages and has been discussed in the New York Times, Time, Scientific American, Nature, Newsweek, Rolling Stone and many other publications. He has written for the New York Times, the Los Angeles Times, The Economist, The Times (London), the Boston Globe, The Atlantic, The Annals of Neurology and elsewhere. Harris also regularly hosts a popular podcast.

Harris received a degree in philosophy from Stanford University and a Ph.D. in neuroscience from UCLA.

TEDSummit

Sam Harris: Can we build AI without losing control over it?


Filmed:
5,024,015 views

Are you afraid of superintelligent AI? Neuroscientist and philosopher Sam Harris says you should be -- and not merely in a theoretical way. We are going to build machines that surpass us, he says, and they may treat us the way we treat ants. Yet we have still not come up with a satisfying answer to this problem.


00:13
I'm going to talk about a failure of intuition that many of us suffer from.
00:17
It's really a failure to detect a certain kind of danger.
00:21
I'm going to describe a scenario that I think is both terrifying and likely to occur,
00:28
and that's not a good combination, as it turns out.
00:32
And yet rather than be scared, most of you will feel that what I'm talking about is kind of cool.
00:37
I'm going to describe how the gains we make in artificial intelligence could ultimately destroy us.
00:43
And in fact, I think it's very difficult to see how they won't destroy us or inspire us to destroy ourselves.
00:49
And yet if you're anything like me, you'll find that it's fun to think about these things.
00:53
And that response is part of the problem.
00:57
OK? That response should worry you.
00:59
And if I were to convince you in this talk that we were likely to suffer a global famine, either because of climate change or some other catastrophe,
01:09
and that your grandchildren, or their grandchildren, are very likely to live like this,
01:15
you wouldn't think,
01:17
"Interesting. I like this TED Talk."
01:21
Famine isn't fun.
01:23
Death by science fiction, on the other hand, is fun, and one of the things that worries me most about the development of AI at this point
01:31
is that we seem unable to marshal an appropriate emotional response to the dangers that lie ahead.
01:37
I am unable to marshal this response, and I'm giving this talk.
01:42
It's as though we stand before two doors.
01:44
Behind door number one, we stop making progress in building intelligent machines.
01:49
Our computer hardware and software just stops getting better for some reason.
01:53
Now take a moment to consider why this might happen.
01:57
I mean, given how valuable intelligence and automation are, we will continue to improve our technology if we are at all able to.
02:05
What could stop us from doing this?
02:07
A full-scale nuclear war?
02:11
A global pandemic?
02:14
An asteroid impact?
02:17
Justin Bieber becoming president of the United States?
02:20
(Laughter)
02:24
The point is, something would have to destroy civilization as we know it.
02:29
You have to imagine how bad it would have to be to prevent us from making improvements in our technology permanently,
02:38
generation after generation.
02:40
Almost by definition, this is the worst thing that's ever happened in human history.
02:44
So the only alternative, and this is what lies behind door number two,
02:48
is that we continue to improve our intelligent machines year after year after year.
02:53
At a certain point, we will build machines that are smarter than we are,
02:58
and once we have machines that are smarter than we are, they will begin to improve themselves.
03:02
And then we risk what the mathematician IJ Good called an "intelligence explosion," that the process could get away from us.
03:10
Now, this is often caricatured, as I have here, as a fear that armies of malicious robots will attack us.
03:17
But that isn't the most likely scenario.
03:20
It's not that our machines will become spontaneously malevolent.
03:25
The concern is really that we will build machines that are so much more competent than we are
03:29
that the slightest divergence between their goals and our own could destroy us.
03:35
Just think about how we relate to ants.
03:38
We don't hate them.
03:40
We don't go out of our way to harm them.
03:42
In fact, sometimes we take pains not to harm them.
03:44
We step over them on the sidewalk.
03:46
But whenever their presence seriously conflicts with one of our goals,
03:51
let's say when constructing a building like this one,
03:53
we annihilate them without a qualm.
03:56
The concern is that we will one day build machines that, whether they're conscious or not, could treat us with similar disregard.
04:05
Now, I suspect this seems far-fetched to many of you.
04:09
I bet there are those of you who doubt that superintelligent AI is possible, much less inevitable.
04:17
But then you must find something wrong with one of the following assumptions.
04:21
And there are only three of them.
04:23
Intelligence is a matter of information processing in physical systems.
04:29
Actually, this is a little bit more than an assumption.
04:31
We have already built narrow intelligence into our machines, and many of these machines perform at a level of superhuman intelligence already.
04:40
And we know that mere matter can give rise to what is called "general intelligence," an ability to think flexibly across multiple domains,
04:49
because our brains have managed it. Right?
04:52
I mean, there's just atoms in here, and as long as we continue to build systems of atoms that display more and more intelligent behavior,
05:04
we will eventually, unless we are interrupted, we will eventually build general intelligence into our machines.
05:11
It's crucial to realize that the rate of progress doesn't matter, because any progress is enough to get us into the end zone.
05:18
We don't need Moore's law to continue. We don't need exponential progress.
05:22
We just need to keep going.
05:25
The second assumption is that we will keep going.
05:29
We will continue to improve our intelligent machines.
05:33
And given the value of intelligence --
05:37
I mean, intelligence is either the source of everything we value or we need it to safeguard everything we value.
05:43
It is our most valuable resource.
05:46
So we want to do this.
05:47
We have problems that we desperately need to solve.
05:50
We want to cure diseases like Alzheimer's and cancer.
05:54
We want to understand economic systems. We want to improve our climate science.
05:58
So we will do this, if we can.
06:01
The train is already out of the station, and there's no brake to pull.
06:05
Finally, we don't stand on a peak of intelligence, or anywhere near it, likely.
06:13
And this really is the crucial insight.
06:15
This is what makes our situation so precarious, and this is what makes our intuitions about risk so unreliable.
06:23
Now, just consider the smartest person who has ever lived.
06:26
On almost everyone's shortlist here is John von Neumann.
06:30
I mean, the impression that von Neumann made on the people around him, and this included the greatest mathematicians and physicists of his time, is fairly well-documented.
06:39
If only half the stories about him are half true, there's no question he's one of the smartest people who has ever lived.
06:47
So consider the spectrum of intelligence.
06:50
Here we have John von Neumann.
06:53
And then we have you and me.
06:56
And then we have a chicken.
06:57
(Laughter)
06:59
Sorry, a chicken.
07:00
(Laughter)
07:01
There's no reason for me to make this talk more depressing than it needs to be.
07:05
(Laughter)
07:08
It seems overwhelmingly likely, however, that the spectrum of intelligence extends much further than we currently conceive,
07:15
and if we build machines that are more intelligent than we are, they will very likely explore this spectrum in ways that we can't imagine, and exceed us in ways that we can't imagine.
07:27
And it's important to recognize that this is true by virtue of speed alone.
07:31
Right? So imagine if we just built a superintelligent AI that was no smarter than your average team of researchers at Stanford or MIT.
07:42
Well, electronic circuits function about a million times faster than biochemical ones,
07:46
so this machine should think about a million times faster than the minds that built it.
07:51
So you set it running for a week, and it will perform 20,000 years of human-level intellectual work, week after week after week.
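[A back-of-the-envelope check of this figure, taking the stated million-fold speedup at face value:

\[ 1\ \text{week} \times 10^{6} = 10^{6}\ \text{weeks} \approx \frac{10^{6}}{52}\ \text{years} \approx 19{,}000\ \text{years}, \]

which is roughly the 20,000 years of human-level work per elapsed week quoted here.]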
08:01
How could we even understand, much less constrain, a mind making this sort of progress?
08:08
The other thing that's worrying, frankly, is that, imagine the best case scenario.
08:16
So imagine we hit upon a design of superintelligent AI that has no safety concerns.
08:21
We have the perfect design the first time around.
08:24
It's as though we've been handed an oracle that behaves exactly as intended.
08:29
Well, this machine would be the perfect labor-saving device.
08:33
It can design the machine that can build the machine that can do any physical work,
08:37
powered by sunlight, more or less for the cost of raw materials.
08:42
So we're talking about the end of human drudgery.
08:45
We're also talking about the end of most intellectual work.
08:49
So what would apes like ourselves do in this circumstance?
08:52
Well, we'd be free to play Frisbee and give each other massages.
08:57
Add some LSD and some questionable wardrobe choices, and the whole world could be like Burning Man.
09:02
(Laughter)
09:06
Now, that might sound pretty good,
09:09
but ask yourself what would happen under our current economic and political order?
09:14
It seems likely that we would witness a level of wealth inequality and unemployment that we have never seen before.
09:22
Absent a willingness to immediately put this new wealth to the service of all humanity,
09:27
a few trillionaires could grace the covers of our business magazines while the rest of the world would be free to starve.
09:34
And what would the Russians or the Chinese do if they heard that some company in Silicon Valley was about to deploy a superintelligent AI?
09:42
This machine would be capable of waging war, whether terrestrial or cyber, with unprecedented power.
09:50
This is a winner-take-all scenario.
09:52
To be six months ahead of the competition here is to be 500,000 years ahead, at a minimum.
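[The same million-fold assumption is all this figure requires: a six-month head start in machine time corresponds to \( 6\ \text{months} \times 10^{6} = \frac{6 \times 10^{6}}{12}\ \text{years} = 500{,}000\ \text{years} \) of human-equivalent progress.]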
09:59
So it seems that even mere rumors of this kind of breakthrough could cause our species to go berserk.
10:06
Now, one of the most frightening things, in my view, at this moment,
10:12
are the kinds of things that AI researchers say when they want to be reassuring.
10:19
And the most common reason we're told not to worry is time.
10:22
This is all a long way off, don't you know.
10:24
This is probably 50 or 100 years away.
10:27
One researcher has said,
10:29
"Worrying about AI safety is like worrying about overpopulation on Mars."
10:34
This is the Silicon Valley version of "don't worry your pretty little head about it."
10:38
(Laughter)
10:39
No one seems to notice that referencing the time horizon is a total non sequitur.
10:46
If intelligence is just a matter of information processing, and we continue to improve our machines, we will produce some form of superintelligence.
10:56
And we have no idea how long it will take us to create the conditions to do that safely.
11:04
Let me say that again.
11:05
We have no idea how long it will take us to create the conditions to do that safely.
11:12
And if you haven't noticed, 50 years is not what it used to be.
11:16
This is 50 years in months. [On the slide, each dot represents one month.]
11:18
This is how long we've had the iPhone. [The red dot marks the iPhone's introduction.]
11:21
This is how long "The Simpsons" has been on television.
11:24
Fifty years is not that much time to meet one of the greatest challenges our species will ever face.
11:31
Once again, we seem to be failing to have an appropriate emotional response to what we have every reason to believe is coming.
11:38
The computer scientist Stuart Russell has a nice analogy here.
11:42
He said, imagine that we received a message from an alien civilization, which read:
11:49
"People of Earth, we will arrive on your planet in 50 years.
11:53
Get ready."
11:55
And now we're just counting down the months until the mothership lands?
11:59
We would feel a little more urgency than we do.
12:04
Another reason we're told not to worry is that these machines can't help but share our values because they will be literally extensions of ourselves.
12:12
They'll be grafted onto our brains, and we'll essentially become their limbic systems.
12:17
Now take a moment to consider that the safest and only prudent path forward, recommended, is to implant this technology directly into our brains.
12:26
Now, this may in fact be the safest and only prudent path forward,
12:30
but usually one's safety concerns about a technology have to be pretty much worked out before you stick it inside your head.
12:36
(Laughter)
12:38
The deeper problem is that building superintelligent AI on its own seems likely to be easier
12:45
than building superintelligent AI and having the completed neuroscience that allows us to seamlessly integrate our minds with it.
12:52
And given that the companies and governments doing this work are likely to perceive themselves as being in a race against all others,
12:59
given that to win this race is to win the world, provided you don't destroy it in the next moment,
13:05
then it seems likely that whatever is easier to do will get done first.
13:10
Now, unfortunately, I don't have a solution to this problem, apart from recommending that more of us think about it.
13:16
I think we need something like a Manhattan Project on the topic of artificial intelligence.
13:20
Not to build it, because I think we'll inevitably do that, but to understand how to avoid an arms race and to build it in a way that is aligned with our interests.
13:30
When you're talking about superintelligent AI that can make changes to itself,
13:34
it seems that we only have one chance to get the initial conditions right,
13:39
and even then we will need to absorb the economic and political consequences of getting them right.
13:45
But the moment we admit that information processing is the source of intelligence,
13:52
that some appropriate computational system is what the basis of intelligence is,
13:58
and we admit that we will improve these systems continuously,
14:03
and we admit that the horizon of cognition very likely far exceeds what we currently know,
14:10
then we have to admit that we are in the process of building some sort of god.
14:15
Now would be a good time to make sure it's a god we can live with.
14:20
Thank you very much.
14:21
(Applause)
Translated by Junyi Sha
Reviewed by Ma Nan
