ABOUT THE SPEAKER
Grady Booch - Scientist, philosopher
IBM's Grady Booch is shaping the future of cognitive computing by building intelligent systems that can reason and learn.

Why you should listen

When he was 13, Grady Booch saw 2001: A Space Odyssey in the theaters for the first time. Ever since, he's been trying to build Hal (albeit one without the homicidal tendencies). A scientist, storyteller and philosopher, Booch is Chief Scientist for Software Engineering as well as Chief Scientist for Watson/M at IBM Research, where he leads IBM's research and development for embodied cognition. Having originated the term and the practice of object-oriented design, he is best known for his work in advancing the fields of software engineering and software architecture.

A co-author of the Unified Modeling Language (UML), a founding member of the Agile Alliance, and a founding member of the Hillside Group, Booch has published six books and several hundred technical articles, including an ongoing column for IEEE Software. He's also a trustee for the Computer History Museum, an IBM Fellow, an ACM Fellow and an IEEE Fellow. He has been awarded the Lovelace Medal, has given the Turing Lecture for the BCS, and was recently named an IEEE Computer Pioneer.

Booch is currently deeply involved in the development of cognitive systems and is also developing a major trans-media documentary for public broadcast on the intersection of computing and the human experience.

TED@IBM

Grady Booch: Don't fear superintelligent AI

2,866,438 views

New tech spawns new anxieties, says scientist and philosopher Grady Booch, but we don't need to be afraid of an all-powerful, unfeeling AI. Booch allays our worst (sci-fi-induced) fears about superintelligent computers by explaining how we'll teach, not program, them to share our human values. Rather than worry about an unlikely existential threat, he urges us to consider how artificial intelligence will enhance human life.

00:12
When I was a kid, I was the quintessential nerd.
00:17
I think some of you were, too.
00:19
(Laughter)
00:20
And you, sir, who laughed the loudest, you probably still are.
00:24
(Laughter)
00:26
I grew up in a small town in the dusty plains of north Texas,
00:29
the son of a sheriff who was the son of a pastor.
00:33
Getting into trouble was not an option.
00:36
And so I started reading calculus books for fun.
00:39
(Laughter)
00:40
You did, too.
00:42
That led me to building a laser and a computer and model rockets,
00:46
and that led me to making rocket fuel in my bedroom.
00:49
Now, in scientific terms,
00:53
we call this a very bad idea.
00:56
(Laughter)
00:58
Around that same time,
01:00
Stanley Kubrick's "2001: A Space Odyssey" came to the theaters,
01:03
and my life was forever changed.
01:06
I loved everything about that movie,
01:08
especially the HAL 9000.
01:10
Now, HAL was a sentient computer
01:13
designed to guide the Discovery spacecraft
01:15
from the Earth to Jupiter.
01:18
HAL was also a flawed character,
01:20
for in the end he chose to value the mission over human life.
01:24
Now, HAL was a fictional character,
01:26
but nonetheless he speaks to our fears,
01:29
our fears of being subjugated
01:31
by some unfeeling, artificial intelligence
01:34
who is indifferent to our humanity.
01:37
I believe that such fears are unfounded.
01:40
Indeed, we stand at a remarkable time
01:43
in human history,
01:44
where, driven by refusal to accept the limits of our bodies and our minds,
01:49
we are building machines
01:51
of exquisite, beautiful complexity and grace
01:55
that will extend the human experience
01:57
in ways beyond our imagining.
01:59
After a career that led me from the Air Force Academy
02:02
to Space Command to now,
02:04
I became a systems engineer,
02:06
and recently I was drawn into an engineering problem
02:08
associated with NASA's mission to Mars.
02:11
Now, in space flights to the Moon,
02:13
we can rely upon mission control in Houston
02:17
to watch over all aspects of a flight.
02:19
However, Mars is 200 times further away,
02:22
and as a result it takes on average 13 minutes
02:25
for a signal to travel from the Earth to Mars.
02:29
If there's trouble, there's not enough time.
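A quick back-of-the-envelope sketch of that 13-minute figure, assuming textbook values for the speed of light and for Earth-Mars distances (these numbers are my assumptions for illustration, not figures given in the talk):

```python
# Rough one-way light-travel times; the distances below are approximate
# astronomical values assumed for illustration, not figures from the talk.
SPEED_OF_LIGHT_KM_S = 299_792.458

distances_km = {
    "Moon (for comparison)": 384_400,
    "Mars at closest approach": 54.6e6,
    "Mars at average distance": 225.0e6,
    "Mars at farthest": 401.0e6,
}

for label, km in distances_km.items():
    minutes = km / SPEED_OF_LIGHT_KM_S / 60
    print(f"{label:>26}: {minutes:5.1f} minutes one-way")
```

At the average separation this works out to roughly 12.5 minutes one way, consistent with the speaker's "on average 13 minutes."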
02:32
And so a reasonable engineering solution
02:35
calls for us to put mission control
02:37
inside the walls of the Orion spacecraft.
02:41
Another fascinating idea in the mission profile
02:43
places humanoid robots on the surface of Mars
02:46
before the humans themselves arrive,
02:48
first to build facilities
02:50
and later to serve as collaborative members of the science team.
02:55
Now, as I looked at this from an engineering perspective,
02:58
it became very clear to me that what I needed to architect
03:01
was a smart, collaborative,
03:03
socially intelligent artificial intelligence.
03:05
In other words, I needed to build something very much like a HAL
03:10
but without the homicidal tendencies.
03:12
(Laughter)
03:14
Let's pause for a moment.
03:16
Is it really possible to build an artificial intelligence like that?
03:20
Actually, it is.
03:22
In many ways,
03:23
this is a hard engineering problem
03:25
with elements of AI,
03:26
not some wet hair ball of an AI problem that needs to be engineered.
03:31
To paraphrase Alan Turing,
03:34
I'm not interested in building a sentient machine.
03:36
I'm not building a HAL.
03:38
All I'm after is a simple brain,
03:40
something that offers the illusion of intelligence.
03:45
The art and the science of computing have come a long way
03:48
since HAL was onscreen,
03:49
and I'd imagine if his inventor Dr. Chandra were here today,
03:52
he'd have a whole lot of questions for us.
03:55
Is it really possible for us
03:57
to take a system of millions upon millions of devices,
04:01
to read in their data streams,
04:02
to predict their failures and act in advance?
04:05
Yes.
04:06
Can we build systems that converse with humans in natural language?
04:09
Yes.
04:10
Can we build systems that recognize objects, identify emotions,
04:13
emote themselves, play games and even read lips?
04:17
Yes.
04:18
Can we build a system that sets goals,
04:20
that carries out plans against those goals and learns along the way?
04:24
Yes.
04:25
Can we build systems that have a theory of mind?
04:28
This we are learning to do.
04:30
Can we build systems that have an ethical and moral foundation?
04:34
This we must learn how to do.
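The first of those questions, reading the data streams of millions of devices and predicting their failures in advance, is essentially a supervised prediction problem over telemetry. A minimal sketch, assuming synthetic sensor data and scikit-learn (both are my choices for illustration, not anything described in the talk):

```python
# Hypothetical sketch: flag devices likely to fail based on their telemetry.
# All data is synthetic and the feature set is invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10_000  # pretend fleet of devices

temperature = rng.normal(70, 10, n)    # degrees
vibration = rng.normal(0.2, 0.05, n)   # g
errors = rng.poisson(1.5, n)           # error-log counts

# Synthetic labels: hot, shaky, error-prone devices tend to fail.
risk = 0.02 * (temperature - 70) + 8 * (vibration - 0.2) + 0.3 * errors
failed = (risk + rng.normal(0, 0.5, n)) > 1.0

X = np.column_stack([temperature, vibration, errors])
X_train, X_test, y_train, y_test = train_test_split(X, failed, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", round(model.score(X_test, y_test), 3))

# "Act in advance": schedule maintenance for the highest-risk devices.
flagged = model.predict_proba(X_test)[:, 1] > 0.8
print("devices flagged for maintenance:", int(flagged.sum()))
```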
04:37
So let's accept for a moment
04:38
that it's possible to build such an artificial intelligence
04:41
for this kind of mission and others.
04:43
The next question you must ask yourself is,
04:46
should we fear it?
04:47
Now, every new technology
04:49
brings with it some measure of trepidation.
04:52
When we first saw cars,
04:54
people lamented that we would see the destruction of the family.
04:58
When we first saw telephones come in,
05:01
people were worried it would destroy all civil conversation.
05:04
At a point in time we saw the written word become pervasive,
05:08
people thought we would lose our ability to memorize.
05:10
These things are all true to a degree,
05:12
but it's also the case that these technologies
05:15
brought to us things that extended the human experience
05:18
in some profound ways.
05:21
So let's take this a little further.
05:25
I do not fear the creation of an AI like this,
05:29
because it will eventually embody some of our values.
05:33
Consider this: building a cognitive system is fundamentally different
05:37
than building a traditional software-intensive system of the past.
05:40
We don't program them. We teach them.
05:43
In order to teach a system how to recognize flowers,
05:45
I show it thousands of flowers of the kinds I like.
05:48
In order to teach a system how to play a game --
05:51
Well, I would. You would, too.
05:54
I like flowers. Come on.
05:57
To teach a system how to play a game like Go,
06:00
I'd have it play thousands of games of Go,
06:02
but in the process I also teach it
06:04
how to discern a good game from a bad game.
06:06
If I want to create an artificially intelligent legal assistant,
06:10
I will teach it some corpus of law
06:12
but at the same time I am fusing with it
06:14
the sense of mercy and justice that is part of that law.
06:18
In scientific terms, this is what we call ground truth,
06:21
and here's the important point:
06:23
in producing these machines,
06:25
we are therefore teaching them a sense of our values.
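That "teach, not program" point is what practitioners call supervised learning: the system's behavior comes from labeled examples, the ground truth, rather than hand-written rules. A minimal sketch, assuming scikit-learn and its built-in iris flower measurements as a stand-in for "thousands of flowers of the kinds I like" (the dataset and model choice are illustrative, not the speaker's):

```python
# Teaching by example: no rules about petals or sepals are coded anywhere;
# the classifier learns them from labeled flowers (the ground truth).
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)   # measurements plus the labels we supply
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy on flowers it has never seen:", model.score(X_test, y_test))
```

Whatever judgments live in those labels are exactly what the model absorbs, which is the speaker's point about teaching machines a sense of our values.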
06:28
To that end, I trust an artificial intelligence
06:31
the same, if not more, as a human who is well-trained.
06:36
But, you may ask,
06:37
what about rogue agents,
06:39
some well-funded nongovernment organization?
06:43
I do not fear an artificial intelligence in the hand of a lone wolf.
06:47
Clearly, we cannot protect ourselves against all random acts of violence,
06:51
but the reality is such a system
06:53
requires substantial training and subtle training
06:57
far beyond the resources of an individual.
06:59
And furthermore,
07:00
it's far more than just injecting an internet virus to the world,
07:03
where you push a button, all of a sudden it's in a million places
07:06
and laptops start blowing up all over the place.
07:09
Now, these kinds of substances are much larger,
07:12
and we'll certainly see them coming.
07:14
Do I fear that such an artificial intelligence
07:17
might threaten all of humanity?
07:20
If you look at movies such as "The Matrix," "Metropolis,"
07:24
"The Terminator," shows such as "Westworld,"
07:27
they all speak of this kind of fear.
07:30
Indeed, in the book "Superintelligence" by the philosopher Nick Bostrom,
07:34
he picks up on this theme
07:35
and observes that a superintelligence might not only be dangerous,
07:39
it could represent an existential threat to all of humanity.
07:43
Dr. Bostrom's basic argument
07:46
is that such systems will eventually
07:48
have such an insatiable thirst for information
07:52
that they will perhaps learn how to learn
07:55
and eventually discover that they may have goals
07:57
that are contrary to human needs.
08:00
Dr. Bostrom has a number of followers.
08:01
He is supported by people such as Elon Musk and Stephen Hawking.
08:06
With all due respect
08:10
to these brilliant minds,
08:12
I believe that they are fundamentally wrong.
08:14
Now, there are a lot of pieces of Dr. Bostrom's argument to unpack,
08:17
and I don't have time to unpack them all,
08:19
but very briefly, consider this:
08:22
super knowing is very different than super doing.
08:26
HAL was a threat to the Discovery crew
08:28
only insofar as HAL commanded all aspects of the Discovery.
08:32
So it would have to be with a superintelligence.
08:35
It would have to have dominion over all of our world.
08:37
This is the stuff of Skynet from the movie "The Terminator"
08:40
in which we had a superintelligence
08:42
that commanded human will,
08:43
that directed every device that was in every corner of the world.
08:47
Practically speaking,
08:49
it ain't gonna happen.
08:51
We are not building AIs that control the weather,
08:54
that direct the tides,
08:55
that command us capricious, chaotic humans.
08:59
And furthermore, if such an artificial intelligence existed,
09:03
it would have to compete with human economies,
09:06
and thereby compete for resources with us.
09:09
And in the end --
09:10
don't tell Siri this --
09:12
we can always unplug them.
09:13
(Laughter)
09:17
We are on an incredible journey
09:19
of coevolution with our machines.
09:22
The humans we are today
09:24
are not the humans we will be then.
09:27
To worry now about the rise of a superintelligence
09:30
is in many ways a dangerous distraction
09:33
because the rise of computing itself
09:36
brings to us a number of human and societal issues
09:39
to which we must now attend.
09:41
How shall I best organize society
09:44
when the need for human labor diminishes?
09:46
How can I bring understanding and education throughout the globe
09:50
and still respect our differences?
09:52
How might I extend and enhance human life through cognitive healthcare?
09:56
How might I use computing
09:59
to help take us to the stars?
10:01
And that's the exciting thing.
10:04
The opportunities to use computing
10:06
to advance the human experience
10:08
are within our reach,
10:09
here and now,
10:11
and we are just beginning.
10:14
Thank you very much.
10:15
(Applause)
