ABOUT THE SPEAKER
Janelle Shane - AI researcher
While moonlighting as a research scientist, Janelle Shane found fame documenting the often hilarious antics of AI algorithms.

Why you should listen

Janelle Shane's humor blog, AIweirdness.com, looks at, as she tells it, "the strange side of artificial intelligence." Her upcoming book, You Look Like a Thing and I Love You: How AI Works and Why It's Making the World a Weirder Place, uses cartoons and humorous pop-culture experiments to look inside the minds of the algorithms that run our world, making artificial intelligence and machine learning both accessible and entertaining.

According to Shane, she has only made a neural network-written recipe once -- and discovered that horseradish brownies are about as terrible as you might imagine.

More profile about the speaker
Janelle Shane | Speaker | TED.com
TED2019

Janelle Shane: The danger of AI is weirder than you think

376,501 views

The danger of artificial intelligence isn't that it's going to rebel against us, but that it's going to do exactly what we ask it to do, says AI researcher Janelle Shane. Sharing the weird, sometimes alarming antics of AI algorithms as they try to solve human problems -- like creating new ice cream flavors or recognizing cars on the road -- Shane shows why AI doesn't yet measure up to real brains.


So, artificial intelligence is known for disrupting all kinds of industries. What about ice cream? What kind of mind-blowing new flavors could we generate with the power of an advanced artificial intelligence? So I teamed up with a group of coders from Kealing Middle School to find out the answer to this question. They collected over 1,600 existing ice cream flavors, and together, we fed them to an algorithm to see what it would generate. And here are some of the flavors that the AI came up with.

[Pumpkin Trash Break]

(Laughter)

[Peanut Butter Slime]

[Strawberry Cream Disease]

(Laughter)

These flavors are not delicious, as we might have hoped they would be. So the question is: What happened? What went wrong? Is the AI trying to kill us? Or is it trying to do what we asked, and there was a problem?
In movies, when something goes wrong with AI, it's usually because the AI has decided that it doesn't want to obey the humans anymore, and it's got its own goals, thank you very much.

In real life, though, the AI that we actually have is not nearly smart enough for that. It has the approximate computing power of an earthworm, or maybe at most a single honeybee, and actually, probably maybe less. Like, we're constantly learning new things about brains that make it clear how much our AIs don't measure up to real brains. So today's AI can do a task like identify a pedestrian in a picture, but it doesn't have a concept of what the pedestrian is beyond that it's a collection of lines and textures and things. It doesn't know what a human actually is.

So will today's AI do what we ask it to do? It will if it can, but it might not do what we actually want.
02:16
So let's say that you
were trying to get an AI
39
124496
2415
02:18
to take this collection of robot parts
40
126935
2619
02:21
and assemble them into some kind of robot
to get from Point A to Point B.
41
129578
4197
02:25
Now, if you were going to try
and solve this problem
42
133799
2481
02:28
by writing a traditional-style
computer program,
43
136304
2351
02:30
you would give the program
step-by-step instructions
44
138679
3417
02:34
on how to take these parts,
45
142120
1329
02:35
how to assemble them
into a robot with legs
46
143473
2407
02:37
and then how to use those legs
to walk to Point B.
47
145904
2942
02:41
But when you're using AI
to solve the problem,
48
149441
2340
02:43
it goes differently.
49
151805
1174
02:45
You don't tell it
how to solve the problem,
50
153003
2382
02:47
you just give it the goal,
51
155409
1479
02:48
and it has to figure out for itself
via trial and error
52
156912
3262
02:52
how to reach that goal.
53
160198
1484
02:54
And it turns out that the way AI tends
to solve this particular problem
54
162254
4102
02:58
is by doing this:
55
166380
1484
02:59
it assembles itself into a tower
and then falls over
56
167888
3367
03:03
and lands at Point B.
57
171279
1827
03:05
And technically, this solves the problem.
58
173130
2829
03:07
Technically, it got to Point B.
59
175983
1639
03:09
The danger of AI is not that
it's going to rebel against us,
60
177646
4265
03:13
it's that it's going to do
exactly what we ask it to do.
61
181935
4274
03:18
So then the trick
of working with AI becomes:
62
186876
2498
03:21
How do we set up the problem
so that it actually does what we want?
63
189398
3828
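That goal-only, trial-and-error setup can be sketched in a few lines. This is a hypothetical toy, not the actual experiment from the talk: the "robot design" is reduced to a single number, its height, and the reward is the goal exactly as stated, the final distance reached. Random search then happily converges on the tallest possible tower, because falling over turns height into distance.

```python
import random

# Hypothetical sketch of goal-only optimization, not the talk's real experiment.
# A candidate "robot" is just a body height h. The goal as *stated* is
# "maximize the distance reached", and a body that tips over lands with its
# top at x = h -- so tall towers score well without ever walking.

def distance_reached(height):
    # Reward exactly as specified: falling over converts height into distance.
    return height

def random_search(trials=1000, max_height=10.0, seed=0):
    """Trial and error: sample designs, keep whichever scores best."""
    rng = random.Random(seed)
    best = 0.0
    for _ in range(trials):
        h = rng.uniform(0.0, max_height)
        if distance_reached(h) > distance_reached(best):
            best = h
    return best

best = random_search()
print(f"best 'robot': a tower of height {best:.2f} falling to x = {distance_reached(best):.2f}")
```

The optimizer never "cheats": it maximizes precisely the number it was given, which is the talk's point that the risk lies in the specification, not in rebellion.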
So this little robot here is being controlled by an AI. The AI came up with a design for the robot legs and then figured out how to use them to get past all these obstacles. But when David Ha set up this experiment, he had to set it up with very, very strict limits on how big the AI was allowed to make the legs, because otherwise ...

(Laughter)

And technically, it got to the end of that obstacle course. So you see how hard it is to get AI to do something as simple as just walk.
So seeing the AI do this, you may say, OK, no fair, you can't just be a tall tower and fall over, you have to actually, like, use legs to walk. And it turns out, that doesn't always work, either. This AI's job was to move fast. They didn't tell it that it had to run facing forward or that it couldn't use its arms. So this is what you get when you train AI to move fast, you get things like somersaulting and silly walks. It's really common. So is twitching along the floor in a heap.

(Laughter)

So in my opinion, you know what should have been a whole lot weirder is the "Terminator" robots.

Hacking "The Matrix" is another thing that AI will do if you give it a chance. So if you train an AI in a simulation, it will learn how to do things like hack into the simulation's math errors and harvest them for energy. Or it will figure out how to move faster by glitching repeatedly into the floor.

When you're working with AI, it's less like working with another human and a lot more like working with some kind of weird force of nature. And it's really easy to accidentally give AI the wrong problem to solve, and often we don't realize that until something has actually gone wrong.
So here's an experiment I did, where I wanted the AI to copy paint colors, to invent new paint colors, given the list like the ones here on the left. And here's what the AI actually came up with.

[Sindis Poop, Turdly, Suffer, Gray Pubic]

(Laughter)

So technically, it did what I asked it to. I thought I was asking it for, like, nice paint color names, but what I was actually asking it to do was just imitate the kinds of letter combinations that it had seen in the original. And I didn't tell it anything about what words mean, or that there are maybe some words that it should avoid using in these paint colors. So its entire world is the data that I gave it. Like with the ice cream flavors, it doesn't know about anything else.
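What "imitate letter combinations" means can be shown with a tiny character-level model. This is a hypothetical sketch: a character Markov chain, far simpler than the neural network the talk describes, trained on made-up stand-in names rather than a real paint list. Its entire world is which character tends to follow which; nothing in it knows what any word means.

```python
import random

# Hypothetical stand-in training data, not a real paint-color list.
paint_names = [
    "Dusty Rose", "Ocean Mist", "Sandy Taupe", "Forest Shadow",
    "Morning Fog", "Desert Sunset", "Misty Harbor", "Golden Wheat",
]

def build_model(names, order=2):
    """Map each 2-character context to the characters that followed it."""
    model = {}
    for name in names:
        padded = "^" * order + name + "$"  # ^ marks start, $ marks end
        for i in range(len(padded) - order):
            ctx, nxt = padded[i:i + order], padded[i + order]
            model.setdefault(ctx, []).append(nxt)
    return model

def sample_name(model, order=2, rng=None):
    """Generate letter combinations that statistically resemble the training names."""
    rng = rng or random.Random()
    ctx, out = "^" * order, []
    while True:
        nxt = rng.choice(model[ctx])
        if nxt == "$":
            return "".join(out)
        out.append(nxt)
        ctx = ctx[1:] + nxt

model = build_model(paint_names)
rng = random.Random(1)
for _ in range(3):
    print(sample_name(model, rng=rng))
```

The output is plausible-looking letter sequences, some charming and some unfortunate, because "plausible letters" is the only objective the model was ever given.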
So it is through the data that we often accidentally tell AI to do the wrong thing. This is a fish called a tench. And there was a group of researchers who trained an AI to identify this tench in pictures. But then when they asked it what part of the picture it was actually using to identify the fish, here's what it highlighted. Yes, those are human fingers. Why would it be looking for human fingers if it's trying to identify a fish? Well, it turns out that the tench is a trophy fish, and so in a lot of pictures that the AI had seen of this fish during training, the fish looked like this.

(Laughter)

And it didn't know that the fingers aren't part of the fish.
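The tench-and-fingers failure is a spurious correlation in the training data, and it can be mimicked with a deliberately crude toy (hypothetical features and data, nothing like the researchers' actual image model): if every "tench" training picture is a trophy shot, the single most predictive feature turns out to be the fingers, and a finger-free tench then gets missed.

```python
# Hypothetical toy data: each "picture" is a set of crude visual features.
# Every tench photo in training is a trophy shot, so fingers co-occur with it.
train = [
    ({"scales", "fins", "fingers"}, "tench"),
    ({"scales", "fins", "fingers"}, "tench"),
    ({"scales", "fins"}, "not_tench"),   # some other fish, no trophy fingers
    ({"fur", "whiskers"}, "not_tench"),  # a cat
]

def best_single_feature(data, label="tench"):
    """Pick the one feature whose presence best predicts the label in training."""
    features = set().union(*(feats for feats, _ in data))
    def accuracy(feat):
        return sum((feat in feats) == (y == label) for feats, y in data)
    return max(features, key=accuracy)

def predict(feat, picture):
    return "tench" if feat in picture else "not_tench"

feature = best_single_feature(train)
print(feature)                               # -> fingers
print(predict(feature, {"scales", "fins"}))  # a finger-free tench: -> not_tench
```

The classifier did exactly what the data rewarded; nothing in the data said that fingers aren't part of the fish.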
So you see why it is so hard to design an AI that actually can understand what it's looking at. And this is why designing the image recognition in self-driving cars is so hard, and why so many self-driving car failures are because the AI got confused. I want to talk about an example from 2016. There was a fatal accident when somebody was using Tesla's autopilot AI, but instead of using it on the highway like it was designed for, they used it on city streets. And what happened was, a truck drove out in front of the car and the car failed to brake. Now, the AI definitely was trained to recognize trucks in pictures. But what it looks like happened is the AI was trained to recognize trucks on highway driving, where you would expect to see trucks from behind. Trucks on the side is not supposed to happen on a highway, and so when the AI saw this truck, it looks like the AI recognized it as most likely to be a road sign and therefore, safe to drive underneath.
Here's an AI misstep from a different field. Amazon recently had to give up on a résumé-sorting algorithm that they were working on when they discovered that the algorithm had learned to discriminate against women. What happened is they had trained it on example résumés of people who they had hired in the past. And from these examples, the AI learned to avoid the résumés of people who had gone to women's colleges or who had the word "women" somewhere in their résumé, as in, "women's soccer team" or "Society of Women Engineers." The AI didn't know that it wasn't supposed to copy this particular thing that it had seen the humans do. And technically, it did what they asked it to do. They just accidentally asked it to do the wrong thing. And this happens all the time with AI.
AI can be really destructive and not know it. So the AIs that recommend new content in Facebook, in YouTube, they're optimized to increase the number of clicks and views. And unfortunately, one way that they have found of doing this is to recommend the content of conspiracy theories or bigotry. The AIs themselves don't have any concept of what this content actually is, and they don't have any concept of what the consequences might be of recommending this content.

So, when we're working with AI, it's up to us to avoid problems. And avoiding things going wrong, that may come down to the age-old problem of communication, where we as humans have to learn how to communicate with AI. We have to learn what AI is capable of doing and what it's not, and to understand that, with its tiny little worm brain, AI doesn't really understand what we're trying to ask it to do.

So in other words, we have to be prepared to work with AI that's not the super-competent, all-knowing AI of science fiction. We have to be prepared to work with an AI that's the one that we actually have in the present day. And present-day AI is plenty weird enough.

Thank you.

(Applause)
