ABOUT THE SPEAKER
Iyad Rahwan - Computational social scientist
Iyad Rahwan's work lies at the intersection of the computer and social sciences, with a focus on collective intelligence, large-scale cooperation and the social aspects of artificial intelligence.

Why you should listen

Iyad Rahwan is the AT&T Career Development Professor and an associate professor of media arts & sciences at the MIT Media Lab, where he leads the Scalable Cooperation group. A native of Aleppo, Syria, Rahwan holds a PhD from the University of Melbourne, Australia, and is affiliate faculty at the MIT Institute for Data, Systems, and Society (IDSS). He led the winning team in the US State Department's Tag Challenge, which used social media to locate individuals in remote cities within 12 hours using only their mug shots. Recently he crowdsourced 30 million decisions from people worldwide about the ethics of AI systems. Rahwan's work has appeared in major academic journals, including Science and PNAS, and features regularly in major media outlets, including the New York Times, The Economist and the Wall Street Journal.

(Photo: Victoriano Izquierdo)

TEDxCambridge

Iyad Rahwan: What moral decisions should driverless cars make?

1,147,381 views

Should your driverless car kill you if it means saving five pedestrians? In this primer on the social dilemmas of driverless cars, Iyad Rahwan explores how the technology will challenge our morality and explains his work collecting data from real people on the ethical trade-offs we're willing (and not willing) to make.


00:13
Today I'm going to talk about technology and society. The Department of Transport estimated that last year 35,000 people died from traffic crashes in the US alone. Worldwide, 1.2 million people die every year in traffic accidents. If there was a way we could eliminate 90 percent of those accidents, would you support it? Of course you would. This is what driverless car technology promises to achieve by eliminating the main source of accidents -- human error.

00:49
Now picture yourself in a driverless car in the year 2030, sitting back and watching this vintage TEDxCambridge video. (Laughter) All of a sudden, the car experiences mechanical failure and is unable to stop. If the car continues, it will crash into a bunch of pedestrians crossing the street, but the car may swerve, hitting one bystander, killing them to save the pedestrians. What should the car do, and who should decide? What if instead the car could swerve into a wall, crashing and killing you, the passenger, in order to save those pedestrians? This scenario is inspired by the trolley problem, which was invented by philosophers a few decades ago to think about ethics.
01:46
Now, the way we think about this problem matters. We may, for example, not think about it at all. We may say this scenario is unrealistic, incredibly unlikely, or just silly. But I think this criticism misses the point, because it takes the scenario too literally. Of course no accident is going to look like this; no accident has two or three options where everybody dies somehow. Instead, the car is going to calculate something like the probability of hitting a certain group of people: if you swerve one direction versus another direction, you might slightly increase the risk to passengers or other drivers versus pedestrians. It's going to be a more complex calculation, but it's still going to involve trade-offs, and trade-offs often require ethics.
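To make that kind of calculation concrete, here is a minimal illustrative sketch (my own addition, not code from the talk or from any real vehicle) in which each maneuver carries assumed collision probabilities and the car compares expected harm; every number and label is hypothetical.

```python
# Hypothetical sketch of the trade-off calculation described above:
# each maneuver has assumed probabilities of harming different groups,
# and the controller compares expected harm. All figures are invented.

def expected_harm(outcomes):
    """Sum over (probability of collision, people at risk) pairs."""
    return sum(p * n for p, n in outcomes)

maneuvers = {
    "stay in lane": [(0.30, 5)],              # risk to pedestrians ahead
    "swerve left":  [(0.10, 1), (0.05, 1)],   # risk to a bystander and to the passenger
    "swerve right": [(0.02, 1), (0.20, 1)],   # risk to the passenger and to another driver
}

for name, outcomes in maneuvers.items():
    print(f"{name}: expected harm = {expected_harm(outcomes):.2f}")

# Choosing the minimum is itself an ethical decision: it trades a small
# added risk to some road users against a larger risk to others.
best = min(maneuvers, key=lambda m: expected_harm(maneuvers[m]))
print("lowest expected harm:", best)
```

In a real system the probabilities would come from perception and prediction models, but the structure of the choice, weighing risk across groups, is the same one the talk describes.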
02:39
We might say then, "Well, let's not worry about this. Let's wait until technology is fully ready and 100 percent safe." Suppose that we can indeed eliminate 90 percent of those accidents, or even 99 percent in the next 10 years. What if eliminating the last one percent of accidents requires 50 more years of research? Should we not adopt the technology? That's 60 million people dead in car accidents if we maintain the current rate. So the point is, waiting for full safety is also a choice, and it also involves trade-offs.
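For context, the 60 million figure follows directly from the worldwide toll quoted at the start of the talk, held constant over that hypothetical research period:

\[
1.2 \times 10^{6} \ \tfrac{\text{deaths}}{\text{year}} \times 50 \ \text{years} = 6 \times 10^{7} \ \text{deaths} = 60 \ \text{million}.
\]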
03:23
People online on social media have been coming up with all sorts of ways to not think about this problem. One person suggested the car should just swerve somehow in between the pedestrians -- (Laughter) and the bystander. Of course if that's what the car can do, that's what the car should do. We're interested in scenarios in which this is not possible. And my personal favorite was a suggestion by a blogger to have an eject button in the car that you press -- (Laughter) just before the car self-destructs. (Laughter)

03:59
So if we acknowledge that cars will have to make trade-offs on the road, how do we think about those trade-offs, and how do we decide? Well, maybe we should run a survey to find out what society wants, because ultimately, regulations and the law are a reflection of societal values.

04:20
So this is what we did. With my collaborators, Jean-François Bonnefon and Azim Shariff, we ran a survey in which we presented people with these types of scenarios. We gave them two options inspired by two philosophers: Jeremy Bentham and Immanuel Kant. Bentham says the car should follow utilitarian ethics: it should take the action that will minimize total harm -- even if that action will kill a bystander and even if that action will kill the passenger. Immanuel Kant says the car should follow duty-bound principles, like "Thou shalt not kill." So you should not take an action that explicitly harms a human being, and you should let the car take its course even if that's going to harm more people.
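To make the contrast concrete, here is a rough sketch of my own (not code from the study) of how the two philosophies turn into different decision rules: a utilitarian controller picks whichever action minimizes total harm, while a duty-bound controller refuses any action that actively harms someone and lets the car stay on its course.

```python
# Illustrative contrast between the two rules described above.
# The actions and harm counts are hypothetical; staying on course is the
# default, and any swerve counts as actively harming someone.

actions = {
    "stay on course":      {"harm": 5, "active": False},  # car continues into the pedestrians
    "swerve to bystander": {"harm": 1, "active": True},
    "swerve into wall":    {"harm": 1, "active": True},   # kills the passenger
}

def bentham(actions):
    # Utilitarian: minimize total harm, whoever it falls on.
    return min(actions, key=lambda a: actions[a]["harm"])

def kant(actions, default="stay on course"):
    # Duty-bound: never choose an action that explicitly harms a human being;
    # otherwise let the car take its course.
    permitted = [a for a, v in actions.items() if not v["active"]]
    return permitted[0] if permitted else default

print("Bentham picks:", bentham(actions))  # a swerve: 1 harmed instead of 5
print("Kant picks:", kant(actions))        # stay on course
```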
05:07
What do you think? Bentham or Kant? Here's what we found. Most people sided with Bentham. So it seems that people want cars to be utilitarian, minimize total harm, and that's what we should all do. Problem solved.

05:25
But there is a little catch. When we asked people whether they would purchase such cars, they said, "Absolutely not." (Laughter) They would like to buy cars that protect them at all costs, but they want everybody else to buy cars that minimize harm. (Laughter) We've seen this problem before. It's called a social dilemma.

05:51
And to understand the social dilemma, we have to go a little bit back in history. In the 1800s, English economist William Forster Lloyd published a pamphlet which describes the following scenario. You have a group of farmers -- English farmers -- who are sharing a common land for their sheep to graze. Now, if each farmer brings a certain number of sheep -- let's say three sheep -- the land will be rejuvenated, the farmers are happy, the sheep are happy, everything is good. Now, if one farmer brings one extra sheep, that farmer will do slightly better, and no one else will be harmed. But if every farmer made that individually rational decision, the land will be overrun, and it will be depleted to the detriment of all the farmers, and of course, to the detriment of the sheep.
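The dynamic Lloyd describes is easy to caricature in a few lines of code (a toy sketch of my own with invented numbers, not anything from the pamphlet or the talk): as long as adding one more sheep benefits a farmer individually, every farmer keeps adding, and the shared pasture's yield collapses for all of them.

```python
# Toy model of the commons: each farmer adds a sheep whenever it improves
# his own payoff, ignoring the cost spread over everyone else.
# All parameters are invented for illustration.

FARMERS = 10
CAPACITY = 30                  # sheep the land supports before degrading

def grass_per_sheep(total):
    # Yield per sheep falls off once the land is over capacity.
    return max(0.0, 1.0 - max(0, total - CAPACITY) / CAPACITY)

sheep = [3] * FARMERS          # everyone starts with three sheep
for _ in range(20):            # a few rounds of individually rational choices
    for i in range(FARMERS):
        total = sum(sheep)
        keep = sheep[i] * grass_per_sheep(total)
        add = (sheep[i] + 1) * grass_per_sheep(total + 1)
        if add > keep:         # better for me alone, so add a sheep
            sheep[i] += 1

total = sum(sheep)
payoff = sheep[0] * grass_per_sheep(total)
print(f"total sheep: {total}, yield per sheep: {grass_per_sheep(total):.2f}")
print(f"one farmer's payoff: {payoff:.2f} (vs 3.00 if everyone had stopped at three)")
```

Run it and each farmer ends up worse off than if all had kept to three sheep, which is exactly the tragedy described next.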
06:44
We see this problem in many places: in the difficulty of managing overfishing, or in reducing carbon emissions to mitigate climate change.

06:59
When it comes to the regulation of driverless cars, the common land now is basically public safety -- that's the common good -- and the farmers are the passengers or the car owners who are choosing to ride in those cars. And by making the individually rational choice of prioritizing their own safety, they may collectively be diminishing the common good, which is minimizing total harm.

07:30
It's called the tragedy of the commons, traditionally, but I think in the case of driverless cars, the problem may be a little bit more insidious, because there is not necessarily an individual human being making those decisions. So car manufacturers may simply program cars that will maximize safety for their clients, and those cars may learn automatically on their own that doing so requires slightly increasing risk for pedestrians. So to use the sheep metaphor, it's like we now have electric sheep that have a mind of their own. (Laughter) And they may go and graze even if the farmer doesn't know it. So this is what we may call the tragedy of the algorithmic commons, and it offers new types of challenges.
08:22
Typically, traditionally, we solve these types of social dilemmas using regulation, so either governments or communities get together, and they decide collectively what kind of outcome they want and what sort of constraints on individual behavior they need to implement. And then using monitoring and enforcement, they can make sure that the public good is preserved.

08:45
So why don't we just, as regulators, require that all cars minimize harm? After all, this is what people say they want. And more importantly, I can be sure that as an individual, if I buy a car that may sacrifice me in a very rare case, I'm not the only sucker doing that while everybody else enjoys unconditional protection.

09:09
In our survey, we did ask people whether they would support regulation, and here's what we found. First of all, people said no to regulation; and second, they said, "Well if you regulate cars to do this and to minimize total harm, I will not buy those cars." So ironically, by regulating cars to minimize harm, we may actually end up with more harm, because people may not opt into the safer technology even if it's much safer than human drivers.

09:42
I don't have the final answer to this riddle, but I think as a starting point, we need society to come together to decide what trade-offs we are comfortable with and to come up with ways in which we can enforce those trade-offs.

09:58
As a starting point, my brilliant students, Edmond Awad and Sohan Dsouza, built the Moral Machine website, which generates random scenarios at you -- basically a bunch of random dilemmas in a sequence where you have to choose what the car should do in a given scenario. And we vary the ages and even the species of the different victims.
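The Moral Machine itself is a web application, but the core loop described here, generating a random dilemma, recording which side the respondent spares and tallying the results, can be sketched in a few lines. This is a hypothetical toy of my own, not the site's actual code, and the character list and counts are invented.

```python
# Hypothetical toy version of the data-collection loop described above.
# Not the actual Moral Machine implementation.

import random
from collections import Counter

CHARACTERS = ["adult", "child", "elderly person", "dog", "cat"]

def random_dilemma(rng):
    """One dilemma: the car either goes straight or swerves, hitting that group."""
    return {
        "straight": [rng.choice(CHARACTERS) for _ in range(rng.randint(1, 4))],
        "swerve":   [rng.choice(CHARACTERS) for _ in range(rng.randint(1, 4))],
    }

def respond(dilemma, rng):
    """Stand-in for a human respondent; here the choice is made at random."""
    return rng.choice(["straight", "swerve"])

rng = random.Random(0)
spared = Counter()
for _ in range(1000):
    dilemma = random_dilemma(rng)
    choice = respond(dilemma, rng)                 # path the car takes
    other = "swerve" if choice == "straight" else "straight"
    spared.update(dilemma[other])                  # characters the choice spares

print(spared.most_common())                        # crude picture of who gets spared
```

Aggregated over millions of real responses rather than random ones, tallies like this are what give the early picture described next of which trade-offs people are comfortable with.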
10:23
So far we've collected over five million decisions by over one million people worldwide from the website. And this is helping us form an early picture of what trade-offs people are comfortable with and what matters to them -- even across cultures. But more importantly, doing this exercise is helping people recognize the difficulty of making those choices and that the regulators are tasked with impossible choices. And maybe this will help us as a society understand the kinds of trade-offs that will be implemented ultimately in regulation.

11:02
And indeed, I was very happy to hear that the first set of regulations that came from the Department of Transport -- announced last week -- included a 15-point checklist for all carmakers to provide, and number 14 was ethical consideration -- how are you going to deal with that.

11:23
We also have people reflect on their own decisions by giving them summaries of what they chose. I'll give you one example -- I'm just going to warn you that this is not your typical example, your typical user. This is the most sacrificed and the most saved character for this person. (Laughter) Some of you may agree with him, or her, we don't know. But this person also seems to slightly prefer passengers over pedestrians in their choices and is very happy to punish jaywalking. (Laughter)

12:09
So let's wrap up. We started with the question -- let's call it the ethical dilemma -- of what the car should do in a specific scenario: swerve or stay? But then we realized that the problem was a different one. It was the problem of how to get society to agree on and enforce the trade-offs they're comfortable with. It's a social dilemma.

12:29
In the 1940s, Isaac Asimov wrote his famous laws of robotics -- the three laws of robotics. A robot may not harm a human being, a robot may not disobey a human being, and a robot may not allow itself to come to harm -- in this order of importance. But after 40 years or so, and after so many stories pushing these laws to the limit, Asimov introduced the zeroth law, which takes precedence above all, and it's that a robot may not harm humanity as a whole. I don't know what this means in the context of driverless cars or any specific situation, and I don't know how we can implement it, but I think that by recognizing that the regulation of driverless cars is not only a technological problem but also a societal cooperation problem, I hope that we can at least begin to ask the right questions.

13:29
Thank you. (Applause)
