ABOUT THE SPEAKER
Will MacAskill - Moral philosopher, writer
Will MacAskill is a cofounder of the effective altruism movement, a philosophy that tries to answer the question: How can we do as much good as possible?

Why you should listen

Will MacAskill is the author of Doing Good Better and the cofounder of the organization Giving What We Can and the social-impact-career advice project 80,000 Hours. He is a trustee of the Centre for Effective Altruism. As he writes: "I'm an associate professor of moral philosophy at Oxford University. I helped to create the philosophy of effective altruism: the use of evidence and reason to figure out how to do the most good we can.

"While growing up, I tried my best to do good things: I donated to high street charities, I worked at an old folks' home, I taught English in Ethiopia. But when I was a graduate student I realized that I'd never asked myself the question of whether these were the truly best ways of improving the world. So, being a nerd, I started doing some research to try to find out what activities would do the most to make the world a better place. Ten years later, I'm still asking that question.

"I'm still far from certain about what the best ways of doing good are, but I have learned that each and every one of us can do a tremendous amount to make the world a better place, whether that's by donating to buy bednets to protect children from malaria, campaigning to improve living conditions for animals in factory farms or working to ensure that the benefits of new technology are harnessed while avoiding the risks."

TED2018

Will MacAskill: What are the most important moral problems of our time?

2,112,093 views

Of all the problems facing humanity, which should we focus on solving first? In a compelling talk about how to make the world better, moral philosopher Will MacAskill provides a framework for answering this question based on the philosophy of "effective altruism" -- and shares ideas for taking on three pressing global issues.

00:12
This is a graph that represents the economic history of human civilization. [World GDP per capita over the last 200,000 years]

00:23
There's not much going on, is there? For the vast majority of human history, pretty much everyone lived on the equivalent of one dollar per day, and not much changed.

00:36
But then, something extraordinary happened: the Scientific and Industrial Revolutions. And the basically flat graph you just saw transforms into this.

00:50
What this graph means is that, in terms of power to change the world, we live in an unprecedented time in human history, and I believe our ethical understanding hasn't yet caught up with this fact.

01:03
The Scientific and Industrial Revolutions transformed both our understanding of the world and our ability to alter it. What we need is an ethical revolution so that we can work out how we use this tremendous bounty of resources to improve the world.

01:22
For the last 10 years, my colleagues and I have developed a philosophy and research program that we call effective altruism. It tries to respond to these radical changes in our world, and uses evidence and careful reasoning to try to answer this question: How can we do the most good?

01:44
Now, there are many issues you've got to address if you want to tackle this problem: whether to do good through your charity or your career or your political engagement, what programs to focus on, who to work with.

01:57
But what I want to talk about is what I think is the most fundamental problem. Of all the many problems that the world faces, which should we be focused on trying to solve first?

02:10
Now, I'm going to give you a framework for thinking about this question, and the framework is very simple. A problem is a higher priority the bigger, the more easily solvable and the more neglected it is.

02:24
Bigger is better, because we've got more to gain if we do solve the problem. More easily solvable is better, because we can solve the problem with less time or money. And most subtly, more neglected is better, because of diminishing returns: the more resources that have already been invested into solving a problem, the harder it will be to make additional progress.

02:50
Now, the key thing that I want to leave with you is this framework, so that you can think for yourself what the highest global priorities are.

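To make the framework concrete, here is a minimal sketch in Python (an editorial illustration, not part of the talk; the multiplicative scoring rule and all example numbers are assumptions):

    # Minimal sketch of the scale / solvability / neglectedness heuristic.
    # The multiplicative rule and the 1-10 example scores are illustrative
    # assumptions, not figures from the talk.

    def priority_score(scale, solvability, neglectedness):
        """Higher is better: bigger, more easily solvable, more neglected."""
        return scale * solvability * neglectedness

    # Hypothetical problems scored on rough 1-10 scales.
    problems = {
        "problem A": priority_score(scale=9, solvability=3, neglectedness=8),
        "problem B": priority_score(scale=6, solvability=8, neglectedness=2),
        "problem C": priority_score(scale=4, solvability=9, neglectedness=9),
    }

    for name, score in sorted(problems.items(), key=lambda kv: -kv[1]):
        print(f"{name}: {score}")

Multiplying the three factors is one common way to capture the idea that a problem only scores highly if it does well on all three dimensions at once.
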
02:59
But I and others in the effective altruism community have converged on three moral issues that we believe are unusually important and score unusually well in this framework.

03:11
First is global health. This is supersolvable. We have an amazing track record in global health. Rates of death from measles, malaria, diarrheal disease are down by over 70 percent. And in 1980, we eradicated smallpox. I estimate we thereby saved over 60 million lives. That's more lives saved than if we'd achieved world peace in that same time period. On our current best estimates, we can save a life by distributing long-lasting insecticide-treated bed nets for just a few thousand dollars. This is an amazing opportunity.

03:55
The second big priority is factory farming. This is superneglected. There are 50 billion land animals used every year for food, and the vast majority of them are factory farmed, living in conditions of horrific suffering. They're probably among the worst-off creatures on this planet, and in many cases, we could significantly improve their lives for just pennies per animal. Yet this is hugely neglected. There are 3,000 times more animals in factory farms than there are stray pets, and yet factory farming gets one fiftieth of the philanthropic funding. That means additional resources in this area could have a truly transformative impact.

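As a rough back-of-the-envelope reading of those two figures (an editorial illustration; it assumes the "one fiftieth" comparison is with philanthropy directed at stray and companion animals, which the talk does not spell out):

    # Rough arithmetic on the two figures quoted above. Treating the funding
    # comparison as being against the stray/companion-animal cause is an
    # assumption for illustration only.

    animal_ratio = 3000      # factory-farmed animals per stray pet
    funding_ratio = 1 / 50   # factory-farming funding relative to the comparison cause

    per_animal_gap = animal_ratio / funding_ratio
    print(per_animal_gap)    # 150000.0 -> roughly 150,000x less funding per animal
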
04:39
Now the third area is the one that I want to focus on the most, and that's the category of existential risks: events like a nuclear war or a global pandemic that could permanently derail civilization or even lead to the extinction of the human race. Let me explain why I think this is such a big priority in terms of this framework.

05:02
First, size. How bad would it be if there were a truly existential catastrophe? Well, it would involve the deaths of all seven billion people on this planet, and that means you and everyone you know and love. That's just a tragedy of unimaginable size.

05:25
But then, what's more, it would also mean the curtailment of humanity's future potential, and I believe that humanity's potential is vast. The human race has been around for about 200,000 years, and if she lived as long as a typical mammalian species, she would last for about two million years. If the human race were a single individual, she would be just 10 years old today.

05:53
And what's more, the human race isn't a typical mammalian species. There's no reason why, if we're careful, we should die off after only two million years. The earth will remain habitable for 500 million years to come. And if someday we took to the stars, civilization could continue for billions more.

06:16
So I think the future is going to be really big, but is it going to be good? Is the human race even really worth preserving? Well, we hear all the time about how things have been getting worse, but I think that when we take the long view, things have been getting radically better.

06:37
Here, for example, is life expectancy over time. Here's the proportion of people not living in extreme poverty. Here's the number of countries over time that have decriminalized homosexuality. Here's the number of countries over time that have become democratic.

06:55
Then, when we look to the future, there could be so much more to gain again. We'll be so much richer, we can solve so many problems that are intractable today.

07:05
So if this is kind of a graph of how humanity has progressed in terms of total human flourishing over time, well, this is what we would expect future progress to look like. It's vast. Here, for example, is where we would expect no one to live in extreme poverty. Here is where we would expect everyone to be better off than the richest person alive today. Perhaps here is where we would discover the fundamental natural laws that govern our world. Perhaps here is where we discover an entirely new form of art, a form of music we currently lack the ears to hear.

07:45
And this is just the next few thousand years. Once we think past that, well, we can't even imagine the heights that human accomplishment might reach.

07:54
So the future could be very big and it could be very good, but are there ways we could lose this value? And sadly, I think there are.

08:02
The last two centuries brought tremendous technological progress, but they also brought the global risks of nuclear war and the possibility of extreme climate change. When we look to the coming centuries, we should expect to see the same pattern again. And we can see some radically powerful technologies on the horizon.

08:20
Synthetic biology might give us the power to create viruses of unprecedented contagiousness and lethality. Geoengineering might give us the power to dramatically alter the earth's climate. Artificial intelligence might give us the power to create intelligent agents with abilities greater than our own.

08:40
Now, I'm not saying that any of these risks are particularly likely, but when there's so much at stake, even small probabilities matter a great deal.

08:49
Imagine if you're getting on a plane and you're kind of nervous, and the pilot reassures you by saying, "There's only a one-in-a-thousand chance of crashing. Don't worry." Would you feel reassured?

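The plane example is an expected-value point; a quick calculation (an editorial illustration, not from the talk) shows why a small probability can still carry enormous stakes:

    # Illustrative expected-value arithmetic: a small probability applied to a
    # catastrophe affecting everyone alive still implies a huge expected loss.

    p_catastrophe = 1 / 1000         # the pilot's "one-in-a-thousand" chance
    lives_at_stake = 7_000_000_000   # roughly everyone alive today, as in the talk

    expected_loss = p_catastrophe * lives_at_stake
    print(expected_loss)             # 7000000.0 -> an expected loss of about 7 million lives
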
09:04
For these reasons, I think that preserving the future of humanity is among the most important problems that we currently face.

09:12
But let's keep using this framework. Is this problem neglected? And I think the answer is yes, and that's because problems that affect future generations are often hugely neglected.

09:26
Why? Because future people don't participate in markets today. They don't have a vote. It's not like there's a lobby representing the interests of those born in 2300 AD. They don't get to influence the decisions we make today. They're voiceless.

09:46
And that means we still spend a paltry amount on these issues: nuclear nonproliferation, geoengineering, biorisk, artificial intelligence safety. All of these receive only a few tens of millions of dollars of philanthropic funding every year. That's tiny compared to the 390 billion dollars that's spent on US philanthropy in total.

10:13
The final aspect of our framework, then: Is this solvable? I believe it is. You can contribute with your money, your career or your political engagement.

10:28
With your money, you can support organizations that focus on these risks, like the Nuclear Threat Initiative, which campaigns to take nuclear weapons off hair-trigger alert, or the Blue Ribbon Panel, which develops policy to minimize the damage from natural and man-made pandemics, or the Center for Human-Compatible AI, which does technical research to ensure that AI systems are safe and reliable.

10:52
With your political engagement, you can vote for candidates that care about these risks, and you can support greater international cooperation.

11:01
And then with your career, there is so much that you can do. Of course, we need scientists and policymakers and organization leaders, but just as importantly, we also need accountants and managers and assistants to work in these organizations that are tackling these problems.

11:20
Now, the research program of effective altruism is still in its infancy, and there's still a huge amount that we don't know. But even with what we've learned so far, we can see that by thinking carefully and by focusing on those problems that are big, solvable and neglected, we can make a truly tremendous difference to the world for thousands of years to come.

11:47
Thank you.

11:49
(Applause)
