TEDGlobal>NYC

Zeynep Tufekci: We're building a dystopia just to make people click on ads

Filmed:
2,316,686 views

We're building an artificial intelligence-powered dystopia, one click at a time, says techno-sociologist Zeynep Tufekci. In an eye-opening talk, she details how the same algorithms companies like Facebook, Google and Amazon use to get you to click on ads are also used to organize your access to political and social information. And the machines aren't even the real threat. What we need to understand is how the powerful might use AI to control us -- and what we can do in response.

Zeynep Tufekci - Techno-sociologist

So when people voice fears
of artificial intelligence,
00:12
very often, they invoke images
of humanoid robots run amok.
00:16
You know? Terminator?
00:20
You know, that might be
something to consider,
00:22
but that's a distant threat.
00:24
Or, we fret about digital surveillance
00:26
with metaphors from the past.
00:30
"1984," George Orwell's "1984,"
00:31
it's hitting the bestseller lists again.
00:34
It's a great book,
00:37
but it's not the correct dystopia
for the 21st century.
00:39
What we need to fear most
00:44
is not what artificial intelligence
will do to us on its own,
00:45
but how the people in power
will use artificial intelligence
00:50
to control us and to manipulate us
00:55
in novel, sometimes hidden,
00:57
subtle and unexpected ways.
01:01
Much of the technology
01:04
that threatens our freedom
and our dignity in the near-term future
01:06
is being developed by companies
01:10
in the business of capturing
and selling our data and our attention
01:12
to advertisers and others:
01:17
Facebook, Google, Amazon,
01:19
Alibaba, Tencent.
01:22
Now, artificial intelligence has started
bolstering their business as well.
01:26
And it may seem
like artificial intelligence
01:31
is just the next thing after online ads.
01:33
It's not.
01:36
It's a jump in category.
01:37
It's a whole different world,
01:40
and it has great potential.
01:42
It could accelerate our understanding
of many areas of study and research.
01:45
But to paraphrase
a famous Hollywood philosopher,
01:53
"With prodigious potential
comes prodigious risk."
01:56
Now let's look at a basic fact
of our digital lives, online ads.
02:01
Right? We kind of dismiss them.
02:05
They seem crude, ineffective.
02:08
We've all had the experience
of being followed on the web
02:10
by an ad based on something
we searched or read.
02:14
You know, you look up a pair of boots
02:17
and for a week, those boots are following
you around everywhere you go.
02:18
Even after you succumb and buy them,
they're still following you around.
02:22
We're kind of inured to that kind
of basic, cheap manipulation.
02:26
We roll our eyes and we think,
"You know what? These things don't work."
02:29
Except, online,
02:33
the digital technologies are not just ads.
02:35
Now, to understand that,
let's think of a physical world example.
02:40
You know how, at the checkout counters
at supermarkets, near the cashier,
02:43
there's candy and gum
at the eye level of kids?
02:48
That's designed to make them
whine at their parents
02:52
just as the parents
are about to sort of check out.
02:56
Now, that's a persuasion architecture.
03:00
It's not nice, but it kind of works.
03:03
That's why you see it
in every supermarket.
03:06
Now, in the physical world,
03:08
such persuasion architectures
are kind of limited,
03:10
because you can only put
so many things by the cashier. Right?
03:12
And the candy and gum,
it's the same for everyone,
03:17
even though it mostly works
03:22
only for people who have
whiny little humans beside them.
03:23
In the physical world,
we live with those limitations.
03:29
In the digital world, though,
03:34
persuasion architectures
can be built at the scale of billions
03:36
and they can target, infer, understand
03:41
and be deployed at individuals
03:45
one by one
03:48
by figuring out your weaknesses,
03:49
and they can be sent
to everyone's private phone screen,
03:52
so it's not visible to us.
03:57
And that's different.
03:59
And that's just one of the basic things
that artificial intelligence can do.
04:01
Now, let's take an example.
04:04
Let's say you want to sell
plane tickets to Vegas. Right?
04:06
So in the old world, you could think
of some demographics to target
04:08
based on experience
and what you can guess.
04:12
You might try to advertise to, oh,
04:15
men between the ages of 25 and 35,
04:18
or people who have
a high limit on their credit card,
04:20
or retired couples. Right?
04:24
That's what you would do in the past.
04:26
With big data and machine learning,
04:28
that's not how it works anymore.
04:31
So to imagine that,
04:33
think of all the data
that Facebook has on you:
04:35
every status update you ever typed,
04:39
every Messenger conversation,
04:41
every place you logged in from,
04:44
all your photographs
that you uploaded there.
04:48
If you start typing something
and change your mind and delete it,
04:51
Facebook keeps those
and analyzes them, too.
04:55
Increasingly, it tries
to match you with your offline data.
04:59
It also purchases
a lot of data from data brokers.
05:03
It could be everything
from your financial records
05:06
to a good chunk of your browsing history.
05:09
Right? In the US,
such data is routinely collected,
05:12
collated and sold.
05:17
In Europe, they have tougher rules.
05:20
So what happens then is,
05:23
by churning through all that data,
these machine-learning algorithms --
05:26
that's why they're called
learning algorithms --
05:30
they learn to understand
the characteristics of people
05:33
who purchased tickets to Vegas before.
05:38
When they learn this from existing data,
05:41
they also learn
how to apply this to new people.
05:45
So if they're presented with a new person,
05:49
they can classify whether that person
is likely to buy a ticket to Vegas or not.
05:52
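In other words, it is a learned classifier. Here is a minimal sketch of that idea in Python; the features and data are invented for illustration and have nothing to do with Facebook's actual systems.

```python
# Hypothetical sketch: learn from past buyers, then score new people.
# Features and data are made up for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [flight_searches, follows_casino_pages, late_night_activity]
past_users = np.array([
    [12, 1, 30],   # bought a Vegas ticket
    [0,  0,  2],   # did not
    [8,  1, 25],   # bought
    [1,  0,  5],   # did not
])
bought_ticket = np.array([1, 0, 1, 0])

model = LogisticRegression()
model.fit(past_users, bought_ticket)

# A brand-new person the system has never seen:
new_person = np.array([[10, 1, 20]])
print(model.predict_proba(new_person)[0, 1])  # estimated probability they'd buy
```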
Fine. You're thinking,
an offer to buy tickets to Vegas.
05:57
I can ignore that.
06:03
But the problem isn't that.
06:04
The problem is,
06:06
we no longer really understand
how these complex algorithms work.
06:08
We don't understand
how they're doing this categorization.
06:12
It's giant matrices,
thousands of rows and columns,
06:16
maybe millions of rows and columns,
06:20
and not the programmers
06:23
and not anybody who looks at it,
06:26
even if you have all the data,
06:29
understands anymore
how exactly it's operating
06:30
any more than you'd know
what I was thinking right now
06:35
if you were shown
a cross section of my brain.
06:39
It's like we're not programming anymore,
06:44
we're growing intelligence
that we don't truly understand.
06:46
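"Giant matrices" here is meant literally: a trained model is arrays of numbers, and reading those numbers tells you almost nothing about why a particular decision came out. A toy sketch with a deliberately tiny, randomly initialized network:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy two-layer network. Real models have thousands or millions
# of rows and columns in these matrices.
W1 = rng.standard_normal((3, 64))   # input features -> hidden units
W2 = rng.standard_normal((64, 1))   # hidden units -> score

def score(person_features):
    hidden = np.maximum(0, person_features @ W1)  # ReLU nonlinearity
    return float(hidden @ W2)

print(score(np.array([10.0, 1.0, 20.0])))
# Staring at the raw weights does not explain why that number came out.
print(W1.shape, W2.shape)
```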
And these things only work
if there's an enormous amount of data,
06:52
so they also encourage
deep surveillance on all of us
06:56
so that the machine learning
algorithms can work.
07:01
That's why Facebook wants
to collect all the data it can about you.
07:04
The algorithms work better.
07:07
So let's push that Vegas example a bit.
07:08
What if the system
that we do not understand
07:11
was picking up that it's easier
to sell Vegas tickets
07:16
to people who are bipolar
and about to enter the manic phase?
07:21
Such people tend to become
overspenders, compulsive gamblers.
07:25
They could do this, and you'd have no clue
that's what they were picking up on.
07:31
I gave this example
to a bunch of computer scientists once
07:35
and afterwards, one of them came up to me.
07:39
He was troubled and he said,
"That's why I couldn't publish it."
07:41
I was like, "Couldn't publish what?"
07:45
He had tried to see whether you can indeed
figure out the onset of mania
07:47
from social media posts
before clinical symptoms,
07:53
and it had worked,
07:56
and it had worked very well,
07:58
and he had no idea how it worked
or what it was picking up on.
08:00
Now, the problem isn't solved
if he doesn't publish it,
08:06
because there are already companies
08:11
that are developing
this kind of technology,
08:13
and a lot of the stuff
is just off the shelf.
08:15
This is not very difficult anymore.
08:19
Do you ever go on YouTube
meaning to watch one video
08:21
and an hour later you've watched 27?
08:25
You know how YouTube
has this column on the right
08:28
that says, "Up next"
08:31
and it autoplays something?
08:33
It's an algorithm
08:35
picking what it thinks
that you might be interested in
08:36
and maybe not find on your own.
08:40
It's not a human editor.
08:41
It's what algorithms do.
08:43
It picks up on what you have watched
and what people like you have watched,
08:44
and infers that that must be
what you're interested in,
08:49
what you want more of,
08:53
and just shows you more.
08:54
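The "people like you have watched" logic is, at its core, collaborative filtering. A heavily simplified sketch with invented viewing data (YouTube's real system is proprietary and far more elaborate, and it optimizes for watch time):

```python
import numpy as np

# Rows: users, columns: videos. 1 = watched. Invented data.
watch_matrix = np.array([
    [1, 1, 0, 0],   # you
    [1, 1, 1, 0],   # someone like you
    [0, 0, 1, 1],   # someone not like you
], dtype=float)

you = watch_matrix[0]
others = watch_matrix[1:]

# Cosine similarity between you and every other user.
sims = others @ you / (np.linalg.norm(others, axis=1) * np.linalg.norm(you) + 1e-9)

# Score unseen videos by what similar users watched; queue up the top one.
scores = sims @ others
scores[you > 0] = -np.inf          # don't re-recommend what you've already seen
print(int(np.argmax(scores)))      # "Up next": video index 2
```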
It sounds like a benign
and useful feature,
08:56
except when it isn't.
08:59
So in 2016, I attended rallies
of then-candidate Donald Trump
09:01
to study as a scholar
the movement supporting him.
09:09
I study social movements,
so I was studying it, too.
09:13
And then I wanted to write something
about one of his rallies,
09:16
so I watched it a few times on YouTube.
09:20
YouTube started recommending to me
09:23
and autoplaying to me
white supremacist videos
09:26
in increasing order of extremism.
09:30
If I watched one,
09:33
it served up one even more extreme
09:35
and autoplayed that one, too.
09:38
If you watch Hillary Clinton
or Bernie Sanders content,
09:40
YouTube recommends
and autoplays conspiracy left,
09:44
and it goes downhill from there.
09:49
Well, you might be thinking,
this is politics, but it's not.
09:52
This isn't about politics.
09:55
This is just the algorithm
figuring out human behavior.
09:56
I once watched a video
about vegetarianism on YouTube
09:59
and YouTube recommended
and autoplayed a video about being vegan.
10:04
It's like you're never
hardcore enough for YouTube.
10:09
(Laughter)
10:12
So what's going on?
10:14
Now, YouTube's algorithm is proprietary,
10:16
but here's what I think is going on.
10:20
The algorithm has figured out
10:23
that if you can entice people
10:25
into thinking that you can
show them something more hardcore,
10:29
they're more likely to stay on the site
10:32
watching video after video
going down that rabbit hole
10:35
while Google serves them ads.
10:39
Now, with nobody minding
the ethics of the store,
10:43
these sites can profile people
10:47
who are Jew haters,
10:53
who think that Jews are parasites
10:56
and who have such explicit
anti-Semitic content,
11:00
and let you target them with ads.
11:06
They can also mobilize algorithms
11:09
to find for you look-alike audiences,
11:12
people who do not have such explicit
anti-Semitic content on their profile
11:15
but who the algorithm detects
may be susceptible to such messages,
11:21
and lets you target them with ads, too.
11:27
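"Look-alike audience" expansion can be pictured as a similarity search: start from the people who explicitly matched, then add everyone whose behavioral profile sits close to theirs. A rough sketch with invented features, not Facebook's or Google's actual method:

```python
import numpy as np

# Invented behavioral feature vectors.
seed_audience = np.array([      # people who explicitly matched the target
    [0.9, 0.1, 0.8],
    [0.8, 0.2, 0.7],
])
everyone_else = np.array([
    [0.85, 0.15, 0.75],   # no explicit signal, but very similar behavior
    [0.10, 0.90, 0.05],   # not similar at all
])

centroid = seed_audience.mean(axis=0)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Anyone close enough to the seed's centroid gets added to the ad audience.
lookalikes = [i for i, person in enumerate(everyone_else)
              if cosine(person, centroid) > 0.95]
print(lookalikes)   # -> [0]
```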
Now, this may sound
like an implausible example,
11:30
but this is real.
11:33
ProPublica investigated this
11:35
and found that you can indeed
do this on Facebook,
11:37
and Facebook helpfully
offered up suggestions
11:41
on how to broaden that audience.
11:43
BuzzFeed tried it for Google,
and very quickly they found,
11:46
yep, you can do it on Google, too.
11:49
And it wasn't even expensive.
11:51
The ProPublica reporter
spent about 30 dollars
11:53
to target this category.
11:57
So last year, Donald Trump's
social media manager disclosed
12:02
that they were using Facebook dark posts
to demobilize people,
12:07
not to persuade them,
12:13
but to convince them not to vote at all.
12:14
And to do that,
they targeted specifically,
12:18
for example, African-American men
in key cities like Philadelphia,
12:22
and I'm going to read
exactly what he said.
12:26
I'm quoting.
12:28
They were using "nonpublic posts
12:29
whose viewership the campaign controls
12:32
so that only the people
we want to see it see it.
12:35
We modeled this.
12:38
It will dramatically affect her ability
to turn these people out."
12:40
What's in those dark posts?
12:45
We have no idea.
12:48
Facebook won't tell us.
12:50
So Facebook also algorithmically
arranges the posts
12:52
that your friends put on Facebook,
or the pages you follow.
12:56
It doesn't show you
everything chronologically.
13:00
It puts the order in the way
that the algorithm thinks will entice you
13:02
to stay on the site longer.
13:07
Now, so this has a lot of consequences.
13:11
You may be thinking
somebody is snubbing you on Facebook.
13:14
The algorithm may never
be showing your post to them.
13:18
The algorithm is prioritizing
some of them and burying the others.
13:22
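The difference between a chronological feed and the ranked feed described here amounts to swapping the sort key. A toy sketch; "predicted engagement" stands in for whatever signals the proprietary ranking model actually uses:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    hours_old: float
    predicted_engagement: float   # a model's guess at how long you'll linger

posts = [
    Post("close friend", hours_old=1.0, predicted_engagement=0.2),
    Post("page you follow", hours_old=8.0, predicted_engagement=0.9),
    Post("acquaintance", hours_old=0.5, predicted_engagement=0.05),
]

chronological = sorted(posts, key=lambda p: p.hours_old)
ranked = sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

print([p.author for p in chronological])  # newest first
print([p.author for p in ranked])         # engagement first; low scorers get buried
```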
Experiments show
13:29
that what the algorithm picks to show you
can affect your emotions.
13:30
But that's not all.
13:36
It also affects political behavior.
13:38
So in 2010, in the midterm elections,
13:41
Facebook did an experiment
on 61 million people in the US
13:46
that was disclosed after the fact.
13:51
So some people were shown,
"Today is election day,"
13:53
the simpler one,
13:57
and some people were shown
the one with that tiny tweak
13:58
with those little thumbnails
14:02
of your friends who clicked on "I voted."
14:04
This simple tweak.
14:09
OK? So the pictures were the only change,
14:11
and that post shown just once
14:15
turned out an additional 340,000 voters
14:19
in that election,
14:25
according to this research
14:26
as confirmed by the voter rolls.
14:28
A fluke? No.
14:32
Because in 2012,
they repeated the same experiment.
14:34
And that time,
14:40
that civic message shown just once
14:42
turned out an additional 270,000 voters.
14:45
For reference, the 2016
US presidential election
14:51
was decided by about 100,000 votes.
14:56
Now, Facebook can also
very easily infer what your politics are,
15:01
even if you've never
disclosed them on the site.
15:06
Right? These algorithms
can do that quite easily.
15:08
What if a platform with that kind of power
15:11
decides to turn out supporters
of one candidate over the other?
15:15
How would we even know about it?
15:21
Now, we started from someplace
seemingly innocuous --
15:25
online ads following us around --
15:29
and we've landed someplace else.
15:31
As a public and as citizens,
15:35
we no longer know
if we're seeing the same information
15:37
or what anybody else is seeing,
15:41
and without a common basis of information,
15:43
little by little,
15:46
public debate is becoming impossible,
15:47
and we're just at
the beginning stages of this.
15:51
These algorithms can quite easily infer
15:54
things like your ethnicity,
15:57
religious and political views,
personality traits,
16:00
intelligence, happiness,
use of addictive substances,
16:03
parental separation, age and gender,
16:06
just from Facebook likes.
16:09
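Mechanically, these inferences are the same kind of supervised classification as the Vegas example: train on people whose traits are already known, then apply the model to everyone else's likes. A toy sketch with invented data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Rows: users, columns: whether they liked each of four pages. Invented data.
likes = np.array([
    [1, 0, 1, 0],
    [1, 0, 0, 0],
    [0, 1, 0, 1],
    [0, 1, 1, 1],
])
# A sensitive trait known for these users (say, from a survey).
trait = np.array([1, 1, 0, 0])

model = LogisticRegression().fit(likes, trait)

# The trait can now be guessed for someone who never disclosed it.
new_user_likes = np.array([[1, 0, 1, 1]])
print(model.predict_proba(new_user_likes)[0, 1])  # a probabilistic guess, not certainty
```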
These algorithms can identify protesters
16:13
even if their faces
are partially concealed.
16:17
These algorithms may be able
to detect people's sexual orientation
16:21
just from their dating profile pictures.
16:28
Now, these are probabilistic guesses,
16:33
so they're not going
to be 100 percent right,
16:36
but I don't see the powerful resisting
the temptation to use these technologies
16:39
just because there are
some false positives,
16:44
which will of course create
a whole other layer of problems.
16:46
Imagine what a state can do
16:49
with the immense amount of data
it has on its citizens.
16:52
China is already using
face detection technology
16:56
to identify and arrest people.
17:01
And here's the tragedy:
17:05
we're building this infrastructure
of surveillance authoritarianism
17:07
merely to get people to click on ads.
17:13
And this won't be
Orwell's authoritarianism.
17:17
This isn't "1984."
17:19
Now, if authoritarianism
is using overt fear to terrorize us,
17:21
we'll all be scared, but we'll know it,
17:26
we'll hate it and we'll resist it.
17:29
But if the people in power
are using these algorithms
17:32
to quietly watch us,
17:37
to judge us and to nudge us,
17:40
to predict and identify
the troublemakers and the rebels,
17:43
to deploy persuasion
architectures at scale
17:47
and to manipulate individuals one by one
17:51
using their personal, individual
weaknesses and vulnerabilities,
17:56
and if they're doing it at scale
18:02
through our private screens
18:06
so that we don't even know
18:07
what our fellow citizens
and neighbors are seeing,
18:09
that authoritarianism
will envelop us like a spider's web
18:13
and we may not even know we're in it.
18:18
So Facebook's market capitalization
18:22
is approaching half a trillion dollars.
18:25
It's because it works great
as a persuasion architecture.
18:28
But the structure of that architecture
18:33
is the same whether you're selling shoes
18:36
or whether you're selling politics.
18:39
The algorithms do not know the difference.
18:42
The same algorithms set loose upon us
18:46
to make us more pliable for ads
18:49
are also organizing our political,
personal and social information flows,
18:52
and that's what's got to change.
18:59
Now, don't get me wrong,
19:02
we use digital platforms
because they provide us with great value.
19:04
I use Facebook to keep in touch
with friends and family around the world.
19:09
I've written about how crucial
social media is for social movements.
19:14
I have studied how
these technologies can be used
19:19
to circumvent censorship around the world.
19:22
But it's not that the people who run,
you know, Facebook or Google
19:27
are maliciously and deliberately trying
19:33
to make the country
or the world more polarized
19:36
and encourage extremism.
19:40
I read the many
well-intentioned statements
19:43
that these people put out.
19:47
But it's not the intent or the statements
people in technology make that matter,
19:51
it's the structures
and business models they're building.
19:57
And that's the core of the problem.
20:02
Either Facebook is a giant con
of half a trillion dollars
20:04
and ads don't work on the site,
20:10
it doesn't work
as a persuasion architecture,
20:12
or its power of influence
is of great concern.
20:14
It's either one or the other.
20:20
It's similar for Google, too.
20:22
So what can we do?
20:24
This needs to change.
20:27
Now, I can't offer a simple recipe,
20:29
because we need to restructure
20:31
the whole way our
digital technology operates.
20:34
Everything from the way
technology is developed
20:37
to the way the incentives,
economic and otherwise,
20:41
are built into the system.
20:45
We have to face and try to deal with
20:48
the lack of transparency
created by the proprietary algorithms,
20:51
the structural challenge
of machine learning's opacity,
20:56
all this indiscriminate data
that's being collected about us.
21:00
We have a big task in front of us.
21:05
We have to mobilize our technology,
21:08
our creativity
21:11
and yes, our politics
21:13
so that we can build
artificial intelligence
21:16
that supports us in our human goals
21:18
but that is also constrained
by our human values.
21:22
And I understand this won't be easy.
21:27
We might not even easily agree
on what those terms mean.
21:30
But if we take seriously
21:34
how these systems that we
depend on for so much operate,
21:38
I don't see how we can postpone
this conversation anymore.
21:44
These structures
21:49
are organizing how we function
21:51
and they're controlling
21:55
what we can and we cannot do.
21:58
And many of these ad-financed platforms,
22:00
they boast that they're free.
22:03
In this context, it means
that we are the product that's being sold.
22:04
We need a digital economy
22:10
where our data and our attention
22:13
is not for sale to the highest-bidding
authoritarian or demagogue.
22:17
(Applause)
22:23
So to go back to
that Hollywood paraphrase,
22:30
we do want the prodigious potential
22:33
of artificial intelligence
and digital technology to blossom,
22:37
but for that, we must face
this prodigious menace,
22:41
open-eyed and now.
22:46
Thank you.
22:48
(Applause)
22:49


About the speaker:

Zeynep Tufekci - Techno-sociologist
Techno-sociologist Zeynep Tufekci asks big questions about our societies and our lives, as both algorithms and digital connectivity spread.

Why you should listen

We've entered an era of digital connectivity and machine intelligence. Complex algorithms are increasingly used to make consequential decisions about us. Many of these decisions are subjective and have no right answer: who should be hired, fired or promoted; what news should be shown to whom; which of your friends do you see updates from; which convict should be paroled. With increasing use of machine learning in these systems, we often don't even understand how exactly they are making these decisions. Zeynep Tufekci studies what this historic transition means for culture, markets, politics and personal life.

Tufekci is a contributing opinion writer at the New York Times, an associate professor at the School of Information and Library Science at the University of North Carolina, Chapel Hill, and a faculty associate at Harvard's Berkman Klein Center for Internet and Society.

Her book, Twitter and Tear Gas: The Power and Fragility of Networked Protest, was published in 2017 by Yale University Press. Her next book, from Penguin Random House, will be about algorithms that watch, judge and nudge us.
