
TED@BCG Milan

Margaret Mitchell: How we can build AI to help humans, not hurt us

Filmed:
1,029,907 views

As a research scientist at Google, Margaret Mitchell helps develop computers that can communicate about what they see and understand. She tells a cautionary tale about the gaps, blind spots and biases we subconsciously encode into AI -- and asks us to consider what the technology we create today will mean for tomorrow. "All that we see now is a snapshot in the evolution of artificial intelligence," Mitchell says. "If we want AI to evolve in a way that helps humans, then we need to define the goals and strategies that enable that path now."

Margaret Mitchell - AI research scientist
Margaret Mitchell is a senior research scientist in Google's Research & Machine Intelligence group, working on artificial intelligence.

I work on helping computers
communicate about the world around us.
00:13
There are a lot of ways to do this,
00:17
and I like to focus on helping computers
00:19
to talk about what they see
and understand.
00:22
Given a scene like this,
00:25
a modern computer-vision algorithm
00:27
can tell you that there's a woman
and there's a dog.
00:29
It can tell you that the woman is smiling.
00:32
It might even be able to tell you
that the dog is incredibly cute.
00:34
I work on this problem
00:38
thinking about how humans
understand and process the world.
00:40
The thoughts, memories and stories
00:45
that a scene like this
might evoke for humans.
00:48
All the interconnections
of related situations.
00:51
Maybe you've seen
a dog like this one before,
00:55
or you've spent time
running on a beach like this one,
00:58
and that further evokes thoughts
and memories of a past vacation,
01:01
past trips to the beach,
01:06
times spent running around
with other dogs.
01:08
One of my guiding principles
is that by helping computers to understand
01:11
what it's like to have these experiences,
01:16
to understand what we share
and believe and feel,
01:19
then we're in a great position
to start evolving computer technology
01:26
in a way that's complementary
with our own experiences.
01:30
So, digging more deeply into this,
01:35
a few years ago I began working on helping
computers to generate human-like stories
01:38
from sequences of images.
01:44
So, one day,
01:47
I was working with my computer to ask it
what it thought about a trip to Australia.
01:49
It took a look at the pictures,
and it saw a koala.
01:54
It didn't know what the koala was,
01:58
but it said it thought
it was an interesting-looking creature.
01:59
Then I shared with it a sequence of images
about a house burning down.
02:04
It took a look at the images and it said,
02:09
"This is an amazing view!
This is spectacular!"
02:13
It sent chills down my spine.
02:17
It saw a horrible, life-changing
and life-destroying event
02:20
and thought it was something positive.
02:25
I realized that it recognized
the contrast,
02:27
the reds, the yellows,
02:31
and thought it was something
worth remarking on positively.
02:34
And part of why it was doing this
02:37
was because most
of the images I had given it
02:39
were positive images.
02:42
That's because people
tend to share positive images
02:44
when they talk about their experiences.
02:48
When was the last time
you saw a selfie at a funeral?
02:51
I realized that,
as I worked on improving AI
02:55
task by task, dataset by dataset,
02:58
that I was creating massive gaps,
03:02
holes and blind spots
in what it could understand.
03:05
And while doing so,
03:10
I was encoding all kinds of biases.
03:11
Biases that reflect a limited viewpoint,
03:15
limited to a single dataset --
03:18
biases that can reflect
human biases found in the data,
03:21
such as prejudice and stereotyping.
03:25
I thought back to the evolution
of the technology
03:29
that brought me to where I was that day --
03:32
how the first color images
03:35
were calibrated against
a white woman's skin,
03:38
meaning that color photography
was biased against black faces.
03:41
And that same bias, that same blind spot
03:46
continued well into the '90s.
03:49
And the same blind spot
continues even today
03:51
in how well we can recognize
different people's faces
03:54
in facial recognition technology.
03:58
I thought about the state of the art
in research today,
04:01
where we tend to limit our thinking
to one dataset and one problem.
04:04
And that in doing so, we were creating
more blind spots and biases
04:09
that the AI could further amplify.
04:14
I realized then
that we had to think deeply
04:17
about how the technology we work on today
looks in five years, in 10 years.
04:19
Humans evolve slowly,
with time to correct for issues
04:25
in the interaction of humans
and their environment.
04:29
In contrast, artificial intelligence
is evolving at an incredibly fast rate.
04:33
And that means that it really matters
04:39
that we think about this
carefully right now --
04:40
that we reflect on our own blind spots,
04:44
our own biases,
04:47
and think about how that's informing
the technology we're creating
04:49
and discuss what the technology of today
will mean for tomorrow.
04:53
CEOs and scientists have weighed in
on what they think
04:58
the artificial intelligence technology
of the future will be.
05:01
Stephen Hawking warns that
05:05
"Artificial intelligence
could end mankind."
05:06
Elon Musk warns
that it's an existential risk
05:10
and one of the greatest risks
that we face as a civilization.
05:13
Bill Gates has made the point,
05:17
"I don't understand
why people aren't more concerned."
05:19
But these views --
05:23
they're part of the story.
05:25
The math, the models,
05:28
the basic building blocks
of artificial intelligence
05:30
are something that we can all access
and work with.
05:33
We have open-source tools
for machine learning and intelligence
05:36
that we can contribute to.
05:40
And beyond that,
we can share our experience.
05:42
We can share our experiences
with technology and how it concerns us
05:46
and how it excites us.
05:50
We can discuss what we love.
05:52
We can communicate with foresight
05:55
about the aspects of technology
that could be more beneficial
05:57
or could be more problematic over time.
06:02
If we all focus on opening up
the discussion on AI
06:05
with foresight towards the future,
06:09
this will help create a general
conversation and awareness
06:13
about what AI is now,
06:17
what it can become
06:21
and all the things that we need to do
06:23
in order to enable that outcome
that best suits us.
06:25
We already see and know this
in the technology that we use today.
06:29
We use smartphones
and digital assistants and Roombas.
06:33
Are they evil?
06:38
Maybe sometimes.
06:40
Are they beneficial?
06:42
Yes, they're that, too.
06:45
And they're not all the same.
06:48
And there you already see
a light shining on what the future holds.
06:50
The future continues on
from what we build and create right now.
06:54
We set into motion that domino effect
06:59
that carves out AI's evolutionary path.
07:01
In our time right now,
we shape the AI of tomorrow.
07:05
Technology that immerses us
in augmented realities
07:08
bringing to life past worlds.
07:12
Technology that helps people
to share their experiences
07:15
when they have difficulty communicating.
07:20
Technology built on understanding
the streaming visual worlds
07:23
used as technology for self-driving cars.
07:27
Technology built on understanding images
and generating language,
07:32
evolving into technology that helps people
who are visually impaired
07:35
be better able to access the visual world.
07:40
And we also see how technology
can lead to problems.
07:42
We have technology today
07:46
that analyzes physical
characteristics we're born with --
07:48
such as the color of our skin
or the look of our face --
07:52
in order to determine whether or not
we might be criminals or terrorists.
07:55
We have technology
that crunches through our data,
07:59
even data relating
to our gender or our race,
08:02
in order to determine whether or not
we might get a loan.
08:05
All that we see now
08:09
is a snapshot in the evolution
of artificial intelligence.
08:11
Because where we are right now,
08:15
is within a moment of that evolution.
08:17
That means that what we do now
will affect what happens down the line
08:20
and in the future.
08:24
If we want AI to evolve
in a way that helps humans,
08:26
then we need to define
the goals and strategies
08:30
that enable that path now.
08:32
What I'd like to see is something
that fits well with humans,
08:35
with our culture and with the environment.
08:39
Technology that aids and assists
those of us with neurological conditions
08:43
or other disabilities
08:47
in order to make life
equally challenging for everyone.
08:49
Technology that works
08:54
regardless of your demographics
or the color of your skin.
08:55
And so today, what I focus on
is the technology for tomorrow
09:00
and for 10 years from now.
09:05
AI can turn out in many different ways.
09:08
But in this case,
09:11
it isn't a self-driving car
without any destination.
09:12
This is the car that we are driving.
09:16
We choose when to speed up
and when to slow down.
09:19
We choose if we need to make a turn.
09:23
We choose what the AI
of the future will be.
09:26
There's a vast playing field
09:31
of all the things that artificial
intelligence can become.
09:32
It will become many things.
09:36
And it's up to us now,
09:39
to figure out
what we need to put in place
09:41
to make sure the outcomes
of artificial intelligence
09:44
are the ones that will be
better for all of us.
09:48
Thank you.
09:51
(Applause)
09:52


About the speaker:

Margaret Mitchell - AI research scientist

Why you should listen

Margaret Mitchell's research involves vision-language and grounded language generation, focusing on how to evolve artificial intelligence towards positive goals. Her work combines computer vision, natural language processing and social media, as well as statistical methods and insights from cognitive science. Before Google, Mitchell was a founding member of Microsoft Research's "Cognition" group, focused on advancing artificial intelligence, and a researcher in Microsoft Research's Natural Language Processing group.
