
TED2015

Fei-Fei Li: How we're teaching computers to understand pictures


When a very young child looks at a picture, she can identify simple elements: "cat," "book," "chair." Now, computers are getting smart enough to do that too. What's next? In a thrilling talk, computer vision expert Fei-Fei Li describes the state of the art -- including the database of 15 million photos her team built to "teach" a computer to understand pictures -- and the key insights yet to come.

Fei-Fei Li - Computer scientist
As Director of Stanford’s Artificial Intelligence Lab and Vision Lab, Fei-Fei Li is working to solve AI’s trickiest problems -- including image recognition, learning and language processing.

Let me show you something.
00:14
(Video) Girl: Okay, that's a cat
sitting in a bed.
00:18
The boy is petting the elephant.
00:22
Those are people
that are going on an airplane.
00:26
That's a big airplane.
00:30
Fei-Fei Li: This is
a three-year-old child
00:33
describing what she sees
in a series of photos.
00:35
She might still have a lot
to learn about this world,
00:39
but she's already an expert
at one very important task:
00:42
to make sense of what she sees.
00:46
Our society is more
technologically advanced than ever.
00:50
We send people to the moon,
we make phones that talk to us
00:54
or customize radio stations
that can play only music we like.
00:58
Yet, our most advanced
machines and computers
01:03
still struggle at this task.
01:07
So I'm here today
to give you a progress report
01:09
on the latest advances
in our research in computer vision,
01:13
one of the most cutting-edge
and potentially revolutionary
01:17
technologies in computer science.
01:21
Yes, we have prototyped cars
that can drive by themselves,
01:24
but without smart vision,
they cannot really tell the difference
01:29
between a crumpled paper bag
on the road, which can be run over,
01:33
and a rock that size,
which should be avoided.
01:37
We have made fabulous megapixel cameras,
01:41
but we have not delivered
sight to the blind.
01:44
Drones can fly over massive landscapes,
01:48
but don't have enough vision technology
01:51
to help us track
changes in the rainforests.
01:53
Security cameras are everywhere,
01:57
but they do not alert us when a child
is drowning in a swimming pool.
02:00
Photos and videos are becoming
an integral part of global life.
02:06
They're being generated at a pace
that's far beyond what any human,
02:11
or teams of humans, could hope to view,
02:15
and you and I are contributing
to that at this TED.
02:18
Yet our most advanced software
is still struggling at understanding
02:22
and managing this enormous content.
02:27
So in other words,
collectively as a society,
02:31
we're very much blind,
02:36
because our smartest
machines are still blind.
02:38
"Why is this so hard?" you may ask.
02:43
Cameras can take pictures like this one
02:46
by converting light into
a two-dimensional array of numbers
02:49
known as pixels,
02:53
but these are just lifeless numbers.
02:54
They do not carry meaning in themselves.
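The "lifeless numbers" she describes can be made concrete with a tiny sketch (a synthetic grayscale array in Python stands in for a real photo here):

```python
import numpy as np

# A grayscale "photo" is just a 2D array of numbers: one brightness
# value (0-255) per pixel. This 4x4 example is synthetic.
image = np.array([
    [  0,  50, 100, 150],
    [ 50, 100, 150, 200],
    [100, 150, 200, 250],
    [150, 200, 250, 255],
], dtype=np.uint8)

print(image.shape)   # (4, 4): height x width
print(image[0, 3])   # 150: the number stored at row 0, column 3
```

Nothing in this grid says "cat" or "chair"; turning these numbers into meaning is the whole problem of computer vision.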
02:57
Just like to hear is not
the same as to listen,
03:00
to take pictures is not
the same as to see,
03:04
and by seeing,
we really mean understanding.
03:08
In fact, it took Mother Nature
540 million years of hard work
03:13
to do this task,
03:19
and much of that effort
03:21
went into developing the visual
processing apparatus of our brains,
03:23
not the eyes themselves.
03:28
So vision begins with the eyes,
03:31
but it truly takes place in the brain.
03:33
So for 15 years now, starting
from my Ph.D. at Caltech
03:38
and then leading Stanford's Vision Lab,
03:43
I've been working with my mentors,
collaborators and students
03:46
to teach computers to see.
03:50
Our research field is called
computer vision and machine learning.
03:54
It's part of the general field
of artificial intelligence.
03:57
So ultimately, we want to teach
the machines to see just like we do:
04:03
naming objects, identifying people,
inferring 3D geometry of things,
04:08
understanding relations, emotions,
actions and intentions.
04:13
You and I weave together entire stories
of people, places and things
04:19
the moment we lay our gaze on them.
04:25
The first step towards this goal
is to teach a computer to see objects,
04:28
the building block of the visual world.
04:34
In its simplest terms,
imagine this teaching process
04:37
as showing the computers
some training images
04:42
of a particular object, let's say cats,
04:45
and designing a model that learns
from these training images.
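That show-examples-then-fit-a-model loop can be sketched with a deliberately tiny stand-in for a real vision model (a nearest-centroid classifier over synthetic feature vectors; all data and names here are illustrative, not the actual system):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for image feature vectors: "cat" examples cluster
# around one point in feature space, "dog" examples around another.
cats = rng.normal(loc=0.0, scale=0.5, size=(20, 8))
dogs = rng.normal(loc=3.0, scale=0.5, size=(20, 8))

# "Learning" here is just remembering the average of each class's examples.
centroids = {"cat": cats.mean(axis=0), "dog": dogs.mean(axis=0)}

def classify(x):
    # Predict the label whose learned centroid is closest to the input.
    return min(centroids, key=lambda label: np.linalg.norm(x - centroids[label]))

print(classify(rng.normal(loc=0.0, scale=0.5, size=8)))
```

Real object models are far richer, but the shape of the process is the same: labeled training images in, a predictive model out.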
04:48
How hard can this be?
04:53
After all, a cat is just
a collection of shapes and colors,
04:55
and this is what we did
in the early days of object modeling.
04:59
We'd tell the computer algorithm
in a mathematical language
05:03
that a cat has a round face,
a chubby body,
05:07
two pointy ears, and a long tail,
05:10
and that looked all fine.
05:12
But what about this cat?
05:14
(Laughter)
05:16
It's all curled up.
05:18
Now you have to add another shape
and viewpoint to the object model.
05:19
But what if cats are hidden?
05:24
What about these silly cats?
05:27
Now you get my point.
05:31
Even something as simple
as a household pet
05:33
can present an infinite number
of variations to the object model,
05:36
and that's just one object.
05:41
So about eight years ago,
05:44
a very simple and profound observation
changed my thinking.
05:47
No one tells a child how to see,
05:53
especially in the early years.
05:56
They learn this through
real-world experiences and examples.
05:58
If you consider a child's eyes
06:03
as a pair of biological cameras,
06:06
they take one picture
about every 200 milliseconds,
06:08
the average time an eye movement is made.
06:12
So by age three, a child would have seen
hundreds of millions of pictures
06:15
of the real world.
06:21
That's a lot of training examples.
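The arithmetic behind that estimate is easy to check (one eye movement roughly every 200 milliseconds means about five fixations per second):

```python
# One eye movement about every 200 ms => ~5 "pictures" per second.
pictures_per_second = 1000 // 200           # 5
seconds_in_three_years = 3 * 365 * 24 * 60 * 60

# Round-the-clock upper bound; even counting only waking hours,
# the total still lands in the hundreds of millions.
total = pictures_per_second * seconds_in_three_years
print(total)  # 473040000
```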
06:23
So instead of focusing solely
on better and better algorithms,
06:26
my insight was to give the algorithms
the kind of training data
06:32
that a child was given through experiences
06:37
in both quantity and quality.
06:40
Once we knew this,
06:44
we knew we needed to collect a data set
06:46
that has far more images
than we have ever had before,
06:49
perhaps thousands of times more,
06:54
and together with Professor
Kai Li at Princeton University,
06:56
we launched the ImageNet project in 2007.
07:00
Luckily, we didn't have to mount
a camera on our head
07:05
and wait for many years.
07:09
We went to the Internet,
07:11
the biggest treasure trove of pictures
that humans have ever created.
07:12
We downloaded nearly a billion images
07:17
and used crowdsourcing technology
like the Amazon Mechanical Turk platform
07:20
to help us to label these images.
07:25
At its peak, ImageNet was one of
the biggest employers
07:28
of the Amazon Mechanical Turk workers:
07:33
together, almost 50,000 workers
07:36
from 167 countries around the world
07:40
helped us to clean, sort and label
07:44
nearly a billion candidate images.
07:48
That was how much effort it took
07:52
to capture even a fraction
of the imagery
07:55
a child's mind takes in
in the early developmental years.
07:59
In hindsight, this idea of using big data
08:04
to train computer algorithms
may seem obvious now,
08:08
but back in 2007, it was not so obvious.
08:12
We were fairly alone on this journey
for quite a while.
08:16
Some very friendly colleagues advised me
to do something more useful for my tenure,
08:20
and we were constantly struggling
for research funding.
08:25
Once, I even joked to my graduate students
08:29
that I would just reopen
my dry cleaner's shop to fund ImageNet.
08:32
After all, that's how I funded
my college years.
08:36
So we carried on.
08:41
In 2009, the ImageNet project delivered
08:43
a database of 15 million images
08:46
across 22,000 classes
of objects and things
08:50
organized by everyday English words.
08:55
In both quantity and quality,
08:58
this was an unprecedented scale.
09:01
As an example, in the case of cats,
09:04
we have more than 62,000 cats
09:08
of all kinds of looks and poses
09:11
and across all species
of domestic and wild cats.
09:15
We were thrilled
to have put together ImageNet,
09:20
and we wanted the whole research world
to benefit from it,
09:23
so in the TED fashion,
we opened up the entire data set
09:27
to the worldwide
research community for free.
09:31
(Applause)
09:36
Now that we have the data
to nourish our computer brain,
09:41
we're ready to come back
to the algorithms themselves.
09:45
As it turned out, the wealth
of information provided by ImageNet
09:49
was a perfect match to a particular class
of machine learning algorithms
09:54
called convolutional neural networks,
09:59
pioneered by Kunihiko Fukushima,
Geoff Hinton, and Yann LeCun
10:02
back in the 1970s and '80s.
10:07
Just like the brain consists
of billions of highly connected neurons,
10:10
a basic operating unit in a neural network
10:16
is a neuron-like node.
10:20
It takes input from other nodes
10:22
and sends output to others.
10:25
Moreover, these hundreds of thousands
or even millions of nodes
10:28
are organized in hierarchical layers,
10:32
also similar to the brain.
10:36
In a typical neural network we use
to train our object recognition model,
10:38
there are 24 million nodes,
10:43
140 million parameters,
10:46
and 15 billion connections.
10:49
That's an enormous model.
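The node-and-layer wiring she describes can be sketched at toy scale (a two-layer forward pass in plain numpy; the real model is convolutional and vastly larger, and every weight here is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

def layer(x, weights, biases):
    # Each node takes input from the nodes below it, computes a weighted
    # sum plus a bias, and passes the result through a nonlinearity.
    return np.maximum(0.0, x @ weights + biases)   # ReLU activation

x = rng.normal(size=4)                            # 4 input values
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)     # layer 1: 8 nodes
w2, b2 = rng.normal(size=(8, 3)), np.zeros(3)     # layer 2: 3 output nodes

hidden = layer(x, w1, b1)      # layer 1 outputs feed layer 2
output = layer(hidden, w2, b2)
print(output.shape)            # (3,): one score per output node
```

Scale this hierarchy up to millions of nodes and billions of connections, train the weights on ImageNet-sized data, and you have the kind of model the talk describes.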
10:52
Powered by the massive data from ImageNet
10:55
and the modern CPUs and GPUs
to train such a humongous model,
10:58
the convolutional neural network
11:04
blossomed in a way that no one expected.
11:06
It became the winning architecture
11:10
to generate exciting new results
in object recognition.
11:12
This is a computer telling us
11:18
this picture contains a cat
11:20
and where the cat is.
11:23
Of course there are more things than cats,
11:25
so here's a computer algorithm telling us
11:27
the picture contains
a boy and a teddy bear;
11:29
a dog, a person, and a small kite
in the background;
11:32
or a picture of very busy things
11:37
like a man, a skateboard,
railings, a lamppost, and so on.
11:40
Sometimes, when the computer
is not so confident about what it sees,
11:45
we have taught it to be smart enough
11:51
to give us a safe answer
instead of committing too much,
11:53
just like we would do,
11:57
but other times our computer algorithm
is remarkable at telling us
12:00
what exactly the objects are,
12:05
like the make, model, and year of the cars.
12:07
We applied this algorithm to millions
of Google Street View images
12:10
across hundreds of American cities,
12:16
and we have learned something
really interesting:
12:19
first, it confirmed our common wisdom
12:22
that car prices correlate very well
12:25
with household incomes.
12:28
But surprisingly, car prices
also correlate well
12:31
with crime rates in cities,
12:35
or voting patterns by zip codes.
12:39
So wait a minute. Is that it?
12:44
Has the computer already matched
or even surpassed human capabilities?
12:46
Not so fast.
12:51
So far, we have just taught
the computer to see objects.
12:53
This is like a small child
learning to utter a few nouns.
12:58
It's an incredible accomplishment,
13:03
but it's only the first step.
13:05
Soon, another developmental
milestone is hit,
13:08
and children begin
to communicate in sentences.
13:12
So instead of saying
this is a cat in the picture,
13:15
you already heard the little girl
telling us this is a cat lying on a bed.
13:19
So to teach a computer
to see a picture and generate sentences,
13:24
the marriage between big data
and machine learning algorithms
13:30
has to take another step.
13:34
Now, the computer has to learn
from both pictures
13:36
as well as natural language sentences
13:40
generated by humans.
13:43
Just like the brain integrates
vision and language,
13:47
we developed a model
that connects parts of visual things
13:50
like visual snippets
13:56
with words and phrases in sentences.
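The idea of connecting visual snippets to words can be sketched as scoring region and word vectors in a shared embedding space (a toy illustration with hand-picked vectors, not the actual learned model):

```python
import numpy as np

# Toy embeddings in a shared space: in the real model these are learned
# from data; here they are hand-picked so "cat" aligns with one region.
region_vectors = {
    "region_a": np.array([1.0, 0.0, 0.0]),   # say, a furry patch
    "region_b": np.array([0.0, 1.0, 0.0]),   # say, a bed corner
}
word_vectors = {
    "cat": np.array([0.9, 0.1, 0.0]),
    "bed": np.array([0.1, 0.9, 0.0]),
}

def best_region(word):
    # Alignment score = dot product between word and region embeddings;
    # the best-scoring region is where the word "grounds" in the image.
    v = word_vectors[word]
    return max(region_vectors, key=lambda r: float(region_vectors[r] @ v))

print(best_region("cat"))  # region_a
print(best_region("bed"))  # region_b
```

With learned embeddings and a language model on top, such alignments are what let a system move from naming objects to producing whole sentences.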
13:58
About four months ago,
14:02
we finally tied all this together
14:04
and produced one of the first
computer vision models
14:07
that is capable of generating
a human-like sentence
14:11
when it sees a picture for the first time.
14:15
Now, I'm ready to show you
what the computer says
14:18
when it sees the picture
14:23
that the little girl saw
at the beginning of this talk.
14:25
(Video) Computer: A man is standing
next to an elephant.
14:31
A large airplane sitting on top
of an airport runway.
14:36
FFL: Of course, we're still working hard
to improve our algorithms,
14:41
and they still have a lot to learn.
14:45
(Applause)
14:47
And the computer still makes mistakes.
14:51
(Video) Computer: A cat lying
on a bed in a blanket.
14:54
FFL: So of course, when it sees
too many cats,
14:58
it thinks everything
might look like a cat.
15:00
(Video) Computer: A young boy
is holding a baseball bat.
15:05
(Laughter)
15:08
FFL: Or, if it hasn't seen a toothbrush,
it confuses it with a baseball bat.
15:09
(Video) Computer: A man riding a horse
down a street next to a building.
15:15
(Laughter)
15:18
FFL: We haven't taught Art 101
to the computers.
15:20
(Video) Computer: A zebra standing
in a field of grass.
15:25
FFL: And it hasn't learned to appreciate
the stunning beauty of nature
15:28
like you and I do.
15:32
So it has been a long journey.
15:34
To get from age zero to three was hard.
15:37
The real challenge is to go
from three to 13 and far beyond.
15:41
Let me remind you with this picture
of the boy and the cake again.
15:47
So far, we have taught
the computer to see objects
15:51
or even tell us a simple story
when seeing a picture.
15:55
(Video) Computer: A person sitting
at a table with a cake.
15:59
FFL: But there's so much more
to this picture
16:03
than just a person and a cake.
16:06
What the computer doesn't see
is that this is a special Italian cake
16:08
that's only served during Easter time.
16:12
The boy is wearing his favorite t-shirt
16:16
given to him as a gift by his father
after a trip to Sydney,
16:19
and you and I can all tell how happy he is
16:23
and what's exactly on his mind
at that moment.
16:27
This is my son Leo.
16:31
On my quest for visual intelligence,
16:34
I think of Leo constantly
16:36
and the future world he will live in.
16:39
When machines can see,
16:42
doctors and nurses will have
extra pairs of tireless eyes
16:44
to help them to diagnose
and take care of patients.
16:48
Cars will run smarter
and safer on the road.
16:53
Robots, not just humans,
16:57
will help us to brave the disaster zones
to save the trapped and wounded.
17:00
We will discover new species,
better materials,
17:05
and explore unseen frontiers
with the help of the machines.
17:09
Little by little, we're giving sight
to the machines.
17:15
First, we teach them to see.
17:19
Then, they help us to see better.
17:22
For the first time, human eyes
won't be the only ones
17:24
pondering and exploring our world.
17:29
We will not only use the machines
for their intelligence,
17:31
we will also collaborate with them
in ways that we cannot even imagine.
17:35
This is my quest:
17:41
to give computers visual intelligence
17:43
and to create a better future
for Leo and for the world.
17:46
Thank you.
17:51
(Applause)
17:53


About the speaker:

Fei-Fei Li - Computer scientist

Why you should listen

Using algorithms built on machine learning methods such as neural network models, the Stanford Artificial Intelligence Lab led by Fei-Fei Li has created software capable of recognizing scenes in still photographs -- and accurately describing them using natural language.

Li’s work with neural networks and computer vision (with Stanford’s Vision Lab) marks a significant step forward for AI research, and could lead to applications ranging from more intuitive image searches to robots able to make autonomous decisions in unfamiliar situations.

Fei-Fei was honored as one of Foreign Policy's 2015 Global Thinkers.
