
TED2017

Sebastian Thrun and Chris Anderson: What AI is -- and isn't

Views 1,025,818

Educator and entrepreneur Sebastian Thrun wants us to use AI to free humanity of repetitive work and unleash our creativity. In an inspiring, informative conversation with TED Curator Chris Anderson, Thrun discusses the progress of deep learning, why we shouldn't fear runaway AI and how society will be better off if dull, tedious work is done with the help of machines. "Only one percent of interesting things have been invented yet," Thrun says. "I believe all of us are insanely creative ... [AI] will empower us to turn creativity into action."

- Engineer
Sebastian Thrun is the director of the Stanford Artificial Intelligence Lab and is working, through robotics, to change the way we understand the world.

- TED Curator
After a long career in journalism and publishing, Chris Anderson became the curator of the TED Conference in 2002 and has developed it as a platform for identifying and disseminating ideas worth spreading.

Chris Anderson: Help us understand
what machine learning is,
00:12
because that seems to be the key driver
00:15
of so much of the excitement
and also of the concern
00:17
around artificial intelligence.
00:20
How does machine learning work?
00:22
Sebastian Thrun: So, artificial
intelligence and machine learning
00:23
is about 60 years old
00:27
and has not had a great day
in its past until recently.
00:29
And the reason is that today,
00:34
we have reached a scale
of computing and datasets
00:37
that was necessary to make machines smart.
00:41
So here's how it works.
00:43
If you program a computer today,
say, your phone,
00:45
then you hire software engineers
00:48
that write a very,
very long kitchen recipe,
00:51
like, "If the water is too hot,
turn down the temperature.
00:55
If it's too cold, turn up
the temperature."
00:58
The recipes are not just 10 lines long.
01:00
They are millions of lines long.
01:03
A modern cell phone
has 12 million lines of code.
01:06
A browser has five million lines of code.
01:10
And each bug in this recipe
can cause your computer to crash.
01:12
That's why a software engineer
makes so much money.
01:17
The new thing now is that computers
can find their own rules.
01:21
So instead of an expert
deciphering, step by step,
01:25
a rule for every contingency,
01:29
what you do now is you give
the computer examples
01:31
and have it infer its own rules.
01:34
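Thrun's contrast between hand-written "kitchen recipe" rules and rules inferred from examples can be sketched in a few lines. This is a minimal illustration, not anything from the talk's actual systems: a toy "learner" that recovers a temperature threshold from labeled examples instead of having an engineer hard-code it.

```python
# Illustrative sketch: instead of hand-writing the rule
# "if the water is too hot, turn down the temperature", we let the
# program infer the decision threshold from labeled examples.
# All numbers here are invented toy data.

def handwritten_rule(temp):
    # The classic "kitchen recipe" style: an engineer picks the numbers.
    if temp > 60:
        return "turn down"
    return "turn up"

def learn_threshold(examples):
    """Infer a threshold from (temperature, label) pairs by trying each
    midpoint between adjacent temperatures and keeping the one that
    misclassifies the fewest examples."""
    temps = sorted(t for t, _ in examples)
    candidates = [(a + b) / 2 for a, b in zip(temps, temps[1:])]

    def errors(th):
        return sum(("turn down" if t > th else "turn up") != label
                   for t, label in examples)

    return min(candidates, key=errors)

examples = [(80, "turn down"), (75, "turn down"), (70, "turn down"),
            (40, "turn up"), (35, "turn up"), (30, "turn up")]
threshold = learn_threshold(examples)

def learned_rule(temp):
    # Same shape of rule as the handwritten one, but the number came
    # from the data, not from an expert.
    return "turn down" if temp > threshold else "turn up"
```

The point of the toy: the learner and the engineer end up with the same *kind* of rule, but the burden has shifted from the expert's head to the examples, which is the shift Thrun describes.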
A really good example is AlphaGo,
which was recently built by Google.
01:36
Normally, in game playing,
you would really write down all the rules,
01:40
but in AlphaGo's case,
01:44
the system looked over a million games
01:45
and was able to infer its own rules
01:48
and then beat the world's
reigning Go champion.
01:50
That is exciting, because it relieves
the software engineer
01:53
of the need of being super smart,
01:57
and pushes the burden towards the data.
01:59
As I said, the inflection point
where this has become really possible --
02:01
very embarrassing, my thesis
was about machine learning.
02:06
It was completely
insignificant, don't read it,
02:08
because it was 20 years ago
02:11
and back then, the computers
were as big as a cockroach brain.
02:12
Now they are powerful enough
to really emulate
02:15
kind of specialized human thinking.
02:17
And then the computers
take advantage of the fact
02:19
that they can look at
much more data than people can.
02:22
So I'd say AlphaGo looked at
more than a million games.
02:24
No human expert can ever
study a million games.
02:27
Google has looked at over
a hundred billion web pages.
02:30
No person can ever study
a hundred billion web pages.
02:33
So as a result,
the computer can find rules
02:36
that even people can't find.
02:39
CA: So instead of looking ahead
to, "If he does that, I will do that,"
02:41
it's more saying, "Here is what
looks like a winning pattern,
02:45
here is what looks like
a winning pattern."
02:48
ST: Yeah. I mean, think about
how you raise children.
02:50
You don't spend the first 18 years
giving kids a rule for every contingency
02:53
and set them free
and they have this big program.
02:56
They stumble, fall, get up,
they get slapped or spanked,
02:59
and they have a positive experience,
a good grade in school,
03:01
and they figure it out on their own.
03:04
That's happening with computers now,
03:06
which makes computer programming
so much easier all of a sudden.
03:08
Now we don't have to think anymore.
We just give them lots of data.
03:11
CA: And so, this has been key
to the spectacular improvement
03:14
in power of self-driving cars.
03:18
I think you gave me an example.
03:21
Can you explain what's happening here?
03:23
ST: This is a drive of a self-driving car
03:25
that we happened to have at Udacity
03:29
and recently made
into a spin-off called Voyage.
03:31
We have used this thing
called deep learning
03:33
to train a car to drive itself,
03:36
and this is driving
from Mountain View, California,
03:37
to San Francisco
03:40
on El Camino Real on a rainy day,
03:41
with bicyclists and pedestrians
and 133 traffic lights.
03:43
And the novel thing here is,
03:47
many, many moons ago, I started
the Google self-driving car team.
03:50
And back in the day, I hired
the world's best software engineers
03:53
to find the world's best rules.
03:56
This is just trained.
03:58
We drive this road 20 times,
03:59
we put all this data
into the computer brain,
04:03
and after a few hours of processing,
04:05
it comes up with behavior
that often surpasses human agility.
04:07
So it's become really easy to program it.
04:11
This is 100 percent autonomous,
about 33 miles, an hour and a half.
04:13
CA: So, explain it -- on the big part
of this program on the left,
04:17
you're seeing basically what
the computer sees as trucks and cars
04:21
and those dots overtaking it and so forth.
04:24
ST: On the right side, you see the camera
image, which is the main input here,
04:27
and it's used to find lanes,
other cars, traffic lights.
04:31
The vehicle has a radar
to do distance estimation.
04:33
This is very commonly used
in these kinds of systems.
04:36
On the left side you see a laser diagram,
04:39
where you see obstacles like trees
and so on depicted by the laser.
04:41
But almost all the interesting work
is centering on the camera image now.
04:44
We're really shifting over from precision
sensors like radars and lasers
04:47
into very cheap, commoditized sensors.
04:51
A camera costs less than eight dollars.
04:53
CA: And that green dot
on the left thing, what is that?
04:55
Is that anything meaningful?
04:57
ST: This is a look-ahead point
for your adaptive cruise control,
04:59
so it helps us understand
how to regulate velocity
05:03
based on how far
the cars in front of you are.
05:05
CA: And so, you've also
got an example, I think,
05:08
of how the actual
learning part takes place.
05:10
Maybe we can see that. Talk about this.
05:13
ST: This is an example where we posed
a challenge to Udacity students
05:15
to take what we call
a self-driving car Nanodegree.
05:19
We gave them this dataset
05:22
and said "Hey, can you guys figure out
how to steer this car?"
05:24
And if you look at the images,
05:27
it's, even for humans, quite impossible
to get the steering right.
05:28
And we ran a competition and said,
"It's a deep learning competition,
05:33
AI competition,"
05:36
and we gave the students 48 hours.
05:37
So if you are a software house
like Google or Facebook,
05:39
something like this costs you
at least six months of work.
05:43
So we figured 48 hours is great.
05:46
And within 48 hours, we got about
100 submissions from students,
05:48
and the top four got it perfectly right.
05:52
It drives better than I could
drive on this imagery,
05:55
using deep learning.
05:58
And again, it's the same methodology.
05:59
It's this magical thing.
06:01
When you give enough data
to a computer now,
06:02
and give enough time
to comprehend the data,
06:04
it finds its own rules.
06:06
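The "magical thing" described here, give a computer recorded demonstrations and it generalizes its own rules, can be sketched minimally. The actual Udacity challenge used deep learning on camera images; as a hedged stand-in, this toy uses a 1-nearest-neighbor lookup on invented "lane offset" observations to predict steering from demonstration drives.

```python
# Hedged sketch of learning to steer from demonstration data.
# A real Nanodegree solution trained a deep network on camera images;
# here, a nearest-neighbor predictor on toy (lane offset, steering
# angle) pairs stands in for the same idea: behavior comes from
# recorded examples, not hand-written rules.

def train(demonstrations):
    # "Training" in this toy is just storing the demonstrated behavior.
    return list(demonstrations)

def predict_steering(model, observation):
    """Return the steering angle of the closest recorded observation."""
    nearest = min(model, key=lambda pair: abs(pair[0] - observation))
    return nearest[1]

# Invented demonstration data: lane offset (meters) -> steering (degrees).
demos = [(-1.0, 15.0), (-0.5, 8.0), (0.0, 0.0), (0.5, -8.0), (1.0, -15.0)]
model = train(demos)
```

Drive the road enough times, store enough pairs, and the predictor handles situations it was never explicitly programmed for, which is the methodology Thrun keeps returning to.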
CA: And so that has led to the development
of powerful applications
06:09
in all sorts of areas.
06:14
You were talking to me
the other day about cancer.
06:15
Can I show this video?
06:18
ST: Yeah, absolutely, please.
CA: This is cool.
06:19
ST: This is kind of an insight
into what's happening
06:22
in a completely different domain.
06:25
This is augmenting, or competing --
06:28
it's in the eye of the beholder --
06:31
with people who are being paid
400,000 dollars a year,
06:33
dermatologists,
06:37
highly trained specialists.
06:38
It takes more than a decade of training
to be a good dermatologist.
06:40
What you see here is
the machine learning version of it.
06:43
It's called a neural network.
06:47
"Neural networks" is the technical term
for these machine learning algorithms.
06:49
They've been around since the 1980s.
06:52
This one was invented in 1988
by a Facebook Fellow called Yann LeCun,
06:54
and it propagates data stages
06:59
through what you could think of
as the human brain.
07:02
It's not quite the same thing,
but it emulates the same thing.
07:05
It goes stage after stage.
07:08
In the very first stage, it takes
the visual input and extracts edges
07:09
and rods and dots.
07:13
And the next one becomes
more complicated edges
07:16
and shapes like little half-moons.
07:19
And eventually, it's able to build
really complicated concepts.
07:22
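The "stage after stage" propagation Thrun describes, edges, then shapes, then full concepts, is the forward pass of a layered network. Here is a minimal sketch with made-up toy weights (not a trained model, and far smaller than the skin-cancer network): each stage is a weighted sum plus a nonlinearity, and its output feeds the next stage.

```python
# Minimal sketch of layer-by-layer propagation in a neural network.
# The weights below are invented toy numbers for illustration only.
import math

def dense_layer(inputs, weights, biases):
    """One stage: a weighted sum per unit, followed by a tanh
    nonlinearity, so each stage computes features of the previous one."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def forward(network, inputs):
    # Propagate the input through every stage in order.
    for weights, biases in network:
        inputs = dense_layer(inputs, weights, biases)
    return inputs

# Two stages: 3 inputs -> 2 hidden units -> 1 output.
toy_network = [
    ([[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]], [0.0, 0.1]),  # stage 1: simple features
    ([[1.0, -1.0]], [0.0]),                              # stage 2: combines them
]
output = forward(toy_network, [1.0, 0.5, -0.5])
```

In the real systems, early stages learn edge-like detectors and later stages learn progressively more abstract concepts; training sets the weights, but the propagation scheme is the same.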
Andrew Ng has been able to show
07:26
that it's able to find
cat faces and dog faces
07:28
in vast amounts of images.
07:32
What my student team
at Stanford has shown is that
07:34
if you train it on 129,000 images
of skin conditions,
07:36
including melanoma and carcinomas,
07:42
you can do as good a job
07:45
as the best human dermatologists.
07:48
And to convince ourselves
that this is the case,
07:51
we captured an independent dataset
that we presented to our network
07:53
and to 25 board-certified
Stanford-level dermatologists,
07:57
and compared those.
08:01
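The evaluation protocol just described, score the network and the dermatologists on the same biopsy-confirmed, held-out dataset and compare, amounts to comparing classification accuracies. A minimal sketch, with invented toy labels rather than the study's data:

```python
# Hedged sketch of the comparison protocol: one held-out, biopsy-
# confirmed truth set, predictions from the network and from a
# dermatologist, and a per-source accuracy. All labels are invented.

def accuracy(predictions, truth):
    """Fraction of cases classified correctly."""
    correct = sum(p == t for p, t in zip(predictions, truth))
    return correct / len(truth)

truth         = ["malignant", "benign", "benign", "malignant", "benign"]
network_preds = ["malignant", "benign", "benign", "malignant", "malignant"]
derm_preds    = ["malignant", "benign", "malignant", "benign", "benign"]

net_acc = accuracy(network_preds, truth)
derm_acc = accuracy(derm_preds, truth)
```

The published study used finer-grained metrics (sensitivity/specificity curves), but accuracy on a shared held-out set is the basic idea behind "on par or above."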
And in most cases,
08:03
the network was either on par with or above
the classification accuracy
08:05
of human dermatologists.
08:09
CA: You were telling me an anecdote.
08:10
I think about this image right here.
08:12
What happened here?
08:14
ST: This was last Thursday.
That's a moving piece.
08:15
What we've shown before and we published
in "Nature" earlier this year
08:19
was this idea that we show
dermatologists images
08:23
and our computer program images,
08:26
and count how often they're right.
08:27
But all these images are past images.
08:29
They've all been biopsied to make sure
we had the correct classification.
08:31
This one wasn't.
08:34
This one was actually done at Stanford
by one of our collaborators.
08:35
The story goes that our collaborator,
08:38
who is a world-famous dermatologist,
one of the three best, apparently,
08:41
looked at this mole and said,
"This is not skin cancer."
08:44
And then he had
a second moment, where he said,
08:47
"Well, let me just check with the app."
08:50
So he took out his iPhone
and ran our piece of software,
08:52
our "pocket dermatologist," so to speak,
08:54
and the iPhone said: cancer.
08:56
It said melanoma.
08:59
And then he was confused.
09:01
And he decided, "OK, maybe I trust
the iPhone a little bit more than myself,"
09:03
and he sent it out to the lab
to get it biopsied.
09:07
And it came up as an aggressive melanoma.
09:10
So I think this might be the first time
that we actually found,
09:13
in the practice of using deep learning,
09:16
an actual person whose melanoma
would have gone unclassified,
09:19
had it not been for deep learning.
09:22
CA: I mean, that's incredible.
09:24
(Applause)
09:26
It feels like there'd be an instant demand
for an app like this right now,
09:28
that you might freak out a lot of people.
09:31
Are you thinking of doing this,
making an app that allows self-checking?
09:33
ST: So my inbox is flooded
with emails about cancer apps,
09:37
with heartbreaking stories of people.
09:42
I mean, some people have had
10, 15, 20 melanomas removed,
09:44
and are scared that one
might be overlooked, like this one,
09:47
and also, about, I don't know,
09:51
flying cars and speaker inquiries
these days, I guess.
09:53
My take is, we need more testing.
09:56
I want to be very careful.
09:59
It's very easy to give a flashy result
and impress a TED audience.
10:01
It's much harder to put
something out that's ethical.
10:04
And if people were to use the app
10:07
and choose not to consult
the assistance of a doctor
10:10
because we get it wrong,
10:12
I would feel really bad about it.
10:14
So we're currently doing clinical tests,
10:16
and if these clinical tests commence
and our data holds up,
10:18
we might be able at some point
to take this kind of technology
10:20
and take it out of the Stanford clinic
10:23
and bring it to the entire world,
10:25
places where Stanford
doctors never, ever set foot.
10:27
CA: And do I hear this right,
10:30
that it seemed like what you were saying,
10:33
because you are working
with this army of Udacity students,
10:35
that in a way, you're applying
a different form of machine learning
10:39
than might take place in a company,
10:42
which is you're combining machine learning
with a form of crowd wisdom.
10:44
Are you saying that sometimes you think
that could actually outperform
10:48
what a company can do,
even a vast company?
10:51
ST: I believe there's now
instances that blow my mind,
10:53
and I'm still trying to understand.
10:56
What Chris is referring to
is these competitions that we run.
10:58
We turn them around in 48 hours,
11:02
and we've been able to build
a self-driving car
11:04
that can drive from Mountain View
to San Francisco on surface streets.
11:06
It's not quite on par with Google
after seven years of Google work,
11:10
but it's getting there.
11:13
And it took us only two engineers
and three months to do this.
11:16
And the reason is, we have
an army of students
11:19
who participate in competitions.
11:22
We're not the only ones
who use crowdsourcing.
11:24
Uber and Didi use crowdsourcing for driving.
11:26
Airbnb uses crowdsourcing for hotels.
11:28
There's now many examples
where people do bug-finding crowdsourcing
11:31
or protein folding, of all things,
in crowdsourcing.
11:35
But we've been able to build
this car in three months,
11:38
so I am actually rethinking
11:41
how we organize corporations.
11:44
We have a staff of 9,000 people
who are never hired,
11:47
that I never fire.
11:51
They show up to work
and I don't even know.
11:53
Then they submit to me
maybe 9,000 answers.
11:55
I'm not obliged to use any of those.
11:58
I end up -- I pay only the winners,
12:00
so I'm actually very cheapskate here,
which is maybe not the best thing to do.
12:02
But they consider it part
of their education, too, which is nice.
12:06
But these students have been able
to produce amazing deep learning results.
12:09
So yeah, the synthesis of great people
and great machine learning is amazing.
12:14
CA: I mean, Garry Kasparov said on
the first day [of TED2017]
12:18
that the winners of chess, surprisingly,
turned out to be two amateur chess players
12:20
with three mediocre-ish,
mediocre-to-good, computer programs,
12:26
that could outperform one grand master
with one great chess player,
12:31
like it was all part of the process.
12:34
And it almost seems like
you're talking about a much richer version
12:36
of that same idea.
12:39
ST: Yeah, I mean, as you followed
the fantastic panels yesterday morning,
12:41
two sessions about AI,
12:45
robotic overlords and the human response,
12:47
many, many great things were said.
12:49
But one of the concerns is
that we sometimes confuse
12:51
what's actually been done with AI
with this kind of overlord threat,
12:54
where your AI develops
consciousness, right?
12:58
The last thing I want
is for my AI to have consciousness.
13:01
I don't want to come into my kitchen
13:04
and have the refrigerator fall in love
with the dishwasher
13:06
and tell me, because I wasn't nice enough,
13:10
my food is now warm.
13:12
I wouldn't buy these products,
and I don't want them.
13:14
But the truth is, for me,
13:17
AI has always been
an augmentation of people.
13:19
It's been an augmentation of us,
13:22
to make us stronger.
13:24
And I think Kasparov was exactly correct.
13:26
It's been the combination
of human smarts and machine smarts
13:28
that make us stronger.
13:32
The theme of machines making us stronger
is as old as machines are.
13:34
The agricultural revolution took
place because we built steam engines
13:39
and farming equipment
that couldn't farm by themselves;
13:43
they never replaced us,
they made us stronger.
13:46
And I believe this new wave of AI
will make us much, much stronger
13:48
as a human race.
13:51
CA: We'll come on to that a bit more,
13:53
but just to continue with the scary part
of this for some people,
13:55
like, what feels like it gets
scary for people is when you have
13:59
a computer that can, one,
rewrite its own code,
14:02
so, it can create
multiple copies of itself,
14:07
try a bunch of different code versions,
14:11
possibly even at random,
14:13
and then check them out and see
if a goal is achieved and improved.
14:14
So, say the goal is to do better
on an intelligence test.
14:18
You know, a computer
that's moderately good at that,
14:22
you could try a million versions of that.
14:26
You might find one that was better,
14:28
and then, you know, repeat.
14:30
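The runaway loop Chris describes, copy the program, mutate the copies, keep whichever scores best on the goal, repeat, is essentially random-mutation hill climbing. A hedged toy sketch (the "program" is just a list of numbers and the "intelligence test" is their sum; nothing here rewrites real code):

```python
# Sketch of the improvement loop being described: make many randomly
# mutated copies, score them against a goal, keep the best, repeat.
# The "program" is a list of numbers and the goal is maximizing their
# sum -- a deliberately trivial stand-in for a test score.
import random

def score(program):
    return sum(program)

def evolve(program, generations=50, copies=20, seed=0):
    rng = random.Random(seed)  # fixed seed so the run is repeatable
    for _ in range(generations):
        # Try many mutated copies, possibly at random, possibly worse...
        mutants = [[g + rng.uniform(-1, 1) for g in program]
                   for _ in range(copies)]
        # ...then check them and keep whichever achieves the goal best,
        # including the unmodified original.
        program = max(mutants + [program], key=score)
    return program

start = [0.0] * 5
improved = evolve(start)
```

The score can only go up from generation to generation, which is exactly the "everything is fine Thursday, different Friday" intuition; Thrun's reply below is that today's systems only do this inside one narrow domain.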
And so the concern is that you get
some sort of runaway effect
14:32
where everything is fine
on Thursday evening,
14:35
and you come back into the lab
on Friday morning,
14:38
and because of the speed
of computers and so forth,
14:41
things have gone crazy, and suddenly --
14:43
ST: I would say this is a possibility,
14:45
but it's a very remote possibility.
14:47
So let me just translate
what I heard you say.
14:49
In the AlphaGo case,
we had exactly this thing:
14:52
the computer would play
the game against itself
14:55
and then learn new rules.
14:58
And what machine learning is
is a rewriting of the rules.
14:59
It's the rewriting of code.
15:02
But I think there was
absolutely no concern
15:04
that AlphaGo would take over the world.
15:07
It can't even play chess.
15:09
CA: No, no, no, but now,
these are all very single-domain things.
15:11
But it's possible to imagine.
15:16
I mean, we just saw a computer
that seemed nearly capable
15:19
of passing a university entrance test,
15:22
that can kind of -- it can't read
and understand in the sense that we can,
15:25
but it can certainly absorb all the text
15:28
and maybe see increased
patterns of meaning.
15:30
Isn't there a chance that,
as this broadens out,
15:33
there could be a different
kind of runaway effect?
15:37
ST: That's where
I draw the line, honestly.
15:39
And the chance exists --
I don't want to downplay it --
15:41
but I think it's remote, and it's not
the thing that's on my mind these days,
15:44
because I think the big revolution
is something else.
15:48
Everything successful in AI
to the present date
15:50
has been extremely specialized,
15:53
and it's been thriving on a single idea,
15:56
which is massive amounts of data.
15:58
The reason AlphaGo works so well
is because of massive numbers of Go plays,
16:01
and AlphaGo can't drive a car
or fly a plane.
16:05
The Google self-driving car
or the Udacity self-driving car
16:08
thrives on massive amounts of data,
and it can't do anything else.
16:11
It can't even control a motorcycle.
16:15
It's a very specific,
domain-specific function,
16:16
and the same is true for our cancer app.
16:19
There has been almost no progress
on this thing called "general AI,"
16:21
where you go to an AI and say,
"Hey, invent for me special relativity
16:24
or string theory."
16:28
It's totally in the infancy.
16:30
The reason I want to emphasize this,
16:32
I see the concerns,
and I want to acknowledge them.
16:34
But if I were to think about one thing,
16:38
I would ask myself the question,
"What if we can take anything repetitive
16:41
and make ourselves
100 times as efficient?"
16:47
It so turns out, 300 years ago,
we all worked in agriculture
16:51
and did farming and did repetitive things.
16:55
Today, 75 percent of us work in offices
16:57
and do repetitive things.
17:00
We've become spreadsheet monkeys.
17:02
And not just low-end labor.
17:04
We've become dermatologists
doing repetitive things,
17:06
lawyers doing repetitive things.
17:09
I think we are at the brink
of being able to take an AI,
17:11
look over our shoulders,
17:14
and they make us maybe 10 or 50 times
as effective in these repetitive things.
17:16
That's what is on my mind.
17:20
CA: That sounds super exciting.
17:22
The process of getting there seems
a little terrifying to some people,
17:24
because once a computer
can do this repetitive thing
17:28
much better than the dermatologist
17:31
or than the driver, especially,
is the thing that's talked about
17:34
so much now,
17:37
suddenly millions of jobs go,
17:39
and, you know, the country's in revolution
17:41
before we ever get to the more
glorious aspects of what's possible.
17:44
ST: Yeah, and that's an issue,
and it's a big issue,
17:48
and it was pointed out yesterday morning
by several guest speakers.
17:50
Now, prior to me showing up onstage,
17:55
I confessed I'm a positive,
optimistic person,
17:57
so let me give you an optimistic pitch,
18:01
which is, think of yourself
back 300 years ago.
18:04
Europe just survived 140 years
of continuous war,
18:08
none of you could read or write,
18:12
there were no jobs that you hold today,
18:14
like investment banker
or software engineer or TV anchor.
18:17
We would all be in the fields and farming.
18:21
Now here comes little Sebastian
with a little steam engine in his pocket,
18:24
saying, "Hey guys, look at this.
18:27
It's going to make you 100 times
as strong, so you can do something else."
18:29
And then back in the day,
there was no real stage,
18:32
but Chris and I hang out
with the cows in the stable,
18:35
and he says, "I'm really
concerned about it,
18:38
because I milk my cow every day,
and what if the machine does this for me?"
18:40
The reason why I mention this is,
18:43
we're always good at acknowledging
past progress and the benefit of it,
18:46
like our iPhones or our planes
or electricity or medical supply.
18:49
We all love to live to 80,
which was impossible 300 years ago.
18:53
But we kind of don't apply
the same rules to the future.
18:57
So if I look at my own job as a CEO,
19:02
I would say 90 percent
of my work is repetitive,
19:05
I don't enjoy it,
19:09
I spend about four hours per day
on stupid, repetitive email.
19:10
And I'm burning to have something
that helps me get rid of this.
19:14
Why?
19:18
Because I believe all of us
are insanely creative;
19:19
I think the TED community
more than anybody else.
19:22
But even blue-collar workers;
I think you can go to your hotel maid
19:25
and have a drink with him or her,
19:29
and an hour later,
you find a creative idea.
19:31
What this will empower
is to turn this creativity into action.
19:34
Like, what if you could
build Google in a day?
19:39
What if you could sit over beer
and invent the next Snapchat,
19:43
whatever it is,
19:46
and tomorrow morning it's up and running?
19:47
And that is not science fiction.
19:49
What's going to happen is,
19:51
we are already in history.
19:53
We've unleashed this amazing creativity
19:54
by de-slaving us from farming
19:58
and later, of course, from factory work
19:59
and have invented so many things.
20:03
It's going to be even better,
in my opinion.
20:06
And there's going to be
great side effects.
20:08
One of the side effects will be
20:10
that things like food and medical supply
and education and shelter
20:12
and transportation
20:17
will all become much more
affordable to all of us,
20:18
not just the rich people.
20:20
CA: Hmm.
20:22
So when Martin Ford argued, you know,
that this time it's different
20:23
because the intelligence
that we've used in the past
20:27
to find new ways to be
20:31
will be matched at the same pace
20:33
by computers taking over those things,
20:35
what I hear you saying
is that, not completely,
20:38
because of human creativity.
20:41
Do you think that that's fundamentally
different from the kind of creativity
20:44
that computers can do?
20:48
ST: So, that's my firm
belief as an AI person --
20:50
that I haven't seen
any real progress on creativity
20:55
and out-of-the-box thinking.
20:59
What I see right now -- and this is
really important for people to realize,
21:01
because the word "artificial
intelligence" is so threatening,
21:05
and then we have Steven Spielberg
tossing a movie in,
21:07
where all of a sudden
the computer is our overlord,
21:10
but it's really a technology.
21:12
It's a technology that helps us
do repetitive things.
21:14
And the progress has been
entirely on the repetitive end.
21:17
It's been in legal document discovery.
21:20
It's been contract drafting.
21:22
It's been screening X-rays of your chest.
21:24
And these things are so specialized,
21:28
I don't see the big threat of humanity.
21:30
In fact, we as people --
21:32
I mean, let's face it:
we've become superhuman.
21:34
We've made ourselves superhuman.
21:36
We can swim across
the Atlantic in 11 hours.
21:38
We can take a device out of our pocket
21:41
and shout all the way to Australia,
21:43
and in real time, have that person
shouting back to us.
21:45
That's physically not possible.
We're breaking the rules of physics.
21:48
When this is said and done,
we're going to remember everything
21:51
we've ever said and seen,
21:54
you'll remember every person,
21:56
which is good for me
in my early stages of Alzheimer's.
21:57
Sorry, what was I saying? I forgot.
22:00
CA: (Laughs)
22:02
ST: We will probably have
an IQ of 1,000 or more.
22:03
There will be no more
spelling classes for our kids,
22:06
because there's no spelling issue anymore.
22:10
There's no math issue anymore.
22:12
And I think what really will happen
is that we can be super creative.
22:14
And we are. We are creative.
22:17
That's our secret weapon.
22:19
CA: So the jobs that are getting lost,
22:21
in a way, even though
it's going to be painful,
22:23
humans are capable
of more than those jobs.
22:25
This is the dream.
22:27
The dream is that humans can rise
to just a new level of empowerment
22:29
and discovery.
22:33
That's the dream.
22:35
ST: And think about this:
22:36
if you look at the history of humanity,
22:38
that might be whatever --
60-100,000 years old, give or take --
22:40
almost everything that you cherish
in terms of invention,
22:43
of technology, of things we've built,
22:47
has been invented in the last 150 years.
22:49
If you toss in the book and the wheel,
it's a little bit older.
22:53
Or the axe.
22:56
But your phone, your sneakers,
22:58
these chairs, modern
manufacturing, penicillin --
23:00
the things we cherish.
23:04
Now, that to me means
23:06
the next 150 years will find more things.
23:09
In fact, the pace of invention
has gone up, not gone down, in my opinion.
23:12
I believe only one percent of interesting
things have been invented yet. Right?
23:17
We haven't cured cancer.
23:22
We don't have flying cars -- yet.
Hopefully, I'll change this.
23:24
That used to be an example
people laughed about. (Laughs)
23:27
It's funny, isn't it?
Working secretly on flying cars.
23:31
We don't live twice as long yet. OK?
23:34
We don't have this magic
implant in our brain
23:36
that gives us the information we want.
23:39
And you might be appalled by it,
23:41
but I promise you,
once you have it, you'll love it.
23:42
I hope you will.
23:45
It's a bit scary, I know.
23:46
There are so many things
we haven't invented yet
23:48
that I think we'll invent.
23:50
We have no gravity shields.
23:52
We can't beam ourselves
from one location to another.
23:53
That sounds ridiculous,
23:56
but about 200 years ago,
23:57
experts were of the opinion
that flight wouldn't exist,
23:58
even 120 years ago,
24:01
and if you moved faster
than you could run,
24:02
you would instantly die.
24:05
So who says we are correct today
that you can't beam a person
24:06
from here to Mars?
24:10
CA: Sebastian, thank you so much
24:12
for your incredibly inspiring vision
and your brilliance.
24:14
Thank you, Sebastian Thrun.
24:16
ST: That was fantastic. (Applause)
24:18


About the speakers:

Sebastian Thrun - Engineer
Sebastian Thrun is the director of the Stanford Artificial Intelligence Lab and is working, through robotics, to change the way we understand the world.

Why you should listen

Sebastian Thrun is a professor of Computer Science and Electrical Engineering at Stanford University, where he also serves as the Director of the Stanford AI Lab. His research focuses on robotics and artificial intelligence. He led the development of the robotic vehicle Stanley which won the 2005 DARPA Grand Challenge, and which is exhibited in the Smithsonian. 


Chris Anderson - TED Curator
After a long career in journalism and publishing, Chris Anderson became the curator of the TED Conference in 2002 and has developed it as a platform for identifying and disseminating ideas worth spreading.

Why you should listen

Chris Anderson is the Curator of TED, a nonprofit devoted to sharing valuable ideas, primarily through the medium of 'TED Talks' -- short talks that are offered free online to a global audience.

Chris was born in a remote village in Pakistan in 1957. He spent his early years in India, Pakistan and Afghanistan, where his parents worked as medical missionaries, and he attended an American school in the Himalayas for his early education. After boarding school in Bath, England, he went on to Oxford University, graduating in 1978 with a degree in philosophy, politics and economics.

Chris then trained as a journalist, working in newspapers and radio, including two years producing a world news service in the Seychelles Islands.

Back in the UK in 1984, Chris was captivated by the personal computer revolution and became an editor at one of the UK's early computer magazines. A year later he founded Future Publishing with a $25,000 bank loan. The new company initially focused on specialist computer publications but eventually expanded into other areas such as cycling, music, video games, technology and design, doubling in size every year for seven years. In 1994, Chris moved to the United States where he built Imagine Media, publisher of Business 2.0 magazine and creator of the popular video game users website IGN. Chris eventually merged Imagine and Future, taking the combined entity public in London in 1999, under the Future name. At its peak, it published 150 magazines and websites and employed 2,000 people.

This success allowed Chris to create a private nonprofit organization, the Sapling Foundation, with the hope of finding new ways to tackle tough global issues through media, technology, entrepreneurship and, most of all, ideas. In 2001, the foundation acquired the TED Conference, then an annual meeting of luminaries in the fields of Technology, Entertainment and Design held in Monterey, California, and Chris left Future to work full time on TED.

He expanded the conference's remit to cover all topics, including science, business and key global issues, while adding a Fellows program, which now has some 300 alumni, and the TED Prize, which grants its recipients "one wish to change the world." The TED stage has become a place for thinkers and doers from all fields to share their ideas and their work, capturing imaginations, sparking conversation and encouraging discovery along the way.

In 2006, TED experimented with posting some of its talks on the Internet. Their viral success encouraged Chris to begin positioning the organization as a global media initiative devoted to 'ideas worth spreading,' part of a new era of information dissemination using the power of online video. In June 2015, the organization posted its 2,000th talk online. The talks are free to view, and they have been translated into more than 100 languages with the help of volunteers from around the world. Viewership has grown to approximately one billion views per year.

Continuing a strategy of 'radical openness,' in 2009 Chris introduced the TEDx initiative, allowing free licenses to local organizers who wished to organize their own TED-like events. More than 8,000 such events have been held, generating an archive of 60,000 TEDx talks. And three years later, the TED-Ed program was launched, offering free educational videos and tools to students and teachers.
