ABOUT THE SPEAKER
Tan Le - Entrepreneur
Tan Le is the founder & CEO of Emotiv, a bioinformatics company that's working on identifying biomarkers for mental and other neurological conditions using electroencephalography (EEG).

Why you should listen

Tan Le is the co-founder and president of Emotiv. Before this, she headed a firm that worked on a new form of remote control that uses brainwaves to control digital devices and digital media. It's long been a dream to bypass the mechanical (mouse, keyboard, clicker) and have our digital devices respond directly to what we think. Emotiv's EPOC headset uses 16 sensors to listen to activity across the entire brain. Software "learns" what each user's brain activity looks like when one, for instance, imagines a left turn or a jump.

Le herself has an extraordinary story -- a refugee from Vietnam at age 4, she entered college at 16 and has since become a vital young leader in her home country of Australia.

TEDGlobal 2010

Tan Le: A headset that reads your brainwaves

2,732,929 views

Tan Le's astonishing new computer interface reads its user's brainwaves, making it possible to control virtual objects, and even physical electronics, with mere thoughts (and a little concentration). She demos the headset, and talks about its far-reaching applications.


00:16
Up until now, our communication with machines has always been limited to conscious and direct forms. Whether it's something simple like turning on the lights with a switch, or even as complex as programming robotics, we have always had to give a command to a machine, or even a series of commands, in order for it to do something for us.

00:37
Communication between people, on the other hand, is far more complex and a lot more interesting, because we take into account so much more than what is explicitly expressed. We observe facial expressions, body language, and we can intuit feelings and emotions from our dialogue with one another. This actually forms a large part of our decision-making process.

00:59
Our vision is to introduce this whole new realm of human interaction into human-computer interaction, so that computers can understand not only what you direct them to do, but also respond to your facial expressions and emotional experiences. And what better way to do this than by interpreting the signals naturally produced by our brain, our center for control and experience?

01:25
Well, it sounds like a pretty good idea, but this task, as Bruno mentioned, isn't an easy one, for two main reasons: first, the detection algorithms.

01:35
Our brain is made up of billions of active neurons, around 170,000 km of combined axon length. When these neurons interact, the chemical reaction emits an electrical impulse, which can be measured.

01:50
The majority of our functional brain is distributed over the outer surface layer of the brain, and to increase the area that's available for mental capacity, the brain surface is highly folded. Now this cortical folding presents a significant challenge for interpreting surface electrical impulses. Each individual's cortex is folded differently, very much like a fingerprint. So even though a signal may come from the same functional part of the brain, by the time the structure has been folded, its physical location is very different between individuals, even identical twins. There is no longer any consistency in the surface signals.

02:34
Our breakthrough was to create an algorithm that unfolds the cortex, so that we can map the signals closer to their source, and therefore make it capable of working across a mass population.

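Emotiv has not published the unfolding algorithm itself, so no code for it appears here. As a rough illustration of the kind of processing every EEG pipeline starts with -- turning raw surface signals into per-channel band-power features that a detection algorithm can work from -- here is a minimal Python sketch on synthetic data; the sampling rate and frequency bands are illustrative assumptions, not EPOC specifications.

```python
# Illustrative only: not Emotiv's proprietary cortical-unfolding algorithm.
# Shows a generic first step of EEG interpretation: per-channel band power.
# Sampling rate and band edges are assumptions, not EPOC specifications.
import numpy as np
from scipy.signal import welch

FS = 128  # assumed sampling rate, Hz
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(eeg: np.ndarray) -> np.ndarray:
    """eeg: (channels, samples) array -> flat (channels * bands) feature vector."""
    feats = []
    for ch in eeg:
        f, pxx = welch(ch, fs=FS, nperseg=FS * 2)  # power spectral density
        df = f[1] - f[0]
        for lo, hi in BANDS.values():
            mask = (f >= lo) & (f < hi)
            feats.append(pxx[mask].sum() * df)  # approximate band power
    return np.array(feats)

# 14 channels x 8 seconds of synthetic noise standing in for a real recording
rng = np.random.default_rng(0)
print(band_powers(rng.standard_normal((14, FS * 8))).shape)  # -> (42,)
```
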
02:46
The second challenge is the actual device for observing brainwaves. EEG measurements typically involve a hairnet with an array of sensors, like the one that you can see here in the photo. A technician will put the electrodes onto the scalp using a conductive gel or paste, usually after preparing the scalp by light abrasion. Now this is quite time-consuming and isn't the most comfortable process. And on top of that, these systems actually cost in the tens of thousands of dollars.

03:20
So with that, I'd like to invite onstage Evan Grant, one of last year's speakers, who's kindly agreed to help me demonstrate what we've been able to develop. (Applause)

03:37
So the device that you see is a 14-channel, high-fidelity EEG acquisition system. It doesn't require any scalp preparation, no conductive gel or paste. It only takes a few minutes to put on and for the signals to settle. It's also wireless, so it gives you the freedom to move around. And compared to the tens of thousands of dollars for a traditional EEG system, this headset only costs a few hundred dollars.

04:08
Now on to the detection algorithms. So facial expressions -- and, as I mentioned before, emotional experiences -- are actually designed to work out of the box, with some sensitivity adjustments available for personalization.

04:22
But with the limited time we have available, I'd like to show you the cognitive suite, which is the ability for you to basically move virtual objects with your mind.

04:32
Now, Evan is new to this system, so what we have to do first is create a new profile for him. He's obviously not Joanne -- so we'll "add user." Evan. Okay.

04:43
So the first thing we need to do with the cognitive suite is to start with training a neutral signal. With neutral, there's nothing in particular that Evan needs to do. He just hangs out. He's relaxed. And the idea is to establish a baseline or normal state for his brain, because every brain is different. It takes eight seconds to do this, and now that that's done, we can choose a movement-based action.

05:08
So Evan, choose something that you can visualize clearly in your mind.

05:12
Evan Grant: Let's do "pull."

05:14
Tan Le: Okay, so let's choose "pull." So the idea here now is that Evan needs to imagine the object coming forward into the screen, and there's a progress bar that will scroll across the screen while he's doing that. The first time, nothing will happen, because the system has no idea how he thinks about "pull." But maintain that thought for the entire duration of the eight seconds. So: one, two, three, go.

05:49
Okay. So once we accept this, the cube is live. So let's see if Evan can actually try and imagine pulling. Ah, good job! (Applause) That's really amazing. (Applause)

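The flow just demonstrated -- record an eight-second neutral baseline, record one eight-second instance of an action, then classify live -- is, at its core, a tiny supervised learning loop. Here is a minimal sketch under stated assumptions: the feature vectors are synthetic stand-ins for features extracted from headset windows, and the nearest-centroid rule is illustrative, not Emotiv's actual (unpublished) classifier.

```python
# Minimal sketch of the train-neutral-then-train-action flow from the demo.
# Feature vectors are synthetic stand-ins for 8-second EEG feature windows;
# the nearest-centroid rule is an assumption, not the real Emotiv classifier.
import numpy as np

rng = np.random.default_rng(1)
DIM = 42  # e.g. 14 channels x 3 band powers

def record_window(bias: float) -> np.ndarray:
    """Stand-in for 8 seconds of headset features; bias fakes a mental state."""
    return rng.standard_normal(DIM) + bias

# Step 1: train "neutral" -- the user just relaxes for eight seconds.
profile = {"neutral": record_window(0.0)}

# Step 2: train an action from a single instance -- one held thought, "pull".
profile["pull"] = record_window(1.5)

def detect(features: np.ndarray, profile: dict) -> str:
    """Nearest trained centroid wins; 'neutral' means no command fires."""
    return min(profile, key=lambda k: np.linalg.norm(features - profile[k]))

print(detect(record_window(1.5), profile))  # -> likely "pull"
print(detect(record_window(0.0), profile))  # -> likely "neutral"
```
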
06:11
So we have a little bit of time available, so I'm going to ask Evan to do a really difficult task. And this one is difficult because it's all about being able to visualize something that doesn't exist in our physical world. This is "disappear." Movement-based actions we do all the time, so we can visualize them; but with "disappear," there's really no analogy. So Evan, what you want to do here is to imagine the cube slowly fading out, okay? Same sort of drill. So: one, two, three, go.

06:50
Okay. Let's try that. Oh, my goodness. He's just too good. Let's try that again.

07:04
EG: Losing concentration. (Laughter)

07:08
TL: But we can see that it actually works, even though you can only hold it for a little bit of time. As I said, it's a very difficult process to imagine this. And the great thing about it is that we've only given the software one instance of how he thinks about "disappear." As there is a machine learning algorithm in this -- (Applause) Thank you. Good job. Good job. (Applause) Thank you, Evan, you're a wonderful, wonderful example of the technology.

07:46
So, as you saw before, there is a leveling system built into this software, so that as Evan, or any user, becomes more familiar with the system, they can continue to add more and more detections, so that the system begins to differentiate between distinct thoughts.

08:04
And once you've trained up the detections, these thoughts can be assigned or mapped to any computing platform, application or device. So I'd like to show you a few examples, because there are many possible applications for this new interface.

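Once detections are trained, wiring them to a platform is ordinary event plumbing. Here is a sketch, assuming a hypothetical stream of (detection, power) events from the headset SDK; the per-action thresholds echo the sensitivity adjustments mentioned earlier.

```python
# Sketch of mapping trained detections to arbitrary commands. The event
# stream, threshold values and command callbacks are hypothetical; in a real
# system events would come from the headset SDK, commands from the target app.
from typing import Callable

bindings: dict[str, tuple[float, Callable[[], None]]] = {
    # detection: (sensitivity threshold, command to fire)
    "pull":      (0.6, lambda: print("avatar: pull object")),
    "disappear": (0.8, lambda: print("avatar: fade object out")),
}

def dispatch(detection: str, power: float) -> None:
    """Fire the bound command when the detection clears its threshold."""
    if detection in bindings:
        threshold, command = bindings[detection]
        if power >= threshold:
            command()

# Simulated stream of detection events from the headset
for event in [("pull", 0.9), ("disappear", 0.5), ("disappear", 0.85)]:
    dispatch(*event)
```
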
08:19
In games and virtual worlds, for example, your facial expressions can naturally and intuitively be used to control an avatar or virtual character. Obviously, you can experience the fantasy of magic and control the world with your mind. And also, colors, lighting, sound and effects can dynamically respond to your emotional state to heighten the experience that you're having, in real time.

08:47
And moving on to some applications developed by developers and researchers around the world, with robots and simple machines, for example -- in this case, flying a toy helicopter simply by thinking "lift" with your mind.

09:00
The technology can also be applied to real-world applications -- in this example, a smart home: from the user interface of the control system to opening or closing the curtains, and of course also to the lights, turning them on or off.

09:30
And finally, to real life-changing applications, such as being able to control an electric wheelchair. In this example, facial expressions are mapped to the movement commands.

09:42
Man: Now blink right to go right. Now blink left to turn back left. Now smile to go straight.

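The wheelchair demo is the same mapping idea, with facial-expression events in place of trained thoughts. A sketch with hypothetical event names and a fail-safe default; a real controller would add timeouts and hardware safety interlocks.

```python
# Sketch of the wheelchair mapping described above: facial-expression events
# drive movement commands. Event names and the "stop" fail-safe are
# assumptions; a real controller would also enforce timeouts and interlocks.
EXPRESSION_TO_COMMAND = {
    "blink_right": "turn_right",
    "blink_left":  "turn_left",
    "smile":       "forward",
    "neutral":     "stop",  # assumed fail-safe: no held expression means stop
}

def drive(expression: str) -> str:
    """Translate one detected expression into a wheelchair motor command."""
    return EXPRESSION_TO_COMMAND.get(expression, "stop")

for detected in ["blink_right", "smile", "neutral", "blink_left"]:
    print(detected, "->", drive(detected))
```
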
10:08
TL: We really -- Thank you. (Applause) We are really only scratching the surface of what is possible today, and with the community's input, and also with the involvement of developers and researchers from around the world, we hope that you can help us to shape where the technology goes from here. Thank you so much.
