
TED@IBM

Raphael Arar: How we can teach computers to make sense of our emotions


How can we make AI that people actually want to interact with? Raphael Arar suggests we start by making art. He shares interactive projects that help AI explore complex ideas like nostalgia, intuition and conversation -- all working towards the goal of making our future technology just as much human as it is artificial.

Raphael Arar - Designer, researcher
IBM's Raphael Arar creates art and designs experiences that examine the complexities of our increasingly technocentric lives.

I consider myself one part artist
and one part designer.
00:13
And I work at an artificial
intelligence research lab.
00:18
We're trying to create technology
00:22
that you'll want to interact with
in the far future.
00:24
Not just six months from now,
but try years and decades from now.
00:27
And we're taking a moonshot
00:33
that we'll want to be
interacting with computers
00:34
in deeply emotional ways.
00:37
So in order to do that,
00:40
the technology has to be
just as much human as it is artificial.
00:41
It has to get you.
00:46
You know, like that inside joke
that'll have you and your best friend
00:49
on the floor, cracking up.
00:52
Or that look of disappointment
that you can just smell from miles away.
00:54
I view art as the gateway to help us
bridge this gap between human and machine:
01:00
to figure out what it means
to get each other
01:07
so that we can train AI to get us.
01:10
See, to me, art is a way
to put tangible experiences
01:13
to intangible ideas,
feelings and emotions.
01:17
And I think it's one
of the most human things about us.
01:21
See, we're a complicated
and complex bunch.
01:25
We have what feels like
an infinite range of emotions,
01:28
and to top it off, we're all different.
01:31
We have different family backgrounds,
01:34
different experiences
and different psychologies.
01:36
And this is what makes life
really interesting.
01:40
But this is also what makes
working on intelligent technology
01:43
extremely difficult.
01:46
And right now, AI research, well,
01:49
it's a bit lopsided on the tech side.
01:53
And that makes a lot of sense.
01:55
See, for every
qualitative thing about us --
01:57
you know, those parts of us that are
emotional, dynamic and subjective --
01:59
we have to convert it
to a quantitative metric:
02:04
something that can be represented
with facts, figures and computer code.
02:07
The issue is, there are
many qualitative things
02:13
that we just can't put our finger on.
02:16
So, think about hearing
your favorite song for the first time.
02:20
What were you doing?
02:25
How did you feel?
02:28
Did you get goosebumps?
02:30
Or did you get fired up?
02:33
Hard to describe, right?
02:36
See, parts of us feel so simple,
02:38
but under the surface,
there's really a ton of complexity.
02:40
And translating
that complexity to machines
02:44
is what makes them modern-day moonshots.
02:47
And I'm not convinced that we can
answer these deeper questions
02:50
with just ones and zeros alone.
02:54
So, in the lab, I've been creating art
02:57
as a way to help me
design better experiences
02:59
for bleeding-edge technology.
03:01
And it's been serving as a catalyst
03:03
to beef up the more human ways
that computers can relate to us.
03:05
Through art, we're tackling
some of the hardest questions,
03:10
like what does it really mean to feel?
03:12
Or how do we engage and know
how to be present with each other?
03:16
And how does intuition
affect the way that we interact?
03:20
So, take for example human emotion.
03:26
Right now, computers can make sense
of our most basic ones,
03:28
like joy, sadness,
anger, fear and disgust,
03:31
by converting those
characteristics to math.
03:35
But what about the more complex emotions?
03:39
You know, those emotions
03:41
that we have a hard time
describing to each other?
03:43
Like nostalgia.
03:45
So, to explore this, I created
a piece of art, an experience,
03:47
that asked people to share a memory,
03:51
and I teamed up with some data scientists
03:53
to figure out how to take
an emotion that's so highly subjective
03:55
and convert it into something
mathematically precise.
03:59
So, we created what we call
a nostalgia score
04:03
and it's the heart of this installation.
04:06
To do that, the installation
asks you to share a story,
04:08
the computer then analyzes it
for its simpler emotions,
04:11
it checks for your tendency
to use past-tense wording
04:14
and also looks for words
that we tend to associate with nostalgia,
04:17
like "home," "childhood" and "the past."
04:20
It then creates a nostalgia score
04:24
to indicate how nostalgic your story is.
04:26
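The scoring steps described above — analyzing a story for simpler emotions, checking for past-tense wording, and looking for nostalgia-associated words like "home," "childhood" and "the past" — could be sketched roughly as follows. This is not Arar's actual implementation; the word list, the suffix heuristic and the weights are all invented for illustration.

```python
# Hypothetical sketch of a "nostalgia score" along the lines described
# in the talk. Word lists and weights are assumptions, not the real system.

NOSTALGIA_WORDS = {"home", "childhood", "past", "remember", "childhood's"}

def nostalgia_score(story: str) -> float:
    """Return a 0..1 score for how nostalgic a story sounds."""
    words = [w.strip(".,!?\"'").lower() for w in story.split()]
    if not words:
        return 0.0
    # Crude past-tense heuristic: count words ending in "-ed".
    past = sum(1 for w in words if w.endswith("ed"))
    # Count words we associate with nostalgia.
    keyword = sum(1 for w in words if w in NOSTALGIA_WORDS)
    # Blend the two signals into a single score (weights are arbitrary).
    score = 0.5 * (past / len(words)) + 0.5 * min(1.0, keyword / 3)
    return min(1.0, score)
```

In the installation, a score like this then drives the hue of the light sculptures: the higher the score, the rosier the light.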
And that score is the driving force
behind these light-based sculptures
04:29
that serve as physical embodiments
of your contribution.
04:33
And the higher the score,
the rosier the hue.
04:37
You know, like looking at the world
through rose-colored glasses.
04:40
So, when you see your score
04:44
and the physical representation of it,
04:47
sometimes you'd agree
and sometimes you wouldn't.
04:50
It's as if it really understood
how that experience made you feel.
04:53
But other times it gets tripped up
04:57
and has you thinking
it doesn't understand you at all.
04:59
But the piece really serves to show
05:02
that if we have a hard time explaining
the emotions that we have to each other,
05:04
how can we teach a computer
to make sense of them?
05:08
So, even the more objective parts
about being human are hard to describe.
05:12
Like, conversation.
05:15
Have you ever really tried
to break down the steps?
05:17
So think about sitting
with your friend at a coffee shop
05:20
and just having small talk.
05:23
How do you know when to take a turn?
05:25
How do you know when to shift topics?
05:27
And how do you even know
what topics to discuss?
05:29
See, most of us
don't really think about it,
05:33
because it's almost second nature.
05:35
And when we get to know someone,
we learn more about what makes them tick,
05:37
and then we learn
what topics we can discuss.
05:40
But when it comes to teaching
AI systems how to interact with people,
05:43
we have to teach them
step by step what to do.
05:46
And right now, it feels clunky.
05:49
If you've ever tried to talk
with Alexa, Siri or Google Assistant,
05:53
you can tell that it or they
can still sound cold.
05:57
And have you ever gotten annoyed
06:02
when they didn't understand
what you were saying
06:04
and you had to rephrase what you wanted
20 times just to play a song?
06:06
Alright, to the credit of the designers,
realistic communication is really hard.
06:11
And there's a whole branch of sociology,
06:16
called conversation analysis,
06:18
that tries to make blueprints
for different types of conversation.
06:20
Types like customer service
or counseling, teaching and others.
06:23
I've been collaborating
with a conversation analyst at the lab
06:28
to try to help our AI systems
hold more human-sounding conversations.
06:31
This way, when you have an interaction
with a chatbot on your phone
06:36
or a voice-based system in the car,
06:39
it sounds a little more human
and less cold and disjointed.
06:41
So I created a piece of art
06:46
that tries to highlight
the robotic, clunky interaction
06:47
to help us understand, as designers,
06:50
why it doesn't sound human yet
and, well, what we can do about it.
06:52
The piece is called Bot to Bot
06:57
and it puts one conversational
system against another
06:58
and then exposes it to the general public.
07:01
And what ends up happening
is that you get something
07:04
that tries to mimic human conversation,
07:06
but falls short.
07:08
Sometimes it works and sometimes
it gets into these, well,
07:10
loops of misunderstanding.
07:13
So even though the machine-to-machine
conversation can make sense,
07:14
grammatically and colloquially,
07:17
it can still end up
feeling cold and robotic.
07:20
And despite checking all the boxes,
the dialogue lacks soul
07:23
and those one-off quirks
that make each of us who we are.
07:27
So while it might be grammatically correct
07:30
and uses all the right
hashtags and emojis,
07:32
it can end up sounding mechanical
and, well, a little creepy.
07:35
And we call this the uncanny valley.
07:39
You know, that creepiness factor of tech
07:41
where it's close to human
but just slightly off.
07:43
And the piece will start being
07:46
one way that we test
for the humanness of a conversation
07:48
and the parts that get
lost in translation.
07:51
So there are other things
that get lost in translation, too,
07:54
like human intuition.
07:57
Right now, computers
are gaining more autonomy.
07:59
They can take care of things for us,
08:01
like change the temperature
of our houses based on our preferences
08:03
and even help us drive on the freeway.
08:06
But there are things
that you and I do in person
08:09
that are really difficult
to translate to AI.
08:12
So think about the last time
that you saw an old classmate or coworker.
08:15
Did you give them a hug
or go in for a handshake?
08:21
You probably didn't think twice
08:24
because you've had so many
built-up experiences
08:26
that had you do one or the other.
08:28
And as an artist, I feel
that access to one's intuition,
08:31
your unconscious knowing,
08:34
is what helps us create amazing things.
08:36
Big ideas, from that abstract,
nonlinear place in our consciousness
08:39
that is the culmination
of all of our experiences.
08:43
And if we want computers to relate to us
and help amplify our creative abilities,
08:47
I feel that we'll need to start thinking
about how to make computers be intuitive.
08:52
So I wanted to explore
how something like human intuition
08:56
could be directly translated
to artificial intelligence.
08:59
And I created a piece
that explores computer-based intuition
09:03
in a physical space.
09:06
The piece is called Wayfinding,
09:08
and it's set up as a symbolic compass
that has four kinetic sculptures.
09:10
Each one represents a direction,
09:14
north, east, south and west.
09:16
And there are sensors set up
on the top of each sculpture
09:19
that capture how far away
you are from them.
09:21
And the data that gets collected
09:24
ends up changing the way
that sculptures move
09:25
and the direction of the compass.
09:28
The thing is, the piece doesn't work
like the automatic door sensor
09:31
that just opens
when you walk in front of it.
09:35
See, your contribution is only a part
of its collection of lived experiences.
09:37
And all of those experiences
affect the way that it moves.
09:42
So when you walk in front of it,
09:46
it starts to use all of the data
09:48
that it's captured
throughout its exhibition history --
09:50
or its intuition --
09:53
to mechanically respond to you
based on what it's learned from others.
09:55
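The behavior just described — a sculpture that responds not only to the visitor in front of it but to everything it has sensed over its exhibition history — could be sketched like this. The response curve and memory size are assumptions for illustration, not the actual mechanics of Wayfinding.

```python
from collections import deque

class KineticSculpture:
    """Hypothetical sketch of one Wayfinding sculpture: it accumulates
    distance readings over its exhibition history (its "intuition") and
    responds based on what it has learned from everyone, not just you."""

    def __init__(self, direction: str, memory: int = 1000):
        self.direction = direction
        self.history = deque(maxlen=memory)  # all prior sensor readings

    def sense(self, distance_m: float) -> float:
        """Record a visitor's distance and return a movement intensity
        shaped by every reading captured so far, not this one alone."""
        self.history.append(distance_m)
        typical = sum(self.history) / len(self.history)
        # Move more when the visitor is closer than what the piece has
        # typically experienced (arbitrary response curve).
        return max(0.0, (typical - distance_m) / max(typical, 1e-9))
```

This is what separates it from an automatic door sensor: two visitors standing at the same distance can get different responses, because each response depends on the sculpture's accumulated history.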
And what ends up happening
is that as participants
09:59
we start to learn the level
of detail that we need
10:02
in order to manage expectations
10:04
from both humans and machines.
10:06
We can almost see our intuition
being played out on the computer,
10:09
picturing all of that data
being processed in our mind's eye.
10:13
My hope is that this type of art
10:17
will help us think differently
about intuition
10:19
and how to apply that to AI in the future.
10:21
So these are just a few examples
of how I'm using art to feed into my work
10:24
as a designer and researcher
of artificial intelligence.
10:28
And I see it as a crucial way
to move innovation forward.
10:31
Because right now, there are
a lot of extremes when it comes to AI.
10:35
Popular movies show it
as this destructive force
10:39
while commercials
are showing it as a savior
10:42
to solve some of the world's
most complex problems.
10:45
But regardless of where you stand,
10:48
it's hard to deny
that we're living in a world
10:50
that's becoming more
and more digital by the second.
10:53
Our lives revolve around our devices,
smart appliances and more.
10:55
And I don't think
this will let up any time soon.
11:01
So, I'm trying to embed
more humanness from the start.
11:04
And I have a hunch that bringing art
into an AI research process
11:08
is a way to do just that.
11:13
Thank you.
11:15
(Applause)
11:16


About the speaker:

Raphael Arar - Designer, researcher

Why you should listen

While his artwork raises questions about our relationship with modernity and technology, Raphael Arar’s design work revolves around answering those questions in human-centered ways. 

Currently a designer and researcher at IBM Research, Arar has been exploring ways to translate aspects of emotional intelligence into machine-readable format. Through this, he is passionately tackling ethical platforms of artificial intelligence. Inc. Magazine says he "epitomizes the style and multi-disciplinary skill set of the modern designer," and in 2017, he was listed as one of Forbes's "30 under 30 in Enterprise Technology."
