Originally published at O’Reilly Radar: When we talk about artificial intelligence, we often make an unexamined assumption: that intelligence, understood as rational thought, is the same thing as mind. We use metaphors like “the brain’s operating system” or “thinking machines,” without always noticing their implicit bias.
But if what we are trying to build is artificial minds, we need only look at a map of the brain to see that in the domain we’re tackling, intelligence might be the smaller, easier part.
Maybe that’s why we started with it.
After all, the rational part of our brain is a relatively recent add-on. Setting aside unconscious processes, most of our gray matter is devoted not to thinking, but to feeling.
There was a time when we deprecated this larger part of the mind as something we should either ignore or, if it got unruly, control.
But now we understand that, as troublesome as they may sometimes be, emotions are essential to being fully conscious. For one thing, as neurologist Antonio Damasio has demonstrated, we need them in order to make decisions. A certain kind of brain damage leaves the intellect unharmed, but removes the emotions. People with this affliction tend to analyze options endlessly, never settling on a final choice.
But that’s far from all: feelings condition our ability to do just about anything. Just as an engine needs fuel or a computer needs electricity, humans need love, respect, and a sense of purpose.
Consider that feeling unloved can cause crippling depression. Feeling disrespected is a leading trigger of anger or even violence. And one of the toughest forms of punishment is being made to feel lonely, through solitary confinement — too much of it can cause people to go insane.
All this by way of saying that while we’re working on AI, we need to remember to include AC: artificial compassion.
To some extent, we already do. Consider the recommendation engine, as deployed by Amazon.com, Pandora, and others (“If you like this, you might like that”). It’s easy to see it as an intelligence feature, simplifying our searches. But it’s also a compassion feature: if you feel a recommendation engine “gets” you, you’re likely to bond with it, which may be irrational but is no less valuable for that.
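To make that mechanism concrete, here’s a minimal sketch of the “you might like that” step, assuming a toy item-to-item similarity approach over explicit ratings. The item names and numbers are hypothetical, and real systems at Amazon.com or Pandora rely on far richer signals and models:

```python
# A minimal, hypothetical sketch of item-to-item recommendation using
# cosine similarity over a toy ratings matrix. Not how any real service
# implements it; just the basic shape of "people who liked X also liked Y".
import numpy as np

# Rows are users, columns are items; 0 means "not rated".
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)
items = ["Book A", "Book B", "Book C", "Book D"]

def cosine_sim(a, b):
    """Cosine similarity between two item rating vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return a @ b / denom if denom else 0.0

def recommend(liked_item, top_n=2):
    """Return the items whose rating patterns most resemble the liked item."""
    i = items.index(liked_item)
    scores = [
        (items[j], cosine_sim(ratings[:, i], ratings[:, j]))
        for j in range(len(items)) if j != i
    ]
    return sorted(scores, key=lambda s: s[1], reverse=True)[:top_n]

print(recommend("Book A"))  # e.g. [('Book B', ...), ('Book C', ...)]
```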
Or think of voice interfaces, also known as interactive voice response, or IVR, systems. They may boost convenience and productivity, but experience shows that if they fail at compassion, they get very annoying, very fast.
A while back, as the consulting creative director for BeVocal, I helped design such interfaces for Sprint and others. That required some technical knowledge, including familiarity with script-writing, audio production, and VoiceXML. But mostly, what was needed was empathy: imagining the emotional state of the user at any given point.
For example, it’s important for voice systems to apologize for errors, but not too often. It turns out that if you apologize too much, people hate it. You need to strike a balance: show that you care about what users want, without sounding obsequious and incompetent.
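In code, that balance might look something like the following sketch, which throttles apologies as errors pile up. The thresholds and prompt wording are illustrative assumptions, not taken from any system I worked on:

```python
# A hedged sketch of the "apologize, but not too often" rule for a voice
# interface. Thresholds and phrasing are hypothetical.
class ApologyPolicy:
    def __init__(self, max_apologies=2):
        self.error_count = 0
        self.max_apologies = max_apologies

    def error_prompt(self):
        """Pick a reprompt that acknowledges the error without groveling."""
        self.error_count += 1
        if self.error_count == 1:
            return "Sorry, I didn't catch that. Could you say it again?"
        elif self.error_count <= self.max_apologies:
            return "Sorry. You can say 'billing', 'support', or 'agent'."
        else:
            # Stop apologizing: a third "sorry" starts to sound obsequious.
            return "Let's try another way. Press 1 for billing or 2 for support."

policy = ApologyPolicy()
for _ in range(4):
    print(policy.error_prompt())
```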
I learned much about this (and more) from another BeVocal consultant, human-computer interaction pioneer Clifford Nass. Nass once consulted with Microsoft on how they might recover from one of the worst interface mistakes of all time: Clippy the animated paper clip, who was reviled by pretty much everyone who had to deal with his intrusive “help” while using Office 97 through 2003. (That included me: I’m not proud to say that I used to fantasize about creating tortureclippy.com, a place to bend him into all kinds of unnatural shapes.)
Clippy was part of Office Assistant, which was described by Microsoft as an “intelligent” help interface. He turned out to be not so intelligent after all — but more importantly, he wasn’t compassionate. Here’s Nass in the introduction to his book The Man Who Lied to His Laptop: What Machines Teach Us About Human Relationships (2010):
“[Clippy was] utterly clueless and oblivious to the appropriate ways to treat people … No matter how long users worked with Clippy, he never learned their names or preferences. Indeed, Clippy made it clear that he was not at all interested in getting to know them. If you think of Clippy as a person, of course he would evoke hatred and scorn.”
Microsoft retired Clippy in 2007. As a going away present, users were invited to fire staples at him.
To avoid Clippy’s fate, AC systems will need to recognize that people’s moods change from moment to moment. Human-to-human interactions are not static but dynamic. A new and possibly unpredictable exchange emerges from each previous one.
Nass proposed a dynamic form of compassion for an online classroom. In his design, the class would contain more or fewer classmate avatars, depending on how confident a student appeared to be feeling.
Such feelings would be detected through content analysis, which remains the dominant approach. It’s currently deployed by many social media tools so that marketers, for example, can determine how people feel about their products based on the presence of positive or negative terms in social posts.
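At its simplest, that kind of content analysis is a lexicon lookup: count the positive words, count the negative ones, and compare. Here’s a deliberately tiny sketch; the word lists are hypothetical stand-ins, and real tools use much larger lexicons and handle negation, emoji, and context:

```python
# A minimal sketch of lexicon-based content analysis: score a post by
# counting positive and negative terms. The lexicons here are toy examples.
POSITIVE = {"love", "great", "awesome", "happy", "excellent"}
NEGATIVE = {"hate", "terrible", "awful", "angry", "broken"}

def sentiment(post: str) -> int:
    """Positive score means the post leans positive; negative, the reverse."""
    words = [w.strip(".,!?") for w in post.lower().split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment("I love this phone, the camera is great!"))  # 2
print(sentiment("Awful battery life. I hate it."))           # -2
```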
Going forward, AC systems will need to detect meaning across additional dimensions, taking in tone of voice, facial expression, and more. Research in this area is well under way, as in the University of Washington’s Automatic Tagging and Recognition of Stance (ATAROS) project, which studies such factors as “vowel space scaling and pitch/energy velocity.” In 2010, researchers at Hebrew University announced that they’d developed a sarcasm-detection algorithm with a 77% success rate when applied to Amazon.com reviews.
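To give a feel for the shape of that task (and not the published algorithm itself, which I won’t try to reproduce here), sarcasm detection can be framed as ordinary supervised text classification. The sketch below uses scikit-learn with made-up example reviews and labels:

```python
# A toy sketch of text-based sarcasm detection as a supervised classifier.
# This is NOT the algorithm from the study mentioned above; it only
# illustrates the general task. Requires scikit-learn; examples are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = [
    "Great, another update that breaks everything. Just what I needed.",
    "Oh sure, because waiting on hold for an hour is my favorite hobby.",
    "Wow, a charger that stops working after a week. Impressive.",
    "The battery easily lasts two days and the screen is sharp.",
    "Shipping was fast and the product matches the description.",
    "Customer support resolved my issue in ten minutes.",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = sarcastic, 0 = sincere

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(reviews, labels)

# Likely predicts 1, given the overlap with the sarcastic training examples.
print(model.predict(["Just what I needed, another broken charger."]))
```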
Sarcasm detection, by the way, appears to be a growing niche. In June of 2014, the US Secret Service issued a work order for social media management software that would include the ability to “detect sarcasm and false positives.”
Looking to the future, with help from science fiction, we see how far AC has yet to go. In the 2014 film Interstellar, the robot TARS is both highly intelligent and highly lovable. That’s because he possesses one of the highest forms of compassion, a sense of humor:
Cooper: [As Cooper tries to reconfigure TARS] Humour 75%.
TARS: 75%. Self destruct sequence in T minus 10, 9, 8…
Cooper: Let’s make it 65%.
TARS: Knock, knock.
Now that feels like a mind.