On The Hunt For Human Emotions

Artificial intelligence is behind countless services that we use every day. But how close is it to really understanding human emotions? Affective computing has already come a long way – and as in many areas, big tech is in the lead.

A somber, suited man stands in a cemetery. Softly, he strokes a gravestone before throwing his arms up toward the sky, howling in sorrow. The inscription on the stone reads:

Clippy. 1997 – 2004.

The scene is from a Microsoft commercial for its Office software. In reality, however, few people mourned the demise of the paper clip-shaped Office assistant, tasked with aiding Microsoft users in their screen work.

Unfailingly pseudo-helpful, Clippy may be the most ubiquitously reviled piece of software ever created. Not because a digital assistant is inherently a bad idea, but because its tone-deaf servility pushed Microsoft users closer and closer to insanity.

Designed to respond intuitively

Ever since computers became everyday tools, tech companies have been investing heavily in improving the ways humans and machines interact. We have gone from the days when using a computer required impressive technical skills and hours hunched over dense user manuals, to the plug-and-play era where software is designed to respond intuitively to our needs and wishes.

Even so, digital computers and human emotions have never gotten along very well. Too many computer engineers have made the cool rationality of computers the standard to which humans need to adjust. But as algorithms become more and more intertwined with every aspect of our lives, things are changing. For better and for worse.

In 1995, the American computer scientist Rosalind Picard wrote a pioneering paper, ”Affective computing”, about a nascent research field investigating the possibilities of computers recognizing human emotions, responding to them and perhaps even approximating human emotions in order to make decisions more efficiently.

Any algorithm that takes human behavior as input is indirectly responding to human emotions. Take Facebook, for example, and the way its algorithms feed on human agitation, vanity and desire for companionship. The algorithms systematically register the actions these emotions trigger (likes, shares and comments, commonly referred to as engagement), and then attempt to amplify and monetize them.

Making tech less frustrating

The field of affective computing, however, is ideally less about manipulation and more about making tech less frustrating and more helpful, perhaps even instilling in it some semblance of empathy. Counter-intuitively, one key to making affective computing work well may be to avoid anthropomorphizing the interface. Humanizing Clippy did not make people relate better to their Microsoft software, quite the opposite. And while chatbots are popular among companies hoping to slash customer service costs, for customers they are less like magically helpful spirits and more like a needlessly convoluted way of accessing information from an FAQ.

Affective computing endeavors to understand us better and more deeply, by analyzing our calendars, messaging apps, web use, step counts and geolocation. All this information can be harvested from our phones, along with sleep and speech patterns. Add wearable sensors and cameras with facial recognition, and computers are getting close to reading our emotions without the intermediary of our behavior.

In the near future this could result in consumer technology such as lightbulbs that adjust to your mood, sound systems that find the perfect tune whether you're feeling blue or elated, and phones that adjust their notification settings as thoughtfully as a first-rate butler – just to name a few possible applications. It could also be used for surveillance of employees or citizens, for purposes malicious or benign.

Rosalind Picard is currently a professor at the Massachusetts Institute of Technology, running the Affective Computing Research Group. She is also the co-founder of two groundbreaking startups in this space: Affectiva in 2009 and Empatica in 2014. Through her work she has become keenly aware of the potential to use affective computing for both humanitarian and profit-driven purposes.

Affectiva’s first applications were developed to help people on the autism spectrum better understand facial expressions. Later the company developed technology to track the emotional state of drivers. And since Picard moved on to found Empatica, a company hoping to address the medical needs of epilepsy patients, Affectiva has attracted clients like Coca-Cola – which uses the technology to measure the effectiveness of its advertising – and political campaigns that want to gauge the emotional response to political debates.

Simulate human emotions

Microsoft’s doomed Clippy was neither the first nor the last anthropomorphized bundle of algorithms. Robots have often been envisioned as synthetic persons, androids that understand, exhibit and perhaps even experience human-like emotions. There are currently countless projects around the world in which robots are developed for everything from education and elderly care to sex work. These machines rarely rely on cutting-edge affective computing technology, but they nevertheless simulate a range of human emotions to please their users.

If science fiction teaches us anything about synthetic emotion it is a bleak lesson. Ever since the 19th century, when a fictional android appeared in Auguste Villiers de l’Isle-Adam’s novel ”The Future Eve”, they have tended to bring misery and destruction. In the ongoing HBO series ”Westworld”, enslaved robots rise up against their makers, massacring their human oppressors. In the acclaimed British author Ian McEwan’s 2019 novel ”Machines Like Me”, the first sentient androids created by man gradually acquire human emotions, and then commit suicide.

Of course, we should celebrate the ambition to create software that adjusts to our needs and desires – helps us live and learn a little bit better. But it is worth keeping in mind the failure of Clippy, and perhaps even the warnings from concerned science fiction writers. More than that: at a time when big tech companies are hoarding personal data and using that data to manipulate us, affective computing will inevitably be a double-edged sword. After all, why should we trust Facebook’s or Google’s algorithms to ever understand empathy so long as the companies themselves show little capacity for it?

Sam Sundberg

Freelance writer and Editor for Svenska Dagbladet
Years in Schibsted
What I’ve missed the most during the Corona crisis
City life!