The Shameful Last Citadel Of Our Humanity
The latest episode of the Stories of Emotional Granularity podcast explores the feelings of guilt and shame. These are not beautiful feelings. They're painful manifestations of the tension between social expectations and individual self-image.
Shame and guilt are powerful motivators, and yet, the people who feel these emotions often do their best to keep them hidden. They put on a brave face as a mask. They speak with confidence even as they feel tortured by self-doubt.
Guilt and shame are instructive emotions for those who are considering using artificial intelligence tools to track people's emotional states. Emotions are complex, and people often don't express them openly and directly. Nonetheless, artificial intelligence tools commonly used in market research treat people's emotional lives as if they are open books, to be read literally at face value.
Emotion With A Pi In The Middle
Emotions are difficult to pin down, even for the people who are feeling them. Studying the role of emotional motivation in consumer behavior requires nuance, patience, and deep listening. That can be frustrating for marketing teams who feel under pressure to find quick insights that can be easily operationalized.
Pop-up firms that claim to use AI to track human emotion exploit this frustration by promising to reduce the ambiguities of emotion to objective, quantified systems that can monitor and analyze consumers' feelings automatically. Emotional insights, they assert, can be gathered at an industrial scale, like soybeans sucked up into a gigantic harvester and processed without the touch of human hands.
The outcomes of emotion AI in practice reveal the limits of the automation of empathy. Consider a recent pitch for EmotionTrac, a company that "uses AI to decode emotional responses in real-time consumer feedback" by "analyzing their micro facial expressions".
Here's an image being used to promote this company's "Emotion Tracking AI":
You might think that this client looks quite happy with the Emotion Tracking AI from EmotionTrac. He's smiling, and smiles = happiness, right?
Actually, this client doesn't exist. You've probably picked up on the fact that this image is fake. It's slop created with a generative AI tool.
The signs that this image is AI slop are obvious, and this flagrant fakery calls into question the professional integrity of the EmotionTrac team. They didn't even bother to check that their generative AI tool was using real letters of the alphabet to spell the word emotion.
Of course, EmotionTrac isn't a graphic design firm. So okay, maybe they missed a few visual details, but couldn't we just cut them some slack? Actually, the whole premise of the EmotionTrac technology is that it is supposed to be able to pick up on visual subtleties in people's facial "microexpressions" and read their minds from them. So, yes, it is fair to note their failure to notice visual errors in their own promotional materials.
What about statistics? Shouldn't the people at EmotionTrac be able to represent statistics with accuracy, given that they claim to have special algorithmic ability to measure and analyze human emotion in statistical form?
Here's that pie chart in the corner of the graphic, for instance:
It's more of a squiggle than an actual pie chart. It looks suspiciously like what someone might sketch if they had information about what pie charts look like without understanding what pie charts actually are. A pie chart is a thing in which differently colored wedges sort of meet in the middle.
That's the way that generative AI works. It imitates visual patterns without any comprehension whatsoever of what those visual patterns represent. In this image, generative AI mimics what pictures of statistical charts look like, without understanding at all what statistics or charts actually are. I couldn't think of any better visual metaphor for the nonsense of AI emotion analytics than that... until I saw the chart in the middle of the EmotionTrac graphic.
Nobody past the age of eight could look at this chart and think that it's genuine. It has some very rough similarities to a bar chart, but it does things that bar charts just don't do, like have shapes coming from different directions, out of alignment with their labels. What's that orange thing supposed to be? I don't know, but haven't we all felt a little hontor, when the heat of emotional costza has passed, and the booanc sets in?
No, no we haven't, but this generative AI slop doesn't know any actual words. It just has a vague idea of what words look like, roughly, on the average.
The kind of artificial intelligence that's used in EmotionTrac's "Emotion Tracking AI" is different from the generative AI model that was used to develop this promotional image. Still, when a company that purports to have expertise in applying artificial intelligence insights uses artificial intelligence tools in such a sloppy way, it suggests that its leadership hasn't thought deeply about what AI technologies can and cannot do.
Have they no shame?
Actually, no, it seems that they don't have any shame. The emotion of shame isn't mentioned even once, anywhere on the EmotionTrac web site.
How could emotion-tracking artificial intelligence systems possibly track shame? Shame, like many other emotions, is a feeling that many people try to hide. Facial tracking AI systems presume that people display their emotions transparently for all the world to see, and that there isn't anything more to people's feelings than what their facial expressions show.
So it is that another emotion AI service, imentivAI, claims that it can "unlock human psyche with AI", and yet is completely dumbfounded when a person fails to display a stereotypical facial expression. In such cases of relatively impenetrable expressions, imentivAI's algorithms simply declare a person to be emotionally "neutral".
Is the woman pictured here, in an image provided by imentivAI to show what "neutral" emotion is all about, actually feeling neutral? Is there really nothing from earlier in her day that she still has a lingering emotional reaction to? She's being filmed by imentivAI while she tries to focus on something else. Are we really supposed to believe that she feels nothing about that, simply because the muscles of her face are mostly held in a resting position?
Even when imentivAI assigns specific emotions to people according to their facial expressions, its analysis often makes no sense. Consider the following short segment of a video posted by imentivAI. It's supposed to show how powerful its automated emotion detectors are.
Within three seconds, the imentivAI artificial intelligence system assesses a woman's emotions as shifting from neutral into angry, then happy, then angry, then happy again. Nobody has an emotional life as erratic as that.
Is she angry that she's neutral, and happy that she can express that anger? Stranger things have been felt, but imentivAI's system doesn't help to resolve this question.
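Neither imentivAI nor EmotionTrac publishes its code, so any guess about what's happening under the hood is just that, a guess. Still, this kind of flip-flopping is exactly what you'd expect from a system that scores each video frame on its own and reports whichever emotion label happens to be winning at that instant. Here's a minimal Python sketch, with invented numbers, of how near-tied per-frame scores produce that jitter:

```python
import random

# Hypothetical illustration only: score each video frame independently, the
# way a naive per-frame emotion classifier might, and report the top-scoring
# label at every instant. All of the numbers here are invented.
random.seed(7)

def fake_frame_scores():
    # "Angry" and "happy" are nearly tied; tiny frame-to-frame noise decides
    # which one edges ahead at any given moment.
    return {
        "neutral": 0.20 + random.uniform(-0.02, 0.02),
        "angry": 0.40 + random.uniform(-0.05, 0.05),
        "happy": 0.40 + random.uniform(-0.05, 0.05),
    }

# Three seconds of video at 10 frames per second.
for frame in range(30):
    scores = fake_frame_scores()
    label = max(scores, key=scores.get)
    print(f"frame {frame:2d}: {label}")
```

Run it and the label lurches between "angry" and "happy" from one frame to the next, not because anyone's mood has changed, but because two nearly equal numbers keep trading places. That's not emotional insight; it's noise with a confident label attached.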
One of the flaws of emotion-detecting software run on AI platforms is that it's constructed around the assumption that emotion is a definable thing that exists as a simple, identifiable condition. Emotion as we live it, in contrast, is often unclear, expressed in questions as much as statements.
In the episode on guilt and shame, Regina Lark's clients express their shame by asking "Why did I wait so long?" The author Karol Ruth Silverstein asks herself, "What will it all matter if I just have that one stupid book?" Essence Pierce asks, "Am I being selfish?"
Every emotion we have learned to articulate has its own perspective about who we really are, and what's really going on. There is some validity to each emotion's interpretation of reality, but only up to a point. Emotions are voices that often argue against each other in our minds, and none of them can be dominant for very long.
We contain multitudes, and our minds are too elaborate to be read by any algorithmic scanner.
I produce the Stories of Emotional Granularity podcast in order to display the complexity of our emotional lives. Every emotion can be understood from multiple points of view.
The designers of AI want us to put that multiplicity and nuance aside. They seek to simplify models of emotion so that our feelings can be controlled, automated, and commodified.
Emotion is the last citadel of our humanity. Often, as with guilt and shame, it makes us feel weaker than we truly are. Nonetheless, emotion belongs to us alone.
Salesmen who say otherwise are not worthy of our trust.