At the CHI conference this spring, Mariam Hassib and colleagues from several institutions in Germany presented their “Emotion Actuator” project. What is that?
“Embodied Emotional Feedback through Electroencephalography and Electrical Muscle Stimulation”
That’s right folks. Your brain waves are used to compel someone else to move their body (or vice versa). Wow!
We’ve all heard about “mirror neurons”, and how people copy and sync with each other. This project aims to take this concept to distant relationships via digital augmentation.
“embodied emotional feedback: The recipient’s own body is actuated to portray the emotional state of the sender. The recipient interprets his or her own movement and thereby gains knowledge about the emotional state of the sender.” (p. 6133)
The technique works by classifying EEG signals to detect the purported “emotional state” of the sender. This “state” is mapped to one of several gestures. The researchers use gestures from American Sign Language (ASL). For example, if the EEG suggests that the sender is “amused” (i.e., is feeling the emotion “amusement”), then the receiver should make the ASL gesture for the word “amuse”. (Do German speakers use ASL?)
Phew! This is a pretty complicated contraption, which makes no sense on several grounds.
So Many Issues
First of all, I’ll just note that the concept of “emotional state” is as much about people labeling feelings (and behavior) as about physiology. In any case, using EEG to detect these alleged “states” is iffy, to say the least. Their study basically used machine learning to match up EEG patterns with self-reported “states”. (It’s not clear whether the classifications work across people, or are trained separately for each participant.)
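For concreteness, here is roughly what that kind of pipeline looks like. This is my own minimal sketch in Python, on random stand-in data, assuming band-power features and an off-the-shelf classifier; the paper’s actual features and classifier may well differ.

```python
# Hypothetical sketch: per-participant emotion classification from EEG
# band-power features. The feature layout, classifier, and labels are
# my assumptions, not details from the paper.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Pretend data: 80 trials x (5 frequency bands * 14 channels) of
# band-power features, with self-reported labels for four emotions.
X = rng.normal(size=(80, 5 * 14))
y = rng.integers(0, 4, size=80)  # 0 = "amused", 1 = "angry", etc.

# Train and evaluate *within* one participant; whether such a model
# transfers across people is exactly the open question noted above.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"within-participant CV accuracy: {scores.mean():.2f}")
```

On random data this hovers at chance (about 0.25 for four classes), which is the point: the interesting claim is entirely in how far above chance real EEG gets you.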
The second component is the gesture set drawn from ASL. Note that the gesture is not imagined to be expressive (they considered but did not use a set of expressive gestures), but rather the gesture is an arbitrary muscular symbol for the word that labels an emotion. Effectively, the receiver says the word “angry” to himself, and therefore understands that he received a signal indicating the sender is feeling angry.
The actual signal is delivered as Electrical Muscle Stimulation (EMS): a dozen electrodes were attached to the recipient’s (victim’s?) arms. Electric current to the electrodes causes muscle contractions, forcing the arm and hand to execute the intended gesture. Ouch!
Of course the resulting gesture is crude and jerky, nothing at all like a native signer would do. I can hardly imagine what this feels like, though I wouldn’t think it is especially pleasant. It is also probably tiring and/or painful. And there is no way you could do anything else while receiving messages this way.
The final piece of the picture is the digital system that maps the alleged emotion detected with the EEG to one of the descriptive words, and then triggers the EMS to force the recipient to flap his arms to make the ASL word that matches the emotion. There are only four such emotions, so this mapping isn’t difficult.
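With only four emotions, that mapping layer really is just a lookup table plus a trigger. Here is a hypothetical sketch; the gesture encodings and the trigger_ems() stand-in are invented for illustration, and only “amused” and “angry” appear in the text above (the other two labels are placeholders).

```python
# Hypothetical sketch of the mapping layer: detected emotion label ->
# ASL word gesture -> EMS actuation. Real EMS control would be far
# more involved (and calibrated per person).

# Each "gesture" is just an ordered list of (electrode_group, pulse_ms)
# steps; the groups and durations here are made up.
GESTURES = {
    "amused": [("forearm_flexors", 300), ("hand_extensors", 200)],
    "angry": [("biceps", 400), ("forearm_flexors", 300)],
    "placeholder_3": [("hand_extensors", 250)],
    "placeholder_4": [("biceps", 250)],
}

def trigger_ems(electrode_group: str, pulse_ms: int) -> None:
    # Stand-in for the actual EMS hardware call.
    print(f"stimulate {electrode_group} for {pulse_ms} ms")

def actuate(emotion: str) -> None:
    """Force the recipient's arm through the ASL gesture for `emotion`."""
    for group, ms in GESTURES[emotion]:
        trigger_ems(group, ms)

actuate("amused")
```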
The research paper reports the prototype and the laboratory studies showing that it works, at least a little. Actually, the comparison condition was delivering a text message, versus shocking the arms to make them flap. Unsurprisingly, the EMS is harder to ignore, and was rated “more haptic”. Well, sure.
The researchers conclude that,
“The presented studies showed that EMS actuation may lead to an embodiment of emotional states, contributing to an intuitive understanding, immersion, and empathy.” (p. 6142)
It should be clear that I’m far from convinced of this conclusion, or of the entire approach. And, by the way, I actually have considerable experience investigating gesture detection and even EEG inputs (see the references below), so this isn’t just naysaying.
Leaving aside the obvious technical issues with the detection and actuation, and the dubious conceptualization of “emotion”, the entire enterprise is kind of pointless. For one thing, a decent video conference link will probably give you at least as much understanding of the other person as any amount of forced ASL or any other linguistic talking to yourself.
More Control Conditions?
It is notable that the study didn’t use either face-to-face or video conversation as comparison conditions.
For that matter, let me mention placebo effects (which I have seen first-hand). The article reports a number of comments from participants, such as, “much more emotional if the body reacted compared to when you just look” (p. 6140, quoting a participant), and other comments indicating that, unsurprisingly, the participants were well aware of the goal of the experiment. How could they not guess what was going on? The challenge is how to separate such placebo effects from any real effect of the system.
Here are several ideas for additional controls. First, there could be variations on the EMS, including a condition where the gesture is opposite to the message or randomly associated. If the gesture mapping is really meaningful, there should be a significant difference between these conditions. Furthermore, if the participants “feel the emotion” equally (regardless of accuracy), then it suggests that the EMS alone, not the mapping, is what makes the system seem to work.
A second check would be different mappings. Again, some should be meaningful, and others meaningless. If the recipients can’t tell the difference, or do just as well, then there may be placebo effects going on. A third control would be null conditions for the sender: the sender sending “nothing”, or some emotion that has no correct mapping.
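To make the first idea concrete, here is a toy sketch of how the per-trial conditions might be assigned. The condition names and emotion labels are mine, not the paper’s.

```python
# Rough sketch of assigning the proposed control conditions per trial.
import random
from typing import Optional

EMOTIONS = ["amused", "angry", "emotion_3", "emotion_4"]  # placeholder labels
OPPOSITE = {"amused": "angry", "angry": "amused",
            "emotion_3": "emotion_4", "emotion_4": "emotion_3"}

def actuated_emotion(sent: str, condition: str) -> Optional[str]:
    """Which gesture (if any) the recipient's arm is forced through."""
    if condition == "congruent":   # the mapping as designed
        return sent
    if condition == "opposite":    # deliberately wrong gesture
        return OPPOSITE[sent]
    if condition == "random":      # gesture unrelated to the message
        return random.choice(EMOTIONS)
    if condition == "null":        # sender sends, but nothing is actuated
        return None
    raise ValueError(condition)

# If ratings of "feeling the emotion" come out the same across all four
# conditions, the EMS itself (or a placebo effect), not the mapping,
# is doing the work.
for cond in ["congruent", "opposite", "random", "null"]:
    print(cond, actuated_emotion("amused", cond))
```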
These ideas are half-baked, but you get the idea. We need to try to tease out how much of the “feeling the emotion” is due to expectations and interpretations of the experiment’s aims, and how much is due to the procedure alone. It’s not trivial to separate these out.
Bottom Line: A Completely Unreasonable Idea
Considering the incredible inconvenience and discomfort of both the sending and receiving kit, I wonder why anyone would ever want to use it. The forced ASL gestures seem particularly unpleasant to me. The entire rig carries no more information than an emoticon, but does so much, much less efficiently and conveniently. How can this be justified?
- Mariam Hassib, Max Pfeiffer, Stefan Schneegass, Michael Rohs, and Florian Alt, Emotion Actuator: Embodied Emotional Feedback through Electroencephalography and Electrical Muscle Stimulation, in Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. 2017, ACM: Denver, Colorado, USA. p. 6133-6146. http://dl.acm.org/citation.cfm?doid=3025453.3025953
- Robert E. McGrath and Johan Rischau, The NeuroMaker 1.0: Personal Fabrication through Embodied Computing. 2011. http://cucfablab.org/sites/cucfablab.org/files/NeuroMaker_Rischau_McGrath.pdf
- Mary Pietrowicz, Robert E. McGrath, Guy Garnett, and John Toenjes, Multimodal Gestural Interaction in Performance, in Whole Body Interfaces Workshop at CHI 2010. 2010: Atlanta. http://lister.cms.livjm.ac.uk/homepage/staff/cmsdengl/WBI2010/documents2010/Pietrowicz.pdf