“By defining the challenges and applying them to the case study of Cathartic Objects, we learn that designers might be able to rely on literature and on their own judgment to sensibly design for negative emotions. However, evaluating the design still carries risks, and perhaps remains limited to auto-ethnographical research for the time being.” (From Luria, Zoran, and Forlizzi, 2019)
This applied psychology experiment has little in the way of background research, and no experimental or clinical evaluation of the imagined psychological benefits. In particular, the project is predicated on the supposed benefits of “catharsis”, with little consideration for competing hypotheses about, say, the negative effects of rehearsal and reward of aggressive behaviors.
So what is the project? Essentially, it is a collection of objects that a person interacts with in destructive or abusive ways. The interactions are actually quite disturbing.
- A little animal-like object that the user stabs. The robot reacts as if in pain. In other words, you are encouraged to torture a helpless being.
- An object that detects “swear words” (I imagine there are cultural and linguistic issues with this recognition), and lights up as more and more verbal abuse is delivered. In short, you are reinforced for verbally abusing the object.
- A doll that (somehow) detects that the user is “upset”, and delivers a mocking, abusive laugh. The user is invited to react to the mocking by punching the doll. In other words, this object is abusive, and rewards you for reacting to verbal abuse with physical violence.
- A personal message is written on a ceramic tile, which the user is invited to smash. In short, you are encouraged to vent anger with violent destruction.
Obviously, the robotic technology is not particularly necessary, though it does have the virtue that these are inanimate objects. The researcher has remarked that they are intended to be “non-anthropomorphic”, in the hope that the behaviors will not transfer. Unfortunately, I’m quite sure that the behaviors learned with these devices will transfer to other, non-robotic targets, including people and animals.
By the way, I think this isn’t so much “catharsis” as “displacement”—attacking a helpless robot instead of the cause of the negative emotion.
What’s wrong with this picture? It’s a really poor psychology experiment
You can tell that I don’t like this project very much.
I think it is a very poor approach to problem solving, and has a strong potential for increasing violent behavior. I also don’t like that they make claims for alleged psychological benefits, without any evidence that this approach is safe and effective. That is malpractice, plain and simple.
A big part of the problem is a lack of background research. The authors comment that “we learn that designers might be able to rely on literature and on their own judgment to sensibly design for negative emotions.” Well, they relied on their own intuitions a lot more than the broad literature, and that’s a problem.
At the core of the issue is how they think about the problems they address. The problem is perceived as “the user has negative emotions” (especially “frustration” and “anger”) and the goal is to “make the user feel better”.
In the likely event that the negative emotions are a symptom rather than the disease, this approach is not likely to help very much. Worse, I’m pretty sure that violence and verbal abuse will not make the underlying problem better. Quite the contrary.
In an interview, the researcher indicates that “[i]t has been extremely challenging to get approval for formal human subject studies that center around negative emotions and destructive behaviors.” Ya think? Personally, I think the CMU IRB is completely justified, and should make him follow the legal and ethical requirements for research on human subjects. And, by the way, without control or comparison conditions, or even measurement, it isn’t even a real study, which an IRB will generally reject for very good reason.
The researcher also remarks that “We also know, according to research in psychology, that people tend to feel aversion towards the idea of any negative emotions, which does not help the case”, which seems to imply that the IRB is somehow “afraid” of his research. This is just plain dumb (and insulting). The problem is not aversion to negative emotions; it is bad design and worse methodology: designs that appear to promote violent behavior, apparently without understanding what they are doing, and with a very shaky theoretical justification.
Further “Investigations”? Please, don’t!
The researcher says he plans to test these psychological objects “in an interactive installation setting”. (Note that an art installation doesn’t require IRB approval.)
“Future work will attempt to test out these concepts of destruction and catharsis in an interactive installation setting. By having people directly interact with cathartic objects, I hope to learn more about people’s emotions and experiences, as well as perceptions and values regarding interactive objects that are designed to support behaviors of destruction and catharsis.”
Please don’t. These prototypes are really, really bad for people.
How This Could Be Improved
Obviously, there should be some serious consideration of the potential hazards of these devices, and concern for the welfare of the ~~victims~~ users. In order to learn what effects, positive or negative, if any, these interactions have, there needs to be a much better research design.
Let’s start with measurement. These devices are supposed to make the participants “feel” better, and perhaps reduce the level of negative emotions. (I think the former would be called “displacement”, the latter would be “catharsis”, but the terminology doesn’t matter.)
Possible measures of these emotions could be: self-reports, observer ratings, and even physiological measures. Obviously, we need before-and-after measurements. And I think that long-term effects would be important. Among other things, there might be cumulative effects, or habituation that reduces the effects. So that means follow-ups for weeks and months.
Second, it seems very important to investigate “transfer” and learning effects. Do these interactions “train” the user to be violent and abusive in other interactions with robots, animals, and people? It is important to realize that, if these objects work as he hopes, then they will probably be reinforcing for the users. And that might very well increase their abusive behavior. As long as they only stab the robot, that’s disturbing but maybe OK. If they start stabbing pets or other people, too, that’s a serious problem.
- Third, I might suggest considering the effects for different people. I’m 100% positive you’ll find gender differences in the experience of this very male-oriented design. Age, race, and culture almost certainly will make a difference in how these are received and used. (If nothing else, the “swear words” device is going to need a multilingual, multicultural suite of dictionaries.)
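To make the localization point concrete, even the crudest swear-word detector needs a separate lexicon per locale before it can light up at anyone. A minimal sketch, assuming a simple per-locale word-list lookup (the locale codes, word lists, and function names here are all hypothetical illustrations, not the researchers’ actual implementation):

```python
# Hypothetical sketch: per-locale profanity lexicons for a "swear words" object.
# The entries below are mild placeholders for illustration only.
PROFANITY_LEXICONS = {
    "en-US": {"darn", "heck"},
    "de-DE": {"mist"},
}

def count_profanity(utterance: str, locale: str) -> int:
    """Count lexicon hits in a whitespace-tokenized utterance for one locale."""
    lexicon = PROFANITY_LEXICONS.get(locale, set())
    tokens = utterance.lower().split()
    return sum(1 for token in tokens if token in lexicon)

print(count_profanity("oh darn oh heck", "en-US"))  # → 2
print(count_profanity("oh darn oh heck", "de-DE"))  # → 0
```

And even this toy version ignores morphology, compounding, spelling variants, and code-switching, all of which multiply with every language and culture added.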
Finally, let’s talk about control conditions. Or at least comparison conditions.
If you are going to claim that these objects have beneficial effects, then we need to know “compared to what”? And there are a lot of possible comparisons that might and should be made.
Here are some plausible comparison conditions, in no particular order:
- Baseline, no treatment. (Bad feelings will fade with time.)
- Similar “catharsis” with “dumb”, non-robotic objects. (E.g., yell at an image on the screen, throw darts at a picture, punch a punching bag.) By the way, there is extensive research on such “treatments”. Look it up.
- Meditation/mindfulness etc. – recognize and set aside negative feelings instead of acting them out. (Also has extensive literature.)
- Human (instead of robot) interaction – positive. Have a soothing conversation instead of a temper tantrum.
- Human interaction – negative. Have an argument with a person instead of a temper tantrum.
- Animal interaction – positive. Pet a puppy, or something like that.
- Animal interaction – negative. This would be torturing a puppy, which is obviously unethical, so it can only serve as a thought experiment, not an actual condition.
Get the idea? This needs to be compared to all the other things people can and do do to deal with negative emotions, to show when and where it might be effective, and what effects it might have.
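The minimal design described above can be sketched in a few lines: randomize participants across conditions, measure negative affect before and after, and compare each condition’s mean change against the no-treatment baseline. Everything here (condition names, the 0–10 affect scale, the example scores) is invented purely for illustration:

```python
import random
import statistics

# Hypothetical between-subjects design sketch. Condition names and all
# scores are invented for illustration; a real study would use validated
# affect instruments and proper statistical tests.
CONDITIONS = ["baseline", "cathartic_object", "punching_bag", "mindfulness"]

def assign(participants, rng):
    """Randomly assign each participant ID to a condition."""
    return {p: rng.choice(CONDITIONS) for p in participants}

def mean_change(records, condition):
    """Mean (post - pre) negative-affect change for one condition."""
    deltas = [post - pre for cond, pre, post in records if cond == condition]
    return statistics.mean(deltas)

# Invented example data: (condition, pre_score, post_score) on a 0-10 scale.
records = [
    ("baseline", 7, 6), ("baseline", 8, 7),
    ("cathartic_object", 7, 4), ("cathartic_object", 8, 6),
]
print(mean_change(records, "baseline"))          # → -1.0
print(mean_change(records, "cathartic_object"))  # → -2.5
```

The point of the sketch is the last two lines: “the object helps” only means anything as the *difference* between those two changes, never as a raw post-interaction score. Add the follow-up measurements at weeks and months, and you have the skeleton of a study an IRB could actually evaluate.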
What Really Should Be Done
What I would really like to see, most of all, is robot objects that can help people actually solve problems, not practice violent displacement behaviors. Can a robot help de-escalate negative behaviors? Help people reframe situations? Encourage positive behaviors, such as seeking information and rational communication?
Note that I’m not talking about “technology that sets out to make people ‘happier’ or more efficient,” as these researchers sneer; I’m talking about technology that sets out to help people solve problems non-violently and positively, even when they start with negative emotions.
The goal should be to make life better, not to vent.
- Evan Ackerman, These Robotic Objects Are Designed to Be Stabbed and Beaten to Help You Feel Better, in IEEE Spectrum – Robotics. 2019. https://spectrum.ieee.org/automaton/robotics/home-robots/these-robotic-objects-are-designed-to-be-stabbed-and-beaten-to-help-you-feel-better
- Michal Luria, Amit Zoran, and Jodi Forlizzi, Challenges of Designing HCI for Negative Emotions, in Conference on Human Factors in Computing Systems. 2019: Glasgow, UK. https://www.researchgate.net/publication/332820712_Challenges_of_Designing_HCI_for_Negative_Emotions