Speaking of technical fixes…
I saw a story at the BBC about an app called ReThink, which “stops cyberbullying before it starts”. The idea is to use a substitute keypad that analyses what is typed, tries to spot problematic messages, and pops up a message asking if you want to rethink what you typed.
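As described, the core mechanism is simple: intercept a draft message, check it against some model of “problematic”, and interpose a prompt before sending. Here is a toy sketch of that idea in Python, assuming the crudest possible detector (a keyword blocklist I made up for illustration); the real app’s detection method is not public and is presumably more sophisticated.

```python
# Toy "rethink" filter: flag a draft message if it contains any
# word from a small blocklist, and ask the sender to reconsider
# before it goes out. The word list and matching rule here are
# invented for illustration only.

FLAGGED_WORDS = {"stupid", "loser", "ugly"}  # hypothetical examples

def should_rethink(draft: str) -> bool:
    """Return True if the draft contains a flagged word."""
    words = {w.strip(".,!?").lower() for w in draft.split()}
    return not FLAGGED_WORDS.isdisjoint(words)

def send_with_check(draft: str) -> str:
    """Interpose a prompt when the draft looks problematic."""
    if should_rethink(draft):
        return "Are you sure you want to send this?"
    return "sent"
```

Even this caricature makes the design question visible: everything hinges on how good the detector is, which is exactly the kind of claim that needs evidence.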
This is a pretty nice idea. Many of us have caught ourselves, counted to ten, and erased a message. And who hasn’t wished for a second chance to not send that message? ReThink is a “count to ten” keyboard, which is surely a good idea for anyone, not just kids.
So, I was curious to see just how real this is.
The BBC story and other press coverage are all about how great it is for a kid to be creating a tech solution to a real-world problem, mentioning a patent, awards, pitches to Shark Tank, etc. It’s a great narrative, for sure.
But what about the app?
I was able to find and download the app, which is a very good sign. (You have no idea how many great ideas I’ve read about that don’t even exist.)
Unfortunately, ReThink crashed the first time I used it. Perhaps my ancient Android is too old. Or perhaps my “deny all permissions” environment breaks it. I dunno. But the result is I couldn’t actually see it work for me. (Software is hard, it’s a miracle when it actually does work.)
Looking at the web page, there is lots of talk about the problem to solve (bullying), but no evidence that this app has any impact at all on such bullying.
There is reference to one “study”, but no citation to any published report of any kind. The main result claimed is that users “change their mind 93% of the time” and choose different words.
That sounds fine, though we have no way to know what that means, how large the sample was, or what the situation was. For that matter, what is a reasonable comparison or control group?
There are lots of other unanswered empirical questions.
Does this effect persist? Do users habituate and stop paying attention to the app? Perhaps they get tired of it second guessing them, and turn it off. Or maybe they actually learn, and become more careful over time—making the app unnecessary.
I assume that the canonical case is for everyone in the social network to be using the app, rather than some using it and some not. But what if one side of the conversation is unmoderated and the other is moderated? What happens then? I.e., if the other guy is blazing away at you, do you still accept ReThink’s recommendations?
Even if this app is highly effective at moderating thoughtlessly mean text messages, how is that connected with bullying? Some (most?) bullying is intentional, not accidental. Maybe the 7% of messages that aren’t rethought are the worst kind of bullying, and the 93% were just rudeness. If so, then the app would have no effect at all on bullying.
Worst of all, text messages are hardly the only channel for bullying. (Believe me, we were bullied at school long before the Internet.) In fact, even on mobile devices this app may already be out of date, as voice and video messages gain prominence.
None of this is to say that it isn’t a clever app. I might call it a text messaging helper, more than a fix for bullying.
It is unfortunate that there is no actual evidence available that it actually works.
And it is unfortunate that no one seems to think it is necessary or important to prove that it is safe and effective before leaping into marketing and storytelling about how beneficial it is.
Is this what we should be teaching our kids?
- BBC, “How one teen’s app stops cyberbullying before it starts,” BBC Capital, 28 March 2018. http://www.bbc.com/capital/story/20180328-how-one-teens-app-stops-cyberbullying-before-it-starts