We are in the midst of a giant, unplanned, and uncontrolled social experiment: a wave of young people is growing up immersed in ubiquitous network connectivity. In the Silicon Valley spirit of “just do it”, we have thrown everyone, including kids and teenagers, into the maelstrom of the Internet, with no safety net or signposts. Given that even grown-ups behave like idiot adolescents on the web, it doesn’t seem like the greatest environment for growing up.
Needless to say, there is a lot of parental anxiety about all this. And where there is anxiety, capitalism finds a market. Apps to the rescue!
Pamela Wisniewski and colleagues reported last week on a study of dozens of apps that are generally aimed at “adolescent online safety”. One of their important contributions is a systematic taxonomy of the strategies that may be employed. There are non-technical strategies (e.g., rule-making), but the study focuses on technical strategies, i.e., the services provided by supposed “safety” apps.
They describe two kinds of strategy: “parental control” and “teen self-regulation”. Parental control strategies are monitoring, restriction, and active mediation. Teen self-regulation strategies are self-awareness, impulse control, and risk-coping. In addition, apps might take an informational or teaching approach.
It should be clear that all these strategies have a role and value, but I think everyone agrees that “growing up” almost certainly means moving to self-regulation.
The heart of the study is an analysis of some 75 apps, classifying the strategies enabled by each. These apps mostly run in the background, to monitor and report activity on the mobile device. In short, spyware.
The results are clear as day: almost all of the “safety” apps are designed to monitor and restrict the online behavior of teens. Presumably, these features implement parental controls, not self-control. This study could not evaluate the effectiveness of these apps, or compare different strategies. But it is very clear that “the market” is delivering only a few of the possible strategies.
The researchers point out some conceptual weaknesses in the technical strategies employed by the apps.
“These features weren’t helping parents actually mediate what their teens are doing online,” said Wisniewski. “They weren’t enhancing communication, or helping a teen become more self-aware of his or her behavior.” (quoted in Swayne, 2017)
Many of them operate as covert and pretty hostile spyware, which might be what parents want, but which has all sorts of possible side effects. “the features offered by these apps generally did not promote values, such as trust, accountability, respect, and transparency, that are often associated with more positive family values” (p. 60)
“Simply put, the values embedded within these apps were incongruent with how many parents of teens want to parent.” (p. 60)
To me, it is clear that these apps suffer not just from wrongheaded thinking (i.e., taking the parent as the target customer rather than the teen, or better yet, the family), but also from the affordances of the app-verse.
The Internet delivers two things very well: instant gratification (“click here”), and surveillance. Delivering self-regulation is much, much harder than delivering “swipe right”.
The Internet is all about surveillance. The wealthiest companies in the world are basically in the business of monitoring everybody as much as possible. Putting this technology in the hands of parents is literally a no-brainer. And the lack of brain effort shows.
From this point of view, the landscape documented by Wisniewski et al. is completely unsurprising. (This is also another indication that market forces are hardly the secret to good design. The market creates a hundred thousand health apps and hundreds of dating apps, because people want them, even if they don’t actually do any good.)
The researchers point out that there is an opportunity for much better design here. Some of the few apps that focus on self-regulation use the same surveillance techniques, but for self-reporting and self-regulation. Can we design better ways to deliver self-awareness and, if desired, self-limitation? For example, everyone might benefit from a way to put an extra latch on some links, just to slow down and maybe not go there so often. An “are you sure” filter, to gently deter overdoing it.
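To make the idea concrete, here is a minimal sketch of such a latch. Everything here is hypothetical: the class name, the notion of “flagged” sites, and the threshold are my invention, not anything from the paper or from any real app; the point is only that the latch prompts rather than blocks.

```python
from collections import defaultdict

class AreYouSureLatch:
    """Hypothetical sketch of an 'are you sure?' filter: instead of
    blocking a site, allow a few free visits per session, then start
    asking for confirmation -- a speed bump, not a wall."""

    def __init__(self, flagged_sites, free_visits=3):
        self.flagged = set(flagged_sites)   # sites the user chose to latch
        self.free_visits = free_visits      # visits allowed before prompting
        self.visits = defaultdict(int)      # per-site visit counter

    def should_prompt(self, site):
        """Return True if the user should see an 'are you sure?' prompt
        before this visit; unflagged sites are never interrupted."""
        if site not in self.flagged:
            return False
        self.visits[site] += 1
        return self.visits[site] > self.free_visits
```

Crucially, the user configures the latch themselves — it is a self-limitation tool, not a parental block list.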
Another opportunity would be clever forms of communication within the family. I can think of some interesting technical challenges here. Obviously, there is a need to create trust, and, in my opinion, unilateral spying is not a good way to do that. But there is a need to share information and context, and a mobile device is uniquely suited to do that. So can we make a sort of “family Snapchat”, an app that, with mutual consent, shares a pretty comprehensive view of what’s going on?
For that matter, it would be nice to be able to easily share specific messages and threads, without necessarily having to share everything. Or to track certain actions and not others. (For instance, I should be able to gossip with my best friends without CCing Mom, but perhaps software might flag when strangers are talking to me.)
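That selective-sharing policy can be sketched in a few lines. Again, this is purely illustrative: the function name, the three outcomes, and the idea of per-thread opt-in are my assumptions about how such an app might work, not a description of any existing system.

```python
def review_message(sender, known_contacts, shared_threads, thread_id):
    """Hypothetical mutual-consent sharing policy for a family app.
    Returns one of three outcomes:
      'share'   -- the teen opted this thread in, so it is visible;
      'flag'    -- unknown sender: alert the parent without exposing content;
      'private' -- everything else stays between the teen and their friends."""
    if thread_id in shared_threads:       # teen chose to share this thread
        return "share"
    if sender not in known_contacts:      # a stranger is making contact
        return "flag"
    return "private"
```

The design choice worth noting: the default is `private`, and even the stranger alert reveals that contact happened, not what was said — sharing is opt-in per thread, which is roughly the opposite of the blanket monitoring the study found in the market.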
This is a very useful paper.
- Matt Swayne, Online security apps focus on parental control, not teen self-regulation, in Penn State News. 2017. http://news.psu.edu/story/452954/2017/02/27/research/online-security-apps-focus-parental-control-not-teen-self
- Pamela Wisniewski, Arup Kumar Ghosh, Heng Xu, Mary Beth Rosson, and John M. Carroll, Parental Control vs. Teen Self-Regulation: Is there a middle ground for mobile online safety?, in Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing. 2017, ACM: Portland, Oregon, USA. p. 51-69. http://dl.acm.org/citation.cfm?id=2998352