Category Archives: mobile apps

Freelancers Union: The App

In the early twenty-first century, there’s an app for everything. Indeed, some people seem to think that if you don’t have an app, you aren’t for real.

This week the Freelancers Union (I’ve been a proud member since 2015) released a new ‘app’. As their web page puts it, “Solidarity? There’s An App For That.” This isn’t my grandfather’s union, that’s for sure!

OK, I’m game. Let’s do some more close reading here.


First, let me be very clear. The Freelancers Union is doing important stuff, and I strongly support them. You can’t talk about the future of work without talking about the future of workers.

But that does not mean that I will not do a close reading of their narrative or their recent forays into digital products.


Looking At The App

Just what exactly does this ‘Solidarity Forever: The App’ actually do? Does it connect us to our brothers and sisters in the Union? Does it help recruit more members? Does it host digital rallies? Does it ping our elected representatives about legislation? Could there possibly be a playlist of inspiring songs? Dare I hope for live sing alongs with our comrades around the world?

Maybe in version 2.0.

The current version does only one thing: it connects you to legal advice. Sigh. Useful, I suppose, but not nearly as exciting as one could hope.

Your App Reveals Your Psyche

While I think this app misses an opportunity to show off FU as truly the new way of work (see below), it does reveal some facts about the FU and our members.

First of all, the fact that there is an app at all indicates the desire for conventional branding, especially the desire to look current. The Union isn’t real unless it’s got an app. Box checked.

Second, we find confirmation that the backbone of the union is in the ‘digital creatives’, especially in NYC. The release is accompanied by a social promotion campaign (standard fare for digital advertising), and the instructions simply say,

Post a photo of yourself holding up the app, with the caption “I stand with freelancers because [write your reason!].” #FreelancersUnionApp

It is obviously assumed that we know what “post” means, and think that posting selfies is a meaningful political act.

We also see clearly what is at the top of the worries for the union and the membership. The app does only one thing: it refers you to a lawyer. Glancing at the app, we see a list of the common categories of problems, and the number one suggested topic is “nonpayment”.

The FU has been pushing its #FreelancingIsntFree campaign for more than a year, so we get the picture. The same bastards who hire temps instead of permanent employees also find it cost effective to not pay the temps.

Another glaring point is that, like much of the union’s activities, this offer is initially available only in NYC. The Union is open to everyone, even schlunks like me out in some cornfield, but they are effective on the ground only in a few cities, mostly in NYC, where they are headquartered. I’m pretty sure that the union would like to spread the goodness everywhere, but it tends to be a perennial disappointment out here in the cornfields, where we can read about, but not really get, much real union action.

Anyway–see how much we can learn from close reading an app!


Let me try to be clear. There isn’t anything really wrong with this app, and I certainly support the FU and its purpose. The point is to see what the app really is, and to think about what it could be.

Please let me go one more step and make some suggestions for version 2.

First of all, there could be a specialized social network, with union themed features. The network should be totally flat, because everyone is in one union. PMs should be limited to pings that say, “I got your back” (forget about “like”—we don’t have to “like” each other, just fight for each other :-)). The union might circulate petitions and calls to contact politicians.

Second, there could be solidarity themed ‘togetherness’ activities. Simple ways for the Union to organize flash crowds, marches, or picnics, where feasible.  Other activities might include walkabouts that alert you when union members are near (a la Look Up or even AR Pokemon).

In cases where we can’t meet in person, let’s have digital solidarity. Digital sing-alongs. Digital dance-alongs. Casual games.

One game I can think of is a simple trivia game to learn about the union and its members. Flash cards with simple (non-invasive) information, like where you are, what you do, and a tag. Remember the most Union members and be famous! High multipliers for locations outside NYC, and for statistically unusual tags (rare occupation, older worker, etc.).

If we want to go Augmented Reality, then we could make union badges that are AR markers. When you encounter someone with their badge on, point the app at her or him. Poof, they are surrounded by halos and unicorns! Or some other magic, magic that only happens when two union members are together in physical space.

The point is, if you make the app cool enough, people will want to join the union, just to get the app!  Let’s put the union in the lead of social technology.

Join the union.

Close Reading Apps: Brilliantly Executed BS

One of the maddening things about the contemporary Internet is the vast array of junk apps—hundreds of thousands, if not many millions—that do nothing at all, but look great. Some of them are flat out parodies, some are atrocities, many are just for show (no one will take us seriously if we don’t have our own app). But some are just flat out nonsense, in a pretty package. (I blame my own profession for creating such excellent software development environments.)

The only cure for this plague is careful and public analysis of apps, looking deeply into not only the shiny surface, but the underlying logic and metalogic of the enterprise. This is a sort of “close reading” of software, analogous to what they do over there in the humanities buildings.  Where does the app come from? What does it really do, compared to what they say it does? Whose interests are served?

Today’s examples are two apps that pretend to do social psychology: Crystal (“Become a better communicator”) and Knack (“for unlocking the world’s potential”).


Yet Another Security Flaw In Your Mobile Device

It seems like every week brings a new study that demonstrates that your mobile device is a sieve of information about you, potentially hackable in any number of exotic ways.

This week we learn that not only can hackers manipulate the motion sensors in your phone, they can read the motion sensors and guess what you are typing. Specifically, Maryam Mehrnezhad and colleagues at Newcastle University published a paper demonstrating that a simple hack can steal your 4-digit PIN with greater than 50% hit rate [1].

Yoiks.

This attack takes advantage of the fact that most operating systems allow programs to access the motion sensors without asking permission. This has many potential security implications, but in this case the researchers are looking at the use of numerical keyboards to type in PINs. They use JavaScript loaded in a browser to snoop on the sensors while another tab asks for a PIN.

They train a classifier that learns to recognize the motion of the phone as numbers are tapped. The data uses features including orientation, acceleration, gravity, and so on. The resulting model can then guess what number is typed with absurdly high hit rates, even when a different person types the numbers.
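To make the idea concrete, here is a toy sketch (mine, not the authors’ code, and with entirely synthetic “sensor” data): give each digit a characteristic motion signature, then train a simple nearest-centroid classifier to recognize which digit was tapped. Real attacks use richer features and fancier classifiers, but the logic is the same.

```python
import math
import random

random.seed(0)

# Hypothetical stand-in for real sensor data: each keypress yields a feature
# vector (orientation, acceleration, ...); here each digit gets a fake noisy
# "motion signature" instead.
def make_sample(digit):
    base = [math.sin(digit * k) for k in range(1, 7)]
    return [v + random.gauss(0, 0.2) for v in base]

train = [(make_sample(d), d) for d in range(10) for _ in range(50)]

# "Training" a nearest-centroid classifier: average the vectors per digit.
centroids = {}
for d in range(10):
    vecs = [x for x, lab in train if lab == d]
    centroids[d] = [sum(col) / len(vecs) for col in zip(*vecs)]

def classify(x):
    # Guess the digit whose centroid is closest (squared Euclidean distance).
    return min(centroids, key=lambda d: sum((a - b) ** 2 for a, b in zip(x, centroids[d])))

# Hit rate on fresh samples -- analogous to guessing PIN digits from motion.
fresh = [(make_sample(d), d) for d in range(10) for _ in range(20)]
hits = sum(classify(x) == d for x, d in fresh) / len(fresh)
print(f"digit hit rate: {hits:.2f}")
```

Even this crude scheme beats chance by a wide margin on its toy data, which is the unsettling point: the signal is there in the sensors.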

Sigh.

The researchers note that this vulnerability needs to be addressed by operating systems and standards. Essentially, motion and touch sensors should be treated as IO devices, and managed similarly to microphones, cameras, and so on. (It is slightly scandalous that these sensors are so poorly protected, even after all these years.)

However, it isn’t really clear what kind of management would work, since the attack is done through the web browser.

The study also examined user expectations. The main point, of course, is that none of us would intuitively expect that our PIN could be stolen via these sensors. For that matter, many people don’t know about or understand these sensors.

You can’t do the right thing if you don’t even know it is there, and there isn’t any way to do anything anyway.

Double sigh.


  1. Maryam Mehrnezhad, Ehsan Toreini, Siamak F. Shahandashti, and Feng Hao, Stealing PINs via mobile sensors: actual risk versus user perception. International Journal of Information Security:1-23, 2017. http://dx.doi.org/10.1007/s10207-017-0369-x

Wearable Tech for Couples Therapy

Adela C. Timmons and colleagues at USC published an interesting study using digital sensor data from wearable technology to understand interpersonal interactions: “Using Multimodal Wearable Technology to Detect Conflict among Couples” [2].

The study is notable for using a variety of converging measurements (“multimodal”). They collected from sensors:

electrodermal and electrocardiographic activity, physical activity, and body temperature

and mobile phones collected:

audio recordings and GPS coordinates

They also collected self reports on mood and perceived interpersonal conflict ([2], p. 51).

The data analysis was able to learn classifications that detect “conflict” with high accuracy.

The general idea is to be able to continuously monitor interactions, to potentially intervene to improve couples’ “relationship functioning and quality of life.” Intervene in real time in the real world, not just in a therapist’s office.

This work is very much in the spirit of “quantified self” and the hundreds of thousands of health and fitness apps in the stores: self-surveillance to trigger nudges toward health and happiness. But this group is stepping beyond the narcissistic “self”, recognizing that “the quality of our relationships plays a central role in our mental and physical health.” (“Quantified us”, if you will.)

(Of course, this is still “solving the problems of the researchers”:  the target is young couples, an important problem for twenty-something researchers.)

The main thrust of the work so far is developing measurements. Following the strongest methods, the team collected data from several “modes”, including physiological, voice recordings, GPS, and self reports of mood. They had to do considerable work to organize these disparate data sources, to construct a time line. Furthermore, it is important to have the data from both members of the couple in a single, accurately synchronized, dataset. I have done this kind of data wrangling, so I know that this was a non-trivial effort, just to get a clean dataset.
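To give a flavor of that wrangling, here is a minimal sketch (invented records, not the study’s data) of the core alignment problem: sensor streams from two partners arrive at different rates and times, and must be bucketed onto one shared, synchronized timeline before any joint analysis is possible.

```python
from datetime import datetime
from collections import defaultdict

# Hypothetical raw records: (timestamp, person, channel, value), arriving
# at different rates for each partner and sensor.
records = [
    ("2017-03-01 09:12:00", "partner_a", "heart_rate", 72),
    ("2017-03-01 09:47:30", "partner_a", "heart_rate", 95),
    ("2017-03-01 09:15:10", "partner_b", "skin_conductance", 0.8),
    ("2017-03-01 10:05:00", "partner_b", "skin_conductance", 1.9),
    ("2017-03-01 10:20:00", "partner_a", "heart_rate", 110),
]

# Bucket every reading into its hour, keyed by (person, channel).
timeline = defaultdict(lambda: defaultdict(list))
for ts, who, channel, value in records:
    hour = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S").replace(minute=0, second=0)
    timeline[hour][(who, channel)].append(value)

# One row per hour: each (partner, channel) averaged over that hour.
for hour in sorted(timeline):
    feats = {k: sum(v) / len(v) for k, v in timeline[hour].items()}
    print(hour, feats)
```

Multiply this by audio, GPS, physiology, and self-reports for two people, and the “non-trivial effort” becomes obvious.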

From the data, they extract a variety of indications of the social context, personal activity, and indications of conflict. The paper explains some of the methods used to pull out these features.

Using this painstakingly assembled data, they have built a classification tree that is trained to determine whether there is “conflict” during each given hour of data. This is a rather coarse level of analysis, and I strongly suspect that this contributed to the relatively high misclassification rates (as they acknowledge in the paper).

One indication that these folks have their heads screwed on right is the fact that they report the receiver operating characteristic (ROC) curves to show how effective the classification is (Figure 2 on p. 56). So much of the data reported about apps (when there is any data at all) is useless cargo cult figures about “accuracy”.  Check out this study to see how you are supposed to measure effectiveness.
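For readers who haven’t met them: an ROC curve is built by sweeping the classifier’s decision threshold and recording, at each setting, the false positive rate against the true positive rate. Here is a toy version with made-up scores (not the paper’s data), including the area under the curve, where 0.5 is chance and 1.0 is perfect:

```python
# 1 = "conflict hour", 0 = no conflict; scores are a classifier's confidences.
labels = [1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
scores = [0.9, 0.85, 0.8, 0.7, 0.65, 0.6, 0.5, 0.4, 0.35, 0.3, 0.2, 0.1]

def roc_points(labels, scores):
    pos = sum(labels)
    neg = len(labels) - pos
    points = []
    # Sweep the threshold from strict to lenient, recording (FPR, TPR).
    for thresh in sorted(set(scores), reverse=True):
        tp = sum(1 for l, s in zip(labels, scores) if s >= thresh and l == 1)
        fp = sum(1 for l, s in zip(labels, scores) if s >= thresh and l == 0)
        points.append((fp / neg, tp / pos))
    return [(0.0, 0.0)] + points

pts = roc_points(labels, scores)
# Area under the curve via the trapezoid rule.
auc = sum((x2 - x1) * (y1 + y2) / 2 for (x1, y1), (x2, y2) in zip(pts, pts[1:]))
print(f"AUC = {auc:.3f}")
```

Unlike a single “accuracy” number, the full curve shows the trade-off you actually face: how many false alarms you must accept to catch a given fraction of real conflicts.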

The overall goal is to not only classify but to automatically react to events, e.g., by trying to calm the emotions and suggest positive actions. I’m not sure what kinds of interventions might work, especially if they are delivered by an attention-demanding mobile device.

The researchers point out that this system isn’t quite what is needed for such interventions either. This research demonstrated classification, but it would be really useful to have prediction. If the system could notice that trouble is brewing, it might be able to head things off, rather than try to patch things up afterwards.


This is very solid research, nicely done.

It is especially notable for taking the couple as the unit of analysis, rather than the isolated individual. Yay! For once, social psychologists are involved, as they should be.

My own view is rather skeptical of the real time intervention project, for several reasons. First of all, there is a question of therapeutic strategy. Do you want a digital therapist riding along, guiding your relationship all the time? Will couples become dependent on the conflict detector and its nudges? Ideally, we’d like them to learn the skills, and wean them from the app, no?

Second, I have to wonder whether a mobile device is the right way to deliver nudges. The examples I’ve seen to date are not impressive. I grant you that millennials are more likely to take their phone as an authority equal to (or better than) a human therapist. But still, I can imagine the iPhone flying out the window if it tries to tell me to calm down and be reasonable when I’m really furious.

Worse, I don’t know if switching attention to the mobile device will help interpersonal conflict. If you stop arguing to check out the message from your phone that is telling you to calm down, that kind of disrupts the relationship. Distraction might or might not help defuse a fight—this would be a great topic for a research study! Perhaps just delivering a text message might be therapeutic, in and of itself.

(And if the argument is about “you pay more attention to social media than to me”, real time nudges from the phone will be a bad, bad thing.)

One final point. This group has done some heroic effort to incorporate a variety of data. I have to wonder if they really need all this data. Looking back to the bad old days before everyone carried a GPS-capable supercomputer in their pocket, studies achieved similar levels of effectiveness with much simpler datasets (e.g., accelerometers, light, and temperature in [3]). Pentland’s lab at MIT has achieved remarkable results using mainly vocal features, not even content analysis [1]. And so on.

My point is to suggest that they may want to step back and see if they can pare things down, rather than pile on more and more data. Maybe a few features from the voices and timing of speech, some patterns of digital messaging, plus a reasonable guess about social context would work well enough. Honestly, the physiological measures are never going to be easy to interpret, and maybe you don’t need them at all.


  1. Alex Pentland, Social Physics: How Good Ideas Spread – The Lessons From A New Science, New York, The Penguin Press, 2014.
  2. Adela C. Timmons, Theodora Chaspari, Sohyun C. Han, Laura Perrone, Shrikanth S. Narayanan, and Gayla Margolin, Using Multimodal Wearable Technology to Detect Conflict among Couples. Computer, 50 (3):50-59, 2017. http://ieeexplore.ieee.org/document/7888395/
  3. Van Laerhoven, Kristof and Ozan Cakmacki, What shall we teach our pants?, in Fourth International Symposium on Wearable Computers. 2000: Atlanta. p. 77-83. http://dl.acm.org/citation.cfm?id=856531

Spectroscope At The Grocery Store

OK, I’ve been beefing about “cargo cult” apps that use mobile devices and sensors to do DIY environmental and medical analysis.  Unfortunately, it’s getting harder and harder to even know how real things are.

Case in point, consider Tekla S. Perry’s report for IEEE Spectrum about the “SCiO Food Analyzer” app [2]. First of all, this isn’t a trivial toy (like, say, a “smart” hair brush). They are building a tiny infrared spectroscope that attaches to, or soon will be built into, a mobile phone or other device.

Is this real, or is this just something that looks real?  It’s hard to tell.

My rule of thumb is that spectroscopy is pretty magical, so this has got to be an interesting device. The question is, does this device actually work? And how do we know that?

The suggested use case is examining food in the store to get a better idea of its quality. The IR scanner can do some kinds of chemical analysis, and report on carbohydrate, fat, and sugar contents, for instance. The app uses unspecified “algorithms” to relate the measures to the flavor of produce, as well as the levels of carbs.

The article reports on a successful demonstration of the technology, which impressed the reporter. The app isn’t intended to tell you what an unknown item is; instead you tell it “this is an apple”, and it tells you the sugar content and how it falls in the range of apples it knows about. I.e., the algorithm predicts how the fruit will taste, based on the readings.
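Here is a guess at the general shape of such an “algorithm” (the company doesn’t say what it actually does): match the reading against reference samples of the named food and report the known sugar levels of the closest ones. All names and numbers below are invented for illustration.

```python
# Hypothetical reference library: (fake IR intensities, sugar in g/100g)
# for foods the system has been "taught" about.
references = {
    "apple": [([0.9, 0.4, 0.2], 11.0),
              ([0.8, 0.5, 0.3], 13.5),
              ([0.7, 0.6, 0.3], 15.0)],
}

def predict_sugar(label, spectrum, k=2):
    # The user asserts the label ("this is an apple"); we never question it.
    samples = references[label]
    dist = lambda s: sum((a - b) ** 2 for a, b in zip(s[0], spectrum))
    nearest = sorted(samples, key=dist)[:k]     # k closest reference samples
    return sum(sugar for _, sugar in nearest) / k

print(predict_sugar("apple", [0.85, 0.45, 0.25]))
```

Notice the built-in failure mode this sketch makes obvious: point it at a kiwi, tell it “apple”, and it will cheerfully report where that kiwi falls in the range of apples.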

It was all pretty magical, pointing a gadget at food and getting an instant analysis. To be fair, I can’t verify the accuracy of what I was seeing on the screen; I didn’t take the fruits and cheeses back to a laboratory to confirm the analysis using more traditional technology. But it certainly seemed real, real enough that I would be pretty excited to have this kind of technology built into my smart phone.

Evidence? There are no obvious citations in the article. Consulting the company web site [1], I find some generic descriptions of the technology, but no validation study, published or otherwise.

This being Silicon Valley, there is lots of information about awards and press reports, as well as news about funding and company alliances. Apparently, attracting venture capital and phone manufacturers is supposed to tell me that the results are scientifically valid. Sigh.

The lack of peer-reviewed evidence is a concern. For one thing, the device is offered in the area of food safety (and possibly drug safety), areas that are potentially dangerous if users misinterpret the results or rely on them further than they should. (“My phone didn’t say it was contaminated, so I thought it was OK to eat it.”)

It’s not that the technology is unbelievable, or implausible. But the fact that it could work does not mean that this particular device does work.

There are many questions that I’d want addressed.

The IR scan has many obvious limitations. I’m pretty sure it won’t work on frozen food, nor through foil or other IR-opaque packages. I suspect it won’t work for most cooked foods. I don’t know what kinds of errors it may be vulnerable to. (Dust? Water on the lens? Sugar water on the packaging? Fingers in the way of the scan? Deliberate hacking?)

The unspecified algorithms are surely some form of machine learning. What exactly were they taught? What are the limits of the data? If it knows about apples, what about pears?

For example, there are hundreds of species of apples. Have they sampled all of them? How well does the system deal with a new variant? What happens if I point it at a kiwi fruit and ask it if this “apple” is fresh?

The basic learning task is not just a chemical analysis, but also relating the chemistry to the quality of the produce. What heuristics are used, and how valid are they? In addition to variation in produce, how much variation is found among people’s tastes? How is this accounted for in the algorithm? Just how useful are the results?

And, of course, whatever it does, how reliable and accurate is it?

Having made a living as a software guy, I know very well that demos are hardly the same thing as actual validation.

Finally, I thought it was kind of funny that the motivating problem was that the food in the local store tastes blah, and “he resigned himself to occasionally buying tasteless produce or traveling 30 miles to a grocer he discovered that he could trust.”

This device addresses this lack of trust by…I’m not sure. I guess it lets you avoid the stuff you don’t want, though it doesn’t do much to get better food into your local store.

But the funny part is that the lack of trust in the store is solved by an app that does a lot of fancy stuff, which we are asked to trust on faith.

At the moment, it’s hard to know just how well this “magical” product works. Is this real, or cargo cult? I can’t say, and that means I must assume it doesn’t work until proven otherwise.

It is more than a little worrying that venture capital seems to have replaced openly published research as the method for validating technology. We know that will lead to disaster.


  1. Consumer Physics, “SCiO: The world’s first pocket size molecular sensor”, https://www.consumerphysics.com/
  2. Tekla S. Perry, What Happened When We Took the SCiO Food Analyzer Grocery Shopping, in IEEE Spectrum – View From The Valley. 2017. http://spectrum.ieee.org/view-from-the-valley/at-work/start-ups/israeli-startup-consumer-physics-says-its-scio-food-analyzer-is-finally-ready-for-prime-timeso-we-took-it-grocery-shopping

 

Excitement About An APD Colony Counter App

I have blogged many times about mobile apps that offer cargo cult science, such as DIY pollution monitoring, or medical diagnosis. It is absurdly easy to make an app and distribute it, and there is little filtering on what is produced, or what claims are made. Thus, there are hundreds of thousands of apps related to health and diet, with scarcely any evidence that any of them do anything, let alone produce the alleged benefits.

While I’m alarmed at shoddy apps that look good and make unproven claims aimed at the general public, I’m really, really worried about cargo cult science apparently marketed to scientists, who should know better.

A case in point is an “APD Colony Counter App” from the Agency for Science, Technology and Research (A*STAR), Singapore, which was reported last year in Nature Methods [6] and received uncritical mention in blogs that probably should know better.

The app itself is a perfectly reasonable idea, using image processing software to create a phone app that analyzes and counts bacterial colonies on an agar plate. This is a tedious and time-consuming thing to do by eye, so automation is a great idea. It is also true that ubiquitous mobile phones now have cameras and processing power sufficient to do this task. So, why not? Techs have a phone in their pocket, why not use it?
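The heart of such an app can be sketched very simply (this is my illustration, not A*STAR’s code): threshold the plate image, then count connected blobs. A watershed algorithm, which the paper says this app uses, goes a step further and splits touching colonies that plain labeling would lump together as one.

```python
# Toy "thresholded plate image": '#' marks pixels above the colony threshold.
plate = [
    "..##....#.",
    "..##....##",
    "..........",
    ".#....##..",
    ".#....##..",
]
grid = [[c == "#" for c in row] for row in plate]
rows, cols = len(grid), len(grid[0])

def count_colonies(grid):
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not seen[r][c]:
                count += 1
                stack = [(r, c)]          # flood-fill one connected blob
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and grid[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return count

print(count_colonies(grid))  # 4 distinct blobs on this toy plate
```

This also makes the hard part visible: two colonies that have grown into each other form one blob, and only a segmentation step like watershed can split them again.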

But there are several things that worry me.

First of all, there are zillions of devices and packages that already do this (e.g., [1, 3]), albeit, many are expensive. In fact, there are mobile apps that do the same thing (e.g., [2, 4]). So, why do we need another one?

Even more important, does this software actually work? And how does it compare to other methods, human and algorithmic? A cheap app that produces poor results is no bargain, especially if health or money depend on the answers.

In this case, there is a short abstract, published in Nature Methods [6]. This paper does not review other methods, or cite any such review. The paper claims that the app is “convenient” and portable. They also note that “there are many colony counter apps”, but this one uses software that is designed “to segregate merged colonies better for more accurate quantification”. So far, so good.

After discussing the interface, the paper offers one table demonstrating the effectiveness of the app. The table compares “App Count” to “Manual Count”, i.e., the number of colonies found by the app versus by a person. There is little information about this tiny dataset (the sample size is twelve), such as the actual images or the expertise of the human counter.

For that matter, the measure reported is “accuracy”, which is just the number of colonies counted by the app compared to the human count. There is no report of false positives, nor of the accuracy of the human standard, nor any indication that the two methods identified the same colonies, i.e., how well they agreed with each other. This data simply does not show that the algorithm even works, and there is no evidence offered in the paper that it produces valid results.
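Some toy numbers (mine, not the paper’s) show why a matching count is not evidence of validity: an app can report exactly the right total while detecting the wrong things.

```python
# Suppose a human finds 10 real colonies on a plate, and the app also reports
# 10 detections -- but 4 of them are dust specks, not colonies.
human = {f"colony_{i}" for i in range(10)}                      # ground truth
app = {f"colony_{i}" for i in range(6)} | {f"artifact_{i}" for i in range(4)}

count_accuracy = len(app) / len(human)        # the paper's style of metric
precision = len(app & human) / len(app)       # fraction of detections that are real
recall = len(app & human) / len(human)        # fraction of real colonies found

print(count_accuracy, precision, recall)  # 1.0 0.6 0.6
```

By the count-ratio metric this app is “100% accurate”, even though 40% of its detections are junk and it missed 40% of the real colonies. That is exactly the gap the paper’s table cannot rule out.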

Worse, this data isn’t even relevant to the question of whether this supposedly superior algorithm works better than other similar apps. They specifically assert that this algorithm yields “more accurate quantification” than competing apps, but offer not one shred of evidence that this is true. Where is the comparison study?

This is cargo cult science, apparently marketed to scientists. In a press release, they make expansive, if non-specific, claims:

“If we were to look at the history of science, many breakthroughs—including discovering microorganisms—were done at home or outside the workplace,” says Gan. “By having apps that anyone can access anywhere, I’m hoping that we’re going to bring back the spatial freedom for scientists to make discoveries anytime, anywhere.” (quoted from [5])

Remember, this app is offered not as a toy; it is intended to “enable the smartphone to transform into a useful scientific device for the quantification of bacterial load in clinic, research, environmental, and even food safety regulatory labs.” [6]

Clinical tests and food safety? Yoiks!

Look, I’m sure that this app probably works pretty well. Maybe it is better than other apps, though I don’t know if it is meaningfully better.

But I need to know that this device is safe and effective. Offering something cheap that looks good, with unsupported claims that it is “accurate”, just isn’t good enough.

The truly embarrassing thing is that this is not a particularly difficult validation study. Even I could do it. So I don’t think it is too much to expect that people who hope to create “breakthroughs” in science do some very simple science.


  1. Biocompare. Colony Counters. 2017, http://www.biocompare.com/Lab-Automation-High-Throughput/11357-Colony-Counters/.
  2. ColonyCount. ColonyCount. 2012, http://www.colonycount.org/.
  3. Quentin Geissmann. OpenCFU. 2013, http://opencfu.sourceforge.net/.
  4. Promega. Mobile, Desktop, and Web Apps for the Lab. 2017, https://www.promega.com/resources/mobile-apps/.
  5. Science X network, Counting microbes on a smartphone, in Phys.org. 2017. https://phys.org/news/2017-03-microbes-smartphone.html
  6. Chun-Foong Wong, Joshua Yi Yeo, and Samuel Ken-En Gan, APD Colony Counter App: Using Watershed Algorithm for improved colony counting. Nature Methods Application Notes, August 9, 2016. http://www.nature.com/app_notes/nmeth/2016/160908/pdf/an9774.pdf

 

Study of Apps to Keep Teens Safe

We are in the midst of a giant, unplanned, and uncontrolled social experiment: a wave of young people are growing up immersed in ubiquitous network connectivity. In the Silicon Valley spirit of “just do it”, we have thrown everyone, including kids and teenagers, into the maelstrom of the Internet, with no safety net or signposts. Given that even grown-ups behave like idiot adolescents on the web, it doesn’t seem like the greatest environment for growing up.

Needless to say, there is a lot of parental anxiety about all this. And where there is anxiety, capitalism finds a market. Apps to the rescue!

Pamela Wisniewski and colleagues reported last week on a study of dozens of apps that are generally aimed at “adolescent online safety” [2]. One of their important contributions is a systematic definition of the types of strategies that may be employed. There are non-technical strategies (e.g., rulemaking), but the study focuses on technical strategies, i.e., the services provided by supposed “safety” apps.

They describe two kinds of strategy, “parental control” and “teen self-regulation”. Parental control strategies are monitoring, restriction, and active mediation. Teen self-regulation strategies are self-awareness, impulse control, and risk-coping. In addition, apps might take an informational or teaching approach.

It should be clear that all these strategies have a role and value, but I think everyone agrees that “growing up” almost certainly means moving to self-regulation.

The heart of the study is an analysis of some 75 apps, classifying the strategies enabled by each. These apps mostly run in the background, to monitor and report activity on the mobile device. In short, spyware.

The results are clear as day: almost all of the “safety” apps are designed to monitor and restrict the online behavior of teens. Presumably, these features implement parental controls, not self-control. This study could not evaluate the effectiveness of these apps, or compare different strategies. But it is very clear that “the market” is delivering only a few of the possible strategies.

The researchers point out some conceptual weaknesses in the technical strategies employed by the apps.

“These features weren’t helping parents actually mediate what their teens are doing online,” said Wisniewski. “They weren’t enhancing communication, or helping a teen become more self-aware of his or her behavior.” (quoted in [1])

Many of them operate as covert and pretty hostile spyware, which might be what parents want, but which has all sorts of possible side effects: “the features offered by these apps generally did not promote values, such as trust, accountability, respect, and transparency, that are often associated with more positive family values” (p. 60).

“Simply put, the values embedded within these apps were incongruent with how many parents of teens want to parent.” (p. 60)


To me, it is clear that these apps suffer not just from wrongheaded thinking (i.e., taking the parent as the target customer rather than the teen, or better yet, the family), but also from the affordances of the app-verse.

The Internet delivers two things very well: instant gratification (“click here”), and surveillance. Delivering self-regulation is much, much harder than delivering “swipe right”.

The Internet is all about surveillance. The wealthiest companies in the world are basically in the business of monitoring everybody as much as possible. Putting this technology in the hands of parents is literally a no-brainer.  And the lack of brain effort shows.

From this point of view, the landscape documented by Wisniewski et al. is completely unsurprising. (This is also another indication that market forces are hardly the secret to good design. The market creates a hundred thousand health apps and hundreds of dating apps, because people want them, even if they don’t actually do any good.)


The researchers point out that there is an opportunity for much better design here. Some of the few apps that focus on self-regulation use the same surveillance techniques, but for self-reporting and self-regulation. Can we design better ways to deliver self-awareness and, if desired, self-limitation? For example, everyone might benefit from a way to put an extra latch on some links, just to slow down and maybe not go there so often. An “are you sure?” filter, to gently deter overdoing it.
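To show this needn’t be exotic, here is a sketch of one possible “are you sure?” latch (entirely hypothetical, my own invention): the first tap on a flagged link just arms it, and only a deliberate second tap within a short window actually follows it.

```python
import time

# A self-regulation gadget rather than spyware: it never reports anywhere,
# it just makes the user confirm before following a flagged link.
class AreYouSureLatch:
    def __init__(self, window_seconds=10):
        self.window = window_seconds
        self.armed = {}   # url -> time of the first tap

    def request(self, url, now=None):
        now = time.time() if now is None else now
        armed_at = self.armed.get(url)
        if armed_at is not None and now - armed_at <= self.window:
            del self.armed[url]
            return True            # second tap in the window: let it through
        self.armed[url] = now      # first (or stale) tap: ask "are you sure?"
        return False

latch = AreYouSureLatch()
print(latch.request("https://example.com/feed", now=0.0))   # False: armed
print(latch.request("https://example.com/feed", now=5.0))   # True: confirmed
```

The point is the value embedded in the design: the friction is visible, consensual, and entirely on the user’s side, with nothing monitored or reported.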

Another opportunity would be clever forms of communication within the family. I can think of some interesting technical challenges here. Obviously, there is a need to create trust, and, in my opinion, unilateral spying is not a good way to do that. But there is a need to share information and context, and a mobile device is uniquely suited to do that. So can we make a sort of “family Snapchat”, an app that, with mutual consent, shares a pretty comprehensive view of what’s going on?

For that matter, it would be nice to be able to easily share specific messages and threads, without having to necessarily share everything. Or tracking certain actions and not others. (For instance, I should be able to gossip with my best friends without CCing Mom, but perhaps software might flag when strangers are talking to me.)

This is a very useful paper.


  1. Matt Swayne, Online security apps focus on parental control, not teen self-regulation, in Penn State News. 2017. http://news.psu.edu/story/452954/2017/02/27/research/online-security-apps-focus-parental-control-not-teen-self
  2. Pamela Wisniewski, Arup Kumar Ghosh, Heng Xu, Mary Beth Rosson, and John M. Carroll, Parental Control vs. Teen Self-Regulation: Is there a middle ground for mobile online safety?, in Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing. 2017, ACM: Portland, Oregon, USA. p. 51-69. http://dl.acm.org/citation.cfm?id=2998352