Category Archives: Arrogant Internet Companies

Apple Demonstrates How Not To Support Self-Repair

Sigh. Don’t try this at home.

Apple has long been famous for its hostility to DIY.  From the beginning, long before the iPhone, Apple made it as hard as possible to develop your own software.  The so-called geniuses basically worked hard to prohibit anyone from changing or adding to—or competing with—their perfect product.  Sigh.

After 20 years of experience with laptops and servers, I found the iPhone to be, from the start, one of the most closed systems ever released.  You can’t even install your own software on your own phone without approval from Apple.  And you definitely can’t open up your phone, even to, say, change the battery.

For a software developer like me, this basically means that I can’t really do much on Apple devices, at least not without negotiating with Apple.  And no, I’m not interested in getting permission from Apple to create my own stuff.

This isn’t just Apple; everybody is doing it, and it’s not funny anymore.  Customers are starting to agitate for more control over their own devices.  And some jurisdictions are starting to legally mandate the “right-to-repair”.  This falls far short of a really open platform, but it’s a step.

Now, Apple is really, really good at design.  So they could design self-repair capabilities that would blow your socks off.  But instead, as Brian X. Chen reports, Apple seems to have bent their efforts toward a passive-aggressive, absolutely hostile process [1].

It would be really funny if it wasn’t so annoying.

Apparently, the “self-repair” option involves renting a room full of the specialized machinery that Apple shops use, so that you can attempt to follow the preposterously designed processes used by official Apple repair elves. 

(One thing we learn from this “self-repair” system is that Apple makes it difficult even for their own employees to repair things.)

Chen’s article walks through his own disaster.  All he was trying to do was replace the battery.  On normal hardware, this is considered a routine user task.  But iPhones are “special”.  Very, very special.

Among other things, you have to heat the phone to melt the glue that seals everything.  And, of course, there are special screws that have to be removed in the right order.

Naturally, he broke his phone.  Fortunately, he had expert help to replace the screen he broke.

When the phone eventually rebooted, it would not run because it detected “unknown” parts.  Now, these were all official Apple parts; they just weren’t installed by Apple.  But you have to contact Apple and jump through hoops to get your repaired phone re-authorized.

And, by the way, all this costs more than paying Apple to do it for you, even if you don’t buy the whole repair factory kit thing for thousands of dollars.

So, fix it yourself, if you dare!  Honestly, I wouldn’t recommend that any normal person try this on their own phone, at least not on a phone they hope to keep using after they “fix” it.

It’s very clear that the design geniuses at Apple worked very hard to design systems that cannot be fiddled with.  I understand why they do this.  They want to prevent reverse engineering and hacking, and protect their “special” experience.   But this obsession with protecting Apple’s property rights has seriously bad side effects for users.

The current “self-repair” process is basically, “You rent an Apple shop, and learn to be an Apple repair person.  And then register your repairs with Apple.” 

This is a horrible user experience, and only just barely works at all.

Tsk.  I know you could do so much better if you wanted to, Apple.


  1. Brian X. Chen, I Tried Apple’s Self-Repair Program With My iPhone. Disaster Ensued, in New York Times. 2022: New York. https://www.nytimes.com/2022/05/25/technology/personaltech/apple-repair-program-iphone.html

Apple iOS Opt-In Seems to Have Huge Effect

I’m not a giant fan of Apple iOS because it is a tightly closed system and pretty hostile to small and non-commercial developers like me.  It’s also grievously complicated and prescriptive and expensive to develop for.  In general, it’s Apple’s way or no way, which sucks.

But autocratic rule (as well as billions in the bank) has advantages, and one of them is that Apple can, and just did, screw the biggest sharks on the Internet. 

As an old grey head who was around while we were booting up the Internet, I have to tell you that secret third-party tracking is definitely not what we intended to be the primary use of the Internet.  But today most of the business and a ridiculous chunk of the bandwidth are taken up with surveillance systems.  It’s exactly how Facebook makes its zillions: spying on its customers, and selling them out to anyone with money to pay.


It’s wicked and sinful.

I think that in recent years, a new circle in Hell must have opened, just for Internet advertisers.

There, the guilty spend eternity clicking on misleading user interfaces, “agreeing” to infernal terms of service, and attempting to opt out of torment.  And, of course, the torment is highly tuned to each individual, whose behavior is tracked in extreme detail, “in order to serve you better”. 

Forever.


Anyway.

Apple folks might end up in hell, but perhaps it won’t be that circle.

Because Apple did what everyone should always have done: block third-party tracking unless the user specifically opts in.

And, amazingly enough, when given the actual choice, most people choose not to be spied on and resold as digital cattle.

One report suggests that 96% of people don’t opt in [2]. Even if that number is high, we can be sure that a vast number of people will not opt in.

As Samuel Axon put it, this “exceeds advertisers’ worst fears” [1].
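(For the curious, the developer-side mechanism is tiny.  Here is a minimal sketch of the App Tracking Transparency request, assuming iOS 14.5+ and Swift; the helper function name is mine, and a real app also has to declare an NSUserTrackingUsageDescription string in its Info.plist.)

```swift
import AppTrackingTransparency
import AdSupport

// Hypothetical helper: ask the user before any cross-app tracking happens.
// Until they explicitly tap "Allow", the advertising identifier (IDFA)
// comes back as all zeros, which is what breaks third-party tracking.
func requestTrackingPermission() {
    ATTrackingManager.requestTrackingAuthorization { status in
        switch status {
        case .authorized:
            // The rare opt-in case: the IDFA is actually usable.
            print("IDFA:", ASIdentifierManager.shared().advertisingIdentifier)
        case .denied, .restricted, .notDetermined:
            // The common case, apparently around 96% of the time.
            print("No tracking allowed.")
        @unknown default:
            break
        }
    }
}
```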

Of course, Facebook is suing Apple about it, because Facebook would seem to be out of business in this model.   I’m having difficulty seeing what Facebook’s case might be. They have a right to rip off their users, and Apple doesn’t have a right to stop them?

Anyway. It will be very interesting to see the impact on online ad revenues. 

Also—when will Android give me the same option??


  1. Samuel Axon, 96% of US users opt out of app tracking in iOS 14.5, analytics find, in Ars Technica, May 7, 2021. https://arstechnica.com/gadgets/2021/05/96-of-us-users-opt-out-of-app-tracking-in-ios-14-5-analytics-find/
  2. Estelle Laziuk, Daily iOS 14.5 Opt-in Rate, in The Flurry Blog, April 29, 2021. https://www.flurry.com/blog/ios-14-5-opt-in-rate-att-restricted-app-tracking-transparency-worldwide-us-daily-latest-update/

Just How Accurate is Facebook Ad Placement?

Facebook is in the news quite a bit these days, good and bad.  Claiming a billion users will do that for you!  And one of the things on everybody’s mind is Facebook’s adverts.  So far as I can tell, Facebook is basically an advertising company, so it makes sense to pay attention to the ads.

Caveat:  I am not now, and never have been, a Facebook user.  So, I do not speak from personal experience, good or bad.

There are a couple of notable things about Facebook ads.  One is that they are targeted, allowing advertisers to communicate with very specific types of people.  Another is that they are delivered through a complicated algorithmic system that, according to reports, makes the advertisers ‘bid’ for each juicy eyeball in real time. I.e., when a victim user clicks on something, the system auctions off ad space on that something before it is displayed.

This is truly ground-breaking technology, for better or worse.  And it has certainly been good for Facebook, which has made zillions selling these ads, which are effectively selling access to those eyeballs.

At the base of this process, Facebook offers a menu of attributes for advertisers to specify their targets.  The advertiser contracts to pay so much per ad, with the expectation that the ads will be delivered to the designated audience, and not ‘wasted’ on other targets.

The attributes include demographics (location, type of computer, etc.) and what Facebook calls ‘behavior’, which is based on history of purchases, online behavior, and so on.  The idea is that Facebook churns away at the gazoogabytes of data it sees, creating dossiers on each user.  Ads are matched by algorithms to people who fit the specified attributes, and–ka-ching–another sale for Facebook.
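To make the mechanics concrete, here is a toy Swift sketch of attribute matching plus a simplified second-price auction.  The attribute strings, bids, and auction rule are all invented for illustration; the real system is vastly more elaborate (and secret).

```swift
struct Ad {
    let advertiser: String
    let bid: Double                 // maximum price per impression
    let targeting: Set<String>      // attributes the advertiser demands
}

// The dossier: attributes Facebook has inferred about one user (all invented).
let userAttributes: Set<String> = ["age:18-24", "location:urbana", "interest:gadgets"]

let candidates = [
    Ad(advertiser: "ShoeCo", bid: 1.20, targeting: ["age:18-24", "interest:gadgets"]),
    Ad(advertiser: "CarCo",  bid: 2.50, targeting: ["age:35-44"]),
    Ad(advertiser: "GameCo", bid: 0.90, targeting: ["interest:gadgets"]),
]

// Step 1: targeting. Keep only ads whose required attributes this user matches.
let eligible = candidates.filter { $0.targeting.isSubset(of: userAttributes) }

// Step 2: the auction. Highest bidder wins; in a second-price auction the
// winner pays roughly the runner-up's bid.
let ranked = eligible.sorted { $0.bid > $1.bid }
if let winner = ranked.first {
    let price = ranked.count > 1 ? ranked[1].bid : winner.bid
    print("\(winner.advertiser) wins this eyeball for \(price)")   // ShoeCo, at 0.90
}
```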

It all sounds magical and amazing.  But how well does it really work?

I’ve always been suspicious of advertising in general and this kind of targeted advertising in particular.  I’m just not persuaded that it persuades.  Of course, low-income wiseacres with a skeptical attitude like me are not a prime target group for ad placement, so what do I know?  Maybe normies pay attention to Facebook ads.

But a more basic question is whether Facebook even delivers what they say they deliver.  I mean, they’ve got an absurd amount of data about their users, but that doesn’t mean they can actually infer the real-world attributes and behaviors their advertisers are interested in.

While mere “targets” like you and me have no way to see what is actually going on, many advertisers have the ability to track their ads.  We are now learning that at least some of these customers have found that Facebook’s ad targeting is not very accurate.

In short, advertisers are not getting the eyeballs Facebook promised.

Personally, I’m not surprised that the “magic” is less wonderful than claimed.  Inferring real world information from Internet traces is not easy, and Facebook may have a monopoly but it does not have working alchemy.

You and I can laugh about it, but advertisers who are shelling out millions aren’t laughing.  Not to put too fine a point on it, they are, in fact, suing for alleged fraud [1].  Harsh!

The lawsuit hinges on whether Facebook knowingly deceived its advertisers, i.e., made exaggerated claims about the targeting that they knew were wrong.  Sam Biddle reports in The Intercept that the evidence includes remarks attributed to unnamed Facebook employees [1].

Tsk, tsk.  A major advertising company shading the truth to make a sale?  I’m sure that’s never happened ever in the history of advertising!  : – )

Who knows whether this will be proved to be deliberate fraud or not?  But this lawsuit is yet more bad publicity, suggesting as it does that Facebook ads may not be worth anywhere near what they charge for them.  This could cost FB a ton of money.

And this may all be moot, if Apple’s ‘opt in’ policy starts eating into the data Facebook has been stealing from everyone.  However well or poorly Facebook’s targeting works now, it will not work at all if most people don’t let them have their data.  That will put FB (and Google and others) out of business.

It couldn’t happen to a nicer set of folks.


  1. Sam Biddle, Facebook Managers Trash Their Own Ad Targeting in Unsealed Remarks, in The Intercept, December 24, 2020. https://theintercept.com/2020/12/24/facebook-ad-targeting-small-business/

 

“Dark Patterns”

Design Patterns have been around for a pretty long time now, the best of them clarifying the complexity of computer and software systems (famously defined by The Gang of Four in 1994 [1]).  The trick to a good Design Pattern is to find the right level of abstraction, to be general enough to be useful, but omit unnecessary details.

There have been many patterns, for all kinds of things at many levels of abstraction.

But some of my favorites are “anti-patterns”—things you shouldn’t do.  The great thing about anti-patterns is that they keep happening, despite the fact that we know they will not work and will lead to disaster.  Not surprisingly, many anti-patterns involve organizations and management, in addition to poor technical choices.

This summer, researchers at Princeton discussed “Dark Patterns”—practices that are arguably evil [2].

“Dark patterns are user interfaces that benefit an online service by leading users into making decisions they might not otherwise make. Some dark patterns deceive users while others covertly manipulate or coerce them into choices that are not in their best interests.”  ([2], p. 42)

The core of this work is descriptions of the design of interactive systems that work against the interests of the human user.

This work is an interesting bit of applied psychology, and the design patterns have a large psychological component.

“Researchers have also explained how dark patterns operate by exploiting cognitive biases” ([2], p. 42)

Arvind Narayanan and colleagues point out that these techniques are a confluence of several trends, brought together in computer interfaces.  They also note that contemporary web interfaces make it easy to empirically optimize content, through the infamous practice of A/B testing.

 “dark patterns are the result of three decades-long trends: one from the world of retail (deceptive practices), one from research and public policy (nudging), and the third from the design community (growth hacking).” ([2], p. 42)

All of this is built on psychological research that has elucidated the underlying cognitive biases that can powerfully influence people to make irrational decisions. Essentially, “dark patterns” are exploiting common human weaknesses for gain.

The authors suggest that designers should incorporate ethics into the design (“Start by articulating the values that matter to you and that will guide your design”).  This is good advice for many reasons.  Much of the “darkness” is simply a prioritization of corporate interest over users’ interests.  If you don’t want to be “dark”, then incorporate users’ and others’ interests in the design.

They also point out that common A/B testing can lead to dark designs even if you didn’t mean it to, because the criteria used to optimize are short-term and easy to measure.  E.g., optimizing clicks is easy to do, but generally not in the interest of the users.  They suggest working harder to define better metrics for A/B testing, if possible.
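To see how the metric does the steering, consider a bare-bones A/B comparison (a Swift sketch, numbers invented).  Nothing in this loop represents the user’s interest, so whichever variant extracts more clicks “wins”, however obnoxious it is.

```swift
struct Variant {
    let name: String
    let impressions: Int
    let clicks: Int
    var clickThroughRate: Double { Double(clicks) / Double(impressions) }
}

// Hypothetical results after a week of testing.
let calm    = Variant(name: "calm, dismissible banner", impressions: 10_000, clicks: 180)
let nagging = Variant(name: "nagging, sticky banner",   impressions: 10_000, clicks: 240)

// The only criterion is the short-term, easy-to-measure one.
let winner = [calm, nagging].max { $0.clickThroughRate < $1.clickThroughRate }!
print("Ship: \(winner.name)")   // the nagging banner wins, by construction of the metric
```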

Personally, I’d like to see A/B testing regulated.

Long ago I was a psychology major, and we had to do a lot of work before we could do this kind of psychological testing.  We had to justify the research, including how it was beneficial to the subjects.  We had to get the research reviewed to make sure that any potential harms were justified by the potential benefits.  And we had to obtain informed consent.  Legally operative informed consent.  And so on.

It really irks me to see these huge companies completely ignoring these requirements, doing whatever they want without our consent or even knowledge.

There are a lot of problems with academic research reviews (treating every questionnaire as if it were experimental brain surgery is not helpful), but there ought to be some kind of transparency and accountability for A/B testing.  I think that the basic principle really ought to be that the interests of the users are part of the overall picture.  If you can’t tell me why answering this A/B question is important to me, why should I answer it for you?

I should have the ability to say no. In fact, the default should be that my behavior will not be used for research.

This won’t stop dark patterns, but it will cut down on accidentally dark designs, and reduce the level of darkness in designs overall.

Then we can turn our focus to deliberate deception and fraud.

Anyway:  designers, please come out of the darkness and try to live in the light.  We’ll all be happier.


  1. Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides, Design Patterns: Elements of Reusable Object-Oriented Software, Addison-Wesley, 1995.
  2. Arvind Narayanan, Arunesh Mathur, Marshini Chetty, and Mihir Kshirsagar, Dark patterns: past, present, and future. Communications of the ACM, 63 (9):42–47, 2020. https://doi.org/10.1145/3397884

 

A Warning From The Dawn Of The Internet

These days it seems like every pundit in the world is “discovering” that the internet is a bad thing.  And many seem to think this is news.

It isn’t news.

Way back when, at the very beginning of the World Wide Web, we saw where it was going, and warned the world.

I recently dug out an old article about “Digital Commerce” [1] from the National Center for Supercomputing Applications, which is where Marc Andreessen’s “genius” technology came from.  (You didn’t think he invented the technology he commercialized, did you?)

I want to be clear here:  this article was written in 1995.  Netscape was six months old and hadn’t IPOed yet; MS Internet Explorer wasn’t out for another six months.  Amazon booted up later that year, and PayPal was three years in the future.  Zuck was, what, in third grade?  Vitalik Buterin (Mr. Ethereum) was in diapers.  Travis Kalanick (Mr. Uber) was applying to UCLA (from which he eventually dropped out).

In the article, Sensei Adam Cain (a student at the time, who didn’t drop out) and I described the landscape of the near future of internet commerce [1].  The early tech was very crude and laughably out of date now, but we could clearly see what was coming.

The most interesting thing to me is the discussion at the end of the paper.  We noted that digital commerce was going to be destabilizing (“disruptive”) in many ways, and would challenge governments and laws.  We worried about digital markets, and about digital cash (pre-Paypal and pre-Bitcoin!), for exactly the reasons we now worry about them.

Well, that all surely happened.

“As more and more economic and social activity is conducted online, what will this mean for society and the economy? The prognosis is far from clear. Digital commerce occurs with blinding speed, unrestrained by boundaries or distance- often beyond human comprehension and regulation. Will the digital economy be wildly volatile, full of lightning surges and panics of worldwide proportions? Can nation-states, as we know them, exist without a monopoly on money? If not, then what sort of governments, laws, and public institutions will come to exist?” (p. 39)

Equally important, we called out the social effects, the rise of digital communities, and the corresponding erosion of physical communities.  In a memorable phrase, I argued that “A home page is no substitute for a home or a hometown.”  And we asked the vital question, “If digital commerce does not offer support for a decent way of life, what good is it?”  (p. 39)

Quite.

“Digital commerce may help make new virtual communities economically viable. Just as small towns and regions are held together by cultural ties and supported by local economic activity, online communities will form that will be supported by digital economics. In the end, though, commerce is not culture, and digital communications are cold and impersonal. A home page is no substitute for a home or a hometown. If digital commerce does not offer support for a decent way of life, what good is it? “

We knew, right at the beginning, the deep, dangerous changes we were initiating.

And we told you.

So don’t say we didn’t warn you.


  1. Adam Cain and Robert E. McGrath, Digital Commerce on the World Wide Web, in NCSA access magazine. 1995, National Center for Supercomputing Applications: Urbana. p. 36-39. http://hdl.handle.net/2142/46291

 

Smart Toys: Threat or Menace?

The cover of the May IEEE Computer magazine teases, “Are Smart Toys Secure?”, but the article by Kshetri and Voas buries the lede: “Cyberthreats under the Bed” [1].

The topic, of course, is the plethora of new toys built with Internet and IoT technology.  These devices are designed for children, and deploy adult technology including location tracking, internet services, and AI to entertain and, in many cases, sell products.

I have blogged many times about the problematic design and numerous, grievous security and  privacy problems with this technology. I’m sure we are all shocked to learn that these same flaws are found in many “smart toys”.

K&V are particularly concerned about the security weaknesses that have already led to massive breaches.  This is especially troubling because successful identity theft of a child is particularly damaging.  A child’s SSN is easily reused because there is no real history to undo.  And the damage may well not be known until much later, when the young person begins to establish a credit rating and other financial standing.

Besides identity theft, there are a raft of other dubious features. “Smart” toys may record and track the children.  Personal information may be sold on, and children targeted by advertisers. And, of course, the toys are hackable, so bad guys may be able to take over.

Part of the problem is that toy designers have not focused on security, cutting corners financially and relying on outdated and poor technology.  This is exacerbated by the “let the user beware” attitude inherited from Internet companies.  Pushing responsibility onto the children is not only daft, it isn’t legally operative.  So parents are required to take responsibility for grokking the security of these devices—not that there is much that they can do.

“The expectation of understanding smart toys’ security and privacy risks might be unrealistic for most parents.” (p. 96)

This attitude has raised hackles when the vendors impose, and sometimes unilaterally change, contractual terms and conditions that absolve them of responsibility. No need to build a safe product if you make people agree that you have no liability.

K&V report that there is little effective regulation, government or otherwise.  So parents are pretty much on their own.  Good luck.  “As a general rule, however, parents should be wary of toys with recording technology, connect to the Internet, or ask for personal data.”

And, basically, don’t buy them.

Just say no.

“Returning “creepy” dolls and other suspect smart toys to vendors for refunds and exchanges, or refusing to purchase them, will likely motivate toymakers to improve their products’ security.”


  1. Nir Kshetri and Jeffrey Voas, Cyberthreats under the Bed. Computer, 51 (5):92-95, 2018.

Ad Servers Are—Wait For It–Evil

The contemporary Internet never ceases to serve up jaw-dropping technocapitalist assaults on humanity. From dating apps through vicious anti-social media, the commercial Internet is predatory, amoral, and sickening.

This month, Paul Vines and colleagues at the University of Washington report on yet another travesty—“ADINT: Using Ad Targeting for Surveillance” [1].

Online advertising is already evil (you can tell by their outrage at people who block out their trash), but advertisers are also completely careless of the welfare of their helpless prey.  Seeking more and more “targeted” advertising, these parasites utilize tracking IDs on every mobile device to track every one of us.  There is no opt in or opt out; we are not even informed.

The business model is to sell this information to advertisers who want to target people with certain interests.  The more specific the information, the higher the bids from the advertiser.  Individual ID is  combined with location information to serve up advertisements in particular physical locations. The “smart city” is thus converted into “the spam city”.

Vines and company have demonstrated that it is not especially difficult to purchase advertising aimed at exactly one person (device). Coupled with location specific information, the ad essentially reports the location and activity of the target person.

Without knowledge or permission.

As they point out, setting up a grid of these ads can track a person’s movement throughout a city.
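Reduced to code, the bookkeeping is trivial.  Assuming, as in the paper, that each served ad reports back the target’s advertising ID and the location the placement was restricted to, tracking is just a filter and a sort.  This Swift sketch is mine, not the authors’ code.

```swift
import Foundation

// One callback from the ad network: our hyper-targeted ad was just shown to
// the target device (identified by its advertising ID) inside this grid cell.
struct ImpressionReport {
    let maid: String        // mobile advertising ID of the single targeted device
    let gridCell: String    // the location this ad placement was restricted to
    let timestamp: Date
}

// Sort one device's reports by time and you have a coarse trail of that
// person's movement through the city.
func movementTrail(for target: String, in reports: [ImpressionReport]) -> [(Date, String)] {
    reports
        .filter { $0.maid == target }
        .sorted { $0.timestamp < $1.timestamp }
        .map { ($0.timestamp, $0.gridCell) }
}
```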

This is not some secret spyware, or really clever data mining.  The service is provided to anyone for a fee (they estimate $1000).  Thieves, predators, disgruntled exes, trolls, the teens next door.  Anyone can stalk you.

The researchers suggest some countermeasures, though they aren’t terribly reassuring to me.

Obviously, advertisers shouldn’t do this. I.e., they should not sell ads that are so specific they identify a single person. At the very least, it should be difficult and expensive to filter down to one device. Personally, I wouldn’t rely on industry self-regulation, I think we need good old fashioned government intervention here.

Second, they suggest turning off location tracking (if you are foolish enough to still have it on), and zapping your MAID (the advertising ID). It’s not clear to me that either of these steps actually works, since advertisers track location without permission, and I simply don’t believe that denying permission will have any effect on these amoral blood suckers. They’ll ignore the settings or create new IDs not covered by the settings.

Sigh.

I guess the next step is a letter to the State’s Attorney and my representatives.  I’m sure public officials will understand why it’s not so cool to have stalkers able to track them or their family through online adverts.


  1. Paul Vines, Franziska Roesner, and Tadayoshi Kohno, Exploring ADINT: Using Ad Targeting for Surveillance on a Budget — or — How Alice Can Buy Ads to Track Bob. Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, 2017. http://adint.cs.washington.edu/ADINT.pdf

The Social Psychology of IoT: Totally Not Implemented Yet

Murray Goulden and colleagues write some interesting thoughts about the Internet of Things combined with ubiquitous mobile devices, specifically, “smart home” applications which can observe the user’s own behavior in great detail. In particular, they point out that these technologies generate vast amounts of interpersonal data, data about groups of people. Current systems do not manage and protect individual personal data especially well, but they don’t have any provisions at all for dealing with interpersonal data.

“smart home technologies excel at creating data that doesn’t fit into the neat, personalised boxes offered by consumer technologies. This interpersonal data concerns groups, not individuals, and smart technologies are currently very stupid when it comes to managing it.” [2]

The researchers discuss social psychological theory that examines the way that groups have social boundaries and ways to deal with breaching the boundaries. For example, a family in their home may have conversations that they would never have anywhere else, nor when any outsider is present.

This isn’t a matter of each individual managing his own data (even if the data is available to manage), but understanding that there is a social situation that has different rules than other social situations, rules which apply to all the individuals.

In-home systems have no understanding of such rules or what to do about them, nor are there any means for humans to manage what is observed.
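Just to make the gap concrete, here is a hypothetical Swift sketch of what a group-level rule might even look like.  Nothing like this exists in current products, and every name in it is invented.

```swift
// Hypothetical: an interpersonal-data rule that belongs to the household as a
// group, not to any single user's account, and that only takes effect once
// everyone it covers has agreed to it.
struct HouseholdRule {
    let coveredMembers: Set<String>       // everyone the rule applies to
    let context: String                   // e.g. "guests present", "dinner table"
    let allowedObservations: Set<String>  // e.g. ["thermostat"], but not ["microphone"]
    var agreedBy: Set<String> = []

    var isInForce: Bool { agreedBy == coveredMembers }
}
```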

Their paper makes the interesting point that this stems from the basic architecture of these in-home systems:

“The logic of this project – directing information, and so agency, from the outer edges of the network towards the core – is one of centralisation. The algorithms may run locally, but the agency invested in them originates elsewhere in the efforts of the software engineers who designed them.” ([3], p.2)

In short, the arrogant engineers and business managers don’t even understand the magnitude of their ignorance.

I have remarked that many products of Silicon Valley are designed to solve the problems that app developers understand and care about. The first apps were pizza ordering services, music downloads, and dating services. There are endless variations on these themes, and they are all set in the social world of a young, single, worker (with disposable income).

For more than two decades, “smart home” systems have been designing robot butlers that will adjust the “settings” to the “user’s preferences”.  I have frequently questioned how these systems work when there is more than one user, i.e., when two or more people live together.  The lights can’t be perfectly adjusted to everyone, only one “soundtrack” can play at a time, etc.  No one has an answer; the question isn’t even considered.

I will say again that no one with any experience or common sense would ever put a voice-activated, internet-connected device in a house with children, let alone a system that is happy to just buy things if you tell it to.  Setting aside the mischief kids will do with such capabilities, what sort of moral lesson are you teaching a young child when the house seems to respond instantly to whatever they command?

Goulden doesn’t seem to have any solutions in mind.  He does suggest that there need to be ways for groups of people to “negotiate” the rules of what should be observed and revealed.  This requires that the systems be transparent enough that we know what is being observed, and that there be ways to control the behavior.

These issues have been known and studied for many years (just as a for-instance, take a gander at research from the old Georgia Tech “Aware Home” project from the 1990s, e.g., [1]), but the start-up crowd doesn’t know or care about academic research–who has time to check out the relevant literature?

Goulden points out that if these technologies are really obnoxious, then people will reject them.  And, given that many of the “features” are hardly needed, people won’t find it hard to turn them off.

“Their current approach – to ride roughshod over the social terrain of the home – is not a sustainable approach. Unless and until the day we have AI systems capable of comprehending human social worlds, it may be that the smart home promised to us ends up being a lot more limited than its backers imagine.” [2]


  1. Anind K. Dey  and Gregory D. Abowd, Toward a Better Understanding of Context and Context-Awareness. GIT GVU Technical Report GIT-GVU-99-22, 1999. http://www.socs.uoguelph.ca/~qmahmoud/teaching/fall2006/pervasive/context-intro.pdf
  2. Murray Goulden, Your smart home is trying to reprogram you in The Conversation. 2017. https://theconversation.com/your-smart-home-is-trying-to-reprogram-you-78572
  3. Murray Goulden, Peter Tolmie, Richard Mortier, Tom Lodge, Anna-Kaisa Pietilainen, and Renata Teixeira, Living with interpersonal data: Observability and accountability in the age of pervasive ICT. New Media & Society: 1461444817700154, 2017. http://dx.doi.org/10.1177/1461444817700154

 

Study of Apps to Keep Teens Safe

We are in the midst of a giant, unplanned, and uncontrolled social experiment: a wave of young people are growing up immersed in ubiquitous network connectivity. In the Silicon Valley spirit of “just do it”, we have thrown everyone, including kids and teenagers, into the maelstrom of the Internet, with no safety net or signposts. Given that even grown-ups behave like idiot adolescents on the web, it doesn’t seem like the greatest environment for growing up.

Needless to say, there is a lot of parental anxiety about all this. And where there is anxiety, capitalism finds a market. Apps to the rescue!

Pamela Wisniewski and colleagues reported last week on a study of dozens of apps that are generally aimed at “adolescent online safety” [2]. One of their important contributions is a systematic definition of the types of strategies that may be developed. There are non-technical strategies (e.g., rulemaking), but the study focuses on technical strategies, i.e., the services provided by supposed “safety” apps.

They describe two kinds of strategy, “parental control” and “teen self-regulation”. Parental control strategies are monitoring, restriction, and active mediation. Teen self-regulation strategies are self-awareness, impulse control, and risk-coping. In addition, apps might have an informational or teaching approach.
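The taxonomy is crisp enough to write down directly.  Here it is rendered as Swift types; the labels are the authors’, the code shape is mine.

```swift
// The strategies a "safety" app can implement, per Wisniewski et al. [2].
enum ParentalControl    { case monitoring, restriction, activeMediation }
enum TeenSelfRegulation { case selfAwareness, impulseControl, riskCoping }

enum SafetyStrategy {
    case parental(ParentalControl)
    case teen(TeenSelfRegulation)
    case instructive                      // informational / teaching approach
}

// Classifying an app is then just tagging it with the strategies it supports.
// Per the study, the typical app on the market looks like this:
let typicalApp: [SafetyStrategy] = [.parental(.monitoring), .parental(.restriction)]
```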

It should be clear that all these strategies have a role and value, but I think everyone agrees that “growing up” almost certainly means moving to self-regulation.

The heart of the study is an analysis of some 75 apps, classifying the strategies enabled by each. These apps mostly run in the background, to monitor and report activity on the mobile device. In short, spyware.

The results are clear as day: almost all of the “safety” apps are designed to monitor and restrict the online behavior of teens. Presumably, these features implement parental controls, not self-control. This study could not evaluate the effectiveness of these apps, or compare different strategies. But it is very clear that “the market” is delivering only a few of the possible strategies.

The researchers point out some conceptual weaknesses in the technical strategies employed by the apps.

“These features weren’t helping parents actually mediate what their teens are doing online,” said Wisniewski. “They weren’t enhancing communication, or helping a teen become more self-aware of his or her behavior.” (quoted in [1])

Many of them operate as covert and pretty hostile spyware, which might be what parents want, but has all sorts of possible side effects. “the features offered by these apps generally did not promote values, such as trust, accountability, respect, and transparency, that are often associated with more positive family values” (p. 60)

“Simply put, the values embedded within these apps were incongruent with how many parents of teens want to parent.” (p. 60)


To me, it is clear that these apps suffer not just from wrongheaded thinking (i.e., taking the parent as the target customer rather than the teen, or better yet, the family), but also from the affordances of the app-verse.

The Internet delivers two things very well: instant gratification (“click here”), and surveillance. Delivering self-regulation is much, much harder than delivering “swipe right”.

The Internet is all about surveillance. The wealthiest companies in the world are basically in the business of monitoring everybody as much as possible. Putting this technology in the hands of parents is literally a no-brainer.  And the lack of brain effort shows.

From this point of view, the landscape documented by Wisniewski et al. is completely unsurprising. (This is also another indication that market forces are hardly the secret to good design. The market creates a hundred thousand health apps and hundreds of dating apps, because people want them, even if they don’t actually do any good.)


The researchers point out that there is an opportunity for much better design here. Some of the few apps that focus on self-regulation use the same surveillance techniques, but for self-reporting and self-regulating. Can we design better ways to deliver self-awareness and, if desired, self-limitation? For example, everyone might benefit from a way to put an extra latch on some links, just to slow down and maybe not go there so often. An “are you sure” filter, to gently deter overdoing it.
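Here is a purely hypothetical Swift sketch of what such an “are you sure” filter might look like: the teen, not the parent, picks the sites that get extra friction, and nothing is reported to anyone.

```swift
// Hypothetical self-regulation helper: gentle friction, no surveillance.
struct FrictionGate {
    var flaggedHosts: Set<String>          // chosen by the teen themselves
    var visitsToday: [String: Int] = [:]

    // Returns true when the app should pause and ask "are you sure?"
    mutating func shouldAskFirst(host: String) -> Bool {
        guard flaggedHosts.contains(host) else { return false }
        visitsToday[host, default: 0] += 1
        return visitsToday[host, default: 0] > 3   // only after the first few visits
    }
}
```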

Another opportunity would be clever forms of communication within the family. I can think of some interesting technical challenges here. Obviously, there is a need to create trust, and, in my opinion, unilateral spying is not a good way to do that. But there is a need to share information and context, and a mobile device is uniquely suited to do that. So can we make a sort of “family Snapchat”, an app that, with mutual consent, shares a pretty comprehensive view of what’s going on?

For that matter, it would be nice to be able to easily share specific messages and threads, without having to necessarily share everything. Or tracking certain actions and not others. (For instance, I should be able to gossip with my best friends without CCing Mom, but perhaps software might flag when strangers are talking to me.)

This is a very useful paper.


  1. Matt Swayne, Online security apps focus on parental control, not teen self-regulation, in Penn State News. 2017. http://news.psu.edu/story/452954/2017/02/27/research/online-security-apps-focus-parental-control-not-teen-self
  2. Pamela Wisniewski, Arup Kumar Ghosh, Heng Xu, Mary Beth Rosson, and John M. Carroll, Parental Control vs. Teen Self-Regulation: Is there a middle ground for mobile online safety?, in Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing. 2017, ACM: Portland, Oregon, USA. p. 51-69. http://dl.acm.org/citation.cfm?id=2998352

 

Health Apps Are Potentially Dangerous

The “Inappropriate Touch Screen Files” has documented many cases of poor design of mobile and wearable apps, and I have pointed out more than once the bogosity of unvalidated cargo cult environment sensing.

This month Eliza Strickland writes in IEEE Spectrum about an even more troubling ramification of these bad designs and pseudoscientific claims: “How Mobile Health Apps and Wearables Could Actually Make People Sicker” [2].

 Strickland comments that the “quantified self” craze has produced hundreds of thousands of mobile apps to track exercise, sleep, and personal health. These apps collect and report data, with the goal of detecting problems early and optimizing exercise, diet, and other behaviors. Other apps monitor the environment, providing data on pollution and micro climate. (And yet others track data such as hair brushing techniques.)

These products are supposed to “provide useful streams of health data that will empower consumers to make better decisions and live healthier lives”.

But, Strickland says, “the flood of information can have the opposite effect by overwhelming consumers with information that may not be accurate or useful.”

She quotes David Jamison of the ECRI Institute, who comments that many of these apps are not regulated as medical devices, so they have not been tested to show that they are safe and effective.

Jamison is one of the authors of an opinion piece in JAMA, “The Emerging Market of Smartphone-Integrated Infant Physiologic Monitors” [1]. In this article, the authors strongly criticize the sales of monitoring systems aimed at infants, on two grounds.

First, the devices have not been proven accurate, safe, or effective for any purpose, let alone the advertised aid to parents. Second, even if the devices do work, there is considerable danger of overdiagnosis. If a transient and harmless event is detected, it may trigger serious actions such as an emergency room visit. If nothing else, this will cause needless anxiety for parents.

I have pointed out the same kind of danger from DIY environmental sensing: if misinterpreted, a flood of data may produce either misplaced anxiety about harmless background level events or misplaced confidence that there is no danger if the particular sensor does not detect any threat.
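A back-of-the-envelope illustration of why more data can mean more false alarms (a Swift sketch, all numbers invented): a monitor that samples continuously will alarm routinely even when each individual reading is very reliable.

```swift
// Invented numbers, purely to show the base-rate arithmetic.
let readingsPerNight = 8.0 * 60.0 * 60.0      // one reading per second, for 8 hours
let falseAlarmPerReading = 0.0001             // a seemingly tiny per-reading error rate
let expectedFalseAlarms = readingsPerNight * falseAlarmPerReading
print(expectedFalseAlarms)                    // about 2.9 spurious alerts, every night
```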

An important design question in these cases is, “is this product good for the patient (or user)”?  More data is not better, if you don’t know how to interpret it.

This is becoming even more important than the “inappropriateness” of touchscreen interfaces:  the flood of cargo cult sensing in the guise of “quantified self” is not only junk, it is potentially dangerous.


  1. Christopher P. Bonafide, David T. Jamison, and Elizabeth E. Foglia, The Emerging Market of Smartphone-Integrated Infant Physiologic Monitors. JAMA: Journal of the American Medical Association, 317 (4):353-354, 2017. http://jamanetwork.com/journals/jama/article-abstract/2598780
  2. Eliza Strickland, How Mobile Health Apps and Wearables Could Actually Make People Sicker, in The Human OS. 2017, IEEE Spectrum. http://spectrum.ieee.org/the-human-os/biomedical/devices/how-mobile-health-apps-and-wearables-could-actually-make-people-sicker