Category Archives: mobile apps

ReThink app: Count to Ten Before You Send

Speaking of technical fixes….

I saw a story at the BBC about an app called ReThink, which “stops cyberbullying before it starts” [1].  The idea is to use a substitute keypad that analyses what is typed, tries to spot problematic messages, and pops up a prompt asking if you want to rethink what you typed.

ReThink

This is a pretty nice idea. Many of us have caught ourselves, counted to ten, and erased a message.  And who hasn’t wished they had had a second chance to not send that message?  ReThink is a “count to ten” keyboard—which is surely a good idea for anyone, not just kids.
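The core mechanism is simple enough to sketch. Here is a minimal toy version, assuming a plain keyword blocklist; ReThink’s actual classifier is not described in the press coverage, so the words and prompt below are invented for illustration:

```python
# Toy sketch of a "count to ten" message filter. ReThink's actual
# classifier is not public; this hypothetical version just checks a
# keyword blocklist before letting a message go out.
FLAGGED_WORDS = {"stupid", "loser", "hate"}  # invented examples

def should_rethink(message: str) -> bool:
    """True if the message contains a flagged word."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return bool(words & FLAGGED_WORDS)

def send(message: str) -> str:
    """Intercept before sending, like a substitute keyboard would."""
    if should_rethink(message):
        return "Are you sure you want to send this?"
    return "sent"
```

A real product would obviously need something far smarter than word matching, which is exactly why the empirical questions below matter.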

So, I was curious to see just how real this is.

The BBC story and other press coverage are all about how great it is for a kid to be creating a tech solution to a real-world problem, mentioning a patent, awards, pitches to Shark Tank, etc.  It’s a great narrative, for sure.

But what about the app?

I was able to find and download the app, which is very good. (You have no idea how many great ideas I’ve read about that don’t even exist.)

Unfortunately, ReThink crashed the first time I used it.  Perhaps my ancient Android is too old.  Or perhaps my “deny all permissions” environment breaks it.  I dunno.  But the result is I couldn’t actually see it work for me.  (Software is hard, it’s a miracle when it actually does work.)

Looking at the web page, there is lots of talk about the problem to solve (bullying), but no evidence that this app has any impact at all on such bullying.

There is reference to one “study”, but no citation to any published report of any kind.  The main result claimed is that users “change their mind 93% of the time” and choose different words.

That sounds fine, though we have no way to know what that means, how large the sample was, or what the situation was.  For that matter, what is a reasonable comparison or control group?

There are lots of other unanswered empirical questions.

Does this effect persist?  Do users habituate and stop paying attention to the app?  Perhaps they get tired of it second guessing them, and turn it off. Or maybe they actually learn, and become more careful over time—making the app unnecessary.

I assume that the canonical case is for everyone in the social network to be using the app, rather than some using it and some not.  But what if one side of the conversation is unmoderated and the other is moderated?  What happens then?  I.e., if the other guy is blazing away at you, do you still accept ReThink’s recommendations?

Even if this app is highly effective at moderating thoughtlessly mean text messages, how is that connected with bullying?  Some (most?) bullying is intentional, not accidental.  Maybe the 7% of messages that aren’t rethought are the worst kind of bullying, and the 93% were just rudeness.  If so, then the app would have no effect at all on bullying.

Worst of all, text messages are hardly the only channel for bullying. (Believe me, we were bullied at school long before the Internet.)  In fact, even as far as mobile devices, this app may already be out of date, as voice and video messages gain prominence.


All this isn’t to say that this isn’t a clever app.  I might call it a text messaging helper, more than a fix for bullying.

It is unfortunate that there is no actual evidence available that it works.

And it is unfortunate that no one seems to think it is necessary or important to prove that it is safe and effective before leaping into marketing and storytelling about how beneficial it is.

Is this what we should be teaching our kids?


  1. BBC, How one teen’s app stops cyberbullying before it starts, in BBC Capital, March 28, 2018. http://www.bbc.com/capital/story/20180328-how-one-teens-app-stops-cyberbullying-before-it-starts


Biometric Authentication for Mobile Devices

Alina N. Filina and Konstantin G. Kogos, from National Research Nuclear University, Moscow, report a method for continuous authentication to control access to a mobile device [1].  They propose to use non-invasive behavioral biometrics to verify the person holding the device.

“Continuous authentication allows you to grant rights to the user, without requiring from him any unusual activities.” ([1], p. 69)

The basic idea is to use the sensors on the device to detect gestures, and use machine learning to identify a unique, individual “signature”.  This is used in combination with other context (e.g., whether the network is trusted or not), to detect when the correct person is holding the device.

Continuous authentication is a great idea, and some kind of biometrics might be useful to achieve this.

But I have doubts about F&K’s approach.

First, I have to wonder if the method can be accurate enough to be practical.  Machine-learning-based recognition always has some percentage of false positives and false negatives.  In this application, the former would grant access when it shouldn’t, and the latter would block access to the authorized user. This is particularly problematic in this continuous authentication scenario, which repeatedly tests your identity. Imagine the inconvenience of your device locking you out every so often simply because the recognizer has a 1% chance of false rejection and runs a check every few minutes.
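Those small per-check error rates compound quickly: with a per-check false-rejection rate p, the chance of at least one spurious lockout across n checks is 1 − (1 − p)^n. A quick back-of-the-envelope sketch (the 1% rate and once-a-minute check interval are my assumptions, not figures from the paper):

```python
def p_lockout(p_reject: float, n_checks: int) -> float:
    """Probability of at least one false rejection in n independent checks."""
    return 1.0 - (1.0 - p_reject) ** n_checks

# A 1% false-rejection rate, checked once a minute, gives roughly a
# 45% chance of at least one spurious lockout per hour of use.
hourly = p_lockout(0.01, 60)
```

Even a recognizer that sounds accurate on paper would lock the legitimate owner out several times a day at this rate.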

Second, the supposedly unobtrusive behaviors used to recognize the person require active interaction. The researchers point out the need to detect context such as setting the device on a table, which produces no motion, idiomatic or not.  This case and others should not lock out the user.

The general point about using active behaviors is that, in order to be unobtrusive, the training samples should be selected from the user’s “common” or “normal” behavior.  And to support continuous checking, the training samples must cover an array of behaviors spanning a substantial proportion of normal use.  It is not clear to me how to identify and capture such training samples.

Third, this method is vulnerable to changes in user behavior.  If the user enters a new environment or begins a new activity, will his phone lock him out?  There is also a problem if the user is injured or incapacitated.  For example, if the user is hurt, his movements may be altered, which could cause the device to lock him out.  (This is especially problematic should the user be prevented from calling for medical assistance because his device doesn’t recognize him.)

I would think that the sample behaviors used to authenticate should be difficult to mimic. The method rests on the assumption that users can be distinguished with high probability.  The current study does not explore how effectively the method discriminates users, or possible imitation or replay attacks.  (I note that a robot might be used to generate replays.)

I’ll also point out that this method requires that all the sensors and data be continuously collected.  This is an immense amount of trust to place in the device, and an invitation to intrusive tracking.  This might be appropriate for high security environments which are already heavily monitored, but less desirable for broad consumer use.


This is an interesting study, but I think it needs a lot more work to show that it is really practical.


  1. Alina N. Filina and Konstantin G. Kogos. 2017. “Mobile authentication over hand-waving.” 2017 International Conference “Quality Management, Transport and Information Security, Information Technologies” (IT&QM&IS), 24-30 Sept. 2017. http://ieeexplore.ieee.org/document/8085764/

CertiKOS – This is what the IoT should be

Now and again I have criticized the software architecture of contemporary consumer electronics, including mobile devices, home automation, and the Internet of Things.

In particular, in response to the never-ending stream of security oopsies, I have said that it is a mistake to use a general-purpose operating system on these extremely exposed and unmanaged devices.  Realistically, there is no way to make complex systems such as Linux, Android OS or iOS secure, and once breached, the whole system is exposed (and worse, can be repurposed).

It is easy to say what not to do, Bob, but how do you do it right?


Secure operating systems are very, very hard to make, and every cunning advance in hardware (multicore, multiple busses, complex memory hierarchies, integrated networking, etc.) makes it all the harder.  Furthermore, creating a provably secure OS has to be fast and cost-effective, because there are so many devices coming out.

One good example of what I’m talking about is CertiKOS from Yale.  As in any secure system, the logic is formally verified to help assure that it does what it is supposed to and only what it is supposed to.  Importantly, their system is composable, designed to be able to build up a system from components, and keep it kosher [1].

Phew!  This stuff makes my brain hurt to try to follow it.  But this is what needs to be done.

What I would like to see is to use something like this to build “certified” software for each and every IoT device and home automation gadget.  In particular, I want the IoT device to do only the limited things it is supposed to do, and simply not be able to do anything else.

A “smart light bulb” should only do what a SLB should do. Which, honestly, isn’t much.  Hack into my light bulb?  Even if you can, who cares?  It can’t do anything.
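The point can be made concrete with a toy command interface: a device whose entire external surface is a fixed, enumerated set of operations, with everything else refused. (The class and commands below are invented for illustration; CertiKOS itself is a verified kernel, not this API.)

```python
# Toy least-privilege device: the entire external interface is this
# whitelist. Anything outside it is refused outright -- no shell,
# no eval, no firmware-update-over-HTTP.
class SmartLightBulb:
    ALLOWED = {"on", "off", "brightness"}

    def __init__(self):
        self.is_on = False
        self.brightness = 100

    def handle(self, command: str, arg: int = 0) -> str:
        if command not in self.ALLOWED:
            return "rejected"  # unexpected requests simply fail
        if command == "on":
            self.is_on = True
        elif command == "off":
            self.is_on = False
        else:  # brightness, clamped to a sane range
            self.brightness = max(0, min(100, arg))
        return "ok"
```

A formally verified kernel is what would let us trust that the device really cannot do anything outside that whitelist, rather than merely promising not to.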

OK, I know this isn’t the whole story.  Securing networks of devices is more complicated, since there will still be problems, such as fiddling with traffic and loads.

It is also true that it will be very challenging to create a device that can load endless third-party apps, like your mobile phone can today.  But, that’s probably a good thing to cut back on, anyway.

This kind of modular device can probably help privacy as well.  How many times do we have to learn about some app that is surreptitiously spying on users?  If a device can’t spy on you, then you don’t need to worry so much about who is subverting your pocket or your living room.

Obviously, this kind of privacy is hard.  A “secure” Alexa (Salexa?) would have components designed to do exactly, and only, what is needed. That would require designing the functions in advance, which would require understanding how people live, and designing to meet those needs.

This contrasts with current methodology, which releases a powerful, general-purpose platform and loads whatever combination of apps the user buys.  And in many cases, the device really knows very little about what should be done, and surveils the user to “learn” their behavior and preferences.  This approach both invades privacy (by design) and exposes the user to huge risks if the system is infiltrated by enemies (not by design).


I think the bottom line is that secure kernels are necessary but not sufficient to make networked systems more secure.  Security and privacy are end-to-end problems, and only as good as the weakest link. But making components stronger is a good start.

The cool thing about the Yale work is that they are building tools and processes for building and composing components.  This is a crucial technology for building systems, if not the complete solution.

Nice work.


  1. Ronghui Gu, Zhong Shao, Hao Chen, Xiongnan (Newman) Wu, Jieung Kim, Vilhelm Sjöberg, and David Costanzo, CertiKOS: An Extensible Architecture for Building Certified Concurrent OS Kernels, in USENIX Symposium on Operating Systems Design and Implementation. 2016. p. 653-669. http://flint.cs.yale.edu/flint/publications/certikos.pdf

Alex Wright on Smartphone science

Alex Wright writes in the Communications of the ACM about the emergence of many new scientific instruments, built out of ubiquitous smartphones.

A contemporary smartphone has excellent digital cameras, microphones, network connections, and a decent display. From these ingredients, people are building apps to do real science. These devices have more computing power and bandwidth than twentieth-century supercomputers, and it costs very little to develop and deploy apps.

Wright’s article makes clear that there is much more than just computing power going on here. These devices have ignited a burst of innovation, developing new ways to tackle sensing and measurement problems.

Smartphones can be augmented (e.g., to focus on tissue samples), and thereby create a very inexpensive alternative to expensive microscopes or spectrographs. Many standard algorithms will run on a phone, and the inexpensive platform has encouraged new algorithms.

Wright barely scratches the surface here.  There are apps for a variety of environmental sensing tasks (DNA sequencing (!), microbe assays, weather, air pollution, particulates, odors, earthquake detection, food quality, detecting poachers) and wildlife observations (pollinators, bird watching, bird song, insect song).  There are also apps that are just plain not needed (garden conditions, hair brushes, brain computer interfaces).

“Looking ahead, the next wave of experimentation may have less to do with the instruments themselves and more to do with finding the right pathways to market.” ([1], p. 20)

Initial interest centers on potential money making opportunities, especially biomedical research and medical diagnostics.  But these devices and apps are likely to be available to lay people, for personal health monitoring and citizen science.

It is good when these devices are being developed to rigorous scientific and medical standards. There is a huge difference between a smartphone app that sort of, almost, analyses blood chemistry, and one that generates reliable and valid results. Beyond that, interpreting the results requires actual understanding of what is measured, the limits of the instrument, and what it means.  That’s going to be hard.

I have expressed misgivings about the use of such devices by citizen scientists or the general public.  As I have remarked before, simply collecting data is not actually that useful scientifically. It also invites misguided pseudoscience, if data is not carefully analyzed or is misinterpreted.  And collecting fancy data will only make the wild world of health apps all that much more dangerous.

This will be a major challenge for designers: how to create powerful tools that are safe and effective in the hands of a non-specialist.  The model shouldn’t be lab or hospital equipment, but something like a home thermometer: useful in a foolproof way.


  1. Alex Wright, Smartphone science. Communications of the ACM, 61 (1):18-20, January 2018. https://cacm.acm.org/magazines/2018/1/223882-smartphone-science/fulltext


Listening for Mosquitos

The ubiquitous mobile phone has opened many possibilities for citizen science. With most citizens equipped with a phone, and many carrying small supercomputers in their purse or pocket, it is easier than ever to collect data from wherever humans may be.

These devices are increasing the range of field studies, enabling the identification of plants and animals by sight and sound.

One key, of course, is the microphones and cameras. Sold to be used for deals and dating, not to mention selfies, these instruments are outstripping what scientists can afford.

The other key is that mobile devices are connected to the Internet, so data uploads are trivial. This technology is sold for commerce and dating and for sharing selfies, but it is perfect for collecting time and location stamped data.

In short, the vanity of youngsters has funded infrastructure that is better than scientists have ever built. Sigh.


Anyway.

This fall the Stanford citizen science folks are talking about yet another crowd-sourced data collection: a project called Abuzz that identifies mosquitos by their buzz [1].

According to the information, Abuzz works on most phones, including older flip phones (AKA, non-smart phones).

It took me a while to figure out that Abuzz isn’t an app at all. It is a manual process. Old style.

You use the digital recording feature on your phone to record a mosquito. Then you upload that file to their web site. This seems to be a manual process, and I guess that we’re supposed to know how to save and upload sound files.

The uploaded files are analyzed to identify the species of mosquito. There are thousands of species, but the training data emphasized the important, disease bearing species we are most interested in knowing about.

A recent paper reports the details of the analysis techniques [2]. First of all, mobile phone microphones pick up mosquito sounds just fine. As we all know, the whiny buzz of those varmints is right there in human hearing range, so it’s logical that telephones tuned to human speech would hear mosquitos just fine.

The research indicates that the microphone is good at a range of up to 100 mm. This is pretty much what you would expect for a handheld phone. So, you are going to have to hold the phone up to the mosquito, just like you would pass it to a friend to say hello.

At the crux of the matter, they were able to distinguish different mosquitos from recordings made by phone. Different species of mosquito have distinct sounds from their wing beats, and the research showed that they can detect the differences from these recordings.
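The core signal-processing step is straightforward to sketch: wingbeats show up as a dominant frequency in the recording’s spectrum. Here is a minimal version using a plain FFT; the paper’s actual pipeline is more sophisticated, and the 600 Hz test tone below is a synthetic stand-in, not a real species’ wingbeat:

```python
import numpy as np

def wingbeat_frequency(signal: np.ndarray, sample_rate: int) -> float:
    """Estimate the dominant frequency (Hz) of a mono recording."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    mask = freqs >= 100.0  # ignore DC and low-frequency rumble
    return float(freqs[mask][np.argmax(spectrum[mask])])

# Synthetic stand-in for a phone recording: a 600 Hz "buzz" at 8 kHz.
sr = 8000
t = np.arange(sr) / sr  # one second of samples
buzz = np.sin(2 * np.pi * 600.0 * t)
```

Real recordings would of course be noisy, so distinguishing species whose wingbeat frequencies overlap is where the harder statistical work comes in.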

They also use the time and location metadata to help identify the species. For example, the geographic region narrows down the species that are likely to be encountered.

The overall result is that it should be possible to get information about mosquito distributions from cell phone recordings provided by anyone who participates. This may contribute to preventing disease, or at least alerting the public to the current risks.


This project is pretty conservative, which is an advantage and a disadvantage. The low tech data collection is great, especially since the most interesting targets for surveillance are likely to be out in the bush, where the latest iPhones will be thin on the ground.

On the other hand, the lack of an app or a plug in to popular social platforms means that the citizen scientists have to invest more work, and get less instant gratification. This may reduce participation. Obviously, it would be possible to make a simple app, so that those with smart phones have an even simpler way to capture and upload data.

Anyway, it is clear that the researchers understand this issue. The web site is mostly instructions and video tutorials, featuring encouraging invitations from nice scientists. (OK, I thought the comment that “I would love to see is people really thinking hard about the biology of these complex animals” was a bit much.)

I haven’t actually tried to submit data yet. (It’s winter here; the skeeters are gone until spring.) I’m not really sure what kind of feedback you get. It would be really cool to email back a rapid report (i.e., within 24 hours). It should give the initial identification from your data (or possibly ‘there were problems, we’ll have to look at it’), along with overall statistics to put your data in context (e.g., we’re getting a lot of reports of Aedes aegypti in your part of Africa).

To do this, you’d need to automate the data analysis, which would be a lot of work, but certainly is doable.


I’ll note that this particular data collection is something that cannot be done by UAVs. Drones are, well, too droney. Even if you could chase mosquitos, it would be difficult to record them over the darn propellers. (I won’t say impossible—sound processing can do amazing things).

I’ll also note that this research method wins points for being non-invasive. No mosquitos were harmed in this experiment. (Well, they were probably swatted, but the experiment itself was harmless.) This is actually important, because you don’t want mosquitos to selectively adapt to evade the surveillance.


  1. Taylor Kubota, Stanford researchers seek citizen scientists to contribute to worldwide mosquito tracking, in Stanford – News. 2017. https://news.stanford.edu/2017/10/31/tracking-mosquitoes-cellphone/
  2. Haripriya Mukundarajan, Felix Jan Hein Hol, Erica Araceli Castillo, and Cooper Newby, Using mobile phones as acoustic sensors for high-throughput mosquito surveillance. eLife, October 11, 2017. doi: 10.7554/eLife.27854. https://elifesciences.org/articles/27854#info

Ad Servers Are—Wait For It–Evil

The contemporary Internet never ceases to serve up jaw-dropping technocapitalist assaults on humanity. From dating apps through vicious anti-social media, the commercial Internet is predatory, amoral, and sickening.

This month, Paul Vines and colleagues at the University of Washington report on yet another travesty—“ADINT: Using Ad Targeting for Surveillance” [1].

Online advertising is already evil (you can tell by their outrage at people who block out their trash), but advertisers are also completely careless of the welfare of their helpless prey. Seeking more and more “targeted” advertising, these parasites utilize tracking IDs on every mobile device to track every one of us. There is no opt-in or opt-out; we are not even informed.

The business model is to sell this information to advertisers who want to target people with certain interests.  The more specific the information, the higher the bids from the advertiser.  Individual ID is  combined with location information to serve up advertisements in particular physical locations. The “smart city” is thus converted into “the spam city”.

Vines and company have demonstrated that it is not especially difficult to purchase advertising aimed at exactly one person (device). Coupled with location specific information, the ad essentially reports the location and activity of the target person.

Without knowledge or permission.

As they point out, setting up a grid of these ads can track a person’s movement throughout a city.
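The mechanics are simple enough to sketch: buy one hyper-targeted ad per grid cell of the city, and each ad-serve event becomes a (time, place) report on the target. The event and cell formats below are invented for illustration; the real attack works through ordinary ad-network purchasing interfaces:

```python
# Toy ADINT-style track reconstruction: each geo-fenced ad that gets
# served reports which grid cell the target's device was in, and when.
def reconstruct_track(ad_events, ad_to_cell):
    """ad_events: (timestamp, ad_id) pairs; ad_to_cell: ad_id -> grid cell."""
    return [(t, ad_to_cell[ad]) for t, ad in sorted(ad_events)]

ads = {"ad_home": (0, 0), "ad_cafe": (2, 3), "ad_gym": (5, 1)}
events = [(900, "ad_gym"), (100, "ad_home"), (480, "ad_cafe")]
track = reconstruct_track(events, ads)
```

The chilling part is that the attacker writes no malware at all; the ad infrastructure does all the sensing and reporting for them.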

This is not some secret spyware, or really clever data mining. The service is provided to anyone for a fee (they estimate $1,000). Thieves, predators, disgruntled exes, trolls, the teens next door: anyone can stalk you.

The researchers suggest some countermeasures, though they aren’t terribly reassuring to me.

Obviously, advertisers shouldn’t do this. I.e., they should not sell ads that are so specific they identify a single person. At the very least, it should be difficult and expensive to filter down to one device. Personally, I wouldn’t rely on industry self-regulation, I think we need good old fashioned government intervention here.

Second, they suggest turning off location tracking (if you are foolish enough to still have it on), and zapping your MAID (the advertising ID). It’s not clear to me that either of these steps actually works, since advertisers track location without permission, and I simply don’t believe that denying permission will have any effect on these amoral blood suckers. They’ll ignore the settings or create new IDs not covered by the settings.

Sigh.

I guess the next step is a letter to the State’s Attorney and representatives. I’m sure public officials will understand why it’s not so cool to have stalkers able to track them or their family through online adverts.


  1. Paul Vines, Franziska Roesner, and Tadayoshi Kohno, Exploring ADINT: Using Ad Targeting for Surveillance on a Budget — or — How Alice Can Buy Ads to Track Bob. Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, 2017. http://adint.cs.washington.edu/ADINT.pdf

Database of App UI Designs

This month Ranjitha Kumar and colleagues report on ‘Rico’, a large dataset of UIs from published Android apps [1]. The dataset comes with tools to search the data for similar apps, and to use the data to autogenerate app code following ‘best practice’ as determined by the sample. Ideally, this can aid designers in finding examples to guide development.

The data itself was collected from apps in the Android app store (which provides metadata, too). Screens and sequences of interactions were collected through a hybrid of crowdsourced (human) and automated interaction.

The data was processed to extract the UI elements underlying each screen, a sampled set of interaction paths, and animations of transitions. The visual appearance is encoded in a 75-dimensional vector, which is used for searching and generating screens.

This approach lets a designer search by example, to find other UIs that are similar. Or a designer can sketch a UI, and find others that suggest ‘the rest’ of the elements for the screen, based on similar apps.
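Search-by-example over a vector encoding is essentially nearest-neighbor lookup. A minimal sketch; Rico uses a learned 75-dimensional embedding, and the random vectors here are stand-ins for it:

```python
import numpy as np

def nearest_designs(query: np.ndarray, corpus: np.ndarray, k: int = 5):
    """Indices of the k corpus vectors closest to the query (Euclidean)."""
    dists = np.linalg.norm(corpus - query, axis=1)
    return np.argsort(dists)[:k]

rng = np.random.default_rng(0)
corpus = rng.normal(size=(1000, 75))             # stand-ins for Rico's encodings
query = corpus[42] + 0.01 * rng.normal(size=75)  # a "sketch" near design 42
```

At Rico's scale (tens of thousands of screens) this brute-force scan is still cheap, which is part of why a compact vector encoding is such a practical design choice.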

The information encoded in this dataset is a large sample of current designs, encapsulating something about current practice. The paper calls this ‘best practice’, though it actually is just ‘common’ practice, not necessarily ‘best’.

It would be natural to link this dataset with empirical data about the quality of the product, e.g., user satisfaction, number of downloads, or revenue. Then, it would be possible to rank the instances and find the actual best practices.

The data is a snapshot of current practice, and it took a lot of effort to gather. The authors would like to improve the data-gathering process so they can continuously update the dataset with new and upgraded apps. If they can indeed collect data over time, they could create a dataset of historical trends in app design. This could reveal changes over time, both functional and esthetic. And it might be possible to observe UI ‘fads’ emerge and spread throughout the population of apps. That would be neat!

The project ultimately  aims to develop tools that help designers, e.g., to autogenerate code based on sketches and the knowledge encoded in the tool and dataset.

I’m a little concerned that this tool might be basically just copying what other people have done—leading designers toward the average. This may be fast and cheap, but it is no way to create outstanding products.  In my view, apps are already too similar to each other, due to the use of ubiquitous infrastructure such as standard cloud services APIs and other toolkits.

But this kind of data might actually be used to search for novel solutions. For example, the encoded designs might be used in the fashion of a genetic algorithm. A design is encoded, then the encoding is mutated and new designs generated. Or the encodings might be mixed or crossed with each other, generating a ‘mash up’ of two designs.  Many such mutations would not be viable, but you could generate lots of them and select the best few.  (I.e., evolutionary design.)
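That loop is easy to sketch over the 75-dimensional encodings: mutate by adding noise, cross by mixing coordinates, then keep whichever offspring score best. (The fitness function is a placeholder here; defining a real one, e.g. from user-satisfaction data, is the hard part.)

```python
import numpy as np

rng = np.random.default_rng(1)

def mutate(design: np.ndarray, scale: float = 0.1) -> np.ndarray:
    """Perturb a design vector with Gaussian noise."""
    return design + rng.normal(scale=scale, size=design.shape)

def crossover(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Mix two design vectors coordinate-by-coordinate at random."""
    mask = rng.random(a.shape) < 0.5
    return np.where(mask, a, b)

def evolve(population, fitness, n_keep=2, n_offspring=20):
    """One generation: breed random pairs, keep the fittest offspring."""
    children = []
    for _ in range(n_offspring):
        i, j = rng.integers(len(population), size=2)
        children.append(mutate(crossover(population[i], population[j])))
    children.sort(key=fitness, reverse=True)
    return children[:n_keep]
```

Most generated encodings would decode to unusable screens, so the selection step (and a decoder back to real UI layouts) carries most of the weight in practice.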

I don’t know how well this would work in this case, but the idea would be to search through the gaps in current practice, and to optimize current designs. Now that would be kind of cool!


  1. Biplab Deka, Zifeng Huang, Chad Franzen, Joshua Hibschman, Daniel Afergan, Yang Li, Jeffrey Nichols, and Ranjitha Kumar, Rico: A Mobile App Dataset for Building Data-Driven Design Applications, in Symposium on User Interface Software and Technology (UIST ’17). 2017: Quebec. http://ranjithakumar.net/resources/rico.pdf