Category Archives: mobile apps

Listening for Mosquitos

The ubiquitous mobile phone has opened many possibilities for citizen science. With most citizens equipped with a phone, and many with small supercomputers in the purse or pocket, it is easier than ever to collect data from wherever humans may be.

These devices are increasing the range of field studies, enabling the identification of plants and animals by sight and sound.

One key, of course, is the microphones and cameras. Sold to be used for deals and dating, not to mention selfies, these instruments now outstrip anything scientists could afford to build on their own.

The other key is that mobile devices are connected to the Internet, so data uploads are trivial. This technology is sold for commerce and dating and for sharing selfies, but it is perfect for collecting time- and location-stamped data.

In short, the vanity of youngsters has funded infrastructure that is better than scientists have ever built. Sigh.


Anyway.

This fall the Stanford citizen science folks are talking about yet another crowdsourced data collection: a project that identifies mosquitos by their buzz [1].

According to the announcement, Abuzz works on most phones, including older flip phones (AKA non-smart phones).

It took me a while to figure out that Abuzz isn’t an app at all. It is a manual process. Old style.

You use the digital recording feature on your phone to record a mosquito. Then you upload that file to their web site. This seems to be a manual process, and I guess that we’re supposed to know how to save and upload sound files.

The uploaded files are analyzed to identify the species of mosquito. There are thousands of species, but the training data emphasized the important, disease-bearing species we are most interested in knowing about.

A recent paper reports the details of the analysis techniques [2]. First of all, mobile phone microphones pick up mosquito sounds just fine. As we all know, the whiny buzz of those varmints is right there in human hearing range, so it’s logical that telephones tuned to human speech would hear mosquitos just fine.

The research indicates that the microphone is effective at ranges up to about 100 mm. This is pretty much what you would expect for a handheld phone. So, you are going to have to hold the phone up to the mosquito, just as you would pass it to a friend to say hello.

At the crux of the matter, they were able to distinguish different mosquitos from recordings made by phone. Different species of mosquito have distinct wing-beat sounds, and the research showed that these differences can be detected in phone recordings.
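The paper’s actual pipeline is more sophisticated than this, but the core trick, pulling the dominant wingbeat frequency out of a short recording, can be sketched in a few lines of Python. (The file name and the 200–1000 Hz search band are my assumptions, not the paper’s.)

```python
# Minimal sketch: estimate the dominant wingbeat frequency in a phone
# recording. The 200-1000 Hz search band is a rough guess at plausible
# mosquito wingbeat rates, not the Abuzz project's actual choice.
import numpy as np
from scipy.io import wavfile

def wingbeat_frequency(path, fmin=200.0, fmax=1000.0):
    rate, audio = wavfile.read(path)            # sample rate (Hz) and samples
    if audio.ndim > 1:                          # mix stereo down to mono
        audio = audio.mean(axis=1)
    audio = audio - audio.mean()                # remove any DC offset
    spectrum = np.abs(np.fft.rfft(audio))       # magnitude spectrum
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / rate)
    band = (freqs >= fmin) & (freqs <= fmax)    # only plausible wingbeat rates
    return freqs[band][np.argmax(spectrum[band])]

print(f"Dominant frequency: {wingbeat_frequency('mosquito.wav'):.1f} Hz")
```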

They also use the time and location metadata to help identify the species. For example, the geographic region narrows down the species that are likely to be encountered.
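The gist of combining the two signals is Bayesian: the recording supplies a per-species likelihood, and the location supplies a prior. A toy illustration (all the species probabilities here are invented for the example):

```python
# Toy Bayesian combination of acoustic evidence with a geographic prior.
# All numbers are invented; the real system is considerably more involved.

def posterior(acoustic_likelihood, geo_prior):
    """Combine per-species likelihoods with location-based priors."""
    unnormalized = {sp: acoustic_likelihood[sp] * geo_prior.get(sp, 0.0)
                    for sp in acoustic_likelihood}
    total = sum(unnormalized.values())
    return {sp: p / total for sp, p in unnormalized.items()}

# The acoustic model alone is ambiguous between two species...
likelihood = {"Aedes aegypti": 0.45, "Culex pipiens": 0.40, "Anopheles gambiae": 0.15}
# ...but only some species are plausible at the reported location.
prior = {"Aedes aegypti": 0.70, "Culex pipiens": 0.05, "Anopheles gambiae": 0.25}

print(posterior(likelihood, prior))   # the location breaks the acoustic tie
```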

The overall result is that it should be possible to get information about mosquito distributions from cell phone recordings provided by anyone who participates. This may contribute to preventing disease, or at least alerting the public to the current risks.


This project is pretty conservative, which is an advantage and a disadvantage. The low tech data collection is great, especially since the most interesting targets for surveillance are likely to be out in the bush, where the latest iPhones will be thin on the ground.

On the other hand, the lack of an app or a plug-in to popular social platforms means that the citizen scientists have to invest more work, and get less instant gratification. This may reduce participation. Obviously, it would be possible to make a simple app, so that those with smart phones have an even simpler way to capture and upload data.

Anyway, it is clear that the researchers understand this issue. The web site is mostly instructions and video tutorials, featuring encouraging invitations from nice scientists. (OK, I thought the comment that “[What] I would love to see is people really thinking hard about the biology of these complex animals” was a bit much.)

I haven’t actually tried to submit data yet. (It’s winter here; the skeeters are gone until spring.) I’m not really sure what kind of feedback you get. It would be really cool to email back a rapid report (i.e., within 24 hours). It should give the initial identification from your data (or possibly “there were problems, we’ll have to look at it”), along with overall statistics to put your data in context (e.g., “we’re getting a lot of reports of Aedes aegypti in your part of Africa”).

To do this, you’d need to automate the data analysis, which would be a lot of work, but certainly is doable.
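Just to make the shape of the thing concrete, here is a hypothetical skeleton of such a feedback loop. Every function and number below is an invented stand-in; the actual Abuzz pipeline does not work this way (yet).

```python
# Hypothetical skeleton of an automated feedback loop. Every function and
# number below is an invented stand-in, not part of the real Abuzz system.

def classify(recording_path, metadata):
    """Stub standing in for the real acoustic species classifier."""
    return "Aedes aegypti", 0.82

def regional_summary(species, location):
    """Stub standing in for a lookup of recent reports near the user."""
    return f"We have received 37 recent reports of {species} near {location}."

def build_report(recording_path, metadata):
    species, confidence = classify(recording_path, metadata)
    if confidence < 0.5:
        return "There were problems with your recording; we'll take a closer look."
    return (f"Preliminary identification: {species} (confidence {confidence:.0%}). "
            + regional_summary(species, metadata["location"]))

print(build_report("mosquito.wav", {"location": "Nairobi"}))
```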


I’ll note that this particular data collection is something that cannot be done by UAVs. Drones are, well, too droney. Even if you could chase mosquitos, it would be difficult to record them over the darn propellers. (I won’t say impossible—sound processing can do amazing things).

I’ll also note that this research method wins points for being non-invasive. No mosquitos were harmed in this experiment. (Well, they were probably swatted, but the experiment itself was harmless.) This is actually important, because you don’t want mosquitos to selectively adapt to evade the surveillance.


  1. Taylor Kubota, Stanford researchers seek citizen scientists to contribute to worldwide mosquito tracking, in Stanford – News. 2017. https://news.stanford.edu/2017/10/31/tracking-mosquitoes-cellphone/
  2. Haripriya Mukundarajan, Felix Jan Hein Hol, Erica Araceli Castillo, and Cooper Newby, Using mobile phones as acoustic sensors for high-throughput mosquito surveillance. eLife, October 11, 2017. doi: 10.7554/eLife.27854. https://elifesciences.org/articles/27854#info

Ad Servers Are—Wait For It—Evil

The contemporary Internet never ceases to serve up jaw-dropping technocapitalist assaults on humanity. From dating apps through vicious anti-social media, the commercial Internet is predatory, amoral, and sickening.

This month, Paul Vines and colleagues at the University of Washington report on yet another travesty—“ADINT: Using Ad Targeting for Surveillance” [1].

Online advertising is already evil (you can tell by their outrage at people who block out their trash), but advertisers are also completely careless of the welfare of their helpless prey. Seeking ever more “targeted” advertising, these parasites use tracking IDs on every mobile device to track every one of us. There is no opt-in or opt-out; we are not even informed.

The business model is to sell this information to advertisers who want to target people with certain interests. The more specific the information, the higher the bids from the advertiser. Individual IDs are combined with location information to serve up advertisements in particular physical locations. The “smart city” is thus converted into “the spam city”.

Vines and company have demonstrated that it is not especially difficult to purchase advertising aimed at exactly one person (device). Coupled with location specific information, the ad essentially reports the location and activity of the target person.

Without knowledge or permission.

As they point out, setting up a grid of these ads can track a person’s movement throughout a city.

This is not some secret spyware, or really clever data mining. The service is available to anyone for a fee (they estimate about $1,000). Thieves, predators, disgruntled exes, trolls, the teens next door: anyone can stalk you.

The researchers suggest some countermeasures, though they aren’t terribly reassuring to me.

Obviously, advertisers shouldn’t do this. I.e., they should not sell ads that are so specific they identify a single person. At the very least, it should be difficult and expensive to filter down to one device. Personally, I wouldn’t rely on industry self-regulation; I think we need good old-fashioned government intervention here.

Second, they suggest turning off location tracking (if you are foolish enough to still have it on), and zapping your MAID (the mobile advertising ID). It’s not clear to me that either of these steps actually works, since advertisers track location without permission, and I simply don’t believe that denying permission will have any effect on these amoral bloodsuckers. They’ll ignore the settings, or create new IDs not covered by the settings.

Sigh.

I guess the next step is a letter to the State’s Attorney and representatives. I’m sure public officials will understand why it’s not so cool to have stalkers able to track them or their families through online adverts.


  1. Paul Vines, Franziska Roesner, and Tadayoshi Kohno, Exploring ADINT: Using Ad Targeting for Surveillance on a Budget — or — How Alice Can Buy Ads to Track Bob. Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, 2017. http://adint.cs.washington.edu/ADINT.pdf

Database of App UI Designs

This month Ranjitha Kumar and colleagues report on “Rico”, a large dataset of UIs from published Android apps [1]. The dataset comes with tools to search it for similar apps, and to use the data to autogenerate app code following “best practice” as determined by the sample. Ideally, this can aid designers looking for examples to guide development.

The data itself was collected from apps in the Android app store (which provides metadata, too). Screens and sequences of interactions were collected through a hybrid of crowdsourced (human) and automated interaction.

The data was processed to extract the UI elements underlying each screen, a sample of interaction paths, and animations of transitions. The visual appearance of each screen is encoded in a 75-dimensional vector, which is used for searching and generating screens.

This approach lets a designer search by example, to find other UIs that are similar. Or a designer can sketch a UI, and find others that suggest “the rest” of the elements for the screen, based on similar apps.
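Mechanically, search-by-example over fixed-length encodings is just nearest-neighbor lookup. A minimal sketch, with random vectors standing in for Rico’s actual 75-dimensional screen encodings:

```python
# Nearest-neighbor search over UI-design vectors, a stand-in for Rico's
# search-by-example. Random data replaces the real 75-dimensional encodings.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
designs = rng.normal(size=(10_000, 75))      # pretend corpus of encoded screens

index = NearestNeighbors(n_neighbors=5).fit(designs)

query = rng.normal(size=(1, 75))             # encoding of the designer's sketch
distances, indices = index.kneighbors(query)
print("Most similar screens:", indices[0])   # indices into the design corpus
```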

The information encoded in this dataset is a large sample of current designs, encapsulating something about current practice. The paper calls this “best practice”, though it is really just “common” practice, not necessarily “best”.

It would be natural to link this dataset with empirical data about the quality of the product, e.g., user satisfaction, number of downloads, or revenue. Then, it would be possible to rank the instances and find the actual best practices.

The data is a snapshot of current practice, and it took a lot of effort to gather. The authors would like to improve the data gathering process so they can continuously update the dataset with new and upgraded apps. If they can indeed collect data over time, they could create a dataset of historical trends in app design. This could reveal changes over time, both functional and esthetic. It might even be possible to observe UI “fads” emerge and spread throughout the population of apps. That would be neat!

The project ultimately aims to develop tools that help designers, e.g., to autogenerate code based on sketches and the knowledge encoded in the tool and dataset.

I’m a little concerned that this tool might be basically just copying what other people have done—leading designers toward the average. This may be fast and cheap, but it is no way to create outstanding products.  In my view, apps are already too similar to each other, due to the use of ubiquitous infrastructure such as standard cloud services APIs and other toolkits.

But this kind of data might actually be used to search for novel solutions. For example, the encoded designs might be used in the fashion of a genetic algorithm. A design is encoded, then the encoding is mutated and new designs generated. Or the encodings might be mixed or crossed with each other, generating a “mash up” of two designs. Many such mutations would not be viable, but you could generate lots of them and select the best few. (I.e., evolutionary design.)
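A toy version of that idea, treating each design as a 75-dimensional vector as Rico does. The fitness function below is a meaningless placeholder; a real one would need the quality signals discussed above:

```python
# Toy evolutionary search over design encodings: mutate, cross over, select.
# The fitness function is a placeholder, not anything from the Rico paper.
import numpy as np

rng = np.random.default_rng(42)
DIM = 75

def fitness(design):
    # Placeholder score; a real one might come from ratings or downloads.
    return -np.linalg.norm(design - 1.0)

def mutate(design, scale=0.1):
    return design + rng.normal(scale=scale, size=DIM)

def crossover(a, b):
    mask = rng.random(DIM) < 0.5      # coordinate-wise "mash up" of two designs
    return np.where(mask, a, b)

population = [rng.normal(size=DIM) for _ in range(50)]
for _ in range(100):
    pairs = rng.integers(len(population), size=(25, 2))
    children = [mutate(crossover(population[i], population[j])) for i, j in pairs]
    # Keep only the fittest designs; most mutants won't be "viable".
    population = sorted(population + children, key=fitness, reverse=True)[:50]

print("Best placeholder fitness:", fitness(population[0]))
```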

I don’t know how well this would work in this case, but the idea would be to search through the gaps in current practice, and to optimize current designs. Now that would be kind of cool!


  1. Biplab Deka, Zifeng Huang, Chad Franzen, Joshua Hibschman, Daniel Afergan, Yang Li, Jeffrey Nichols, and Ranjitha Kumar, Rico: A Mobile App Dataset for Building Data-Driven Design Applications (to appear), in Symposium on User Interface Software and Technology (UIST ’17). 2017: Québec. http://ranjithakumar.net/resources/rico.pdf

Citizen Science: NoiseCapture App

Contemporary digital technology offers many opportunities for collecting scientific data. Millions of people are carrying highly capable networked computers (mobile phones), with cameras, microphones, and motion sensors. Most personal devices have capabilities available only in a few laboratories twenty years ago.

Furthermore, these devices are in the hands of “civilians”. It is now possible to do “citizen science” for real, using personal devices to collect data and aggregate it through network services.

This has been used for environmental sensing (microbe assays, weather, air pollution, particulates, odors), earthquake detection, food quality, detecting poachers, and wildlife observations (pollinators, bird watching, bird song, insect song).

As I have remarked before, simply collecting data is not actually that useful scientifically. It also invites misguided pseudoscience, if data is not carefully analyzed or is misinterpreted.

What is needed is the rest of the picture, including data cleaning, careful models and analysis, and useful, valid visualization and reports. You know, the “science” part.

This summer, a team from several French research institutions is releasing the NoiseCapture app, which allows anyone to “measure and share the noise environnement [sic]”.

Specifically, this app measures noise in a city as the user moves through ordinary activities. The microphone records the sounds, and GPS tracks the location of the device. (There are plenty of tricky details; see their papers [1, 2].)

The collected data is transmitted to the project’s server, where it is analyzed and cross-calibrated with other data. Any given measurement isn’t terribly meaningful, but many data points from many phones combine to create a valid estimate of a noise event. They incorporate these data into a spatial model of the city, which creates an estimate of noise exposure throughout the area [1].
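One detail worth knowing if you ever play with such data: decibel readings cannot simply be arithmetically averaged, because dB is a logarithmic scale. You average in the energy domain instead. A minimal sketch (the three readings are invented):

```python
# Combining several dB readings of the same event: convert to linear
# power, average, convert back. Plain averaging of dB values is wrong.
import numpy as np

readings_db = np.array([62.0, 65.0, 71.0])   # invented readings from three phones

energy_mean = 10 * np.log10(np.mean(10 ** (readings_db / 10)))
naive_mean = readings_db.mean()

print(f"Energy-domain average: {energy_mean:.1f} dB")   # ~67.6 dB
print(f"Naive average:         {naive_mean:.1f} dB")    # 66.0 dB, an underestimate
```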

It is very important to note that estimating noise exposure from a mobile phone microphone is pretty complicated (see the papers). Crowdsourcing the data collection is vital, but the actual “science” part of the “citizen science” is done by experts.

I’m pleased to see that the researchers have done some careful development to make the “citizen” part work well. The system is designed to record readings along a path as you walk. The app gives visual indications of the readings and the rated hazard level that is being observed. The data is plotted on interactive digital maps so that many such paths can be seen for each city. The project also suggests organizing a “NoiseCapture Party” in a neighborhood, to gather a lot of data at the same time.

Overall, this is a well thought out, nicely implemented system, with a lot of attention to making the data collection easy for ordinary people, and making high quality results available to the public and policy makers.


This research is primarily motivated by a desire to implement noise control policies, which are written with detailed technical standards. Much of the work has been aimed at showing that this crowdsourced, consumer-device approach can collect data that meets those technical standards.

That said, it should be noted that technical noise standards are not the same thing as the subjective comfort or nuisance value of an environment. One person’s dance party is another person’s aural torture. A moderately loud conversation might be unnoticed on a loud Saturday night, but the same chat might be very annoying on the following quiet Sunday morning.

I also have to say that I was a little disappointed that the “environment” in question is the urban streetscape. For instance, the app is not useful for indoor noise (where we spend a lot of time).

Also, I would love to have something like this to monitor the natural soundscape in town and country. When the machines and people aren’t making so much noise, there is still plenty to hear, and I would love to be able to chart that. These voices reveal the health of the wildlife, and it would be really cool to have a phone app for that.

This is what the “dawn chorus” folks are doing, but they don’t have nearly as nice a data analysis pipeline (and non-Brits can’t get the app).

Finally, I’ll note that simply detecting and recording noise is only a first step. In the event that the neighborhood is plagued by serious noise pollution, you’re going to need more than a mobile phone app to do something about it. You are going to need responsive and effective local and regional government. There isn’t an app for that.


  1. Erwan Bocher, Gwendall Petit, Nicolas Fortin, Judicaël Picaut, Gwenaël Guillaume, and Sylvain Palominos, OnoM@p : a Spatial Data Infrastructure dedicated to noise monitoring based on volunteers measurements. PeerJ Preprints, 4:e2273v2, 2016. https://doi.org/10.7287/peerj.preprints.2273v2
  2. Gwenaël Guillaume, Arnaud Can, Gwendall Petit, Nicolas Fortin, Sylvain Palominos, Benoit Gauvreau, Erwan Bocher, and Judicaël Picaut, Noise mapping based on participative measurements, in Noise Mapping. 2016. https://www.degruyter.com/view/j/noise.2016.3.issue-1/noise-2016-0011/noise-2016-0011.xml


“Games For Change” 2017 Student Challenge

And speaking of mobile apps with a social purpose….

The upcoming annual Games For Change (G4C) meeting has a lot of interesting stuff, on the theme “Catalyzing Social Impact Through Digital Games”. At the very least, this gang is coming out of the ivory tower and up off their futons, to try to do something, not just talk about it.

Part of this year’s activities is the Student Challenge, which is a competition that

“invites students to make digital games about issues impacting their communities, combining digital storytelling with civic engagement.”

This year’s winners were announced last month [1], from local schools and game jams in NYC, Dallas, and Pittsburgh. (Silicon Valley, where were you?) Students were asked to invent games on three topics:

  • Climate Change (with NOAA),
  • Future Communities (with Current by GE), and
  • Local Stories & Immigrant Voices (with National Endowment for the Humanities).

Eighteen winners were highlighted.

The “Future Communities” games are mostly lessons on the wonders of “smart cities”, and admonitions to clean up trash. One of them has a rather compelling “heart beat” of carbon emissions, though the game mechanics are pretty obscure: doing anything, or doing nothing at all, increases carbon. How do I win?

The “Climate Change” games also advocate picking up trash, as well as planting trees. There is also a quiz, and an Antarctic adventure (though nothing even close to “Never Alone”).

The “Local Stories & Immigrant Voices” games tell stories about immigrants, past and present. (These kids are from the US, land of immigration.) There are two alarming “adventures” that sketch how to enter the US illegally, which is a dangerous undertaking with a lot of consequences. Not something I like to see “gamified”.

Overall, the games are very heavy on straight storytelling, with minimal game-like features. Very much like the “educational games” the kids have no doubt suffered through for years. And not much like the games everyone really likes to play. One suspects that there were teachers and other adults behind the scenes shaping what was appropriate.

The games themselves are pretty simple technically, which is inevitable given the short development time and low budgets. The teams mostly made the best of what they had in the time available.

I worry that these rather limited experiences will give the students a false impression of both technology and story telling. The technology used is primitive, they did not have realistic market or user testing, and the general game designs are unoriginal. That’s fine for student projects, but not really a formula for real world success, and has little to do with real game or software development.

Worse, the entire enterprise is talking, not doing. One game, or 10,000 games, that tell you (again) to pick up trash doesn’t get the trash picked up. If you want to gamify neighborhood clean up, you are going to need to tie it to the actual physical world, e.g., a “trashure hunt”, with points for cleaning up and preventing litter.

These kids did a super job on their projects, but I think the bar was set far too low. Let’s challenge kids to actually do something, not just make a digital story about it. How would you use game technology to do it? I don’t know. That’s what the challenge is.


  1. Games for Change, Announcing the winners of the 2017 G4C Student Challenge, in Games For Change Blog. 2017. http://www.gamesforchange.org/2017/07/announcing-the-2017-g4c-student-challenge-winners/


Native American “Wellness Warriors” App

At this week’s conference, United National Indian Tribal Youth (UNITY) released their new “Wellness Warriors App” [1].

There are probably a bazillion “wellness” apps out there (and, confusingly, more than one “wellness warrior”). This app is distinguished by being designed to be culturally based for Native American youth.

Cool! This is the kind of thing I hope to see more of: digital apps that strengthen community and culture rather than eroding it. So I had to take a closer look.

The idea of the project is to promote “wellness from a cultural perspective – fitness through cultural dance, healthy eating with traditional Native foods, and more.” These activities already enjoy considerable participation as an expression of cultural identity and solidarity. The app adds in an emphasis on the health benefits of these activities.

These are real world, face-to-face activities. What can a mobile app really do?

From a brief trial run, it looks like one contribution is social connection with a digital community that promotes a broad solidarity across many locations and specific tribes. The app seeks to,

encourage Native youth to interact with each other in a way we’ve never seen before.

I’m not sure that this has never been seen before (I’m pretty sure that Facebook and everything else is already widely used by these kids), but it bundles all the stuff into a single, “just for us” app.

I admit that I don’t really know all the features WWA has, or how to use it reasonably. (I, for one, could use some directions! But I’m not in the target demographic, who are digital natives.)

Many of the features are familiar from generic apps, including sharing and messaging. The “wellness” aspect includes some fitness tracking and charts (I don’t know how to use them), space for contributed regional recipes, and a planner.

The ‘cultural sensitivity’ appears in many forms, such as the graphic design and channels for various Indian languages. The “wellness tracker” itself is a self-report meter through which you enter your current state of physical, mental, social, and spiritual wellness. These dimensions are probably used by many such wellness apps, but in this case they should be interpreted in the context of tribal heritage. The “social” and “spiritual” dimensions definitely have important and specific meanings for Native Americans.

This app, like any mobile app, is mainly talking, not doing. The activities of interest (eating, exercising, helping each other) are real world, face-to-face things. Digitally augmented talk is not necessarily going to promote wellness or fitness.

In general, I’m not optimistic on the effectiveness of any self-reported tracking features. Aside from the problematic nature of this kind of introspection, interrupting your life to fill in the data just seems too intrusive to work for long.

Also, I’ve never been interested myself in sharing fitness data (or recipes), so I wouldn’t be motivated by these features, even if I did take time to record my wellness. But lots of people, especially you youngsters out there, like to do this sort of thing. So there you go.

All that said, the cultural solidarity represented by UNITY should, in principle, add motivation and intrinsic rewards that make this app work better than a generic app with similar features would. It is also true that there already is a social network (UNITY and its many affiliated youth organizations), so this app overlays existing social connections, and therefore is more likely to be effective.

In other words, a digital app might or might not be especially effective for promoting wellness, but one that is embedded in a strong and positive cultural context might work better. As they suggest, the aim of the game is “Finding wellness and healing within our cultures”, which is a lot more meaningful than just “promoting wellness” in general.

This app inspires me to think of additional features that might make it even better. There are many possibilities that could be done technically, though I don’t know what would fit the spirit and practices of this group. (Perhaps spin-off apps, if these are too far afield from “wellness”.)

Things that occur to me:

  • A gratitude meter: express gratitude every day
  • Ambient nature awareness channels, e.g., Bison cam streaming coverage of reintroduced Bison herds.
  • Informal (social) games (in local languages!), with cultural content, e.g., guided meditation/storytelling with traditional themes and images.
    • (Can you make the game so great that kids everywhere, not just Native Americans, will want to practice Native American spiritual values, because it’s just cool?)
  • Idea market for mutual help (think “mindsharing”, with a cultural twist)
  • Platform cooperatives for sharing stuff (think Uber or AirBnB, except owned by the users). In this case, these should be embedded in the cultural heritage surrounding sharing and gifts.

Anyway, I look forward to seeing what happens with this app.


  1. United National Indian Tribal Youth, Cultural-based Wellness App to Launch at National Native Youth Conference, in UNITY – News. 2017. http://unityinc.org/cultural-based-wellness-app-to-launch-at-national-native-youth-conference/


Robot Testing Of Apps

Ke Mao and colleagues at University College London write about their research on “Robotic Testing of Mobile Apps” [1]. As they say, testing apps on mobile devices is a very difficult challenge. Contemporary handhelds have so many possible inputs, they are used by everyone in all kinds of situations, and apps run in a complex environment, potentially interacting with other apps, networks, and services. Throw in security, privacy, and operator safety. And so on.

It’s almost as if those hardware guys were trying to create a system that can’t possibly be programmed! 🙂

One of the key problems is that testing software on a handheld device requires some way to test, well, holding it in your hands. Much testing must be (literally) manual, and that’s difficult, slow, and expensive. Testers augment manual testing by capturing human tests and replaying them, possibly with variations. These are simulated interactions, which do not necessarily use the device realistically.
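In its simplest form, capture-and-replay just re-sends recorded input events, perhaps with a little jitter to vary each run. A bare-bones sketch using Android’s adb (the coordinates are invented; this is the simulated approach the authors contrast with, not their robot):

```python
# Bare-bones replay of recorded taps on an attached Android device via adb,
# with random jitter to vary each run. The coordinates are invented examples.
import random
import subprocess
import time

recorded_taps = [(540, 960), (200, 1700), (980, 180)]   # (x, y) from a captured session

for x, y in recorded_taps:
    jx = x + random.randint(-10, 10)                    # perturb the replayed gesture
    jy = y + random.randint(-10, 10)
    subprocess.run(["adb", "shell", "input", "tap", str(jx), str(jy)], check=True)
    time.sleep(0.5)                                     # crude pacing between events
```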

Mao et al. argue that the way to go is to develop robots that simulate human hands and motions. This may not be perfect, but, as they say, “the robotic gestures will at least be physical gestures”. Ideally, robots may work faster, more consistently, and cheaper than human testers, allowing more coverage.

In the paper, they introduce “a robotic-test generator for mobile apps”, and “compare our approach with simulation-based test automation, describe scenarios in which robotic testing is beneficial (or even essential)” [1].

The robot also has the advantage that it can be directly linked to automated test generation, pushing test scenarios into the robot’s code to be executed. The researchers use manual tests and also have a model to extrapolate new test cases that are “realistic”.
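Conceptually, the pipeline runs from a test generator to physical motion commands. A sketch with a wholly invented robot interface (and the paper’s generator is far smarter than the random one here):

```python
# Conceptual pipeline from generated test cases to robot motions. The
# RobotArm class is wholly invented; a real system would drive hardware.
import random

class RobotArm:
    """Stand-in for a hardware driver; real code would send motor commands."""
    def tap(self, x_mm, y_mm):
        print(f"[robot] tap at ({x_mm:.1f}, {y_mm:.1f}) mm")

def generate_tap_sequence(n, screen_w_mm=65.0, screen_h_mm=115.0):
    """Naive random test generator; Mao et al. bias toward realistic sequences."""
    return [(random.uniform(0, screen_w_mm), random.uniform(0, screen_h_mm))
            for _ in range(n)]

arm = RobotArm()
for x, y in generate_tap_sequence(5):
    arm.tap(x, y)      # each generated gesture becomes a physical motion
```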

Their current implementation is actually limited to finger taps. That’s probably useful, but surely we want two whole hands, to hold, turn, shake, and drop the device.

I imagine that this idea will be widely used for both testing and implementing simulated users (i.e., ‘bots’ in the software sense, run by actual physical robots!).

However, there is still a need for as much human testing as possible, particularly field testing. Personal devices must operate in the real world, which is full of unexpected and unforeseeable surprises, not least of them being those darn users, who do the daftest things.

Robot hands are a great addition to the testing arsenal, but scarcely replace existing methods. More testing is always better than less testing. Better testing is always better than poorer testing. So test software as much as possible, in as many ways as possible.


  1. Ke Mao, Mark Harman, and Yue Jia, Robotic Testing of Mobile Apps for Truly Black-Box Automation. IEEE Software, 34 (2):11-16, 2017. http://ieeexplore.ieee.org/document/7888396/media