Tag Archives: Species-appropriate computer mediated interaction

“Empowering” Non-human Species?

Earlier this week I commented on two different projects that, for different reasons, made logically absurd claims about “empowering” non-human species.

With impressive digital technology, sensors, and actuators available to nearly everyone, we are seeing a burst of creativity. This is our “new age of makers”, and it is wonderful!

Inevitably, though, we are struggling with power, identity, autonomy, consent, and other perennial human concerns. As computing becomes both ubiquitous and intimate, questions of who controls it become more personal and urgent.

But these technologies are also being imposed on fellow species, who cannot give consent, informed or otherwise. I have written about the important question of “who benefits”: when we impose a digital system on an animal, plant, or whatever, we should ask how they understand the situation, and what benefit they receive [1]. I don’t like to see digital technology be a more efficient way to exploit humans or non-humans.

But the projects I noted this week had an even more troubling feature. Both of them made claims that the digital systems somehow “empower” the non-humans.

The transHumUs project equipped trees with mobile robots that responded in part to the physiology of the passenger plant. This setup was said to “reveal some autonomy” by the plants. “Autonomy” must mean “not specifically directed by any person”, because the plants certainly had no intention to move or understanding of the situation.

Another project discussed the creation of a Decentralized Autonomous Organization (DAO) using blockchain technology, which would somehow enable a pod of orcas to defend their “rights”. This magical software is, itself, imagined to be “autonomous”, and somehow the “smart contracts” it implements are supposed to represent the interests of the whales. Whatever those interests may be.

These projects, along with misguided concepts for digital “games for cats”, and so on, put forward some very troubling anthropocentric thinking. Not only are the interests of non-humans not considered, the human designer feels free to design the system “for their benefit”, and imagines that this act “empowers” them.

In fact, these systems represent the interests and fantasies of the human creators. The trees move around in patterns that the human artist feels represent “tree-like” behavior: slow, stately, silent. The whale DAO apparently manages the raising and disbursal of funds to support human scientists and caretakers, who are acting to “save the whales”.
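Stripped of the blockchain mystique, here is a toy Python sketch (my own invention, not the actual project’s code) of what such a “whale DAO” boils down to: human-authored rules moving funds between humans. Note where the whales appear in the logic: nowhere.

```python
class WhaleDAO:
    """Toy sketch of a 'whale rights' smart contract (hypothetical).
    Every rule here is written by humans, and every payout goes to
    humans.  The whales appear only in this docstring."""

    def __init__(self):
        self.balance = 0
        self.approved_recipients = set()  # human scientists and caretakers

    def donate(self, amount):
        """Humans send funds in."""
        self.balance += amount

    def approve(self, recipient):
        """Humans vote other humans onto the payout list."""
        self.approved_recipients.add(recipient)

    def disburse(self, recipient, amount):
        """The 'contract' enforces only what its human authors wrote."""
        if recipient in self.approved_recipients and amount <= self.balance:
            self.balance -= amount
            return True
        return False
```

However “autonomous” the execution, the interests encoded are entirely those of the human designers.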

These projects aren’t terrible ideas. They are kind of clever, technically, and they reflect artistic and humane concern and love for other species. They might benefit the non-human individuals or species as a group.

But they are all about human desires. The rhetoric about “autonomy” and “empowering” the non-humans is rubbish. At best, this is romantic silliness. At worst, it is cynical word play, puffing up what are essentially selfish motives.

The non-humans have no understanding or knowledge of these systems that allegedly empower them. There are no reasonable grounds to imagine that plants have any way to think about moving around on carts, or that whales give one whit about digital funds transfers or “smart contracts”.

Most important of all, I have trouble getting past the extremely paternalistic, colonialist, god-complex nature of these ideas. The all-powerful humans will endow the lesser creatures with capabilities that transcend their natural abilities. For their own good, of course. To set them “free” and “give” them “autonomy”.

This kind of thinking has led to many bad results in the past. (See: colonialism. See: ecocide. See: slavery.)

It’s also really bad to think this way: you are misunderstanding your own capabilities (overestimating your ability to “fix” other people’s problems), and wrecking the chances of really helping whales and trees (by frittering away resources on things that don’t meet their actual needs).

Worst of all, you are imperiling your own karma, tempted by digital technology to act as a paternalistic god rather than a humble cohabitant of our planet.

It’s cool to think about how sensors, robots, and mobile computers might be adapted for the use of non-human species. But it is important to understand that this isn’t their idea, it is your idea.


  1. Robert E. McGrath, Species-appropriate computer mediated interaction, in Proceedings of the 27th international conference extended abstracts on Human factors in computing systems. 2009, ACM: Boston, MA, USA.

Species Appropriate Interfaces—Insect Edition

Vivek Nityananda and colleagues report some interesting experiments with “3D insect cinema”—basically augmented reality goggles for praying mantises [2]. (Others have used VR to study insect behavior not relying on stereo vision.)

Mantises are monochromatic (as far as we can tell), but have highly effective motion detection and tracking, as witnessed by their hunting skill. With two huge, forward-facing eyes, we might expect them to use stereo vision of some kind. In the last forty years, evidence has accumulated that mantises indeed do have stereo vision. But how do they do it?

This is an interesting question for several reasons. With their obvious visual equipment (I mean, look at them peepers!), mantises “ought” to use stereo vision. If they did not have stereo vision, then that would be an interesting puzzle to tackle, akin to the mystery of flightless birds. Proof that they do indeed use stereo cues is an interesting point, in itself.

Whatever the mechanisms are, they have developed independently from mammals and other species with stereo vision. As Nityananda et al. point out, we can compare mantis vision to humans, birds, and other cases “to explore whether nervous systems in vastly divergent evolutionary lineages have evolved convergent or divergent solutions to the same complex problems.” (p. 1) How many ways has this problem been solved throughout evolutionary history?

Additionally, “given the relative simplicity” of mantis nervous systems, whatever mechanisms they have evolved “might be easier to adapt into robotic or computational systems of depth-perception”. (p.1-2)  Who knows what useful tips we might learn from mantis vision?

The research team adapted techniques used in human VR and AR to the mantis. First, they found that polarized filters did not work well for mantises. On the other hand, “anaglyph” 3D, as in old-style red/blue 3D glasses, did work. So they “crafted miniature 3D glasses by cutting out pieces of filter, about 7 mm in diameter” (p. 4), affixing a blue or green lens to each compound eye.
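For the curious, here is a minimal Python/NumPy sketch (my own illustration, not the authors’ code) of how such anaglyph stimuli can be composed: render the two eyes’ views of a “prey” dot with a horizontal offset, then put one view in the green channel and one in the blue, matching the colored filters over each eye.

```python
import numpy as np

def draw_dot(width, height, x, y, radius):
    """Render one eye's view: a bright dot on a dark field."""
    yy, xx = np.mgrid[0:height, 0:width]
    return ((xx - x) ** 2 + (yy - y) ** 2 <= radius ** 2).astype(float)

def anaglyph_frame(width, height, x, y, radius, disparity):
    """Compose a green/blue anaglyph: the left eye's view goes in the
    green channel, the right eye's in the blue.  The horizontal offset
    (disparity) between the two views carries the depth illusion."""
    left = draw_dot(width, height, x - disparity // 2, y, radius)
    right = draw_dot(width, height, x + disparity // 2, y, radius)
    rgb = np.zeros((height, width, 3))
    rgb[..., 1] = left   # green channel -> eye behind the green filter
    rgb[..., 2] = right  # blue channel  -> eye behind the blue filter
    return rgb

# A "prey" dot; with the right sign of disparity it appears to hover
# in striking range (sizes and disparity here are invented numbers)
frame = anaglyph_frame(200, 150, x=100, y=75, radius=8, disparity=12)
```

The red channel stays empty, since neither filter passes it; in practice the filter colors would be tuned to the mantis’s actual spectral sensitivity.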

Cool!

(a) Mantis fitted with the experimental 3D colored glasses. (b) The “insect 3D cinema” for the display of stereoscopic stimuli to the mantis

The results showed that the mantis responded to the visual illusions generated by these false stereoscopic pictures, striking at what appeared to be prey.

This is clever work, for sure, and opens the way to exploring this independent evolution of stereo vision. And it’s not just a “species appropriate” interface for mantises, it’s species appropriate AR!

But this study made me think about the questions I raised long ago [1].

When we conduct experiments on humans or “higher” animals, we take great care to avoid harm or unnecessary discomfort for the participants. In the case of VR or AR, we know that some people experience nausea, headaches, or other unpleasant effects. We would monitor them and remove the goggles to protect them.

How would we pursue this same ethical policy in the case of mantises? Do we know if these goggles (glued onto their eyes!) bother them? Whether the stereo illusions cause discomfort? How would we know?

No, I don’t know the answer.

Research ethics also demand that we not humiliate or degrade our subjects with the experimental activities. For example, we would be cautious asking questions about sexual experience, or presenting violent stimuli, because these might be unpleasant or embarrassing.   We always have to balance what is needed to test the research question with what the participants may experience.

In the case of the mantis experiments, the behavioral measure involved tricking the participant into striking and missing what looked like prey. This is a core behavior of the mantis, and missing a strike is a serious thing indeed. Aside from the potential long term effects of messing with predatory behavior, is it justified to mess with the mind of the mantis in this way?

I think a case can be made that this experiment is ethical, at least based on what little we know about mantis minds. But I doubt that much thought was given to these questions, beyond the obvious notion that “it’s just an insect”.

The good news is that the techniques demonstrated in this study should enable more sophisticated studies of mantis behavior, which may lead us to understand them better, and perhaps refine our knowledge of how to treat them ethically.

Oh, and, of course, VR goggles mean we can now serve up Mantis porn, which, from what I understand of their love life, could be pretty extreme. : – )



  1. Robert E. McGrath, Species-appropriate computer mediated interaction, in Proceedings of the 27th international conference extended abstracts on Human factors in computing systems. 2009, ACM: Boston, MA, USA.
  2. Vivek Nityananda, Ghaith Tarawneh, Ronny Rosner, Judith Nicolas, Stuart Crichton, and Jenny Read, Insect stereopsis demonstrated using a 3D insect cinema. Scientific Reports, 6:18718, 2016. http://dx.doi.org/10.1038/srep18718

(Thanks to friend of the blog, Alan Craig for pointing out this study to me.)

Species-Inappropriate Touch Screens

As long as I’m on the topic of Inappropriate Touch Screens, let me turn to the booming field of tablet-based “games for cats”. Given that the World Wide Web was built so people could put up pictures of their cats, who is surprised that cat lovers are a major target for mobile devices?

Entertaining pictures of cats are for people, of course. But there is also a torrent of touchscreen games, supposedly for people to play with their cat. C’mon.

For example, a sample of games is reviewed by Yaara Lancet (iOS, Android), and Purina offers an array of games.

I can’t possibly review them all, and there is little need to do so. As Michelle Westerlaken comments, despite alleged “research”, the games all work the same way (a moving target to chase) and “do not really seem to take the senses and perceptions of the animal into account.”
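In fact, the shared pattern is simple enough to sketch in a few lines of Python (a hypothetical skeleton, not any particular product’s code): one target wanders around the screen, and a touch near it counts as a “catch”. That really is the whole design.

```python
import math
import random

class ChaseGame:
    """The pattern nearly all 'games for cats' share: a single target
    wanders the screen, and a touch close to it counts as a catch."""

    def __init__(self, width=1024, height=768, target_radius=40):
        self.width, self.height = width, height
        self.radius = target_radius
        self.x = random.uniform(0, width)
        self.y = random.uniform(0, height)

    def tick(self):
        """Jittery wander -- the 'intricate wiggle' the marketing touts."""
        self.x = min(self.width, max(0.0, self.x + random.uniform(-30, 30)))
        self.y = min(self.height, max(0.0, self.y + random.uniform(-30, 30)))

    def touch(self, tx, ty):
        """Return True (a 'catch') if the touch lands on the target,
        then respawn the target somewhere else."""
        caught = math.hypot(tx - self.x, ty - self.y) <= self.radius
        if caught:
            self.x = random.uniform(0, self.width)
            self.y = random.uniform(0, self.height)
        return caught
```

Nothing in this loop knows anything about feline vision, paws, or motivation, which is rather the point.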

The Purina games are said to have been developed “using a feline focus group of different ages and breeds.” (No word on the selection of the sample, the process that might have been used in these groups, nor on any control groups, such as “dangling a string”.) The results “revealed that cats are most intrigued by the intricate movements of objects as they wiggle or spin across the screen.” They also discovered (or at least looked up) that cats can’t actually see most of the colors that your expensive tablet can display (those are there for you to watch movies).

I’m hoping this press release was a joke, because it is certainly laughable to say that anyone needed to conduct “research” to discover this particular fact. What they describe here is cargo cult “social science”, debasing the already weak currency of usability studies.

Other reports indicate that cats’ claws do no harm to the glass display (though they are hard on plastic protectors), but I’m pretty sure that paws and noses do not really register well with the touch sensing. (Heck, your tablet doesn’t track your own nose and tongue touches either.) Other inputs (e.g., shaking and tilting the device) are inaccessible to felines.

The bottom line is: tablets are essentially unusable by felines, except for the visual display, which is only partly usable. I conclude that these games are not only Inappropriate Touch Screen Interfaces, they also are Species-Inappropriate Interfaces, period.

It is abundantly clear that these games are primarily for the people, not the cats. Some cats (but certainly not all cats) will play these games. This does no harm, but does not benefit the cats any more than other similar games, such as chase a laser pointer or a piece of string. In fact, these would probably give them more fun and exercise than the touch screen based game.

This is bad design and a waste of a perfectly good tablet.

Furthermore, these games also teach us nothing we did not know about cats, or about human relations to cats.  So they aren’t even food for thought.

I’m consigning these games en masse to  the Inappropriate Touch Screen Files (Species-Inappropriate category).


More Fun With Slime Mold: Hybrid Musical Interface

I have discussed slime mold computing earlier, which is really cool and strange. These little beasties have been used as analog circuits, using their growth processes to create networks that “solve problems”. For example, slime mold can solve moderate sized traveling salesman problems.

Unlike silicon based computing, slime molds are slow, low energy and cheap. Add a little food and water, and put them in a cool, dark place, and eventually they will “grow” a solution for you.
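To give a flavor of how “growing a solution” can be abstracted into an algorithm, here is a small Python sketch of the Physarum-inspired path-finding model (a simulation in the spirit of the published mathematical models of slime mold transport networks, not anyone’s actual wetware): tubes that carry more flow get reinforced, tubes that carry less wither away, and the network converges onto the shortest path.

```python
import numpy as np

# Tiny graph: node 0 is the food source, node 3 the sink.
# Path 0-1-3 has total length 2; path 0-2-3 has total length 3.
edges = [(0, 1, 1.0), (1, 3, 1.0), (0, 2, 1.0), (2, 3, 2.0)]
n, source, sink = 4, 0, 3
D = np.ones(len(edges))  # tube conductivities, all equal to start

for _ in range(200):
    # Weighted graph Laplacian of the network at current conductivities
    L = np.zeros((n, n))
    for k, (a, b, length) in enumerate(edges):
        w = D[k] / length
        L[a, a] += w; L[b, b] += w
        L[a, b] -= w; L[b, a] -= w
    # Drive unit flow from source to sink; pin the sink's pressure at 0
    rhs = np.zeros(n)
    rhs[source] = 1.0
    keep = [i for i in range(n) if i != sink]
    p = np.zeros(n)
    p[keep] = np.linalg.solve(L[np.ix_(keep, keep)], rhs[keep])
    # Feedback: each tube's conductivity relaxes toward the flux it carries
    for k, (a, b, length) in enumerate(edges):
        Q = D[k] / length * (p[a] - p[b])
        D[k] += 0.1 * (abs(Q) - D[k])

# Tubes on the short path fatten toward 1; the long path withers toward 0
print(np.round(D, 3))
```

The graph, lengths, and update rate here are invented for illustration; the real organism does something analogous with protoplasmic tubes rather than a Laplacian solve.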

This technology has been adopted by composer Eduardo Miranda of Plymouth University (UK), to create a hybrid piano (analog), computer (digital), and slime mold (biological) system that responds to the music played by the human.

The digital system acts to create “species appropriate interfaces” ([1]) for the Homo sapiens and the Physarum polycephalum (which is more of a colony than an individual).

As far as I can understand, the sound from the piano is picked up by a microphone, and converted to electrical signals that tickle the Physarum polycephalum. The slime mold network responds to the current by emitting current at other contacts, which are picked up by the digital system. These signals are turned into magnetic “plucks” on piano strings, creating sounds that are perceptible by the human.
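To pin down that pipeline, here is a guessed sketch in Python of the two mediation steps (the function names, units, and thresholds are all my invention, not Miranda’s implementation): piano loudness in, pluck commands out.

```python
def audio_to_stimulus(samples, gain=5.0):
    """Map piano loudness (RMS of an audio frame) to a stimulation
    voltage for the mold.  The 5 V ceiling is a hypothetical 'safe
    for the organism' limit, not a documented figure."""
    rms = (sum(s * s for s in samples) / len(samples)) ** 0.5
    return min(gain * rms, 5.0)

def mold_to_plucks(output_currents, threshold=0.5):
    """Turn the mold's per-electrode output currents into pluck events:
    any electrode above threshold plucks its assigned piano string."""
    return [i for i, amps in enumerate(output_currents) if amps > threshold]
```

The interesting part, of course, is the black box in the middle: the mold’s response to stimulation, which no one programs.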

The result is sort of a responsive sound generator, governed by the otherworldly logic of slime mold growth.

Weird, but kind of cool.

As noted, this wins full points for species appropriate interfacing, in a pretty difficult case. Physarum polycephalum are significantly different from Homo sapiens, so the digital mediation is complex and iffy. Using a musical task is actually a brilliant idea, though we can question whether the subjective meaning of the experience is anywhere near a joint or collaborative work for the two performers.

Unfortunately, Miranda succumbed to temptation (as well as the reality of time and resources), and uses an iPad app as a control interface. Ick! We’ll add this to the Inappropriate Touch Screen Files, though certainly we can excuse the lapse, given the awesomeness of the hybrid musical interface.


  1. Robert E. McGrath, “Species-Appropriate Computer Mediated Interaction”, alt.chi, ACM CHI 2009, Boston, April 8, 2009, pp. 2529-2534. DOI 10.1145/1520340.1520357


Species Appropriate Feline Interface

In my longstanding Species Appropriate Interfaces series, here is a product to watch from a local team: Mousr.

From the promotional materials it is clear that these guys have a clue (quite a few clues, in fact), as well as a serious user testing panel.

This product is clearly well designed for the intended feline users; it may or may not be great for the humans. The materials are not very specific about the app, so we’ll have to see.

One technical question:  I have to wonder how the onboard AI will deal with more than one cat in the game.  The behaviors sketched in the description seem like they will work OK if there is a pride of cats stalking the robot, but much would depend on details of the implementation and the actual processing speeds.  Another thing to look at when the product is out.

Examining the safety aspects, this looks pretty well designed, assuming there aren’t any glitches; pieces that break or fall off or whatever.  It is certainly no more dangerous than either natural behaviors or human made toys.  (It may be more dangerous for people than cats, if you step on it in the dark. I’m sure your cat is willing to take that risk.)

Finally, examining the ethics as I always do, I find this gets a strong grade.  The device is fun for the cat in the way the cat would like to have fun.  The autonomous prey logic is a nice feature, based on the real preferences of the feline users (not on some human fantasy).

The only caveats are about the potential manufacturing and end of life pollution associated with the device (not that your cat cares one tiny meow about that).  But these are no worse than many other gadgets.

In conclusion, this is an excellent example of a Species Appropriate Interface, designed with proper respect for and alignment with the interests of the non-human participants. I look forward to seeing it in action.

PS.

Feature suggestion: add a networked “nose cam” to the Mousr, so your cat can make cool chase videos.


See also:

  1.  McGrath, Robert E., Species-appropriate computer mediated interaction, in Proceedings of the 27th international conference extended abstracts on Human factors in computing systems. 2009, ACM: Boston, MA, USA. http://doi.acm.org/10.1145/1520340.1520357


Species Appropriate Interfaces: Not Just Imaginary

Augmented Reality Sensei Alan Craig pointed me to recent stories about “Second Livestock” by Austin Stewart.

This “company” presents a package of faux technologies, purporting to create an Animal Computer Interface, providing a “Virtual Free Range” for domesticated chickens.

They show a preposterous VR headset for a chicken, as well as imagery of a chicken-scale VR treadmill.

(Image from Second Livestock.)

The textual materials pretend to be an IT company presentation, which means minimal information and maximal hype.

Here Stewart touches on issues surrounding industrial chicken farming, suggesting over the top technological “solutions”.  The point is, of course, to get people to see that, since these absurd moves would not be good for chickens, why are they good when we do them ourselves?

This is fine as far as it goes, and I agree with the thrust of his points.  (I’m not sure this is the most effective way to make the point, but that’s as may be.)

The tech press more or less “get it”, though there seemed to be some confusion about whether this is or could be real technology. (For example, Darrell Etherington insisted upon comparing it to the flavor of the month, Oculus, which is pretty dumb.)

The really cool thing is that we really could do some of these things, though it would take a lot more work than Stewart has done.  (To be fair, he isn’t really interested in chickens.)

In 2009 I reviewed what had been done up to that time (“Species-Appropriate Computer Mediated Interaction”, alt.chi, ACM CHI 2009, Boston, April 8, 2009, pp. 2529-2534.) and the field has expanded since then (Mancini, Clara, Shaun Lawson, Janet van der Linden, Jonna Hakkila, Frank Noz, Chadwick Wingrave, and Oskar Juhlin, “Animal-computer interaction SIG“, in CHI ’12 Extended Abstracts on Human Factors in Computing Systems. 2012, ACM: Austin, Texas, USA. p. 1233-1236.).  For other real research, see perhaps, Mancini’s blog.

Let me make it clear:  unlike Stewart, these researchers are really doing Animal Computer Interfaces for many species.

This is far more profound than Stewart’s rhetoric, especially since we are inflicting our own technology onto powerless individuals with no ability to consent or complain. (See papers above for thoughtful discussion of this and other crucial points.)

From the perspective of really doing it (as opposed to making political points), I saw many flaws in Second Livestock.

First, I’m pretty sure that the vision and hearing of chickens are substantially different from humans’, so the presentation systems would need to be retuned for the avian users. More importantly, vision isn’t nearly as important to chickens as to humans, so the hierarchy of inputs should be reordered. I’m pretty sure that tactile and olfactory senses are more important, so they can’t be left until “later”, and they are really hard to do in VR.

Second, the sketch of the chicken on a treadmill, presumably to provide full body motion in VR, is species inappropriate.  Chicken locomotion is more complex than human bipedalism, notably including hopping and at least limited flying (at least, for feral chickens.)  A reasonably faithful VR motion system for a chicken would require capturing wing motions and simulating short range flight.

I note that a treadmill could be used for the bipedal walking part of avian locomotion, though the gait is substantially different and the users quite a bit smaller and lighter. In other words, the technology would have to be adapted quite a bit to work correctly for chickens.

A third issue to note is that the VR needs to accommodate the natural behaviors of the users.  In the case of chickens, this must include pecking at targets on the ground.  This means that the rig must display the ground in high fidelity, and also let the chicken peck.  The absurd headgear shown by Second Livestock doesn’t appear to be well suited to quick pecking motions.

Finally, we can think a little about how chickens might interact with each other in such a virtual environment. Texting is obviously useless. We can provide high fidelity visual displays, though it is not clear prima facie what visual information is salient to the avian users.

The system can capture and play vocalizations, though we humans will have to work hard to grok what is important in these signals, and to deal with any semantic difficulties due to latency or missing non-auditory cues.

But, as Stewart’s work drives at, chickens, just as humans, communicate through touch and smell.  These are difficult to do in VR, even for our own species.

OK, this all goes beyond what Stewart was trying to do. But I think it is even cooler than what he makes of it.

I hope he won’t mind me doing him the favor of taking him too seriously.