Tag Archives: Evan Ackerman

Singaporean Robot Swans

Evan Ackerman calls attention to a project at the National University of Singapore that is deploying robotic water-quality sensors designed to look like swans.

The robots cruise surface reservoirs, monitoring the water chemistry and uploading the data to the cloud via wifi as it is collected.  (Singapore has wifi everywhere!)  The robots are encased in imitation swan bodies, which are intended ‘to be “aesthetically pleasing” in order to “promote urban livability.”’ I.e., to look nice.

This is obviously a nice bit of work, and a good start.  The fleet of autonomous robots can maneuver to cover a large area, and concentrate on hot spots when needed, all at a reasonable cost. I expect that the datasets will be amenable to data analysis and machine learning, which could mean continuous improvement in knowledge about the water quality.
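Just to illustrate the kind of analysis I mean (this is my own sketch, not anything from the NUSwan project; the sensor values and threshold are invented), even a simple statistical test over the fleet's pooled readings could flag hot spots:

```python
# Hypothetical sketch: flag "hot spot" sites whose readings deviate
# strongly from the fleet-wide average, using a simple z-score test.
from statistics import mean, stdev

def flag_hotspots(readings, threshold=3.0):
    """readings: list of (site_id, value) pairs, e.g. dissolved-oxygen levels.
    Returns the site ids whose values are statistical outliers."""
    values = [v for _, v in readings]
    mu, sigma = mean(values), stdev(values)
    return [site for site, v in readings
            if sigma > 0 and abs(v - mu) / sigma > threshold]

# One site reports an unusually low dissolved-oxygen reading:
samples = [("A", 8.1), ("B", 8.0), ("C", 7.9), ("D", 8.2), ("E", 2.5)]
print(flag_hotspots(samples, threshold=1.5))  # prints ['E']
```

Real deployments would presumably use richer models (seasonal baselines, spatial interpolation), but the point stands: once the fleet's data is pooled in the cloud, this kind of continuous monitoring becomes easy.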

As far as the plastic swan bodies…I’m not really sold.

For starters, they don’t actually look like real swans.  They are obviously artificial swans.

Whether plastic swans are actually more aesthetically pleasing than other possible configurations seems like an open question to me.  I tend to think that a nicely designed robot might be just as pleasing as a fake swan, or even better.  And it would look like a water quality monitor, which is a good thing.

Perhaps this is an opportunity to collaborate with artists and architects to develop some attractive robots that say “I’m keeping your water safe.”

  1. Evan Ackerman, Bevy of Robot Swans Explore Singaporean Reservoirs, in IEEE Spectrum – Automation. 2018. https://spectrum.ieee.org/automaton/robotics/industrial-robots/bevy-of-robot-swans-explore-singaporean-reservoirs
  2. NUS Environmental Research Institute, New Smart Water Assessment Network (NUSwan), in NUS Environmental Research Institute – Research Tracks – Environmental Surveillance and Treatment. 2018. http://www.nus.edu.sg/neri/Research/nuswan.html


Robot Wednesday

Robot Blimp For Exploring Hidden Spaces

I noted earlier the discovery of what seems to be a chamber in the Great Pyramid at Giza. The discovery opens the question of how to further explore the hidden space without damaging the ancient structure.  One idea is to drill a small shaft, and push through a tiny robot explorer.

A research group at INRIA and Cairo University is developing a robotic blimp for such a mission.  The deflated blimp could be pushed through a 3 cm shaft, then inflated to reconnoiter the hidden space.  The task requires a very compact, lightweight system, which will likely operate autonomously.

Evan Ackerman interviewed senior investigator Jean-Baptiste Mouret for IEEE Spectrum [1].  He notes that a blimp is a good choice, because it is “pillowy”, and less likely to damage the structure.

Mouret describes the challenges imposed by the size and weight limits.  Conventional sensors, including GPS, would be too heavy and power hungry.  They are developing “bioinspired” sensors, based on bees and flies.  These include a miniature optic-flow sensor that can operate in low-light conditions.
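The real sensors are specialized hardware, but the computation they implement is easy to sketch. Here is a minimal, purely illustrative 1D gradient-based optic-flow estimate (the classic brightness-constancy approach; none of this code is from the project):

```python
# Toy 1D optic flow: estimate how far an intensity profile shifted
# between two frames, using brightness constancy (I_t ~ -v * I_x)
# solved by least squares over all interior pixels.

def estimate_shift(frame_a, frame_b):
    num = den = 0.0
    for i in range(1, len(frame_a) - 1):
        ix = (frame_a[i + 1] - frame_a[i - 1]) / 2.0  # spatial gradient
        it = frame_b[i] - frame_a[i]                  # temporal gradient
        num += -it * ix
        den += ix * ix
    return num / den if den else 0.0

a = [float(i) for i in range(10)]   # a smooth intensity ramp
b = [x - 1.0 for x in a]            # the same scene shifted one pixel
print(estimate_shift(a, b))         # prints 1.0
```

An optic-flow signal like this, integrated over time, gives the blimp a sense of its own motion even when GPS and heavy cameras are off the table.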

Getting the robot into the space is one thing; making sure it can be retrieved is more difficult.  It is important not to litter the structure with a lost robot, so the robot will need to return to the tiny access hole, dock with the base, and fold up so it can be pulled out.  It will be designed with backup behaviors to search for the dock, even if damaged.

It will be years before any expedition to the Great Pyramid happens. The robot is still being developed and the measurements of the Pyramid are being refined.   The Pyramid is over 4,000 years old, so there is no need for haste.

  1. Evan Ackerman, Robotic Blimp Could Explore Hidden Chambers of Great Pyramid of Giza, in IEEE Spectrum – Automation. 2017. https://spectrum.ieee.org/automaton/robotics/drones/robotic-blimp-could-explore-hidden-chambers-of-great-pyramid


Robot Wednesday

Collapsable Delivery Drone

I’m not a huge fan of buzzy little quadcopters, nor am I a fan of delivery drones. The former are about as welcome as a cloud of mosquitoes, and the latter promise to transfer even more wealth to the 0.001%. (I’m not sure who these drones will be delivering to, when none of us have jobs or money to buy things.)

That said, I was interested to see the “origami-inspired cargo drone” developed by a group at Ecole Polytechnique Fédérale de Lausanne [2]. Their design wraps the copter in a flexible cage, which protects the package and also encloses the dangerous rotors. The cage is foldable, so it closes up to a relatively small package when not in use.

The cage is a nice design. It addresses the safety (and perceived safety) of the drone directly. Rather than depending on complex algorithms to make the drone “safe” and “friendly”, their design makes the drone a soft, beach-ball-like thing; the affordances are obvious and visible. Furthermore, the safety factor is passive: the effectiveness of the enclosure does not depend on either software or humans.

I’m sure that this basic idea can be realized in a lot of geometries. The EPFL design is modular, which means that a variety of cages can be made from the same design. It folds up rather neatly, and, of course, is light and strong.

I could imagine versions of this concept that have a standard coupling to a range of quadcopters. Sort of a “delivery cage” costume for drones. (I smell a new standard for “drone costume attachment” coming.)

Clearly, there is no reason why the cage has to be so bare and undecorated. Why not streamers, glitter, and even LEDs? These might make the drone more appealing, and would also make the drone more visible to cameras, radar, and sonar. (Another standard? Passive safety reflectors for drones?)

I’m still not eager to have my local stores put out of business by Amazon, but if I’m going to have to live with drones, I’d like them to bounce off walls and people, rather than crash into them.

  1. Evan Ackerman, EPFL’s Collapsable Delivery Drone Protects Your Package With an Origami Cage, in IEEE Spectrum — Automation. 2017. https://spectrum.ieee.org/automaton/robotics/drones/epfl-collapsable-delivery-drone-protects-your-package-with-an-origami-cage
  2. Przemyslaw Mariusz Kornatowski, Stefano Mintchev, and Dario Floreano, An origami-inspired cargo drone, in IEEE/RSJ International Conference on Intelligent Robots and Systems. 2017: Vancouver. http://infoscience.epfl.ch/record/230988


Robot Wednesday

Robot Funeral Rituals? Augmenting Religious Practice

One of the most prominent aspects of human life that has been little affected by the internet and robots is religion, especially formal religious practices. Church, temple, or mosque, religious practice is a bastion of unaugmented humans.

There are obvious reasons for this to be the case. Religion is conservative with a small “C”, embodying as it does cultural heritage in the present day. Traditional ideas and practices are at the psychological core of religious practice. Religious practice is not generally about “disruption” or “move fast and break things” (at least not in the thoughtless way Silicon Valley disrupts things.)

Another obvious reason is that much of religious teaching is about human behavior and human relations. Emphasis on the “human”. From this perspective, augmenting humans or virtualizing human relations is at best irrelevant and at worst damaging to proper human conduct.

But this will surely change. Religious traditions are living cultures which adopt new technology. It will be interesting to watch how human augmentation is incorporated into religious practices, not least because it may create some interesting, humane modes of augmented living.

Obviously, many people have already adopted digital communications and social media in spiritual and religious life. Heck, even the Pope is on Twitter. But this is just the tip of the iceberg, little more than the twenty-first-century version of pamphlets and sermons.

What else might be coming?

For one thing, virtual worlds will surely need to be converted.

I recall some science fiction story (quite possibly by William Gibson, but I don’t remember) that had a brief vignette about a devout Catholic who loaded his personality into an avatar in a virtual world. This splinter of his consciousness (soul?) kneels in a virtual chapel and prays 24/7. In the story, this practice is approved by the church. I think the notion is that he receives indirect credit for this pious exercise, which is sort of analogous to other practices such as hiring a mass for a deceased parent.

For another, robots and cyborgs need to be incorporated into both theology and practice.

Along these lines, Evan Ackerman reports this month on a service in Japan that offers a robot to perform Buddhist funeral rites [1].  The “humanoid robot, suitably attired in the robe of a Buddhist monk” reads the sutras and bows at the appropriate moments.

The robot is much cheaper than a human, is programmed for alternative versions of the ritual, and can live stream the affair to remote mourners. (It can probably work much longer and faster than puny Carbon-based priests, too.)

It isn’t clear how this will be accepted or how popular it may be. To the degree that the funeral is for the comfort of the living, much will depend on how the mourners like it. A robot is not as sympathetic and soothing as a person, so I don’t really know.

There are, of course, theological questions in play. Do the words count if they are said by a machine? (Would they count if a parrot recited them and bowed?) There are certain to be differences of opinion on this question.

Thinking about this, I note another interesting possibility: a robot can also be remotely operated. A human priest could very well supervise the ceremony from a distance, with various levels of control. The robot could, in principle, be anywhere on Earth, in orbit, or on Mars; extending the reach of the holy man. Would this remote augmentation of the priest’s capabilities be “more authentic” than an autonomous robot programmed to do the ceremony?

Such a remote operation would have advantages. The robot would add a level of precision to the fallible priest—the robot could check and correct the performance. The robot can operate in hazardous conditions, such as a disaster area or war zone (imagine remote chaplains for isolated military posts). The remote avatar might bring a measure of comfort to people otherwise out of reach of conventional pastoral care.

Human priests would not have to travel, and could perform more work. For that matter, a single priest could operate multiple remote robot avatars simultaneously, significantly augmenting the sacred productivity.

Taking this idea of a priestly “remote interface” seriously for a moment, we can speculate on what other rituals might be automated this way. Christian rituals such as baptism or communion certainly could be performed by robots, especially supervised robots. Would this be theologically legitimate? Would it be psychologically acceptable? I don’t know.

I haven’t heard of anyone doing it, and I’m not endorsing such a thing; I’m just thinking about the possibility.

To the degree that autonomous or supervised robots are accepted into spiritual practice, there will be interesting questions about the design and certification of such robots. It might well be the case that the robot should meet specific standards, and have only approved programming. Robots could be extremely doctrinaire, or dogma could be loaded as a certified library or patch. I have no idea what these software standards might need to be, but it will be yet another frontier in software quality assurance.

There are other interesting possibilities. What if a robot is programmed for multiple religious practices, coming from competing traditions? At any one moment, it may be operating completely validly under one set of rules, and later it might switch and follow another set. This is how robots work. But this is certainly not how human religions work. Carbon-based units generally cannot be certified clergy for more than one sect at a time. Will robots have to be locked in to a single liturgical version? Or, like TVs or web browsers, would a tele-priest be a generic device, configured with approved content as needed?

While we’re on the question of software, what about hacking? What if malicious parties hack into the sacred software and swap the prayers for a competing version of the rite? Or defile the words or actions? Or simply destroy the religion they dislike? Yoiks! I have no idea what the theological implications of a corrupted or enslaved robot would be, but I imagine they could be dire.

  1. Evan Ackerman, Pepper Now Available at Funerals as a More Affordable Alternative to Human Priests, in IEEE Spectrum – Automation. 2017. https://spectrum.ieee.org/automaton/robotics/humanoids/pepper-now-available-at-funerals-as-a-more-affordable-alternative-to-human-priests


Remote Fun Park On The Moon?

As we approach the fiftieth anniversary of the first moon landing, it is clear that humans have pulled back from space exploration and from science in general. NASA’s budget has steadily declined, dedicated scientists have been politically suppressed, and much of the space program has calcified into a jobs program.

What can be done?

Toys! Theme parks!

There is a lot of interest these days in sending swarms of small robots to the moon. Perhaps inspired by ubiquitous remotely piloted drones, why not remotely operate a moon rover?  And why couldn’t anybody drive one, with a game controller?

The Lunatix company is proposing to sell moon-rover driving as a game. Earth-bound gamers would be linked to the lander and could purchase driving time. Kind of like consumer drones, except on the moon [3].

The lander might have a small science payload, but mainly it is dedicated to commercial use. (There would be merchandise and other associated sales, as well.)

This seems relatively straightforward technically. There are some tricky bits, such as linking a consumer via the Internet to an uplink to the moon. Safely linking. Securely linking. (Hint: space communications are expensive and rare, and generally not connected to the public.)
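A back-of-envelope calculation shows part of the problem: light-speed delay alone makes real-time “driving” sluggish. (The half-second of terrestrial network delay below is my own guess, not a Lunatix figure.)

```python
# Round-trip command latency for tele-driving a moon rover.
C_KM_S = 299_792.458      # speed of light, km/s
EARTH_MOON_KM = 384_400   # mean Earth-Moon distance, km

def round_trip_seconds(ground_delay_s=0.5):
    # command up, video back down, plus assumed Internet/ground delays
    return 2 * EARTH_MOON_KM / C_KM_S + ground_delay_s

print(f"{round_trip_seconds():.2f} s")  # prints 3.06 s
```

Three seconds between stick input and on-screen response is closer to issuing move commands than to playing a racing game, which probably shapes what the “game” could actually feel like.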

I have no idea about the commercial case. Space projects are obscenely expensive, but getting cheaper. At something like 25 Euros per minute, it seems to me that driving time would be pretty damn expensive, at least for peasants like me. But who knows? My intuitions about business plans are often wrong.

Evan Ackerman points out that this purely commercial project raises legal questions. The moon is more or less under the jurisdiction of the United Nations, as defined by treaties among nations. There seems to be no specific framework for commercial exploitation of the moon, though there will surely need to be one soon.

Aside from the equity issues about sucking money out of the lunar commons (the moon is the common heritage of all human kind), there may be environmental and other regulatory issues.

I note that a company slogan is “Leave Your Mark on the Moon!”  The users will leave behind tracks, indelible tracks, visible from Earth.  This will surely have consequences.

How happy are we going to be when the moon is covered with tread marks? Do you want to see rude graffiti defacing the surface? How will we feel about a giant cola ad written in the dust? How will Earthly strongmen react to uncensored political messages, indelibly written on the moon?

The company proposal seems to wave its hands at the legal problems and doesn’t even list any legal issues under “Risks”. That may be optimistic.

In the end, it is quite possible that money will talk. As Ackerman puts it, despite his own misgivings, “If this is the best way to get robots to the moon, then so be it”.

“While there’s a small section in the Lunatix executive summary on “Legal Framework,” there are few specifics about whether or not the United Nations would approve something like this. Lunatix seems to suggest that its use case is covered under “the common interest of all mankind in the progress of the exploration and use of outer space for peaceful purposes,” but I’m not so sure. It may be that no framework exists yet (either for or against), and my gut reaction that commercializing the moon in this way somehow cheapens it is probably just me being old and grumpy. If this is the best way to get robots to the moon, then so be it.” (From Ackerman [1])

I have my doubts about this concept. We’ll see.

But the general idea that some kind of entertainment business might be one of the earliest commercial successes in space seems plausible. Many important technologies started out as entertainment, or were driven by markets for entertainment [2].

For example, the Internet was designed for military and scientific applications, but the earliest commercial successes were music theft, games, and pornography, which drove markets for servers, GPUs and broadband, among other things. Today’s cord cutters are simply taking advantage of the second and third generation of these technologies. And, just as the Internet has never been comfortable with the fact that it is a great mechanism for delivering pornography, space entertainment may not turn out quite as imagined.


  1. Evan Ackerman, How Much Would You Pay to Drive a Jumping Robot on the Moon?, in IEEE Spectrum – Automation. 2017. http://spectrum.ieee.org/automaton/robotics/space-robots/how-much-would-you-pay-to-drive-a-jumping-robot-on-the-moon
  2. Steven Johnson, Wonderland: How Play Made the Modern World, New York, Riverhead Books, 2016.
  3. Space Tech, Lunatix. Graz University of Technology, Institute of Communication Networks and Satellite Communications, 2017. https://www.tugraz.at/fileadmin/user_upload/tugrazInternal/Studium/Studienangebot/Universitaere_Weiterbildung/SpaceTech/Fallstudienprojekt_ST14.pdf



Space Saturday

Disposable Sensor Drones

Today, the Internet of Things is in its early “steampunk” stage, basically adding mobile phone technology to toasters and refrigerators. The real IOT will be closer to the vision of Smart Dust described two decades ago: massive numbers of very tiny sensors, networked together. No, we really don’t know how to build that yet, but we’re working on it.

For the past few years, the US Naval Research Lab has been working on disposable drones, which are beginning to become more like smart dust. By shrinking robot aircraft down, they are creating a sort of ‘guided dust’. OK, the dust motes are still pretty chunky, but it’s the steam era.  They’ll get smaller.

The CICADA project (Close-in Covert Autonomous Disposable Aircraft) has produced a series of prototypes, and they are showing their Mark 5 this year.

The robot glider is 3D printed and assembled from proven technologies; sensors, radios, and autopilot and guidance systems are dropped in. The design is “stackable”, so batches can be dropped from an aircraft. Each glider steers toward a specific target and beams back its data when it lands. The whole thing is cheap enough to be considered disposable (at least by the Pentagon).

With different sensors, there are many obvious things that these could do. The possibilities are captured nicely by the idea of dropping a batch into a storm to take a bunch of readings from inside. For meteorology, these are sort of like sounding balloons, except they fall instead of float.

“Right now, [CICADAs] would be ready to go drop into a hurricane or tornado,” he said. “I really would love to fly an airplane over, and each of these could sample in the tornado. That’s ready now. We’d just need a ride. And [FAA] approval.” (Quoting NRL’s Dan Edwards)

I’m pretty sure that the military will find less benign uses for this concept, though there are already plenty of guided weapons, so this isn’t anything new.

The prototype is said to run about $250, which is cheap for the Navy, but seems high to me. I’m not seeing anywhere near that much gear in these little birds, and most, if not all, of it can be done with open source. I would expect that hobbyists could probably replicate this idea in a maker space for a whole lot less per unit. Couple it with inexpensive quadcopters to lift them, and I could see huge potential for citizen science.

As a software guy, I have to wonder what the data system looks like. Whatever the Navy has done, I’m pretty sure that hobbyists or science students can whip up a pretty nice dashboard to grab, analyze, and visualize the sensor traces.
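For instance (pure speculation on my part; the record format here is invented, and the real NRL telemetry is surely different), the core of such a dashboard might be nothing more than:

```python
# Hypothetical CICADA-style trace summarizer: group rows by sensor
# and report (min, max, mean) per sensor across the whole swarm.
import csv, io
from collections import defaultdict
from statistics import mean

def summarize(trace_csv):
    by_sensor = defaultdict(list)
    for row in csv.DictReader(io.StringIO(trace_csv)):
        by_sensor[row["sensor"]].append(float(row["value"]))
    return {s: (min(v), max(v), round(mean(v), 2))
            for s, v in by_sensor.items()}

trace = """drone,sensor,value
c1,temp_c,21.5
c2,temp_c,20.1
c1,pressure_hpa,1012.8
c2,pressure_hpa,1009.3
"""
print(summarize(trace))
```

From there it is a short step to plots and maps; the hard part is the drop and the downlink, not the data wrangling.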

  1. Evan Ackerman, Naval Research Lab Tests Swarm of Stackable CICADA Microdrones, in IEEE Spectrum – Automation. 2017. http://spectrum.ieee.org/automaton/robotics/drones/naval-research-lab-tests-swarm-of-stackable-cicada-microdrones
  2. US Naval Research Laboratory. CICADA: Close-in Covert Autonomous Disposable Aircraft. 2017, https://www.nrl.navy.mil/tewd/organization/5710/5712/research/CICADA.



Robot Wednesday

NASA Investigating Clockwork Rover Technology

NASA has the coolest projects!

With a long-term mission to visit and measure everywhere in the Solar System, NASA has now ticked off the easy stuff: Earth orbit, the Moon, Mars, orbiting all the planets.

There are plenty of places we really want to visit, but haven’t been able to. Cold places like the ice moons. And really hot places like the Sun and the surface of Venus.

In the case of Venus, several spacecraft have orbited and are orbiting, and a handful of probes have reached the surface, just barely. The surface is hot, over 400 degrees C, and the pressure is a crushing 90 atmospheres. Most electronics simply don’t work at these temperatures. And it’s very cloudy, so solar power is minimal.  And so on.

In short, conventional engineering has little chance. To date, the record time to failure is about 2 hours, set by the heroically insulated Venera 13 probe in 1982. Building such extreme systems is hard and very expensive.

There is no way to make a rover to explore Venus. What’s to be done?

A NASA design group is exploring ways to build a rover that uses mechanical parts—clockwork—instead of electronics and computers. This is called “Automaton Rover for Extreme Environments (AREE)”.

When I saw their animation of some initial concepts, I immediately recognized that this is a Strandbeest, and indeed they did invite Theo Jansen to JPL for some advice. (Evidently, Jansen’s advice was to get rid of the legs.)

Alternative locomotive ideas include wheels and tank treads.

But moving around is the least of the problems. How do you collect data?

In an interview with Evan Ackerman, they report several intriguing ideas under development.

First of all, mechanical calculation and number storage should be doable.  And rough forms of obstacle avoidance are well known, too.  (Toy cars navigate around furniture by bumping and backing up, no?)

[Image: Jonathan Sauder/NASA/JPL-Caltech]

Obstacle avoidance is another simple mechanical system that uses a bumper, reverse gearing, and a cam to back the rover up a bit after it hits something, and then reset the bumper and the gearing afterwards to continue on. During normal forward motion, power is transferred from the input shaft through the gears on the right hand side of the diagram and onto the output shaft. The remaining gears spin but do not transmit power. When the rover contacts an obstacle, the reverse gearing is engaged by the synchronizer, having the opposite effect. After the cam makes a full revolution, it pushes the bumper back to its forward position. A similar cam can be used to turn the wheels of the rover at the end of the reverse portion of the drive.
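Just to make the cycle concrete, here is a toy software simulation of that bump-and-back behavior (in 1D, so “turning the wheels” becomes reversing direction); the real rover would do all of this with gears and cams, not code:

```python
# Toy 1D bump-and-back rover: drive forward one unit per step; on bumper
# contact, a fixed "cam-length" reverse is executed and the heading flips
# (standing in for the wheel-turning cam at the end of the reverse).

def run_rover(obstacles, steps, backup=3):
    x, heading, path = 0, +1, []
    for _ in range(steps):
        nxt = x + heading
        if nxt in obstacles:            # bumper contact engages reverse gearing
            for _ in range(backup):     # one cam revolution backs the rover up
                x -= heading
                path.append(x)
            heading = -heading          # end-of-reverse cam changes direction
        else:
            x = nxt
            path.append(x)
    return path

path = run_rover(obstacles={5}, steps=10)
print(path)  # the rover never reaches position 5
```

The charm of the mechanical version is that this entire control loop is embodied in gears and cams that shrug off 400 degrees C.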

But if you had some data, how would you return it to Earth (i.e., to an orbital relay)? One possibility would be some kind of hard copy (e.g., etched into a metal disk), which is then lifted with a balloon and potentially picked up by a high-altitude UAV. That sounds cool, but pretty iffy.

Another idea is to do semaphore code with radar reflectors. The orbiter beams radar at the rover, and the rover reflects back on-off signals at certain wavelengths. This might have a bandwidth of a few bits per second (one way). That’s not much, but it’s a lot more than zero bps! Pretty cool.
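A couple of bits per second goes further than it sounds. A quick calculation (with message sizes I made up for illustration, not mission specs):

```python
# Time to semaphore a batch of readings back to the orbiter at a few bps.
def transmit_time_min(n_readings, bits_per_reading=16, bps=2):
    return n_readings * bits_per_reading / bps / 60

# 100 sixteen-bit readings at 2 bps:
print(f"{transmit_time_min(100):.1f} minutes")  # prints 13.3 minutes
```

So a heavily compressed daily science summary is plausible even over a one-way reflector link.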

They are also trying to develop sensors that will work under these conditions. This is difficult, and it might be an area where small amounts of exotic high-temperature electronics could be used.

This is such a cool design project!

I’m not sure how these ideas will pan out, but this work

“is also important for changing the conversation on exploring Venus. Today, long duration in-situ mobile access on Venus has not been considered a realistic option. AREE demonstrates how such a system can be achieved today by cleverly utilizing current technology and enhanced by the technology of tomorrow.” [2]

  1. Evan Ackerman, JPL’s Design for a Clockwork Rover to Explore Venus, in IEEE Spectrum – Automation. 2017. http://spectrum.ieee.org/automaton/robotics/space-robots/jpl-design-for-a-clockwork-rover-to-explore-venus
  2. Jonathan Sauder. Automaton Rover for Extreme Environments (AREE). 2017, https://www.nasa.gov/directorates/spacetech/niac/2017_Phase_I_Phase_II/Automaton_Rover_Extreme_Environments.



Robot Wednesday