Tag Archives: Evan Ackerman

Collapsable Delivery Drone

I’m not a huge fan of buzzy little quadcopters, nor am I a fan of delivery drones. The former are about as welcome as a cloud of mosquitoes, and the latter promise to transfer even more wealth to the 0.001%. (I’m not sure who these drones will be delivering to, when none of us have jobs or money to buy things.)

That said, I was interested to see the “origami-inspired cargo drone” developed by a group at Ecole Polytechnique Fédérale de Lausanne [2]. Their design wraps the copter in a flexible cage, which protects the package and also encloses the dangerous rotors. The cage is foldable, so it closes up to a relatively small package when not in use.

The cage is a nice piece of design, addressing both the actual and the perceived safety of the drone. Rather than depending on complex algorithms to make the drone “safe” and “friendly”, their design makes the drone a soft, beach-ball-like thing—the affordances are obvious and visible. Furthermore, the safety is passive: the effectiveness of the enclosure does not depend on either software or humans.

I’m sure that this basic idea can be realized in a lot of geometries. The EPFL design is modular, which means that a variety of cages can be made from the same design. It folds up rather neatly, and, of course, is light and strong.

I could imagine versions of this concept that have a standard coupling to a range of quadcopters. Sort of a “delivery cage” costume for drones. (I smell a new standard for “drone costume attachment” coming.)

Clearly, there is no reason why the cage has to be so bare and undecorated. Why not streamers, glitter, and even LEDs? These might make the drone more appealing, and would also make the drone more visible to cameras, radar, and sonar. (Another standard? Passive safety reflectors for drones?)

I’m still not eager to have my local stores put out of business by Amazon, but if I’m going to have to live with drones, I’d like them to bounce off walls and people, rather than crash into them.

  1. Evan Ackerman, EPFL’s Collapsable Delivery Drone Protects Your Package With an Origami Cage, in IEEE Spectrum – Automaton. 2017. https://spectrum.ieee.org/automaton/robotics/drones/epfl-collapsable-delivery-drone-protects-your-package-with-an-origami-cage
  2. Przemyslaw Mariusz Kornatowski, Stefano Mintchev, and Dario Floreano, An origami-inspired cargo drone, in IEEE/RSJ International Conference on Intelligent Robots and Systems. 2017: Vancouver. http://infoscience.epfl.ch/record/230988


Robot Wednesday

Robot Funeral Rituals? Augmenting Religious Practice

One of the most prominent aspects of human life that has been little affected by the internet and robots is religion, especially formal religious practices. Church, temple, or mosque, religious practice is a bastion of unaugmented humans.

There are obvious reasons for this to be the case. Religion is conservative with a small “c”, embodying as it does cultural heritage in the present day. Traditional ideas and practices are at the psychological core of religious practice. Religious practice is not generally about “disruption” or “move fast and break things” (at least not in the thoughtless way Silicon Valley disrupts things).

Another obvious reason is that much of religious teaching is about human behavior and human relations. Emphasis on the “human”. From this perspective, augmenting humans or virtualizing human relations is at best irrelevant and at worst damaging to proper human conduct.

But this will surely change. Religious traditions are living cultures which adopt new technology. It will be interesting to watch how human augmentation is incorporated into religious practices, not least because it may create some interesting, humane modes of augmented living.

Obviously, many people have already adopted digital communications and social media in spiritual and religious life. Heck, even the Pope is on Twitter. But this is the tip of the iceberg, little more than the twenty-first-century version of pamphlets and sermons.

What else might be coming?

For one thing, virtual worlds will surely need to be converted.

I recall some science fiction story (quite possibly by William Gibson, but I don’t remember) that had a brief vignette about a devout Catholic who loaded his personality into an avatar in a virtual world. This splinter of his consciousness (soul?) kneels in a virtual chapel and prays 24/7. In the story, this practice is approved by the church. I think the notion is that he receives indirect credit for this pious exercise, which is sort of analogous to other practices such as hiring a mass for a deceased parent.

For another, robots and cyborgs need to be incorporated into both theology and practice.

Along these lines, Evan Ackerman reports this month on a service in Japan that offers a robot to perform Buddhist funeral rites [1].  The “humanoid robot, suitably attired in the robe of a Buddhist monk” reads the sutras and bows at the appropriate moments.

The robot is much cheaper than a human, is programmed for alternative versions of the ritual, and can live stream the affair to remote mourners. (It can probably work much longer and faster than puny Carbon-based priests, too.)

It isn’t clear how this will be accepted or how popular it may be. To the degree that the funeral is for the comfort of the living, much will depend on how the mourners like it. A robot is not as sympathetic and soothing as a person, so I don’t really know.

There are, of course, theological questions in play. Do the words count if they are said by a machine? (Would they count if a parrot recited them and bowed?) There are certain to be differences of opinion on this question.

Thinking about this, I note another interesting possibility: a robot can also be remotely operated. A human priest could very well supervise the ceremony from a distance, with various levels of control. The robot could, in principle, be anywhere on Earth, in orbit, or on Mars; extending the reach of the holy man. Would this remote augmentation of the priest’s capabilities be “more authentic” than an autonomous robot programmed to do the ceremony?

Such a remote operation would have advantages. The robot would add a level of precision to the fallible priest—the robot could check and correct the performance. The robot can operate in hazardous conditions, such as a disaster area or war zone (imagine remote chaplains for isolated military posts). The remote avatar might bring a measure of comfort to people otherwise out of reach of conventional pastoral care.

Human priests would not have to travel, and could perform more work. For that matter, a single priest could operate multiple remote robot avatars simultaneously, significantly augmenting the sacred productivity.

Taking this idea of a priestly “remote interface” seriously for a moment, we can speculate on what other rituals might be automated this way. Christian rituals such as baptism or communion certainly could be performed by robots, especially supervised robots. Would this be theologically legitimate? Would it be psychologically acceptable? I don’t know.

I haven’t heard of anyone doing it, and I’m not endorsing such a thing; I’m just thinking about the possibility.

To the degree that autonomous or supervised robots are accepted into spiritual practice, there will be interesting questions about the design and certification of such robots. It might well be the case that the robot should meet specific standards, and have only approved programming. Robots could be extremely doctrinaire, or dogma could be loaded as a certified library or patch. I have no idea what these software standards might need to be, but it will be yet another frontier in software quality assurance.

There are other interesting possibilities. What if a robot is programmed for multiple religious practices from competing traditions? At any one moment it may be operating completely validly under one set of rules, and later it might switch and follow another set. This is how robots work. But it is certainly not how human religions work. Carbon-based units generally cannot be certified clergy for more than one sect at a time. Will robots have to be locked in to a single liturgical version? Or, like TVs or web browsers, would a tele-priest be a generic device, configured with approved content as needed?

While we’re on the question of software, what about hacking? What if malicious parties hack into the sacred software and swap in a competing version of the rite? Or defile the words or actions? Or simply destroy the religion they dislike? Yoiks! I have no idea what the theological implications of a corrupted or enslaved robot would be, but I imagine they could be dire.

  1. Evan Ackerman, Pepper Now Available at Funerals as a More Affordable Alternative to Human Priests, in IEEE Spectrum – Automaton. 2017. https://spectrum.ieee.org/automaton/robotics/humanoids/pepper-now-available-at-funerals-as-a-more-affordable-alternative-to-human-priests


Remote Fun Park On The Moon?

As we approach the fiftieth anniversary of the first moon landing, it is clear that humans have pulled back from space exploration and from science in general. NASA’s budget has steadily declined, dedicated scientists have been politically suppressed, and much of the space program has calcified into a jobs program.

What can be done?

Toys! Theme parks!

There is a lot of interest these days in sending swarms of small robots to the moon. Perhaps inspired by ubiquitous remotely piloted drones, why not remotely operate a moon rover? And why not let anybody drive one, with a game controller?

The Lunatix company is proposing to sell moon-rover driving as a game. Earth-bound gamers would be linked to the lander and could purchase driving time. Kind of like consumer drones, except on the moon [3].

The lander might have a small science payload, but mainly it is dedicated to commercial use. (There would be merchandise and other associated sales as well.)

This seems relatively straightforward technically. There are some tricky bits, such as linking a consumer via the Internet to an uplink to the moon. Safely linking. Securely linking. (Hint: space communications are expensive and rare, and generally not connected to the public.)

I have no idea about the commercial case. Space projects are obscenely expensive, but getting cheaper. At something like 25 Euros per minute, it seems to me that driving time would be pretty damn expensive, at least for peasants like me. But who knows? My intuitions about business plans are often wrong.
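To put that rate in perspective, here is a trivial back-of-the-envelope calculation. The 25 EUR/minute figure is the one mentioned above; the session lengths are my own hypothetical examples:

```python
# Cost of a lunar "driving session" at the advertised rate.
# 25 EUR/min is the figure quoted above; session lengths are
# hypothetical examples, not Lunatix pricing tiers.
RATE_EUR_PER_MIN = 25.0

def session_cost(minutes: float) -> float:
    """Cost in euros for a given number of driving minutes."""
    return minutes * RATE_EUR_PER_MIN

for minutes in (1, 10, 60):
    print(f"{minutes:3d} min -> {session_cost(minutes):7.0f} EUR")
# prints: 1 min -> 25 EUR, 10 min -> 250 EUR, 60 min -> 1500 EUR
```

So an hour behind the virtual wheel would run about 1500 euros—firmly in the luxury-hobby bracket.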

Evan Ackerman points out that this purely commercial project raises legal questions. The moon is more or less under the jurisdiction of the United Nations, as defined by treaties among nations. There seems to be no specific framework for commercial exploitation of the moon, though there will surely need to be one soon.

Aside from the equity issues about sucking money out of the lunar commons (the moon is the common heritage of all human kind), there may be environmental and other regulatory issues.

I note that a company slogan is “Leave Your Mark on the Moon!”  The users will leave behind tracks, indelible tracks, visible from Earth.  This will surely have consequences.

How happy are we going to be when the moon is covered with tread marks? Do you want to see rude graffiti defacing the surface? How will we feel about a giant cola ad written in the dust? How will Earthly strongmen react to uncensored political messages, indelibly written on the moon?

The company proposal seems to wave its hands at the legal problems and doesn’t even list any legal issues under “Risks”. That may be optimistic.

In the end, it is quite possible that money will talk. As Ackerman puts it, despite his own misgivings, “If this is the best way to get robots to the moon, then so be it”.

“While there’s a small section in the Lunatix executive summary on ‘Legal Framework,’ there are few specifics about whether or not the United Nations would approve something like this. Lunatix seems to suggest that its use case is covered under ‘the common interest of all mankind in the progress of the exploration and use of outer space for peaceful purposes,’ but I’m not so sure. It may be that no framework exists yet (either for or against), and my gut reaction that commercializing the moon in this way somehow cheapens it is probably just me being old and grumpy. If this is the best way to get robots to the moon, then so be it.” (From Ackerman [1])

I have my doubts about this concept. We’ll see.

But the general idea that some kind of entertainment business might be one of the earliest commercial successes in space seems plausible. Many important technologies started out as entertainment, or were driven by markets for entertainment [2].

For example, the Internet was designed for military and scientific applications, but the earliest commercial successes were music theft, games, and pornography, which drove markets for servers, GPUs and broadband, among other things. Today’s cord cutters are simply taking advantage of the second and third generation of these technologies. And, just as the Internet has never been comfortable with the fact that it is a great mechanism for delivering pornography, space entertainment may not turn out quite as imagined.


  1. Evan Ackerman, How Much Would You Pay to Drive a Jumping Robot on the Moon?, in IEEE Spectrum – Automaton. 2017. http://spectrum.ieee.org/automaton/robotics/space-robots/how-much-would-you-pay-to-drive-a-jumping-robot-on-the-moon
  2. Steven Johnson, Wonderland: How Play Made the Modern World, New York, Riverhead Books, 2016.
  3. Space Tech, Lunatix. Graz University of Technology Institute of Communication Networks and Satellite Communications, 2017. https://www.tugraz.at/fileadmin/user_upload/tugrazInternal/Studium/Studienangebot/Universitaere_Weiterbildung/SpaceTech/Fallstudienprojekt_ST14.pdf



Space Saturday

Disposable Sensor Drones

Today, the Internet of Things is in its early “steampunk” stage, basically adding mobile phone technology to toasters and refrigerators. The real IOT will be closer to the vision of Smart Dust described a quarter of a century ago—massive numbers of very tiny sensors, networked together. No, we really don’t know how to build that, yet, but we’re working on it.

For the past few years, the US Naval Research Lab has been working on disposable drones, which are beginning to look more like smart dust. By shrinking robot aircraft down, they are creating a sort of ‘guided dust’. OK, the dust motes are still pretty chunky, but it’s the steam era. They’ll get smaller.

The CICADA project (Close-in Covert Autonomous Disposable Aircraft) has produced a series of prototypes, and they are showing their Mark 5 this year.

The robot glider is 3D printed and built up from already-proven technologies; sensors, radios, autopilot, and guidance systems are dropped in. The design is “stackable”, meant to be dropped in batches from an aircraft. Each glider steers toward a specific target and beams back its data when it lands. The whole thing is cheap enough to be considered disposable (at least by the Pentagon).
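The article doesn’t describe NRL’s actual guidance law, but the basic idea of a glider steering itself toward a fixed drop target can be sketched as a simple proportional heading controller. Everything here—the function names, the flat-earth bearing approximation, the gain—is my own illustration, not the CICADA design:

```python
import math

def bearing_to_target(lat, lon, tgt_lat, tgt_lon):
    """Flat-earth approximation of the bearing (radians, clockwise
    from north) from the glider's position to its drop target.
    Fine over the few-km range of a small glider."""
    dx = (tgt_lon - lon) * math.cos(math.radians(lat))  # east offset
    dy = tgt_lat - lat                                  # north offset
    return math.atan2(dx, dy)

def steering_command(heading, lat, lon, tgt_lat, tgt_lon, gain=0.8):
    """Proportional steering: turn-rate command proportional to the
    heading error, with the error wrapped into [-pi, pi]."""
    error = bearing_to_target(lat, lon, tgt_lat, tgt_lon) - heading
    error = math.atan2(math.sin(error), math.cos(error))  # wrap angle
    return gain * error
```

A glider heading due north with its target due east would get a hard-right command; as the heading error shrinks, so does the correction.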

With different sensors, there are many obvious things these could do. The idea is captured nicely by the scenario of dropping a batch into a storm to take a bunch of readings from inside. For meteorology, these are sort of like sounding balloons, except they fall instead of float.

“Right now, [CICADAs] would be ready to go drop into a hurricane or tornado,” he said. “I really would love to fly an airplane over, and each of these could sample in the tornado. That’s ready now. We’d just need a ride. And [FAA] approval.” (Quoting NRL’s Dan Edwards)

I’m pretty sure that the military will find less benign uses for this concept, though there are already plenty of guided weapons, so this isn’t anything new.

The prototype is said to run about $250, which is cheap for the Navy, but seems high to me. I’m not seeing anywhere near that much gear in these little birds, and most, if not all, of it can be done with open source. I would expect that hobbyists could replicate this idea in a maker space for a whole lot less per unit. Couple it with inexpensive quadcopters to lift them, and I could see a huge potential for citizen science.

As a software guy, I have to wonder what the data system looks like. Whatever the Navy has done, I’m pretty sure that hobbyists or science students can whip up a pretty nice dashboard to grab, analyze, and visualize the sensor traces.
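As a sketch of what such a hobbyist data system might look like, here is a minimal trace aggregator. The CSV format, the drone IDs, and the fields are entirely hypothetical—just one plausible shape for the data the gliders might beam back:

```python
import csv
import io
import statistics

# Hypothetical sensor trace: each glider beams back timestamped
# readings, modeled here as CSV lines "drone_id,t,temp_c,pressure_hpa".
RAW = """\
cicada-01,0.0,21.5,1013.2
cicada-01,1.0,21.1,1011.8
cicada-02,0.0,20.9,1012.7
cicada-02,1.0,20.4,1010.9
"""

def summarize(raw: str) -> dict:
    """Group readings by drone and report the mean temperature."""
    by_drone = {}
    for drone, t, temp, pres in csv.reader(io.StringIO(raw)):
        by_drone.setdefault(drone, []).append(float(temp))
    return {d: statistics.mean(v) for d, v in by_drone.items()}

print(summarize(RAW))  # mean temperature per drone
```

From there, plotting the traces on a map or a time axis is a weekend project with any charting library.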

  1. Evan Ackerman, Naval Research Lab Tests Swarm of Stackable CICADA Microdrones, in IEEE Spectrum – Automaton. 2017. http://spectrum.ieee.org/automaton/robotics/drones/naval-research-lab-tests-swarm-of-stackable-cicada-microdrones
  2. US Naval Research Laboratory. CICADA: Close-in Covert Autonomous Disposable Aircraft. 2017, https://www.nrl.navy.mil/tewd/organization/5710/5712/research/CICADA.



Robot Wednesday

NASA Investigating Clockwork Rover Technology

NASA has the coolest projects!

With a long-term mission to visit and measure everywhere in the Solar System, NASA has now ticked off the easy stuff—Earth orbit, the Moon, Mars, orbiting all the planets.

There are plenty of places we really want to visit, but haven’t been able to. Cold places like the ice moons. And really hot places like the Sun and the surface of Venus.

In the case of Venus, several spacecraft have orbited and are orbiting, and a handful of probes have reached the surface—just barely. The surface is hot, over 400 degrees C, and the pressure is a crushing 90 atmospheres. Most electronics simply don’t work at these temperatures. And it’s very cloudy, so solar power is minimal. And so on.

In short, conventional engineering has little chance. To date, the record time to failure is about 2 hours, set by the heroically insulated Venera 13 probe in 1982. Building such extreme systems is hard and very expensive.

There is no way to make a conventional rover to explore Venus. What’s to be done?

A NASA design group is exploring ways to build a rover that uses mechanical parts—clockwork—instead of electronics and computers. This is called “Automaton Rover for Extreme Environments (AREE)”.

When I saw their animation of some initial concepts, I immediately recognized that this is a Strandbeest—and indeed they did invite Theo Jansen to JPL for some advice. (Evidently, Jansen’s advice was to get rid of the legs.)

Alternative locomotive ideas include wheels and tank treads.

But moving around is the least of the problems. How do you collect data?

In an interview with Evan Ackerman, they report several intriguing ideas under development.

First of all, mechanical calculation and number storage should be doable. And rough forms of obstacle avoidance are well known, too. (Toy cars navigate around furniture by bumping and backing up, no?)

Obstacle avoidance is another simple mechanical system: it uses a bumper, reverse gearing, and a cam to back the rover up a bit after it hits something, and then resets the bumper and the gearing to continue on. During normal forward motion, power is transferred from the input shaft through the gears on the right-hand side of the diagram and onto the output shaft; the remaining gears spin but do not transmit power. When the rover contacts an obstacle, the reverse gearing is engaged by the synchronizer, having the opposite effect. After the cam makes a full revolution, it pushes the bumper back to its forward position. A similar cam can be used to turn the wheels of the rover at the end of the reverse portion of the drive. (Image: Jonathan Sauder/NASA/JPL-Caltech)
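For software folks, that mechanical bump-reverse-resume sequence is essentially a tiny state machine. Here is a software analogue—purely illustrative, with made-up states and timings, not JPL’s design:

```python
# Software analogue of the mechanical bump-and-back sequence:
# drive forward until the bumper trips, then back up for a fixed
# interval (the cam's full revolution), then resume forward motion.
# States and tick counts are illustrative, not JPL's actual design.

FORWARD, REVERSING = "forward", "reversing"
REVERSE_TICKS = 3  # one "cam revolution" worth of time steps

def step(state, ticks_left, bumper_hit):
    """Advance the controller one time step.
    Returns the new (state, ticks_left) pair."""
    if state == FORWARD:
        if bumper_hit:
            return REVERSING, REVERSE_TICKS
        return FORWARD, 0
    # Reversing: count down the cam revolution, then go forward again.
    # (Like the mechanical cam, this ignores the bumper while reversing.)
    ticks_left -= 1
    if ticks_left == 0:
        return FORWARD, 0
    return REVERSING, ticks_left
```

The charm of the mechanical version is that this whole loop is encoded in gears and a cam, with no processor to cook at 400 degrees C.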

But if you had some data, how would you return it to Earth (i.e., to an orbital relay)? One possibility would be some kind of hard copy (e.g., etched into a metal disk), which is then lifted with a balloon and potentially picked up by a high-altitude UAV. That sounds cool, but pretty iffy.

Another idea is to do semaphore code with radar reflectors. The orbiter beams radar and the rover reflects back on-off signals at certain wavelengths. This might have a bandwidth of a few bits per second (one way). That’s not much, but it’s a lot more than zero bps! Pretty cool.
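A few bits per second sounds tiny, but a little arithmetic shows it is workable for sparse science data. Assuming 2 bits/second and a hypothetical 16-bit sensor reading (both numbers are my own illustration, not from the AREE study):

```python
# Back-of-the-envelope throughput for the radar-semaphore link.
# Both constants are assumptions for illustration: "a few bits per
# second" taken as 2 bps, and a 16-bit reading (one sensor sample).
BITS_PER_SECOND = 2
BITS_PER_READING = 16

def seconds_to_send(n_readings: int) -> float:
    """Time (seconds) to semaphore n readings back to the orbiter."""
    return n_readings * BITS_PER_READING / BITS_PER_SECOND

print(seconds_to_send(1))           # 8.0 seconds per reading
print(seconds_to_send(100) / 3600)  # ~0.22 hours for 100 readings
```

A hundred readings in under fifteen minutes is plenty for a rover that creeps along on clockwork; images, of course, are out of the question at these rates.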

They are also trying to develop some kinds of sensors that will work under these conditions. This is difficult, and it might be an area where small amounts of exotic high-temperature electronics are used.

This is such a cool design project!

I’m not sure how these ideas will pan out, but this work

“is also important for changing the conversation on exploring Venus. Today, long duration in-situ mobile access on Venus has not been considered a realistic option. AREE demonstrates how such a system can be achieved today by cleverly utilizing current technology and enhanced by the technology of tomorrow.”

  1. Evan Ackerman, JPL’s Design for a Clockwork Rover to Explore Venus, in IEEE Spectrum – Automaton. 2017. http://spectrum.ieee.org/automaton/robotics/space-robots/jpl-design-for-a-clockwork-rover-to-explore-venus
  2. Jonathan Sauder. Automaton Rover for Extreme Environments (AREE). 2017, https://www.nasa.gov/directorates/spacetech/niac/2017_Phase_I_Phase_II/Automaton_Rover_Extreme_Environments.



Robot Wednesday

Hoppy Robot!

This is a great age of robot locomotion. Human engineers are recapitulating natural evolution, trying out every biological system—butterflies, bats, snakes—and many things not seen in nature, at least above the micro scale (quadcopters, bucky bots).

Evan Ackerman reports on the amazing Salto jumping robot from U. C. Berkeley. Salto has one (count ‘em, one) leg, and springs around spending 90% of its travel in the air. It’s absolutely astonishing.

The article indicates that the control algorithm is pretty much the same as one developed in 1984, though we can pack a lot faster computation into a smaller critter now. The mechanical design is bio-inspired, learning from the galago, a small primate that is a crazy jumper.

However, the actual magic is done with steerable “thrusters” (propellers), and the control depends on an external motion capture system that feeds instructions via wireless (an invisible tether).  This is not the way little bushbabies do it!

The new improved version will be officially presented in September at IROS 2017, probably with an even more awesome demo.

I’m not really sure if this design is especially good for anything, but it’s fun to watch and would make a great game. Imagine the fitness benefits of playing “chase the boingy bot”! Or “try to escape the boingy bot”!  (These apps would mash up some kind of planning algorithm to evade or catch the puny human.)
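The “planning algorithm” for such a game could start out absurdly simple. A greedy pursuit step—hop a fixed distance straight at the target—would do for a first cut. This is entirely a toy sketch of my own, not anything from the Salto work:

```python
import math

def chase_step(bot_xy, target_xy, hop=1.0):
    """One hop of a 'chase the boingy bot' planner: jump a fixed
    distance straight toward the target (the fleeing human).
    Purely a toy; real planners would predict where the target
    will be, not where it is."""
    bx, by = bot_xy
    tx, ty = target_xy
    dist = math.hypot(tx - bx, ty - by)
    if dist <= hop:
        return (tx, ty)  # close enough to land on the target
    # Move a hop-length step along the unit vector toward the target.
    return (bx + hop * (tx - bx) / dist, by + hop * (ty - by) / dist)
```

Swap the sign of the step and the same function becomes the “escape the boingy bot” mode.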

So Cool!

  1. Evan Ackerman, Salto-1P Is the Most Amazing Jumping Robot We’ve Ever Seen, in IEEE Spectrum – Automaton. 2017. http://spectrum.ieee.org/automaton/robotics/robotics-hardware/salto1p-is-the-most-amazing-jumping-robot-weve-ever-seen


Robot Wednesday

Telepresence Robot – At the zoo

These days we see a lot of exciting stories about telepresence—specifically, live, remote operation of robots. From the deadly factual reports from the battlefields of South Asia through science fiction novels to endless videos from drone racing gamers, we see people conquering the world from their living room.

One of the emerging technologies is telepresence via a remote robot that resembles “an iPad on a Segway”. These are intended for remote meetings and the like. There is two-way video, but the screen is mobile and under the command of the person on the other end. So you can move around, talk to people, look at things.

On the face of it, this technology is both amazing (how does it balance like that?) and ridiculous (who would want to interact with an iPad on wheels?). And, of course, many of the more expansive claims are dubious. It isn’t, and is never going to be, “just like being there”.

But we are learning that these systems can be fun and useful. They may be a reasonable augmentation for remote workers—not as good as being there, but better than just telecons. And, as Emily Dreyfuss comments, a non-representational body is sometimes an advantage.

Last year Sensei Evan Ackerman reported on an extensive field test of one of these telepresence sticks, called the Double 2. It was an interesting test because he deliberately took the robot out of its intended environment, which stressed the technology in many ways. The experience is a reminder of the limitations of telepresence, but also gives insights into when it might work well.

First of all, he played with it across the continental US (from Maryland to Oregon), thousands of kilometers apart. Second, he took it outdoors, which it isn’t designed for at all. And he necessarily relied on whatever networks were available, which varied, and often had weak signals.

As part of the test, he went to the zoo and to the beach!

Walking the dog was impossible.

Overall, the system worked amazingly well, considering that it wasn’t designed for outdoor terrain and needs networking. He found it pretty good for standing still and chatting with people, but moving was difficult and stressful at times. Network latency and dropouts meant a loss of control, with possibly harmful results.

Initially skeptical, Sensei Evan recognized that the remote control has advantages.

I’m starting to see how a remote controlled robot can be totally different [than a laptop running Skype] . . . You don’t have to rely on others, or be the focus of attention. It’s not like a phone call or a meeting: you can just exist, remotely, and interact with people when you or they choose.

Whether or not it is “just like being there”, when it works well there is a sense of agency and ease of use, at least compared to conventional video conferencing.

This is an interesting observation. Not only does everybody need to get past the novelty, but it works best when you are cohabitating for considerable periods of time. Walking the dog, visiting the zoo—not so good. Hanging out with distant family—not so bad.

I note that the most advertised use case—a remote meeting—may be the weakest experience. A meeting has constrained movement, a relatively short time period, and often is tightly orchestrated. This takes little advantage of the mobility and remote control capabilities. You may as well just do a video conference.

The better use is for extended collaboration and conversation. E.g., Dreyfuss and others have used it for whole working days, with multiple meetings, conversations in the hall, and so on. Once people get used to it, this might be the right use case.

I might note that this is also an interesting observation to apply to the growing interest in Virtual Reality, including shared and remote VR environments. If a key benefit of the telepresence robot is moving naturally through the environment, then what is the VR experience going to be like? It might offer “natural” interactions, but they will be within a virtual environment. And if everyone is coming in virtually, then there is no “natural” interaction at all—or rather, the digital is overlaid on the (to be ignored) physical environments. There will be lots of control, but will there be “ease”? We’ll have to see.

  1. Evan Ackerman, Double 2 Review: Trying Stuff You Maybe Shouldn’t With a Telepresence Robot, in IEEE Spectrum – Automaton. 2016. http://spectrum.ieee.org/automaton/robotics/home-robots/double-2-review-telepresence-robot


Robot Wednesday