Category Archives: Robotics

Collapsable Delivery Drone

I’m not a huge fan of buzzy little quadcopters, nor am I a fan of delivery drones. The former are about as welcome as a cloud of mosquitoes, and the latter promise to transfer even more wealth to the 0.001%. (I’m not sure who these drones will be delivering to, when none of us have jobs or money to buy things.)

That said, I was interested to see the “origami-inspired cargo drone” developed by a group at Ecole Polytechnique Fédérale de Lausanne [2]. Their design wraps the copter in a flexible cage, which protects the package and also encloses the dangerous rotors. The cage is foldable, so it closes up to a relatively small package when not in use.

The cage is a nice design. It addresses the safety (and perceived safety) of the drone elegantly. Rather than depending on complex algorithms to make the drone “safe” and “friendly”, their design makes the drone a soft, beach-ball-like thing—the affordances are obvious and visible. Furthermore, the safety factor is passive: the effectiveness of the enclosure does not depend on either software or humans.

I’m sure that this basic idea can be realized in a lot of geometries. The EPFL design is modular, which means that a variety of cages can be made from the same design. It folds up rather neatly, and, of course, is light and strong.

I could imagine versions of this concept that have a standard coupling to a range of quadcopters. Sort of a “delivery cage” costume for drones. (I smell a new standard for “drone costume attachment” coming.)

Clearly, there is no reason why the cage has to be so bare and undecorated. Why not streamers, glitter, and even LEDs? These might make the drone more appealing, and would also make the drone more visible to cameras, radar, and sonar. (Another standard? Passive safety reflectors for drones?)

I’m still not eager to have my local stores put out of business by Amazon, but if I’m going to have to live with drones, I’d like them to bounce off walls and people, rather than crash into them.


  1. Evan Ackerman, EPFL’s Collapsable Delivery Drone Protects Your Package With an Origami Cage, in IEEE Spectrum — Automation. 2017. https://spectrum.ieee.org/automaton/robotics/drones/epfl-collapsable-delivery-drone-protects-your-package-with-an-origami-cage
  2. Przemyslaw Mariusz Kornatowski, Stefano Mintchev, and Dario Floreano, An origami-inspired cargo drone, in IEEE/RSJ International Conference on Intelligent Robots and Systems. 2017: Vancouver. http://infoscience.epfl.ch/record/230988

 

Robot Wednesday

Agility Robotics’ Terrifying “Cassie” Robots

OK, this is a cool (and scary looking) robot! Actually, they seem to run in packs, which makes them even scarier.

Cassie from Agility Robotics is an amazingly capable bipedal robot, with better balance and coordination than I have.

The company is trying to build robots to “Go where humans go”, which probably accounts for the uncanny bipedalism.

However, these Cassies remind me mostly of dinosaurs: agile, bipedal carnivores, to be specific.

In fact, Cassie isn’t exactly naturalistic or bio-inspired. I’ve never seen anything that does this kind of side-stepping, and I’ve definitely never seen anything except human dance companies move in such coordination.

It’s really terrifying.

And the things don’t even have a head or eyes, which only makes them more inhumanly scary.

Forget the uncanny valley, these guys inhabit the archetypal pit of species memory. My hind brain is screaming, “they are coming to eat me!”

This psychological effect might not be a big advantage for the home delivery market they are shooting for.

Robot Wednesday

Robot Funeral Rituals? Augmenting Religious Practice

One of the most prominent aspects of human life that has been little affected by the internet and robots is religion, especially formal religious practices. Church, temple, or mosque, religious practice is a bastion of unaugmented humans.

There are obvious reasons for this to be the case. Religion is conservative with a small “c”, embodying as it does cultural heritage in the present day. Traditional ideas and practices are at the psychological core of religious practice. Religious practice is not generally about “disruption” or “move fast and break things” (at least not in the thoughtless way Silicon Valley disrupts things).

Another obvious reason is that much of religious teaching is about human behavior and human relations. Emphasis on the “human”. From this perspective, augmenting humans or virtualizing human relations is at best irrelevant and at worst damaging to proper human conduct.

But this will surely change. Religious traditions are living cultures which adopt new technology. It will be interesting to watch how human augmentation is incorporated into religious practices, not least because it may create some interesting, humane modes of augmented living.

Obviously, many people have already adopted digital communications and social media in spiritual and religious life. Heck, even the pope is on Twitter. But this is the tip of the iceberg, little more than the twenty-first-century version of pamphlets and sermons.

What else might be coming?


For one thing, virtual worlds will surely need to be converted.

I recall some science fiction story (quite possibly by William Gibson, but I don’t remember) that had a brief vignette about a devout Catholic who loaded his personality into an avatar in a virtual world. This splinter of his consciousness (soul?) kneels in a virtual chapel and prays 24/7. In the story, this practice is approved by the church. I think the notion is that he receives indirect credit for this pious exercise, which is sort of analogous to other practices such as hiring a mass for a deceased parent.


For another, robots and cyborgs need to be incorporated into both theology and practice.

Along these lines, Evan Ackerman reports this month on a service in Japan that offers a robot to perform Buddhist funeral rites [1].  The “humanoid robot, suitably attired in the robe of a Buddhist monk” reads the sutras and bows at the appropriate moments.

The robot is much cheaper than a human, is programmed for alternative versions of the ritual, and can live stream the affair to remote mourners. (It can probably work much longer and faster than puny Carbon-based priests, too.)

It isn’t clear how this will be accepted or how popular it may be. To the degree that the funeral is for the comfort of the living, much will depend on how the mourners like it. A robot is not as sympathetic and soothing as a person, so I don’t really know.

There are, of course, theological questions in play. Do the words count if they are said by a machine? (Would they count if a parrot recited them and bowed?) There are certain to be differences of opinion on this question.


Thinking about this, I note another interesting possibility: a robot can also be remotely operated. A human priest could very well supervise the ceremony from a distance, with various levels of control. The robot could, in principle, be anywhere on Earth, in orbit, or on Mars; extending the reach of the holy man. Would this remote augmentation of the priest’s capabilities be “more authentic” than an autonomous robot programmed to do the ceremony?

Such a remote operation would have advantages. The robot would add a level of precision to the fallible priest—the robot could check and correct the performance. The robot can operate in hazardous conditions, such as a disaster area or war zone (imagine remote chaplains for isolated military posts). The remote avatar might bring a measure of comfort to people otherwise out of reach of conventional pastoral care.

Human priests would not have to travel, and could perform more work. For that matter, a single priest could operate multiple remote robot avatars simultaneously, significantly augmenting the sacred productivity.


Taking this idea of a priestly “remote interface” seriously for a moment, we can speculate on what other rituals might be automated this way. Christian rituals such as baptism or communion certainly could be performed by robots, especially supervised robots. Would this be theologically legitimate? Would it be psychologically acceptable? I don’t know.

I haven’t heard of anyone doing it, and I’m not endorsing such a thing; I’m just thinking about the possibility.

To the degree that autonomous or supervised robots are accepted into spiritual practice, there will be interesting questions about the design and certification of such robots. It might well be the case that the robot should meet specific standards, and have only approved programming. Robots could be extremely doctrinaire, or dogma could be loaded as a certified library or patch. I have no idea what these software standards might need to be, but it will be yet another frontier in software quality assurance.
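If dogma really were distributed as certified content, the mechanics might look like ordinary software signing. Here is a minimal sketch, assuming an invented scheme in which a certifying authority publishes SHA-256 digests of approved liturgical content and the robot refuses anything that doesn’t match (all names here are hypothetical):

```python
import hashlib

def certify(name: str, content: bytes, approved: dict) -> bool:
    """Accept a ritual-content package only if its SHA-256 digest
    matches the digest registered by the certifying authority."""
    return approved.get(name) == hashlib.sha256(content).hexdigest()

# The certifying body would publish digests of approved content.
rite = b"(approved sutra text)"
approved = {"funeral_rite_v1": hashlib.sha256(rite).hexdigest()}

print(certify("funeral_rite_v1", rite, approved))             # untampered: True
print(certify("funeral_rite_v1", b"altered rite", approved))  # tampered: False
```

A real standard would presumably use public-key signatures rather than a bare digest allowlist, but the quality-assurance question is the same: who signs, and what counts as approved.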

There are other interesting possibilities. What if a robot is programmed for multiple religious practices, coming from competing traditions? At any one moment, it may be operating completely validly for one set of rules, and later it might switch and follow another set of rules. This is how robots work. But this is certainly not how human religions work. Carbon-based units generally cannot be certified clergy for more than one sect at a time. Will robots have to be locked in to a single liturgical version? Or, like TVs or web browsers, would a tele-priest be a generic device, configured with approved content as needed?

While we’re on the question of software, what about hacking? What if malicious parties hack into the sacred software, and substitute a competing version of the rite for the prayers? Or defile the words or actions? Or simply destroy the religion they dislike? Yoiks! I have no idea what the theological implications of a corrupted or enslaved robot would be, but I imagine they could be dire.

  1. Evan Ackerman, Pepper Now Available at Funerals as a More Affordable Alternative to Human Priests, in IEEE Spectrum – Automation. 2017. https://spectrum.ieee.org/automaton/robotics/humanoids/pepper-now-available-at-funerals-as-a-more-affordable-alternative-to-human-priests

 

Remote Fun Park On The Moon?

As we approach the fiftieth anniversary of the first moon landing, it is clear that humans have pulled back from space exploration and from science in general. NASA’s budget has steadily declined, dedicated scientists have been politically suppressed, and much of the space program has calcified into a jobs program.

What can be done?

Toys! Theme parks!

There is a lot of interest these days in sending swarms of small robots to the moon. Perhaps inspired by ubiquitous remotely piloted drones, why not remotely operate a moon rover? And why couldn’t anybody drive one, with a game controller?

The Lunatix company is proposing to sell moon-rover driving as a game. Earth-bound gamers would be linked to the lander, and could purchase driving time. Kind of like consumer drones, except on the moon [3].

The lander might have a small science payload, but mainly it is dedicated to commercial use. (There would be merchandise and other associated sales, as well.)

This seems relatively straightforward technically. There are some tricky bits, such as linking a consumer via the Internet to an uplink to the moon. Safely linking. Securely linking. (Hint: space communications are expensive and rare, and generally not connected to the public.)

I have no idea about the commercial case. Space projects are obscenely expensive, but getting cheaper. At something like 25 euros per minute, it seems to me that driving time would be pretty damn expensive, at least for peasants like me. But who knows? My intuitions about business plans are often wrong.

Evan Ackerman points out that this purely commercial project raises legal questions. The moon is more or less under the jurisdiction of the United Nations, as defined by treaties among nations. There seems to be no specific framework for commercial exploitation of the moon, though there will surely need to be one soon.

Aside from the equity issues about sucking money out of the lunar commons (the moon is the common heritage of all human kind), there may be environmental and other regulatory issues.

I note that a company slogan is “Leave Your Mark on the Moon!”  The users will leave behind tracks, indelible tracks, visible from Earth.  This will surely have consequences.

How happy are we going to be when the moon is covered with tread marks? Do you want to see rude graffiti defacing the surface? How will we feel about a giant cola ad written in the dust? How will Earthly strongmen react to uncensored political messages, indelibly written on the moon?

The company proposal seems to wave its hands at the legal problems and doesn’t even list any legal issues under “Risks”. That may be optimistic.

In the end, it is quite possible that money will talk. As Ackerman puts it, despite his own misgivings, “If this is the best way to get robots to the moon, then so be it.”

“While there’s a small section in the Lunatix executive summary on ‘Legal Framework,’ there are few specifics about whether or not the United Nations would approve something like this. Lunatix seems to suggest that its use case is covered under ‘the common interest of all mankind in the progress of the exploration and use of outer space for peaceful purposes,’ but I’m not so sure. It may be that no framework exists yet (either for or against), and my gut reaction that commercializing the moon in this way somehow cheapens it is probably just me being old and grumpy. If this is the best way to get robots to the moon, then so be it.” (From Ackerman [1])

I have my doubts about this concept. We’ll see.

But the general idea that some kind of entertainment business might be one of the earliest commercial successes for space seems to be plausible. Many important technologies started out as entertainment, or were driven by markets for entertainment [2].

For example, the Internet was designed for military and scientific applications, but the earliest commercial successes were music theft, games, and pornography, which drove markets for servers, GPUs and broadband, among other things. Today’s cord cutters are simply taking advantage of the second and third generation of these technologies. And, just as the Internet has never been comfortable with the fact that it is a great mechanism for delivering pornography, space entertainment may not turn out quite as imagined.

 


  1. Evan Ackerman, How Much Would You Pay to Drive a Jumping Robot on the Moon?, in IEEE Spectrum – Automation. 2017. http://spectrum.ieee.org/automaton/robotics/space-robots/how-much-would-you-pay-to-drive-a-jumping-robot-on-the-moon
  2. Steven Johnson, Wonderland: How Play Made the Modern World, New York, Riverhead Books, 2016.
  3. Space Tech, Lunatix. Graz University of Technology Institute of Communication Networks and Satellite Communications, 2017. https://www.tugraz.at/fileadmin/user_upload/tugrazInternal/Studium/Studienangebot/Universitaere_Weiterbildung/SpaceTech/Fallstudienprojekt_ST14.pdf

 

 

Space Saturday

Robogami: “Democratizing” Robot Building?

In a recent paper, Cynthia Sung and colleagues at MIT describe their automated design system, which addresses the “long-held goal in the robotics field … to see our technologies enter the hands of the everyman [sic]” [1].

Well, I don’t know about that. Every nerd, maybe.

The idea is a high-level design system that generates simple “fold up” robotic vehicles, suitable for fabrication with ubiquitous laser cutters and other shop tools. The computer system helps the designer create the “geometry”, the 3D shape of the vehicle, and the “gait”, how it moves. The system shows the results in a simulator, so the designer can rapidly iterate. The prototype is then sent to a printer, and snapped together with appropriate motors and wires.
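The iterate-in-simulator loop can be caricatured in a few lines. This is not the authors’ system, just a toy sketch: the geometries, gaits, and the crude stand-in “simulator” below are all invented, and the loop simply scores every geometry/gait pair and keeps the fastest, the way a designer iterating in the real tool might:

```python
import itertools

# Toy stand-ins: each "geometry" has a leg length, each "gait" a step rate.
GEOMETRIES = {"low_wide": 0.8, "tall_narrow": 1.2}
GAITS = {"walk": 1.0, "trot": 1.8}

def simulate_speed(leg_length: float, step_rate: float) -> float:
    """Crude simulator: speed scales with leg length and step rate,
    with a penalty for fast gaits on tall geometries (a stand-in
    for the geometry/motion interdependence)."""
    penalty = 0.5 if (leg_length > 1.0 and step_rate > 1.5) else 1.0
    return leg_length * step_rate * penalty

def best_design():
    """Score every geometry/gait pair and return the fastest combination."""
    return max(
        ((geo, gait, simulate_speed(GEOMETRIES[geo], GAITS[gait]))
         for geo, gait in itertools.product(GEOMETRIES, GAITS)),
        key=lambda t: t[2],
    )

print(best_design())
```

In this toy, the short, wide geometry paired with the fast gait wins, because the tall geometry is penalized for trotting—exactly the kind of coupled trade-off that makes automated search useful.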

“One of the main challenges in robot design is the interdependence of the geometry and motion.”

Cool!

As the paper makes clear, this idea was influenced by a number of current trends which I’m sure are bouncing around MIT CSAIL and everywhere else: computer-aided iterative design, rapid prototyping with personal fabrication, and, of course, Origami <<link to post>>.

The system also reports performance metrics (e.g., speed of locomotion), and helps optimize the design.

Of course, this isn’t really a general purpose robot design system. Aside from the fact that the hard part in any design is figuring out what to design (and diving into iterative prototyping often distracts from careful thought and research), useful robots have sensors and manipulators, as well as machine learning or domain knowledge or both, which is not part of this design.

This system is really only about the body and the movement: essentially, the basic shell of the robot.  Important, but really only the foundation of a working, useful robot.

“The system enables users to explore the space of geometries and gaits”

It’s cool, but not the whole story.

And, let us not forget, the appearance and sociability of the robot is increasingly important. These cute little robogamis look like toys, and are of little more use than a toy. These are certainly not social robots!

Now, if you sold this as a “toy factory”, perhaps with some stickers and funny voices, you’d have a bang up product. Don’t give Suzie a doll, give her a machine to make as many dolls as she wants!  And the dolls move and talk!

Now that would be cool!


  1. Adriana Schulz, Cynthia Sung, Andrew Spielberg, Wei Zhao, Robin Cheng, Eitan Grinspun, Daniela Rus, and Wojciech Matusik, Interactive robogami: An end-to-end system for design of robots with ground locomotion. The International Journal of Robotics Research, 2017. http://dx.doi.org/10.1177/0278364917723465

 

Robot Wednesday

Disposable Sensor Drones

Today, the Internet of Things is in its early “steampunk” stage, basically adding mobile phone technology to toasters and refrigerators. The real IOT will be closer to the vision of Smart Dust described a quarter of a century ago—massive numbers of very tiny sensors, networked together. No, we really don’t know how to build that, yet, but we’re working on it.

For the past few years, the US Naval Research Lab has been working on disposable drones, which are beginning to become more like smart dust. Shrinking robot aircraft down, they are creating a sort of ‘guided dust’. OK, the dust motes are pretty chunky still, but it’s the steampunk era. They’ll get smaller.

The CICADA project (Close-in Covert Autonomous Disposable Aircraft) has done a lot of prototypes, and they are showing their Mark 5 this year.

The robot glider is 3D printed and assembled from already-proven technologies; sensors, radios, autopilot, and guidance systems are dropped in. The design is “stackable”, and designed to be dropped in batches from an aircraft. Each glider steers toward a specific target, and beams back its data when it lands. The whole thing is cheap enough to be considered disposable (at least by the Pentagon).

With different sensors, there are many obvious things that these could do. Their usage is captured nicely by the idea of dropping a batch into a storm, to capture a bunch of readings from inside. For meteorology, these are sort of like sounding balloons, except they fall instead of float.

“Right now, [CICADAs] would be ready to go drop into a hurricane or tornado,” he said. “I really would love to fly an airplane over, and each of these could sample in the tornado. That’s ready now. We’d just need a ride. And [FAA] approval.” (Quoting NRL’s Dan Edwards)

I’m pretty sure that the military will find less benign uses for this concept, though there are already plenty of guided weapons, so this isn’t anything new.

The prototype is said to run about $250, which is cheap for the Navy, but seems high to me. I’m not seeing anywhere near that much gear in these little birds, and most, if not all, of it can be done with open source. I would expect that hobbyists could probably replicate this idea in a maker space for a whole lot less per unit. Couple it with inexpensive quadcopters to lift them, and I could see huge potential for citizen science.

As a software guy, I have to wonder what the data system looks like. Whatever the Navy has done, I’m pretty sure that hobbyists or science students can whip up a pretty nice dashboard to grab, analyze, and visualize the sensor traces.
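As a sketch of what that hobbyist data system might look like, here is a minimal Python pass over a hypothetical telemetry trace. The CSV columns (time, altitude, temperature, pressure) are my invention, not the NRL’s format—just plausible readings a falling sounding-glider might beam back:

```python
import csv
import io
import statistics

# Hypothetical telemetry from a CICADA-style glider:
# time (s), altitude (m), temperature (C), pressure (hPa).
RAW = """t,alt,temp,press
0,1000,15.0,898
10,800,16.5,920
20,600,18.0,943
30,400,19.5,966
40,200,21.0,989
"""

def summarize(raw: str) -> dict:
    """Parse a CSV sensor trace and report simple summary statistics."""
    rows = list(csv.DictReader(io.StringIO(raw)))
    temps = [float(r["temp"]) for r in rows]
    alts = [float(r["alt"]) for r in rows]
    return {
        "samples": len(rows),
        "mean_temp": statistics.mean(temps),
        "descent_m": alts[0] - alts[-1],
    }

print(summarize(RAW))
```

From here it is a short step to plotting the profiles or merging traces from a whole dropped batch—exactly the kind of dashboard a science class could build in an afternoon.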


  1. Evan Ackerman, Naval Research Lab Tests Swarm of Stackable CICADA Microdrones, in IEEE Spectrum – Automation. 2017. http://spectrum.ieee.org/automaton/robotics/drones/naval-research-lab-tests-swarm-of-stackable-cicada-microdrones
  2. US Naval Research Laboratory. CICADA: Close-in Covert Autonomous Disposable Aircraft. 2017, https://www.nrl.navy.mil/tewd/organization/5710/5712/research/CICADA.

 

 

Robot Wednesday

Is “Cute” Enough for a Robot?

In the great rush to create home robots, it seems that 1,000 flowers are blooming. Many different robots are being tried, combining the basic core of features with different appearances and artificial personalities.

One of this year’s models is ‘Kuri’, which is designed to be simple and cute. It understands speech commands, but “speaks robot”—not synthesized speech, but “cute” beeps and buzzes.

As far as I can tell, it does nothing that a computer or tablet or Alexa can’t do, except in a “friendly”, autonomously mobile package.

It seems that Kuri wanders around your house with its cute face and twin HD cameras. These can live stream over the Internet, to “be your eyes when you’re away”. Kuri also has microphones, of course, to capture sounds and conversations. Kuri will “investigate” unusual sounds. It has speakers, so you can play music, and yell at your baby sister.

This little guy is supposed to “bring joy to your house”. As far as I can tell, the main feature of Kuri is “cuteness”. Is this enough?

Well maybe.

http://content.jwplatform.com/players/4JIA0lOM-5Zdv3OJ1.html

Unfortunately, Kuri has gone way off the rails with a new feature, “autonomous video”.

Basically, as Kuri wanders around mapping your house, listening to you, and generally being cute, it will record videos.

The results of this snooping are sent to you (or at least to whoever controls Kuri), where you can select the ones that you like. Supposedly, Kuri uses this feedback to learn what you like, and thereby to construct a pleasing selfie video of your house.

Who doesn’t want that?

Well, me, obviously.  But, who asked for this feature, anyway???

I have no idea why I would ever want a “daily dose of short ‘life at home’ videos”. I mean, if there is any place I don’t need to visit virtually, it’s the place that I live physically.

But if I did want it, I don’t want an Internet connected device streaming video out of my house to the Internet. And I really don’t want an “autonomous” camera wandering around unpredictably recording my private life.

It’s Alexa on wheels. Eeek.

“Turn it off” doesn’t even begin to cover it.


I’ll add a couple of other points that Kuri brings to mind.

Like many contemporary robots, Kuri does some simple psychological tricks to indicate that he (apparently Kuri is male) is listening. It looks up, it looks ‘happy’, it makes ‘eye contact’ (more or less). This is “cute” in the same way as a pet may be “cute”, and for the same reason—you are projecting human personality onto a non-human actor.

This is probably good marketing, but there is some weird psychology going on here, especially if kids are involved.

First of all:  No, Kuri doesn’t actually like you. It isn’t capable of feelings of any kind.

The head and eye gestures raise the interesting question of whether people will tend to mirror these inhuman movements in the same way that they tend to mirror other people as they interact. And will children develop weird behavioral patterns from living with a household robot?  Who knows.

Then there is Kuri’s gaze.

It is becoming common to put cameras behind features that look like human eyes. Kuri has a very abstract but unmistakable analog of a human head and face, and the eyes are where the cameras are. This is a physical analogy to human senses, but it has a sort of perverse twist to it. While a person or a dog sees you with their eyes, a robot is usually recording and streaming with its eyes. This mismatch means that you may unconsciously overlook the invasiveness of those robot eyes (which are really web cams), or perhaps edge toward paranoia about other people’s eyes (which are not web cams).

These “uncanny” confusions are hardly unique to Kuri, though the “cuter” the robot the more powerful the psychological illusions.

Is “cute” a good thing for a robot to be? I’m not so sure.


  1. Alyssa Pagano, Kuri Robot Brings Autonomous Video to a Home Near You, in IEEE Spectrum – Automation. 2017. http://spectrum.ieee.org/video/robotics/home-robots/kuri-robot-brings-autonomous-video-to-a-home-near-you

 

Robot Wednesday Friday