Category Archives: Robotics

Interesting Ocean Energy Source

Of the vast amount of energy arriving at our planet from our sun, a considerable amount ends up warming the surface of the oceans.  The sea is characterized by a complex set of currents, mostly driven by the heating and cooling of the water.  And one of the most prominent physical features of the ocean is the thermocline, the gradient of temperatures from the warm surface to the cold depths.

This month Jeremy Hsu writes about an interesting passive energy system that exploits these temperature differences to generate power for underwater devices [1].

The technology is rather closely guarded, but the general idea is pretty clear:  specific chemical compounds change phase from solid to liquid to gas across the range of temperatures found in the ocean.  The Seatrec system uses the energy from these transitions to generate electricity to charge batteries.

Basically, the buoy or other underwater device simply floats up or down (increasing and decreasing its buoyancy), moving itself between warmer and colder water.  It’s that simple!
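
To get a feel for the scale involved, here is a rough back-of-the-envelope sketch in Python.  The numbers are purely my own illustrative guesses (Seatrec does not publish its working material or conversion efficiency), but they suggest why a single dive cycle could plausibly top up a battery.

```python
# Rough, illustrative estimate of the energy from one phase-change cycle.
# All numbers are assumptions, not Seatrec specifications.

pcm_mass_kg = 2.0             # assumed mass of phase-change material
latent_heat_j_per_kg = 200e3  # typical order of magnitude for a paraffin-like PCM
conversion_efficiency = 0.03  # assumed heat-to-electricity efficiency (a few percent)

thermal_energy_j = pcm_mass_kg * latent_heat_j_per_kg
electrical_energy_j = thermal_energy_j * conversion_efficiency
electrical_energy_wh = electrical_energy_j / 3600.0

print(f"Thermal energy absorbed per cycle: {thermal_energy_j / 1e3:.0f} kJ")
print(f"Electrical energy per dive cycle:  {electrical_energy_wh:.1f} Wh")
# With these assumptions, one dive yields a few watt-hours: enough to
# trickle-charge a sensor battery, but only over a slow cycle.
```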

The current systems are available as add-ons to extend the life of otherwise non-rechargeable sensor buoys.  This increases the amount of data that can be collected, and reduces the loss, and concomitant pollution, due to dead buoy batteries.

The company seems to be guarding the details closely, which makes sense.  The whole deal depends on matching the exact chemical recipe with the thermocline to be inhabited.  So, no, they don’t want to tell me their secret sauce.

I’m not an expert in this kind of thermochemistry, but I’m pretty sure that any given reactor will have a fairly narrow range of temperatures that work well.  This means that a given generator will work best in specific locations in the ocean, and quite possibly during specific time periods.  Presumably, these ranges are wide enough to be useful.

But, I wouldn’t expect this system to work for anything but shallow water in the tropics or temperate zones.  And I wonder if this system will work well in areas with major currents, especially upwelling or downwelling.  And, of course, arctic or winter storms both cool the water and stir the surface, weakening the thermocline.

Still, there are a lot of places where this technology will work well, and they are important areas like continental shelves and shores and lakes.

Another question I have is the timeline of this recharging.  When the buoy changes depth, it takes time for the thermal energy to penetrate and for the chemical reactions to occur.  So this is a pretty slow motion process:  a slow graceful ascent or descent, patient absorption of heat, and a trickle of recharging power.  Then slowly return to station.

This slow process is probably fine for buoys, which are very passive entities.  But there has to be enough power generated to allow a lot of data collection in between recharging.  The public materials don’t give much information about this duty cycle, but I have to assume that it works well enough to be useful.  (And, of course, the alternative may be sending a ship to replace the battery or losing the buoy.)
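
A similarly hedged bit of arithmetic on the duty cycle, again with invented numbers: if a dive yields a few watt-hours and the sensor package sips power, one dive every couple of days could be enough.

```python
# Illustrative duty-cycle arithmetic (assumed numbers, not published specs).

energy_per_dive_wh = 3.0    # electrical energy from one dive cycle (see sketch above)
avg_sensor_power_w = 0.05   # assumed average draw of a low-power sensor and logger

hours_between_dives = energy_per_dive_wh / avg_sensor_power_w
print(f"One dive could power the sensors for roughly {hours_between_dives:.0f} hours")
# About 60 hours with these guesses: a slow dive every two or three days
# could keep a frugal buoy alive more or less indefinitely.
```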

Anyway, this is a fairly elegant technology, and I look forward to seeing how well it fares.


  1. Jeremy Hsu, These Underwater Drones Use Water Temperature Differences To Recharge, in IEEE Spectrum – Robotics, September 3, 2020. https://spectrum.ieee.org/automaton/robotics/drones/renewable-power-underwater-drones
  2. Sea Technology, Ocean Temperature Differences Enable Energy Harvesting, in Sea Technology, February 27, 2020. https://sea-technology.com/seatrec-sl1

Preventing Robot Abuse With Social Pressure?

As robots become more common and appear in more settings, especially non-work settings, there is considerable interest in the strange but all-too-human phenomenon of humans abusing robots (e.g., this and this).  Some projects even deliberately elicit hostile interactions (e.g., for “catharsis” or for “punishment”).

This is actually a fuzzy area philosophically.

Generally, the “abuse” in question is behavior that would be considered hostile or harmful when directed against another person or an animal.  Robots are technically machines, which cannot feel pain or humiliation.  So the social and moral status of these behaviors isn’t all that ironclad.  Indeed, at least part of the issue is that abusing a robot might make the abuser more likely to abuse animals or humans, not that it harms the robot, per se.

This summer, researchers at Yale report an interesting study that uses “spectators” who witness the interactions [2]. The idea is that the disapproval of onlookers might be a form of social pressure that reduces abusive behavior.

The twist is, the “spectators” are robots.

This is yet another weird concept, in this hazy space surrounding human-robot interactions.  The general idea is:

“we investigated whether the emotional reactions of a group of bystander robots could motivate a human to intervene in response to robot abuse.” ([2], P. 211)

The underlying idea comes from observations that an audience (of humans) can influence the behavior and interaction of humans.  There are different ways to think about this, but these researchers refer to the notion of “social contagion”, which basically means that people will follow the lead of a group (at least some of the time).  In this case, the idea is that an audience that visibly disapproves of an abusive interaction will influence the person to moderate his or her behavior to align with the apparent desires of the audience.

This study extends this situation in two ways, both of which are not a priori obvious.

First, the interaction in question is a person (a confederate of the experimenter) who yells at or mishandles a robot in ways that would definitely be perceived as abusive if the victim were human.  The subjects are asked to evaluate the interaction they witnessed, eliciting how much “empathy” they feel for the robot victim.  They also have the opportunity to act to defend or aid the victim robot.

Let’s note that whatever “empathy” the subjects experienced, it must be some kind of analogy or extension of conventional empathy toward humans and other living things.  Robots are, by definition, not human, and cannot experience pain or humiliation.  So, empathizing is actually projecting human emotions onto an inhuman, non-emotional entity.  (One could take the position that all “empathy” is projection onto others, but in this case, the projection of human emotions onto a machine is a bigger leap than, say, projecting human emotions to a dog.)

The second twist is that the “audience” is made up of robots, and they are manipulated to display anthropomorphic facial expressions indicating “neutral” or “sad” “feelings”.  This is a rather complicated bit of theater:  these robots form an “audience”, as opposed to random scenery, by virtue of their anthropomorphism and apparent kinship to the victim.  The audience performs its supposed emotional reaction via human-like expressions that are legible to the human observers.

There is a fundamental question here: does “social contagion” operate because the person identifies with the audience to some degree?  If so, then it is not obvious that a person should or will identify with robots, however anthropomorphic.  Even if you grant the inference that a “sad” face reflects an emotional state in the robot, it doesn’t follow that you should feel the same emotion.

So, should this  experimental manipulation work at all?  That’s far from clear to me.

In fact, there was only minimal evidence of any effect.  The most noticeable result was that more subjects “intervened” to defend the victim when the audience expressed “sadness”.  This is suggestive, but in the absence of corresponding verbal reports of empathy, it is not conclusive.

It may be important to note that in the debriefings, the subjects indicated that relations with the human confederate were highly salient.  I.e., subjects who did not intervene may have been focused on avoiding conflict with the confederate or the experimenter. I would note, too, that the visible abuse of the robot could be seen as threatening to the human subject.

It wouldn’t be far fetched to suggest that the anthropomorphic robots might have been taken as “weakly human”, perhaps with some social influence but less than the human confederate and experimenter.

I would also note that much of this effect must depend on the individual’s perception of robots and human-robot interaction, which will surely vary in different cultures and with experience.  Just “how human” is a robot?  How should you treat a robot?  People will have different ideas depending on experience and background, and attitudes are surely evolving rapidly.

(I have to wonder, for instance, about the influence of popular culture.  Were the subjects familiar with and/or fans of comic books and science fiction cinema, which prominently feature sympathetic anthropomorphic robot characters?  This might account for much of the behavior seen.)

Overall, there is considerable reason to question whether this kind of “social contagion” should, in principle, occur with this kind of “audience”.  From this perspective, this study could be seen as mostly confirming my own hypothesis that robot audiences have little influence.


  1. Evan Ackerman, Can Robots Keep Humans from Abusing Other Robots?, in IEEE Spectrum – Robotics, August 19, 2020. https://spectrum.ieee.org/automaton/robotics/artificial-intelligence/can-robots-keep-humans-from-abusing-other-robots
  2. Joe Connolly, Viola Mocz, Nicole Salomons, Joseph Valdez, Nathan Tsoi, Brian Scassellati, and Marynel Vázquez, Prompting Prosocial Human Interventions in Response to Robot Mistreatment, in Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction. 2020, Association for Computing Machinery: Cambridge, United Kingdom. p. 211–220. https://interactive-machines.gitlab.io/assets/papers/connolly-HRI20.pdf

 

Robot Wednesday

If it looks right…and this robot definitely looks great

If it looks right…it must be right.

This week I read about a telepresence robot being demonstrated in Japan.  The Telexistence “Model-T” is intended to do stocking in a convenience store, operated remotely [2].  I’m not sure how unique this device is, but I kind of like it.

 

For success, much will depend on latency and reliability of the network connectivity.  Even short drop outs or delays will be maddening for the operator and probably screw up the operation.
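
One common mitigation (I have no idea what Telexistence actually does) is a simple heartbeat watchdog: if operator commands stop arriving for more than a fraction of a second, the robot freezes in place rather than continuing its last motion.  A minimal sketch, with made-up timing and hypothetical names:

```python
import time

HEARTBEAT_TIMEOUT_S = 0.25  # assumed threshold; a real system would tune this carefully

class TeleopWatchdog:
    """Freeze the robot if operator commands stop arriving (hypothetical sketch)."""

    def __init__(self, stop_motion):
        self.stop_motion = stop_motion            # callback that halts all actuators
        self.last_command_time = time.monotonic()

    def on_command_received(self):
        # Call this whenever a fresh operator command arrives over the network.
        self.last_command_time = time.monotonic()

    def check(self):
        # Call this periodically from the robot's control loop.
        if time.monotonic() - self.last_command_time > HEARTBEAT_TIMEOUT_S:
            self.stop_motion()                    # hold position until the link recovers
```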

The company press release suggests that the operator can work from a comfortable location, etc.  I’m not sure just how comfortable or fatiguing this teleoperation may be in practice.  Of course, it’s probably competitive with working in person in such a store.

“By introducing Model-T into stores, FamilyMart store staff will be able to work in multiple stores from a remote location, which will help solve challenges around labor shortage and help create new job opportunities. It will also lead to the reduction of human-to-human contact to help prevent the spread of COVID-19.” (From [2])

It is true that this might be safer for the employee, who is not exposed to robbers or to viruses. And it might be safer for customers, especially if the robot can be disinfected frequently.  (I’m not sure whether this model can be sanitized easily or not.)

I’ll note that network security will be a serious issue.  Hackers taking command of the remote robot would be pretty bad for everyone.

But the best thing is that this LOOKS great.  In fact, it looks far too cool to be a stocker. It looks like an alien invader, ready to blast any resistance.  There’s something about the weird, almost insectoid face that I really like; I don’t know why.  Very cool.

Photo: Telexistence (From [1])

I wonder if this will be an issue or not.  It’s kind of like having a movie star stocking the coolers.  People are going to wonder if it’s for real, or if something else is going on. And they’ll hang about just to watch.  Maybe that’s good, maybe not.


  1. Evan Ackerman, Erico Guizzo, and Fan Shi, Video Friday: This Robot Will Restock Shelves at Japanese Convenience Stores, in IEEE Spectrum – Robotics, August 28, 2020. https://spectrum.ieee.org/automaton/robotics/robotics-hardware/video-friday-telexistence-model-t-robot
  2. Telexistence, Telexistence Begins the Trial Operation of its Remote Controlled Robot, Model-T, at a FamilyMart Store. Aims to Realize a New Labor-Saving Store Operation Platform, in Telexistence – Blog, August 26, 2020. https://tx-inc.com/en/blog/telexistence-begins-the-trial-operation-of-its-remote-controlled-robot-model-t-at-a-familymart-store-aims-to-realize-a-new-labor-saving-store-operation-platform/

 

Robot Wednesday

Very Cool Small Ornithopter Concept

I’m not really that fond of quadcopters, which have become ubiquitous.  The noise alone is a good reason to want something else.

What’s more, rotary fliers are inefficient and generally clumsy, especially compared to insects and birds of similar size.  As Evan Ackerman puts it, “For most applications, though, drones lose out to birds and their flapping wings in almost every way” [1].  But flapping winged craft are hard, especially compared to copters.  (As Ackerman snarks, “Making flapping-wing robots is so much more difficult than just duct taping spinning motors to a frame”.)

This summer, researchers from the Pacific report on a new design for an impressive flapping wing drone [2].  It hovers!  It zips along!  It reverses on a dime!  Wow!   The design is inspired by the fast and agile flight of swifts, which, honestly, is some of the most dramatic natural flying of all.  If you are going to take inspiration, take it from the best!

 

A key to the high performance is a very efficient transmission, which converts the rotary motion of the motor into flapping of the wings.  This system is more efficient than an equivalent propeller system.

Control is achieved by an oversized tail. The research demonstrates some very impressive maneuvers with this configuration.

This design is also friendlier than those damned buzzing copters.  The wings move 20 times slower than a rotor, and are consequently quieter and less dangerous.  And, of course, the agility is insane!
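
A quick, hedged comparison of tip speeds suggests why.  These are my own illustrative numbers, not values from the paper, but they show the order-of-magnitude gap between a small propeller and a flapping wing of similar size:

```python
import math

# Illustrative tip-speed comparison (assumed numbers, not from the paper).

rotor_radius_m = 0.10   # small quadcopter propeller
rotor_rpm = 10000.0
rotor_tip_speed = 2 * math.pi * rotor_radius_m * rotor_rpm / 60.0

wing_length_m = 0.12                    # flapping wing of roughly similar scale
flap_freq_hz = 12.0
flap_amplitude_rad = math.radians(50)   # half-angle of the wing sweep

# Peak tip speed of a sinusoidal flap: amplitude * angular frequency * wing length
wing_tip_speed = flap_amplitude_rad * (2 * math.pi * flap_freq_hz) * wing_length_m

print(f"Rotor tip speed: {rotor_tip_speed:5.1f} m/s")
print(f"Wing tip speed:  {wing_tip_speed:5.1f} m/s")
print(f"Ratio:           {rotor_tip_speed / wing_tip_speed:.0f}x")
# With these guesses the rotor tip moves roughly an order of magnitude faster,
# which accounts for most of the noise and most of the danger.
```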

As Ackerman says, “flapping-wing drones easily offer enough advantages to keep them interesting.”

This design is relatively new, so there is work to do.  In particular, developing algorithms for autonomous flight will make things a whole lot easier.  Perhaps the researchers will collaborate with ornithologists to examine the algorithms embedded in the motor controls of swifts as inspiration for controls for this flyer.

PS. As far as I can tell, this amazing flier doesn’t have a name! C’mon, guys!  This is too cool to not have a name!


  1. Evan Ackerman, High Performance Ornithopter Drone Is Quiet, Efficient, and Safe, in IEEE Spectrum – Robotics, August 3, 2020. https://spectrum.ieee.org/automaton/robotics/drones/high-performance-ornithopter-drone
  2. Yao-Wei Chin, Jia Ming Kok, Yong-Qiang Zhu, Woei-Leong Chan, Javaan S. Chahl, Boo Cheong Khoo, and Gih-Keong Lau, Efficient flapping wing drone arrests high-speed flight using post-stall soaring. Science Robotics, 5 (44):eaba2386, 2020. http://robotics.sciencemag.org/content/5/44/eaba2386.abstract

 

Robot Wednesday

Clockwork Rovers To Explore Venus

The moon and Mars are pretty darned inhospitable.  But NASA has still been able to deploy relatively conventional electrical and digital systems there.

Venus, on the other hand, is really, really harsh.  The mean time to failure for electrical systems is minutes to hours.  So, your basic Mars rover would scarcely get down to the surface, if that far, before breaking due to heat, pressure, radiation, and corrosion.  Missions to the surface of Venus have used heroic measures (e.g., extremely heat resistant electronics), and managed to last minutes, with Venera 13 holding the record at a couple of hours [1].

So how could you actually operate a rover on Venus?

Clockwork!  Holy Steampunk, Batman!

This year NASA added a bit of fun to the process, holding an open contest for detailed designs of obstacle avoidance sensors built from purely mechanical systems.

Obstacle detection and avoidance is a very basic feature of a mobile robot.  It is also really easy to implement with electronics (and also, by the way, neurons), and there are many off the shelf systems that use light, sound, and touch to signal feedback loops controlling motors (and every microorganism can do it).
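
For contrast, here is roughly what that feedback loop looks like when you do have electronics: a toy bump-and-turn controller of the sort the contest entries had to reproduce with nothing but bumpers, springs, and linkages.  (A hypothetical sketch, not any actual rover code.)

```python
def bump_and_turn_step(left_bumper_hit: bool, right_bumper_hit: bool):
    """Toy obstacle-avoidance policy: back away and turn from the side of contact.

    Returns (left_wheel_speed, right_wheel_speed) in arbitrary units.
    """
    if left_bumper_hit and right_bumper_hit:
        return (-1.0, -1.0)   # head-on contact: reverse straight back
    if left_bumper_hit:
        return (-0.5, -1.0)   # obstacle on the left: back up while pivoting right
    if right_bumper_hit:
        return (-1.0, -0.5)   # obstacle on the right: back up while pivoting left
    return (1.0, 1.0)         # clear path: drive forward
```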

The contest yielded a dozen interesting designs, including the wonderfully named “Venus Feelers” (first place) and “Skid n’ Bump” (second place) [2]. The basic concepts are generally bumpers that transmit the position of a collision to some kind of mechanical linkage that controls the (spring powered) drive.  I.e., clockwork.

Kewl!

(Somewhat ironically, most entries were submitted as digital movies based on CAD/CAM models.)

Actual deployment of these concepts is another story, of course.

It’s not like clockwork isn’t affected by heat, pressure, and corrosion.  Not to mention being shot into space, cruising near absolute zero for years, and then turbulently dropping to the surface, with a sudden, gigantic change in heat and pressure.  And we really don’t know much about the dust and chemicals that might be floating near the surface.

Success will require extremely high performance materials and machining. Remote operation on a planetary surface also needs fault tolerant design, which is actually something that electronics are relatively good at, compared to clockwork.  One bent rod and it could be all over.

This is hardly the end of the story.  In fact, obstacle detection is probably the second easiest design problem on the list, after a clockwork power train for the wheels.

If you plan to collect data, what are you going to do for cameras, instruments, recordings, etc.?

Even if we get some data, we still need some way to get it back.  We might want to try some kind of high performance semaphore visible from orbit.  One concept uses radar reflectors that can be detected from orbit through clouds [1].

And even if this is run by clockwork, it still needs an energy source.  Probably nuclear power of some kind, though a small wind turbine might work, at least a little.

So—a long way to go.


  1. Elizabeth Howell, Steampunk Venus rover ideas win NASA contest to ‘explore hell’ with clockwork robots, in space.com, July 27, 2020. https://www.space.com/steampunk-venus-robot-lander-nasa-jpl.html
  2. NASA Tournament Lab, Exploring Hell: Avoiding Obstacles on a Clockwork Rover, in NASA Tournament Lab, 2020. https://www.herox.com/VenusRover/128-meet-the-winners
  3. Ian J. O’Neill and Clare Skelly, NASA’s Venus Rover Challenge Winners Announced, in JPL News, July 6, 2020. https://www.jpl.nasa.gov/news/news.php?feature=7693

 

PS. This project is chock-a-block with great names for bands:

“Venus Feelers”
“Skid n’ Bump – All-mechanical, Mostly Passive”
“Clockwork Cucaracha”
“Scotch Yoke Clinometer”
“Double Octopus”
“Compound Obstacles”

 

A Robotic Mobile Phone Case

A product you didn’t know you needed:  what if your phone could find its charging dock, and move there?  Or how about a phone that comes to you from its charging base?

This concept puts new meaning into the term “mobile phone”.

This summer, researchers at Seoul National University report on CaseCrawler, a phone case with robot legs and sensors [2].  The phone can crawl around the room (or at least across a table).

I guess there are a lot of ways you could do this, but they focus on a design that maintains the basic form of a phone case: a thin wrapper around the hand sized smartphone. They implement “anisotropic” legs moved by cranks.  These retract to maintain a flat profile when not in use.

The resulting gait is jerky, but it gets there.
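
A cartoon of why the motion comes out jerky: with a simple crank, the foot only pushes during the fraction of the cycle when it is actually on the ground.  (All dimensions here are invented; the paper gives the real linkage geometry.)

```python
import math

# Cartoon of a crank-driven leg (invented dimensions, not from the paper).
# The foot tip traces a circle around the hip; whenever it reaches ground level
# it is in "stance" and pushes the case along, otherwise it swings clear.

crank_radius_mm = 3.0
hip_height_mm = 2.0   # hip sits this far above the ground plane

for step in range(12):
    angle = 2 * math.pi * step / 12
    foot_forward_mm = crank_radius_mm * math.cos(angle)
    foot_height_mm = hip_height_mm - crank_radius_mm * math.sin(angle)
    phase = "stance (pushing)" if foot_height_mm <= 0 else "swing (recovering)"
    print(f"crank {math.degrees(angle):5.1f} deg: "
          f"x = {foot_forward_mm:+.1f} mm, clearance = {max(foot_height_mm, 0):.1f} mm  {phase}")
# Only part of each revolution is stance, so thrust comes in pulses
# and the gait looks jerky.
```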

 

The researchers envision the possibility of the phone sharing sensors and power with the crawler, which could allow substantial computation and autonomy.

Thinking about a future product, one thing I would worry about is security.  Hacking a cell phone is damaging enough.  It’s much more trouble if someone can hack a phone and get it to walk out the door with secrets, or walk into a secured area to snoop.  (Of course, the current version is easily blocked by a closed door or a small trench in the floor.)

Now, I admit I’ve long wanted to not have to carry my own phone.  Nothing says “big wig” like having a minion to carry and dial your phone for you.

But what I was thinking of is a robot caddy to follow and hand me the phone when demanded.

Preferably, a robot raptor.  Or maybe a flock/posse of raptors that follow me as I walk, carrying my stuff and terrorizing the peasants.

  1. bipedal, meter high, raptor robots, capable of running as fast as I can walk
  2. they autonomously follow me as a flock
  3. they can carry things in mouth and hands, including phone, laptop, coffee cup (without spilling!)
  4. they obey spoken commands, such as “phone, please” (i.e., hand me my phone), and “take this”
  5. ideally, individuals can split off to deliver or pick up objects, e.g., take this package to <person> at <address>

Now that’s a product idea!


 

  1. Evan Ackerman, CaseCrawler Adds Tiny Robotic Legs to Your Phone, in IEEE Spectrum – Robotics, August 1, 2020. https://spectrum.ieee.org/automaton/robotics/robotics-hardware/casecrawler-adds-tiny-robotic-legs-to-your-phone
  2. J. Lee, G. Jung, S. Baek, S. Chae, S. Yim, W. Kim, and K. Cho, CaseCrawler: A Lightweight and Low-Profile Crawling Phone Case Robot. IEEE Robotics and Automation Letters, 5 (4):5858-5865, 2020. https://ieeexplore.ieee.org/abstract/document/9143416/

 

Punishing Robots

I generally hold that abusing robots is wrong, for the same reason that abusing people or animals is wrong.  To the degree that a person harms another without adequate justification, that has to be wrong.

But what about “punishment”?  Punishment is harm inflicted for a purpose, to suppress behavior.  In principle, justified and proportional punishment can be morally acceptable, and even morally required.  (Punishment may or may not actually work as intended, but that’s not the question here.)

This summer, researchers from Germany explore this mode of interaction with robots.  Specifically, they designed an experiment in which people were asked to train a robot by punishing it when it strayed from the correct path [1].  The participants were instructed to escalate the level of punishment from verbal scolding, to dazzling it with a light in its eyes, to breaking one of its legs.  (The robot trembled in response to harsh treatment.)

The goal of the experiment was not to actually train the robot, but to examine how the people felt about the punishment as a purported training method.

The people exhibited some discomfort, and were especially hesitant to damage the robot.  In interviews, people expressed economic, emotional, and social inhibitions.  Some were concerned about damaging the robot (economic).  Most felt discomfort with the punishment itself, not dissimilar to what might be expected if the target were a person or animal.  And some felt social discomfort, not wishing to be seen to be cruel or violent.

The researchers conclude that the participants considered the robots somewhere between alive and “just a machine”.  They felt empathy with the victim, but many were not at all sure that they should feel such empathy.

“participants conceptualized the robot somewhere between alive and lifeless” ([1], p. 187)

The participants did not seem to have any moral concerns about scolding the robot, though, of course, they probably realized that scolding would not actually hurt the machine.  Some participants appeared to feel silly yelling at a robot—discomfort for their own dignity.

The participants accepted the dazzling punishment, possibly because they assumed that it did not inflict permanent damage.  This also is an interaction with a machine that makes logical sense—the machine clearly can detect the light.

Many participants were uncomfortable with the mutilation punishment.  However, this mainly stemmed from the desire to not destroy the value of the machine, not because of empathy for any hypothetical pain (despite the deceptive ‘hurt’ reaction of the robot).

Some of the more interesting responses were complaints that these interactions set bad examples or might encourage punishing interactions between humans.  I.e., that asking the humans to behave this way with a robot may teach them to behave this way toward humans and animals in other situations.

The researchers suggest that punishment does not seem to be a preferred interaction with robots.  But if there is punishment, the people preferred an abstract, machine relevant, non-permanent form (dazzling with light).

I would add an additional caution:  an important reason why psychologists generally prefer to avoid punishment is that it is effective only to the extent that the unpleasant stimulus is clearly associated with the behavior that is the intended target.

In the case of relatively mature humans, this can be achieved with verbal instruction.  (I.e., explaining what is being punished.)  For animals, it is important to have the punishment closely associated with the target behavior.  This association is difficult to achieve, and for this reason punishment often has side-effects, changing behavior in unintended ways.

For punishment to work for a robot, it would be necessary for the robot to have an adequate model of the situation:  to understand that the unpleasant stimulus is connected with a specific behavior in the recent past, that the stimulus is intended as punishment, and that it is contingent on changing that behavior.
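
In reinforcement learning terms this is a credit assignment problem.  Here is a minimal sketch (a standard bandit-style value update, nothing from the paper, and the names are mine): a negative reward only suppresses the behavior it is actually attributed to, so if the punishment arrives late and gets attached to the wrong action, the robot dutifully learns the wrong lesson.

```python
import random

# Minimal "punishment" sketch: a standard incremental value update, not from the paper.
# The robot has two behaviors; "stray" is the one the trainer wants to suppress.

ALPHA = 0.5  # learning rate

def punish(values, action_blamed):
    """Push the value of whichever action the robot *attributes* the punishment to
    toward a reward of -1."""
    values[action_blamed] += ALPHA * (-1.0 - values[action_blamed])

actions = ["stay_on_path", "stray"]

# Case 1: every punishment is correctly attributed to "stray".
correct = {a: 0.0 for a in actions}
for _ in range(10):
    punish(correct, "stray")

# Case 2: the punishment arrives late and gets blamed on whatever the robot did
# most recently, which half the time is the harmless behavior.
random.seed(0)
misattributed = {a: 0.0 for a in actions}
for _ in range(10):
    punish(misattributed, random.choice(actions))

print("Correct attribution: ", {a: round(v, 2) for a, v in correct.items()})
print("Misattributed:       ", {a: round(v, 2) for a, v in misattributed.items()})
# With correct attribution only "stray" is devalued; with misattribution the robot
# also learns to avoid the behavior the trainer actually wanted.
```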

I’ve never heard of a robot having such a model of human interaction, but I suppose it is possible.

Perhaps a robot could be trained to understand punishment, though just how a robot would learn what punishment is supposed to mean is hard to know. I suspect that any such training would probably teach the robot how to behave correctly long before it would learn how to learn from punishment.

And, remember, if the robot associates the punishment with the wrong behavior—or misunderstands the situation entirely—then it will be learning the wrong lesson, or learning nothing at all.

So, basically, I would not expect punishment by human trainers to be a particularly useful training method, regardless of how people might feel about it.


  1. Beat Rossmy, Sarah Theres Völkel, Elias Naphausen, Patricia Kimm, Alexander Wiethoff, and Andreas Muxel, Punishable AI: Examining Users’ Attitude Towards Robot Punishment, in Proceedings of the 2020 ACM Designing Interactive Systems Conference. 2020, Association for Computing Machinery: Eindhoven, Netherlands. p. 179–191. https://doi.org/10.1145/3357236.3395542

 

Robot Wednesday

Tiny Origami Bots from Michigan

In recent years, I’ve come to argue that engineering and design students should be taught origami as a regular part of the curriculum.  Three-D design, parsimonious with materials, flat-pack storage, and amenable to self-assembly—definitely great principles for contemporary design.

This summer researchers at Michigan report some new origami-inspired millimeter scale robots [1]. These little guys are fabricated from ionic polymer metal composite (IPMC), an electroactive material that bends when a voltage is applied.  Controlling the applied voltage makes the little robot flap and vibrate, so it can potentially move and do work.

 

Actually, as I discovered while looking this up, ionic polymer metal composite (IPMC) actuators are a really hot topic these days, including millimeter scale robots (e.g., [2]).

The Michigan work adds a new twist, using paper and fabric as a base for the IPMC material.  The idea is to devise reliable fabrication methods.  Intuitively, a fabric or paper base contributes strength and stability to the composite structure, but is flexible enough to be moved by the IPMC actuators.

In addition to literally using paper, the designs work by folding along creases, which is inspired by origami.  They even look like paper cranes!  Tiny cranes that move!

Cool.


  1. A. Ishiki, H. Nabae, A. Kodaira, and K. Suzumori, PF-IPMC: Paper/Fabric Assisted IPMC Actuators for 3D Crafts. IEEE Robotics and Automation Letters, 5 (3):4035-4041, 2020. https://ieeexplore.ieee.org/document/9057684
  2. A. Kodaira, K. Asaka, T. Horiuchi, G. Endo, H. Nabae, and K. Suzumori, IPMC Monolithic Thin Film Robots Fabricated Through a Multi-Layer Casting Process. IEEE Robotics and Automation Letters, 4 (2):1335-1342, 2019. https://ieeexplore.ieee.org/document/8626130
  3. James Lynch, Origami microbots: Centuries-old artform guides cutting edge advances in tiny machines, in Michigan Engineering News, July 30, 2020. https://news.engin.umich.edu/2020/07/origami-microbots-centuries-old-artform-guides-cutting-edge-advances-in-tiny-machines/
  4. Yi Zhu, Mayur Birla, Kenn R. Oldham, and Evgueni T. Filipov, Elastically and Plastically Foldable Electrothermal Micro-Origami for Controllable and Rapid Shape Morphing. Advanced Functional Materials, n/a (n/a):2003741, 2020/07/30 2020. https://doi.org/10.1002/adfm.202003741

Abusing Robots is Wrong-sort of

One of the things that makes robots interesting is their special moral location.

By definition, a robot is a machine that behaves, in some way, “like a human”.  Some robots look like humans, some don’t.  Some talk or listen, others just move in “meaningful”, i.e., human-like ways.

Even more fundamentally, robots are not human, but they are built by humans.  A robot cannot be held morally responsible for its actions, but whoever built it certainly can.

In short, when humans deal with robots, all of our implicit morality sort of, but not quite, engage.  Since robots are “just machines”, we could justifiably treat them like a bicycle or a bucket.  But we generally don’t, not least because they act human enough to be treated at least a little human(e)ly.  Mostly.  (e.g., see here, here)

This summer researchers in NZ report a study of how people feel about “abusive” interactions between people and robots [1].  As they note, there are well recorded incidents of people abusing robots, including verbal abuse and physical violence.  (There are few, if any, recorded incidents of robots abusing humans, at least outside of fiction.)

They present videos of people abusing robots, and of similar abuse directed at people.  Their question is whether observers empathize with the victims and judge the aggressive behavior similarly, depending on whether the victim is human or robot.  They add an additional twist: the victim—human or robot—turns on the bully and fights back.

Overall, viewers considered the abusive behavior to be wrong for either the robot or the human victim.  Interestingly, the return aggression was rated less acceptable for the robot than for the human victim.  The same retaliation was viewed as more abusive when done by the robot.  So, it’s not OK to be nasty to a robot, but a robot is not supposed to resist in the same way that a human might.

It’s difficult to interpret the results with certainty, but people seem to act as if robots “deserve protection from harm to the same extent as humans but are not perceived to have the same right of self-defence” (p. 280).  And, of course, any sign of the much foretold robot uprising is frightening.

This is certainly an interesting and thought-provoking study.  I can’t help but wonder about what kinds of attitudes the participants may have brought in to the study.  Recruited via Amazon Mechanical Turk, these people already work for a giant robot system, and may well be disposed to like robots.  I could certainly imagine other populations, such as anti-automation activists, or football fans, or the kind of rich people who abuse their servants, that might have substantially different attitudes.

Along similar lines, I wonder if these attitudes might be learned from experience.  How many of the participants have actually worked with physical robots?  (Not to mention, being replaced by them?)  For that matter, how many have been in serious fights, either as a victim or an equal participant?

And, of course, the context and back story of the videos surely must matter.  Why was the confrontation happening?  Clearly, if the retaliation had been shown first, it would have been rated much lower for any victim, without the preceding abuse.  But also, who is the person, and why is he attacking the victim?  Is the abuse justified in any way?

And, speaking of justification, I’m pretty sure that you could get radically different results if you included inflammatory cues.  It is almost trivially easy to incite verbal abuse on the Internet, much of which is “justified” by cultural or political ideology.   (In fact, it’s hard not to incite abuse, no matter what you do.)

Just imagine the possible effects of different mixes of gender and skin color for the participants.  I’ll bet dollars to donuts that non-white victims get less empathy and non-white perpetrators even lower ratings.  And retaliation by a non-white victim would be much lower rated.

I suspect that a female victim might get more empathy, and a female perpetrator less.  But certain sub populations would certainly not give you those results.

And finally:  don’t try this with an animal!

The bottom line is that our culture is in an interesting transition phase.  We are deeply familiar with fictional machine intelligences, and this influences our encounters with the much more rudimentary machines that exist today.  We are inclined to protect human-like entities (animals and machines), but not to recognize them as morally equal to humans.

This is quite interesting, because it is similar to paternalistic attitudes about many “minorities”.  For example, colonized people were considered worthy of protection (and “uplift”), but not self-determination.  Women have been treated as protected objects, without self-determination.  And animals should not be mistreated, but have no civil rights.

Over the last few centuries, most of these attitudes have been highly contested, and have radically shifted in many cases.

Are robots likely to see a similar arc towards a recognition of full humanity?  Should they?


  1. Christoph Bartneck and Merel Keijsers, The morality of abusing a robot. Paladyn, Journal of Behavioral Robotics, 11 (1):271-283, 2020. https://www.degruyter.com/view/journals/pjbr/11/1/article-p271.xml

 

Robot  Wednesday

Robot Acrobatic Performance from Östgötateatern

For several years, I’ve been hoping to see exoskeletons incorporated in artistic performances.

(My research has taught me that this is unlikely to happen soon.  While these look like cool costumes to me, their designers consider them to be (potentially dangerous) vehicles for people to drive.  As such, exosuits are designed with speed regulators and many kinds of limits and overrides to prevent accidentally whacking yourself or others.  Sigh.)

(And when am I going to see someone riding a Boston Dynamics Spot down the street?  (Ride ‘em robotboy!))

Anyway, I was interested to see video of the Swedish acrobats of Östgötateatern fiddling with an industrial robot arm.  Yes!

The practice session shows some of the moves the acrobats were trying out.  (This is a bit easier to follow than the stage show.)

 

This robot eventually appeared in performances in 2019.

My limited understanding of acrobatics suggests that strength and precision (including consistent repeatability) are prime assets.  So it’s no surprise that this multi-ton behemoth is a valued member of the troupe!

It may not be quick and agile, but you certainly don’t worry about its ability to lift you!

It also isn’t going to get into personality conflicts, and it simply can’t walk off in a huff!

On the other hand, it certainly could maim or kill you, so you definitely need to keep on your toes around it.  But, then, ‘on your toes’ is the wheelhouse for acrobats, no?

Neat.