
Hacking Water Supplies

We’re shocked, shocked!, to learn of yet another grievous security problem with commercial IoT products [1].

In this case, “smart” irrigation systems connected to a network are—wait for it—vulnerable to hacking.

These devices are intended to conserve water by managing irrigation of yards, gardens, and farms. Digitized sensors and controllers enable remote and algorithmic control of irrigation, precisely delivering only what is needed, where and when it is needed.  Anyone who has observed legacy systems mindlessly dumping vast amounts of water everywhere, including into the air, appreciates the value of this precision.

There are many such systems available for municipal, commercial, and residential use.  These systems typically work at the client’s end, i.e., connected to a tap from the water supply.  Some of the systems also connect to wider internet services, such as weather reports, cloud services, and mobile devices.  The latter provide control and tracking information, in lieu of dedicated local resources.

The paper outlines how these systems can be attacked and taken over by hackers. Honestly, there isn’t much surprising here.  (There is one unique form of attack: hacking a weather forecast in ways that fool the algorithms.)  The attacker uses a botnet to find and take control of these smart digital irrigation systems.

So, who cares if my garden sprinklers go haywire?

As the researchers show, a coordinated attack on these systems is an attack not only on the users (including, possibly, food production), but on the water supply.  The damage to any one user is minimal, but if many systems are hacked at the same time, it can have a large impact on water supplies—critical infrastructure, indeed.

The paper sketches the basic arithmetic:  a few thousand sprinklers running for an hour could empty a water tower.  Twenty thousand sprinklers running overnight could suck dry a reservoir.  And so on.
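A back-of-envelope version of that arithmetic can be sketched in Python. The flow rate and tower capacity below are my own rough assumptions, not figures from the paper:

```python
# Back-of-envelope check of the sprinkler arithmetic.  The flow rate
# and capacity below are rough assumptions, not figures from the paper.

SPRINKLER_LPM = 15         # assumed draw per sprinkler, liters per minute
TOWER_LITERS = 2_000_000   # assumed municipal water tower capacity

def liters_drawn(n_sprinklers: int, minutes: float) -> float:
    """Total water drawn by n sprinklers running for the given time."""
    return n_sprinklers * SPRINKLER_LPM * minutes

# A few thousand sprinklers for one hour:
hour = liters_drawn(3_000, 60)
print(f"3,000 sprinklers, 1 hour: {hour:,.0f} L "
      f"= {hour / TOWER_LITERS:.2f} water towers")

# Twenty thousand sprinklers overnight (8 hours):
night = liters_drawn(20_000, 8 * 60)
print(f"20,000 sprinklers, 8 hours: {night:,.0f} L")
```

Even with conservative assumptions, a few thousand compromised sprinklers draw more than a water tower’s worth in an hour, which is the paper’s point.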

This hazard is particularly pernicious because it is

“an attack against critical infrastructure that does not necessitate compromising the infrastructure itself and is done indirectly by attacking attacking [sic] client infrastructure that is not under the control of the critical infrastructure provider.”

No matter how well the water company protects its systems, it is vulnerable to errors and weaknesses in the consumer’s infrastructure.

In this, “piping botnet” is a paradigm for one of the greatest threats posed by the IoT: poorly defended devices are connected directly and indirectly to critical infrastructure.  In this case, the connection is extremely clear (the valves are attached to the faucet from the infrastructure).  In other cases (e.g., a refrigerator that orders food), the links are less direct and harder to identify—but real nonetheless.

Similarly, this is a classic example of a consumer system that can’t do much harm on its own, and appears to need no special security or expertise.  But when a “smart city” is infested with millions of poorly secured, basically autonomous devices, the aggregate is a significant potential hazard.

I’ll note that because the effects are so very clear, there are defenses that will probably be deployed to protect the infrastructure from these systems.  For one thing, utilities will try to use “smart meters” to detect and disconnect misbehaving consumer systems. Smart meters can be made hard to hack, though they still might be suborned.  In that case, a last line of defense could be an offline monitor installed by the consumer that detects gross misbehavior and cuts the system off from the infrastructure.
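The offline-monitor idea is simple enough to sketch. Everything here, the thresholds, the interface, the class name, is hypothetical:

```python
# Minimal sketch of a consumer-side "last line of defense": a standalone
# monitor that tallies flow from a local sensor and closes a physical
# shutoff valve on gross overuse.  All thresholds and interfaces here
# are hypothetical, illustrative only.

class FlowMonitor:
    def __init__(self, max_liters_per_day: float):
        self.max_liters_per_day = max_liters_per_day
        self.total_today = 0.0
        self.valve_open = True

    def record_flow(self, liters: float) -> None:
        """Called per sensor reading; trips the valve on gross overuse."""
        self.total_today += liters
        if self.total_today > self.max_liters_per_day:
            self.valve_open = False   # cut the system off from the supply

    def reset_day(self) -> None:
        self.total_today = 0.0

# A hacked controller running sprinklers all day:
monitor = FlowMonitor(max_liters_per_day=2_000)
for _ in range(300):
    monitor.record_flow(10.0)        # 10 L per reading
print("valve open:", monitor.valve_open)
```

The point of the sketch is that the monitor needs no network connection at all, so a botnet that owns the irrigation controller still can’t reach it.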

  1. Ben Nassi, Moshe Sror, Ido Lavi, Yair Meidan, Asaf Shabtai, and Yuval Elovici, Piping Botnet – Turning Green Technology into a Water Disaster. arXiv, 2018.


Triassic Winged Dinosaurs*

Of all the wondrous dinosaurs, surely the ancient fliers are the most awe-inspiring.  At the same time as ancestral birds and close relatives evolved near the ground and up into the air, a whole other group of animals, the pterosaurs and pterodactyls, soared over the heads of the dinosaurs.

Some of the largest animals ever to fly, they must have been awesome to see.

This summer a group from the US reports a new find, a pterosaur fossil from Utah [1].  With a 1.5 m wingspan, this find is significantly earlier (late Triassic, circa 200 million years ago—long before the classic dinosaur species we all know) than other pterosaurs. It was also found in sandstone from a dry, desert environment, while other finds have been in marine environments in Europe [2].

Artist’s impression of Caelestiventus hanseni (Credit: Michael Skrepnick)

It isn’t difficult to believe that large flying animals could spread to many environments, and also evolve to specialize for, say, hunting fish.  So we can see that this family may well have lived in many places, for a long time.  But what we have generally considered the “normal” lifestyle of the pterosaurs—cliff side nesting along shores, eating fish—may well be a successful specialization of a much more diverse family.

Which all goes to show that we need to be very careful about over-interpreting the sparse fossil record.  Previous evidence only included marine pterosaurs from much later.  We now know that interpreting this as evidence that the species did not live elsewhere much earlier was incorrect.

* For some, this species is technically not a “dinosaur”. But it’s a large, ancient, school-of-dinosaur, so that’s close enough for me.

  1. Brooks B. Britt, Fabio M. Dalla Vecchia, Daniel J. Chure, George F. Engelmann, Michael F. Whiting, and Rodney D. Scheetz, Caelestiventus hanseni gen. et sp. nov. extends the desert-dwelling pterosaur record back 65 million years. Nature Ecology & Evolution, 2018.
  2. Mary Halton, Winged reptiles thrived before dinosaurs, in BBC News – Science & Environment. 2018.

Confusing ‘Blockchain’ Projects

“Blockchain technology” is becoming a term with a variety of meanings, some of which have little to do with blocks or chains.

This month Microsoft released a variation on the theme, “proof of authority”.  This concept is a consensus protocol that works on a “permissioned” network, i.e., all the parties have to be registered and therefore are “trusted” to some degree.  Because the parties are vetted, there is no need for the grievous waste of mining.

These features are definitely not Nakamotoan, but they allow the construction of robust decentralized applications similar to the idea of basic blockchains—at a fraction of the computing cost, in principle.
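As a toy illustration of the proof-of-authority idea (my own sketch, not Microsoft’s actual implementation), a fixed set of vetted validators can simply take turns sealing blocks, with no mining at all:

```python
# Toy proof-of-authority sketch: a fixed, vetted validator set seals
# blocks in round-robin order.  No proof-of-work, no mining cost.
# This is an illustration of the concept, not Microsoft's implementation.

import hashlib

VALIDATORS = ["alice", "bob", "carol"]   # the "permissioned" parties

def seal_block(height: int, prev_hash: str, payload: str) -> dict:
    """The validator whose turn it is seals (here: just labels) the block."""
    sealer = VALIDATORS[height % len(VALIDATORS)]   # simple round-robin
    digest = hashlib.sha256(f"{prev_hash}|{payload}".encode()).hexdigest()
    return {"height": height, "sealer": sealer, "hash": digest}

# Build a tiny chain from a genesis block:
chain = [{"height": 0, "sealer": None, "hash": "0" * 64}]
for h, tx in enumerate(["tx-a", "tx-b", "tx-c"], start=1):
    chain.append(seal_block(h, chain[-1]["hash"], tx))

print([b["sealer"] for b in chain[1:]])
```

Real systems add signatures and rules for tolerating faulty validators; the point here is just that consensus reduces to “whose turn is it among the known parties,” which is why the mining waste disappears.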

Things are further confused by the fact that this system is deployed on Microsoft’s Azure cloud [1]—the antithesis of the Nakamotoan open, peer-to-peer network.  For example, there is something they call an “identity leasing system” (definitely a “centralized” concept), and the usual cloud services that assure high availability, so that, for instance, “[i]n the case of a VM or regional outage, new nodes can quickly spin up and resume the previous nodes’ identities.”   I’m not sure what that means, but it ain’t exactly the classic Nakamotoan open peer-to-peer internet.

Diagram from Microsoft documentation: a network of admins who run the network.  This is surely a non-Nakamotoan architecture.

On the other hand, this system is implemented on top of Ethereum, in the form of “smart contracts” (in fact, adapted from Parity).  So, in between the non-Nakamotoan cloud and the non-Nakamotoan consensus protocol, lies the very Nakamotoan Ethereum network and its school-of-Nakamoto executable contracts.

So, should this be considered a ‘blockchain’ system, or not?  I dunno.

As an engineer, I wonder what the advantage of using Ethereum is.  Obviously, in a permissioned network, it is possible to deploy whatever virtual machine you want. In fact, the system is implemented in VMs on Azure.  So what is the benefit of using Ethereum qua Ethereum?  I dunno.

Is it mainly for fault tolerance?  Microsoft documentation suggests that this might be true:

“[I]n private/consortium networks the underlying Ether has no value. An alternative protocol, proof-of-authority, is more suitable for permissioned networks where all consensus participants are known and reputable. Without the need for mining, Proof-of-authority is more efficient while still retaining Byzantine fault tolerance.”

Along those lines, I also wonder what the performance of this wonky hybrid stack really is. The point of the ‘proof of authority’ protocol is efficiency, and using the cloud provides resiliency and robustness and maybe a kind of “trustlessness”.

So, is the overhead of Ethereum’s protocol worthwhile?  How do these layers interact anyway?  Is the “trustless” network relevant, given the “trusted” layer below and above it?

Interesting times.  Baffling. But interesting.

  1. codyborn and Pat Altimore, Ethereum proof-of-authority consortium, in Microsoft Azure – Blockchain Workbench. 2018.
  2. Wolfie Zhao, Microsoft Rolls Out ‘Proof-of-Authority’ Ethereum Consensus on Azure. Coindesk, 2018.


Cryptocurrency Thursday

A Robot Begs For Its Life. How Do You Respond?

With the rise of social robots, we are now seeing a wave of social robot psychology.  This is kind of neat, since we have more than a century of experimental social psychology that we can reprise with robots in the picture.

As discussed earlier, we can do ‘person perception’ studies of many kinds.  But we can also investigate pretty much everything else. (Heck, there is an entire conference on “interpersonal attraction” with robots.)

This month researchers from Germany report a study that parallels studies of empathy [1].  The participants interacted with a robot that used more or less human-like social signals. When the interaction was over and the robot was to be switched off, some of the robots objected and asked not to be turned off.

The general hypothesis here is that the more human-like a robot’s interaction, the more it will be unconsciously treated like a human.  The researchers cite quite a few studies that generally support this idea (including a reprise of the Milgram obedience studies, with a robot victim).

The act of turning off a robot can be interpreted in different ways.  In particular, “turning off” a human implies a very significant and potentially harmful act, compared to turning off a machine which can be turned back on without harm.

The addition of the robot’s “objection” to being turned off is yet another cue to humanity, indicating autonomy and self-protection.  To the degree that the robot is perceived as “human”, a person might feel stronger empathy toward the plea to not be turned off.

The results showed that people hesitated or refused to turn off the robot more often when it behaved socially, and when it objected.  This is further evidence of the influence of social behaviors on the perception of robots, and indicates that autonomous behavior is a potent cue.

This is an interesting study.  (And the paper has a number of interesting references, too.)

However, I think the researchers should consider some additional points.

First of all, they repeatedly assert that the “switching off” action does not correspond to human interactions.

“The switching off situation of the current study does not occur between human interaction partners.” ([1], p. 16)

I would argue that there are quite a few situations that are analogous.  Turning away from a stranger in distress. Terminating a conversation (e.g., “There is nothing more to say.  Goodbye.”)  Denying service. Firing or ejecting a person.  And so on. These actions do not “kill” the person literally, but effectively “kill” them socially.

So this experimental interaction may be a lot more representative than the researchers think.

Second, as I suggested above, this situation might be seen to be influenced by empathy for the victim.  Discomfort with turning off the robot might reflect a question of “how would I feel if someone wanted to turn me off?”  To the degree that the robot is human (and likable), the subjects may identify with its plight.

This leads to the vexed topic of race.  As discussed earlier, the appearance of the robot may be (unconsciously) perceived as a racial class, with accompanying behaviors.  Interracial interactions are often characterized by less empathy, among other unfortunate features.

In this study, I cannot help but notice that the victim is a white robot.  And the subjects are young adults living in Europe, and therefore clearly part of a racially mixed culture, with considerable racial tension.  Inevitably, I have to speculate that robots with different color skins would be perceived as more or less likable, and would be more or less likely to be turned off, regardless of an objection.

So, there is a very obvious follow up experiment….

  1. Aike C. Horstmann, Nikolai Bock, Eva Linhuber, Jessica M. Szczuka, Carolin Straßmann, and Nicole C. Krämer, Do a robot’s social skills and its objection discourage interactants from switching the robot off? PLOS ONE, 13 (7):e0201581, 2018.


Robot Wednesday

Software Chaff? Probably Not A Good Idea

If we can’t avoid software bugs, then let’s make bugs our product!

This summer researchers at NYU propose “Chaff Bugs”, deliberately introducing many harmless bugs into software, so that attackers will waste time trying to exploit the chaff rather than the wheat of real bugs [2].


My first thought was, “this is easy!”  After all, I’ve been creating buggy software all my life!

But, of course, the trick is to create bugs that are provably but non-obviously harmless, and that are indistinguishable from real bugs.  Which is hard work.  Though it may be easier than eliminating all bugs, which is basically impossible.

“Rather than eliminating bugs, we instead add large numbers of bugs that are provably (but not obviously) non-exploitable. Attackers who attempt to find and exploit bugs in software will, with high probability, find an intentionally placed non-exploitable bug and waste precious resources in trying to build a working exploit. “ ([2], p.1)

“[Y]ou have to be positive that the chaff bugs are in fact harmless, it only works if it’s okay if the program crashes on malicious outputs, and you have to make sure the faux bugs are indistinguishable from naturally occurring bugs.” (from [1])

The logic of this approach is that developing an exploit from a bug is a difficult, manual process. I’d say that exploits are, by definition, not designed, and they are often obscure, i.e., the nefarious result is unrelated to and often unpredictable from the bug itself. The first steps in breaking in are to discover a bug and evaluate its implications.

Tossing in a large number of false trails forces an attacker to expend time evaluating each fake bug, reducing the time spent on any real bugs.
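A crude way to quantify that effect (my own model, not the paper’s): if an attacker discovers bugs in a random order and must triage each one, the expected number triaged before hitting the first real bug grows with the chaff ratio:

```python
# Crude cost model for chaff (my own, not from the paper): with
# n_real genuine bugs and n_chaff fakes examined in uniformly random
# order, the expected position of the first real bug among all bugs
# is (n_total + 1) / (n_real + 1), a standard order-statistics result.

def expected_triage_count(n_real: int, n_chaff: int) -> float:
    """Expected bugs triaged, up to and including the first real one."""
    return (n_real + n_chaff + 1) / (n_real + 1)

for n_chaff in (0, 100, 1000):
    print(f"{n_chaff:>5} chaff bugs -> "
          f"{expected_triage_count(5, n_chaff):.1f} bugs triaged")
```

With 5 real bugs and no chaff, the first bug triaged is real; with 1,000 chaff bugs, the attacker wades through roughly 170 fakes first. The model ignores that real and fake bugs may not be equally easy to find, which is exactly the indistinguishability problem the authors flag.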

This approach is vaguely similar to software defenses based on obfuscation by randomization or encryption of code. Obfuscation increases the effort to understand the code, which makes bugs harder to find and harder to evaluate.  Chaff buries the vulnerability in noise.

The technique focuses on memory safety bugs, which is only one type of error, but certainly a rich source of security exploits.  In fact, overwriting memory is the classic and most effective attack on software, and many other exploits are used to induce memory safety violations.

Anyway, the technique discussed centers on injecting errors into the source code that cause out-of-bounds memory references. The second key feature is for the bug to be difficult to triage, i.e., to determine what the effect of the bad memory references is.

Making the bug harmless is done by fiddling with what gets overwritten, and what is written.

“[A]s the bug injector we have a significant advantage: rather than trying to find an existing unused variable we can simply add our own variables and then overwrite them.” ([2], p. 3)

Values written are constrained to point to non-mapped or non-executable memory.  This can crash, but cannot harm the legitimate code.
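As a toy model of those constraints (a Python stand-in for the C-level technique; the names and “memory layout” are purely illustrative), the injected bug overflows into a dummy slot the injector itself added, so the real program state is never touched:

```python
# Toy model of a chaff bug.  In the real technique the out-of-bounds
# write lands in an injector-added C variable by memory layout; here a
# dict and an explicit branch stand in for that layout.  Illustrative
# only, not the paper's actual injection code.

memory = {
    "buf": [0] * 8,          # the legitimate buffer
    "_chaff_pad": 0,         # dummy variable added by the injector
    "user_ptr": 0xDEADBEEF,  # real program state, untouched by the bug
}

def buggy_write(index: int, value: int) -> None:
    """Deliberately missing bounds check: index 8 'overflows' the buffer."""
    if index < len(memory["buf"]):
        memory["buf"][index] = value
    else:
        # The overflow lands in the injected dummy slot: the overwrite
        # target is constrained by construction, so it is harmless.
        memory["_chaff_pad"] = value

buggy_write(8, 0x41414141)               # "exploit" the chaff bug
print(memory["user_ptr"] == 0xDEADBEEF)  # real state unharmed
```

An attacker who finds this bug sees a genuine out-of-bounds write; only whole-program analysis reveals that the clobbered variable is never used for anything that matters.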

To make these bugs harder to evaluate, they must be obfuscated. Setting the values is spread through the execution path, requiring extensive analysis to discover the injection logic.  The fake variables are obscured by using them later, after the fake bug.  These steps defeat simplistic program analysis that would reveal the deliberate constraints that make the fake bugs safe.

One thing this particular demonstration does not do is try to make the injected bugs indistinguishable from real bugs. For one thing, it isn’t at all clear what a “naturally occurring bug” looks like, so it isn’t possible to make artificial bugs “realistic”.

“Although we believe it is possible to make our bugs indistinguishable from naturally occurring bugs, we do not currently have a good understanding of what makes a naturally occurring bug look “realistic”, and without such an understanding our attempts would necessarily be incomplete.” ([2], p.2)

The authors also note that in this initial work the bugs are pretty homogeneous, which is likely to make them easier to detect.  Once a handful of chaff is identified, it may well be possible to quickly find all the chaff.

I’ll note some other weaknesses of this approach.

First, and most important, their threat model rests heavily on poorly established hypotheses about how attackers work. In fact, this approach will work only against attackers with rudimentary analysis tools.

“The process of going from an initial bug to a working exploit is generally long and difficult […and] it is still a largely manual process.” ([2], p. 1)

I would say that this fact is not in evidence, and most likely is false. Given this hand-wavy hypothesis, the paper does not demonstrate that this technique is actually effective, and it is hard to see exactly how you would show that it is.

Second, adding chaff is, by definition, lowering the quality of the code.  This will be paid for in performance and in maintenance. If nothing else, the chaffed code can no longer be tested or patched, which adds a one way “trap door” into the development pipeline.  I.e., we can test and patch the code, but then, at some point a chaffed version is created and it can no longer be changed. It will be necessary to go back to the pre-chaffed code and regenerate a new, chaffed code.

Third, adding chaff may also add unintended bugs. Yes, there will be bugs in the chaff process.  These will be especially painful to analyze, as they are probably self-obfuscating and, as noted, one-way.

Finally, this whole idea is purely academic.  It is meaningful only for languages that are not memory safe.  Using a memory safe language is probably going to be a lot simpler than hacking up your code with this unproven and unprovable obfuscation.

  1. Samantha Cole, Cramming Software With Thousands of Fake Bugs Could Make It More Secure, in Motherboard. 2018.
  2. Zhenghao Hu, Yu Hu, and Brendan Dolan-Gavitt, Chaff Bugs: Deterring Attackers by Making Software Buggier. arXiv, 2018.

What is Coworking? It’s All About Community Leadership

Coworking is all about community, community, community.

But this community doesn’t happen by chance or arise spontaneously.

As discussed in Chapter 5 of my book [2] “What is Coworking?”, the contemporary coworking phenomenon is characterized by a cadre of community leaders, who combine roles and skills from a number of other professions.  The success of a coworking space and its community depends on great community leadership.

This month Sensei Cat Johnson illustrates this point in “An Open Letter to Community Managers”, which is surely addressed to her own community leaders [1].

As usual, Sensei Cat says it so much better than I could.


“Without you, this whole coworking thing would fall apart.”

Sensei Cat calls out many roles these professional “community managers” play in her coworking space, including technical IT support, orienting new workers, and office management.  The “manager” also organizes social events (“what about those happy hours we all roll into without much thought”), deals with personal conflicts, talks to everyone, and generally “connects” everyone.

“You balance badass, cruise director, networker extraordinaire and all around kind/thoughtful/fun person, and you do it with style and flair.”

Besides the vital social glue that is so important to the happiness and well being of the workers, the community leader fosters networking and collaboration, which is one of the key benefits workers find in their coworking space.

“Your knack for connecting people has led to more collaborations and friendships than we could ever count.”

Indeed, these leaders create and sustain the community, and really are the heart of a coworking community.  As Sensei Cat puts it:

“You are the face and maestro of our community.”

As usual, she says it so much better than I could.

But if you want to read my own exposition of this topic and a lot more, please read my new book [2].  Available from several sources.

  1. Cat Johnson, An Open Letter to Community Managers, in Coworking Out Loud. 2018.
  2. Robert E. McGrath, What is Coworking? A look at the multifaceted places where the gig economy happens and workers are happy to find community. 2018, Robert E. McGrath: Urbana.



What is Coworking?

Book Review: “Only To Sleep” by Lawrence Osborne

Only To Sleep by Lawrence Osborne

Philip Marlowe appeared in stories by Raymond Chandler from the 1930s to 1950s, as well as movie adaptations on into the 1980s.

In recent years, the Chandler estate has authorized some new works about Marlowe by contemporary authors, emulating the original style.  The Black-Eyed Blonde (2014) by Benjamin Black [1] has now been followed by a new novel by Lawrence Osborne.

Only To Sleep is set in 1988, when Marlowe is 72 years old and the world has moved on from mid-century California noir.  Retired in Mexico, Marlowe is drawn back to work to investigate the death of an American.  Deep in debt, heavily insured, and the victim of a poorly documented drowning in Mexico—the insurance company would like to be sure this isn’t a scam.

The old war horse can’t resist one more charge when the trumpet blows.

The story features a lot of scenery in rural Mexico (circa 1988): dust, jungle, light, and a lot of people on the make, both locals and gringos.  Phillip chases clues from place to place, drinking, wise cracking and bribing bus boys.  Just like the old days.

If this is a classic Marlowe case, the man himself is scarcely the same. Old and slow, he’s not going to be kicking in doors or knocking heads.  And, as for the dames, the pilot light is out, and he’s out of the combat zone.  Nothing but memories on that front.

“Count me as one of those who know that life is unbearable not because it’s a tragedy but because it’s a romance. Old age only makes it worse, because now the race against time has reached the hour of high noon.” (p. 194)

It’s been a long time since I read the originals, and I frankly don’t remember the style very clearly.  So I can’t judge how well Osborne emulates Chandler.  You can draw your own conclusions.

But the story certainly hits the noir song dead on.  Marlowe is not motivated by the money, or by the interests of his insurance company clients.  And the facts are murky, to say the least.  So why does he persist?

An old man could be excused for walking away, especially when things get dicey.  But how can he let it be?  The whole story is driven by the desire to know what really happened. And as always, he is trying to answer the noirest question of all:  what is the true moral course?

If noir is a tale about the last honest man, this must be the last case of the last honest man.

  1. Benjamin Black, The Black-Eyed Blonde, New York, Henry Holt and Company, 2014.
  2. Lawrence Osborne, Only To Sleep: A Philip Marlowe Novel, New York, Hogarth, 2018.


Sunday Book Reviews