“Artificial Creatures” from Spoon

There are so many devices wanting to live with us, as well as a crop of “personal” robots. Everything wants to interact with us, but do we want to interact with them?

Too many products and not enough design to go around.

Then there is Spoon.

We design artificial creatures.

A partner to face the big challenges rising in front of us.

A new species, between the real and digital domains for humans, among humans.

OK, these look really cool!

I want one!

But what are they for?

This isn’t very clear at all. The only concrete application mentioned is “a totally new and enhanced experience while welcoming people in shops, hotels, institutions and events.” (I guess this is competing with RoboThespian.)

Anyway, it is slick and sexy design.

The list of company personnel has, like, one programmer and a whole bunch of designers and artisans. Heck, they have an art director, and a philosopher, for crying out loud.

Did I forget to say that they are French!

I have no idea exactly what they are going to build, but I will be looking forward to finding out.

 

Robot Wednesday

HFOSS – Humanitarian Free and Open Source Software

Open source software is a good thing, and humanitarian applications are a good thing, too.

So Humanitarian Free and Open Source Software should be a really good thing, no? It’s even got an acronym, HFOSS.

This fall, Gregory W. Hislop and Heidi J. C. Ellis discuss a related point, the potential value of Humanitarian Open Source Software in Computing Education. [1]

For one thing, any open source project is a potential arena for students to learn about real-life software development. By definition, FOSS projects are open and accessible to anyone, including students. An active and successful FOSS project will have a community of people contributing in a variety of roles, and usually will have open tasks that students might well take up. In addition, the decision making process is visible, and, as Hislop and Ellis note, the history of the project is available. A sufficiently motivated student could learn a lot.

(We may skip over the question of whether FOSS projects represent best or even common practices for all software projects. I.e., FOSS isn’t necessarily a “real world” example for many kinds of software.)

Humanitarian projects are interesting for other reasons. For one thing, by definition, a successful humanitarian project of any kind is focused on solving problems for people other than programmers or college students. Simply figuring out how, and even whether, technical solutions actually help the intended beneficiaries is a valuable exercise, in my opinion.

In addition, real-life humanitarian software generally addresses large-scale, long-term problems with non-trivial constraints. These are excellent challenge problems, all the more so because the price point is zero dollars and the IP must be robustly open to everyone.

Hislop and Ellis make some interesting observations about ways in which these projects can be used in computing education.

They encourage thinking about all the roles in a technology project, not just coding or testing. (Hear, hear!) Documentation, planning, and above all maintenance not only consume most of the work effort, but are usually the difference between success and failure of a software project. Get good at it, kids!

(I’ll also point out that designing a solution involves so much more than whacking out software–you need to understand the problem from the user’s point of view.)

They also point out the value of connecting digital problem solving with an understanding of the actual, on-the-ground problems and customers. Technological glitz generally does not survive contact with the customer, especially if the customer is an impoverished, mission-oriented organization. Good intentions are only the starting point for actually solving real world humanitarian problems.

This last point is actually the main distinction between FOSS and HFOSS. There is just as much practical value in participating in most FOSS projects. And, for that matter, there is a long tradition of service learning, much of it “humanitarian”. HFOSS is the intersection of these educational opportunities, and it is actually pretty tiny. Most FOSS isn’t “humanitarian”, and most human service or humanitarian problems don’t need software.

In fact, engagement with actual community organizations and initiatives is highly likely to teach students that humanitarian problems don’t have technological solutions, especially software solutions. Digital technology may be able to help, at least a little. But humanitarianism is really a human-to-human thing.

If I were supervising a HFOSS class, I would probably want to try to get the students to think about a number of philosophical points relevant to their potential careers.

First of all, students should observe the personal motivations of participants in an HFOSS project, and compare them to the motivations of people doing the same kind of work—the exact same kind of work—in other contexts (e.g., a large corporation, a personal start-up, a government agency). Working on something with the goal of making someone else’s life better is kinda not the same thing as angling for a big FU payout.

The second thing that students will need to learn is just how problematic it can be to try to help “them” to solve “their” problems. However great your Buck Rogers tech might be, swooping in from on high to “fix the world” isn’t likely to garner a lot of enthusiasm from the people you mean to help. In fact, “they” may not think they need whiz-bang new software at all.

Working with real people to understand and solve real problems is rewarding. And in some cases, a bit of HFOSS might be a home run. But HFOSS for the sake of HFOSS cannot possibly succeed. And that is a lesson worth learning.


  1. Gregory W. Hislop and Heidi J. C. Ellis, Humanitarian Open Source Software in Computing Education. Computer, 50 (10):98-101, 2017. http://ieeexplore.ieee.org/document/805731

New Study of Mass Extinctions

There have been five mass extinctions in the history of life on Earth, during which vast numbers of animals and plants died out. So far, after each big die-off, new species and families have evolved, filling the world with a new, but just as diverse, array of life forms as before the disaster.

The general intuition is that a mass extinction creates an impoverished, less diverse collection of species. The survivors who weather the disaster are the founders of the great radiation of new diversity. (This pattern is seen at a smaller scale in local disasters, such as a volcanic eruption that obliterates almost all life.)

This intuition is often applied to our own age, which we recognize as the beginning of the sixth great extinction. We see many specialized species reduced and wiped out, while robust “generalists”, such as cockroaches or rats, thrive and spread. Presumably, 100,000 years from now, there may be a vast radiation of new species of rodents, expanding into the empty niches of the post-human Earth.

But is this process really what has happened in the past extinctions?

This month, David J. Button and colleagues published a report on their study of “faunal cosmopolitanism” among 1,046 early amniote species spanning 315 to 170 million years ago. This period includes the Permian–Triassic and Triassic–Jurassic mass extinctions [1].

They take into account the relationships among the species, so that individuals from related but distinct species can reflect the geographical range of the group, even if only a few samples are available.

The basic finding supports the common intuition: there is a sharp rise in their index of cosmopolitanism (phylogenetic biogeographic connectedness, or pBC) after the Permian–Triassic extinction, followed by a decrease (i.e., more geographic specialization) through the Triassic, and another spike after the Triassic–Jurassic extinction.
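For intuition about what such an index measures, here is a toy sketch of a simple, non-phylogenetic connectedness score over a taxon-by-region presence table. This is my own simplified illustration, not the paper’s definition; Button et al.’s pBC additionally weights shared occurrences by phylogenetic relatedness, which is omitted here.

```python
import numpy as np

def connectedness(occ: np.ndarray) -> float:
    """Toy biogeographic connectedness for a taxon x region presence matrix:
    0 when every taxon is endemic to a single region, 1 when every taxon
    occurs in every region.  (A simplification; the paper's pBC also
    weights occurrences by phylogenetic relatedness.)"""
    n_taxa, n_regions = occ.shape
    links = occ.sum()                          # total taxon-region occurrences
    return (links - n_taxa) / (n_regions * n_taxa - n_taxa)

# Hypothetical example: 4 taxa across 3 regions.
endemic = np.array([[1, 0, 0],
                    [0, 1, 0],
                    [0, 0, 1],
                    [1, 0, 0]])                # each taxon in exactly one region
cosmopolitan = np.ones((4, 3), dtype=int)      # every taxon in every region
print(connectedness(endemic))                  # -> 0.0
print(connectedness(cosmopolitan))             # -> 1.0
```

On this kind of scale, a post-extinction “disaster fauna” of widespread taxa pushes the score up, while later specialization pulls it back down.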

Furthermore, they find evidence that “the increases in pBC following each extinction were primarily driven by the opportunistic radiation of novel taxa to generate cosmopolitan ‘disaster faunas’, rather than being due to preferential extinction of endemic taxa.” (p. 4) That is, new “cosmopolitan” species emerge, rather than old species surviving and spreading over the world. (This is bad news for cockroaches and rats, I’m afraid.)

These results certainly indicate the importance of unique events in the history of life, such as mass extinctions. They also suggest that mass extinctions have a predictable effect, at least at a global level.

Neat.


  1. David J. Button, Graeme T. Lloyd, Martin D. Ezcurra, and Richard J. Butler, Mass extinctions drove increased global faunal cosmopolitanism on the supercontinent Pangaea. Nature Communications, 8 (1):733, 2017. https://doi.org/10.1038/s41467-017-00827-7

Book Review: “A Spool of Blue Thread” by Anne Tyler

A Spool of Blue Thread by Anne Tyler

I haven’t read very much by Tyler, though she has been writing longer than I’ve been able to read (which is a long time now). That says as much about me and my own reading tastes as about her writing.

A Spool of Blue Thread (2015) is about a family and a house in mid-twentieth century Baltimore. The family is like every other family, filled with loyalty, affection, conflict, and history. This family also has secrets and mysteries. I wouldn’t say these are deep secrets or mysteries (this isn’t a Da Vinci Code or anything like that). Mostly they matter only to the family itself.

The plot centers on the end of the lives of the “middle” generation, circles back to the previous generation (in the 1940s and 50s), and glances at the beginnings of the next generation of children and grandchildren.

A spool of blue thread does appear in the story, though I didn’t really grok the exact metaphor. At least partly, the thread is an unexplained connection between generations. The family is bound together in ways that they don’t really understand.

(Maybe I got it after all.)

There isn’t a lot of action; most of the story is a slow uncovering of the past and how it has come out in the present. As the novel unfolds, we come to discover some unexpected hidden depths in some of the people and their relationships. It is fair to say that they both understand and misunderstand each other.

It is notable that at the end, there remain mysteries about the current generation, as well as uncertainty about the future. How will this generation turn out? Will they remain close, or scatter? What, after all, makes them tick?

We don’t know, and Tyler seems to suggest that we never will know.


  1. Anne Tyler, A Spool of Blue Thread, New York, Ballantine Books, 2015.

 

Sunday Book Reviews

More Evidence of Pesticide Harm to Pollinators

Bees and other pollinators seem to be dying off or disappearing in many parts of the Earth. This is a bad thing—if only because humans depend on these species to help plants grow.

In the past decade, evidence has accumulated, confirming the grim picture of worldwide decline. It isn’t clear why this is happening, but one leading factor seems to be pesticides, specifically neonicotinoids (“neonics”). These chemicals are applied to seeds to protect them, which is much less dangerous than broad spraying or soaking the soil. However, it appears that residues persist, and are picked up and accumulate in the bodies of pollinators. Neonics are potent neurotoxins for bees, and they certainly could be dangerous.

This month a group from the University of Neuchâtel published a study of 198 samples of honey from around the world [2]. Traces of neonicotinoids were found in 75% of the samples, representing every continent except Antarctica. These traces suggest that the bees that made the honey have indeed been exposed to these chemicals.

The levels of the chemicals in the honey are not dangerous to humans, per se. It also isn’t clear what the reported contamination implies about the exposure of the bees. I.e., how much exposure do these samples represent? Do these residues indicate harmful effects on the pollinators?

The findings certainly raise concern because of the broad geographic range, and the presence of multiple chemicals in the samples. Whatever is going on, it seems to be happening everywhere.

This issue is becoming mired in controversy. Manufacturers of the pesticides seem to be in a “denial” stage, rejecting early evidence of the harm to pollinators and demanding higher standards of proof (e.g., [1]). Obviously, they have reason to demand solid evidence before their lucrative products are withdrawn. (There is also a geopolitical dimension, as some countries have found it easy to ban US-made products, regardless of the reason.)

I have to wonder a bit at the criticism of this study. The press and industry organizations were emphatic that the reported contamination level isn’t dangerous to people [1]. That, of course, is nearly irrelevant. The important point is how healthy the bees are, which we don’t know.

There was also criticism that the sample is “too small” to draw conclusions. This is a bit hard to understand. The conclusion is that traces were found in many samples all over the world. Who cares whether it is 75% or 50% or 10% of the samples, when the same contaminants are found everywhere? If these tiny traces show up in a small sample, they aren’t likely to disappear in a larger sample, either.

I would hope that the trade associations that reject this research as inconclusive are conducting their own, larger studies to determine the actual facts. If not, then they are just playing PR games to protect their profits, and I will have no trust in anything they say on the topic.


  1. Matt McGrath, Pesticides linked to bee deaths found in most honey samples, in BBC News - Science & Environment, 2017. http://www.bbc.com/news/science-environment-41512791
  2. E. A. D. Mitchell, B. Mulhauser, M. Mulot, A. Mutabazi, G. Glauser, and A. Aebi, A worldwide survey of neonicotinoids in honey. Science, 358 (6359):109, 2017. http://science.sciencemag.org/content/358/6359/109.abstract

 

Machine Learning Study of Couple Therapy

One of the interesting developments in recent decades has been the deployment of massive computational analyses to observations of human behavior. Even more remarkably, machine learning has proved to be as good as or better than any other method at understanding and predicting human behavior, including human judgment (or introspection).

There are many sensors available, which opens the way for all kinds of measurements, including measurements of interpersonal behavior. These capabilities have opened up a whole new type of social psychology. Alex Pentland called this Social Physics [2]. Md Nasir and colleagues at USC report on experiments in what they call “Behavioral Signal Processing” [1]. (The same technology is also used for surveillance and persuasion, which are not necessarily in the interests of the subject.)

Of course, the most distinctive and important human behavior is language. Computers have astonishing abilities to understand speech and written language, despite the absence of any specific “knowledge”, vocal apparatus, or human nervous system. These abilities would have been simply impossible according to the psychological theories of the 1970s. Yet there they are.

This fall Nasir et al. report yet another study, this one measuring the speech of couples in therapy [1]. The machine learning was able to predict the outcomes of the therapy as well as or better than any other measure, including clinical judgment.

The study “showed that predictions of relationship outcomes obtained directly from vocal acoustics are comparable or superior to those obtained using human-rated behavioral codes as prediction features.” (p. 1)

The actual study used a large collection of recordings of therapy sessions, but the techniques could be applied to any digitized recording, and likely to live streams of data. One advantage of the prerecorded collection is that it has been hand-coded for behavioral features, which can be compared to the machine-derived predictions. Also, sufficient time has passed since the collection was recorded to give realistic estimates of long-term outcomes.

(One interesting aspect of this study is that the researchers ignored the original comparison conditions. “[O]ur interest … is on predicting relational outcomes independent of treatment received.” (p. 7))

The speech analysis used common techniques, which are bound to yield a flood of data. In addition to features and statistics for each individual, there were also dyadic measures. Speech was analyzed both turn by turn and across the whole interaction, recording various measures of change.
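For concreteness, here is a minimal sketch of the kind of shallow acoustic features involved, computed per speaker turn (pitch and loudness statistics, nothing semantic). This is my own illustration using librosa, not the authors’ toolchain, and the file name and turn boundaries are hypothetical.

```python
import numpy as np
import librosa

def turn_features(path: str, start: float, end: float) -> dict:
    """Shallow acoustic features for one speaker turn: statistics of pitch
    (F0), voicing, and loudness (RMS energy). Captures how things were
    said, not what was said."""
    y, sr = librosa.load(path, sr=16000, offset=start, duration=end - start)
    f0, voiced, _ = librosa.pyin(y, fmin=65, fmax=400, sr=sr)  # rough speech F0 range
    rms = librosa.feature.rms(y=y)[0]
    return {
        "f0_mean": float(np.nanmean(f0)),
        "f0_std": float(np.nanstd(f0)),
        "voiced_ratio": float(np.mean(voiced)),
        "rms_mean": float(np.mean(rms)),
        "rms_std": float(np.std(rms)),
        "duration": end - start,
    }

# Hypothetical turn boundaries (seconds) for one partner in one session:
turns = [(0.0, 4.2), (9.7, 15.1)]
features = [turn_features("session_01.wav", s, e) for s, e in turns]
```

Repeat that for both partners, add dyadic measures (differences and correlations between partners, turn-taking timing), and the flood of data arrives quickly.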

The machine learning used the rated outcomes to build a classifier from various combinations of features. As always, the high-dimensional data had to be winnowed down (a process that involves human judgment).
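A hedged sketch of that downstream step, using generic scikit-learn rather than the authors’ actual pipeline: standardize the features, winnow them, fit a simple classifier, and estimate accuracy by cross-validation. The feature matrix and outcome labels here are random placeholders.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score, LeaveOneOut

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 300))    # placeholder: 100 couples x 300 acoustic/dyadic features
y = rng.integers(0, 2, size=100)   # placeholder outcome label (e.g., improved vs. not)

model = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif, k=20)),  # winnow the feature flood
    ("clf", SVC(kernel="linear")),
])

scores = cross_val_score(model, X, y, cv=LeaveOneOut())  # one held-out couple per fold
print(f"leave-one-out accuracy: {scores.mean():.2f}")
```

Keeping the feature selection inside the pipeline matters: selecting features on the full dataset before cross-validating would leak information and inflate the apparent accuracy.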

The results are clear: the machine learning essentially tied the human ratings, at least as far as predicting the (human-generated) outcome measures.

It is important to note that the machine learning was based on shallow analysis of the speech: loudness, pitch, timing, and so on. No semantic information was included, nor were other modalities such as gestures or facial expressions. The fact that even these relatively trivial features could even tie human judgment is yet another indictment of the unreliability of human intuition about human behavior.

This study is quite suggestive. Perhaps therapists (or self-therapizing individuals, who have fools for clients) might have tools that signal the “state” of a relationship, and help guide the subjects to a better state.

Of course, the models developed in this particular study only predicted the “outcome”. They neither explain the meaning of the variables (just how does the loudness and pacing of speech cause the outcome?), nor even document much in the way of process. If the therapist intervenes to, say, moderate the intensity of their voices, would that have beneficial effects? When and how much intervention would be needed?

Finally, the analysis includes only the two subjects. Shouldn’t the behavior of the therapist be included in the classifier? In principle, the therapist should be doing something, no? Even if it is a placebo effect, it should show up in the machine classifier.

It’s early days, but it certainly is exciting to think about creating tools that help people learn to interact with each other in positive ways. And it will be really good to see this technology employed to actually help people, rather than to try to control and manipulate them. (I’m talking to you, Facebook, Google, et al.)


  1. Md Nasir, Brian Robert Baucom, Panayiotis Georgiou, and Shrikanth Narayanan, Predicting couple therapy outcomes based on speech acoustic features. PLOS ONE, 12 (9):e0185123, 2017. https://doi.org/10.1371/journal.pone.0185123
  2. Alex Pentland, Social Physics: How Good Ideas Spread – The Lessons From A New Science, New York, The Penguin Press, 2014.

Bitcoin Is Designed To Be Wasteful

…and that won’t work for long.

One of the great curiosities of Nakamotoan cryptocurrencies is that the key innovation in the protocol is the use of “proof of work” to implement a truly decentralized timestamp [2]. At the core of this innovation there is a scratch-off lottery, in which computers spin and spin, looking for a winning number. This computation is deliberately designed to be inefficient, so that it cannot be cheated or repeated. In fact, there is a “knob” that adjusts the difficulty to keep it inefficient in the face of technical improvements.
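A minimal toy sketch of the lottery and the difficulty knob (an illustration of the idea, not Bitcoin’s actual block-header format or retargeting rule):

```python
import hashlib
import time

def mine(block_data: bytes, difficulty_bits: int) -> tuple[int, str]:
    """Brute-force search for a nonce whose SHA-256 digest falls below a
    target: the scratch-off lottery at the heart of proof of work."""
    target = 2 ** (256 - difficulty_bits)  # more difficulty bits = smaller target = harder
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).hexdigest()
        if int(digest, 16) < target:
            return nonce, digest
        nonce += 1

def retarget(difficulty_bits: int, actual_seconds: float, desired_seconds: float) -> int:
    """The "knob": if winning tickets come too fast, make the puzzle harder.
    (Toy +/-1 bit rule; Bitcoin rescales proportionally every 2016 blocks.)"""
    return difficulty_bits + 1 if actual_seconds < desired_seconds else max(1, difficulty_bits - 1)

if __name__ == "__main__":
    start = time.time()
    nonce, digest = mine(b"toy block header", difficulty_bits=20)
    elapsed = time.time() - start
    print(f"nonce={nonce} hash={digest[:16]}... found in {elapsed:.2f}s")
    print("next difficulty:", retarget(20, elapsed, desired_seconds=600))
```

The only way to win is to grind through nonces; there is no shortcut, and that is precisely the point.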

For me, this feature is just plain weird. My whole career–in fact, everybody’s career–has been about making software go faster. Bitcoin not only doesn’t want to go faster, it keeps adjusting the parameters to prevent software from going faster. This is so backwards and so wrong to conventional software engineers.

The underlying reason for this approach is to force real world costs into the protocol, in order to make the system “fair”. There is no back door or magic key for privileged users to game the system.  Only real (computing) work counts.

As a side-effect, these costs create a form of “value” for Bitcoin, which logically must be worth at least as much as the cost of the computing work needed to obtain it. This is a sort of computational labor theory of value, which is no doubt amusing to twenty-first-century Marxists.

Unfortunately, the “work” that is used to mine and handle Bitcoin is a crude, brute-force algorithm. It is simple and effective, but it sucks down computing cycles like mad, which uses up large amounts of electricity.

Peter Fairley writes in IEEE Spectrum about “The Ridiculous Amount of Energy It Takes to Run Bitcoin” [1]. In all, the Bitcoin network does 5 quintillion (5,000,000,000,000,000,000) 256-bit cryptographic hashes every second, which he estimates consumes about 500 MW of power. In addition, there are other cryptocurrencies and blockchain networks (including multiple versions of Bitcoin itself), with substantial, if lesser, power consumption.
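Taking Fairley’s figures at face value (an assumption; both numbers are estimates), the back-of-the-envelope arithmetic is easy to check:

```python
# Assumed inputs, straight from Fairley's article: ~5e18 hashes/s network-wide, ~500 MW draw.
hash_rate = 5e18   # SHA-256 hashes per second
power = 500e6      # watts

joules_per_hash = power / hash_rate
print(f"{joules_per_hash:.1e} J per hash")                  # ~1e-10 J, i.e. ~0.1 nJ per hash
print(f"{power * 86400 / 3.6e9:,.0f} MWh per day")          # ~12,000 MWh/day
print(f"{power * 86400 * 365 / 3.6e12:,.1f} TWh per year")  # ~4.4 TWh/year
```

A tenth of a nanojoule per hash sounds tiny, but multiplied across the whole network it adds up fast.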

This is quite a bit of power, something along the lines of a small city. Of course, it’s only a small slice of the power consumed by the whole Internet, not to mention the rest of modern life. But the engineer in me hates to see so much power burned off for so little meaningful work.

Fairley argues that a bigger problem is that if Bitcoin or some form of Nakamotoan blockchain succeeds and grows to become truly ubiquitous, then the power consumption is likely to grow to the point that it is unsustainable. Even if we are OK with expending cycles for this purpose, at some point there will not be enough power to run and cool all the computers.

Predicting the future is difficult, of course. Computers in general are becoming more efficient, so growth in cryptocurrency networks will not lead to a linear growth in their power use. Nevertheless, it seems likely that the crude proof of work algorithm designed by Nakamoto will be difficult to sustain over the long haul.

As Fairley discusses, there are alternative methods to achieve the same goal. Many alternatives, in fact.

For one, there is substantial interest in various “proprietary” blockchains, which may work the same way as Bitcoin, but do not rely on the open Internet. These networks trade off the “trustless” and “decentralized” nature of the Nakamotoan-style protocol in various ways, gaining much more efficient performance as well as other potential benefits, such as legally documented authentication.

There are also alternative “math problems” that may be used instead of Nakamoto’s brute-force hashing algorithm (e.g., Proof of Stake or Algorand). It is also possible to utilize special-purpose hardware, or even Quantum Computing.

In short, there are alternative technologies that would make a cryptocurrency far more scalable. If Bitcoin were normal software, there would be a strong case for reengineering it.

But Bitcoin isn’t “normal”. Not even close to normal.

Another cunning innovation from Nakamoto is Bitcoin’s “decentralized” governance model. Changes to the code are published and users vote on them by adopting or ignoring them. There is no central planning, or any planning at all. Furthermore, changes that are not backward compatible essentially create a “new currency”, which may or may not eliminate the “old” code. These fork events can and do create parallel, competing versions of a cryptocurrency.

The point of Bitcoin’s decentralized decision making is to protect against “the man”. At the core of Nakamotoan ideology is the desire to make sure that no government or corporate cabal can fiddle with the currency, block access, or rewrite history. Changes require “consensus”, and “everyone” has a vote.

Unfortunately, this design also protects against centralized engineering. Technological progress requires decisions, and sometimes the decisions are complicated. Furthermore, good engineering is proactive, not reactive: it is a bad idea to wait until a problem is catastrophic or evident to everyone. And rational engineering cannot always make everyone happy.

This is a formula for disaster. Ethereum has not only split into two currencies; one of the forks actually rewrote history. Bitcoin itself has been stuck in a rut, unable to deal with the most basic engineering problem (data structures), and is heading for a catastrophic split into multiple versions. For that matter, dozens of other cryptocurrencies have floated, competing with Bitcoin (and sucking down yet more power).

If recent history is a guide, no improvement to Bitcoin is likely to be accepted by the current Bitcoin network. However, it is possible to boot up a technology that successfully competes with Bitcoin (as, say, Ethereum has done), and which might one day overshadow it. But Bitcoin probably cannot change.

At some point, Bitcoin qua Bitcoin will surely crash. Perhaps it will be replaced by other cryptocurrencies. Perhaps politics will keep it marginalized. For example, access to vast amounts of electricity is clearly a potential choke point for such a profligate algorithm. Or perhaps technical changes will break it. For example, Quantum Computing will eventually be able to crack the encryption, and will likely also be able to overwhelm the protocol with replay and other attacks. At that point, the blockchain will be corrupted and Bitcoins will have little value.

One of “Bob’s Rules” is that “All software becomes obsolete, sometimes much sooner than you expect”.

The problem is, Bitcoin isn’t supposed to be software; it is supposed to be money. The ramifications of Bitcoin’s inevitable crash are staggering.


  1. Peter Fairley, The Ridiculous Amount of Energy It Takes to Run Bitcoin. IEEE Spectrum, 54 (10):36-59, October 2017. https://spectrum.ieee.org/energy/policy/the-ridiculous-amount-of-energy-it-takes-to-run-bitcoin
  2. Satoshi Nakamoto, Bitcoin: A Peer-to-Peer Electronic Cash System. 2009. http://bitcoin.org/bitcoin.pdf

 

Cryptocurrency Thursday

 
