Category Archives: Technology

New Article on the Clean Energy Credit Union

Hot off the press: this month’s edition of the Public i includes a new article, “A New Option to Finance A Clean Energy Future for Everyone” [1], about the new Clean Energy Credit Union.

[read the article here]

If you want even more information, there is a longer piece here, including an interview with Blake Jones, co-founder of the Clean Energy Credit Union.

Check it out.


  1. Robert E. McGrath, A New Option to Finance A Clean Energy Future for Everyone, in The Public i: A Paper of the People. 2018. http://publici.ucimc.org/2018/12/a-new-option-to-finance-a-clean-energy-future-for-everyone/


A Da Vinci Machine

I read Mark Rosheim’s book [3] when it came out, and loved these elegant mechanisms.  How could you not love robots created by one of the most brilliant design artists known?

What’s been going on since then?

Well, the Leonardo da Vinci Robot Society has been building and displaying Leonardo’s robots for all the world to see.  As the web page says, “Exploring the robotic world of the Renaissance Master”.

Cool!

I ran across these guys because they are in the midst of a Kickstarter campaign to make some new kits realizing Leonardo’s drawing machines: da Vinci’s Drawmaton [1].

The developers are learning how these machines are programmed via the intricate wooden disks, called “petalos” (they look very botanical).

By reverse engineering these petalos, the developers are creating design software that makes it easier to design new petalos that produce new drawings.

Cool!
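To make the idea concrete, here is a toy sketch of the underlying concept (my own illustration, not the Drawmaton team’s actual software, which doesn’t seem to be public): treat a petalo as a cam whose radius varies with rotation angle, computed from the pen displacement you want on one axis.

# Toy illustration of a cam-disk ("petalo") profile: the disk encodes a pen
# displacement as radius-as-a-function-of-angle. All numbers are hypothetical.
import math

BASE_RADIUS = 5.0   # cm, assumed minimum cam radius
STEPS = 360         # one profile point per degree of cam rotation

def displacement(t):
    """Target pen displacement (cm) over one revolution, t in [0, 1).
    A simple wave, standing in for one axis of a real drawing."""
    return 2.0 * math.sin(2 * math.pi * t) + 1.0 * math.sin(6 * math.pi * t)

# The follower rides on the disk edge, so radius = base + displacement.
profile = [(360.0 * i / STEPS, BASE_RADIUS + displacement(i / STEPS))
           for i in range(STEPS)]

for angle, radius in profile[:3]:
    print(f"angle {angle:5.1f} deg -> radius {radius:5.2f} cm")

Design software in this spirit would run the mapping in reverse, too: digitize an existing petalo’s outline to recover the drawing it encodes.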

It’s kind of cool using digital fabrication to make an analog control to make the analog drawing, rather than just digitally controlling the generation of the analog drawing (or digital drawing).  (Did you follow that?)

It’s harder, but way more elegant.  I’d say that the “technical coolness” multiplier is massive!

Honestly, at the prices they are charging, I’m more interested in their software, which will be an interesting intellectual “capture” of the logic of Leonardo’s design.  The kickstarter doesn’t seem to say when or how the software might be released.  I certainly hope it is published in some form, along with an explanation of how it works.  (Robot society – call me if you want help documenting your software. It would make an interesting paper.)


  1. Leonardo da Vinci Robot Society, da Vinci’s Drawmaton. 2018. https://drawmaton.com/
  2. Charles Nicholl, Leonardo da Vinci: Flights of the Mind, New York, Viking, 2004.
  3. Mark Elling Rosheim, Leonardo’s Lost Robots, New York, Springer, 2006. https://link.springer.com/content/pdf/bfm%3A978-3-540-28497-0%2F1.pdf


Ideas for Band Names:

Leonardo’s Petalos
Drawmaton
Leonardo’s Lost Robots


Robot Wednesday

What will it take to be Crypto Tulip of the Year in 2018?

The Crypto Tulip of the Year was first awarded in 2018 for “achievements” in 2017.

The award for 2018 will be announced in early January, 2019.

But what does this award mean, and how do you win it?  (Why would you want to win?  I don’t know, but there seems to be plenty of competition.)


In the finest Nakamoto traditions, the criteria for this award have been “transparent”—not!

It is now time to give a bit more explanation of this not-very-prestigious prize.


To review, the winner of the first award, for 2017, was Ethereum. It was cited for an array of accomplishments including spectacular technical failures, problematic successes (CryptoKitties), and dysfunctional, very non-Nakamotoan governance—yet no noticeable loss of enthusiasm or optimism.

Runners-up included Bitcoin (especially the lack of progress on scaling) and ICOs (unregulated securities!  Both illegal and ridiculously risky!).

Over the course of 2018, a number of projects and technologies have been mentioned as candidates for the 2018 Crypto Tulip of the Year.

Why were these projects cited, and what will it take to win?

See the brand new explanation at:

What it takes to be Crypto Tulip of the Year [here]


PS. Ideas for Band Names

CryptoTulip
irrationally exuberant technophilic mania


Cryptocurrency Thursday

Is Quantum Computing Infeasible?

I’ve written several times about the profound implications of the coming of Quantum Computing, especially Quantum Cryptography. In these comments, I have taken QC to be a done deal (e.g., here, here).

Mikhail Dyakonov writes this fall to remind us (including me) that practical QC has never been demonstrated, and doesn’t seem to be just around the corner either [2].

Oops!

Reading this, I have to realize that my own analyses have been based on a shallow understanding of both the theory and the technology, and on perhaps overenthusiastic reports.  Be careful, Bob!

Dyakonov makes the interesting point that QC seems to be understood quite well theoretically, but practical implementations are a huge, huge leap.  He estimates that a practical quantum computer would need 1,000 to 100,000 qubits.  This means that the computation amounts to managing 2^1000 or more (continuously variable) parameters, which is a lot: far more than the number of atoms in the universe, and utterly dwarfing the current Internet-connected infospace.

Even if today’s 10-, 30-, or 50-qubit systems work (which is not well established, at least in the open literature), the exponential factor means that scaling up to 1,000 or more is far from a given.  As he says,

“Could we ever learn to control the more than 10^300 continuously variable parameters defining the quantum state of such a system?

“My answer is simple. No, never.”
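His arithmetic is easy to check with a quick back-of-envelope script (my own illustration, using the usual rough estimate of ~10^80 atoms in the observable universe):

# Back-of-envelope check: an n-qubit state is described by 2**n complex
# amplitudes, i.e., continuously variable parameters.
import math

ATOM_DIGITS = 80   # rough estimate: ~10^80 atoms in the observable universe

for n in (50, 1_000, 100_000):
    digits = n * math.log10(2)   # 2**n is roughly 10**digits
    compare = "more" if digits > ATOM_DIGITS else "fewer"
    print(f"{n:>7} qubits -> ~10^{digits:,.0f} parameters "
          f"({compare} than the ~10^{ATOM_DIGITS} atoms in the universe)")

At 1,000 qubits the count is already about 10^301, which is where his “more than 10^300” comes from.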

So my own confident pronouncements about QC, including the obsolescence of current blockchain systems (e.g., here, here) may be premature and/or uninformed.

Point taken, and I really need to be careful.

However, I think there is still reason to think that QC is coming and will make many current systems obsolete.

For one thing, this is a classic case of Clarke’s First Law:

“When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.” [1]

In fact, even Dyakonov says that it is theoretically possible.  He’s arguing that it is so difficult and impractical that it will never be made to work.  And that’s a whole different kind of “no, never” than, say, faster-than-light travel or time travel.

So here are some developments that might make interesting things happen, even in the face of seemingly intractable difficulty.

One thing to consider is that the small-scale systems already demoed might be made useful, perhaps in swarms.  Maybe you don’t need one system with 10,000 qubits.  Maybe 1,000 QCs, each with a few tens of qubits, can be architected into a powerful system.  (Stranger architectures are done all the time. See Intel’s chipsets.)

It also seems to me that QC, or something like it, will be very useful for things like secure networks and key exchange.  This kind of use case needs the quantum weirdness, but not gigantic amounts of logic.  The point being, there may be niche uses that work long before general-purpose quantum supercomputers are available.

Finally, it’s hard to say what is or isn’t possible if cost is no object. Given the theoretical possibility, it is safe to say that the big code breakers would pay pretty much any price for a working QC.  With the resources of a major nation state, and national survival at risk, ordinary intuitions of what is reasonable do not apply.

For that matter, Sensei Clarke was completely right about this kind of prediction.  It’s hard to guess what will be reasonable.

For example, if you told me in 1969 we would replace all the copper and microwave links of the (proprietary telephone and telex) network with glass and lasers, link billions of computers all over the world, and also replace all the land lines with pocket radios (which are also supercomputers)—I would have said “No, never”.  But we did.

(On the other hand, we have not gone to Mars, which everyone would have bet we would.  But that’s because there is no real good reason to travel to Mars.  There are many very, very good reasons to build QC.)

As my late father used to say, “never is a long time.”  So, who knows?

At this point, I guess I don’t really know that QC is coming or how soon, but I would bet that it is coming, and coming soon.  If nothing else, it would be disastrous to assume QC won’t happen, if it does happen.  On the other hand, expecting QC and being disappointed is relatively harmless.

Worrying about “quantum proof” cryptography is a good thing to do, even if QC never materializes.  After all, the cryptography that the Internet and national security depend on is all based on an argument that factoring the product of two large primes is impractically hard for conceivable, real-life computers.  QC is conceivable, so it is well worth worrying about.
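To see the asymmetry this argument rests on, here is a toy sketch (illustrative only: real keys are thousands of bits, real attacks are far smarter than trial division, and Shor’s algorithm on a QC would be faster still):

# The asymmetry RSA-style cryptography rests on: multiplying two primes
# is instant, but recovering them takes work that explodes with key size.

def trial_factor(n):
    """Find the smallest prime factor of n by brute force."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return n

p, q = 1_000_003, 1_000_033   # small primes, for illustration only
n = p * q                      # the easy direction: one multiplication
print(trial_factor(n))         # the hard direction: ~a million divisions

Double the number of digits and the multiplication barely notices, while the brute-force search squares in cost.  That gap is what “quantum proof” cryptography has to preserve.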


  1. Arthur C. Clarke, Profiles Of The Future: An Inquiry Into The Limits Of The Possible, New York, Harper & Row, 1962.
  2. Mikhail Dyakonov, The Case Against Quantum Computing, in IEEE Spectrum – Computing. 2018. https://spectrum.ieee.org/computing/hardware/the-case-against-quantum-computing


David Chaum on Blockchain

This fall Nick Paumgarten reported in The New Yorker on the Blockchain Week in NYC [3], including some interesting personal impressions of the Ethereum leadership, from Buterin to Zamfir.  It is nice to read some disinterested observations on this activity—Paumgarten is no enthusiast, nor is he a blind Luddite.

Having closely followed cryptomania for many years, I can’t say I was especially surprised by his reports.  If anything, I was worried because he saw things that confirmed what I suspected but hoped were not true. The Emperor’s new clothes really are pretty threadbare.

But the best part of the article was his quote from Sensei David Chaum (who really did invent much of the technology back in the ’80s [1]).

“There’s never been, in the history of civilization, this much money aggregating as a result of doing nothing”  ([3], p. 75)

Me-ow!

I’m not sure whether Paumgarten understood the depth of the dig in that remark.  He cites it in the context of all the non-progress and non-success of the crypto movement.

But it’s actually a wonderfully sophisticated techno-dig.

As I have said in the past, Nakamotoan cryptocurrency drives me mad because I was (and still am, I guess) a professional software engineer.  My entire career has been, to a first approximation, all about making software go faster.  Most of what I know is all about ways to speed things up, not least by avoiding unnecessary computational work.  (The fastest code is the code that doesn’t run at all.)

Nakamoto’s innovation (and the important thing he added to Chaum’s pioneering work) is the “proof of work” mechanism, which provides a difficult-to-cheat distributed substitute for a timestamp.  The classic version, used in the grand patriarch Bitcoin, is a scratch-off lottery, computing a cryptographic hash function over and over until a winning ticket comes up.  The whole idea is to use up so much computing power that it is practically impossible to shortcut to the answer.  (This makes the answer “trustworthy”, because the computation can’t be redone or faked.)

So, the entire Nakamotoan project is based on an algorithm that is, by design, as inefficient as needed.  In fact, if and when we get better at computing this nonsense hash, there is a ‘knob’ on the protocol that is adjusted to make the computation slow down, to preserve the level of inefficiency.
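For the record, the basic lottery is simple enough to sketch in a few lines (a minimal hashcash-style illustration of the idea, not Bitcoin’s actual code):

# Minimal proof-of-work sketch: hash over and over until the digest shows
# enough leading zeros. The 'difficulty' knob makes the lottery slower.
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Return a nonce whose SHA-256 digest has `difficulty` leading zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1   # by design, there is no shortcut: just try again

nonce = mine("block 42: Alice pays Bob 1 coin", difficulty=4)
print("winning nonce:", nonce)   # each extra zero makes this ~16x slower

Note the utterly un-engineer-like goal: the loop exists precisely to burn cycles.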

This is so backwards and upside down.  It breaks my old-fashioned software engineer’s brain!  Everything I know how to do is wrong!

(Note: I understand the logic of Nakamotoan proof of work, which really is a clever, if unsustainable, innovation.  I’m not saying it is wrong, just that it is backwards from 99% of software algorithms.)

So, to me, Chaum’s zinger is really on target.  The core innovation underlying all the crypto enthusiasm is this proof-of-work, which is “doing nothing”, at least nothing useful.

As he says, there is a lot of money being thrown at this doing “nothing”.  (And by the way, we starving academics can’t help but be irritated that we get so little funding for a lot of important “somethings” that we are trying to do.)

(By the way, the same issue of The New Yorker has a great piece by Charles Duhigg about the Google/Waymo dustup [2].)


  1. David L. Chaum, Untraceable Electronic Mail, Return Addresses, and Digital Pseudonyms. Communications of the ACM, 24 (2):84-88, 1981.
  2. Charles Duhigg, Stop, Thief! When Google Sues To Keep Its Secrets, in The New Yorker. 2018, Conde Nast: New York. p. 50-61.
  3. Nick Paumgarten, The Stuff That Dreams Are Made Of: Cryptocurrency’s Priests Envision A New Society, in The New Yorker. 2018, Conde Nast: New York. p. 62-75.


Cryptocurrency Thursday

Robot material science

We’ve been dreaming of AI and robot science for many, many years.  (Isaac Asimov imagined robot laboratories before I was born.) And we’ve been building examples in many problem domains.  For example, nobody designs circuits or mechanical systems by hand today; they use very expert CAD systems. And anyone doing anything with DNA uses highly automated (robotic) equipment, as well as clever software.

But one area that has been relatively slow to automate is the discovery of new chemicals and materials.  This process utilizes intense computer simulations, of course, and many aspects of experimentation are highly automated, but there is still a lot of human input into what to try next.

I don’t know why this area of “expert” knowledge has resisted the invasion of expert systems, though I would guess that the search space is absurdly large, in both the number of “moves” and the number of physical properties that might be relevant.

It also looks to me like a lot of the successes are serendipitous or at least not obviously predictable.  Finding just the right combination of stuff, organized just so, is really, really non-linear.

But surely these experts will be augmented by automated systems, just like everything else.

This fall, an MIT spin-off is announcing another try at this game, with AI and robots [1]. The AI generates plausible ideas, and a robot lab automatically tests the guesses.  If this is smart enough, it should be able to rapidly discover new materials (and how to make them).

There aren’t too many details available about the proprietary system, but clearly it has to incorporate a ton of expert knowledge about chemistry and physics of the relevant materials. Much will depend on getting a lot of really good data for the system to learn from.

It seems to me that using it will require careful specification of the desired target.  If the goal is too narrow, the system will miss chances for really innovative results.  But if the goal is too vague, the system will make lots of “discoveries”, but they won’t necessarily be what you are looking for.
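The proprietary details aren’t public, but the overall propose-test loop is easy to sketch (a hypothetical toy, with random search standing in for the AI and a made-up score standing in for the robot lab):

# Hypothetical propose-test loop for materials discovery. The real system's
# internals are not public; every function here is a stand-in.
import random

def propose():
    """Stand-in for the AI: suggest a candidate composition (3 fractions summing to 1)."""
    x = [random.random() for _ in range(3)]
    s = sum(x)
    return tuple(v / s for v in x)

def run_experiment(candidate):
    """Stand-in for the robot lab: score closeness to a target property."""
    target = (0.5, 0.3, 0.2)   # the scientist's "good question", encoded as a goal
    return -sum((a - b) ** 2 for a, b in zip(candidate, target))

history = [(c, run_experiment(c)) for c in (propose() for _ in range(200))]
best, score = max(history, key=lambda h: h[1])
print("best candidate:", [round(v, 3) for v in best], "score:", round(score, 4))

The interesting engineering is in making propose() smart: learning from past results so each round of experiments is better aimed than blind guessing.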

Is this kind of technology going to make scientists obsolete?  No, of course not.  But it will mean that scientists will need to think at a higher, strategic, level of abstraction, and let the robots figure out the tactical details.  In fact, scientists will be the experts at posing “good questions”, suggesting good starting points and good goals.

Sounds pretty cool.


  1. Will Knight, A robot scientist will dream up new materials to advance computing and fight pollution, in MIT Technology Review. 2018. https://www.technologyreview.com/s/612388/a-robot-scientist-will-dream-up-new-materials-to-advance-computing-and-fight-pollution/


Hybrid Hydrogen photocell

There are two major technologies for harvesting solar energy for power (not counting biological systems)—photovoltaics and hydrogen separation.  The former uses sunlight to generate electricity; the latter uses sunlight to separate hydrogen from water. Nothing is 100% efficient, so the trick is to get enough usable power or fuel to be feasible.

This fall researchers at Lawrence Berkeley Lab report a cool hybrid system that combines these two modes, generating both electricity and hydrogen [2].  This design has the very useful feature of generating a tunable mix of power now or hydrogen to store for generation later.

The device is built by combining well-known techniques, and basically squeezes three or more times the power out of the same cell.   When you have two partly successful technologies, stack them together to get a more successful technology!

Cool!
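Just to illustrate the idea of a tunable mix, here is some toy arithmetic (the efficiencies are made up for illustration, not the measured numbers from [2]):

# Toy model of a tunable electricity/hydrogen split. All numbers assumed.
SUN_IN = 1000.0   # W/m^2, standard solar irradiance
PV_EFF = 0.15     # assumed photovoltaic efficiency
H2_EFF = 0.10     # assumed solar-to-hydrogen efficiency

def hybrid_output(split):
    """split = fraction of the harvest routed to hydrogen production."""
    electricity = SUN_IN * PV_EFF * (1 - split)   # W/m^2, usable now
    hydrogen = SUN_IN * H2_EFF * split            # W/m^2, stored as fuel
    return electricity, hydrogen

for split in (0.0, 0.5, 1.0):
    e, h = hybrid_output(split)
    print(f"split {split:.1f}: {e:6.1f} W/m^2 electricity, {h:6.1f} W/m^2 as H2")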

Illustration: Berkeley Lab/JCAP

The researchers believe that the photochemical part can be specialized for other reactions, e.g., to neutralize air pollution.


  1. Peter Fairley, This Photocell Generates Both Power and Hydrogen, in IEEE Spectrum – Energywise. 2018. https://spectrum.ieee.org/energywise/green-tech/solar/hybrid-photocell-generates-power-and-hydrogen
  2. Gideon Segev, Jeffrey W. Beeman, Jeffery B. Greenblatt, and Ian D. Sharp, Hybrid photoelectrochemical and photovoltaic cells for simultaneous production of chemical fuels and electrical power. Nature Materials, 17 (12):1115-1121, 2018/12/01 2018. https://doi.org/10.1038/s41563-018-0198-y