Tag Archives: Charles Q. Choi

Byzantine agreement algorithm for a Better Bitcoin?

One of the few actual innovations in Nakamotoan cryptocurrencies is the “consensus protocol”, which is a fully distributed algorithm to solve the classic Byzantine agreement problem [3].  The challenge is to reach agreement, or at least consensus, when there are many messages and it is impossible to know which messengers are honest and which are dishonest.

Emperor Nakamoto’s new clothes involve broadcasting proposed updates to all participating nodes. Each node checks the validity (using cryptographic hashes and signatures), and accepts an update that agrees with its own history.  Basically, all participating nodes “vote” on the results, and in the event of alternative proposals, the one with the most votes is taken as the consensus of the network.
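As a toy illustration (not Nakamoto’s actual protocol, with names invented for exposition), the “voting” step amounts to something like this:

```python
from collections import Counter

def nakamoto_vote(node_histories):
    """Toy model of Nakamotoan consensus: every participating node
    'votes' for the version of the ledger it holds, and the version
    with the most votes is taken as the consensus of the network.
    Note the cost: every node must be heard from."""
    votes = Counter(node_histories)
    consensus, count = votes.most_common(1)[0]
    return consensus, count

# Ten nodes, two competing versions of the ledger.
nodes = ["A"] * 7 + ["B"] * 3
print(nakamoto_vote(nodes))   # ('A', 7) -- 'A' becomes the accepted record
```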

It should be clear that this simple protocol is also extremely conservative with a small ‘c’.  Any record that is accepted by this consensus protocol is surely well supported. It will have been confirmed thousands of times over, eventually, by all nodes.  (This is a fine semantic point, because the definition of “participating” is that you accept the consensus up to now, which is somewhat circular logic.  Everyone who counts is in agreement because only those who agree really count.)

It should also be clear that Nakamoto’s approach does not scale well.  The number of messages and decisions is linear with the size of the network, and the network is intended to be very large to make cheating difficult.  (To dictate the result you need 51% of the votes—which is more difficult when there are a large number of votes.)

The upshot is that classic Nakamotoan consensus is very expensive and takes a long time, and it becomes more expensive and slower as the network grows. In short, Bitcoin isn’t scalable, and probably isn’t sustainable.

(This result is no surprise to anyone who has studied computer science.  As a matter of fact, you can learn a lot in college, if you stick with it and take it seriously.)


This summer researchers at Ecole polytechnique fédérale de Lausanne (EPFL) report a suite of probabilistic algorithms that could replace Nakamotoan consensus [2].  The basic idea is to use a probabilistic sample of nodes, rather than all of them.  Just as probability sampling can reliably get very near the result of a complete canvass, these algorithms make it possible to achieve confidence in a blockchain by polling only a fraction of the whole network for each decision.
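Here is a sketch of the underlying statistical idea (an illustration only, not the EPFL protocol): poll a random sample of nodes and trust the sample’s verdict.

```python
import random

def sample_verdict(node_histories, sample_size):
    """Poll a random sample of nodes instead of the whole network.
    With overwhelming probability the sample majority matches the
    true network majority -- the statistical idea behind sampled
    Byzantine broadcast (an illustration, not the EPFL algorithm)."""
    sample = random.sample(node_histories, sample_size)
    return max(set(sample), key=sample.count)

# A network of 10,000 nodes, 70% of which hold version 'A'.
nodes = ["A"] * 7000 + ["B"] * 3000
wrong = sum(sample_verdict(nodes, 100) != "A" for _ in range(1000))
print(f"sample of 100 disagreed with the full canvass in {wrong}/1000 trials")
```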

These Byzantine algorithms scale as the square root of the number of nodes, and use negligible computation and power resources [1].  Clearly, you could make a better Bitcoin with these algorithms.  It would be just as secure, just as decentralized, and way more sustainable, with way less latency.  So, as Charles Q. Choi and others imply, this could be a “new alternative to Bitcoin”.
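Back of the envelope, assuming per-node communication of roughly O(N) for full broadcast versus O(√N) for the sampled version:

```python
import math

for n in (10_000, 1_000_000, 100_000_000):
    full = n                       # every node hears from every other node
    sampled = round(math.sqrt(n))  # poll only ~sqrt(N) peers
    print(f"N={n:>11,}: ~{full:,} messages per node vs ~{sampled:,} sampled")
```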

This technology joins many other variations on Nakamoto’s ideas, including permissioned blockchains, zero-knowledge blockchains, and zillions of alt-coins.

The question is, would this new thing be “Bitcoin”, or something else?  It would do the same thing, just as the plethora of cryptocoins and blockchains do.  But could you still call it “bitcoin”?


Some enthusiasts might well want a better engineered Bitcoin.  We’ve seen many proposals for “2.0”. But experience has shown that something this basic would not be supported by many Nakamotoans (e.g., this, this, this, this).

There are many reasons for this resistance, most of them non-technical.

First, Nakamoto (2009) [3] is scripture, it is the very definition of what Bitcoin is.  Whatever these Byzantine protocols are, they simply aren’t Nakamotoan.  (Though, Nakamoto’s protocol is probably a degenerate case of the general Byzantine Reliable Broadcast family.)

Second, the probabilistic protocols are complicated and require a certain level of “trust” in the mathematics and the laws of chance.  Nakamoto’s simple, brute force approach is easy to understand and requires little math to believe in its correctness.  For those concerned with “trust”, it may be difficult to lean on such relatively difficult math.  (What if those sneaky Swiss guys are pulling a fast one, and there is a back door for “the man” to secretly control the results?)

Third, this protocol would surely scramble the mining economy, at least in the short run.  I think it would come out with similar results for everyone, but a lot of current investments would probably be misplaced, and current business models upset.  There is little chance that miners would agree to such a radical reworking of Bitcoin, even though it would probably benefit everyone in the long run.

And finally, there is an intangible value in the inefficiency of Bitcoin.  For those who view Bitcoin as “virtual gold”, it is psychologically good for Bitcoin to be expensive and inconvenient, just like gold is expensive and inconvenient.  For that matter, gold bugs are happy with poor scaling and long latency.  This keeps Bitcoin “scarce” and therefore, in this mindset, “valuable”.


So this Swiss study joins many other schemes for how you might redo Bitcoin to get a better system.  However, it is much more likely to become a competitor to Bitcoin than to be incorporated into the Nakamotoan Empire.


  1. Charles Q. Choi, New Alternative to Bitcoin Uses Negligible Energy, in IEEE Spectrum – Energywise. 2019. https://spectrum.ieee.org/energywise/computing/software/bitcoin-alternative
  2. Rachid Guerraoui, Petr Kuznetsov, Matteo Monti, Matej Pavlovic, and Dragos-Adrian Seredinschi, Scalable Byzantine Reliable Broadcast (Extended Version). arXiv:1908.01738, 2019. https://arxiv.org/abs/1908.01738
  3. Satoshi Nakamoto, Bitcoin: A Peer-to-Peer Electronic Cash System. 2009. http://bitcoin.org/bitcoin.pdf

 

Cryptocurrency Thursday

Soft Logic for Soft Robots

In the past decade, there has been a lot of work on “soft robots”, crawly devices that are flexible and squishy instead of rigid and mechanical.  (E.g., this, this, this, this, this, this, this, this) Think octopus rather than Dalek (Dr. Who has dealt with both.)  There is a nest of these critters coming out of Harvard.

This spring researchers there report on “soft logic” that can provide the control systems for soft robots [2].  Following in the general design of earlier soft robots, this is based on rubbery materials and air pressure.  This technology replaces metal valves and electronic circuit boards—the main “hard” pieces in most previous “soft” robots.  In principle, it should be possible to create a complete working robot entirely of rubber-like material, liquid, and gas.

At first, I thought this was basically pneumatic or hydraulic logic, which is hardly new. But it is actually soft pneumatic switches, with pressure signals controlling other pressure signals and no electronics in the loop.  Looking at the diagrams, they look a lot like, well, push buttons, pushed by pneumatic fingers. Connecting various combinations of push buttons can implement whatever digital logic you want: AND, OR, etc.  They describe digital clocks and memory, too.
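In the abstract, any two-state valve is a switch, and switches compose into Boolean logic: series connections give AND, parallel connections give OR, and a normally-open valve that closes under pressure gives NOT. A minimal sketch of the composition (pure Boolean algebra, nothing specific to the Harvard valves):

```python
# Model each soft valve as a switch: pressurized (True) or vented (False).

def AND(a, b): return a and b   # two valves in series: both must be open
def OR(a, b):  return a or b    # two valves in parallel: either path passes
def NOT(a):    return not a     # pressure closes a normally-open valve

# Gates compose into anything, e.g. XOR, the heart of a binary adder:
def XOR(a, b): return OR(AND(a, NOT(b)), AND(NOT(a), b))

for a in (False, True):
    for b in (False, True):
        print(int(a), int(b), "->", int(XOR(a, b)))
```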

The demonstration is “macroscale” (centimeter scale) logic.  The logic switches state in something like half a second (500 ms, i.e., 500,000 µs).  In other words, this is not even in the ballpark of “hard” electronics, which switch hundreds of millions of times faster, and it is not likely to be useful until the density is thousands or millions of times higher.

Is that even possible?  I’m no expert on pneumatics, but I suspect that you can’t build nano- or even micrometer-scale devices that work on air pressure.  And I’m pretty sure you can’t get even kilohertz clock ticks with pneumatic logic.

So, are these “digital circuits” more useful than, say, the logic in a mechanical device such as an old coin-operated machine?  Nah. And I kind of wonder if this could ever be much use.

On the other hand, if you build really, really tiny “hard” circuits, on the scale of a grain of sand, a swarm of them could work fine in a “squishy” system.  Sort of electric ink, with some “smart dots” mixed in.  It could be painted on, or sloshing around in a balloon (an “organ”).


  1. Charles Q. Choi, Soft Circuits to Control Soft Robots, in IEEE Spectrum – Tech Talk. 2019. https://spectrum.ieee.org/tech-talk/computing/hardware/soft-circuit
  2. Daniel J. Preston, Philipp Rothemund, Haihui Joy Jiang, Markus P. Nemitz, Jeff Rawson, Zhigang Suo, and George M. Whitesides, Digital logic for soft devices. Proceedings of the National Academy of Sciences, 116 (16):7750, 2019. http://www.pnas.org/content/116/16/7750.abstract

 

Electric Drone Raises The Bar

Here’s one good thing that has come out of Tesla—an alum who is doing serious development for electric UAVs.

“It never made sense to me that it was possible to have a battery-powered car that could drive more than 300 miles but not have a battery-powered drone that could fly more than about 20 minutes.” ([1], quoting founder Spencer Gore)

The big news is that the development started with the batteries and built the aircraft up from there, rather than slapping a battery into an airframe.  The design builds the Li-ion battery cells into the structure itself.

(from Impossible Aerospace web page)

The result is superior performance: a claimed flight time of two hours.  That is in the ballpark of, say, a similar gasoline-powered craft.
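The endurance arithmetic is simple: flight time is roughly usable battery energy divided by average power draw. The numbers below are hypothetical placeholders for illustration, not Impossible Aerospace’s actual specs:

```python
# Hypothetical numbers, for illustration only -- not the actual specs.
battery_energy_wh = 1300   # assumed usable pack energy (watt-hours)
average_power_w = 650      # assumed average power draw in flight (watts)

endurance_h = battery_energy_wh / average_power_w
print(f"endurance ~ {endurance_h:.1f} hours")   # ~2.0 hours

# The design lesson: building the pack into the airframe raises the
# battery's share of takeoff mass, increasing energy on board without
# a proportional increase in structural weight.
```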

I think that many developers have been so entranced with how easy it is to create cheap radio-controlled helicopters that they have treated power supplies as only secondary.  This craft will probably raise the bar for the design of these small drones.

Excellent work.


  1. Charles Q. Choi, New Electric Drone Has Groundbreaking Flight Time, in IEEE Spectrum – Energywise. 2018. https://spectrum.ieee.org/energywise/aerospace/aviation/new-electric-drone-has-groundbreaking-flight-time

 

Robot Wednesday

Real Quantum Blockchain

More WTF-Science!

Nakamotoan blockchains have a certain mystical quality about them, but they are surely built on von Neumann, or at least Turing, machines, no?  Plain old physics.  Time runs one way. No spooky action at a distance.

At base, the general goal of Nakamoto is to create immutable data structures, permanent across time.  No action in the future can ever change the record of a past action. Another way of saying that is that the data today is necessarily tied to the data at the original moment of creation.

This is, in a way, a form of time travel, isn’t it?  When I access the data, I want to access it at the exact moment of creation (or at least, the moment when it was “preserved” or “frozen” or whatever).

From this perspective, cryptographic schemes are mathematically simulating this time travel, by attempting to tunnel through the future in a sealed time corridor, i.e., the cryptographically signed data.  All the rigmarole of Nakamotoan signatures and “consensus” is a mathematical dance designed to make an (almost) unbreakable virtual link between the data and all future incarnations of it.
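Concretely, that sealed corridor is a hash chain: each block commits to the hash of its predecessor, so altering any past block breaks every link after it. A minimal sketch:

```python
import hashlib

def block_hash(prev_hash: str, data: str) -> str:
    """Each block commits to its predecessor's hash."""
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

# Build a tiny chain of three blocks.
chain, prev = [], "0" * 64            # all-zero "genesis" hash
for data in ["tx1", "tx2", "tx3"]:
    prev = block_hash(prev, data)
    chain.append((data, prev))

# Tampering with the first block changes its hash, which changes every
# later hash -- the forgery is immediately detectable.
forged = block_hash("0" * 64, "tx1-forged")
print(forged == chain[0][1])          # False: the corridor is broken
```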

This dance is all necessary because we can’t have real time travel.

Or can we?


This month, researchers in New Zealand report a conceptual design for a blockchain using quantum time-entanglement [2].

“Perhaps more shockingly, our encoding procedure can be interpreted as non-classically influencing the past; hence this decentralized quantum blockchain can be viewed as a quantum networked time machine.” ([2], p. 1)

A time machine?!?   Now this is what we were thinking of when we were first imagining the blockchain!

The concept involves “entanglement in time between photons that do not simultaneously coexist”, which is even spookier action at a distance.

The details are beyond my puny understanding of quantum physics, but the paper describes a system that encodes data in a way that is not just difficult to tamper with, but impossible to tamper with.  Furthermore, it isn’t even possible to try to tamper with any blocks except the latest, because the photons no longer exist!

“in our quantum blockchain, we can interpret our encoding procedure as linking the current records in a block, not to a record of the past, but linking it to the actual record in the past, which does not exist anymore.”

Or, as they say, “…measuring the last photon affects the physical description of the first photon in the past, before it has even been measured. Thus, the “spooky action” is steering the system’s past” (quoting reference 22)

Assuming this concept is valid, it not only solves the challenge that quantum computing poses for conventional blockchains, it is actually a direct implementation of the distributed “time machine” that classical blockchains only simulate.

Very cool.

And very, very spooky.


  1. Charles Q. Choi, Quantum Blockchains Could Act Like Time Machines, in IEEE Spectrum – Tech Talk. 2018. https://spectrum.ieee.org/tech-talk/computing/networks/quantum-blockchains-could-act-like-time-machines
  2. Del Rajan and Matt Visser, Quantum Blockchain using entanglement in time. arXiv:1804.05979, 2018. https://arxiv.org/abs/1804.05979

 

 

Cryptocurrency Thursday

Cool Solar Hydrogen Extractor

Solar energy can be used to generate electricity directly (photovoltaics), and as a source of heat, either for direct use or to generate electricity. In any use case, solar energy is available only when sunlight is available. For this reason, technologies for storing solar-generated energy for later use are increasingly important.  Hence the intense interest in battery technology.

Another approach is to use solar power to generate clean fuel, which can be burned when and where needed. Theoretically, an ideal candidate is Hydrogen, which is a potent fuel and can be generated from electrolysis of water.

In practice, generating Hydrogen from water is not so simple, because water must be pumped to the separator (which takes energy and is prone to leaks and breakdowns), and because purifying water is expensive and energy intensive.

This winter, a group at Columbia University reports an interesting new technique for electrolyzing salt water to separate Hydrogen and Oxygen [2]. The system is designed to use PV electricity, and has no pumps or moving parts.

Key to the design is that the products (the Hydrogen and Oxygen) are separated by the geometry of the electrodes. This is simple and requires no special membrane to filter the gasses. It also makes the system much more robust and less prone to clogging up with impurities and precipitated trace materials.

The device uses sunlight to generate electricity, and the electrolysis creates bubbles of H2 and O2 in separate columns.  The O2 can be vented, and the H2 captured. The process is continuous as long as the sun shines.

They envision floating stations at sea, which load the accumulated H2 into tanker ships for delivery to users.

Illustration: Justin Bui/Columbia Engineering. An artist’s rendering shows a hypothetical “solar fuels rig” floating on the open sea.

Cool.

There is work to do yet.  It will be important to avoid generating Chlorine gas (a possible byproduct of the salt that is abundant in the ocean), and to scale the design up to industrial-size devices.
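For reference, the relevant half-reactions (standard electrochemistry, not specific to this paper): the cell wants to evolve Oxygen at the anode, but in seawater Chlorine evolution competes.

```latex
\begin{align*}
\text{Cathode:} &\quad 2\,\mathrm{H_2O} + 2e^- \rightarrow \mathrm{H_2} + 2\,\mathrm{OH^-} \\
\text{Anode (desired):} &\quad 2\,\mathrm{H_2O} \rightarrow \mathrm{O_2} + 4\,\mathrm{H^+} + 4e^- \\
\text{Anode (competing):} &\quad 2\,\mathrm{Cl^-} \rightarrow \mathrm{Cl_2} + 2e^-
\end{align*}
```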


  1. Charles Q. Choi, Floating Solar Rig Produces Hydrogen Fuel, in IEEE Spectrum – Energywise. 2017. https://spectrum.ieee.org/energywise/energy/renewables/floating-solar-fuel-rigs-could-produce-hydrogen-fuel
  2. Jonathan T. Davis, Ji Qi, Xinran Fan, Justin C. Bui, and Daniel V. Esposito, Floating membraneless PV-electrolyzer based on buoyancy-driven product separation. International Journal of Hydrogen Energy, 2017. https://www.sciencedirect.com/science/article/pii/S036031991734466X

100% Renewable Energy Worldwide?

Following up on a highly controversial 2015 article which made a case that the US could convert to 100% renewable energy by 2050 [2, 4, 5], the same Stanford-led group has assembled a “roadmap” for 139 countries to go renewable [3].

The new work proceeds country by country, to show that power production can be 80% converted to “wind, water, and sunlight (WWS)” by 2030, and 100% by 2050. The argument is based on publicly available data from the International Energy Agency, and on their estimates of plausible conversion scenarios.

Basically, the study works out, for each country and each category of generation, what could be generated. The headline result is that in all cases there is enough potential that 100% of the power supply could come from wind, water, and sunlight.

The “roadmap” part of the story is a set of broad-brush calculations of the rate of installation of new technologies. The headline here is that it’s clearly possible to deploy the needed equipment.

The article sketches important reasons why this would be worth doing. Phasing out carbon emissions would be a good idea (at least for everyone except the oil and coal business), and these technologies are generally better for public health. They also find lots of jobs and other economic benefits, including stable energy prices (these technologies do not use fuels).

The controversy over the earlier work focused on the operational question of matching supply and demand. Many of these resources (notably sunlight) are variable, which means that they are not necessarily available when power is needed. The 2017 roadmap doesn’t really deal with these issues; it is all about aggregate supply.  The controversy remains, though the bulk of this article won’t be disputed on those grounds.

In a sense, this study is nothing new. The article cites a dozen other studies, and there are plenty of other versions of this same argument. This particular study is notable for the uniform methodology and global coverage.

There are a lot of technical assumptions under the cover of the headline numbers, and I am in no position to evaluate them. (From past experience, there will be detailed critiques soon enough.) But even if these estimates are off by, say, 50% (or are only half implemented), that’s still a huge deal. 50% renewable by 2050 would have a huge impact, and would probably reach a tipping point where the new technologies (e.g., electric vehicles) simply push out older technologies.

Of course, there is a big difference between “feasible” and “actually trying to do it”. In this case, there are political and social barriers to such a roadmap. If nothing else, the beneficiaries (the general public and the planet) do not control the decision making of the implementers (companies and agencies). And in the case of the US, the ruling party is in the thrall of both anti-science ideologues and the coal industry. Adoption of WWS will have strong headwinds for the foreseeable future.

However, this study, with its fellows, makes the case that this is possible and probably a good idea. You might argue whether WWS is the best thing to do or not, but I don’t think you can really say that it would be impossible to go renewable.


  1. Charles Q. Choi, A Road Map to 100 Percent Renewable Energy in 139 Countries by 2050, in IEEE Spectrum – Energywise. 2017. https://spectrum.ieee.org/energywise/energy/renewables/100-percent-renewable-energy-for-139-countries-by-2050
  2. Christopher T. M. Clack, Staffan A. Qvist, Jay Apt, Morgan Bazilian, Adam R. Brandt, Ken Caldeira, Steven J. Davis, Victor Diakov, Mark A. Handschy, Paul D. H. Hines, Paulina Jaramillo, Daniel M. Kammen, Jane C. S. Long, M. Granger Morgan, Adam Reed, Varun Sivaram, James Sweeney, George R. Tynan, David G. Victor, John P. Weyant, and Jay F. Whitacre, Evaluation of a proposal for reliable low-cost grid power with 100% wind, water, and solar. Proceedings of the National Academy of Sciences, 114 (26):6722-6727, June 27, 2017 2017. http://www.pnas.org/content/114/26/6722.abstract
  3. Mark Z. Jacobson, Mark A. Delucchi, Zack A. F. Bauer, Savannah C. Goodman, William E. Chapman, Mary A. Cameron, Cedric Bozonnat, Liat Chobadi, Hailey A. Clonts, Peter Enevoldsen, Jenny R. Erwin, Simone N. Fobi, Owen K. Goldstrom, Eleanor M. Hennessy, Jingyi Liu, Jonathan Lo, Clayton B. Meyer, Sean B. Morris, Kevin R. Moy, Patrick L. O’Neill, Ivalin Petkov, Stephanie Redfern, Robin Schucker, Michael A. Sontag, Jingfan Wang, Eric Weiner, and Alexander S. Yachanin, 100% Clean and Renewable Wind, Water, and Sunlight All-Sector Energy Roadmaps for 139 Countries of the World. Joule, August 23, 2017. http://dx.doi.org/10.1016/j.joule.2017.07.005
  4. Mark Z., Jacobson. Mark A. Delucchi, Mary A. Cameron, and Bethany A. Frew, Low-cost solution to the grid reliability problem with 100% penetration of intermittent wind, water, and solar for all purposes. Proceedings of the National Academy of Sciences, 112 (49):15060-15065, December 8, 2015 2015. http://www.pnas.org/content/112/49/15060.abstract
  5. Mark Z. Jacobson, Mark A. Delucchi, Mary A. Cameron, and Bethany A. Frew, The United States can keep the grid stable at low cost with 100% clean, renewable energy in all sectors despite inaccurate claims. Proceedings of the National Academy of Sciences, 114 (26):E5021-E5023, June 27, 2017 2017. http://www.pnas.org/content/114/26/E5021.short

 

Virtual Reality for Animals

For several years now, Andrew Straw and colleagues at Albert-Ludwigs-University Freiburg have been doing interesting development of Virtual Reality for non-human species, applying the basic ideas of (mainly visual) VR to other animals.

This is trickier than it sounds, because VR depends on a deep understanding of the subjective experience of the world through vision. For humans, we have both research and extensive experience to guide development. For other species, we have no such experience, and it is much more difficult to understand the individual’s subjective experience.

At the same time, if you can make VR work, it has a lot of advantages for learning about non-human perception. Conventional experiments require the animal to be restrained while test stimuli are presented. Even if not uncomfortable, this is an unnatural situation, and it precludes a fully natural response to the stimuli.

An all-around VR experiment that allows the animal to move naturally is a much more realistic situation, one that can elicit normal behavior. In addition, a VR space can be configured and reconfigured in many ways, and can include an extensive virtual world. These capabilities open the way for extensive experimentation without seriously harming the animals.

Building on earlier work which created a VR experience for flies, the research group has extended and generalized the system so it can be used with other species. In their recent paper they report on experiments with flies, mice, and fish [2]. (Land, sea, and air—get it?)

The FreemoVR system exploits contemporary 3D video game technology to rapidly render a realistic scene all around the animal. It also uses computer vision to non-invasively track the position of the animal. The system continuously re-renders the correct perspective view of the virtual world as the animal moves.
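Conceptually, the render loop is just tracked-viewpoint rendering, the same trick as a CAVE: re-render the scene each frame from wherever the animal’s head actually is. A sketch, with the tracker and scene objects being hypothetical stand-ins rather than the FreemoVR API:

```python
import numpy as np

def look_at(eye, target, up=np.array([0.0, 0.0, 1.0])):
    """Build a view matrix from the tracked eye position -- the core of
    CAVE-style VR: the scene is re-rendered from the viewer's actual
    location, so the virtual world stays registered to one viewpoint."""
    f = target - eye
    f = f / np.linalg.norm(f)          # forward
    s = np.cross(f, up)
    s = s / np.linalg.norm(s)          # right
    u = np.cross(s, f)                 # true up
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = s, u, -f
    view[:3, 3] = -view[:3, :3] @ eye  # translate world to eye
    return view

# Hypothetical per-frame loop (tracker and scene are stand-ins):
# while experiment_running:
#     eye = tracker.current_position()   # computer-vision pose estimate
#     gaze = tracker.gaze_direction()
#     scene.render(view_matrix=look_at(eye, eye + gaze))
```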

To prove out the system with different species, a species-appropriate VR world has to be created for each.

For a given species, the virtual world needs to be designed to reflect the natural environment of the animal, and to be rendered for its sensory apparatus. A fly’s world is different from a fish’s, and their eyes see differently.

The computer vision also needs to be trained to recognize the body pose and motion for each species. The VR depends on accurately tracking both the position and where the animal is looking.

In earlier work, they showed how the system can reveal how the fly uses visual cues to navigate. The current work illustrates other creative experiments. For instance, the fish were presented choices in the form of “teleportation” ports, which instantly shifted the fish to a new scene. (Apparently, this didn’t distress the fish as much as it would upset me!)

This is a classic single-user VR system that presents a world registered to one point of view. It isn’t suitable for experiments with multiple animals at the same time, because the viewpoint is correct for only one of them. It is, as they say, a CAVE for animals.

However, they are able to present some group or even “social” situations, by projecting other animals nearby. And, in the case of the fish, they simulate a school of fish, and the subject swims along with them. These effects make it possible to explore interactions, at least based on visual cues.

Indeed, they also presented a world full of cartoonish “space invaders”, which did seem to worry the fish a bit.

Image: Straw Lab

 

The technology is open source, but kind of complicated, building on video game VR and computer vision libraries. They also use the Robot Operating System (ROS) as the framework, presumably for its modular, real-time message passing. (Despite the name, ROS is middleware rather than a true operating system.)

Cool stuff!


  1. Charles Q. Choi, Virtual Reality Platform Created For Lab Animals, in IEEE Spectrum – The Human OS. 2017. http://spectrum.ieee.org/the-human-os/computing/hardware/virtual-reality-platform-created-for-lab-animals
  2. John R. Stowers, Maximilian Hofbauer, Renaud Bastien, Johannes Griessner, Peter Higgins, Sarfarazhussain Farooqui, Ruth M. Fischer, Karin Nowikovsky, Wulf Haubensak, Iain D. Couzin, Kristin Tessmar-Raible, and Andrew D. Straw, Virtual reality for freely moving animals. Nature Methods, advance online publication, August 21, 2017. http://dx.doi.org/10.1038/nmeth.4399