Category Archives: science and technology

FOAM: Decentralized Localization Using Ethereum

FOAM is a technology that seeks to use blockchain and Ethereum contracts to create mapping and location-based services.  The project wants to address a complex of perceived problems: GPS is spoofable, maps are owned by big actors, and location services aren’t private.  In addition, they think that “people lie about their location” (Ryan John King, the co-founder and CEO of FOAM, quoted in Coindesk [3]).  The solution deploys blockchain technology and Nakamotoan philosophy [2].

Looking at their materials, it is clear that FOAM is mainly focused on replicating Internet location-based services, not on navigation or engineering or geoscience.  The geospatial model is a two-dimensional map of the surface of the Earth.

The location service depends on many local low-power radio beacons instead of satellites. They imagine an ad hoc mesh of locally operated beacons, which are recognized and validated via Nakamotoan-style consensus rather than a central authority (such as a government space agency). These beacons are used to triangulate positions.  Good behavior and trustworthiness of the beacons are supposedly assured by cryptocurrency tokens, in the form of incentives, notably buy-ins and security deposits.
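
FOAM publishes no reference implementation that I’ve seen, but the geometry itself is standard. As a minimal sketch (my own illustration, not FOAM’s protocol), here is how a position can be recovered from distance estimates to a few beacons at known locations, via linearized least-squares trilateration:

```python
import numpy as np

def trilaterate(beacons, distances):
    """Least-squares 2D position fix from three or more beacon ranges.

    beacons   : (n, 2) array of known beacon coordinates (meters)
    distances : (n,) array of measured distances to each beacon (meters)
    """
    beacons = np.asarray(beacons, dtype=float)
    d = np.asarray(distances, dtype=float)
    # Subtracting the first range equation from the rest removes the
    # quadratic terms, leaving a linear system A @ [x, y] = b.
    A = 2.0 * (beacons[1:] - beacons[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(beacons[1:] ** 2, axis=1)
         - np.sum(beacons[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Three beacons at known spots; ranges measured to an unknown point.
beacons = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
true_pos = np.array([30.0, 40.0])
dists = [np.linalg.norm(true_pos - b) for b in np.asarray(beacons)]
print(trilaterate(beacons, dists))  # ~ [30. 40.]
```

The hard part, of course, is not the algebra; it is trusting the beacons’ self-reported locations and the measured ranges, which is exactly what the token scheme is supposed to handle.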

They imagine this to be used to construct datasets of “Points of Interest”, which are “where are the stores, cafes, restaurants and malls, where a fleet of vehicles in a ride sharing program like Uber should be anticipating if demand is shifting or surging, or which traffic bottlenecks drivers should avoid on an app such as Waze.”  These are stored and validated through a decentralized protocol: “[G]ranting control over the registries of POI to locally-based markets and community forces, allowing the information provided to be validated by those who contribute to the relevant locality.”

These datasets are to be created through bottom-up efforts, presumably incentivized by the desire to operate local services. “FOAM hopes that the Cartographers and users will contribute the necessary individual work, resources, and effort themselves to contribute to the ongoing community-driven growth and supplement this important cartography project.”

Interestingly, the crypto token-based incentive system relies on negative incentives, namely buy-ins and “security deposits” that can be forfeited by consensus. I’m not sure I’ve seen another Nakamotoan project with this sort of punishment-based (dis)incentive.  (I’ll note that psychologists generally find that the threat of punishment does not engender trust.)
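
The whitepaper doesn’t give contract code here, so the following is only a toy sketch of the general stake-and-slash pattern (all names, quorums, and thresholds are hypothetical, not FOAM’s actual contract):

```python
class BeaconRegistry:
    """Toy stake-and-slash registry; hypothetical, not FOAM's contract."""

    MIN_DEPOSIT = 100  # hypothetical buy-in, in tokens

    def __init__(self):
        self.deposits = {}    # beacon_id -> staked tokens
        self.challenges = {}  # beacon_id -> set of challenger ids

    def register(self, beacon_id, deposit):
        # The buy-in: no deposit, no participation.
        if deposit < self.MIN_DEPOSIT:
            raise ValueError("deposit below minimum buy-in")
        self.deposits[beacon_id] = deposit

    def challenge(self, beacon_id, challenger_id):
        # Anyone who observes misbehavior can lodge a challenge.
        self.challenges.setdefault(beacon_id, set()).add(challenger_id)

    def resolve(self, beacon_id, quorum=3):
        # The punishment: with enough independent challengers,
        # the deposit is forfeited (redistribution rules omitted).
        if len(self.challenges.get(beacon_id, ())) >= quorum:
            self.challenges.pop(beacon_id, None)
            return self.deposits.pop(beacon_id, 0)
        return 0
```

Note that everything interesting (who counts as “independent”, how the quorum is set, who gets the slashed tokens) is exactly where the mechanism-design risk lives.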

Obviously, this entire concept will depend on the development of the localization network and the datasets of “Points of Interest”.  As far as I can see, realizing this is based on “hope” that people will contribute. I’d call this “faith-based engineering”.

We can pause to reflect on the irony of this “trustless” system that appears to be entirely based on “hope” and the threat of punishment.

As for the actual technology, it is, of course, far short of a “map of the world”.  The local beacons are fine for a dense urban setting, but there is little hope of coverage in open space, and no chance that it will be useful at sea, up in the air, inside significant structures, or underground. Sure, there are ways to deploy beacons indoors and other places, but it isn’t easy, and doesn’t fit the general use cases (Points of Interest).

Ad hoc networks aren’t immune to jamming or interference, either, and are essentially defenseless against determined opposition.  In classic fashion, the protocol “routes around” interference, discarding misbehaving nodes and corrupted data. Unfortunately, this means that the response to a determined and sustained attack is to shut down.

The incentive system is unusual, though the notion of a “security deposit” is widely used. How well will it work?  (How well do security deposits work?)  It’s hard to say, and there doesn’t seem to be much analysis of potential attacks.  The notion that the loss of security deposits and other incentives will guarantee honest and reliable operation remains a theoretical “hope”, with no evidence backing it.

The system depends on a “proof of location”, but it isn’t clear just how this will work in a small, patchy network. In particular, assumptions about the security of the protocol may not be true for small, local groups of nodes—precisely the critical use case for FOAM.

Finally, I’ll note that the system is built on Ethereum, which has had numerous problems. To the degree that FOAM uses Ethereum contracts, we can look forward to oopsies, as well as side effects from whatever emergency forks become necessary.

Even if there are no serious bugs, Ethereum is hardly designed for real time responses, or for datasets at the scale of “the whole world”.  Just what requirements will FOAM put on the blockchain, consensus, and Ethereum virtual machine?  I don’t know, and I haven’t seen any analysis of the question.

This is far from an academic question.  Many location services are extremely sensitive to time, especially to lag.  Reporting “current position” must be really, really instantaneous.  Lags of minutes or even seconds can render position information useless.
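
The arithmetic is brutal, since the positional error from lag alone is just speed times delay. A quick back-of-the-envelope calculation:

```python
# Stale-position error = speed * lag (illustrative speeds).
cases = [("pedestrian", 1.4), ("city car", 14.0), ("airliner", 250.0)]
for label, speed_mps in cases:
    for lag_s in (1, 15, 60):
        print(f"{label:>10}: {lag_s:>2}s lag -> {speed_mps * lag_s:7.0f} m of error")
```

Even a one-second lag puts a moving car more than a dozen meters from its reported position; with block times measured in tens of seconds, the “current position” on the chain would describe where a vehicle used to be.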

Can a blockchain based system actually deliver such performance?

Overall, FOAM really is “a dream”, as Alyssa Hertig says [3].  A dream that probably will never be realized.


  1. Foamspace Corp, FOAM Whitepaper. Foamspace Corp, 2018. https://foam.space/publicAssets/FOAM_Whitepaper_May2018.pdf
  2. Foamspace Corp, The Consensus Driven Map of the World, in FOAM Space. 2017. https://blog.foam.space/
  3. Alyssa Hertig (2018) FOAM and the Dream to Map the World on Ethereum. Coindesk, https://www.coindesk.com/foam-dream-map-world-ethereum/


Cryptocurrency Thursday

3D Football on Table Top – From 2D Video

It shouldn’t be possible, but there it is.  From flat, 2D video, a 3D movie projected on a table top.  Cool!

I think I see how this works, at least approximately.  The developers report this summer on the techniques that extract 3D positions of players from 2D video, as well as “pose” information [1].   This data is then used to construct a 3D model that represents the action in the video.

The demos are pretty remarkable.

The paper reports that the technique relies on a considerable amount of domain knowledge. The soccer pitch and game are very well defined, so there are many clear cues to the camera position and orientation of the scene.  (For example, if the corners or goal are visible, their geometry is known very precisely.)
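
The paper has its own calibration pipeline, but the underlying idea is standard camera pose estimation from known landmarks. A rough sketch of that idea in OpenCV terms (the pitch dimensions are the standard 105 x 68 m; the pixel coordinates and intrinsics are made-up illustrations, not values from the paper):

```python
import numpy as np
import cv2

# World coordinates (meters) of known landmarks: the four pitch corners,
# on the ground plane z = 0, for a standard 105 x 68 m field.
object_pts = np.array([
    [0.0, 0.0, 0.0],
    [105.0, 0.0, 0.0],
    [105.0, 68.0, 0.0],
    [0.0, 68.0, 0.0],
], dtype=np.float32)

# Where those corners were detected in the frame (pixels; illustrative).
image_pts = np.array([
    [102.0, 610.0],
    [1180.0, 598.0],
    [998.0, 222.0],
    [310.0, 230.0],
], dtype=np.float32)

# A guessed pinhole intrinsic matrix for a 1280x720 frame (assumed).
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])

# Recover the camera's rotation and translation relative to the pitch.
ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
print(ok, rvec.ravel(), tvec.ravel())
```

Once the camera pose is known, every pixel ray can be intersected with the ground plane, which is what makes placing the players in field coordinates tractable.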

They also used machine learning to recognize the players.  Again, this exploits constraints on the positions and clothing of players, as well as the contrasting visual background of the field.  In addition, visual confusions can be untangled by knowledge of human motion, and by tracking each player through time.

In short, they “cheat” like mad, taking advantage of the highly structured scene.

The 3D output involves not just the positions of the players, but also 3D point clouds representing the play.  These are then projected into a 3D virtual scene, frame by frame.  They smooth out the jitter introduced by all this computation and the imprecision of their estimates to generate a reasonable scene.
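
The paper describes its own smoothing; a generic version of the idea, filtering a jittery per-frame position track with a Savitzky-Golay filter, might look like this (synthetic data, purely illustrative):

```python
import numpy as np
from scipy.signal import savgol_filter

# A noisy per-frame estimate of one player's x position (synthetic).
frames = np.arange(120)
true_x = 20.0 + 0.2 * frames                              # player jogging along x
noisy_x = true_x + np.random.normal(0, 0.5, frames.size)  # per-frame jitter

# Fit a low-order polynomial in a sliding window: keeps the motion,
# discards frame-to-frame estimation noise.
smooth_x = savgol_filter(noisy_x, window_length=15, polyorder=2)
print("mean error, smoothed:", np.abs(smooth_x - true_x).mean())
print("mean error, raw:     ", np.abs(noisy_x - true_x).mean())
```

The cost of any such smoothing is that sudden real movements (a sharp cut or tackle) get slightly blurred along with the noise.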

Obviously, there are limitations to this magic.  The 3D scene can only be viewed from more or less the original camera position.  You can move around a little, but you can’t go to the other side of the field—there is no data from that side.

The analysis is imperfect, possibly resulting in drop-outs (even players mysteriously disappearing) or distortion of pose or trajectories.  A big tangle of multiple players will probably be impossible to reconstruct.  And, “We do not model jumping players since we assume that they always step on the ground.”

I don’t think this demonstration operates in real time (i.e., keeping up with a live video feed).  That will require considerable data handling and compression.

Finally, I wonder how this relates to animal 3D vision.  Part of what captured my attention is the way this captures a detailed 3D representation from monocular 2D data.  Humans and other animals extract such 3D models from video also, though I’m pretty sure that the specific techniques used in this demonstration are not exactly what human brains do.

However, it is likely that humans interpreting the complicated 3D scene of a football video may well utilize domain knowledge, as well (if not exactly the same way). We know that humans learn to recognize humans in images, and many viewers surely have deep understanding of football that may guide perception of a video scene.

One thing this software does not use is haptic or proprioception cues—anyone who has played football or even watched football in person probably has learned a lot about the body movements he or she is seeing.  A disembodied neural network will never feel the effort, resistance, or pain of the player’s movements.

This is a pretty neat demo, but I’m not totally sure it is particularly useful. Even if fully developed to give an all-around, faithful 3D version of 2D video in near real time, is this something people want or need?

Given how well we are able to interpret the original 2D video, I have to wonder what additional value the 3D version might have.

Yes, we might be able to choose our own “camera angle”, even in otherwise impractical possibilities such as right on the field.  We might be able to zoom in, fly around, and otherwise get right into the (visual) game.

This might be an entertaining novelty, though who knows how enduring the appeal.

It is possible that such 3D reconstructions might be useful for analysis or training.  However, there is a difference between an interpolated view and the actual original data. It would be difficult to rely on these reconstructions to, say, review a controversial penalty or close call.  When millimeters and split seconds are in question, the accumulated errors in the 3D approximation would make them hopelessly imprecise.

One thing is for sure.  A table top 3D rendering will never replace playing or watching the game in person.


  1. Konstantinos Rematas, Ira Kemelmacher-Shlizerman, Brian Curless, and Steve Seitz, Soccer On Your Tabletop, in IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2018: Salt Lake City. http://grail.cs.washington.edu/projects/soccer/soccer_on_your_tabletop.pdf


Nakamoto’s Fifty One Percent Problem

“As long as a majority of CPU power is controlled by nodes that are not cooperating to attack the network, they’ll generate the longest chain and outpace attackers.” (Nakamoto, 2009 [3])

At the very core of Nakamotoan cryptocurrencies is a consensus protocol that relies on the principle that a decentralized voting system spread across a very large network cannot be manipulated, because the majority of nodes are “honest” and not cooperating to game the voting.

The system works as long as “honest nodes control a majority of CPU power.”

On the other hand, if more than 50% of the CPU power on the network coordinate, they can manipulate Nakamotoan protocols, and produce “dishonest” results.  This is the dreaded “51% attack”. Depending on the situation and the details of the specific protocol, this attack can result in canceled payments, improper payments, double spending, or other forms of theft via manipulation of the ledger–pretty much exactly what Nakamotoan blockchains are supposed to preclude.
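
Nakamoto’s whitepaper [3] actually quantifies the sub-majority case: the probability that an attacker controlling a fraction q of the hashpower ever catches up from z blocks behind. A direct transcription of that calculation (section 11 of [3]):

```python
import math

def attacker_success(q, z):
    """P(attacker with hashpower share q catches up from z blocks behind),
    following the analysis in Nakamoto's whitepaper."""
    p = 1.0 - q
    if q >= p:
        return 1.0  # a majority attacker always catches up eventually
    lam = z * (q / p)
    return 1.0 - sum(
        (lam ** k) * math.exp(-lam) / math.factorial(k)
        * (1.0 - (q / p) ** (z - k))
        for k in range(z + 1)
    )

for q in (0.10, 0.30, 0.45):
    print(f"q = {q:.2f}: P(catch up from 6 blocks back) = {attacker_success(q, 6):.4f}")
```

The probability falls off exponentially below 50%, and snaps to certainty at it; the whole security argument lives or dies on which side of that line the network sits.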

It is important to note that the all too many troubles reported about cryptocurrencies (hacking, fraud, theft, and criminal activities) have nothing to do with a 51% attack. Indeed, in most of these cases the protocol is working just fine, securely implementing nefarious activities that are enabled by dishonesty and breaches in other parts of the system.

Indeed, actual 51% attacks have been so rare that they seemed purely theoretical.  However, as Alyssa Hertig points out, with the growth in the number of cryptocurrencies, these attacks have become more common [1].

So what is going on here?

The Nakamotoan project is based in large part on what amount to probabilistic claims about the participating nodes of the network.  The consensus protocol is secured as long as the “majority of the CPU power is honest” condition remains true.  (We may pause to note the irony of a “trustless” protocol depending on the trustworthiness of vast numbers of independent computers on the Internet….)

Bitcoin and its extended family seem to work, and people have come to have confidence in these decentralized networks.  But why would we believe that this condition can be met in a real implementation?

Confidence is based on intuitive beliefs about the Internet and cryptocurrency networks on the Internet.  The basic intuition is that 1) the Internet has a huge number of independent nodes with a huge aggregate computing power, and 2) the computing power is distributed approximately evenly across the Internet.  The idea is that it is impractical to round up zillions of independent mom-and-pop nodes to make up a majority.

The size of the Internet is indisputable, though it is important to remember that cryptocurrencies operate on only a fraction of the nodes. Dreams of a universal blockchain, computed and protected by every computer everywhere are a long way from reality, and probably unrealistic. (Remember, many of the nodes are mobile devices and dedicated systems that are not necessarily available for computations.)

There are many blockchains and cryptocurrencies, and their networks are not necessarily very large.  Actually, some “private” blockchains are not only small, but also not open to the public, so we don’t really know anything about them.

Whatever the number of nodes, the CPU power is never evenly spread, not even approximately.  In this assumption, we see the Libertarian instincts of Nakamotoists overriding knowledge about networks.  Statistically, networks are never egalitarian; they are always hierarchical [4].  Cryptocurrency networks are no different, and no sensible person would expect them to be different.

In the real world, therefore, cryptocurrencies fail to meet one or both of these intuitive criteria.

Only the largest cryptocurrencies, such as Bitcoin or Ethereum, have truly massive numbers of nodes.  For many alternative cryptocurrencies or blockchains it is hard to know just how big the network might be, though they obviously start out small.  At the extreme, a small network means that only a relative handful of nodes could collude to control it.

Even the largest networks are characterized by huge concentrations of CPU power in the hands of a few operations.  There may be millions of ordinary Joes out there, but there are gigantic server farms with millions of times Joe’s CPU power.  The largest of them all, Bitcoin, still has a handful of operations that control 51% of the computing power and therefore theoretically could control the network.  This has been evident in the governance stalemate over scaling issues, with the interests of a few operators successfully blocking upgrades that disadvantage them.
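
One way to make “concentration” concrete is a Herfindahl-style index over hashrate shares; the shares below are purely illustrative numbers, not measured data:

```python
def hhi(shares):
    """Herfindahl-Hirschman index: 1/n for n equal shares, 1.0 for a monopoly."""
    return sum(s * s for s in shares)

# The egalitarian intuition: 10,000 identical mom-and-pop nodes.
even = [1 / 10_000] * 10_000

# A pool-dominated network (illustrative): five pools hold 75% of
# the hashpower, and 500 small miners split the remaining 25%.
pool_heavy = [0.22, 0.18, 0.15, 0.12, 0.08] + [0.25 / 500] * 500

print(f"even network HHI: {hhi(even):.5f}")       # 0.00010 -- power is diffuse
print(f"pool-heavy HHI:   {hhi(pool_heavy):.5f}") # ~0.12 -- a few pools dominate
print(f"top-5 pool share: {sum(pool_heavy[:5]):.0%}")  # 75% > 51%
```

In the second scenario, a 51% coalition needs only a conference call among four operators, not a conspiracy of thousands.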

The upshot is that the likelihood of a 51% attack is unknown (and can depend a lot on obscure details of the implementation), but it is clearly more likely if the network is small and computing power concentrated. In these cases, “honest nodes control a majority of CPU power” is more of a hope than a solid assumption.

So we shouldn’t be surprised that these problems are increasing, especially in smaller networks, and especially in opaque networks [2].  This is definitely a case where being “just as good as Bitcoin” (or Ethereum or whatever) is scarcely a guarantee that it will work just as well.


  1. Alyssa Hertig (2018) Blockchain’s Once-Feared 51% Attack Is Now Becoming Regular. Coindesk, https://www.coindesk.com/blockchains-feared-51-attack-now-becoming-regular/
  2. Alyssa Hertig (2018) Verge’s Blockchain Attacks Are Worth a Sober Second Look. Coindesk, https://www.coindesk.com/verges-blockchain-attacks-are-worth-a-sober-second-look/
  3. Satoshi Nakamoto, Bitcoin: A Peer-to-Peer Electronic Cash System. 2009. http://bitcoin.org/bitcoin.pdf
  4. Mark Buchanan, Nexus: Small Worlds and the Groundbreaking Theory of Networks, New York, W. W. Norton and Company, 2002.


Cryptocurrency Thursday

AI Music Translation

Humans are musical, more than any other species.  People make music with every tool at their disposal, and with our bodies even without tools.  It is no surprise that the second thing we did with computers, after military uses, was making music.

Over the years, the capabilities of digital technology have come to pervade music making and distribution.  However, computers still lag in understanding music, and in related tasks such as mimicking musical styles.

A new study from Facebook’s AI lab reports techniques that make it possible, “for the first time as far as we know, to produce high fidelity musical translation between instruments, styles, and genres.” ([2], p. 1)

The technology takes the sound of music as input and creates new audio that mimics the piece in a different style or on a different instrument. (I.e., it works from actual audio, not some symbolic representation or notation.)  For example, this system might translate the recording of an orchestral piece into a piano work.  The researchers claim that the results approach the abilities of professional musicians.

The technique uses large scale machine learning, the details of which quickly exceed my own understanding of the area. Importantly, they develop a single, general system (i.e., not a bunch of cases), and it operates unsupervised (i.e., not relying on examples or directions from humans).

The most important trick here is to get the level of analysis right. Too much low-level detail, and attention to the wrong details, defeat algorithms and machine learning.  In this case, they want to force the algorithm to learn the “high-level semantic features” of the inputs, and then translate them into other representations that will be recognized by humans as “the same”.

The key technique appears to be to inject noise into the input, messing it up.  The analyzer is then tasked with extracting the “undistorted” original. The effect is to force the algorithm to ignore the domain specific details.

“The input data is randomly augmented prior to applying the encoder in order to force the network to extract high-level semantic features instead of simply memorizing the data.” ([2], p. 3)
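
I read this as a denoising-autoencoder-style objective. A drastically simplified sketch of the training idea (the paper’s real model is a WaveNet autoencoder with per-domain decoders; this toy only illustrates the “augment the input, reconstruct the clean original” trick):

```python
import torch
import torch.nn as nn

# Toy encoder/decoder over fixed-size "audio frames" (synthetic stand-in).
encoder = nn.Sequential(nn.Linear(1024, 128), nn.ReLU(), nn.Linear(128, 32))
decoder = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 1024))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()))

def augment(x):
    # Stand-in for the paper's random augmentation; the point is that
    # exact memorization of the input can no longer work.
    return x + 0.1 * torch.randn_like(x)

x = torch.randn(16, 1024)  # a batch of frames (synthetic data)
loss = nn.functional.mse_loss(decoder(encoder(augment(x))), x)
opt.zero_grad()
loss.backward()
opt.step()
print(loss.item())
```

Because the network never sees the clean input on the encoder side, the only way to reconstruct it is to latch onto features that survive the distortion, which is exactly the “high-level semantic” content.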

This is pretty amazing!  (Caveat: I’m not up to date in the field, so I don’t know if this is a commonly used approach.)

Mess things up a little to learn the essence.  Cool!

I’ll note that this general concept certainly seems like something that natural neural systems might employ. I could imagine, for example, human brains processing music both undistorted (to memorize it) and simultaneously processing slightly distorted versions to extract semantic representations that can mix with other domains.

I would further speculate that experience and training might result in abilities to selectively access these alternative representations.  For example, trained performers might be able to call up and operate on a precise reproduction, or a semantic representation, or both, as needed.

These ideas certainly suggest possible studies of human music perception and performance.  (Again, this is not my specialty, so I have no idea if there is research similar to this idea.)


  1. Tristan Greene, Facebook made an AI that convincingly turns one style of music into another, in The Next Web. 2018. https://thenextweb.com/artificial-intelligence/2018/05/22/facebook-made-an-ai-that-convincingly-turns-one-style-of-music-into-another/
  2. Noam Mor, Lior Wolf, Adam Polyak, and Yaniv Taigman, A Universal Music Translation Network. arXiv, 2018. https://arxiv.org/abs/1805.07848

3D Printed Cornea – Yes, Please!

Speaking of 3D printing for biomedical uses…researchers at Newcastle University report this summer on a promising technique for 3D printing of a replacement cornea [1]!

The technique creates a digital model of the cornea, in this case by optical scanning to produce a 3D mesh representing the precise geometry of the corneal surface. This process essentially creates an elevation map of the cornea.  The resulting mesh, saved as an STL file, was processed with slicer software to generate G-Code—just like personal 3D printing.
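
The scanning rig is specialized, but the elevation-map-to-STL step is ordinary mesh generation. A minimal sketch (my own illustration, not the authors’ code) that writes an ASCII STL surface from a height grid:

```python
import numpy as np

def heightmap_to_stl(z, path, scale=1.0):
    """Write an ASCII STL surface from a 2D elevation grid,
    two triangles per grid cell (normals left for the slicer to recompute)."""
    rows, cols = z.shape
    with open(path, "w") as f:
        f.write("solid surface\n")
        for i in range(rows - 1):
            for j in range(cols - 1):
                a = (i * scale, j * scale, z[i, j])
                b = ((i + 1) * scale, j * scale, z[i + 1, j])
                c = (i * scale, (j + 1) * scale, z[i, j + 1])
                d = ((i + 1) * scale, (j + 1) * scale, z[i + 1, j + 1])
                for tri in ((a, b, c), (b, d, c)):
                    f.write("facet normal 0 0 1\nouter loop\n")
                    for v in tri:
                        f.write("vertex %f %f %f\n" % v)
                    f.write("endloop\nendfacet\n")
        f.write("endsolid surface\n")

# A dome-shaped "cornea" elevation map, purely illustrative.
x, y = np.meshgrid(np.linspace(-1, 1, 50), np.linspace(-1, 1, 50))
dome = np.clip(1.0 - x**2 - y**2, 0.0, None)
heightmap_to_stl(dome, "cornea_surface.stl")
```

From there, any standard slicer turns the STL into G-Code; the genuinely hard part is the bio-ink, not the file format.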

A “bio ink” was created which contained cultured structural cells from human cornea tissue. This was printed on a surface to create the artificial cornea.  The result was incubated to keep the cells alive.

The proof of concept study created promising tissue, though there is a lot of work to do before a viable replacement cornea can be created.

This development is of more than passing interest to me personally, as I will need cornea replacements in the coming decades.  Cornea transplants have been getting better and better, but there are only so many donors.  I’m really, really excited at the possibility of an unlimited supply of custom-built corneas!


  1. Abigail Isaacson, Stephen Swioklo, and Che J. Connon, 3D bioprinting of a corneal stroma equivalent. Experimental Eye Research, 173:188-193, 2018. https://www.sciencedirect.com/science/article/pii/S0014483518302124

Smart Toys: Threat or Menace?

The cover of the May IEEE Computer magazine teases, “Are Smart Toys Secure?”, but the article by Kshetri and Voas buries the lede: “Cyberthreats under the Bed” [1].

The topic, of course, is the plethora of new toys built with Internet and IoT technology.  These devices are designed for children, and deploy adult technology including location tracking, internet services, and AI to entertain and, in many cases, sell products.

I have blogged many times about the problematic design and numerous, grievous security and privacy problems with this technology. I’m sure we are all shocked to learn that these same flaws are found in many “smart toys”.

K&V are particularly concerned about the security weaknesses that have already led to massive breaches. This is particularly troubling because successful identity theft of a child is especially damaging. A child’s SSN is easily reused because there is no real history to undo.  And the damage may well not be known until much later, when the young person begins to establish a credit rating and other financial standing.

Besides identity theft, there are a raft of other dubious features. “Smart” toys may record and track the children.  Personal information may be sold on, and children targeted by advertisers. And, of course, the toys are hackable, so bad guys may be able to take over.

Part of the problem is that toy designers have not focused on security, cutting corners financially and relying on outdated and poor technology.  This is exacerbated by the “let the user beware” attitude inherited from Internet companies. Pushing responsibility onto the children is not only daft, it isn’t legally operative.  So parents are required to take responsibility for grokking the security of these devices—not that there is much that they can do.

“The expectation of understanding smart toys’ security and privacy risks might be unrealistic for most parents.” (p. 96)

This attitude has raised hackles when the vendors impose, and sometimes unilaterally change, contractual terms and conditions that absolve them of responsibility. No need to build a safe product if you make people agree that you have no liability.

K&V report that there is little effective regulation, government or otherwise. So parents are pretty much on their own.  Good luck. “As a general rule, however, parents should be wary of toys with recording technology, connect to the Internet, or ask for personal data.”

And, basically, don’t buy them.

Just say no.

“Returning ‘creepy’ dolls and other suspect smart toys to vendors for refunds and exchanges, or refusing to purchase them, will likely motivate toymakers to improve their products’ security.”


  1. Nir Kshetri and Jeffrey Voas, Cyberthreats under the Bed. Computer, 51 (5):92-95, 2018.

Ethereum Governance Thrashing

Winner of the 2017 CryptoTulip of the Year Award, the Ethereum community is working hard to repeat this year.

I give this community credit.  They are one of the most open and open-hearted cryptocommunities out there. As they tackle the deep problems encountered by every Nakamotoan cryptocurrency, they are honestly and openly trying to find good solutions.

Which makes it especially painful to watch them struggle and strain so hard.

Ethereum is still struggling to figure out what to do about last fall’s oopsie, which froze $100M worth of Ether due to a minor coding error.  The obvious and normal solution is to override the technical error and return the funds to the owners in some simple fashion.  But Nakamotoan blockchains cannot do this, except by rewriting history.  Ethereum already went through that with an earlier oopsie, which caused the creation of an alternative version of Ethereum.  That was an ad hoc decision by a few insiders, and most people agree that there should be a better way to do it.

This question has generated heated discussions of decision making.  A proposed standard process for suggesting rewrites of history was hotly contested.  An official way to violate the core sanctity of the ledger is bound to be controversial, and it led to consideration of how contested decisions can and should be made.

Other communities have fallen apart over such issues, but Ethereum has retained remarkable solidarity even in the face of deep divisions [2].

But, just like other crypto communities (and many Internet communities), they seem bound to recapitulate the history of human government, step by step.  (This kind of ignorance is one of the consequences of eschewing conventional education, IMO.)

So, “Ethereum Is Throwing Out the Crypto Governance Playbook” [3], reports Rachel Rose O’Leary.  This turns out to be a proposal for governance by “non-political” technocrats.  If the problem is that “technical debates have been obscured by politics”, then the solution is to let “the developers” decide what the code is and does.

This concept was called the “Fellowship of Ethereum Magicians”, and Rachel Rose O’Leary tagged it a “Magic Solution?” [5].  “?”, indeed.  The concept is said to be modelled after the Internet Engineering Task Force (IETF), which has stewarded the basic technical specifications of the Internet.  (The IETF has also stewarded hundreds of proposals that were never adopted or implemented.)

Apparently, the person quoted has never participated in actual Internet standards development, since it is characterized as operating “without any kind of corporate funding or any other sponsorship body that could in some way influence the activity of the collective.”  Really?  Do you know anything at all about the development of the DARPAnet/NSFNet/Internet?

Another proposed “innovation” is an “Experimental Voting” scheme [4]. This turns out to be a variant of “one dollar, one vote” (basically, with a deflator to dilute the top end of the distribution).  At bottom, people will buy a stake in a decision. In principle, this will make the decision fair and representative of the stakeholders, if not of the world in general.
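
The article doesn’t spell out the deflator’s exact formula, so as a purely illustrative guess, here is how a square-root (quadratic-voting-style) deflator changes vote shares relative to straight one-dollar-one-vote:

```python
import math

# Hypothetical stakes, chosen only to show the shape of the effect.
stakes = {"whale": 1_000_000, "fund": 10_000, "dev": 100, "user": 1}

linear = dict(stakes)                                    # one dollar, one vote
deflated = {k: math.sqrt(v) for k, v in stakes.items()}  # sqrt deflator

for name in stakes:
    lin = linear[name] / sum(linear.values())
    dfl = deflated[name] / sum(deflated.values())
    print(f"{name:>5}: linear {lin:7.2%}   deflated {dfl:7.2%}")
```

The deflator dilutes the whale from ~99% of the vote to ~90%: gentler, but still a very long way from one person, one vote.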

To review: Ethereum currently has classical Nakamotoan governance, inspired by the open source software model. Majority rules, minority walks.  Consensus via apartheid. And in the case of Ethereum, there is a visible and influential founder who wields enormous implicit power [1].

Tossing out this Nakamotoan playbook, the “innovations” include a dictatorship by experts (Plato’s Republic): technicians above politics will run the show, and they’ll let us all know what has been decided.  The second “innovation” is market-based voting, essentially shareholder “democracy”: people with money will buy votes to make their wishes come true. (That’s never been tried before!)

Wow!  Such amazing originality.

There is a third “innovation”, and that is a (potentially giant) town hall meeting [6]. In fact, Wolfie Zhao reports that “Ethereum Summit Attendees Commit to Governance Plan” [6].  This plan includes teleconferences, creation of “open-source tools to collect key signals and metrics,” and an open Summit (i.e., a town hall meeting). The idea is to develop a more visibly democratic process.

Oh, and meetings, bloody meetings.

I’m not really sure what “key signals and metrics” means, but I’m very sure that different people will have different opinions on what should be measured, and how to interpret the measures. (Pesky politics again!)

And I am 100% sure that an unstructured meeting will not produce any clear results. In fact, it could easily devolve into factions and *gasp* politics.

The key theme here (aside from my own use of Coindesk’s irreplaceable reporting) is a trust in technology and a distrust of humans.  This philosophy is fundamental to the Nakamotoan project.  Somehow, technical solutions will save us from the fallibility and selfishness of humans.

No points for guessing my own view on that.

But, again, I am impressed at how well Ethereum is handling this struggle (however misguided and hopeless it may be). There is genuine respect and decency most of the time (and when things do fray, it is for very good reason).

This is truly a ray of hope: with enough good will and good leadership, pretty much any technical system can be made to work.  So maybe Ethereum can make it after all.  But if it does, it will be because of trust, not trustlessness, and people, not technology.


I should acknowledge the consistent and useful reporting from Coindesk on these issues.  Rachel Rose O’Leary and Wolfie Zhao obviously have the Ethereum Desk at Coindesk, and they have done a thorough and even-handed job.  Thanks, much.


  1. Rachel Rose O’Leary (2018) Ethereum Governance ‘Not That Bad’ Says Buterin Amid Fund Debate. Coindesk, https://www.coindesk.com/ethereum-governance-not-bad-says-buterin-amid-fund-debate/
  2. Rachel Rose O’Leary (2018) Ethereum Infighting Spurs Blockchain Split Concerns. Coindesk, https://www.coindesk.com/even-ethereums-top-developers-think-blockchain-split-might-inevitable/
  3. Rachel Rose O’Leary (2018) Ethereum Is Throwing Out the Crypto Governance Playbook. Coindesk, https://www.coindesk.com/ethereum-throwing-crypto-governance-playbook/
  4. Rachel Rose O’Leary (2018) Experimental Voting Effort Aims to Break Ethereum Governance Gridlock. Coindesk, https://www.coindesk.com/experimental-voting-effort-aims-break-ethereum-governance-gridlock/
  5. Rachel Rose O’Leary (2018) Magic Solution? ‘Fellowship’ of Coders Embark on Ethereum Quest. Coindesk, https://www.coindesk.com/ethereums-magic-solution-fellowship-coders-embark-governance-quest/
  6. Wolfie Zhao (2018) Ethereum Summit Attendees Commit to Governance Plan. Coindesk, https://www.coindesk.com/ethereums-eip0-attendees-commit-to-governance-plan/


Cryptocurrency Thursday