The Department of All Possible Answers

One of the interesting trends of the early twenty-first century is the use of vast quantities of computation to conduct numerical experiments that elucidate statistical findings.  For example, analysts use Monte Carlo experiments to estimate the likely outcomes of sports contests or elections under certain assumptions.  These studies yield findings along the lines of “given the polls and their uncertainty, X will win 25% of the time.”

These methods essentially compute many, many possible outcomes, amassing a distribution of possible worlds.
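To make that concrete, here is a minimal sketch of a poll-driven Monte Carlo forecast (the poll numbers and the error model are invented for illustration):

```python
import random

# Minimal sketch of a poll-driven Monte Carlo election forecast.
# The poll figures and the error model are invented for illustration.
POLL_MEAN = 0.48   # candidate X's polled vote share
POLL_SD = 0.03     # uncertainty in the polls
TRIALS = 100_000

wins = 0
for _ in range(TRIALS):
    # Draw one "possible world": true support is the poll plus noise.
    true_share = random.gauss(POLL_MEAN, POLL_SD)
    if true_share > 0.5:
        wins += 1

print(f"X wins in {wins / TRIALS:.0%} of simulated worlds")
```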

This is an interesting contrast to classical statistical inference, which attempts to work from one set of data to estimate what “world” it was sampled from.  Uncertainties in the data are reflected in the estimate as a range of possible worlds that the data might have been sampled from (which is turned into artifacts such as confidence intervals).
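The classical direction, sketched the same way (the sample is invented, and a plain normal-approximation interval stands in for whatever estimator a real analysis would use):

```python
from math import sqrt
from statistics import mean, stdev

# One observed sample; the task is to estimate the "world" it came from.
sample = [4, 5, 3, 5, 6, 4, 5, 4, 3, 5]
n, m, s = len(sample), mean(sample), stdev(sample)

# Normal-approximation 95% confidence interval for the population mean.
half_width = 1.96 * s / sqrt(n)
print(f"mean {m:.2f}, 95% CI ({m - half_width:.2f}, {m + half_width:.2f})")
```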

These methods are complementary, and obviously should agree in the end.

These days, there is an even more interesting case that is increasingly possible:  computing all possible worlds, and specifically, all possible ways that a particular outcome could be realized.

This summer, a group of researchers down the block reported a method that reasons from reported statistics to compute all possible datasets that could have generated them [1].

The researchers are particularly interested in psychological studies that use ordinal scales and relatively small sample sizes.  With sufficient computation, it is possible to reconstruct the dataset, and also to discover unreported biases and flaws.

This technique is essentially a brute-force (with a bit of guidance) recreation of the original data.  It should work for any kind of data consisting of integers with reasonably constrained values.  Obviously, it works faster for smaller datasets.
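As a toy illustration of the brute-force idea (not the actual CORVIDS algorithm, which solves the underlying Diophantine systems far more efficiently), one can enumerate every multiset of Likert responses consistent with the reported statistics.  All the numbers here are invented:

```python
from itertools import combinations_with_replacement
from statistics import mean, stdev

# Enumerate every multiset of 1-7 responses that matches the reported
# summary statistics (toy version of the CORVIDS idea; numbers invented).
N, SCALE = 8, range(1, 8)            # 8 subjects on a 1-7 ordinal scale
REPORTED_MEAN, REPORTED_SD = 4.25, 1.04
TOL = 0.005                          # reported values rounded to 2 decimals

candidates = [
    sample for sample in combinations_with_replacement(SCALE, N)
    if abs(mean(sample) - REPORTED_MEAN) < TOL
    and abs(stdev(sample) - REPORTED_SD) < TOL
]
print(f"{len(candidates)} possible datasets, e.g. {candidates[:3]}")
```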

This is an important technique because there are many, many datasets from psychology and related fields that fit these constraints.  It is well understood that conventional statistical estimation is a poor guide to understanding this flood of reports.  The original data may be unavailable (an increasingly common situation for commercial “research”), or just impractical to access.

CORVIDS and similar tools can automatically give insights into a dataset, even if the original data are not available.  They may discover that the data must have been highly skewed, or that some values could not have been present.  These characteristics may indicate biases or problems in the original data, such as systematically missing data.  These problems might represent fraud, error, or just plain misunderstanding.  But in any case, they offer additional insights into what should be concluded from the data.

Cool!


  1. Sean A. Wilner, Katherine Wood, and Daniel J. Simons, Complete recovery of values in Diophantine systems (CORVIDS). PsyArXiv Preprint, 2018. https://psyarxiv.com/7shr8/

 

FOAM: Decentralized Localization Using Ethereum

FOAM is a technology that seeks to use blockchain and Ethereum contracts to create mapping and location-based services.  The project wants to address a complex of perceived problems: GPS is spoofable, maps are owned by big actors, and location services aren’t private.  In addition, they think that “people lie about their location” (Ryan John King, co-founder and CEO of FOAM, quoted in Coindesk [3]).  The solution deploys blockchain technology and Nakamotoan philosophy [2].

Looking at their materials, it is clear that FOAM is mainly focused on replicating Internet location-based services, not on navigation or engineering or geoscience.  The geospatial model is a two-dimensional map of the surface of the Earth.

The location service depends on many local low-power radio beacons instead of satellites.  They imagine an ad hoc mesh of locally operated beacons, which are recognized and validated via Nakamotoan-style consensus rather than a central authority (such as a government space agency).  These beacons are used to triangulate positions.  Good behavior and trustworthiness of the beacons is supposedly assured by cryptocurrency tokens, in the form of incentives, notably buy-ins and security deposits.
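Setting FOAM’s protocol details aside, the underlying geometry is ordinary trilateration from range estimates.  A minimal 2-D sketch (beacon positions and measured ranges are invented):

```python
import numpy as np

# Generic 2-D trilateration from ranges (not FOAM's actual protocol).
beacons = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
ranges = np.array([70.7, 70.7, 70.7])   # measured distance to each beacon

# Subtracting the first circle equation from the others linearizes the
# system: 2(b_i - b_0) . p = |b_i|^2 - |b_0|^2 - r_i^2 + r_0^2
A = 2 * (beacons[1:] - beacons[0])
b = (np.sum(beacons[1:] ** 2, axis=1) - np.sum(beacons[0] ** 2)
     - ranges[1:] ** 2 + ranges[0] ** 2)
position, *_ = np.linalg.lstsq(A, b, rcond=None)
print(position)   # ~[50, 50]
```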

They imagine this being used to construct datasets of “Points of Interest”, which are “where are the stores, cafes, restaurants and malls, where a fleet of vehicles in a ride sharing program like Uber should be anticipating if demand is shifting or surging, or which traffic bottlenecks drivers should avoid on an app such as Waze.”  These are stored and validated through a decentralized protocol, “[g]ranting control over the registries of POI to locally-based markets and community forces, allowing the information provided to be validated by those who contribute to the relevant locality.”

These datasets are to be created through bottom-up efforts, presumably incentivized by the desire to operate local services.  “FOAM hopes that the Cartographers and users will contribute the necessary individual work, resources, and effort themselves to contribute to the ongoing community-driven growth and supplement this important cartography project.”

Interestingly, the crypto token-based incentive system relies on negative incentives, namely buy-ins and “security deposits” that can be forfeited by consensus.  I’m not sure I’ve seen another Nakamotoan project with this sort of punishment-based (dis-)incentive.  (I’ll note that psychologists generally find that the threat of punishment does not engender trust.)
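As I read it, the mechanics amount to a registry of staked deposits that consensus can confiscate.  A toy model of that general scheme (my reading of the mechanism, not FOAM’s actual contract code):

```python
# Toy deposit-and-slash registry (illustrative only, not FOAM's contract).
class BeaconRegistry:
    def __init__(self, deposit):
        self.deposit = deposit
        self.stakes = {}              # beacon id -> tokens at risk

    def register(self, beacon_id):
        # Buy in: tokens are locked as a security deposit.
        self.stakes[beacon_id] = self.deposit

    def challenge(self, beacon_id, votes_against, votes_total):
        # If a majority says the beacon misbehaved, the deposit is forfeit.
        if votes_against * 2 > votes_total:
            return self.stakes.pop(beacon_id, 0)   # e.g. paid to challengers
        return 0

registry = BeaconRegistry(deposit=100)
registry.register("beacon-42")
print(registry.challenge("beacon-42", votes_against=7, votes_total=10))  # 100
```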

Obviously, this entire concept will depend on the development of the localization network and the datasets of “Points of Interest”.  As far as I can see, realizing this is based on “hope” that people will contribute.  I’d call this “faith-based engineering.”

We can pause to reflect on the irony of this “trustless” system that appears to be entirely based on “hope” and the threat of punishment.

As for the actual technology, it is, of course, far short of a “map of the world”.  The local beacons are fine for a dense urban setting, but there is little hope of coverage in open country, and no chance that it will be useful at sea, up in the air, inside significant structures, or underground.  Sure, there are ways to deploy beacons indoors and in other places, but it isn’t easy, and it doesn’t fit the general use cases (Points of Interest).

Ad hoc networks aren’t immune to jamming or interference, either, and are essentially defenseless against determined opposition.  In classic fashion, the protocol “routes around” interference, discarding misbehaving nodes and corrupted data. Unfortunately, this means that the response to a determined and sustained attack is to shut down.

The incentive system is somewhat unusual, though the notion of a “security deposit” is widely used.  How well will it work?  (How well do security deposits work?)  It’s hard to say, and there doesn’t seem to be much analysis of potential attacks.  The notion that the loss of security deposits and other incentives will guarantee honest and reliable operation remains a theoretical “hope”, with no evidence backing it.

The system depends on a “proof of location”, but it isn’t clear just how this will work in a small, patchy network. In particular, assumptions about the security of the protocol may not be true for small, local groups of nodes—precisely the critical use case for FOAM.

Finally, I’ll note that the system is built on Ethereum, which has had numerous problems. To the degree that FOAM uses Ethereum contracts, we can look forward to oopsies, as well as side effects from whatever emergency forks become necessary.

Even if there are no serious bugs, Ethereum is hardly designed for real-time responses, or for datasets at the scale of “the whole world”.  Just what requirements will FOAM put on the blockchain, consensus, and the Ethereum virtual machine?  I don’t know, and I haven’t seen any analysis of the question.

This is far from an academic question.  Many location services are extremely sensitive to time, especially to lag.  Reporting “current position” must be really, really instantaneous.  Lags of minutes or even seconds can render position information useless.

Can a blockchain based system actually deliver such performance?
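A back-of-the-envelope check suggests not.  Assuming Ethereum’s roughly 15-second block times (the 2018 ballpark) and a handful of confirmations before a reading is trusted:

```python
# Staleness estimate under assumed parameters (block time and confirmation
# count are the 2018 ballpark; the vehicle speed is an arbitrary example).
BLOCK_TIME_S = 15
CONFIRMATIONS = 6
SPEED_M_PER_S = 15            # ~54 km/h city driving

lag_s = BLOCK_TIME_S * CONFIRMATIONS
print(f"lag: {lag_s} s; position stale by ~{lag_s * SPEED_M_PER_S} m")
# lag: 90 s; position stale by ~1350 m
```

On those assumptions, a moving vehicle’s “current position” is more than a kilometer out of date by the time it is confirmed.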

Overall, FOAM really is “a dream”, as Alyssa Hertig says.  A dream that probably will never be realized.


  1. Foamspace Corp, FOAM Whitepaper. Foamspace Corp, 2018. https://foam.space/publicAssets/FOAM_Whitepaper_May2018.pdf
  2. Foamspace Corp, The Consensus Driven Map of the World, in FOAM Space. 2017. https://blog.foam.space/
  3. Alyssa Hertig (2018) FOAM and the Dream to Map the World on Ethereum. Coindesk, https://www.coindesk.com/foam-dream-map-world-ethereum/

 

Cryptocurrency Thursday

“Divine” Robots?

The twenty-first century is the era of robots entering every aspect of human life.  Among the most challenging, both technically and theoretically, are robots that seek to interact directly with humans in everyday settings.  Just how “human” can and should a non-human agent appear?  This question is being explored on a hundred fronts.

Robots have begun to enter extremely intimate parts of human life, and, indeed, intimate relationships.  But these have generally been secular settings: work, transportation, entertainment, and home.  Religious situations, broadly defined, have mostly been reserved for humans only.

Indeed, for some people, religious and sacred activities are, by definition, expressions of humanity and human relations.  For all the handwringing about robot uprisings, there has been little anxiety about robots taking over churches, temples, or mosques.

Maybe we should worry about that more than we do.

This summer, researchers from Peru discuss robots that are not purely functional, not anthropomorphic, nor even zoomorphic, but “theomorphic” [3].  Their idea is that robots may be designed to represent religious concepts, in the same way that other “sacred objects” do.

“[A] theomorphic robot can be: – accepted favourably, because of its familiar appearance associated to the user’s background culture and religion; – recognised as a protector, supposedly having superior cognitive and perceptual capabilities; – held in high regard, in the same way a sacred object is treated with higher regard than a common object.” ([3], p.29)

The researchers note that the psychology that impels humans to create robots, and to endow them with imagined humanity, is similar to the drive to imagine supernatural divinities with human characteristics. The act of creating robots is a pseudo-divine enterprise, and interacting with intelligent robots is definitely akin to interacting with manifestations of supernatural forces.

“[R]obots always raised questions on divine creation and whether it can be replicated by humans.” (p. 31)

In many religious traditions, concepts of the divine have been represented by the most technically advanced art of the time, including stories, visual imagery, music, and architecture [2]. It seems inevitable that robots will be deployed in this role. Trovato et al. want to explore “design principles” for how this might be done.

Much of the paper is backward looking, unearthing precedents from the history of religious art and religious analysis of art.

One obvious design principle must be “a specific purpose that depends on the context and on the user” (p. 33).  This principle is critical for the ethical rule that the robot should not be intended to deceive.  It is one thing to create a sublime experience; it is entirely another to pretend that a mechanical object has supernatural powers.

They give a useful list of general use cases: religious education, preaching (persuasion), and company for religious practice (formal or informal ritual).  In addition, there may be a related goal, such as augmenting health care.  This is certainly something that will ultimately be incorporated as an option for, say, elderly assistance devices.

A paper about design principles must inevitably consider affordances.  In this case, the question is intimately related to the identification and use of metaphors and references to earlier practices.  For example, a robotically animated statue may resemble traditional carvings, while its behavior and gestures evoke traditional rituals.  These features make the robot identifiably part of the tradition, and therefore evoke appropriate psychological responses.

Other dos and don’ts are phrased in pseudo-theological language.  “A theomorphic robot shall not mean impersonating a deity with the purpose of deceiving or manipulating the user.” (p.33)

The list of key principles is:

  • Identity
  • Naming
  • Symbology
  • Context
  • User Interaction
  • Use of The Light (I)

The role of symbolism is, of course, critical. A sacred object almost always has a symbolic association. In some cases, this is represented by imagery or other features of the object itself. It may also be conferred by context, such as a ritual of blessing to confer a sacred status to an otherwise mundane object.  Getting the symbolism right is pretty much the alpha and omega of creating any sacred object, including a robot.

The researchers are rather less concerned about human interaction than I expected.  After all, a robot can interact with humans in many ways, some of which mimic humans, and some of which are non-human and even super-human (e.g., great strength or the ability to fly).

A sacred robot must display its powers and communicate in ways that are consistent with the underlying values it is representing.  Indeed, there needs to be an implicit or explicit narrative that explains exactly what the relationship is between the robot’s actions and messages and the divine powers at play.  Getting this narrative wrong will be the comeuppance of these robots.  Imagine a supposedly sacred robot that misquotes scripture, or clumsily reveals the purely mundane source of what is supposed to be a “divine” capability.


It seems clear that digital technology will be incorporated into religious practices far more than has happened to date, in many ways.  Robots will likely be recruited for such uses, as this paper suggests.  So will virtual worlds and, unfortunately, Internet of Things technology (the Internet of Holy Things?  Yoiks!).

This paper made me think a bit (which is a good thing), and I think there are some important omissions.

Of course, the paper suffers a bit from a pretty restricted view of “religion”.  The research team exhibits personal knowledge of Buddhism and Roman Catholicism [1], with only sketchy knowledge of Islam, Judaism, other flavors of Christianity, and, of course, the many other variants (Wicca [4]? Scientology?)

There are general engineering principles that need to be taken seriously.  The issues of privacy are bad enough for “smart toasters”; they become extremely touchy for “holy toasters”.  If we are unhappy having our online shopping tracked, we will be really, really unhappy if our prayers are tracked by software.

There are also problems of hacking, and of authentication in general.  However a holy robot is designed to work, it must be preserved from malicious interference.  The ramifications of a robot secretly polluted with heresy would be catastrophic.  Wars have been started over less.

At the same time, there are interesting opportunities for authentication protocols.  If a robot is certified and then ritually blessed by a religious authority, can we represent this with a cryptographic signature?  (Yes.)  In fact, technology being developed for provenance and supply chain authentication is just the thing for documenting a chain of sacred authority.  Cool!
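A minimal sketch of such a chain as nested signatures, using the third-party Python cryptography package with Ed25519 keys (the parties and identifiers are invented):

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

maker_key = Ed25519PrivateKey.generate()      # the manufacturer
diocese_key = Ed25519PrivateKey.generate()    # the religious authority

robot_id = b"robot-serial-0042"
cert = maker_key.sign(robot_id)               # certification of the robot
blessing = diocese_key.sign(robot_id + cert)  # blessing endorses the cert

# Anyone holding the two public keys can verify the chain end to end;
# verify() raises an exception if either signature is invalid.
maker_key.public_key().verify(cert, robot_id)
diocese_key.public_key().verify(blessing, robot_id + cert)
print("chain of sacred authority verified")
```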

As for context and human interaction, it has to be recognized that there is a very serious “Eliza” situation here.  There is surely a strong possibility of placebo effects, possibly driven by totally unintended events.  I predict that there will be cases of people coming to worship robots, not because they are designed to be “theomorphic”, but because the robot was part of a “miraculous” event or situation.

Finally, it is interesting to think about the implications of robots with superhuman capabilities, whether of cognition, strength, or movement.  Even within more or less human abilities, robot bodies (and minds) are different and alien.  Why should a robot not be designed to demand the deference ordinarily given to divine entities?

This proposition violates Trovato et al.’s first rule, as well as their general ethics.  But who says robots or their designers are bound by this norm?

A sufficiently powerful robot is indistinguishable from a god

…and has a right to be treated as one.


  1. Evan Ackerman, Can a Robot Be Divine?, in IEEE Spectrum – Robotics. 2018. https://spectrum.ieee.org/automaton/robotics/artificial-intelligence/can-a-robot-be-divine
  2. Norman M. Klein, The Vatican to Vegas: A History of Special Effects, New York, The New Press, 2004.
  3. Gabriele Trovato, Cesar Lucho, Alexander Huerta-Mercado, and Francisco Cuellar, Design Strategies for Representing the Divine in Robots, in 2018 ACM/IEEE International Conference on Human-Robot Interaction. 2018: Chicago, IL, USA. p. 29-35. https://dl.acm.org/citation.cfm?id=3173386.317
  4. Kirsten C. Uszkalo, Bewitched and Bedeviled: A Cognitive Approach to Embodiment in Early English Possession. First ed, New York, Palgrave Macmillan, 2015.

 

Robot Wednesday

Survey of Antarctic Ice Cores

I’ve always been fascinated by stratigraphy, the careful examination of deposits laid down over thousands and millions of years.  In addition to soil and rock, layers of deep ice record history as they are laid down.

This month Edward J. Brook and Christo Buizert review what has been learned from ice cores taken from Antarctica [1].  The southern ice cap dates back some 34 million years, and drilling has retrieved ice as old as 800,000 years.  (Ice flows, so most older ice has melted back into the ocean.)

Ancient ice not only records the snowfall; it contains traces of the atmosphere, and of the chemicals and particles in the air, at the time the ice froze.  These traces reflect conditions far from Antarctica.

The broad picture shows cycles of global cooling and warming, roughly 100,000 years each.  This corresponds to many other records, and to periods of high glaciation.  The Antarctic ice is remarkably well aligned with variations in the Earth’s orbit, and in fact with solar warming in the northern hemisphere.  This indicates that variations in solar heating have worldwide effects, presumably through atmospheric and ocean circulation.

The ice also records greenhouse gasses, which follow the same cycles.  As we know from contemporary studies, the levels of these gasses reflect a complex variety of interacting processes, including biological sources and sinks, and sequestration in the ocean.  So there is no simple story to explain the historic trends in these gasses, though they are certainly consistent with what we know about the positive feedback between mean temperature and greenhouse gas emissions.

At finer scales, episodes of rapid change, especially glacial retreat, are found in both northern and Antarctic cores.  There is also a north-south hemispheric see-saw, with a lag of a century or two between the two hemispheres.  (I.e., a sudden cooling in Greenland will eventually reach Antarctica, but only after some 200 years.)

The cores also record large volcanic episodes, and in some cases the same event is identified in both Greenland and Antarctica—which would clearly have been a globally significant occurrence.

The cores also show that greenhouse gasses are at significantly higher levels today than at any time in the last 800,000 years, and have been increasing faster than at any earlier time.  The ice core record shows a clear correlation of these gasses with warming through the whole record, so the current trends obviously point to extreme warming.

The review notes that extending the ice core record as far back in time as possible, and covering more locations in Antarctica, are high priorities.

The difficulty of these new studies should not be understated: finding and sampling ancient ice is no picnic, especially in the remote locations of Antarctica where it must be found.  (This is actually way more important than digging tunnels for underground supertrains.  I’m talking to you, Elon Musk.)

Waiter!  More ice cores!


  1. Edward J. Brook and Christo Buizert, Antarctic and global climate history viewed from ice cores. Nature, 558 (7709):200-208, 2018/06/01 2018. https://doi.org/10.1038/s41586-018-0172-5

Tracking the Gig Economy

This month the US Bureau of Labor Statistics issued its first-ever report on “Contingent and Alternative Employment”, AKA “The Gig Economy” [2].  The BLS survey found that a relatively small proportion (circa 10%) of US workers are “contingent”.  This number contrasts sharply with the most widely reported number to date, from the Freelancers Union, which claimed over 30% of the US workforce, including more than 60% of “millennials”.

In earlier posts, I criticized the FU survey for its overly broad definition of “worker”.  (For example, they count people who have a full-time job and moonlight as “freelance workers”—which is at least double counting, if not conceptually wrong.)  My own reading of the FU survey gave numbers not that different from the BLS survey, once I excluded some of the categories I questioned.

The new official survey suffers from the same problem of definitions: who should count as a “contingent” worker?  The relatively low numbers of contingent workers reported by the BLS stem in part from their restricted definition of who should be counted in this classification.  The BLS does not count moonlighters (which I think is correct).  They also appear not to count independent workers who are employed as “subcontractors”, e.g., of an employment service.  These workers really should be counted as freelancers, in my opinion.  And so on.

Caitlin Pearce of the Freelancers Union (which produced the earlier reports) raises these and other issues [1].  She also points out that the BLS survey specifically asked workers how they worked in the last week, which may well miss many workers with irregular schedules.

Pearce (and the FU survey) argue that “diversified” workers, i.e., people with multiple jobs, should be counted as independent workers.  The FU tends to count them as freelancers, no matter what their mix of work is.  (They project that more than half of all workers will be “freelancing” soon—though since this includes more than one part-time job per worker, this number is hard to interpret.)  The BLS is probably biased toward counting workers only once, generally for their “steadiest” job.  (This would seem to include at most one job per worker, which does not capture the real diversity of independent work.)

Clearly, there is a tricky counting problem here that deserves some thought.  In particular, there needs to be some concept of “an adequate income”, regardless of how many separate contracts or days of work a given worker puts in to achieve it.
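To see how much the classification rules drive the headline number, consider a toy roster scored under caricatures of the two definitions (the roster and both rules are my invention, not either survey’s actual methodology):

```python
# Same five workers, two classification rules, very different headlines.
workers = [
    {"main_job": "employee", "side_gigs": 0},
    {"main_job": "employee", "side_gigs": 1},   # moonlighter
    {"main_job": "freelance", "side_gigs": 0},
    {"main_job": "employee", "side_gigs": 0},
    {"main_job": "employee", "side_gigs": 2},   # "diversified" worker
]

# FU-like rule: any freelance work at all makes you a freelancer.
fu = sum(w["main_job"] == "freelance" or w["side_gigs"] > 0 for w in workers)

# BLS-like rule: each worker counted once, by the steadiest (main) job.
bls = sum(w["main_job"] == "freelance" for w in workers)

print(f"FU-style: {fu / len(workers):.0%}, BLS-style: {bls / len(workers):.0%}")
# FU-style: 60%, BLS-style: 20%
```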

Overall, it looks to me like the BLS and FU surveys are fairly consistent on the fundamentals.  The contrasting headlines reflect different decisions about how to classify and count workers.  These differences stem from the reality that independent or contingent work is a complicated way of working.

And I completely agree with Pearce that getting a clear picture is important.

“Building a better future for freelancers starts with learning as much as we can who freelancers are and what challenges they face.” [1]


  1. Caitlin Pearce, The government must do more to understand the freelance workforce, in Freelancers Union Blog. 2018. https://blog.freelancersunion.org/2018/06/13/the-government-must-do-more-to-understand-the-freelance-workforce/
  2. US Department of Labor, Contingent and Alternative Employment Arrangements Summary. Economic News Release, 2018. https://www.bls.gov/news.release/conemp.nr0.htm

 

Book Review: “Armistice” by Lara Elena Donnelly

Armistice by Lara Elena Donnelly

The sequel to Amberlough continues the complex plot (and “plot” is the right word), following the successful coup in the first story.  (I’m not sure exactly how much of an “armistice” is actually happening.)

The main characters have fled the city of Amberlough, but resistance and espionage continues in other locations. The politics is complicated, and gets ever more complex as we discover regional factions, double and triple agents, and subterfuge of many kinds.

It’s all hard to keep track of.  Spy versus spy versus spy, etc.

In this book, we learn a lot more about many of the characters, and meet a bunch of new folks.

Donnelly gives us yet more complicated gender relations and the politics thereof.  It’s head-spinningly confusing.  It’s not so much the relations (which are no more or less predictable than the human heart); it’s the variety of social meanings, positive and negative.  There are taboos and rules and laws and family reactions, and it is all so clearly arbitrary.  I assume that is part of the point.

Of course, however you slice it, there is love and family, and people will go to great lengths to protect and help the ones they love—no matter how that love is defined, and no matter who does or doesn’t like it.  There is politics, and then there is taking care of your family, friends, and partner(s).  That’s definitely the important point, no?

As in book 1, there is lavish attention to the “scenery”, fantastic settings, architecture, exotic fashion, and complex cultural situations.  Donnelly obviously had fun creating this world, and she is really good at it.  (Honestly, though, I haven’t a clue about the clothing. Donnelly’s lovingly detailed descriptions are lost on me.)

The world of Amberlough has technology roughly like that of late-1920s Europe.  Much of the action takes place at a film studio, where they are making new-fangled cinematic spectaculars.  Apparently, wireless and airplanes have also appeared, at least for the elite (these were noticeably not visible in book 1).

The political situation is careening toward a messy, multi-sided civil war, and our protagonists are deeply involved in it.

By the end of book 2, nothing is settled yet, so there will surely be a Book 3.


  1. Lara Elena Donnelly, Armistice, New York, TOR, 2018.

 

Sunday Book Reviews

Summary Article on Southern Ocean Currents

The ice is melting everywhere.  One of the biggest effects is the flow of fresh water into the oceans, potentially changing the chemistry and temperature of the water, and, most important, possibly changing the ocean currents.

Everyone knows the Gulf Stream, which warms the British Isles (which are at the same latitude as Hudson Bay in Canada).  Should the Gulf Stream slacken or veer in another direction, Scotland is going to become rather more Canadian—quickly.  Meltwater from Greenland may affect this current in the coming decades.

There are other major currents on the surface and deep down under the sea, with equally profound effects, and just as subject to change from melting.

This month Stephen R. Rintoul reviews the influence of the largest one of all, the Antarctic Circumpolar Current (ACC), which circles the entire globe around Antarctica [3].  Since the isolation of Antarctica some 100 million years ago, there has been continuous ocean around the southern continent, i.e., with no land interrupting a circumnavigation.

Driven by strong winds, the ACC has developed in this water, circling the globe eastward.  In addition, there are huge eddies shed from the current, and also complicated vertical flows of denser, saltier water (created when sea ice forms, making the remaining water saltier) and lighter fresh water (from precipitation or melting ice), which massively churn the ocean, moving heat and chemicals.  These eddies and the overturning circulation are thought to be driven by wind and buoyancy (i.e., flows of fresh and salt water).

The point is, of course, that this continent-scale ocean system has a huge impact not only near the pole, but on the whole ocean and atmosphere—the whole planet.  The new paper also points out that this system is itself sensitive to changes elsewhere, which can alter winds, atmospheric temperatures, and melting.

“Given the effect of Southern Ocean processes on the global ocean circulation, climate and sea level, changes in the region could have widespread consequences.” (p. 214)

Recent studies indicate that the current is not uniform, but characterized by hot spots.  These are created by undersea topography, which steers the current and creates turbulent, energetic eddies downstream.  While there are no continents blocking the current, there are undersea mountains that partially block it.  The vertical flows are also spotty and influenced by topography, with little mixing occurring in deep, open areas, and undersea mountains unsurprisingly deflecting currents upward.

These processes move around heat and chemicals, including atmospheric carbon.  Strong upwelling, including in the Southern Ocean, releases more carbon into the atmosphere.  These processes also move heat in the water, generally exporting heat northward (and thus keeping the pole cool while heating the mid latitudes).  Together they have a major impact on the overall temperature of the ocean and the amount of carbon in the atmosphere.

The formation and melting of sea ice “distills” the ocean, producing saltier water (which sinks) and floods of fresh water (which floats).  These processes vary along the shores and at sea, and are influenced in complicated ways by wind, precipitation, and temperature.

Finally, the ice cover of the continent is melting overall, pouring fresh water into the ocean.  The retreat is uneven, influenced by local conditions. In particular, glaciers exposed to ocean water are melting as the ocean warms.  The ocean currents interact with the inshore waters, potentially warming and undermining glaciers.

(It is not a coincidence that the same issue of Nature has several articles about the Antarctic ice  [1,2,4].)

Overall, as recent studies have given a more detailed picture (in finer spatial and temporal granularity), it is clear that the Southern Ocean is complicated and variable.  We are far from being able to understand and predict its behavior.

“A consistent theme of this Review has been the importance of local and regional dynamics, which are often linked to topography.” (p. 215)

Of particular importance is a better understanding of the three-dimensional characteristics of the Southern Ocean.  This will require sustained and detailed measurements of this vast and difficult-to-access region.


  1. Mass balance of the Antarctic Ice Sheet from 1992 to 2017. Nature, 558 (7709):219-222, 2018/06/01 2018. https://doi.org/10.1038/s41586-018-0179-y
  2. Edward J. Brook and Christo Buizert, Antarctic and global climate history viewed from ice cores. Nature, 558 (7709):200-208, 2018/06/01 2018. https://doi.org/10.1038/s41586-018-0172-5
  3. Stephen R. Rintoul, The global influence of localized dynamics in the Southern Ocean. Nature, 558 (7709):209-218, 2018/06/01 2018. https://doi.org/10.1038/s41586-018-0182-3
  4. Andrew Shepherd, Helen Amanda Fricker, and Sinead Louise Farrell, Trends and connections across the Antarctic cryosphere. Nature, 558 (7709):223-232, 2018/06/01 2018. https://doi.org/10.1038/s41586-018-0171-6

