IEEE Computer Society Working on Reproducibility of Research

I’ve been worrying about reproducibility of research results for quite a while now (since the late 90’s [2, 3, 5]).  As the digital and network technology we developed in the 80s and 90s has been taken up, scientific and technical research has become digital, computational, and digitally published.  These technical advances are super useful, but they raise many issues for evaluating research results [4].  They also revolutionize the notion of “publication” and “reproducing” research.

So we worry about not just the data and software, but also the computational steps involved.  New technologies may or may not help track the complexities of data and computation underlying specific results (e.g., cloud computing, blockchain).

But all the technology in the world can’t solve the problem.  To make results “reproducible” requires authors to do a bunch of work to maintain and publish adequate descriptions of the technical underpinnings of their claims.  And it requires publishers to publish and archive not just papers, but digital data and metadata.  (I’ll note that universities need to expand their mission to require data and metadata be deposited as part of thesis and other academic projects.)

We’ve been pushing these requirements since the early days of the World Wide Web [2, 3, 5].  So I’m glad to see more and more publishers and professional societies moving to finally deal with these issues.

I should note that the Astronomy community has long led in this field.  For decades now, all major astronomy publications have required that the relevant datasets be deposited in open archives at the time that a paper is published.  (Sensei Ray Plante pioneered some of the early efforts [5].) Well done.


The IEEE Computer Society is catching up.

This fall an ad hoc committee of the IEEE Computer Society has published recommendations for bringing their journals and conferences into the digital age [1].

The report sketches the interested parties and proposes some steps for the professional organization, which is an important publisher.  Ironically, even in this savvy and well-funded professional field, the very field that created the digital and network technologies in question, 60% of the publications do not have any reproducibility processes in place.

In the end, the main proposals are to (1) enable and require submission of data and code along with manuscripts to be published, and (2) to link the archived code and data with the published paper. Just like astronomers have been doing for twenty years.
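As a purely illustrative sketch of what proposal (2) could look like in machine-readable form, here is a hypothetical record linking a published paper to its archived data and code.  Every identifier and field name below is invented; real publishers and archives would use established schemas (DataCite, CodeMeta, and the like).

```python
# Hypothetical paper-to-artifact link record.  The DOIs and field names are
# made up for illustration; actual deposits would follow a publisher's or
# archive's own metadata schema.
artifact_record = {
    "paper_doi": "10.1109/EXAMPLE.2021.0000000",
    "artifacts": [
        {"type": "dataset", "doi": "10.5281/zenodo.0000000",
         "description": "Raw measurements used in Figures 2-4"},
        {"type": "software", "doi": "10.5281/zenodo.0000001",
         "version": "v1.0.2",
         "description": "Analysis scripts that reproduce all tables"},
    ],
    "license": "CC-BY-4.0",
    "archived_on": "2021-09-30",
}
```

The point is simply that the link should be explicit, archived, and machine-readable, not buried in a footnote.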

So I say, “yes, please”.   This was the right idea when we wrote about it last century, and it’s long overdue now.


  1. Joanna Goodrich, Study Shows Ensuring Reproducibility in Research Is Needed, in IEEE Spectrum – News, September 30, 2021. https://spectrum.ieee.org/study-shows-ensuring-reproducibility-in-research-is-needed
  2. Robert E. McGrath, Joe Futrelle, Ray Plante, and Damien Guillaume. Digital Library Technology for Locating and Accessing Scientific Data. In ACM Digital Libraries ’99, 1999, 188-194. http://dx.doi.org/10.1145/313238.313305
  3. James D. Myers, Alan R. Chappell, Matthew Elder, Al Geist, and Jens Schwidder, Re-Integrating The Research Record. Computing in Science and Engineering, 5 (3):44-50, May/June 2003. http://ieeexplore.ieee.org/document/1196306/
  4. National Academies of Sciences, Engineering, and Medicine, Reproducibility and Replicability in Science. The National Academies Press, Washington, DC, 2019. https://doi.org/10.17226/25303.
  5. Raymond L. Plante, The NCSA Astronomy Digital Image Library: The Challenges of the Scientific Data Library. D-Lib Magazine, October 1997. http://www.dlib.org/dlib/october97/adil/10plante.html

Speaking of Martian Spectroscopy…

An earlier post noted recent efforts to use machine learning to create “autonomous science” capabilities for exploring Mars.

This seems like a good idea to me.  In fact, it has been a good idea for a long time.  And, I would like to note, NASA has been working on the idea for quite a while.

I know this because I discussed this concept with Professor Erzsébet Merényi of Rice U. way back when (circa 2002-3), to the point of discussing possible collaborations.  (Nothing came of the collaboration—my fault, I couldn’t get things together on my end.)

The point is, Merényi and colleagues demonstrated the use of unsupervised learning to classify hyperspectral images in general, and of Mars in particular [3]. (See multiple references below.)

For background, I’ll explain that this particular research used Kohonen maps, which are a specific kind of neural net patterned after the human retina.  So, unlike general-purpose machine learning, this is a more limited and less resource-hungry neural net.  And, being based on mammalian eyes, it may be particularly useful for image processing.

Merényi’s research demonstrated that these neural nets can effectively classify segments of images into geologically meaningful groupings.  With hundreds of channels of spectral data, it is really, really sensitive.  (I was told that this software can tell “a Ford from a Chevy”, from orbit, based on the different paint formulas.)
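To make the general technique concrete, here is a minimal, self-contained sketch of a Kohonen map (self-organizing map) grouping synthetic spectra.  This is emphatically not Merényi's HYPEREYE code; the grid size, decay schedules, and toy data are all assumptions for illustration only.

```python
import numpy as np

def train_som(data, grid=(8, 8), epochs=2000, lr0=0.5, sigma0=3.0, seed=0):
    """Train a tiny self-organizing (Kohonen) map on spectra.

    data: array of shape (n_samples, n_channels).
    Returns trained node weights of shape (grid[0], grid[1], n_channels).
    """
    rng = np.random.default_rng(seed)
    h, w = grid
    n, d = data.shape
    weights = rng.random((h, w, d))
    yy, xx = np.mgrid[0:h, 0:w]                    # node coordinates on the 2-D map
    coords = np.stack([yy, xx], axis=-1).astype(float)

    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)             # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)       # shrinking neighborhood radius
        x = data[rng.integers(n)]                  # pick one spectrum at random
        dists = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(dists), (h, w))   # best-matching unit
        grid_dist = np.linalg.norm(coords - np.array(bmu, dtype=float), axis=-1)
        influence = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))
        weights += lr * influence[..., None] * (x - weights)
    return weights

def map_spectrum(weights, spectrum):
    """Assign a spectrum to its best-matching node (an unsupervised 'class')."""
    dists = np.linalg.norm(weights - spectrum, axis=-1)
    return np.unravel_index(np.argmin(dists), dists.shape)

# Toy demo: two synthetic "spectral classes", 64 channels each.
rng = np.random.default_rng(1)
class_a = rng.normal(0.2, 0.02, (50, 64))
class_b = rng.normal(0.8, 0.02, (50, 64))
som = train_som(np.vstack([class_a, class_b]))
print(map_spectrum(som, class_a[0]), map_spectrum(som, class_b[0]))  # usually different nodes
```

Note that the map learns its own grouping of the spectra with no labels supplied; attaching geological (or any other) meaning to the groupings is a separate step, which is exactly the gap noted below.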

For completeness, I’ll note that one thing Kohonen maps don’t do is understand the image.  They give you a breakdown of the different kinds of rock, but not what the rocks are.  I was looking into that part of the problem, thinking about creating some kind of automatic tagging system.  (You’ll say, “that’s easy, Facebook and everybody does that”.  But it hadn’t been invented, at least at scale, at that time.)

And, yes, she and her colleagues (and I) were definitely thinking about putting this onboard to do “autonomous science”.

So, yeah, it’s a good idea.

So why is NASA funding the same thing 15 years later?  Basically, NASA funds a lot of cool ideas, but only some things get developed all the way.  And NASA has very little money for development, so a lot of stuff gets abandoned at the early stage of “demonstration”.  But good ideas come back again and again, so there can be multiple demonstrations of the same concept.


Anyway, I just wanted to tip my hat to Sensei Erzsébet, who has been pioneering machine learning for spectroscopy for a long time now.


(Merényi’s publication list is here.)

  1. W. H. Farrand, E. Merényi, S. Murchie, and O. Barnouin-Jha. Spectral Class Distinctions Observed in the MPF IMP SuperPan Using a Self-Organizing Map. In 36th Lunar and Planetary Science Conference, 2005.
  2. E. Merényi, “Precision Mining” of High-Dimensional Patterns with Self-Organizing Maps: Interpretation of Hyperspectral Images, in Quo Vadis Computational Intelligence: New Trends and Approaches in Computational Intelligence. Studies in Fuzziness and Soft Computing, P. Sincak and J. Vascak, Editors. Physica-Verlag, 2000.
  3. E. Merényi, W.H. Farrand, and P. Tracadas. Mapping Surface Materials on Mars From Mars Pathfinder Spectral Images With HYPEREYE. In International Conference on Information Technology (ITCC 2004), 2004, 607-614. https://dl.acm.org/doi/10.5555/977403.978487
  4. E. Merényi, E.S. Howell, L.A. Lebofsky, and A.S. Rivkin, Prediction of Water In Asteroids from Spectral Data Shortward of 3 Microns. ICARUS, 129:421-439, 1997.
  5. E. Merényi and A. Jain. Forbidden Magnification? II. In 12th European Symposium on Artificial Neural Networks (ESANN 2004), 2004, 57-62.
  6. E. Merényi, R.B. Singer, and J.S. Miller, Mapping of Spectral Variations On the Surface of Mars From High Spectral Resolution Telescopic Images. ICARUS, 124:280-295, 1996.
  7. L. Rudd and E. Merényi. Assessing Debris-Flow Potential by Using AVIRIS Imagery to Map Surface Materials and Stratigraphy in Cataract Canyon, Utah. In Fourteenth Airborne Earth Science Workshop, 2005.
  8. K. Taşdemir and E. Merényi. Considering Topology in the Clustering of Self-Organizing Maps. In 5th Workshop On Self-Organizing Maps (WSOM 2005), 2005.
  9. T. Villmann and E. Merényi, Extensions and Modifications of the Kohonen-SOM and Applications in Remote Sensing Image Analysis, in Self-Organizing Maps: Recent Advances and Applications, U. Seiffert and L.C. Jain, Editors, 2001, 121-145.

Bitcoin ATM Security Issues

We are shocked—shocked!—to learn that many Bitcoin ATMs have security problems.

Bitcoin “ATMs” are stand-alone systems that enable people to purchase Bitcoin, i.e., to trade US Dollars or Euros or other money for Bitcoins or other cryptocurrencies.  However, these devices are not operated by a bank or other reputable institution, so who knows what quality assurance has been done?  (Spoiler alert—not enough QA has been done.)

I gather that many popular BATMs are built using Android OS.  Essentially a smart phone app in a box.  I’m not sure why this was the choice, but it’s not a terrible idea.  Provided you grok the ins and outs of AOS (which I’m sure I don’t).

This fall Kraken Security Labs reports very serious security flaws in popular BATMs [1].  Very. Serious.

These devices not only conduct monetary transactions, they use credit cards, bank accounts, and other sensitive information.  If they are compromised, users may be hacked. 

To start with, since these devices are not operated by banks or other grown-ups, they could be located anywhere.  It is a mistake to assume that a BATM is secure in and of itself, especially if it is in an untrusted location.  No matter how secure and trustworthy the Bitcoin protocol might or might not be, a box sitting in a random convenience store probably isn’t.

What else is wrong?

The whole list of problems reported by Kraken would be comical, were there not so much customer money at risk.

First of all, the systems are shipped with a default admin password, which—spoiler alert—is often not changed.  Since there isn’t any password management system, this has to be done by hand for each unit, so, yeah, it’s probably not being done.

Anyone with this default password will be able to get into a system and take it over.  (And, for your convenience, access is via QR code, so you don’t even need to type the password, just wave the right QR at it.  Sigh.)

Second, the hardware is accessible to anyone with access to the main box.  This means that whoever changes the cash box could monkey with the hardware, including the camera, fingerprint reader, and computer.  Kraken notes that there is no alarm indicating that the hardware was exposed.  (And if there is no security camera, there could be no record at all of who opened the box.)

This application is essentially a kiosk, no?  So the Android OS should probably be set to “kiosk mode” which locks down things to a single use.  Unfortunately, the BATM isn’t set to kiosk mode, so the AOS is potentially open.  In fact, Kraken reports that “by attaching a USB keyboard to the BATM, gaining direct access to the full Android UI is possible.” 

It seems that the boot loader isn’t secure, either, so anyone with physical access to the processor could completely reprogram the device.

The BATM works by communicating with a server.  Shockingly enough, there are weaknesses in this communication protocol that could enable an attacker to forge messages.  I’m not sure, but I think this means that an attacker could basically pretend to be an ATM, and fake any transactions it wants.


Overall, this report reads like a textbook example of what not to do. If you turned this in as a class project, I probably would flunk it.


Clearly, you should never use a Bitcoin ATM unless you really, really trust the people who run it.  And even then, it’s a risk.

Which is ironic, because one of the main use cases for these BATMs is that you don’t trust “the man”, and want to eschew conventional financial systems.

As I’ve said before, the cryptography and Bitcoin protocol might be secure, but that doesn’t mean that this kind of DIY ATM is.

And in this case, DIY security is, well, pretty darn iffy.


  1. KrakenFX, Kraken Security Labs Identifies Vulnerabilities In Commonly Used Bitcoin ATM, in Kraken – Security Labs, September 29, 2021. https://blog.kraken.com/post/11263/kraken-security-labs-identifies-vulnerabilities-in-commonly-used-bitcoin-atm/
  2. Tom McKay, Widely Used Bitcoin ATMs Have Major Security Flaws, Researchers Warn, in Gizmodo, September 30, 2021. https://gizmodo.com/widely-used-bitcoin-atms-have-major-security-flaws-res-1847776137

Cryptocurrency Thursday

VoloCopter Flies!

The (terrifying) VoloCopter is a real thing!

This fall, the 18 (!) bladed copter, more than 10 meters across, completed a 3-minute test flight in Hamburg.  Ta da-a!

The cargo was then transferred to a cargo bicycle for final delivery, just to show we don’t need no steekin’ combustion engines.

It’s not clear to me just when you would need this kind of delivery, but then I don’t live in a really dense city.  Nor do I deal with delivering supplies through congested roads.  So what do I know?

Robot Wednesday

What will Coworking Become?  Kane on “Productivity as a Service” [repost]

[This was posted earlier here]

We’re all wondering just what coworking will look like after the pandemic shutdown.

The “flexible office” industry is stepping forward with pitches, including spiffy new jargon.

This fall Kenny Kane writes about “Productivity As A Service”, which he links to the future of coworking [1].

Huh?  What?

OK, the “as a service” tag has been popular for a while, riffing off the original breakthrough, “Software as a Service” (i.e., rent, not own).

So what in the world could “productivity as a service” even mean, with or without coworking?

I mean, productivity is a statistic, a number which can be computed for a person or a group.  You can’t buy or sell “productivity”, so how can it be rented as a “service”?  This makes no logical sense.  Or, more to the point, it is making up a new, trendier word for “renting office space”.

What I think Kane is talking about are the features of an office that help workers and groups be more productive.  Good connectivity, comfortable work areas, appropriate meeting spaces, etc.  These are things that operators do sell to their users, and I think the notion is that operators should be sure to provide the right array of features.  He’s thinking that you charge more for features that arguably pay off for the business, i.e., improve results, AKA “productivity”. So you are renting not just space but valuable infrastructure services.

The “as a service” part also suggests that the operator should provide these features as part of a menu that renters can select from.  I.e., they are not built to order, they are built in as standard parts of the workspace, though possibly with options.

OK, this all makes sense, even if the term “PAAS” is an abuse of the historic sources.


What does this have to do with coworking?

I think Kane’s point is summed up in the section header, “Beyond Coworking: Physical Spaces Designed For Productivity”.

He focusses on the original format of coworking, the open plan shared workspace.  As he notes, workers need more than a desk and a chair.  Or at least, many workers, some of the time, want other things.  Such as a quiet space or private office.  So, Kane says, building managers should offer more than just desks-by-the-hour in a big “chatty” room.

“For this reason, we may start to see coworking evolve from chatty social hubs to productivity destinations.”

(From [1])

Kane also notes the important value to workers of having someone else run the office space.  As workers and organizations work back from Work At Home, everyone has a new regard for professionally maintained office and infrastructure.


Of course, Kane makes some good points here.  I’ve been making the same points for quite a while, long before the pandemic.

However, his implication that flexible office space is the future of coworking is dubious.

For one thing, the idea of coworking emerged out of flexible office rental, so Kane is describing devolution, not evolution.  Coworking spaces have always provided a variety of features, including everything Kane describes here.

In fact, what Kane describes here is basically what I call “sprinkling community on rental office space”.

As I have argued for many years, the essential product of coworking is not office space, productive or otherwise, it is community.  That is why coworking spaces always have a “chatty social hub” at the core.  The social part is what the coworking space is selling.  The rest is just infrastructure.

Tellingly, Kane provides for this crucial function in the form of “Designated collaboration rooms to keep noise levels at a minimum.”  Let the hippies have their little room, he is saying, while the real workers hunker down alone in quiet, private offices.

Is this the future of coworking?  Hardly. 

A successful coworking space must be all about building and sustaining community, not about selling “productivity as a service”.   This requires community leadership (i.e. talented humans) and plenty of face to face interaction.  And, no, there is no such thing as “community as a service”.

Is this PAAS the future of rental offices?  Maybe, but who cares?

The good news is that you can build a good coworking community on top of many variations of flexible office space.  So PAAS may enable coworking operations to build and sustain their communities.

“Community as a Layer on Top of PAAS”!  Now there’s an Nth-order buzzword!


  1. Kenny Kane, The Rise Of Productivity As A Service In The Coworking Model, in Forbes – Forbes Biz Council, October 13, 2021. https://www.forbes.com/sites/forbesbizcouncil/2021/10/13/the-rise-of-productivity-as-a-service-in-the-coworking-model/

(For much more on the Future of Work, see the book and blog  “What is Coworking?”)

What is Coworking?  What Will Coworking Become?

3D Printed Duct Work

3D printing is revolutionizing fabrication at many scales.  Submillimeter precision enables interesting things like catalytic reactors with bespoke geometry and surface structure.

If tiny is good, then big must be better.  So we can 3D print building sized structures.  Like, say, a rocket pad.  Or, for that matter, a rocket.

This fall researchers from UT Sydney and BVN Architecture demonstrate another application—ductwork for interior air flow [1].

(From [1])

They fabricate these bespoke air vents using a robot arm to manipulate the print head.  The design is curvy and organic, and, like a plant, emits air through pores throughout its branches.

And, for extra marks, they use shredded plastic recycled from bottles, so the Carbon footprint is way less than that of conventional steel ductwork.  The ducts are therefore recyclable as well.

The video shows the robot fabricator, and there is an illustration of how the air is delivered.  I kind of like the idea of air coming from many directions, not just blasting out of a hole or two.


I can see some advantages to this design.  The designers note that the organic shape is aerodynamic which saves energy.  I expect it is fairly light for its strength, and also flexible so it can handle tremors without breaking.

We see images of computer design that is modelling the air flow through these bespoke tubes.  This reminds us that this needs to be carefully designed not only to deliver air where we want it but to avoid pockets of dead air where contaminants might accumulate.  I note that biological systems like this often have active cleaning / scavenging mechanisms.  Will there need to be little robot ants in the pipes, cleaning out dust, moisture and detecting mold or other contamination?

There may be some drawbacks.  Recycled plastic may be fragile, vulnerable to heat, and probably isn’t rodent proof.  The material itself is also rather flammable, which is less than ideal. There may need to be firebreaks to prevent disastrous chimney effects.

And, if anything happens to part of the system, how do you patch or replace the damage? All those bespoke, organic curves are hard to fix by hand.

The porous design requires the whole pipe to be exposed to the human space, not just at a few air points, which may not always be convenient in some spaces.  Along those lines, it looks to me like the human victims, er, inhabitants have little ability to physically direct or block the air.  (I myself have been known to tape cardboard over an annoying vent to block or redirect annoying air.)

And finally, I can’t help but recall the many movie sequences involving secret agents infiltrating via square steel air ducts.  Imagine the comedy as our cat burglar hero finds herself crawling and falling into a weird, flexible clear plastic gerbil tube. : – )

  1. BVN Architecture, Systems Reef 2.0, in BVN – Projects, 2021. http://www.bvn.com.au/projects/systems-reef-2-0/

Book Review: “The Last Graduate” by Naomi Novik

The Last Graduate by Naomi Novik

The first book of the Scholomance was a cliff hanger for sure, so we are all ready for this second installment.

This book continues the story of Galadriel (call her ‘El’ or suffer) and her friends, now seniors who must prepare for the traumatic gauntlet that follows graduation.  This year is different, though, because last year El and Orion Lake intervened in the graduation, tipping the scales dramatically against the monsters.

It soon develops that business is not as usual for this class of seniors.  There seems to be a shortage of monsters and a surfeit of students, entirely due to the special talents of El and Orion.  This altered predator-prey situation soon alters the human social relations–dramatically.

El receives offers and gifts from all over, as people recognize her powers. More ominously, the school itself seems to be out to get El, or maybe send her a message.  Her schedule is appalling, and she is more or less forced to take care of other students, and perform other selfless acts.   

One thing leads to another, and El resolves to not just get through herself, but to get everyone else through.  This requires a major change in the school culture; to get everyone working as a united team. It is only possible because in El and Orion they have a unique combination super weapon, and a real possibility of actually getting everyone out.

And El discovers that the school itself seems to be on their side, too.

Strange days, indeed.

Organizing this mass breakout is hardly simple.  Worse, El wants to solve the problem once and for all, not just for her own class.  How are they supposed to survive and make sure future kids don’t have to go through what they have endured?

This is going to call for the best effort of these elite students, that’s for sure.

And however well it goes, what is going to happen after graduation?


Once again Novik has dealt up an amazing fantasy world with wonders on every page.  OK, there’s more teen angst than I really need, but there is so much cool magic that all the jabber slides by. 

I suppose that some will find a certain amount of satirical social commentary on elite schools and the kids that endure them.  Are we supposed to recognize mundane world equivalents?

Jam all these bright kids together, make them compete fiercely, and leave them to fend off hordes of malicious, kid eating “monsters”.  With no adults.  Basically, anything goes, just to survive.

As far as the fantasy world is concerned, Novik sketches a weak argument for how this school might seem like a good idea, but honestly I really have to work to suspend my disbelief. It makes no sense to me. Who would actually send their kid to this horror show?

But who cares?  The magic is very cool, and the kids will be OK.  We hope!

Book three will need to tell us about events post-graduation.


  1. Naomi Novik, The Last Graduate, New York, Del Rey, 2021.

Sunday Book Reviews

Mortati and Carmel on the Academic Arms Race

It’s been quite a while since I have taught at University, and from what I hear, I’m glad to not be doing it.

This fall Joseph Mortati and Erran Carmel remind me of yet another reason:  digitally enabled cheating [1].   “Can We Prevent a Technological Arms Race in University Student Cheating?”, they ask.

Sigh.  I guess teaching has always had an adversarial aspect, but this is a bit much.

Mortati and Carmel define the core problem as authentication of authorship, with the most pressing issue being whether the registered student actually completed the assignment.  Tellingly, M&C used the phrase “actually complete any of the assignment” (emphasis added).  Sigh, again.

The authors catalog the resources available to the adversaries. 

Students have the rich communications, data storage, and search capabilities of the internet available.  Anything distributed digitally can and presumably quickly will be captured, stored, and shared with the whole world.  From this perspective, it’s probably more difficult to not cheat than to cheat.

And, of course, modern digital markets make it easy to purchase work, or hire a substitute to do the work.  Sigh.

(I helped boot up the internet, so I have extremely conflicted emotions about this. The point of the internet was to put vast information resources in the hands of everybody. But we didn’t really mean them to be used for stupid stuff like buying term papers.)

Teachers have two-way video links (to emulate in-class exams) and various technical means to limit or monitor behavior.  VPNs or other filtering can restrict access to authenticated users or computers.  An ever-escalating array of “plagiarism checkers” attempts to flag suspicious duplication and other blatant deceptions.  M&C anticipate that the full gamut of online authentication technology may well be deployed soon, up to and including biometrics and behavioral analysis.  Sigh.
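For flavor, here is a toy version of the duplication flagging that such checkers do: compare overlapping word shingles between two submissions.  Commercial tools are vastly more sophisticated (huge corpora, paraphrase detection, source tracing); the shingle length and flagging threshold below are arbitrary assumptions.

```python
# Toy duplication check: Jaccard similarity of overlapping 5-word shingles.
# Real plagiarism checkers do far more; this only illustrates the basic idea.
def shingles(text: str, n: int = 5) -> set:
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: str, b: str) -> float:
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

submission_1 = "the quick brown fox jumps over the lazy dog near the river bank"
submission_2 = "a quick brown fox jumps over the lazy dog near a river bank"
score = jaccard(submission_1, submission_2)
print(f"overlap score: {score:.2f}", "-> flag for review" if score > 0.3 else "")
```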

There are risks on both sides.  Students face an array of penalties, generally up to expulsion and loss of accreditation.  (But, as far as I know, criminal prosecution is rare to date.)

Institutions face risks of both false negatives and false positives.  Undetected cheating undermines the credibility of the institution and dilutes the key product.  But false accusations of cheating are unfair and also undermine the credibility of the institution.

M&C note that anti-cheating technology is essentially surveillance technology, and cultural norms regarding privacy are fluid and contested.  Aggressive surveillance might prevent or deter cheating, but the cost could be abusive intrusion into the lives of the students.  Attitudes about these tradeoffs are evolving rapidly, in and out of the academy.

M&C make some longer term suggestions. 

For one thing, some aspects of this process are pretty much the same as overall network security processes, so they can be enforced by the net police.  Letting someone else use your login is already against policy, so letting them do your work is a violation of security policy.  Other “cheating” could be added as sub-cases of data and network security policy.  One advantage here is that improper access is usually dealt with swiftly and efficiently by disabling the user id.

M&C also note that courses should be redesigned and—shockingly—should be more interesting.  The activities most vulnerable to cheating are also some of the dumbest types of assignments.  Rote recall and trivial writing assignments are not only boring, they are easy to cheat on.

The ideal, of course, is to try to assure that students are learning because they really want to do the work.  Students who are doing something they love don’t cheat because that would ruin it.

I’ll also suggest that collaboration in person is way, way harder to fake.  So we might argue that face to face interactions should be included in the curriculum as a required authentication measure.

(I note that many graduate programs traditionally include oral exams as a quality control step.  These grueling experiences make a student demonstrate that, regardless of test results and grades, they actually can communicate at a professional level, i.e., that they really know the stuff they are supposed to know.)

So—better classes and more in person contact.  That sounds expensive.  I expect that institutions will probably focus on more and more extreme surveillance.  Sigh.


  1. Joseph Mortati and Erran Carmel, Can We Prevent a Technological Arms Race in University Student Cheating? Computer, 54 (10):90-94,  2021. https://ieeexplore.ieee.org/document/9548108

Autonomous Science for Mars Exploration

Despite the loopy dreams of over-funded tech bros, there aren’t many reasons to actually travel to Mars. 

But one problem that humans on Mars do address is decision making, including in science projects such as looking for life.  Where to look next, based on current knowledge, is a complex decision that usually is made by humans.  To date, the suite of orbiters and landers has generally been tasked from Earth, which is cumbersome to say the least.

It is also very inefficient, expending huge amounts of time and interplanetary bandwidth sending tons of data home, waiting while the Carbon-based units cogitate, and then sending new instructions back out.  It’s a shame to waste the limited lifetimes of robot probes, and it’s a shame to use scarce bandwidth for all this overhead.  And it only works at all where the probe can maintain reliable radio contact with home.

Having humans in orbit or on the surface of Mars to operate the explorers could cut a lot of this traffic and delay. 

Or we could build better robots.  (Which will be a good thing even if there are humans closer by.)

This fall researchers at Goddard Space Flight Center discuss work on what they term “science autonomy”—souped up “intelligent” instruments that are at least partly self-guided [1].  These instruments aim to optimize the returned data—the payoff—of the remote instrument.  This requires that the probe itself process and interpret data, make decisions about future observations (i.e., where to look next), and send back the most important data.

In short, the instrument should understand more of the science, and behave more like a scientist. Cool!

“Onboard science data processing, interpretation, and reaction as well as the prioritization of telemetry constitute new, critical challenges of mission design. “

([1], p. 70)

There are many possible instruments that might be in use, each with its own features to interpret.  The Goddard group has focused on mass spectrometers.  Mass spectrometry is incredibly versatile, and is widely used on space probes.  

Mass spectroscopes generate a lot of raw data, and the better the instrument, the more data.  This is a significant challenge, because bandwidth is and will remain limited.  It would be easy enough to put better mass spectroscopes on Mars, but we couldn’t get the data back in any reasonable amount of time.  For example, the ESA/Roscosmos ExoMars rover scheduled to launch in 2022 has a mass spectrometer that can collect 50,000 samples per second, 1,000 times the rate of the 2012 Mars Science Lab.
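A back-of-the-envelope illustration of why this matters.  Every number below except the 50,000 samples-per-second figure is an assumption of mine (sample size, observing time, relay bandwidth, pass length), but the mismatch is the point: raw output swamps the downlink.

```python
# All values except samples_per_second are illustrative assumptions,
# not mission specifications.
samples_per_second = 50_000        # quoted rate for the ExoMars instrument
bytes_per_sample = 4               # assumed raw sample size
duty_cycle_seconds = 3600          # assume one hour of observation per sol
downlink_bits_per_second = 2e6     # assume ~2 Mbit/s during a relay pass
relay_pass_seconds = 8 * 60        # assume one ~8-minute relay pass per sol

raw_bits_per_sol = samples_per_second * bytes_per_sample * 8 * duty_cycle_seconds
downlink_bits_per_sol = downlink_bits_per_second * relay_pass_seconds
print(f"raw data:      {raw_bits_per_sol / 1e9:.1f} Gbit per sol")
print(f"downlink:      {downlink_bits_per_sol / 1e9:.1f} Gbit per sol")
print(f"backlog ratio: {raw_bits_per_sol / downlink_bits_per_sol:.0f}x")
```

Even with these generous assumptions, a single hour of observing produces several relay passes worth of data, so something onboard has to decide what is worth sending.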

Of course, what we really do with the MS data is look for features, signatures of specific molecules.  In fact, we are usually most interested in a few features, as well as any big surprises.

So it would be really useful and super cool for the instrument to look for this important stuff, and quickly report back the highlights; the “interesting”, and “surprising” results.  The raw data can be held and transmitted later if desired.

The question is, can this be automated?  Can a space probe accurately filter and report such data?  We surely would be unhappy if our instrument’s summaries were error prone, or if it overlooked something really important.  The whole idea is that we can’t check all the data to confirm the results, so we really need to trust the judgement of the autonomous instrument.

The researchers explored the use of—wait for it—machine learning to automatically classify data to quickly identify the most interesting observations.

One of the important goals is to help automatically control the instrument.  Ideally, the instrument would recognize bad data or other problems, and suspend or correct the observation.  In some cases, the observations suggest that additional measurements should be made, and the instrument should follow up.  And so on.

The second goal will be to assess the most valuable data to return first.  This judgement is subjective, and may depend on circumstances beyond the instrument.  Nevertheless, choices must be made, so automated assistance is crucial.

This task is actually pretty hard.  Interpretation of mass spectroscopy is complicated, not least because there is so much of it.  A given sample can have many features, all interesting for some purposes, but not others.  And in the case of planetary exploration, there aren’t necessarily relevant training examples available.

The research used data accumulated from an engineering test system at Goddard.  They note that they used metadata to screen out “useless” examples (e.g., electrical tests).  A machine learning model was developed to detect junky samples statistically and with expert input from humans.   Finally, supervised learning was used to identify the most interesting (and accurate) results. 

One output could be a matching search to identify “similar” samples.  Consumers (scientists) could be given a list of “you may also like” spectra.  The system can also classify samples and generate tags to describe the observation.  This information can be analyzed and visualized to help quickly interpret the significance of the results.
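A hypothetical sketch of this kind of workflow, using off-the-shelf tools rather than the Goddard team's actual models: classify binned spectra with a supervised learner, then return “you may also like” matches by nearest-neighbor search.  The binning, class labels, and random data are invented purely for illustration.

```python
# Sketch of a spectrum triage pipeline: supervised classification plus
# similarity search.  Features, labels, and data are fabricated stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Pretend each mass spectrum is binned into 200 intensity channels.
spectra = rng.random((500, 200))
labels = rng.integers(0, 3, 500)          # e.g., 0=junk, 1=routine, 2=interesting (assumed)

# Supervised step: learn to flag the "interesting" spectra.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(spectra, labels)

new_spectrum = rng.random((1, 200))
print("predicted class:", clf.predict(new_spectrum)[0])

# "You may also like": nearest neighbors among previously archived spectra.
nn = NearestNeighbors(n_neighbors=5).fit(spectra)
_, idx = nn.kneighbors(new_spectrum)
print("similar archived spectra:", idx[0])
```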

The preliminary investigation worked, kinda.  There really isn’t very much good training data, and it is difficult to verify the results without more detailed study.  It will take a lot more work to develop confidence in the algorithms enough to get the full benefit.

This study is basically a retrofit on an instrument not designed to do this sort of analysis.  It shows what might be done, but it seems clear that this kind of “science autonomy” should be built in from the beginning.  As more and more data is collected, there will be opportunities to create robust analysis and decision making in future instruments.


  1. Victoria DaPoian, Eric Lyness, William Brinckerhoff, Ryan Danell, Xiang Li, and Melissa Trainer, Science Autonomy and the ExoMars Mission: Machine Learning to Help Find Life on Mars. Computer, 54 (10):69-77,  2021. https://ieeexplore.ieee.org/document/9548129

Crypto Mining is Still Evil

I’ve noted before the grotesque amount of electricity that Bitcoin mining consumes—by design.

This profligate consumption at least has a purpose, however crude and unfortunate.  The cost of computation is used as a deliberate “difficulty” to implement distributed timestamps.  Cheating is impossible because the computation cannot be fudged.  And one reason it can’t be fudged is that it consumes so much electricity that it is infeasible to just scale it up and rewrite history with a bigger computer.
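For readers who haven't seen it, here is a generic proof-of-work sketch showing why history can't be cheaply rewritten: finding an acceptable nonce takes many hash attempts on average, and every attempt burns real computation (and electricity).  The difficulty value below is an arbitrary toy setting, not Bitcoin's, which self-adjusts to keep blocks roughly ten minutes apart.

```python
# Generic proof-of-work toy: find a nonce so that SHA-256(data + nonce)
# falls below a target.  Not Bitcoin's actual header format or difficulty.
import hashlib

def mine(block_data: bytes, difficulty_bits: int = 18) -> int:
    """Search for a nonce whose hash falls below the difficulty target."""
    target = 1 << (256 - difficulty_bits)          # smaller target = harder puzzle
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce                           # the proof that work was done
        nonce += 1

nonce = mine(b"example block header")
print("found nonce:", nonce)
```

Verifying the proof takes a single hash, while finding it takes on average 2**difficulty_bits attempts; that asymmetry, scaled up enormously, is what makes rewriting the chain prohibitively expensive.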

But Bitcoin and other crypto mining uses other resources besides electricity, including incidental side effects from power generation and a neverending arms race of computer systems.

The former has produced massive amounts of Carbon emissions. The latter has led to occasional shortages of some computer components (notably GPUs).

This fall two European researchers report yet another unwanted side effect of this crypto arms race:  the generation of tons of electronic waste [2].  The constant pressure to mine more Bitcoins has resulted in rapid turnover of computer systems, replacing last year’s model with newer versions.  The decommissioned systems are mostly discarded, generating toxic e-waste.

The new study tries to estimate the amount of e-waste generated by Bitcoin mining.  They base their estimate on plausible life cycles of specialized mining hardware.  Reasoning from statistics of what devices were in use, and when they become obsolete (i.e., unprofitable for mining), they estimate that specialized mining chips are used for less than 1.5 years.  After this period, these devices will likely become waste.

Their estimates suggest that “the whole Bitcoin network currently cycles through 30.7 metric kilotons of equipment per year” ([2], p. 9).  This is approximately the amount of similar waste generated by, say, the Netherlands.
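The arithmetic behind such an estimate is simple, even if the inputs are contested.  Here is a rough sketch in which every input is an assumption of mine rather than the paper's; the paper's fuller accounting of device generations, weights, and lifetimes is what yields the 30.7 kiloton figure.

```python
# Back-of-the-envelope e-waste estimate.  All inputs are illustrative
# assumptions, not the values used in [2].
network_hashrate = 160e18          # hashes/second across the whole network (assumed)
device_hashrate = 50e12            # average hashes/second per miner, mixed generations (assumed)
device_weight_kg = 11.0            # shipping weight of one mining unit (assumed)
device_lifetime_years = 1.3        # under 1.5 years, per the paper's reasoning

devices_in_use = network_hashrate / device_hashrate
kg_per_year = devices_in_use * device_weight_kg / device_lifetime_years
print(f"~{kg_per_year / 1e6:.1f} metric kilotons of e-waste per year")
```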

This is a lot of toxic waste. 

And, as the researchers comment, the resource consumption of Bitcoin is particularly large considering that “the actual use of the Bitcoin network has remained limited”.  All this electricity and e-waste serves about as many transactions in a year as conventional finance handles in a couple of hours.

“Over the course of 2019, the network processed 120 million transactions […], while traditional payment service providers processed about 539 billion transactions.”

([2], p. 2)

I’ll note that the emissions and e-waste generated by cryptomining are direct consequences of the fundamental design philosophy of Nakamotoism.  Bitcoin mining in particular relies on a simple-minded economic model that rewards raw computing performance and does not account for external costs of any kind.  How could a Nakamotoan network not result in this kind of costly technical arms race and negative side effects?

In addition, the distributed design and decentralized governance assure that no one is responsible and no one is able to do anything about unwanted side effects.  Bitcoin is intended to be “impossible to censor”, resistant to regulation by “the man”.  In this case, Bitcoin is highly resistant to any policy to reduce e-waste; it is difficult to “censor” the production of pollution and toxic waste.

It’s not likely that Bitcoin will change. It’s working exactly as it was designed to do. Bitcoin and similar cryptocurrencies are designed to waste resources, designed to ignore damaging side effects, and designed to be ungovernable.

On the other hand, cryptocurrencies can be regulated, and certainly will be.  Regulating cryptocurrencies will surely be good for Bitcoin and other cryptocurrencies, if not for the current crop of speculators and poorly thought out businesses.


  1. BBC News, Bitcoin mining producing tonnes of waste, in BBC News – Tech, September 20, 2021. https://www.bbc.com/news/technology-58572385
  2. Alex de Vries and Christian Stoll, Bitcoin’s growing e-waste problem. Resources, Conservation and Recycling, 175:105901, December 2021. https://www.sciencedirect.com/science/article/pii/S0921344921005103

Cryptocurrency Thursday

A personal blog.
