Origami for spacecraft!

As I’ve said before, Origami should be included in the curriculum for engineers and other designers.  The techniques are influencing new designs for robots, furniture, and everything.  (e.g., see earlier posts here, here, here, here, here, here, here, here, here, here, here, here, here, here)

This winter NASA designers discussed their explorations of Origami for spacecraft [3].  One gigantic reason for this interest is what they term “the tyranny of the fairing”.  Launch vehicles are only so big, so large things have to be packed compactly and then expanded and assembled in space.

Principles from traditional Origami are just the thing:  a compact package unfolds to self-assemble the object.   So, for instance, a flower-like antenna or solar array might be launched as a “bud”, and then open up to the functional shape.

(NASA JPL has a neat little exercise to make a paper version of a “starshade” [2].)

Origami can also be used to implement “flat packed robots”, which can be transported in a compact package, and then unfolded to self-assemble the body of a robot.

These and other applications use classic Origami as an inspiration.   Of course, satellites and robots are usually not made of paper; they are made of fancy materials.  They also will have circuits, sensors, motors, and whatnot embedded in them.

This is not your grandfather’s origami!

 

This wonderful explosion of origami inspired design is enabled by theoretical developments of a mathematics of folding, e.g., [1].  And if there is math, we can make software to do it.
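
For a tiny taste of what that math looks like in code, here is a toy sketch of my own (not anything from Lang’s book specifically) of Kawasaki’s theorem: a single-vertex crease pattern can fold flat only if the sector angles around the vertex, taken alternately, sum to 180 degrees on each side.

```python
# Toy check of Kawasaki's flat-foldability condition for one vertex.
def folds_flat(sector_angles_deg, tol=1e-9):
    """sector_angles_deg: angles between consecutive creases around a single vertex."""
    if len(sector_angles_deg) % 2 != 0 or abs(sum(sector_angles_deg) - 360) > tol:
        return False
    odd = sum(sector_angles_deg[0::2])
    even = sum(sector_angles_deg[1::2])
    return abs(odd - even) < tol

print(folds_flat([90, 45, 90, 135]))   # True: 90 + 90 == 45 + 135 == 180
print(folds_flat([100, 80, 100, 80]))  # False: 200 != 160
```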

Kewl!

The NASA people were at pains to point out that this isn’t just messing around in the lab.  These principles are already being employed in real projects, including the James Webb Space Telescope, which is scheduled to launch later this year and will unfold at the Sun-Earth L2 Lagrange point.

The planned unfolding of the sunshade and mirrors is illustrated in this animation:

 

This is real as real gets, and cool as cool gets!


  1. Robert J. Lang, Origami Design Secrets: Mathematical Methods for an Ancient Art, Boca Raton, CRC Press, 2012.
  2. NASA Jet Propulsion Laboratory, Space Origami: Make Your Own Starshade, in NASA Jet Propulsion Laboratory – Education, September 9, 2019. https://www.jpl.nasa.gov/edu/learn/project/space-origami-make-your-own-starshade/
  3. NASA Jet Propulsion Laboratory, The von Kármán Lecture Series: 2021, in NASA Jet Propulsion Laboratory – Lecture Series, January 14, 2021. https://www.jpl.nasa.gov/jpl-and-the-community/lecture-series/the-von-k%C3%A1rm%C3%A1n-lecture-series-2021/

Fukuda on Playfulness at Work [repost]

[This was posted earlier here]

Everyone is willing, wanting, waiting to get back to work.  But we are all wondering what it will be like. I’ve noticed that even a global pandemic can’t stop the never-ending supply of advice about how to improve work and office life.  And the supposed improvements don’t change, even though we can’t even have a person-to-person meeting these days, even if we wanted to.

This month I read a piece by designer Takuo Fukuda, “The surprising tactic that could help workplaces recover in 2021” [1].  The “tactic” he is talking about is playfulness.

What does he mean by playfulness? Sometimes this is an actual game.  Sometimes it is just a non-standard, less formal situation, e.g., no “deliverables”, open agenda, brainstorming, etc. I’m pretty sure that the critical factor is permission:  permission to depart from existing convention, permission to explore ideas without penalty for failure, and so on. So it could mean lots of things.

Setting aside precisely what “playfulness” means for a moment, he is mainly concerned with the inevitable “disruptive forces” that will force changes, wanted or not.  (Like I said, the discussion isn’t really different than before the pandemic, is it?)
The essence of the argument is that playfulness can “help engage employees and give them more buy-in to the changes”.  (This sounds suspiciously like tricking people into swallowing the medicine they don’t want, but then I have a real bad attitude about work.) He makes three main claims.

Playfulness helps overcome a fear of change

“Playfulness lowers the stakes and provides permission to take risks,”

Playfulness encourages creative thinking

“Playfulness opens the door to new experiences that are more sensory and impactful than reading a presentation. This provides a fresh perspective and encourages creative thinking.”

Playfulness unites us

“playful moments foster stronger bonds and a shared sense of accountability.”

Hmm. These claims are far from self-evident to me.  Even from my own limited experience, I can think of cases where this kind of “playfulness” had the opposite effects.

“Overcome fear?”  When the fear is justified, e.g., in the face of layoffs, playful meetings can be more a form of denial.

“Fresh, creative thinking?”  If the playfulness brings out deep problems, it may exacerbate them.  There also is no guarantee that you can come up with good ideas, especially in limited time, e.g., a single “playful” meeting.

“Unity?”  Games can reveal deep dissent and differences that divide rather than unify.  “Permission” to think and create freely can unleash personal biases, cultural rifts, and all sorts of divisive behavior. (See, for example, the internet.)
Of course, playful meetings with permission to be creative can be fun.  In fact, I could ask why most meetings aren’t permissioned.  If the answer is, “because we need to get this work done now”, then I think we understand that the enterprise is probably poorly managed.

One thing that struck me the most about Fukuda’s claims is that all of this playfulness and permission only works if there is significant trust, especially between management and workers. For example, if management has already decided that change is coming and what the changes will be, then having playful sessions for workers to imagineer “change” is just sugar coating.  Asking for creative ideas is a sick joke unless the ideas will be taken seriously and have a chance to be implemented.  I.e., workers have to believe that management actually cares about their ideas. So, ironically, if play is not for real, it is a waste of time or worse, an insult.

Does playfulness create or enhance trust?  In my experience, not in itself. What matters most of all is what happens after the play.  If the group successfully creates some new ideas (and possibly some solidarity, etc.), then this must be followed with encouragement and resources, and a real effort to try to make them real.  (And, by the way, this can be quite challenging to do.) If management ignores the creative ideas, then the playfulness was an insulting waste of time, and probably damaged the organization.

So, I say playfulness + follow-through is what is needed.

Is this going to be especially important in 2021? Not really. But in 2021-22 we’re all going to be embracing (sometimes literally) the opportunity to actually be together in person. So maybe it wouldn’t be a bad idea to do some playfulness, for the human contact if nothing else.
  1. Takuo Fukuda, The surprising tactic that could help workplaces recover in 2021, in Fast Company, December 23, 2020. https://www.fastcompany.com/90588625/the-surprising-tactic-that-could-help-workplaces-recover-in-2021
  (For much more on the Future of Work, see the book “What is Coworking?”)  

Brazilian Study of Ancient Shorelines

It is clear that sea levels were higher around the world in the last warm period, some 120,000 years ago.  However, fine details of the rise and fall of sea levels are rarely discernible in the geological record.

This winter researchers at the Federal University of Rio Grande do Sul (Brazil) report an interesting case where detailed records are available [3].  The site has a gently sloping coast that has hosted coastal lagoons since the Pleistocene.  As sea level changed, the lagoons migrated inland and seaward, leaving characteristic sediments filled with fossils of marine shellfish.

Different species flourish in water of specific depths, so the different layers indicate how deep the water was at that location at that time.  The study focused on Amiantis purpurata, a bivalve that lives in shallow water.  Their presence in sediments indicates the location was shallow water at the time the animal lived.

To fill in the picture, the researchers use electron spin resonance (ESR) dating.  I’m not familiar with this technique (it emerged after I left my Anthro studies), but I gather that the measurement detects the cumulative effects of natural radiation on electrons trapped in the material.  In geological materials, these levels can be used to infer the age of the material.  ESR has become widely used in paleontology because it can date tooth enamel and other fossils in a range of roughly 10,000 to 300,000 years [2].
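
As a crude illustration of the underlying arithmetic (my gloss, not the paper’s actual workflow), the basic ESR age is just the accumulated (equivalent) dose divided by the environmental dose rate:

```python
# Minimal sketch of the ESR age equation: age = equivalent dose / dose rate.
def esr_age_ka(equivalent_dose_gy, dose_rate_gy_per_ka):
    """Return an age in thousands of years (ka)."""
    return equivalent_dose_gy / dose_rate_gy_per_ka

# Made-up numbers for illustration: a shell that absorbed ~240 Gy in sediments
# delivering ~2 Gy per thousand years would date to ~120 ka.
print(esr_age_ka(240.0, 2.0))  # -> 120.0
```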

From the ESR data and stratigraphy, the researchers constructed a timeline of the coastline 100,000 to 300,000 years ago, i.e., how deep the water was at various geolocations over the period.

The most recent rise in sea level took place over roughly the last 10,000 to 20,000 years.  This study finds a high water mark (“highstand”) of about 7 meters above today’s sea level at about 120,000 years ago, corresponding to a warming period.  Much of the evidence of this 120 ka event was erased by that later rise.  There were earlier high sea levels as well, but the evidence has been eroded even at this favorable location in Brazil.

The important point is that the ESR data was able to identify sea levels as far back as 120 ka, including possible smaller events not previously documented.

These findings are basically consistent with other evidence.  The researchers argue that this study shows that fossil shells can survive 100,000 years, which is roughly the length of the high-low sea level cycles in recent times.  They also emphasize that this kind of evidence needs to be cross-checked with other evidence. ESR dating itself is pretty finicky, and, of course, fossil seashells might have been buried or uncovered in the intervening millennia.


  1. José Tadeu Arantes, Dating of shell fossils shows how shoreline changed during glacial-interglacial cycles, in Agência FAPESP – News, January 13, 2021. https://agencia.fapesp.br/dating-of-shell-fossils-shows-how-shoreline-changed-during-glacial-interglacial-cycles/34978/
  2. Rainer Grün, Electron spin resonance dating in paleoanthropology. Evolutionary Anthropology: Issues, News, and Reviews, 2 (5):172-181, 1993/01/01 1993. https://doi.org/10.1002/evan.1360020504
  3. Renato Pereira Lopes, Jamil Corrêa Pereira, Angela Kinoshita, Michelle Mollemberg, Fernando Barbosa, and Oswaldo Baffa, Geological and taphonomic significance of electron spin resonance (ESR) ages of Middle-Late Pleistocene marine shells from barrier-lagoon systems of Southern Brazil. Journal of South American Earth Sciences, 101:102605, 2020/08/01/ 2020. http://www.sciencedirect.com/science/article/pii/S0895981120301188

Book Review: “The Last Million” by David Nasaw

The Last Million by David Nasaw

At the end of World War II, Europe lay in ashes, cities and infrastructure wrecked, countries occupied by foreign armies and provisional governments.   The populations were homeless and starving, and there was also a flood of human wreckage, millions of refugees far from home.  Even with the end of fighting, there was a vast humanitarian crisis.

During the war, large numbers of people were displaced voluntarily or involuntarily.  Some were in fighting forces or followed fighting forces through advance and retreat.  Others fled the fighting as it ebbed and flowed.  And many were taken prisoner or conscripted as slave labor, moved far from home.  And, of course, millions were consigned to death camps.

At the end of hostilities, the occupying armies found themselves in charge of these millions.  Millions were returned home, when possible.  POWs and many citizens were returned to their countries, though not all wished to return to the post war regimes in the East.

As post war politics set in, populations continued to move.  Ethnic Germans were booted out of Poland and other neighboring countries.  Many Poles and others fled West as Communist regimes took hold.

Jews liberated from the camps and ghettos were in desperate condition.  They also would not and could not be asked to return to the countries that killed so many.  Hundreds of thousands of Jews were in Germany, but they surely could not live there permanently.

It was a huge, unholy mess.

The occupying British and American armies established camps for these millions of displaced persons (DPs), offering safety and care.  But this could not be a permanent solution.  The DPs had to go somewhere.

“The Last Million” is the story of the struggles to resettle these people, especially the difficulty of resettling the last, hardest cases.  It’s a messy story, entangled with the beginnings of Cold War politics and with perennial American and British xenophobia and racism.

One of the big problems is that not all DPs had the same story.  The Jews were there because they had been rescued from the Holocaust, and had nowhere else to go.  Thousands of Poles and others from Russian-occupied areas did not want to return to the communist states.  In many cases, these DPs had collaborated with the Germans, and rightly feared punishment if they returned to communist countries.  And, indeed, there were war criminals among the easterners, especially the Balts, Ukrainians, and returned ethnic Germans.

The initial policy of segregating by pre-war nationality had the horrible effect of forcing Jews to live in camps dominated by people who persecuted and murdered them during the war.  In any case, the traumatized and shattered Jewish DPs needed special care to even survive.   So separate camps for and run by Jews were established.

But these camps in the middle of a wasted Germany could not be kept for long.  The DPs had to be settled somewhere.


Looking at it from today, it seems remarkable that these problems were resolved in a relatively few years.  These days we leave entire populations in camps for decades, even generations, and routinely deny and evade responsibility for the consequences of our wars.  Hell, we use refugees as political pawns and weapons of war.

But the post war solution wasn’t easy or pretty.

Some DPs were imported to be essentially slave labor in the UK, Canada, Australia, and other places.   Many of the pro-Nazi groups, including war criminals, were allowed into the US, UK, and elsewhere—while Jewish DPs languished in camps.

In America, ugly bigotry rose.  Attempts to help Jewish survivors were blocked in Congress, and pro-Jewish lobbying was attacked as “anti-Christian discrimination”.  Jews were slandered as communist agents, while actual Nazi war criminals skated free as “reliably anti-communist”.  And so on.

Worst of all, the situation in the 40s and 50s set the standard, such as it is, for today’s treatment of refugees.

“It is near impossible to overemphasize the degree to which the IRO and the recruiting nations, in stressing utilitarian and political over humanitarian rationale, paved the path the developed world would follow when confronted by similar refugee crises in the second half of the twentieth and the first quarter of the twenty-first centuries.” (p. 358)

Soon enough, the big power politics evolved into the Cold War.  DPs from the East who harbored dreams of returning to liberate their homes from the hated Russians became soldiers and pawns in the new great game.  The CIA recruited and exploited DPs, often overlooking wartime collaboration or worse.


And then there was Israel….

With hundreds of thousands of Jews stuck in DP camps and no country willing to take them in, pressure grew to emigrate to Palestine.  For Jewish DPs, a new state of their own was the only hope for safety, let alone self-determination or a good life.  For the US, Palestine became a convenient solution to empty the camps and get the DPs off our hands.

But the UK held the mandate in Palestine, and was already in a multi-sided war with the Arabs and Jews there.  They blocked immigration and turned back Jewish migrants, even in defiance of the US.  It was a nasty business all the way around.

Soon enough, though, the (bankrupt) UK walked away, leaving toothless UN supervision, open warfare, and mass migration from European DP camps (financed and organized with a lot of help from US Jews) to the newly declared state of Israel.  The rest is history, and the fires of this conflict are still burning today.  Israel is a safe home for Jews who have come from everywhere.  But the cost has been and continues to be awful.  The pain has never ended, it has just shifted around.


It is very important for people to understand this pivotal time in history, which still reverberates today.

I admit that I had hoped that this period might offer ideas for how to deal with today’s mass migrations.  Of course, it did not.  The US and the world haven’t really changed, even after 70-some years and a lot of water under the bridge.  If anything, the world has become a worse place for refugees and victims of war.

Sigh.


  1. David Nasaw, The Last Million: Europe’s Displaced Persons from World War to Cold War, New York, Penguin Press, 2020.

 

Sunday Book Reviews

Sandia Technique for Simulating New Materials

At the very cutting edge of new technology, the design involves not just the structure and behavior of a component, but the design of the materials themselves [2].  I.e., design right down to the quantum physics.  Phew!

Design is aided by computer simulation, of course, but high fidelity simulation of materials is slow and expensive.  As a consequence, developing a custom alloy or other material can be a significant bottleneck in the process.

This winter researchers at Sandia Labs report a new technique that speeds this process by orders of magnitude: seconds rather than hours, and a development cycle of less than an hour instead of a year [1].  Cool!

The technique uses—wait for it!—machine learning.

Specifically, the ML model learns from high fidelity simulations, and can predict the temporal development of the simulated material.  This prediction is fairly accurate (95%?), and can be fed back into the simulation.

This is a “hybrid” simulation, and they describe the ML as “leaping in time” through the classical simulation.   Yes!  Time travel!
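
Here is a minimal sketch of that “leap in time” pattern, using a toy 1D diffusion model as a stand-in for the phase-field simulation and an off-the-shelf regressor as the surrogate (this is my illustration of the general idea, not the Sandia code):

```python
import numpy as np
from sklearn.linear_model import Ridge

def simulate(state, steps, dt=0.1, dx=1.0, d=0.5):
    """High-fidelity stand-in: explicit finite-difference diffusion steps."""
    s = state.copy()
    for _ in range(steps):
        lap = np.roll(s, 1) + np.roll(s, -1) - 2 * s
        s = s + dt * d * lap / dx**2
    return s

# Training data: pairs of (state now, state LEAP steps later) from the "expensive" model.
rng = np.random.default_rng(0)
LEAP = 50
X = rng.random((200, 64))
Y = np.array([simulate(x, LEAP) for x in X])
surrogate = Ridge(alpha=1e-3).fit(X, Y)

# Hybrid loop: short bursts of trusted physics, then an ML leap forward in time.
state = rng.random(64)
for _ in range(5):
    state = simulate(state, 10)                    # high-fidelity steps
    state = surrogate.predict(state[None, :])[0]   # ~LEAP steps in one shot
```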

I’m beginning to see a pattern here:  the hot technology today is hybrid modelling that combines ML and conventional simulations.   This has advanced to the point of simulating realistic macroscopic scale systems.

I also note that it is common to see that the ML is “only” 95% accurate, i.e., close but not identical to the high fidelity physics model.  But it seems that 95% accuracy at roughly 10,000 times the speed is good enough to really help.

Neat.


  1. David Montes de Oca Zapiain, James A. Stewart, and Rémi Dingreville, Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Computational Materials, 7 (1):3, 2021/01/04 2021. https://doi.org/10.1038/s41524-020-00471-8
  2. Troy Rummler, Advanced materials in a snap, in Sandia Labs News Releases, January 5, 2021. https://share-ng.sandia.gov/news/resources/news_releases/advancing_materials/

Solar Trains!

OK, let’s mash up two things I love, solar power and trains.

There have been electric trains around for a long time.  They work great.  And it is natural to swap out fossil generation for low Carbon electric power, and, voila, seriously low pollution transportation.  Who doesn’t love the idea of a train that runs forever on sunshine?

Now, the first generations of electric trains depended on transmitted power from generating stations, through rails or overhead wires.  This is costly extra overhead, and generally limits both the number of lines and the range.

In recent years, battery technology has been improving rapidly, opening the way for “off grid” trains.

Now yer talkin’ !


This winter, Richard F. Tolmach discusses these new technologies [1].

At the top of the list is flash charging, just like for mobile phones.  Wired (or wireless!) recharging of a train in minutes means no overhead distribution is needed.  And this is DC technology, ideal for local solar.
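
A quick back-of-envelope (my own assumed numbers, not figures from the article) shows why the charger has to be beefy but not impossibly so:

```python
# Charger power needed to top up a battery train during a station stop.
# The 300 kWh pack and 5-minute dwell time are illustrative assumptions.
def charger_power_mw(energy_kwh, stop_minutes):
    return (energy_kwh / 1000.0) / (stop_minutes / 60.0)

print(charger_power_mw(300, 5))  # -> 3.6 (MW), plausible with a local DC storage buffer
```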

Hybrid technology popularized by Toyota was first deployed in railroads, so it is no surprise that diesel hybrid locomotives are a very efficient option.  Hybrids are flexible, and capable of serving remote branches that have no electric infrastructure.

As batteries get better, hybrid technology evolves naturally to pure battery technology.  Battery powered trains are being deployed to cover gaps in overhead lines, and as the range increases the gaps can be much larger, so less investment in costly overhead wires is needed.

What goes around comes around. Advances made by electric car development will move naturally to railroads—and will work even better.  Lightweight batteries and fast charging systems for cars should scale up to trains easily.

The point is, of course, that all this battery based technology can easily be supported by solar and wind generation.  This will lower the cost of operation, and make trains nearly emission free.


Much of Tolmach’s article is about developments in Europe and Asia, with the US out of the picture (except for automobiles). This is largely driven by political policy, as it has been for the last century.  But, as he points out, as off-the-grid solar charging becomes cheap and potentially ubiquitous, existence will create use.

Soon enough, even in the US the question will be, why build expensive overhead wires or use costly diesel locomotives, when solar charged battery powered trains are cheaper?


  1. Richard F. Tolmach, Trains Reinvented with Solar, Wind and Battery Power, in Solar Today. 2020. p. 7-12.

Cryptocurrencies Show Links Between Coding and Markets

Nakamotoan cryptocurrencies emerged from the open source software world, and many cryptocurrencies from Bitcoin on have been open source.  In part, this transparency is intended to ensure that there are no hidden agendas or tricks in the code, and, theoretically, to ensure that anyone can contribute to the project.

Whether these goals are met or not, the open source does mean that the development and developers are visible to the world, which means they can be studied.  (Hint to social scientists out there!)

This winter, researchers at City, University of London report a study of the relationship between code, coders, and the market behavior of cryptocurrencies, using public information from source code repositories [1].

As has been reported before, relatively few programmers contribute most of the code of cryptocurrencies, as is the case for most open source software (and, I’m sure, most software).  I don’t think this is particularly mysterious—doing software is hard work.

The folk wisdom is that a programmer generally averages something like 10 lines of code per day.  Emitting code is not the hard part, figuring out what code needs to be emitted is what takes time and effort.  If this folktale represents an underlying truth, then just how many different projects could any one programmer contribute to?

(I will leave it as an exercise for the reader to consider what this means for the hope that open source code is high quality because it is open to “many eyeballs”.)

Cryptocurrencies are a particularly interesting case because people are using them for financial transactions, to the tune of hundreds of millions of dollars per year.  This means that the behavior of these self-chosen, unpaid, programmers might have considerable economic consequences.  In turn, the economic fortunes of cryptocurrencies may also have consequences for the code, attracting coders and influencing the code.

The latter case is well illustrated by the “governance wars” seen in many cryptocurrency communities, in which economic interests are played out in arguments about alternative versions of the code.  For example, the great “scaling wars” of Bitcoin and Ethereum are not really about engineering, they are about the economic fallout of certain design decisions.

The London researchers are particularly interested in cases where the same individual contributes to the code of more than one cryptocurrency.  When this happens, “cryptocurrencies are not isolated entities but rather form a network of interconnected codes.” ([1], p.1)

What they find is that a) this happens a lot and b) when it happens the economic behavior of the two cryptocurrencies becomes more correlated.  Specifically, the research found that the “asset returns” become more correlated in the months following the first “connection”, i.e., shared contributor between two nominally independent cryptocurrencies.  (Caution: other metrics do not show this pattern.)
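
A rough sketch of the kind of comparison involved (my reconstruction of the general idea, not the authors’ code): take two coins’ daily price series and compare the correlation of their returns in a window before and after the first shared contributor appears.

```python
import pandas as pd

def correlation_before_after(prices_a, prices_b, connection_date, window_days=90):
    """prices_a, prices_b: daily price Series with a DatetimeIndex.
    Returns (corr_before, corr_after) of daily returns around connection_date."""
    returns = pd.DataFrame({
        "a": prices_a.pct_change(),
        "b": prices_b.pct_change(),
    }).dropna()
    before = returns.loc[:connection_date].tail(window_days)
    after = returns.loc[connection_date:].head(window_days)
    return before["a"].corr(before["b"]), after["a"].corr(after["b"])
```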

One thing that is happening here is that programmers who have contributed to the largest coins, Bitcoin, Ethereum, etc., often contribute to newer coins.  And this tends to be followed by the new coin’s returns correlating more closely with the older coin.

It’s not obvious that all of this is tremendously meaningful, at least in practice.  But, as the researchers emphasize, it does mean that “the temporal dynamics of co-coding of cryptocurrencies provides insights on market behaviors that could not be deduced on the basis of the combined knowledge of the code of single currencies and the present state of the market itself” ([1], p.1)  This fact should be considered by “regulators and by professional investors”.

They point out that there is a lot more that could be happening here, whether we can see it or not.  In some cases, individual coders may have financial or other interests that influence the contributed code.  I.e., not all coders are equal, so not every coding event is equivalent.

It is also far from clear just what these coders could be doing that causes such indirect effects.  Are they aligning the features in the code, e.g., making ‘newcoin’ operate similarly to ‘Bitcoin’?  Or is the effect more indirect, e.g., by equalizing the software quality and therefore reputations of the two cryptocurrencies?  Or is the shared development an indicator of something else, such as a community of users who seek to correlate the two coins?

I’ll note that there are also plenty of other interactions besides operations on the code repository.  Design decisions, problem analyses, reviews, and testing may also influence the behavior of the product, and all of these can be substantially influenced by individual and small groups of participants.  (Many of these activities can be followed from archives of discussions, documentation, and public presentations—another hint to social scientists.)

The researchers note that this is an example of a general issue: “[c]ode has become an important societal regulator that challenges traditional institutions, from national laws to financial markets” ([1], p. 4).  This is true across all of society: private code bases have huge ramifications which are difficult to assess.  Many companies and organizations rely on open source code, which they build on.  This means that there is a hidden social network of contributors, inside and outside organizations.

(How is this different from the non-digital infrastructure, especially of finance?  Large companies share infrastructure and personal connections, etc., in ways that are not transparent.)

I think the bottom line is, whether the particular correlation reported is important in itself or not, it behooves analysts to dig deeply into both the code and the developer communities of cryptocurrencies.  Pretending that the technology is somehow neutral is foolish, and assuming that coders are only interested in writing code (for free!) is dumb.


  1. Lorenzo Lucchini, Laura Alessandretti, Bruno Lepri, Angela Gallo, and Andrea Baronchelli, From code to market: Network of developers and correlated returns of cryptocurrencies. Science Advances, 6 (51):eabd2204, 2020. http://advances.sciencemag.org/content/6/51/eabd2204.abstract
  2. John Stevenson, City researchers reveal link between the coding of cryptocurrencies and their market behaviour, in City, University of London – News, December 17, 2020. https://www.city.ac.uk/news/2020/december/city-researchers-reveal-link-between-the-coding-of-cryptocurrencies-and-their-market-behaviour

 

Cryptocurrency Thursday

Very Cool Video From Boston Dynamics

I’m generally not a huge fan of Boston Dynamics robots.  They are, of course, technically awesome.  But they terrify me, and since they seem to be going into the hands of people who very well might use them against me, it’s hard to be happy about it.

But their New Year’s video is not only awesome but actually beautiful [1].  A little scary, maybe, but seriously joyful.

 

Now, I know this is a rigged demo, but it is the best kind of rigged demo–the kind that even your grandmother likes!

(As we used to say in our lab, “any sufficiently rigged demo is indistinguishable from magic.” : – ))

Evan Ackerman reports that the routine was developed in collaboration with human dancers, who modelled concepts which were then adapted to the robots’ capabilities. And then the dancers were asked to think through how the non-humanoid bots can and should dance.

“what we found was that the dancers connected with the way the robots moved, and then shaped that into a story, and it didn’t matter whether there were two legs or four legs.”  (Aaron Saunders of BD quoted in [1])

 

This must have been a really fun project to work on!

One of the things I like most about this video is that the robots are obviously “happy”.  As in most great dance exhibitions, the dancers portray emotion to the audience.  When we watch humans dancing, we presumably are at least partly feeling what the dancers are feeling.

In this case, though, we know that the dancers are not feeling emotions, nor even feeling their bodies like we do (and two of them have very different bodies!), and aren’t even hearing the music they are dancing to.  In short, the machines manage to convey human emotions through extremely non-human motions.  Cool.

I guess that technically, this robot crew and relevant control systems can be considered an artistic medium in this case, used by the human designers to communicate emotions to the audience.

Nice work all.


  1. Evan Ackerman, How Boston Dynamics Taught Its Robots to Dance, in IEEE Spectrum – Robotics, January 7, 2021. https://spectrum.ieee.org/automaton/robotics/humanoids/how-boston-dynamics-taught-its-robots-to-dance

 

Robot Wednesday

Triceratops poop simulation studies

One of the most powerful and underrated principles of science is “uniformity”, the principle that things work the same now as they did in the ancient past, and will continue on into the far future.  Only cranks and creationists doubt that dinosaurs were animals similar to contemporary animals, subject to the same biophysics as today.  This principle is our secret weapon for interpreting the admittedly sparse fossil record.  The past must have been similar to the present.

This month George L. W. Perry reports a study of dinosaurs that infers symbiotic behavior with plants, behavior that is seen throughout contemporary ecosystems [2].

Specifically, Perry explored how dinosaurs might have helped spread seeds.  Fossil evidence suggests that dinosaurs’ diets included seeds, which remained viable in their guts and presumably could grow later from dinosaur droppings.  In contemporary ecosystems, this process is a crucial mechanism by which plants disperse their seeds. Therefore, it is quite plausible that dinosaurs were important dispersers for the plants of their time.

Perry specifically looked at how far seeds might have traveled through such a process.  He simulated the situation using models of how fast dinosaurs walk (with assumptions about where they walk) and estimates of the seeds’ passage.  Dinosaur gaits and digestion were estimated as comparable to contemporary animals of similar size and body type.

The key insight is that, in contemporary animals, larger animals carry seeds farthest.  Also, herbivores tend to carry seeds farther.
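
To make the logic concrete, here is a toy version of the calculation (my simplification, with made-up parameter values rather than Perry’s fitted ones): dispersal distance is roughly net travel speed multiplied by gut retention time, with retention time scaling up with body mass.

```python
# Toy seed-dispersal estimate for a large herbivore. The scaling exponent,
# baseline retention time, and net displacement speed are illustrative
# assumptions, not Perry's fitted values.
def dispersal_distance_km(body_mass_kg,
                          net_speed_kmh=0.3,           # assumed net displacement while foraging
                          retention_h_at_1000kg=60.0,  # assumed gut retention for a 1-tonne animal
                          scaling_exponent=0.25):
    retention_h = retention_h_at_1000kg * (body_mass_kg / 1000.0) ** scaling_exponent
    return net_speed_kmh * retention_h

# A ~9-tonne Triceratops-sized herbivore under these assumptions:
print(round(dispersal_distance_km(9000), 1))  # a few tens of kilometers
```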

From this relationship, he concludes that large herbivorous dinosaurs, such as Triceratops [2], probably carried seeds over large areas.  Assuming that the seeds survived and sprouted, this would be an important contribution to the reproduction of the plants that these dinosaurs ate.

Now, all this is indirect evidence, with little hope of direct confirmation.

However, this is very plausible, given what we know of contemporary ecosystems.  For that matter, it is difficult to think why this would not be true.

Cool.

By the way, this study makes me wonder if we can find evidence of ancient mycorrhizal networks, like those we see underground today.   This would most directly be seen in preserved root networks from forest, marsh, or grassland, which are very rare.  But maybe we can find traces of microscopic fungal symbionts.  That would be neat!


  1. Becky Ferreira, Another Thing a Triceratops Shares With an Elephant, in New York Times. 2021: New York.
  2. George L. W. Perry, How far might plant-eating dinosaurs have moved seeds? Biology Letters, 17 (1):20200689, 2021/01/27 2021. https://doi.org/10.1098/rsbl.2020.0689

Moore on Gravity Storage

As physics teachers often say, “Gravity.  It’s not just a good idea.  It’s the law.”

When it comes to generating energy (i.e., for human use), everything is on the table.  And, if there is one force that truly is free and ubiquitous, it has to be gravity.

Technically speaking, gravity is one of the ultimate sources of solar energy (it drives the sun’s fusion) and of hydro power (falling water), and it is a key factor in wind energy (it shapes atmospheric circulation).

But it is fun to think of ways to directly use gravity as a storage mechanism.  Basically, use energy to move things uphill, and then later drop them and recover the energy (e.g., this, this).  Simple, elegant.  No emissions.  No invisible chemistry or quantum hoodoo.  Just weight.

This month Samuel K. Moore writes that more is coming, “Gravity Energy Storage Will Show Its Potential in 2021” [1].

Moore points out there are stored energy projects already running, mainly in the form of pumped hydro.  I.e., systems that use clean energy to pump water uphill into a reservoir, where it later drains down through an otherwise conventional hydro generator.

But there are even more, different gravity technologies (and that’s such a cool, sci-fi term) nearing commercial use.

There is, of course, the “stack up heavy things with cranes” school.  Moore discusses “the Skyline Starfish”, which has not one but six crane arms.  The idea is simple:  use excess clean energy to make a pile of bricks, and then later unpile them to recover the potential (gravitational) energy.  This can get complicated, but that just makes it more fun.

“To maintain a constant output, one block needs to be accelerating while another is decelerating. That’s why we use six arms,” explains Robert Piconi, the company’s CEO and cofounder.

“What’s more, the control system has to compensate for gusts of wind, the deflection of the crane as it picks up and sets down bricks, the elongation of the cable, pendulum effects, and more, he says.”

Another approach is to reuse abandoned industrial sites, such as mines.  Gravitricity uses huge weights, hundreds of tons, suspended in kilometer-deep mineshafts.  One advantage of this approach is that the huge weight only needs to move centimeters per second to produce megawatts of output.  And, unlike the stacked bricks with cranes, the mineshaft is a fairly stable environment.  No wind, constant temperatures, easy to secure.
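
For scale, a quick back-of-envelope (the mass, depth, and speed here are my assumptions, not Gravitricity’s specifications): stored energy is m·g·h, and output power is m·g·v.

```python
G = 9.81  # m/s^2

def stored_energy_mwh(mass_tonnes, drop_m):
    return mass_tonnes * 1000 * G * drop_m / 3.6e9   # joules -> MWh

def output_power_mw(mass_tonnes, descent_m_per_s):
    return mass_tonnes * 1000 * G * descent_m_per_s / 1e6

print(stored_energy_mwh(1000, 1000))  # ~2.7 MWh for 1,000 tonnes over a 1 km drop
print(output_power_mw(1000, 0.1))     # ~1 MW while descending at 10 cm/s
```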

And, of course, reusing a Scottish coal mine to store green energy wins huge, huge, Karma points!

Moore points out that these storage systems are very similar to pumped hydro, with the benefit of not having the headache of maintaining a reservoir (which leaks) or of dealing with water, which corrodes everything.

These gravity technologies are pretty expensive.  But they are competing with battery storage, which ain’t cheap and requires exotic materials.  So, these technologies may be successful as part of the deployment of solar and wind energy at mass scales.


  1. Samuel K. Moore, Gravity Energy Storage Will Show Its Potential in 2021, in IEEE Spectrum – Energy, January 5, 2021. https://spectrum.ieee.org/energy/batteries-storage/gravity-energy-storage-will-show-its-potential-in-2021

 

