Book Review: “Island People” by Joshua Jelly-Schapiro

Island People by Joshua Jelly-Schapiro

Jelly-Schapiro loves the Caribbean: its people, history, cultures, and certainly the music.  He has managed to line up enough professional gigs to make this his “job”.  A fortunate man, indeed.

Island People is a compendium of writing about some of the islands he has visited over the past few decades.  He augments the personal travel with academic history and cultural studies.

And is there anywhere with cooler history?  Columbus!  Conquistadors! Pirates! Clash of Empires! Slavery!  Liberation!  Revolution!  Independence!  Diaspora! Dictatorship!  Drugs!

Of course, Jelly-Schapiro loves the peoples of the Islands, and the music.  Whatever the politics, poverty, and troubles, the Caribbean is a great cultural mixing bowl, and has created some of the greatest music ever.

There is quite a bit of intellectual history, too.  The Caribbean has fascinated Europeans, immigrants and emigres for centuries.  It has been a fountain of contemporary racial thinking, for good reasons and less good reasons.  The searing legacy of slavery still echoes, though there are many important local variations.  Ditto for imperialism.

These islands are the quintessential multicultural mixing bowl.  The very languages are Creoles, forged from the collision of peoples.  The region has seen revolutions and rebellions and repression and conquests.  The economy has always been both global and extremely unequal.

Jelly-Schapiro tells these stories, along with personal travelogs which bring it back to Earth.  Whatever the myths and history, he shows us what life is like in these could-be-but-aren’t Paradises.

It’s great.

I learned a lot, and loved the read.  I wish I could write this kind of book.

  1. Joshua Jelly-Schapiro, Island People: The Caribbean and the World. New York, Alfred A. Knopf, 2016.


Sunday Book Reviews

Peter Hart and Steve Kelling blog on “The Future of Birding?”

This winter Peter Hart and Steve Kelling blog about “Birding with Technology in the Year 2025” [1].  They take “a road trip through Silicon Valley to explore the future of birding—and see what new Birding Tech might be out within the next five years” [1].


OK, I have problems with this article, on many levels.

At the most basic level, I have a different notion than they do of what “birding” is, or should be.  Maybe that makes me a poor birder, but I don’t think so.  And I’m not the only one.

For me, birding is about paying attention, and about taking time to pay attention. And it’s about being with birds (and everything else).  Be here now.  (Which, I’m pretty sure, is how birds live.)

Life list?  Don’t have one, don’t care.  Identifying species? Mostly they are familiar already, and that’s part of the experience.  Competing with other humans?  Not even remotely interested.

You get the picture.

So, just what do Hart and Kelling foresee for the future?

They note recent technical “advances”, including digital assistants which are increasingly available in your pocket.  No need for a paper book, you can have a guide to birds on your mobile device.  And it can also connect to databases of migration information and sightings.

When you have a hammer, everything looks like a nail.

H&K see this expanding to “By the Year 2025, Birding Could be the Latest Addictive Gaming Craze”.  Gamification has been the flavor of the month for more than a decade now.  It’s one of the coolest “hammers” that Silicon Valley has, so everything looks like a “nail”, including birding.

In this case, the game features include tracking activities, giving digital feedback (“badges”, fer cryin out loud), and competition, e.g., trying to accumulate the most sightings.

A second notion is “By the year 2025, birding could become a social-media movement”.   Obviously, “social” apps are one of the biggest “hammers” of all, and birding is another “nail”.

One thing this could mean is digitally connected teams, sharing information and “birders working together, encouraging and sharing experiences.”  With mobile devices, this can be real time, in the field, communications.

They note, correctly, that eBird “works for birders and scientists alike—a handy digital checklisting tool for the former, a big-data treasury about birds for the latter” [1].  This is the one good idea in the article.

Having these ubiquitous surveillance methods available to all can enable citizen science and professional science.  Bird watchers invented citizen science, long before Turing (heck, long before Babbage/Lovelace).  So, yeah, putting better tools in the hands of birders will make the cooperation easier and more powerful.  That’s a good thing, at least for science.

Inevitably, H&K think that “AR and AI will combine for a more powerful birding experience”, which will also lead to “smart devices could identify birds on their own”.  Two more “hammers”, and apparently “powerful” ones.  Sigh.

Essentially, these technologies offer suggestions and “coaching”, telling you stuff instantly, answering questions you didn’t even ask yet. And ultimately, knowing more about birding than you do.

But wait, there’s more.  How about “Smart Bird Feeders” that “Turn the Backyard Feeder Into a Data-Monitoring Station”?  This is a permutation of the “smart refrigerator” IoT technology, which basically makes your feeding station watch the birds for you.  How about “Robo-Birders” to “Autonomously Survey Birds Across The Entire World”?  Some of these will no doubt be aircraft, bothering birds even where puny humans can’t go.  Sigh.

The theme here is clear:  these surveillance technologies, widely used for social control, marketing, and advertising/propaganda, can be deployed to surveil birds as well as people.  In fact, they can do things humans can’t do, and can potentially amass comprehensive databases about birds.

What does that have to do with birding?  Best case, it is replacing the birding experience, displacing the puny human.  (Worst case, it has nothing at all to do with birds, and everything to do with human egos.)

But the big, big problem with this article is the unquestioned assumption that we not only do, but should carry our digital devices with us into the field, and that we want to use them for bird watching.

For my money, the point of bird watching is to #Turn It Off (TM).  Put away the phone and screens, and pay attention to birds, trees, weather, nature.  Paying attention to your screens, having screens built into your optics, and even thinking about birds as data is antithetical to being there with the birds.

These technical developments may be “powerful”, but they are mostly powerful psychological distractions that degrade the best part of birding.

So, no.  No, no.  Just say, “no”, to this E-birding nonsense.

Put your phone away and pay attention.  And don’t let those dangerous fools in Silicon Valley ruin birding for you.

  1. Peter Hart and Steve Kelling, Birding with Technology in the Year 2025: Our Predictions, in All About Birds, January 9, 2020.


‘Tracking the Sun’ Report from LBNL

The 2019 Tracking the Sun report covers distributed PV systems installed in the US in 2018 [1].  This is a real-deal, actual report based on data recorded for more than a million installations over the last 20 years.

“Distributed” PV is defined as non-utility installations, i.e., homes, businesses, and campuses.  (There is a separate report on Utility-Scale Solar [2].) These systems are trending larger, more efficient, and a small but increasing percentage have battery storage.  As noted elsewhere, the ‘loading’ is steadily increasing, at least for non-residential installations.

The price per Watt of installed capacity fell 5% or more over the year, with considerable variability.  This continues a long-term trend.  However, much of this decrease is offset by the elimination of subsidies and incentives.  The price decreases in this study are slowing, though.

It is good to have these authoritative reports not only public but well documented.  (I am so, so tired of ‘reports’ that have no explanation of the methodology.  A waste of perfectly good photons, IMO.)

This report has its limits.  The report makes clear that the sample is incomplete: Illinois is absent, and Texas and Florida are likely underreported, while California and New York are heavily represented.  These sampling issues may mean that details of the size and cost statistics are a bit off.  But I’m pretty sure that the main trends and overall statistics are pretty well represented.

One funny thing is that the report seems to be totally focused on new installations.  With data going back 20 years, we can be confident that many of the older systems have been shut down or are only partly operational.  So boasting about “1.6 million” systems is definitely misleading!

And obviously, everybody knows that the total cost of operation over the lifetime of the system is at least as important as the installation cost.  This report has nothing much to say about these systems over time.

Another point occurred to me.

Reading this report and the companion “Utility scale” report, I am not sure exactly where “community solar” projects would appear in this data.  Here I’m thinking of small scale local installations owned by the users (who might be both residential and non-residential). These seem to fall in the cracks of the report’s definitions.  The system could be located in a non-residential location, but might serve residences and/or businesses.  The technology is going to be similar to home or business systems, though they may be operated by third parties.  But the ‘third party’ may be a cooperative or local corporation owned by the consumers.

Even the basic “small” versus “large” vs “utility” criteria (up to 100kW, 100-5000kW, over 5000kW) are hard to interpret.  The overall array might be large, but it is shared out in what might be small slices for many owners.  So, it may be purchased and operated in ‘small’ ways, even as the infrastructure becomes ‘large’.
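To make the ambiguity concrete, here is a minimal, hypothetical sketch (the function name and the example numbers beyond the report’s own 100 kW and 5,000 kW cutoffs are mine) of how the report’s size tiers would classify a shared community array versus each subscriber’s slice of it:

```python
# Hypothetical sketch of the report's capacity tiers: up to 100 kW is
# "small", 100-5,000 kW is "large", over 5,000 kW is "utility".
# Function name and the community-array example numbers are illustrative.

def size_category(kw):
    """Classify a PV system by installed capacity in kW."""
    if kw <= 100:
        return "small"
    elif kw <= 5000:
        return "large"
    else:
        return "utility"

# A 2,000 kW shared community array, divided among 400 subscribers...
array_kw = 2000
owners = 400
slice_kw = array_kw / owners  # 5 kW per subscriber

print(size_category(array_kw))  # the shared infrastructure looks "large"
print(size_category(slice_kw))  # each owner's stake looks "small"
```

The same installation lands in different tiers depending on whether you count the array or the ownership slice, which is exactly why community solar falls into the cracks.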

Essentially, community solar mixes the technical and business models in a way that doesn’t fit how the report categorizes things.

I strongly suspect that there aren’t very many such coops at this time, though there is considerable interest in some places.  It seems to me that this survey might want to carefully consider the definitions if community solar grows in the future, as I hope it will.

  1. Galen Barbose and Naïm Darghouth, Tracking the Sun: Pricing and Design Trends for Distributed Photovoltaic Systems in the United States. Lawrence Berkeley National Laboratory, Berkeley, 2019.
  2. Mark Bolinger, Joachim Seel, and Dana Robson, Utility-Scale Solar: Empirical Trends in Project Technology, Cost, Performance, and PPA Pricing in the United States –2019 Edition. Lawrence Berkeley National Laboratory, Berkeley, 2019.


Smart Contracts Are Nowhere Near Best Practices

Nakamotoan cryptocurrencies and blockchains are often described as “innovative”, indeed “disruptive”.  It may or may not be “disruptive”, but much of the technology is not new at all.  Worse, a lot of the technology was whipped together with no reference to decades of relevant research, which means it is naive, often catastrophically so.   (The notion that “smart contracts” can, even in principle, be error free is just silly.)

One of the worst cases of childish ignorance is the so-called “smart contract”.  These are basically computer programs, no more and no less.  The most common forms are designed to execute in a virtual machine which both defines the language and limits the possible actions of the program.

If this sounds familiar, that’s because this approach has been in wide use for decades.  There is a vast literature which informs best practices.  Sadly, “smart contracts” don’t, and mostly can’t, implement these best practices.  (And let’s not get me started on the concept of code that can never be modified or deleted, which I have referred to as the “insane clown school of software engineering”.)

Last spring Florian Daniel and Luca Guida considered blockchains in light of experience from “Service Oriented Architectures”, which have been the flavor of the month for some 20 years now [1].  SOAs involve independent components that interact through logical interfaces which are often defined as, yes, contracts.

The authors show that Nakamotoan “smart contracts” can indeed be seen as SOAs, though the implementations, especially in the case of Bitcoin, are crude and lack key features.  This finding is not surprising, because the whole idea of SOA is to abstract and virtualize away the details of the networks and platforms.  So it is perfectly possible to implement an SOA using a distributed write-once ledger, i.e., a Nakamotoan blockchain.

Daniel and Guida identify some significant areas missing from the Nakamotoan technology.  And by “significant”, I mean, “necessary or else it won’t really work”.  Their list is:

  • Cost awareness
  • Performance
  • Interoperability and standardization
  • Composition
  • Search, discovery and reuse

Everyone knows that Nakamotoan blockchains are inefficient, but it is somewhat ironic that they have no general mechanism “to properly communicate and negotiate” service levels.  There is a lot of effort put into dinking with the perceived incentives in the protocols (e.g., fees and payouts), but little concept of programmatically negotiating or even inquiring about cost and performance.

To date, Nakamotoan “smart contracts” have little interoperability, and therefore it is difficult or impossible to compose a service from components.  These things are hard, and cannot be whacked together by a few guys donating their time—it is necessary to create, or better, use existing, standards.

This lack of standards and interoperability is evident in the proliferation of contracts and libraries and the use of opaque, essentially monolithic codes.  These dapps are, ironically, neither decentralized nor trustless.

The last bullet–search, discovery, reuse–is particularly important, and is very well known to me.  (The work I did at the turn of the century is still relevant to this topic [2].)  If smart contracts are going to be useful, then there has to be a way to discover them, to discover what they do, and to discover how to use them.  These are extremely difficult problems that have nothing at all to do with the specifics of ledgers or mining, and everything to do with having good metadata, which requires deep understanding of the logic of components.  I haven’t seen anything that suggests that Nakamotoans even understand that these problems exist.

I will say that Nakamotoan cryptocurrencies do have potential to make distributed components work better than they do now.  A blockchain is a really good way to distribute directories of services and also to implement micropayments that might make both discovery and use of components more sustainable.  On the latter point, there should be a tiny registration fee to publish a component in the directory, and maybe a usage fee for a module.  Cryptocurrency is a great way to do such payments.
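To illustrate what a fee-backed service directory might look like, here is a minimal, hypothetical sketch: all class, function, and endpoint names are my own illustrations, and no real blockchain or cryptocurrency API is involved.

```python
# Hypothetical sketch of a service directory with a tiny registration
# fee.  All names here are illustrative; a real system would settle
# fees via micropayments on a ledger rather than a Python float.

REGISTRATION_FEE = 0.0001  # tiny fee, e.g. payable in a cryptocurrency

class ServiceDirectory:
    def __init__(self):
        self.entries = []   # published component descriptions
        self.fees = 0.0     # fees collected, sustaining the directory

    def register(self, name, capabilities, endpoint, fee_paid):
        """Publish a component, paying the registration fee."""
        if fee_paid < REGISTRATION_FEE:
            raise ValueError("registration fee not paid")
        self.fees += fee_paid
        self.entries.append({
            "name": name,
            "capabilities": set(capabilities),
            "endpoint": endpoint,
        })

    def discover(self, capability):
        """Find all components declaring a given capability."""
        return [e for e in self.entries if capability in e["capabilities"]]

directory = ServiceDirectory()
directory.register("escrow-v1", ["escrow", "payments"],
                   "contract://0xabc", fee_paid=REGISTRATION_FEE)
print(directory.discover("escrow")[0]["name"])  # -> escrow-v1
```

Even this toy version shows why discovery turns on metadata: the directory can only match what components declare about themselves, which is exactly the hard problem the Nakamotoans have not engaged.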

  1. Florian Daniel and Luca Guida, A Service-Oriented Perspective on Blockchain Smart Contracts. IEEE Internet Computing, 23 (1):46-53, 2019.
  2. Robert E. McGrath, “Semantic Infrastructure for a Ubiquitous Computing Environment.” Ph.D. Dissertation, Computer Science, University of Illinois, Urbana-Champaign, 2005.


Cryptocurrency Thursday

JackHammer: Faster Rowhammer with FPGA

It’s hard to keep up with the computer security world these days.  In recent years, the infamous “oopsie” in contemporary memory hardware, tagged “rowhammer”, has evolved into a growing family of grievous, sphincter tightening, weaknesses.  Yoiks!

This winter researchers at Worcester Polytechnic Institute report yet another variation: FPGA-assisted rowhammering [2]. The use of an FPGA coprocessor makes things oh, so much faster and more efficient.  The authors tag it, “JackHammer”.

It’s the same idea as Rowhammer, fiddling with memory accesses and observing correlated behavior of the hardware to infer the content of what is supposed to be protected memory.  This can be used to, for instance, reveal cryptographic keys that are stored in “secure” memory.

The key to the attack is the use of a field-programmable gate array (FPGA) coprocessor in a hybrid system with a CPU.  There are many ways these can be constructed, and in many architectures the FPGA has direct access to the internal bus, cache, and memory.  This means that the memory operations of the FPGA are invisible to the CPU that shares the same memory.

The details are absurdly arcane, but the basic point is that it is possible for the FPGA to force data to be loaded from memory and detect the bits read, even when the FPGA does not have access to the memory itself.

The whole point of having the FPGA is speed, so it comes as no surprise that the FPGA version of Rowhammer can be much faster, i.e., uncover more bits in less time. There seem to be several possible countermeasures, though most of them are not really available to users or application programmers.

“a user-configurable FPGA on a cloud system needs to be treated with at least as much care and caution as a user-controlled CPU thread, as it can exploit many of the same vulnerabilities.” ([2], p. 15)

  1. Catalin Cimpanu, FPGA cards can be abused for faster and more reliable Rowhammer attacks, in ZDnet. 2020.
  2. Zane Weissman, Thore Tiemann, Daniel Moghimi, Evan Custodio, Thomas Eisenbarth, and Berk Sunar, JackHammer: Efficient Rowhammer on Heterogeneous FPGA-CPU Platforms. arXiv, 2019.

Freelancers (and Coworking) in Popular Culture? [repost]

[This was posted earlier here]

This month, the Freelancers Union* asks “Where are all the freelance characters on TV?” [1]  They point out that even in more or less realistic shows, few people identify themselves as “Freelancers”, even in cases where they work as writers and similar gig workers.  Worse, some of the portrayals are wildly unrepresentative of how real freelancers live.  Is anyone surprised that corporate entertainment media is oblivious if not outright hostile toward the lives of real workers?

(In part, there is a semantic issue here.  Actors and Writers generally are gig workers, but they identify with their profession, not with their contractual arrangement.  A thespian is “an Actor”, not “a Freelancer”.  Many dramatists just don’t think about “freelancer” as an identity for a character.)

The article homes in on the apparent lack of medical insurance even for characters who get hurt or have a baby.  Huh?  If the only thing you find unrealistic about Sex and the City is that the show doesn’t discuss medical insurance….

Eventually, it becomes clear that the FU is actually advocating their own insurance products, which explains that specific emphasis.  And, yeah, it’s important, and yeah, I’m glad the FU is on it.

Anyway, the title does actually raise a good point.  Freelancing and Coworking are important work life experiences for a growing number of people, and something that young people should know about because they may want to or have to be part of the gig economy.  So it would be nice to have realistic role models in popular culture—for better or worse.

Personally, I’m not going to watch anything that spends a lot of time worrying about the challenges of health insurance for gig workers.  But why not have a ‘Cheers’ set in a coworking space?  Why not have more shows about interesting gig workers, and fewer shows about obnoxious billionaires?

It would be particularly valuable for young people to see and to identify with some good examples of gig workers.  People who have to hustle for gigs, are responsible for delivering their contracts, who constantly learn, and who are good members of a coworking community.  People who more or less successfully balance work and family life.  Etc.  You know–real people.

So, how could this come to be?

Well…the FU surely has within its membership more than enough talent to create such popular fiction in every medium.  It would certainly be apt for freelancers of the FU to tell our own story this way….

*Disclosure:  I am a proud member of the FU.

  1. Freelancers Union, Where are all the freelance characters on TV?, in Freelancers Union Blog. 2020.


(For much more on the Future of Work, see the book “What is Coworking?”)


Teenage T. Rexes

If there is anything more terrifying to contemplate for a puny prey animal such as myself than a Tyrannosaurus rex, it must be a teenaged T. rex!

We know that every rex must have been young once, and must have grown and grown to become the 15 ton, 5 meter terror of all things small and juicy.

How did they get so big?  T. rex was a member of an extended family, which included a bunch of related species over millions of years.  How are these related to each other?  Many of these relatives are smaller than the apex critters.  It stands to reason that there would be smaller ancestors, and it certainly is reasonable that there might be smaller cousins living at the same time.  But some of these must also be younger T. rexes, not fully grown.

It’s hard to tell from external appearances.  And paleontology is ever plagued by “splitters” who give each new specimen a new species and genus.  (I’m a long time “lumper”, seeing fewer, but more diverse, species in the fossil record.)

This winter researchers report on a study of fossil bones of one such “pygmy tyrannosaur”, which was originally classified as adults of a new species, Nanotyrannus [2].  The new study examined the bone structure, which shows the age and growth history of the animals. The detailed analysis shows that these animals were probably partly grown, and likely adolescent individuals that would have become full-sized T. rexes.

In addition to estimated ages (13 and 15), the bones show the growth rate over the animals’ lives.  These “growth rings” show variability from year to year, which is consistent with what is seen in contemporary species that face variable resources.  In years of abundant food, the individual grows fast, growing much slower in sparse years.  The paper notes that this observation means that care must be taken when attempting to project “average” growth rates for the whole species.

The study also implies that rexies were mid-sized even at age 15 or so, growing to full size in a late spurt.  Presumably, this late growth would happen only in favorable hunting environments.  This would suggest that locations with many large T. rexes must have had abundant prey, and other places with few or no T. rex remains might have been resource poor—though there might have been smaller rexies there, who did not have as much to eat.

This study definitely calls into question the proposed “pygmy” species.  All of the putative Nanotyrannus remains seem to be immature, so there is no evidence for the stature of the full-grown animals.  On the other hand, these remains could well be immature individuals from the abundant T. rex family.  If so, then there was one type of Tyrannosaur extant, in many sizes, not two co-existing branches.

I was a little surprised to read that earlier studies had reached conclusions about the age and maturity of these specimens without examining the bones, or at least without adequately examining them.  We have centuries of evidence that guessing the age from external appearance is inaccurate, so the research reported here seems like an obvious approach.  In other words, this is the final word in an argument that really should not have been happening at all.

So hooray for the lumpers!

“Together, our results support the synonymization of “Nanotyrannus” into Tyrannosaurus” ([2])

  1. Cara Giaimo, Beware Tyrannosaurus Rex Teenagers and Their Growth Spurts, in New York Times. 2020: New York.
  2. Holly N. Woodward, Katie Tremaine, Scott A. Williams, Lindsay E. Zanno, John R. Horner, and Nathan Myhrvold, Growing up Tyrannosaurus rex: Osteohistology refutes the pygmy “Nanotyrannus” and supports ontogenetic niche partitioning in juvenile Tyrannosaurus. Science Advances, 6 (1):eaax6250, 2020.

A personal blog.
