An “AI Apocalypse” Scorecard

Yeah, we’re all talking about ChatGPT and friends this year, and I’m no exception. While some of us have enjoyed the unintended comedy, many pundits are sure that these large language models and their variations (a) are approaching “Artificial General Intelligence” and (b) will soon wipe out all the puny carbon-based units.

This summer Eliza Strickland and Glenn Zorpette assembled the first iteration of an AI Apocalypse Scorecard, documenting the views of 22 actual experts [1].

Glancing at the list, there isn’t a consensus, but there doesn’t seem to be a big panic either. A majority of these pundits don’t see current ML models as close to AGI, and only a handful are concerned about extinction.

I think these results reflect, in large part, disagreements about how the heck to define “Artificial General Intelligence”, assuming that is even a meaningful concept (which it really isn’t, IMO).

It is also clear that even if extinction isn’t imminent, pretty much everyone is concerned about potential harms of many kinds from these AIs. For obvious reasons.

In fact, there seems to be an inverse correlation in this group between worrying about the obvious shortcomings of these models and the fear of extinction: if AGI involves, like, getting right answers, then current AI has a long way to go. A long, long way.

I’m going to bookmark this scorecard, and check back for future updates.


  1. Eliza Strickland and Glenn Zorpette, “The AI Apocalypse: A Scorecard,” IEEE Spectrum – Artificial Intelligence, June 21, 2023. https://spectrum.ieee.org/artificial-general-intelligence