Cummings on AI Risks

With all the yick-yack about the coming AI apocalypse, it has been easy to miss the much more down-to-earth disaster that is “self-driving” cars.

No matter how much we scream, the tech bros are deploying AI into lethal ground vehicles.  We know they are lethal because there are already deaths.

This summer, Professor Mary L. “Missy” Cummings, just off a gig advising the US National Highway Traffic Safety Administration, discussed the risks of self-driving cars. [1]

The basic point is:  “the AI [systems] that run vehicles… are based on the same principles as ChatGPT and other large language models (LLMs).”

Ruh-row!

This does not give us a feeling of confidence and comfort.

Cummings lists five serious concerns:

  1. Human errors in operation get replaced by human errors in coding
  2. AI failure modes are hard to predict
  3. Probabilistic estimates do not approximate judgment under uncertainty
  4. Maintaining AI is just as important as creating AI
  5. AI has system-level implications that can’t be ignored

As far as I’m concerned, these are all show-stoppers.  Can we stop the entire show 5 times over, please?

The first one is basically all I need to know.  Anyone who claims that ‘humans are error-prone, so substituting software will be safer’ is just plain ridiculous.  Software is buggy.  Substituting software bugs for human error (if that is even what is happening) is hardly a guaranteed formula for a safer system.  As Cummings points out, AI doesn’t eliminate human error; it moves the human error from the driver to the coder, the distant, invisible coder.

In a sense, all this ChatGPT nonsense has been a great public service.  The well-publicized howlers, termed “hallucinations”, produced by these gizmos give us some idea of what might go wrong with an AI autodriver.  We can laugh at the preposterous nonsense, but the same fluent, pompous overconfidence in an autodriver is lethally dangerous.

These errors are funny, and they are inexplicable.  Why did it get the answer it got?  It’s really hard to know.  That, in a nutshell, is the second concern: the failure modes are hard to predict.

The third point raises the question of whether these probabilistic models are even doing the right thing at all.  Is the skill of safe driving accomplished by probabilistic guessing from context?  Or is some other cognitive process a better model?

As Cummings notes, neural networks “struggle to perform even basic operations when the world does not match their training data”. 

“What these systems lack is judgment in the face of uncertainty, a key precursor to real knowledge.”

(from [1])
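To see what that lack of judgment looks like, here is a toy sketch of my own (not from Cummings’ article; the data and labels are invented): a simple probabilistic classifier, trained on tidy data, will report near-total confidence on an input unlike anything it has ever seen.

```python
# Toy illustration (mine, not Cummings'): probabilistic confidence
# is not judgment.  All data and labels here are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Training data: two tidy, well-separated situations,
# say "clear lane" (0) vs. "obstacle ahead" (1).
clear = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(200, 2))
obstacle = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(200, 2))
X = np.vstack([clear, obstacle])
y = np.array([0] * 200 + [1] * 200)

model = LogisticRegression().fit(X, y)

# Now an input far outside anything in the training data.
weird = np.array([[50.0, -40.0]])
print(model.predict_proba(weird))
# Prints probabilities like [[0.00, 1.00]]: near-total certainty
# about a situation the model has never seen.  The number comes from
# blindly extrapolating the decision boundary, not from any judgment
# about an unfamiliar world.
```

The model cannot say “I don’t know”; the arithmetic always produces a confident-looking probability.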

The fourth point is interesting.  While all software needs to be maintained, machine-learning-based systems also need to keep learning.  Which most of them can’t, really.  Even if you have a machine-learning autopilot that works pretty well, the world is going to change.  The ML will need to learn new stuff all the time.
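Here is a little sketch of my own (numbers invented) of what that quiet decay looks like: freeze a model that does fine on the world it was trained in, then let the world drift.

```python
# Toy sketch (assumptions and numbers mine): a frozen model degrades
# as the world drifts away from its training distribution.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_world(shift, n=500):
    # Two classes of situations; `shift` stands in for real-world
    # change: new signage, new vehicle shapes, new weather.
    a = rng.normal(loc=[0.0 + shift, 0.0], scale=1.0, size=(n, 2))
    b = rng.normal(loc=[2.0 + shift, 2.0], scale=1.0, size=(n, 2))
    return np.vstack([a, b]), np.array([0] * n + [1] * n)

# Train once, on the world as it was at deployment.
X_train, y_train = make_world(shift=0.0)
model = LogisticRegression().fit(X_train, y_train)

# The model never learns again, but the world keeps moving.
for shift in [0.0, 1.0, 2.0, 4.0]:
    X_now, y_now = make_world(shift=shift)
    print(f"drift={shift:.1f}  accuracy={model.score(X_now, y_now):.2f}")
# Accuracy slides from about 0.9 toward a coin flip as the drift
# grows.  No crash, no error message, just quiet rot.
```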

I’ll note that the “instability” of large language models does not bode well for keeping automated vehicles on the road.  If every update might make things dramatically worse, that’s kind of a problem.

The fifth point widens the scope of our worries.  One self-driving vehicle is a small risk to a few people.  Hundreds become a risk for a whole city.  Cummings recounts an incident when wireless connectivity dropped out, causing 20 self-driving cars to stop moving.  This was a safe response, but it caused a large traffic jam.  No one died (at least not directly or immediately), but the city stopped.
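A bit of back-of-the-envelope arithmetic (my numbers, purely made up) shows why the shared wireless link is the scary part: it correlates failures that would otherwise be independent.

```python
# Toy arithmetic (numbers invented, not from the article): a shared
# dependency turns many small independent risks into one big one.
p_car = 0.01    # chance one car halts on its own in a given hour (made up)
p_link = 0.01   # chance the shared wireless link drops in that hour (made up)
n_cars = 20

# All 20 cars halting at once through independent failures: never happens.
print(f"independent:  {p_car ** n_cars:.1e}")   # 1.0e-40

# All 20 halting at once because the one link they all share goes down:
# exactly as often as the link fails.
print(f"common cause: {p_link:.1e}")            # 1.0e-02, every ~100 hours
```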

And, by the way, it would be relatively simple for hackers or terrorists to knock out wireless for a short period.  (And let’s not even think about hackers or governments suborning the AI.)

From her perspective of trying to create reasonable public safety measures for these menaces, Cummings isn’t optimistic.  Not only is there no meaningful action being taken, but no one understands the problem well enough to even sketch what should be done.

“The lack of technical comprehension across industry and government is appalling.”

(from [1])

For me, this is show-stopper numero six.  It’s bad enough that this lethally dangerous technology that we don’t understand is being deployed; worse, there doesn’t seem to be any adult supervision, and little prospect of any.

The AI apocalypse won’t be superintelligent machines exterminating us.  It will be buggy machines that crash everything.

Sigh.


  1. Mary L. “Missy” Cummings, “What Self-Driving Cars Tell Us About AI Risks,” IEEE Spectrum – Transportation, July 30, 2023. https://spectrum.ieee.org/self-driving-cars-2662494269
