More Chat Bot Weirdness From Sensei Janelle Shane

Back in the day, you had to really work to get stupid AI tricks out of GPT and friends.  These days, any doofus can do it.

But Sensei Janelle Shane was doing AI Weirdness before it was cool!   And, as a matter of fact, she does know what she’s doing.

This spring she forayed into the arcane medium of ASCII art [1].  (If you were born in this century, you have no idea how cool this was when we invented it!)

No one is surprised that ChatGPT and friends are really bad at generating ASCII art.  For one thing, they almost certainly weren’t carefully trained with examples.  (It’s actually an interesting question to think about how a computer can perceive ASCII art at all.  A text-based system is going to have to ignore the fact that it is text, and try to see it as an image.)

Anyway, the interesting thing isn’t how clueless the AIs were–though they are very, really, totally clueless.  The interesting thing is that she asked them to rate their own work.  The AIs uniformly gave themselves A’s, with high confidence that these were really, really good.

“It’s not that the ASCII art has nothing to do with what I ask for. There is often an attempt. Followed by a wildly optimistic rating.”

(From [1])

To us puny Carbon-based units, this stuff is complete and utter junk, not even responsive to the requirements.  Definitely an “F”.  As in, “flailing”.

“What’s going on here? The chatbots are flailing. Their ASCII art is terrible, and their ratings are based on the way ratings should sound, not based on any capacity to judge the art quality.”

I think this comment is spot on:  ChatGPT generates text that matches “the way things should read” according to what’s on the Internet. Why would we even want that?

Anyway.


By the way, I had my own “thought in the night”: an experiment you could really do with ChatGPT.

I realized that I, and a lot of other people, have been posting the output of ChatGPT and friends on the Internet.

For example, Sensei Janelle’s blog is full of examples generated by ChatGPT and other models.  She already asked ChatGPT to explain what her blog is about (which a real search engine could look up).  What will happen if we ask that question next year, and it has added this year’s text to the training set?

More generally, here’s the experiment:

Generate a bunch of questions and answers from ChatGPT.  Suppose there were enough text from ChatGPT to be 1%, or 5%, or 10% of the original.

Add this text to the training set.  There are lots of ways to mix the new data in with the old.

Generate a new, second-generation model.  This model will learn patterns in the patterns the first model found.

Now, ask the same questions as before, using the second generation model.

What will happen?

(…and repeat….)
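The loop above can be sketched as a toy simulation.  This is only an illustration of the feedback effect, not anything like training a real LLM: the “model” here is just a word-frequency distribution, and all the names (`train`, `generate`, `run_generations`, the mix fraction) are made up for the example.

```python
import random
from collections import Counter

random.seed(0)

def train(corpus_words):
    # A toy "model": just the word-frequency distribution of its training
    # text.  A real LLM learns far richer patterns, but the feedback loop
    # is the same.
    return Counter(corpus_words)

def generate(model, n_words):
    # Sample words in proportion to their training-set frequency.
    words, weights = zip(*model.items())
    return random.choices(words, weights=weights, k=n_words)

def run_generations(corpus, n_generations=8, mix_fraction=0.10):
    vocab_sizes = []
    for _ in range(n_generations):
        vocab_sizes.append(len(set(corpus)))
        model = train(corpus)
        synthetic = generate(model, int(len(corpus) * mix_fraction))
        # Replace a random slice of "real" text with model output, as if
        # next year's scrape contained this year's chatbot-generated pages.
        keep = random.sample(corpus, len(corpus) - len(synthetic))
        corpus = keep + synthetic
    return vocab_sizes

# Toy corpus: mostly repeated text plus a handful of words that appear once.
corpus = ("the quick brown fox jumps over the lazy dog " * 50).split()
corpus += "each rare word here appears exactly once".split()

sizes = run_generations(corpus)
print(sizes)  # rare words tend to disappear: the vocabulary can only shrink
```

Even in this crude version, each generation can only re-emit words it has already seen, so rare words get diluted and eventually vanish.  Something analogous is one plausible outcome of the real experiment.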


I’ll give Shane the last word here:

“Am I entertained? Okay, yes, fine. But it also goes to show how internet-trained chatbots are using common patterns rather than reality. No wonder they’re lousy at playing search engine.”

She is not amused.

  1. Janelle Shane, ASCII art by chatbot, in AI Weirdness, March 31, 2023. https://www.aiweirdness.com/ascii-art-by-chatbot/
