Software Demo of the Year and Something I’ve Always Wanted To Do

I mean, who hasn’t wanted to just ask their software to explain itself?

This winter OpenAI’s ChatGPT has made quite a smash, invading mainstream media as well as techland. 

I’m imagining the conversation, “Hey, let’s slap a chatbot on the front of a big, general natural language model, and let people talk to it.” :-) And it kind of works. Neat!
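In engineering terms, that “chatbot on the front of a language model” framing isn’t far off. Here is a minimal sketch of the wrapper pattern in Python, using the openai package (v0.x era) and its completion API; the model name, prompt format, and parameters are my own illustrative assumptions, and the real ChatGPT pipeline (instruction tuning, human feedback, moderation, and more) is of course far more involved.

```python
# A minimal sketch of the "chatbot in front of a language model" idea.
# NOT how ChatGPT is actually built; it just shows the wrapper pattern.
# Model choice, prompt format, and parameters are illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

history = "The following is a conversation with a helpful assistant.\n"

while True:
    user = input("You: ")
    if user.strip().lower() in {"quit", "exit"}:
        break
    history += f"Human: {user}\nAssistant:"
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=history,
        max_tokens=256,
        temperature=0.7,
        stop=["Human:"],  # stop before the model writes the user's next turn
    )
    reply = response.choices[0].text.strip()
    history += f" {reply}\n"
    print(f"Bot: {reply}")
```

The whole trick is that the growing `history` string is resubmitted on every turn, so the model appears to “remember” the conversation.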

It’s a great demo, for sure.  How significant it is remains to be seen. 


This month IEEE Spectrum decided to go to the source and interview the program itself (!).[1]

I love it!  And the fact that the interview mostly worked pretty well is remarkable.  It’s truly the demo of the year.  It’s obviously better than most chatbots, and way better than a lot of AI-generated text.

Reading the interview, though, I am struck by the shallowness of the answers.  They are (a) very conventional and (b) minimally informative.  For example, the answer to “How do you work?” begins:

“I am a large language model trained by OpenAI. I rely on advanced machine learning algorithms and a huge amount of data to generate responses to the questions and statements that I receive. I am designed to be able to generate human-like text that is relevant to the input that I receive.”

This is pretty much the dictionary definition of what ChatGPT is, phrased in the first person.  The information is correct, yet useless.  And, technically, the answer is not responsive to the actual question (i.e., how it works).

Overall, the answers are well-formed sentences, and close enough to the topic to seem reasonable on first reading.  Which, as many have commented, is not that different from a lot of text generated by undergraduate students or corporate middle managers.

I can see why some consider this a BS generator.  Many teachers will see this text as very similar to the kind of bluff submitted by students when they don’t know the answer and seek to fill the space with conventional knowledge.  And all of us will recognize the kind of true-but-irrelevant fluff we see on slide decks all the time. 

Perhaps this is not so much Artificial Intelligence as Artificial I Want You To Think I’m Intelligent.


Putting on my dusty old psychologist’s hat, I think that a successful “general BS generator” is actually interesting in its own right.  The fact that the text is plausible, and in some cases very plausible, tells us a lot about how we judge “plausibility”.  This program literally doesn’t know what it is talking about.  But a combination of glib style, enough “facts” to fill the space, and no glaring errors makes us accept it as equivalent to text written by an unknown human. (Granted, random humans on the internet may well have no idea what they are talking about, either.)

There seems to me to be an interesting placebo effect going on here.  Regurgitating vacuous common knowledge in decent-sounding text seems to make it both more authoritative and more “human”.

I think the Q&A format is important for plausibility, too.  When the user phrases a question, they have an implicit expectation for what the answer should look like, and often they expect certain answers.  When the program is able to generate text that is close enough to these expectations, the human user will project their expectations on the “reply”, and attribute it with more authority.  Confirmation bias works just as well with bot generated text.

If this hypothesis is correct, I suspect that many ChatGPT responses would be judged far less convincing out of their original conversation or Q&A context.


I do worry a bit, because the text is so plausible that it is really hard to tell it is computer generated.  It is mostly junk, or at best shallow common knowledge, but it is written in proper sentences and is more or less on topic (which would put it in the upper 1% of Reddit posts, I bet).  How can I tell if I am chatting with a pretty good bot or a lousy human?
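For what it’s worth, one heuristic researchers have explored for the “bot or human?” question is to score a passage’s perplexity under a reference language model, on the theory that machine-generated text is unusually predictable. Below is a rough sketch using GPT-2 via the Hugging Face transformers library; the sample text and any decision threshold you might pick are assumptions, and the approach is known to be easy to fool.

```python
# One heuristic for "bot or human?": score the text's perplexity under
# a reference language model; machine-generated text tends to look
# unusually predictable. A sketch only, far from a reliable detector.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # labels=ids makes the model return mean cross-entropy per token
        loss = model(ids, labels=ids).loss
    return float(torch.exp(loss))

print(perplexity("I am a large language model trained by OpenAI."))
# Lower perplexity = more "predictable"; a suspiciously low score is
# one (weak) signal of machine-generated text.
```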

When this kind of chatbot becomes cheap and ubiquitous, it will be able to flood the internet with plausible yet useless text.  Even if the text is a little bit useful, it’s still going to flood every channel.  I don’t see how this is a good thing.  It’s basically a denial-of-service attack.

And, of course, it will be perfectly possible to create bots that spew deliberate misinformation, propaganda, and slander.  This is basically a denial-of-service attack targeted at specific networks of discourse.  Swell. Automated information warfare.

For that matter, I could see a similar DoS attack on specialized subcultures.  Your favorite discussion group could be flooded with plausible but meaningless junk.  Even highly creative discussion groups could be “attacked” by junk text. It is easy to imagine a GPT-Qbot, for instance, which plausibly imitates and even contributes to Q-world chatter, flooding the discussions with bot-generated dot connecting. Would such robot posts help, hinder, or be noticed at all by such a community?


In short, this raises a lot of interesting social-psychological research questions.  Perhaps OpenAI should sponsor and pay attention to some serious research on these topics, rather than emitting splashy demos.

And, as ChatGPT itself says, “it is important for users to use their own critical thinking skills and to verify the information that I provide”.


  1. Edd Gent, “Hello, ChatGPT—Please Explain Yourself!”, IEEE Spectrum – Artificial Intelligence, December 9, 2022. https://spectrum.ieee.org/chatbot-chatgpt-interview
