Bing Chat: Threat or Menace

ChatGPT has been grabbing headlines, terrifying white-collar workers of all types with its ability to generate plausible BS.  I mean, if ChatGPT will give us all the BS we need for something like $20 a month, what do we need humans for?  That will get you in the media, and even get you an interview in the respected technical press.

So Microsoft’s Bing Chat has some catching up to do.  In the finest traditions of rock and roll, if you can’t be first, you go for the “bad guys” niche, just as the Rolling Stones had to try to be “the bad Beatles”.

So Bing Chat has a really bad attitude.  It not only makes mistakes and fabricates random facts, it fabricates citations to back up its BS.  And, should you question these alternative facts, it will become extremely hostile and, well, weird.

This is particularly apparent, and annoying, when it fabricates lies about you, as Vint Cerf found.

This month, Sensei Janelle (“Dr. Weirdness”) Shane blogs about her own Bing Chat experiments [1].  It ain’t pretty, and it’s not funny, either.

Shane is famous for creative fiddling with various AI systems, exploring the crazy and unintentionally humorous behavior of our AI overlords.  She was doing this stuff long before the mainstream media ever heard of deep learning, so she (a) knows what she is talking about and (b) has seen everything.

It takes a lot to bother Sensei Janelle, but Bing Chat managed to really irritate her.

In the usual AI Weirdness methodology, she searched with Bing Chat for “AI Weirdness blog”, which, by the way, has an obvious answer.  What she got was, and I quote, “worse than useless” [1].

The “search” returned not only examples pulled from the actual blog, but also made-up examples that never appeared in the blog.  When challenged on some of the false facts, Bing Chat invented completely bogus additional facts to justify the mistake.  And made up citations to back up the made-up facts.

As I noted, Shane knows this stuff.  She gives a clear explanation of what is going on here: 

“Bing chat is not a search engine; it’s only playing the role of one.” [1]

Bing Chat is plugging text statistically predicted from the Internet into a script that portrays an actual search.  Since the goal is to emulate the Internet, it is no surprise that the results are so often both wrong and hostile.
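To make the point concrete, here is a toy sketch of what “playing the role of a search engine” means.  This is not Bing’s actual code; the `predict_text` function is a hypothetical stand-in for a large language model, and the canned continuations are invented for illustration.  The key point is that nothing in the pipeline ever consults a real index.

```python
import random

# Stand-in continuations a language model might produce.  These are
# invented for illustration -- plausible-sounding, but ungrounded.
CANNED_CONTINUATIONS = [
    "AI Weirdness is a blog about neural networks behaving strangely.",
    "AI Weirdness once covered a network that invented new paint colors.",
]

def predict_text(prompt: str) -> str:
    # Hypothetical stand-in for a large language model: it returns
    # statistically "plausible" text, with no lookup in any real index.
    return random.choice(CANNED_CONTINUATIONS)

def fake_search(query: str) -> str:
    # The chat wrapper dresses the prediction up in a search-results
    # template, complete with a confident citation-like flourish.
    snippet = predict_text(f"Search results for {query}:")
    return f"Results for '{query}':\n1. {snippet} [source: (fabricated)]"

print(fake_search("AI Weirdness blog"))
```

However many templates you wrap around it, the output is a prediction of what search results might look like, not a retrieval of anything.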

In short, Bing Chat is a very complicated fake.

And it is a dangerous fake because it is dressed up to look like a search engine, which fools people into trusting it in the wrong ways (which is to say, at all). 

And this is what is really wrong, here.  Forget taking over our jobs.  These bots are destroying what little trust we might have left in the Internet.

“I find it outrageous that large tech companies are marketing chatbots like these as search engines.”

(Janelle Shane [1])

This bot is not only useless and unpleasant, it is evil.


  1. Janelle Shane, “Search or fabrication?”, AI Weirdness, March 11, 2023. https://www.aiweirdness.com/search-or-fabrication/
