Why do A.I. chatbots tell lies and act weird?

Recently, the behavior of a chatbot that Microsoft added to its Bing search engine caused a worldwide sensation. The bot offered up all sorts of bogus information, and when journalists and other early testers engaged it in lengthy conversations, it began to display creepy behavior. Researchers have struggled to understand the oddity of this new creation, and many scientists have said humans deserve much of the blame. Among them is Terry Sejnowski, a neuroscientist, psychologist, and computer scientist who helped lay the intellectual and technical groundwork for modern artificial intelligence. He believes that odd chatbot behavior can be a reflection of the people using it.

Sejnowski published a research paper on the phenomenon in the scientific journal Neural Computation, writing, “Whatever you are looking for — whatever you desire — they will provide.” According to Sejnowski, chatbots learn bad information from bad sources just like any other student, and the strange behavior they exhibit may be a distorted reflection of the words and intentions of the people using them.

Microsoft’s chatbot does not have a personality; it only offers instant results spit out by an incredibly complex computer algorithm. Microsoft appeared to curtail the strangest behavior when it placed a limit on the length of discussions with the Bing chatbot, as sketched below. Microsoft’s partner, OpenAI, and Google are also exploring ways of controlling the behavior of their bots.
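To make the idea of a conversation cap concrete, here is a minimal sketch of that kind of guardrail, assuming a simple per-session turn limit. The names, the limit, and the stand-in model call are all hypothetical, not Microsoft’s actual implementation.

```python
# A toy illustration of capping conversation length. MAX_TURNS and the
# function names are hypothetical, not Microsoft's actual implementation.
MAX_TURNS = 5  # hypothetical limit on user turns per session

def generate_reply(history: list[str]) -> str:
    """Stand-in for the underlying model call; just echoes for demonstration."""
    return f"(model reply to: {history[-1]})"

def handle_message(history: list[str], user_message: str) -> str:
    """Answer the user, or end the session once the turn cap is reached."""
    user_turns = len(history) // 2  # history alternates user/bot messages
    if user_turns >= MAX_TURNS:
        return "This conversation has reached its limit. Please start a new topic."
    history.append(user_message)
    reply = generate_reply(history)
    history.append(reply)
    return reply

history: list[str] = []
for i in range(7):  # the sixth and seventh messages are refused
    print(handle_message(history, f"question {i + 1}"))
```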

The new chatbots are driven by a technology called a large language model (L.L.M.). These systems learn by analyzing enormous amounts of digital text culled from the internet, which includes volumes of untruthful, biased, and otherwise toxic material. As it analyzes that sea of good and bad information, an L.L.M. learns to guess the next word in a sequence of words. When you chat with a chatbot, it is not just drawing on everything it has learned from the internet; it is also drawing on everything you have said to it and everything it has said back.
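As a rough illustration of that next-word guessing, here is a toy sketch that simply counts which word follows which in a tiny sample text and then generates a continuation one guessed word at a time. Real L.L.M.s use neural networks trained on vast corpora and condition on the entire conversation so far, not on a single word, so everything here is a deliberate simplification.

```python
# A minimal sketch of next-word prediction, the core task an L.L.M. is
# trained on. Real models use neural networks over vast corpora; this toy
# version just counts which word follows which in a tiny sample text.
from collections import Counter, defaultdict

sample_text = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug the dog ate the bone"
)

# Count how often each word follows each other word (a bigram model).
follow_counts: defaultdict[str, Counter] = defaultdict(Counter)
words = sample_text.split()
for current_word, next_word in zip(words, words[1:]):
    follow_counts[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequent follower of `word` in the training text."""
    followers = follow_counts.get(word)
    return followers.most_common(1)[0][0] if followers else "<unknown>"

# Generate a short continuation, one guessed word at a time.
word = "the"
sequence = [word]
for _ in range(5):
    word = predict_next(word)
    sequence.append(word)

print(" ".join(sequence))  # e.g. "the cat sat on the cat"
```

Unlike this sketch, a chatbot’s “context” grows as the conversation does, which is why what you type can steer what it says next.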

Because chatbots learn from so much material and combine it in such complex ways, researchers are not entirely clear on how they produce their final results. Researchers watch what the bots do and learn to place limits on that behavior, often after it happens. Microsoft and OpenAI have decided that the only way to find out what the chatbots will do in the real world is to let them loose.
