What bothers me most about the debate is the use of the term "hallucinating" (#hallucineren) in the context of LLMs. It was even voted (digital) word of the year in the Netherlands, but it doesn’t quite capture the essence.
LLMs don’t drink, don’t take drugs, and can’t run a fever. The term is essentially a way to justify what an LLM actually does: lie when it doesn’t know the answer and then, like a cunning con artist, pass off its fiction as fact.
Incidentally, I fall into the same trap here by humanizing AI; the system is not conscious. According to researchers in Glasgow, these models produce bullshit in the philosophical sense of the word, as Harry Frankfurt defined it: they provide information without regard for the truth. Or, put more precisely, they simply generate text with no intention of telling the truth.
In Shakespeare’s language it sounds even sharper: “the models are in an important way indifferent to the truth of their outputs”.
It remains incredibly difficult to avoid anthropomorphism. We seem so eager to attribute consciousness and human-like qualities to AI, while under the hood it is nothing but zeros and ones.
Interesting reading: Michael Townsen Hicks, James Humphries & Joe Slater, “ChatGPT is bullshit”, Ethics and Information Technology (2024).