AI hallucinations can’t be stopped — but these techniques can limit their damage
Developers have tricks to stop artificial intelligence from making things up, but large language models are still struggling to tell the truth, the whole truth and nothing but the truth.
nature.com/articles/d41586-025…
The Great Big Lying Machine can't stop lying. Even after all these billions in investment. Because lying is a feature, not a flaw of AI. Because the killer app of AI is advertising and propaganda.
Androcat
in reply to Gerry McGovern · Sensitive content
Just in case the audience didn't pick up on the sarcasm:
The article is itself a lie.
There is no such thing as a non-hallucination LLM output.
Thinking that hallucinations are some sort of waste product or accident presupposes that LLMs ever know what they are doing.
They do not, cannot. They do not have any understanding, do not have a mind, cannot reason, do not know anything.
All their answers are just stochastic hallucinations. Every last one of them.
feliz
in reply to Androcat
@androcat
We shouldn't even use terms like “hallucinations” or “thinking” in the context of #LLMs at all.
Those anthropomorphizations (used for marketing AI products) easily give the impression that such systems undergo human-like mental processes – which is definitely not the case. LLMs are based on statistical pattern recognition and probability calculations; a toy sketch of what that means follows the list below.
For what these systems actually do, we already have accurate terms:
processing
text or image generation
error output
fabrication
data anomaly
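A minimal, purely illustrative sketch of the "probability calculations" point (toy context, invented numbers, not any real model): the system only samples a statistically likely continuation of the text, and whether that continuation is true never enters the computation.

```python
# Toy sketch only: invented numbers, not any real model.
# An LLM picks the next token by sampling from a probability
# distribution over continuations of the text so far; nothing in
# this step checks whether the chosen continuation is true.
import random

# Hand-made continuation probabilities for a single context.
toy_probs = {
    "The freezing point of water is": {
        "273 K": 0.6,
        "0 °C": 0.3,
        "100 °C": 0.1,   # unlikely, but never impossible
    }
}

def sample_next_token(context: str) -> str:
    dist = toy_probs[context]
    tokens = list(dist)
    weights = [dist[t] for t in tokens]
    # Sample proportionally to the weights: pure probability, no facts.
    return random.choices(tokens, weights=weights, k=1)[0]

print(sample_next_token("The freezing point of water is"))
```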
Androcat
in reply to feliz • • •Sensitive content
@feliz
In the case of hallucinations, a better word might be Uninterpretable Stochastic Gibberish.
As opposed to "normal output" which is Interpretable Stochastic Gibberish.
In either case, whether it counts as interpretable depends only on whether the user mistakenly thinks it makes sense.
Troed Sångberg
in reply to feliz
@feliz
Why do you assume humans aren't doing statistical pattern recognition and probability calculations?
@androcat @gerrymcgovern
Pete Alex Harris🦡🕸️🌲/∞🪐∫
in reply to Troed Sångberg
@troed @feliz @androcat
Nobody's assuming that. But what natural intelligence does that LLMs don't is form a predictive model of *the world*. That's an advantage if you want to guess right about whether a lion is likely to eat you, and it's a springboard to being able to run the model with imagined actions, i.e. problem-solving intelligence.
LLMs only model words, a reduced map, not the territory. We're good at language so it can look similar sometimes, but it's an AGI dead end.
Pete Alex Harris🦡🕸️🌲/∞🪐∫
in reply to Pete Alex Harris🦡🕸️🌲/∞🪐∫
@troed @feliz @androcat
This is why they say dumb things like "water wouldn't freeze at 2 kelvin, because the freezing point of water is 273 K and 2 K is much lower, so water would still be a gas."
The syntax is fine, and the words are ones that are statistically likely to follow in that order, but there's literally no *meaning* captured anywhere in the language model.
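For contrast, a toy sketch of the kind of relation a model of the territory would have to encode (hypothetical; assumes roughly atmospheric pressure and ignores sublimation): far below the freezing point, water is solid, not gas. That relation is exactly the meaning a model of word order never captures.

```python
# Toy world model (hypothetical; assumes ~1 atm, ignores sublimation).
# The meaning lives in the relation between temperature and phase,
# not in which words tend to follow which.
FREEZING_K = 273.15
BOILING_K = 373.15

def water_phase(temp_kelvin: float) -> str:
    if temp_kelvin < FREEZING_K:
        return "solid"   # 2 K is far below freezing: ice, not gas
    if temp_kelvin < BOILING_K:
        return "liquid"
    return "gas"

print(water_phase(2.0))  # -> "solid"
```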
Bernd Paysan R.I.P Natenom 🕯️
in reply to Androcat
@androcat Why was an LLM allowed to run for president? And what's worse: why did this nonsense-babbling thing get elected?
While LLMs aren't intelligent, methinks humans are overrated, too.
Coach Pāṇini ®
in reply to Androcat
@androcat
The #miracle of #LLMs is that they produce sequences of words in an order that actually makes sense to humans.
#sensemaking
Androcat
in reply to Coach Pāṇini ® · Sensitive content
@paninid Humans can make sense of things that were intended not to make sense (cf. "Colorless green ideas sleep furiously").
It's a feature of the human mind.
LLMs are text in, garbage out. No miracles.
Once upon a time
in reply to Androcat
@androcat @paninid Amazing that science is possible after all, although many hallucinated theories lie by the wayside.
Politicians also seem to hallucinate policy solutions, and the people seem to agree. While AI bashing is fine, lying or hallucinating humans seem a more urgent problem to me.
Androcat
in reply to Once upon a time · Sensitive content
@knitter
The lying humans are promoting LLMs.
It's a two-fer.
Giving people a more correct understanding of what LLMs do is an important antidote to the lying humans who seek to destroy the world to push this bullshit technology.
@paninid @gerrymcgovern
Peter Sørensen
in reply to Gerry McGovern
Naah... we call them bugs.