AI hallucinations can’t be stopped — but these techniques can limit their damage
Developers have tricks to stop artificial intelligence from making things up, but large language models are still struggling to tell the truth, the whole truth and nothing but the truth.
nature.com/articles/d41586-025…

The Great Big Lying Machine can't stop lying. Even after all these billions in investment. Because lying is a feature, not a flaw of AI. Because the killer app of AI is advertising and propaganda.



in reply to Androcat

@androcat
We shouldn't even use terms like “hallucinations” or “thinking” in the context of #LLMs at all.

Those anthropomorphizations (used for marketing AI products) easily give the impression that such systems undergo human-like mental processes – which is definitely not the case. LLMs are based on statistical pattern recognition and probability calculations (see the sketch after the list below).

For those phenomena we already have accurate terms:

processing
text or image generation
error output
fabrication
data anomaly
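
A minimal sketch of what that means in practice, assuming a toy corpus and a bigram window (both invented here for illustration – real LLMs use neural networks trained on vast corpora, but the principle of sampling the next token from learned statistics is the same):

```python
# Toy illustration (assumed corpus, not from the article or this thread):
# a bigram model that generates text purely from co-occurrence statistics.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Statistical pattern recognition: count how often each word follows
# each other word. Wrap around so every word has at least one successor.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:] + corpus[:1]):
    follows[prev][nxt] += 1

def next_word(prev):
    # Probability calculation: sample the next word in proportion to
    # how often it followed `prev` in the corpus.
    words, weights = zip(*follows[prev].items())
    return random.choices(words, weights=weights)[0]

# No model of cats, mats, or the world is involved – only
# probabilities over word sequences.
out = ["the"]
for _ in range(8):
    out.append(next_word(out[-1]))
print(" ".join(out))
```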


in reply to feliz

@feliz

Why do you assume humans aren't doing statistical pattern recognition and probability calculations?

@androcat @gerrymcgovern

in reply to Troed Sångberg

@troed @feliz @androcat
Nobody's assuming that. But what natural intelligence does, and LLMs don't, is form a predictive model of *the world*. That's an advantage if you want to guess right about whether a lion is likely to eat you, and it's a springboard to being able to run the model with imagined actions, i.e. problem-solving intelligence.

LLMs only model words, a reduced map, not the territory. We're good at language, so it can look similar sometimes, but it's an AGI dead end.

in reply to Pete Alex Harris🦡🕸️🌲/∞🪐∫

@troed @feliz @androcat
This is why they say dumb things like “water wouldn't freeze at 2 kelvin, because the freezing point of water is 273 K and 2 K is much lower, so water would still be a gas” (at 2 K water is, of course, frozen solid).

The syntax is fine, the words are words that are statistically likely to follow in that order, but there's literally no *meaning* captured anywhere in the language model.

in reply to Androcat

@androcat Why was an LLM allowed to run for president? And what's worse: why did this nonsense-babbling thing get elected?

While LLMs aren't intelligent, methinks humans are overrated, too.

in reply to Androcat

@androcat
The #miracle of #LLMs is that they produce sequences of words in an order that actually makes sense to humans.

#sensemaking


in reply to Androcat

@androcat @paninid Amazing that science is possible after all. Although many hallucinated theories lie by the wayside.

Politicians also seem to hallucinate policy solutions, and the people seem in agreement. While AI bashing is fine, lying or hallucinating humans seem a more urgent problem to me.


in reply to Gerry McGovern

"computer scientists tend to refer to all such blips as hallucinations"
Naah... we call them bugs.