My favourite story of that involved attempting to use an LLM to figure out whether it was true or my hallucination that the tides were higher in the Canary Islands than in the Caribbean, and why; it spewed several paragraphs of plausible-sounding prose, and finished with “because the Canary Islands are to the west of the equator”.
This phrase is now an in-joke we use as a reply whenever someone quotes LLM output as “facts”.
It was Claude in November 2024, but “west of the equator” is universal enough nonsense to illustrate the fundamental issue; it's just that today the nonsense shows up in much subtler dimensions.
A not-so-subtle example from yesterday: Claude Code claimed to me that assertion Foo was true, right after ingesting logs containing “assertion Foo: false”.
I think it got backgrounded. I'm talking about the first big push, early 90s. I remember lots of handwringing from humanities peeps that boiled down to "but just anyone can write a web page!"
I don't think it changed, I do think people stopped talking about it.
The web remains unreliable. It's very useful, so good web users have developed a variety of strategies to extract and verify reliable information from the unreliable substrate, much as good AI users can use modern LLMs to perform a variety of tasks. But I also see a lot of bad web users and bad AI users who can't reliably distinguish between "I saw well written text saying X" and "X is true".
Yes, the web still isn't reliable; we all know that. But we all also know that it was MUCH more unreliable then. Everyone's just being dishonest to try to make a point on this.