
This snarky site may make you feel smart, but really there's no source you can cite and trust blindly, and AI isn't much worse than the alternatives. Even peer review isn't the guarantee you think it is. AI is often right as well, and we should keep that in mind.


AI is never right.

It’s also never wrong.

LLMs bullshit us, in the truest sense: there’s no distinction between right and wrong, no investment in being correct, no remorse or embarrassment whatsoever when wrong.

They don’t really deserve to be called “right” when they spit out words that happen to be right, and they aren’t “wrong” when they spit out words that happen to be wrong. They don’t care, so we shouldn’t project these higher notions onto them.

It’s worthless empty air either way. Prose with the value bargained down to the average of all prose.


First of all, you can only verify the information’s correctness if you already know quite a lot about the topic. Did you know that Sweden lost the battle of Poltava because syphilis was affecting Charles XII’s brain? If you don’t believe me, I’m pretty sure I can gaslight some model or another into agreeing with me. You cannot do that with a peer-reviewed journal, and even less so with a respected book on the subject.

LLMs, or even internet forums, are more useful the more you already know about the subject. You can use them for sparring, testing theories, and just for fun, but you should not use them to learn a subject. For that you need a book and some practice; maybe a lecture or two won’t hurt. Of course there is nuance to this, but in general they just are not trustworthy and most likely never will be.



