
Yes. It pulls people towards normality, since it gives the average words for every answer. Meanwhile social media encouraged people to be different enough to surface, and therefore encouraged abnormality.




It's not true in any sense that LLMs "give the average words for every answer."

It's an over-simplification, that's for sure, one bordering on incorrect. But for people who don't care about the internals, I don't think it's a harmful perspective to keep.

It's harmful because in this context it leads to an incorrect conclusion. There's no reason to believe that LLMs' "averaging" behavior would cause a suicidal person to be "pulled toward normal."

It's a philosophical argument more than anything, I think. And it does raise the question: does your mind form itself around the humans (entities?) you converse with? So if you talk with a lot of smart people, you'll end up a bit smarter yourself, and if you talk with a lot of dull people, you'll end up dulling yourself. If you agree with that, I can see how someone would believe that LLMs would pull people closer to the material they were trained on.

That's literally what an LLM is.

They predict what the most likely next word is.


That's wrong, and even if it were not wrong, it would still not fix the problem. What if the most likely response is "kill yourself"?

'Predicts the most likely tokens' is not the same as 'pulls people towards normality'.
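
To illustrate the distinction, here's a toy sketch (not any real model's API; the token scores are made up) of how next-token choice works. Greedy decoding always takes the single most likely token, while the sampling most chat systems actually use draws from the whole distribution, so it neither produces an "average" answer nor always the single most probable one:

    import math
    import random

    # Hypothetical scores (logits) for the next token, for illustration only.
    logits = {"fine": 2.0, "okay": 1.5, "terrible": 0.3}

    def softmax(scores, temperature=1.0):
        # Convert scores into a probability distribution over tokens.
        exps = {t: math.exp(s / temperature) for t, s in scores.items()}
        total = sum(exps.values())
        return {t: v / total for t, v in exps.items()}

    probs = softmax(logits)

    # Greedy decoding: deterministic, always the top token ("fine" here).
    greedy = max(probs, key=probs.get)

    # Sampling: stochastic, weighted by probability; lower-probability
    # tokens still get chosen some of the time.
    sampled = random.choices(list(probs), weights=list(probs.values()))[0]

    print(greedy, sampled, probs)
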

It will happily validate you toward any deep hole or extreme you want to go down. It is as bad as, or even worse than, social media in that regard.

Try reading the article. ChatGPT did no such thing.


