
They also tried to heal the damage, with only partial success. Besides, it's science: you need to test your hypotheses empirically. And if you want to draw researchers' attention to the issue, performing a study and sharing your results is probably the best way.


Yeah, I mean, I get that, but surely we have research like this already. "Garbage in, garbage out" is basically the catchphrase of the entire ML field. I guess the contribution here is that "brainrot"-like text is garbage, which, even though it seems obvious, does warrant scientific investigation. But then that's what the paper should focus on, not that "LLMs can get 'brain rot'".

I guess I don't actually have an issue with this research paper existing, but I do have an issue with its clickbaity title, which gets it a bunch of attention even though the actual research is really not that interesting.


I don’t understand: is this just about training an LLM on bad data and ending up with a bad LLM?

Just use a different model?

Don't train it with bad data, and just start a new session if your RAG muffins went off the rails?

What am I missing here?


The idea of brain rot is that if you take a good brain and feed it bad data, it becomes bad. Obviously, if you give a baby (a blank brain) bad data, it will turn out bad. This is about the rot, though: the degradation of a brain that was already good.


Do you know the concept of brain rot? The gist here is that if you train on bad data (if you fuel your brain with bad information), it becomes bad.


I don’t understand why this is news or relevant information in October 2025 as opposed to October 2022.



