
OP isn’t talking about systems at large, but specifically about LLMs and the pervasive idea that they will become AGI and go rogue. That context is pretty clear from the thread and their comment.


I understood that from the context, but my question stands: I'm asking why OP thinks sentience is necessary for AI to pose a risk.

