
While I agree entirely with what Grok teaches us about alignment, I think the claim that "alignment was never a technical problem" is false. Everything I have ever read about AI safety and alignment has started by pointing out the fundamental problem of deciding which values to align to, because humanity doesn't have a consistent set of values. Nonetheless, there is still a technical challenge: whatever values we choose, we need a way to get models to actually follow them. We need both. The engineers are solving the technical problem; they need others to solve the social problem.


> they need others to solve the social problem.

You assume it is a solvable problem. Chances are that bots will end up following laws (as opposed to moral principles), and each jurisdiction will essentially have a different alignment. In a socially conservative country, for example, a bot will tell you that not being hetero is wrong and report you to the police if you ask too many questions about it, while in a queer-friendly country a bot would not behave like this. A bit like how some movies can only be watched in certain countries.

I highly doubt alignment as a concept works beyond making bots follow the laws of a given country. And at the end of the day, the laws that get enforced are essentially the embodiment of that jurisdiction's morality.

People are living in a fictional world if they believe countries won't force LLM companies to embed the country's morality into their LLMs, whatever that morality is. This is essentially what has happened with intellectual property and media, and LLMs likely won't be different.



