You can align it to broad, general principles that apply to all of humanity. Yes, of course, there will be exceptions and disagreements over what that looks like, but I would wager the vast majority of humanity would prefer that a hypothetically empowered AI not consider "wiping out humanity" as a potential solution to a problem it encounters.
Are our principles even framed with other beings in mind? Other humans, animals, etc.?
At this point we have a bunch of rules and principles that admit exceptions whenever someone's benefit from violating them outweighs the negatives.
I don't see how we find a unified set of rules that is clear enough and can't be exploited or loopholed around in some context.
If the AI were actually smart and you told it not to harm or kill any human beings, while we ourselves do just that every day, what is this smart AI going to do about that?
It's like parents who don't practice what they preach but still expect their kids to behave.