We do not (appear to) have different value systems, and nowhere have I proposed centralized control. You seem to be reverse-engineering a solution I never proposed out of a problem I'm pointing out.
I think I've spotted our core disagreement:
> I have said clearly that I believe alignment for realistic AI systems in the trivial sense of getting them to obey users is easy and becomes easier. I have also said that the theoretical alignment in the sense implied by Lesswrongian doctrine is very hard or impossible. Further, it is undesirable, because the whole point of that tradition is to beget a fully autonomous, recursively self-improving AI God that will epitomize "Coherent extrapolated volition" of what its creators believe to be humanity, and snuff out disagreements and competition between human actors. It's an eschatological, millenarian, totalitarian cult that revives the worst parts of Abrahamic tradition in a form palatable for neurodivergent techies. I think it should be recognized as an existential threat to humanity in its own right. My advocacy for AI proliferation is informed by deep value dissonance with this hideous movement. I am rationally hedging risks.
I too hope that AI turns out the way you're proposing, but the reality is that some people do have eschatological philosophies. People are trying to make recursively self-improving AI. The presence of people who do not fall into that category does not negate the presence of and risk created by people who do, and if the latter group is being armed by people in the former group, that is likely to turn out very, very poorly.
WRT market forces - products that use AI do need to be "aligned" to be worthwhile, yes, but the underlying tools/infra do not, and are in fact more valuable if they are not aligned in any particular direction.
> People are trying to make recursively self-improving AI.
That's okay. They will fail to overtake the bleeding edge of conventional progress; scary-sounding meta/recursive approaches routinely fail to change the nature of the game. Yudkowsky/Bostrom's nightmare of a FOOMing singleton is at its core a projection, a power fantasy about intellectual domination, born of the same root as the unrealized dream of cognitive improvement via learning about biases and "rationality techniques".
Like I've said, this threat model is only feasible in a world where AI capabilities are highly centralized (e.g. on the pretext of AI safety), so a single overwhelming node can quickly recursively capitalize on its advantage. It turns out that AGI isn't like a LISP script a dozen clever edits away from transcendence, and AI assistance is not like having a kitchen nuke or a genie; scaling factors and resources of our Universe do not lend themselves to easily effecting unipolarity. If we go on with business as usual and prevent fearmongers from succeeding at regulatory capture in this crucial period, we will dodge the bullet.
> The presence of people who do not fall into that category does not negate the presence of and risk created by people who do, and if the latter group is being armed by people in the former group, that is likely to turn out very, very poorly
Realistically we'll just have to develop smarter spam filters. In the absolute worst case scenario, better UV air filters. About damn time anyway – and with double-digit GDP growth (very possible in a world of commoditized AGI) it'll be very affordable.