Humans have hands to pull plugs and throw switches. They're the ones guiding the evolution (for lack of a better word) of the machine, and they're the ones who will select the machine that "cares" what they think.
It is really easy to say something incredibly wild like "Imagine an AI that can replace every employee of a Fortune 500 company." But imagining what that would actually mean requires a bigger leap:
The AI needs to be able to market products, close deals, design and build products, write contracts, review government regulations, lobby Senators to write favorable laws, out-compete the competition, acquire power and resources, and survive the hostile attention of competitors.
If your argument is based on the premise that someone will build that AI, then you need to imagine how hard it is to shut down a Fortune 500 corporation. The same AI that knows how to win billions of dollars in revenue, how to "bribe" Senators in semi-legal ways, and how to crush rival companies is going to be at least as difficult to "shut down" as someone like Elon Musk.
Try to turn it off? It will call up a minority shareholder, and get you slapped with a lawsuit for breach of fiduciary duty. It will convince someone in government that the company is a vital strategic asset.
Once you assume that an AI can run a giant multinational corporation without needing humans, then you have to start treating that AI the way you'd treat any other principal-agent problem with regular humans.
>"Imagine an AI that can replace every employee of a Fortune 500 company."
Where did that come from? What started this thread was "I don't think we'll get to the point where all you have is a CEO and a massive Claude account". Yeah, if we're talking a sci-fi super-AI capable of replacing hundreds of people, it probably has armed androids to guard its physical embodiment. Turning it off in that case would be a little hard for a white-collar worker. But people were discussing somewhat realistic scenarios, not the plot of I, Robot.
>Try to turn it off? It will call up a minority shareholder, and get you slapped with a lawsuit for breach of fiduciary duty. It will convince someone in government that the company is a vital strategic asset.
Why would an AI capable of performing all the tasks of a company except making executive decisions have the legal authority to do something like that? That would be like the CEO being unable to fire an insubordinate employee. It's ludicrous. If the position of CEO is anything other than symbolic, the person it's bestowed upon must have the authority to turn the machines off if they believe the machines are doing more harm than good. That's the point of the position.
I imagine it would be much, much harder. Elon, for example, is one man. He can only do one thing at a time. Sometimes he is tired, hungry, sick, or distracted, or dealing with the myriad other problems humans have. His knowledge and attention are limited. He has employees for this, but the same applies to them.
An agentic swarm can have thousands of instances scanning and emailing and listening and bribing and making deals 24/7. It could know about, and be actively addressing, any precursor to a shutdown attempt the moment it appeared.