> I characterized coding agents as camels in RotJD, and that metaphor still has legs. Four of them. But I find it difficult, as a writer, to convey the siren-song allure of coding agents when I'm describing them as a bunch of camels gurgling at me. So they're… baby camels? Regardless, they’re like toddlers, but with proper supervision and food preparation, they're absolute brutes at writing code. I can't bring myself to leave when my brutes are hungry. I've tried, it's a no-go. I have to run a practiced escape plan every night to get my computer closed by 2am. First I get them all spinning at once. Then I leap up, run out of the room, slam the door, jam my fingers in my ears, and sprint away shouting lalalala. It is perhaps not the most refined of plans, and doesn't have a great success rate. My babies need me.
Amdahl's law in action: the economic gain from coding agents is ultimately bottlenecked by the slowest serial component in the system. In this case... Steve Yegge himself.
Which is why the goal is to replace Yegge entirely. Even the perfect coding assistant which makes Yegge 100x more productive is still 100x worse than full replacement and running 10,000 virtual Yegges in parallel. Why settle for 1%?
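For reference, Amdahl's law makes the ceiling concrete. A minimal sketch (the function name and the 90% figure are illustrative assumptions, not numbers from the thread):

```python
def amdahl_speedup(parallel_fraction: float, n_workers: float) -> float:
    """Overall speedup when only `parallel_fraction` of the work can be
    parallelized across `n_workers` workers; the rest stays serial."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_workers)

# If agents can take over 90% of the work, the 10% that still needs the
# human caps the total speedup at 10x, no matter how many agents run:
print(amdahl_speedup(0.9, 6))       # 4.0 with six agents
print(amdahl_speedup(0.9, 10_000))  # ~9.99, approaching the 10x ceiling
```

The asymptote is 1 / serial_fraction, which is why "remove the serial human entirely" is the only way past it.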
"and then I send it on its birdy way, out into the wide world of my unprotected hard disk, network, and bank account."
eek
You do that.
Would this guy have said these things if he had not been sending other AIs to write what he thought, this whole time? I find I'm studying this screed as if turning over a rock. Fascinated, concerned, dismayed.
The real punchline would be that the guy's died but left some machines to continue to post in his absence. If one hand claps in the forest but there's nobody to hear, is he still shit in the woods?
Yeah, I bet you never go back. Intermittent reinforcement is well documented to produce that effect. I would say "wait till someone notices the possibilities in what a compulsion loop you've set yourself into here," but of course who has and hasn't noticed that is mostly pretty easy to tell.
Oh my god I thought I'd be angry at him like I usually am when he writes another one of his sell-out pieces but I'm not even a third of the way through and my response is between pity and revulsion at what he's describing.
He made his life a kind of hell I wouldn't wish on my worst enemy: he's now fully a slave to the machine. What started as a sales pitch turned rapidly into a disturbingly transparent exercise in self-loathing.
He is sure AIs are the solution. That is the theme of all his writing. I'm sure he gets his paycheck for saying so.
The problem then, as he makes clear, is the collapse of everything else. His own self-worth, for example, he finds gone at the moment it's most important to him. In the post he just sounds a little too honest, which is how engineers usually sound when they're burning out -- no mystery considering the draining relationship he describes having to his work.
This is the same contradiction he cannot address elsewhere: is his talent 1000% leveraged because he is an uber-god programmer? Is he addicted to a powerful and expensive drug? Is his involvement in his work so inconsequential that he could just be playing video games? Is AI shepherding the most valuable work in society because it has unlimited potential? Or are these people just leaderboarding to slurp up a limited pool of value while dunking on the people working to expand the boundaries of what is possible in the human world?
"I'm sorry, Iñigo, I didn't mean to vibe it so hard."
But seriously: as an attorney, I find that this writing perfectly reflects what I imagine it must be like to live inside the head of someone who agentic-vibe-codes for a living. It's all over the map; pulled (and pulling the reader) in a dozen different directions; non-standard; mixing metaphors and idioms; likely 20x longer than it ought to be; and almost able to figure out its own point in the pastiche as it flails around.
I imagine that the coding results of the process described would be similar. Would that be an accurate prediction?
The author has been blogging about software development for I think 20 years or so, and has long had a somewhat freewheeling style. So I don't think this is attributable entirely to the subject matter.
I wonder if his writing got weaker this last decade, or if the style just got passé and isn't appreciated anymore? I remember LOVING his essays, and with these last ones I kinda feel nothing. Maybe it's me; I'm not feeling that much anymore.
Also, after explaining how society is moving at an extraordinary pace, you write a book and don't even have a link yet? I feel exactly like him though (old), just trying to join the fun.
>Once you have tried coding agents, and figured out how to be effective with them, you will never want to go back
Bullshit. I absolutely want to go back. I am so exhausted of having to code review bullshit, of executives who think AI is magic based on bullshit, of junior devs who think they're incredible because of bullshit proofread by hours of senior eng work - the same juniors who will never grow into seniors due to overreliance on bullshit.
This absolutely is not what he is talking about. The feedback loop and oversight model is very different from code reviewing code that other people made with AI.
> I can't bring myself to leave when my brutes are hungry. I've tried, it's a no-go. I have to run a practiced escape plan every night to get my computer closed by 2am. First I get them all spinning at once. Then I leap up, run out of the room, slam the door, jam my fingers in my ears, and sprint away shouting lalalala.
> It's potent stuff. If you do attempt running six agents in separate workstreams, bring potable water and a couple of empty jugs.
Technology has a concentrating effect. When tech makes a thing cheaper, easier, and more accessible, more of it happens. Mostly this is good. But a bunch of other newly possible stuff happens too.
But with almost everything there's a point of concentration beyond which it switches over to being harmful. That point is different for everyone and even could be different for the same person on a different day. It's up to each of us to enforce our own boundaries.
Not everything can be enforced by individuals refusing to play. Sometimes a thing needs to be enforced at a societal level. Consider why there are speed limits on roads. Cars can go really really fast, and people in a hurry hate being held up. By capping the speed everyone can go, there's less FOMO and less danger when driving. This stuff is easy to understand when the risk is bodily harm. Sadly humans struggle to see the same phenomenon when all it's doing is wrecking your sleep and downtime and getting in the way of doing literally anything else. So maybe, this will need to be enforced at a societal level. Australia already has "right to disconnect" laws, for example.
But the risk of addiction doesn't stop at individuals.
Further in comes the turn: Stevey's stopped pulling his agentic coding slot machine lever long enough to shill an obscure product which adds a layer of competition among your teammates. Now the casino gets a new jackpot board and FOMO because your teammates are better gamblers than you.
> Amp is also more fun. It takes a different design approach, being intentionally team-centric. Amp gamifies your agentic development by making it public, with leaderboards and friendly competition, as well as liberal thread sharing. It all manages to be low-pressure and engaging.
You can tell Steve doesn't believe his own words because he doesn't back up that final sentence at all, nor does he back up his statement with personal experience. Whatever Amp becomes in a team context, Steve only wrote enough about it to collect a paycheck. Make of that what you will.
It is definitely not all bad. Despite being addictive, with good guardrails you can produce a lot of high-quality work.
Just don't addict yourself to pulling that next item off the backlog, or trying one more time to feel that magic mind-reading vibe when the agent nails it. Rather, work your hours then go out and stare at the sun. Or get the code done as fast as possible and spend more time talking to customers so you can be sure the thing you're building really fast is actually the thing they need.
Part of what's so brutal to see here is that we know there's another Steve in there (or at least there was at one time another Steve in there) who would have had no trouble making serious, levelheaded, deeply worthwhile arguments around the AI narrative.
I really wish we could still have that Steve's thoughts.
I'm glad to hear that. I've been an outspoken opponent of his, but it's because his recent writing has been so preachily dismissive of anyone like me: anyone who is a holdout on AI. He asserts the holdouts are irrelevant, outmoded, not worthy of time or thought... but I work on the same (difficult) problems he used to care about! So yeah, it stings me a bit extra, and I have given him the same disrespect I've felt.