It doesn't work...yet. I agree my stomach churns a little at that sentence. However, paying customers care about reliability and performance. Code review helps with that today, but it's only a matter of time before it becomes more performative than useful in serving those goals, at the cost of velocity.
How does that compare for those of us with 15-50 years of software engineering experience working on giant codebases with years of domain rules, customers, use cases, etc.?
When will AI be ready? Microsoft tried to push AI into big enterprise; Anthropic is doing a better job - but it's all still in its infancy.
Personally, I hope it won't be ready for another 10 years so I can retire before it takes over :)
I remember when folks on HN all called this AI stuff made up
That's the problem: the most “noise” regarding AI is made by juniors who are wowed by the ability to vibe code some fun “side project” React CRUD apps, like compound interest calculators or PDF converters.
No mention of the results when targeting bigger, more complex projects, that require maintainability, sound architectural decisions, etc… which is actually the bread and butter of SW engineering and where the big bucks get made.
>>like compound interest calculators or PDF converters.
Caught you! You've been very active on HN the last few days, because these were exactly the projects in the "Show HN: .." category, and you wouldn't have been able to name them if you hadn't spent your whole time here :-D
It's still new, but it's useful now. I'm on the Claude Pro plan personally. I had Claude write a Chrome extension for me this morning. It built something working, close to an MVP, then I hit the Claude Pro limit.
I have access to Claude Code at work. I integrated it with IntelliJ and let it rip on a legacy codebase that uses two different programming languages plus one of the smaller SCADA platforms plus hardware logic in a proprietary format used by a vendor tool. It was mostly right, probably 80-90%, with a couple of misunderstandings. No documentation, and I didn't really give it much help; it just kind of...figured it out.
It will be very helpful for refactoring the codebase in the direction we were planning on going, both from the design and maybe implementation perspectives. It's not going to replace anybody, because the product requires having a deep understanding across many disciplines and other external products, and we need technical people to work outside the team with the larger org.
My thinking changes every week. I think it's a mistake to blindly trust the output of the tool. I think it's a mistake to not at least try incorporating it ASAP, just to try it out and take advantage of the tools that everybody else will be adopting or has adopted.
I'm more curious about the impacts on the web: where is the content going to come from? We've seen the downward StackOverflow trend, will people still ask/answer questions there? If not, how will the LLMs learn? I think the adoption of LLMs will eventually drive the adoption of digital IDs. It will just take time.
As a guy in his mid-forties, I sympathize with that sentiment.
I do think you're missing how this will likely go down in practice, though. Those giant codebases with years of domain rules are all legacy now. The question is how quickly a new AI codebase could catch up to that legacy codebase and overtake it, with all the AI-compatibility best practices baked in. Once that happens, there is no value left in the legacy code.
Any prognostication is a fool's errand, but I wouldn't go long on those giant codebases.
Yeah, agreed - it all depends on how quickly AI (or, more aptly, AI-driven work done by humans hoping to make a buck) starts replacing real chunks of production workflows.
“Prediction is hard, especially about the future” - Yogi Berra
As a hedge, I have personally dived deep into AI coding - actually have been for 3 years now. I've even launched 2 AI startups and am working on a third - but it's all so unpredictable, and hardly lucrative yet.
As someone over 50, I'm a clear target for replacement by AI.
This is what people were saying about Rails 20 years ago: it wows the kids who use it to set up a CRUD website quickly but fails at anything larger-scale. They were kind of right in the sense that engineering a large complex system with Rails doesn't end up being particularly easier than with Plone or Mason or what have you. Maybe this will just be Yet Another Framework.
Ruby on Rails is an interesting hype counterpoint.
A substantial number of the breathless LLM hype results come, in my estimation, quicker and better from 15-min RoR tutorials. [Fire up a calculator (from a library), a pretty visualization (from a JS library), add some persistence (baked-in DB, webhost), customize navigation … presto! You actually built a personal application.]
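To put the toy apps in perspective: the entire "business logic" of one of those compound interest calculators mentioned above is a few lines of plain Ruby (a sketch; the function name and signature are illustrative, not from any library or tutorial):

```ruby
# Compound interest: amount = principal * (1 + rate/n) ** (n * years)
# where n is the number of compounding periods per year.
def compound_interest(principal, annual_rate, periods_per_year, years)
  principal * (1 + annual_rate / periods_per_year)**(periods_per_year * years)
end

# e.g. $1,000 at 5% APR, compounded monthly for 10 years
puts compound_interest(1000.0, 0.05, 12, 10).round(2)  # => 1647.01
```

Everything else in such an app — the form, the chart, the persistence — is framework scaffolding, which is exactly the point being made about RoR tutorials.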
Fundamental complexity, engineering, scaling gotchas, accessibility needs, and customer insanity aren't addressed. RoR optimizes for some things; like any other optimization, that's not always meaningful.
LLMs have undeniable utility, natural interaction is amazing, and hunting in Reddit, stackoverflow, and MSDN forums ‘manually’ isn’t a virtue… But when the VC subsidies stop and the psychoses get proper names and the right kind of egg hits the right kind of face over unreviewed code, who knows, maybe we can make a fun hype cycle called “Actual Engineering” (AE®).
I'm currently in a strange position where I am that developer with 15+ years of industry experience, managing a project that's been taken over by a young AI/vibe-code team (against my advice) that plans to do a complete rewrite in a low-code service.
The project was started in the late 00s, so it has a substantial amount of business logic, rules, and decisions. Maybe I'm being an old man shouting at clouds, but I assume (or hope?) they will fail to deliver whatever they promised to the CEO.
So, I guess I'll see the result of this shift soon enough - hopefully at a different company by the time AI-people are done.
The problem is, feedback cycles for projects are long. Like 1-10 years depending on the nature and environment. As the saying goes, the market can remain irrational longer than you can remain solvent.
Maybe the deed is done here, and I'd agree it's not particularly fun, but you could still think about what you can bring to the table in situations like this. Can you work on shortening these pesky feedback cycles? Can you help the team (if they even accept it) with _some_ degree of engineering? It might not be the last time this happens.
I think right now we're seeing some weird stuff going on, but I think it hasn't even properly started yet. Remember when pretty much every company went "agile"? In most cases I've seen they didn't, just wasting time chasing miracles with principles and methodologies few people understand deeply enough to apply. Yet this went on for, what, 10 years?
>>How does that compare to those of us with 15-50 years of software engineering experience working on giant codebases that have years of domain rules, customers and use cases etc.
At most of the companies I've worked at, the development team is more like a cluster of individuals who all happen to be contributing to a shared codebase than anything resembling an actual team collaborating on a shared goal. AI-assisted engineering would have helped massively, because the AI would look beyond the myopic view of any developer focused only on their own tiny domain within the bigger whole.
Admittedly though, on a genuinely good team it'll be less useful for a long time.
Exactly. As compute increases these algorithms will only get more compelling. You can test and evaluate so many more ideas than any human inventors can generate on their own.
I worry a lot about fads in engineering management. Any time you prescribe process over outcomes, you create performative behavior and bad incentives, in any discipline. In my observation, this tends to happen in engineering because senior leaders have no idea how to evaluate EMs in a non-performative way, or as a knee-jerk to some broader cultural behavior. I think this is why you see many successful, seasoned EMs become political animals over time.
My suspicion about why this is the case is rooted in the responsibilities engineering shares with product and design at the management level. In an environment where an EM can make very few unilateral decisions, it is difficult to know whether an outcome is due to the EM doing well or to the people around them. I could be wrong, but once you look high enough in the org chart to no longer see trios, this problem recedes.
The author really got me thinking about the timeless aspects of the role underlying fads. I have certainly noticed shifts in management practice at companies over my career, but I choose to believe the underlying philosophy is timeless, like the relationship between day to day software engineering and computer science.
I worry about the future of the EM discipline. Every decade or so, it seems like there is a push to eliminate the function altogether, and no one can agree on the skillset. And yet like junior engineers, this should be the function that grows future leadership. I don't understand why there is so much disdain for it.
"Process over outcome" is a label I think anyone could easily pin on a process they didn't like.
In my younger years, I was very cavalier about my approach to programming even at a larger company. I didn't particularly want to understand why I had to jump through so many hoops to access a production database to fix a problem or why there were so many steps to deploy to production.
Now that I'm more experienced, I fully understand all of those guardrails, and as a manager my focus is on streamlining them as much as possible to get maximum benefit with minimum negative impact on the team solving problems.
But this involves a lot of process automation and tooling.
The problem, IMO, tends not to be that there are guardrails in place. It's that they are often built by people who only care about the guardrail part and completely forget that it's supposed to be the last barrier, and that there are other things you can do before people ever hit a guardrail.
What if teams were integrated groups of engineers, designers, and product people, managed by polymaths with at least some skill in all of these areas. In this case, do you think it would be easier to evaluate the team’s (and thus the manager’s) performance and then higher levels of management would care less about processes and management philosophy?
You're describing the GM (general manager) model, sometimes called the single threaded leader. This does work well in large scale organizations...especially ones where teams are built around projects and outcomes but exist for a finite time. Video game development tends to have this model.
I tend to believe in this model because when I've seen it in action, bad GMs are quickly identified and replaced for the betterment of the project.
It can be challenging to implement for a few reasons.
- It is difficult for a GM to performance manage across all disciplines. This model works best when you aren't interested in talent development.
- It's bad for functional consistency. GMs are focused on their own outcomes and can make the "ship your org chart" problem worse. It requires strong functional gatekeepers as a second-order discipline.
That's usually a consequence of bad incentives. Either leadership is selecting for that kind of behavior in managers or they don't know how to properly unselect for it.
If a bunch of crap code gets shipped, it isn't always because the engineers are bad. Often it's because they were given a bad deadline. Same with EMs.
Design moves at the speed of culture, not technology. It took 3 years of people messing with mobile phones before it occurred to someone to implement "pull down to refresh," and much longer for it to become common practice that people just expect from UX. I think people are still learning what they want from an AI experience.
I do think you have to be pretty targeted with your predictions, though. Consumer product design seems to be evolving differently from B2B and at a different pace. Growth curves are different for each.
One of the bigger design battles at a prior company was designers insisting on pull to refresh, and the researchers insisting on removing it due to customer feedback.
The whole thrust of anti-vibe UI sentiments remind me of when Twitter Bootstrap came out. The unlocks were huge because suddenly people who didn't know how to make nice looking UI didn't have to do much more than drop in a stylesheet link and add some classes. Despite that, everyone complained all web sites started looking the same.
And, sure, that was valid. However, eventually everyone started figuring out how to get a unique look out of Bootstrap while still enjoying the benefits. All our modern frontend component frameworks can trace their lineage back to Bootstrap.
We'll see something similar with vibe UIs. Just a matter of time.
I don't see how what you're saying is at odds with the author. At no point did they say Vonnegut was a failure before Slaughterhouse-Five. Only that he, like many others, didn't produce his magnum opus until later in life. This isn't limited to writing, either. There are examples in all fields if you look, both creative and commercial. This idea is definitely at odds with a lot of current SV rhetoric.
Your point, that many people don't produce their magnum opus until later in life, is definitely at odds with a lot of current SV rhetoric. And it's a good point.
If the article had tried to make your point, it would have been a much better article.
Instead, it made a different, much less true point, and had to contort Vonnegut's biography to make it.
"His career looked like a sequence of failures until it suddenly wasn't" is just not true of Vonnegut, not true of Galilei, not true of any of the other "examples in all fields" cites in the article. All of them are people who consistently produced great work from early on, well before their 40s, and then produced a magnum opus that really stood the test of time.
I think the people who need to hear this message are not in their late 40s, but are in their 20s thinking that they have to do it now or they never will. I see a lot of ageism on X from people who are clearly young and inexperienced, but when you actually get into the real world and how things are done you see the vast majority of real success happens later in life once you've been around a couple times.
Experience, on the whole, really does get you further than cleverness, but good luck telling that to the inexperienced.
Something we definitely lose as we age is our lack of judgement, and with it the ability to play and experiment. I miss that naïveté and overconfidence from my twenties, knowing I was allowed to screw up.
Is it really those things, or is it that as we get older we have something to lose? I could go live like I'm 20 again...reduce my spending to nothing, work all day on whatever I believe in...but that would cost me my family's comfort, and I rather like them.
In any field where there is a creative element, progress comes in fits and starts that are difficult to predict in advance. No one can accurately predict when we'll get the cure for cancer, for example, in spite of people working on it.
But that isn't how investors operate. They want to know what they will get in exchange for giving a company a billion dollars. If you're running an AI business, you need to set expectations. How do you do that? Go do the thing you know you can do on a schedule, like standing up a new GPU data center.
I don't think the bitter lesson is misunderstood in quite the way the author describes. I think most are well aware we're approaching the data wall within a couple years. However, if you're not in academia you're not trying to solve that problem; you're trying to get your bag before it happens.
Why do you assume investors don’t know about this? They know some investments follow the power law - very few of them work out but they bring most value.
The very existence of OpenAI and Anthropic is proof of it happening.
Imagine you were an investor and you know what you know now (creativity can’t be predicted). How would you then invest in companies? Your answer might converge on existing VC strategies.
I don't assume that at all. Investors absolutely know, but investment is predicated on returns. You can't get those if you can't give a timeline for when value will be generated, unless your investment is so small it's practically a donation. Obviously you can invest in moonshots, but you don't want to bet your whole portfolio on them. Why do you think OpenAI had the governing structure it did before it made its breakthroughs, while now both it and Anthropic can do insane raises?
It depends on who is creating the definition of evil. Once you have a mechanism like this, it isn't long after that it becomes an ideological battleground. Social media moderation is an example of this. It was inevitable for AI usage, but I think folks were hoping the libertarian ideal would hold on a little longer.
It’s notable that the existence of the watchman problem doesn’t invalidate the necessity of regulation; it’s just a question of how you prevent capture of the regulating authority such that regulation is not abused to prevent competitors from emerging. This isn’t a problem unique to statism; you see the same abuse in nominally free markets that exploit the existence of natural monopolies.
Anti-State libertarians posit that preventing this capture at the state level is either impossible (you can never stop worrying about who will watch the watchmen until you abolish the category of watchmen) or so expensive as to not be worth doing (you can regulate it but doing so ends up with systems that are basically totalitarian insofar as the system cannot tolerate insurrection, factionalism, and in many cases, dissent).
The UK and Canada are the best examples of the latter issue; procedures are basically open (you don’t have to worry about disappearing in either country), but you have a governing authority built on wildly unpopular ideas that the systems rely upon for their justification—they cannot tolerate these ideas being criticized.
Software engineering in general is pretty famous for unironically being disdainful of anything old while simultaneously reinventing the past. This new wave is nothing new in that regard.
I'm not sure that means the people who do this aren't good engineers, though. If someone rediscovers something in practice rather than through learning theory, does that make them bad at something, or simply inexperienced? I think it's one of the strengths of the profession that there isn't a singular path to reach the height of the field.