
Like the other commenter said, you need to learn to use new tools, and your take clearly indicates you haven't.

Your take clearly indicates you need to learn how to code.

We've banned this account for repeatedly breaking the site guidelines and ignoring our requests to stop.

Please don't create accounts to break HN's rules with. It will eventually get your main account banned as well.

https://news.ycombinator.com/newsguidelines.html


Following this thread takes you into political territory and governmental/regulatory capture, which I believe is the root issue that cannot be solved in late stage capitalism.

We are headed towards (or already in) corporate feudalism and I don't think anything can realistically be done about it. Not sure if this is nihilism or realism but the only real solution I see is on the individual level: make enough money that you don't have to really care about the downsides of the system (upper middle class).

So while I agree with you, I think I just disagree with the little bit you said about "can't expect anything to change without-" and would just say: can't expect anything to change except through the inertia of what already is in place.


This seems pretty huge. Not sure by what metric it wouldn't be civilizationally gigantic for everyone to save that much time per day.


Your google-fu isn't failing. There are simply only a couple of large studies on this, and of those, zero with a useful methodology.


I think there is going to be a 2-3 year lag in understanding how LLMs actually impact developer productivity. There are way too many balls in the air, and anyone claiming specific numbers on productivity increases is likely very, very wrong.

For example, citing staff engineers introduces a bias: they have years of traditional training and are obviously not representative of software engineers in general.


FWIW I only mentioned staff engineers because the survey found staff+ engineers reported the highest time savings. The survey itself had time-savings averages for Junior (3.9), Mid-level (4.3), Senior (4.1), and Staff (4.4).


I hope I am never this slow to adapt to new technologies.


It is absolutely poor skill, or disingenuous at best, for any coder to claim AI tools slow them down lol.


I didn’t make this claim.

I also have a personal rule that I will actively try something for at least 4 months before making my decision about it (a programming language, new tools, or in this case AI-assisted coding).

I made the claim that in my area of expertise, I have found that most of the time it is faster to write something myself than to write out a really detailed md file / prompt. It becomes more tedious to express myself via natural language than with code when I want something very specific done.

In these types of cases, writing the code myself allows me to express the thing I want faster. Also, I like to code with the AI autocomplete, but while this can be useful, I sometimes disable it because it's distracting and consistently incorrect with its predictions.


claim that I claimed you claimed: "for any coder to claim AI tools slow them down"

---

claim you made: "One thing I’ve noticed though that actually coding (without the use of AI; maybe a bit of tab auto-complete) is that I’m actually way faster when working in my domain than I am when using AI tools."

---

You did make that claim but I'm aware my approach would bring the defensiveness out of anyone :P


> any coder to claim AI tools slow them down

This is what you said, and I didn't make this claim. I specifically said "in my domain", meaning a code base I know well and own, with a language, framework, and patterns that I've worked with for years.

For certain things, yes, it's faster to do it myself than to write a long prompt with context (or a predefined one), because it's faster to express what I want with code than with natural language.


Or the complete opposite: very skilled people with a lot of experience in a specific project. I am like that too at my current job. I've REALLY tried to use AI, but it has always slowed me down in the end. AI is only speeding me up on very specific and isolated things, tangential to the main product development.


For seasoned maintainers of open source repos, there is explicit evidence it does slow them down, even when they think it sped them up: https://arxiv.org/abs/2507.09089

Cue: "the tools are so much better now", "the people in the study didn't know how to use Cursor", etc. Regardless if one takes issue with this study, there are enough others of its kind to suggest skepticism regarding how much these tools really create speed benefits when employed at scale. The maintenance cliff is always nigh...

There are definitely ways in which LLMs, and agentic coding tools scaffolded on top, help with aspects of development. But to say that anyone who claims otherwise is either being disingenuous or doesn't know what they are doing is not an informed take.


I have seen this study cited enough times to have a copy-paste for it. And no, there are not a bunch of other studies with any sort of conclusive evidence to support this claim either. I have looked, and would welcome any with good analysis.

"""

1. The sample is extremely narrow (16 elite open-source maintainers doing ~2-hour issues on large repos they know intimately), so any measured slowdown applies only to that sliver of work, not “developers” or “software engineering” in general.

2. The treatment is really “Cursor + Claude, often in a different IDE than participants normally use, after light onboarding,” so the result could reflect tool/UX friction or unfamiliar workflows rather than an inherent slowdown from AI assistance itself.

3. The only primary outcome is self-reported time-to-completion; there is no direct measurement of code quality, scope of work, or long-term value, so a longer duration could just mean “more or better work done,” not lower productivity.

4. With 246 issues from 16 people and substantial modeling choices (e.g., regression adjustment using forecasted times, clustering decisions), the reported ~19% slowdown is statistically fragile and heavily model-dependent, making it weak evidence for a robust, general slowdown effect.

"""

Any developer (who was a developer before March 2023) who is actively using these tools and understands the nuances of how to search the vector space (prompt) is being sped up substantially.


I think we agree on the limitations of the study; I literally began my comment with "for seasoned maintainers of open source repos". I'm not sure if, in your first statement ("there are no studies to back up this claim.. I welcome good analysis"), you are referring to claims that support an AI speedup. If so, we agree that good analysis is needed. But if you think there already is good data:

Can you link any? All I've seen is stuff like Anthropic claiming 90% of internal code is written by Claude; I think we'd agree that we need an unbiased source and better metrics than "code written". My concern is that whenever AI usage by professional developers is studied empirically, as far as I have seen, the results never corroborate your claim: "Any developer (who was a developer before March 2023) that is actively using these tools and understands the nuances of how to search the vector space (prompt) is being sped up substantially."

I'm open to it being possible, but as someone who was a developer before March 2023 and is surrounded by many professionals who also were, our results are more lukewarm than what I see boosters claim. It speeds up certain types of work, but not in a manner that adds up to all work being "sped up substantially".

I need to see data, and all the data I've seen goes the other way. Did you see the recent Substack looking at public GitHub data showing no increase in the trend of PRs all the way up to August 2025? All the hard data I've seen is much, much more middling than what people who have something to sell AI-wise are claiming.

https://mikelovesrobots.substack.com/p/wheres-the-shovelware...
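The linked post aggregates that data, but here is a rough sketch (mine, not from the post) of how anyone can sanity-check the public-PR trend against GitHub's search API. The endpoint and `total_count` field are real; rate limits apply and counts on very broad queries can be approximate, so treat it as illustrative rather than rigorous:

  # Rough sketch: count public PRs created in a given window via GitHub's search API.
  # Unauthenticated requests are heavily rate-limited; check `incomplete_results`
  # in the response if you care about exactness.
  import requests

  def prs_created(start: str, end: str) -> int:
      """GitHub's total_count of public PRs created between start and end (YYYY-MM-DD)."""
      resp = requests.get(
          "https://api.github.com/search/issues",
          params={"q": f"type:pr created:{start}..{end}", "per_page": 1},
          headers={"Accept": "application/vnd.github+json"},
          timeout=30,
      )
      resp.raise_for_status()
      return resp.json()["total_count"]

  for year in (2022, 2023, 2024, 2025):
      print(year, prs_created(f"{year}-03-01", f"{year}-03-31"))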


Not exactly nothing to do with it; they still use generative AI to assist search.

And saying 'it is no more'... sigh. Such a weird take. The world's coming for you.


This is just wrong though. They absolutely learn in-context in a single conversation within context limits. And they absolutely can explain their thinking; companies just block them from doing it.
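For what it's worth, in-context learning is easy to demonstrate yourself. Here's a minimal sketch using the official `openai` Python SDK (the model name is just an assumption; use whatever you have access to). The word-reversal rule is made up on the spot, so the only way the model answers the last question correctly is by picking the pattern up from earlier turns in the same conversation:

  # Minimal in-context learning sketch. Assumes the `openai` SDK (>=1.0) and an
  # OPENAI_API_KEY in the environment; the model name is an assumption.
  from openai import OpenAI

  client = OpenAI()

  messages = [
      {"role": "system", "content": "Answer with a single word."},
      # In-context examples of an arbitrary, invented rule (reverse the word).
      {"role": "user", "content": "blorp -> prolb. What is 'stack'?"},
      {"role": "assistant", "content": "kcats"},
      {"role": "user", "content": "What is 'news'?"},
  ]

  response = client.chat.completions.create(
      model="gpt-4o-mini",  # assumption; any chat model works here
      messages=messages,
  )
  print(response.choices[0].message.content)  # expected: "swen"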


To fix having to approve commands over and over, use Windows WSL. Codex does not play nice with permissions/approvals on Windows; WSL solves that completely.


Absolutely agreed. Thinking anything else is nothing but cope, and these comments are FULL of it. It would be laughable if they weren't so gatekeepy and disingenuous about it.

