
Even as a principal software developer who is skeptical of and exhausted by the AI hype, I find AI IDEs can be useful. The rule I give to my coworkers is: use it where you know what to write but want to save time writing it. Unit tests are great for this. Quick demos and test benches are great. Boilerplate and glue are great. There are lots of places where trivial, mind-numbing work can be done quickly and effortlessly with an AI. These are cases where it's actually making life better for the developer, not replacing their expertise.

I've also had luck with it helping with debugging. It has the knowledge of the entire Internet, and it can quickly add tracing and run debugging sessions. It has helped me find some nasty interactions that I had no idea were a thing.

AI certainly has advantages in certain use cases; that's why we have been using AI/ML for decades. The latest wave of models brings even more possibilities. But of course, it also brings a lot of potential for abuse and a lot of hype. I, too, am quite sick of it all and can't wait for the bubble to burst so we can get back to building effective tools instead of making wild claims for investors.


I think you've captured how I feel about it too. If I try to go beyond the scope you've described, with Cursor in my case and a variety of models, I often end up wasting time unless it's a purely exploratory request.

"This package has been removed, grep for string X and update every reference in the entire codebase" is a great conservative task; easy to review the results, and I basically know what it should be doing and definitely don't want to do it.

"Here's an ambiguous error, what could be the cause?" sometimes comes up with nonsense, but sometimes actually works.


While the Twitter recommendation is strange, the assertion that we will suddenly have leisure time is demonstrably false. For decades, each new technological advance was supposed to let us work half as long because we could get twice as much done. That never happens. The cost of a person was never in how much they could produce but in how much they would demand to produce it. If you can do twice as much, your work product becomes half as valuable.

Many of the throwaway things we buy every day are cheap only because their production is so heavily automated. If we still had to cook food in a conventional kitchen instead of warming up precooked food, or use a hammer and hand plane to build furniture, we would be paying far more than we do today. If anything, the people working those jobs are paid comparatively less now than before automation, because it used to take skill to do those jobs and now anyone can. This is why the main advice of the article - do something that can't be automated and learn how to build the automation - is sound.

Writing the paper is a very small part of the research. It's entirely likely that - like many of their students - they love the research but hate writing papers. They are very different skill sets.

One would think they’d care about the experience of people actually reading their papers.

That sounds very suspicious.

Unless he's using it for storage! haha. Then it's cheap.

Sadly, yes, it's true. New AI projects are getting funded and existing non-AI projects are getting mothballed. It's very disruptive and yet another sign of the hype being a bubble. Companies are pivoting entirely to it and neglecting their core competencies.

While I agree entirely about what Grok teaches us about alignment, I think the argument that "alignment was never a technical problem" is false. Everything I have ever read about AI safety and alignment has started by pointing out the fundamental problem of deciding what values to align to, because humanity doesn't have a consistent set of values. Nonetheless, there is a technical challenge: whatever values we choose, we need a way to get the models to follow them. We need both. The engineers are solving the technical problem; they need others to solve the social problem.


> they need others to solve the social problem.

You assume it is a solvable problem. Chances are you will have bots following laws (as opposed to moral statements), and each jurisdiction will essentially have a different alignment. So in a socially conservative country, for example, a bot will tell you not being hetero is wrong and report you to the police if you ask too many questions about it, while in a queer-friendly country a bot would not behave like this. A bit like how some movies can only be watched in certain countries.

I highly doubt alignment as a concept works beyond making bots follow laws of a given country. And at the end of the day, the enforced laws are essentially the embodiment of the morality of that jurisdiction.

People seem to live in a fictional world if they believe countries won't force LLM companies to embed the country's morality, whatever it is, in their LLMs. This is essentially what has happened with intellectual property and media, and LLMs likely won't be different.


They do ask. When you set it up it presents 5 agreements to accept, only 2 of which are required. ACR, voice recognition, and a few other questionable things are covered under those optional agreements. I simply didn't accept them and all those features were disabled.


You can stop it much earlier than this. At setup time it gives you several policies to agree to. Only two of them are required; the rest are optional. The optional ones include Live Plus and several other systems for monitoring and advertising.


The process is reproducible even if the outcome isn't always identical. Outside of computing and mathematics, real world processes never result in the exact same output - small variations in size, density, concentration, etc. will occur.


I live in a province in Canada where the electrical system is owned and operated by a crown corporation. They are mandated to maintain very high uptime, and they do so through several means, including redundancy. Our electrical bills are cheaper than in much of the US. It certainly can be done; there are other means than competition to ensure adequate service.

