The real power of Claude Code comes when you realise it can do far more than just write code.
It can, in fact, control your entire computer. If there's a CLI tool, Claude can run it. If there's not a CLI tool... ask Claude anyway; you might be surprised.
E.g. I've used Claude to crop and resize images, rip MP3s from YouTube videos, trim silence from audio files, the list goes on. It saves me incredible amounts of time.
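To give a concrete flavour, here's roughly the kind of thing it ends up running for me, wrapped in a little Python driver for illustration. This is a sketch, not a recipe: it assumes ffmpeg and yt-dlp are installed, and the file names, crop geometry and video URL are all made up.

    import subprocess

    # Crop a region (crop=width:height:x:y) and scale it to 800px wide, keeping aspect ratio.
    subprocess.run([
        "ffmpeg", "-i", "screenshot.png",
        "-vf", "crop=1200:800:100:50,scale=800:-1",
        "cropped.png",
    ], check=True)

    # Rip the audio track from a YouTube video as an MP3 (VIDEO_ID is a placeholder).
    subprocess.run([
        "yt-dlp", "-x", "--audio-format", "mp3",
        "https://www.youtube.com/watch?v=VIDEO_ID",
    ], check=True)

    # Trim leading silence quieter than -50 dB from a recording.
    subprocess.run([
        "ffmpeg", "-i", "raw.mp3",
        "-af", "silenceremove=start_periods=1:start_threshold=-50dB",
        "trimmed.mp3",
    ], check=True)

The point isn't the exact flags; it's that I no longer have to look any of this up.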
I don't remember life before it. Never going back.
You probably want to give Claude a computer. I'm not sure you always want to give it your computer unless you're in the loop.
We have cloud VMs running Linux and an IDE that we can access through the browser at https://brilliant.mplode.dev. Personally I think this is closer to the ideal UX for operating an agent (our environment doesn't install agents by default yet, but you should be able to just install them manually). You don't have to do anything to set up terminal access or SSH except sign in and wait for your initial instance to start, and once an instance is provisioned it automatically pauses and resumes based on whether your browser has it open. It's literally Claude + a personal Linux instance + an IDE that you can just open from a link.
Pretty soon I should be able to run as many of these at a time as I can afford, and control all of their permissions/filesystems/whatever with JWTs and containers. If one gets messed up or needs my attention, I open it with the IDE as my UI and can just dive in and fix it. I don't need a regular Linux desktop environment or UI or anything; I just render things in panes of the IDE, or launch a container serving a webapp that does what I want and open that instead of the IDE. I haven't ever felt this excited about tech progress.
>> I thought I would see a pretty drastic change in terms of Pull Requests, Commits and Lines of Code merged in the last 6 weeks. I don’t think that holds water though
The chart basically shows the same output with Claude as before.
Which kinda matches what I felt when using LLMs.
You "feel" more productive and you definitely feel "better" because you don't do the work now; you babysit the model and feel productive.
But at the end of the day the output is the same, because every advantage of the LLM is nerfed by the time you have to spend reviewing it all, fixing it, re-prompting it, etc.
And because you offload the "hard" part - and don't flex that thinking muscle - your skills decline pretty fast.
Try using Claude or another LLM for a month and then try building a tiny little app without it. It's not only the code part that will seem hard, but the general architecture/structuring too.
And in the end the whole code base slowly (but not that slowly) degrades, and in the longer term the result is a net negative. At least with current LLMs.
I've been exploring vibe coding lately and by far the biggest benefit is the lack of mental strain.
You don't have to hold your code in your head as a conceptual whole, or keep your plan for the next hour of implementation in mind while a stubborn bug is taunting you.
You just ask Mr. Smartybots and it delivers anything from proofreading to documentation and whatnot, with some minor fuckups occasionally.
My friend, there’s no solid evidence that this is the case. So far, there are a bunch of studies, mostly preprints, that make vague implications, but none that can show clear causal links between a lack of mental strain and atrophying brain function from LLMs.
You're right, we only have centuries of humans doing hard things that require ongoing practice to stay sharp. Ask anyone who does something you can't fake, like playing the piano, what taking months off does to their abilities. To be fair, you can get them back much faster than someone who never had the skills to begin with, but skills absolutely atrophy if you are not actively engaged with them.
I wish, but as it stands right now LLMs have to be driven and caged ruthlessly. Conventions, architecture, interfaces, testing, integration. Yes, you can YOLO it and just let it cook up _something_, but that something will be an unmaintainable mess. So I'm removing my brain from the abstraction level of code (as much as I dare), but most definitely not from everything else.
We know that learning and building mental capabilities require effort over time. We know that when people have not been applying/practicing programming for years, their skills have atrophied. I think a good default expectation is that unused skills will fade over time. Of course the questions are: is the engagement we have with LLMs enough to sustain the majority of those skills? Are there new skills one builds that can compensate for those lost (even when the LLM is no longer used)? How quickly do the changes happen? Are there wider effects, positive and/or negative?
But the mental strain is how you build skills and get better at your job over time. If it's too much mental strain, maybe your code's architecture or implementation can be improved.
A lot of this sounds like "this bot does my homework for me, and now I get good grades and don't have to study so hard!"
I haven't found such a bug yet. If it fails to debug on its second attempt I usually switch to a different model or tell it to carpet bomb the code with console logs, write test scripts and do a web search, etc.
The strength (and weakness) of these models is their patience is infinite.
Perhaps you set a very high quality bar, but I don't see the LLMs creating messy code. If anything, they are far more diligent in structuring it well and making it logically sequenced and clear than I would be. For example, very often I name a variable slightly incorrectly at the start and realise it should be just slightly different at the end and only occasionally do I bother to go rename it everywhere. Even with automated refactoring tools to do it, it's just more work than I have time for. I might just add a comment above it somewhere explaining the meaning is slightly different to how it is named. This sort of thing x 100 though.
> they are far more diligent in structuring it well and making it logically sequenced and clear than I would be
Yes, with the caveat: only on the first/zeroth shot. Even when they keep most/all of the code in context, if you vibe code without incredibly strict structuring/guardrails, then by the time you are 3-4 shots in the model has "forgotten" the original architecture, is duplicating data structures for what it needs on _this_ shot, and will gleefully end up with amnesiac-level repetition: duplicate code that does "mostly the same" thing, all of which acts as further poison for progress. The deeper you go without human intervention the worse this gets.
You can go the other way, and it really does work. Set up strict types, clear patterns, clear structures, and intervene to explain and direct. The type of things senior engineers push back on in junior PRs: "Why didn't you just extend this existing data structure and factor that call into the trivially obvious extension of XYZ??"
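To make that concrete, here's a made-up sketch of the kind of guardrail I mean: one typed, documented data structure that is clearly the thing to extend, rather than leaving room for a parallel ad-hoc dict that does "mostly the same" thing. The names are hypothetical.

    from dataclasses import dataclass, field

    @dataclass
    class UserEvent:
        """Canonical event record. New event kinds extend this; don't invent a parallel structure."""
        user_id: str
        kind: str                      # e.g. "login", "purchase"
        timestamp: float
        metadata: dict = field(default_factory=dict)

    # The "trivially obvious extension": a new event kind reuses the same record.
    def purchase_event(user_id: str, amount: float, ts: float) -> UserEvent:
        return UserEvent(user_id, "purchase", ts, {"amount": amount})

With something like that pinned down, the model (like the junior) has a much harder time wandering off and duplicating state.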
I know what you're writing is the whole point of vibe coding, but I'd strongly urge you to not do this. If you don't review the code an LLM is producing, you're taking on technical debt. That's fine for small projects and scripts, but not for things you want to maintain for longer. Code you don't understand is essentially legacy code. LLM output should be bent to our style and taste, and ideally look like our own code.
If that helps, call it agentic engineering instead of vibe coding, to switch to a more involved mindset.
Not for me. I just reverse engineered a Bluetooth protocol for a device, which would have taken me at least a few days of capturing streams of data in Wireshark. Instead I dumped the entire captures into an LLM and it gave me much more control finding the right offsets etc. It took me only a day.
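For flavour, the kind of throwaway analysis script it bangs out on request looks something like this. Everything here is hypothetical (the dump file, the frame delimiter, the guess that there's a 16-bit little-endian sequence counter); the point is how cheap it is to probe candidate offsets.

    import struct

    with open("capture.bin", "rb") as f:
        packets = f.read().split(b"\xaa\x55")  # assumed frame delimiter

    # Probe each byte offset for a field that behaves like a little-endian
    # 16-bit counter; a field that increments by 1 between packets is
    # probably a sequence number.
    for offset in range(16):
        values = [
            struct.unpack_from("<H", pkt, offset)[0]
            for pkt in packets
            if len(pkt) >= offset + 2
        ]
        deltas = [b - a for a, b in zip(values, values[1:])]
        if deltas and all(d == 1 for d in deltas):
            print(f"offset {offset}: looks like a sequence counter")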
I got it to diagnose why my Linux PC was crashing. It did a lot of journalctl grepping on my behalf and I was glad for its help. I think it may have helped fix it, but we'll see.
I was having a kernel panic on boot, which I would work around by loading the previous kernel. Turns out I had just run out of space on my boot partition, but in my initial attempts to debug and fix it I had gotten into a broken package state.
I handed it the reins just out of morbid curiosity, and because I couldn't be bothered continuing for the night, but to my surprise (and with my step-by-step guidance) it did figure it all out. It found unused kernels and, when uninstalling them didn't remove them, it deleted them with rm. It then helped resolve the broken package state and eventually I was back in a clean working state.
Importantly though, it did not know it hadn't actually cleaned up the boot partition initially. I had to insist that it had not in fact just freed up space, and that it would need to remove them.
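For what it's worth, the checks it walked through boiled down to something like this (a simplified reconstruction, assuming Debian/Ubuntu-style kernel packages):

    import shutil, subprocess, platform

    # How full is the boot partition, really?
    usage = shutil.disk_usage("/boot")
    print(f"/boot: {usage.used / usage.total:.0%} used")

    # Which kernel is actually running? Anything else is a removal candidate.
    print("running kernel:", platform.release())

    # List installed kernel packages so the stale ones can be purged.
    subprocess.run(["dpkg", "--list", "linux-image-*"], check=False)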
Completely agree. Another use case is a static site generator. I just write posts with whatever syntax I want and tell Claude Code to make it into a blog post in the same format. For example, I can just write in the post “add image image.jpeg here” and it will add it - much easier than messing around with Markdown or Hugo.
Beyond just running CLI commands, you can have CC interact with interactive CLI applications. E.g. I built this little tool that gives CC a Tmux-cli command (a convenience wrapper around Tmux) that lets it drive CLI applications, monitor them, etc.:
For example this lets CC spawn another CC instance and give it a task (way better than the built-in spawn-and-let-go black box), or interact with CLI scripts that expect user input, or use debuggers like Pdb for token-efficient debugging and code-understanding, etc.
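To be clear, the snippet below isn't the tool itself, just a minimal sketch of the underlying trick: drive any interactive CLI program through tmux's send-keys and capture-pane. It assumes tmux is installed; the session name and the program being driven are made up.

    import subprocess, time

    SESSION = "agent-demo"

    def tmux(*args):
        return subprocess.run(["tmux", *args], capture_output=True, text=True, check=True)

    # Start a detached session running a Python REPL (a stand-in for any CLI app).
    tmux("new-session", "-d", "-s", SESSION, "python3")
    time.sleep(1)

    # Type into the pane exactly as a user would, then press Enter.
    tmux("send-keys", "-t", SESSION, "print(6 * 7)", "Enter")
    time.sleep(0.5)

    # Read back whatever is on screen so the agent can inspect it.
    print(tmux("capture-pane", "-t", SESSION, "-p").stdout)

    tmux("kill-session", "-t", SESSION)

Because the agent both sends keystrokes and reads the screen back, it can babysit long-running or interactive programs instead of firing them off blind.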
It's the automator's dream come true. Anything can be automated, anything scripted, anything documented. Even if we're gonna use other (possibly local) models in the future, this will be my interface of choice. It's so powerful.
Here's hoping that the demand for software continues to increase as developer productivity rises, and that increases in developer productivity are partially captured in higher salaries.
Automation is now trivially easy. I think of another new way to speed up my workflow — e.g. a shell script for some annoying repetitive task — and Claude oneshots it. Productivity gains built from productivity gains.
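A made-up but representative example of what "oneshots it" means here: a ten-line chore script I'd never have bothered writing by hand, like sweeping screenshots off the desktop into per-month folders (names and paths are illustrative only).

    from pathlib import Path
    import shutil, datetime

    desktop = Path.home() / "Desktop"
    for f in desktop.glob("Screenshot*.png"):
        stamp = datetime.date.fromtimestamp(f.stat().st_mtime)
        dest = desktop / "screenshots" / f"{stamp:%Y-%m}"
        dest.mkdir(parents=True, exist_ok=True)
        shutil.move(str(f), str(dest / f.name))

Each of these is trivial on its own; the change is that the activation energy to actually write them has dropped to near zero.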
I don't feel Claude Code helps one iota with the issue in xkcd 1319. If anything, it has increased the prevalence of "ongoing development" as I automate more things and create more problems to solve.
However, I have fixed up and added features to 10-year-old scripts that I never considered worth the trade-off to work on. It lowers the cost of automation.
It's not a dream come true to have a bunch of GPUs crunching at full power to achieve your minor automation, with the company making them available losing massive amounts of money on it:
I'd like to say I'm praising the paradigm shift more than anything else (and this is to some degree achievable with smaller, open and sometimes local agentic models), but yes, there are definitely nasty externalities (though burning VC cash is not high up that list for me). I hope some externalities can be optimized away.
The point is that it costs more than $1200; you're just not the one paying all the costs. It seems like there are a ton of people on HN who are absolutely pumped to be totally dependent on a tool that must rugpull them eventually to continue existing as a business. It feels like an incredible shame that the craft is now starting to become totally dependent on tools like this, where you're calling out to the cloud to do even the most basic programming task.
> If there's a CLI tool, Claude can run it. If there's not a CLI tool... ask Claude anyway, you might be surprised.
No Claude Code needed for that! Just hang around r/unixporn and you'll collect enough scripts and tips to realize that mainstream OSes have pushed computers from being a useful tool to being a consumerist toy.
That's like saying "you don't need a car, just hang around this bicycle shop long enough and you'll realize you can exercise your way around the town!"
The simple task of unzipping something with tar is cryptic enough that collecting unix scripts from random people is definitely something people don't want to do in 2025.
Remembering one thing is easy; remembering all the things is not. With an agentic CLI I don't need to remember anything, other than whether what it's doing looks safe or not.
But I still need to remember what to search for, and they're not always straightforward. The agent writes and organizes the scripts/commands I use frequently, and references them as a starting point. It all started by having an agent look at my shell history.
It's faster than I am, and it knows things like ffmpeg flags I don't care to memorize.
Even opencode running on a local model is decent at this.
The point is not that a tool maybe exists. The point is: you don't have to care whether the tool exists, and you don't have to collect anything. Just ask Claude Code and it does what you want.
Maybe you do today. Will that always be the case? People running Windows do not have as much control over their systems as they should. What does enshittification look like for AI?