I was nodding along enthusiastically right up until LLMs, and at that point we sharply diverge.
For me, part of creating "perfect" software is that I am very much the one crafting the software. I'm learning while creating, but I find such learning is greatly diminished when I outsource building to AI. It's certainly harder and perhaps my software is worse, but for me the sense of achievement is also much greater.
The author is saying that “perfect software” is like a perfect cup of coffee. It’s highly subjective to the end user. The perfect software for me perfectly matches how I want to interact with software. It has options just for me. It’s fine tuned to my taste and my workflows, showing me information I want to see. You might never find a tool that’s perfect for you because someone else wrote it for their own taste.
LLMs come in because they wildly increase the amount of stuff you can play around with on a personal level. It means someone finally has time to put together the perfect workflow and advanced tools. I personally have about 0 time outside of work that I can invest in that, so I totally buy the idea that LLMs can really give people the space to develop personal tools and workflows that work perfectly for them. The barrier to entry and experimentation is incredibly low, and since it’s just for you, you don’t need to worry about scale and operations and all the hard stuff.
There is still plenty of room for someone to do it by hand, but I certainly don’t have time to do that. So I’ll never find perfect software for some of my workflows unless I get an assist from LLMs.
I agree with you about learning and achievement and fun — but that’s completely unrelated to the topic!
Thanks for this. This is exactly the spirit in which I wrote it.
You hit on the key constraint: time. The point isn't that the use of LLMs specifically provides agency, but that it lowers the barrier, allowing us to build things that bring it. "Perfect software" is perfect not just because of what it does, but because of what it lacks (fluff, tracking, features we don't need).
I find that most of the time, programming is just procrastination, and having the LLM there breaks through that procrastination and lets me focus on the idea I was thinking about without going into the weeds.
A lot of the time, the LLM outputs the code, I test my idea, and realize I really don't care or the idea wasn't that great, and now I can move on to something else.
I hope at some point people don't feel the need to justify using or not using LLMs. If you feel like using them, use them. If you regret doing that, delete the code and write it yourself. And vice versa - if you are in a slog and an LLM can get you out, just use it.
You have to break down the problem into manageable chunks anyway; might as well feed some of those into a code agent while you write the rest yourself. If you don't like what it did, explain what it got wrong. It shouldn't take long to figure out which parts are the biggest waste of time to write yourself. You do still have to hop between tools and adjust your confidence as they improve.
So OK, you don't get into the weeds and you're proud of that, but also nothing you can think of wanting to do turns out to be worth doing.
Those things are wholly related. Opportunity never comes at exactly the time or in the way you expect. You have to be open to it, you have to be seeking out new experiences and new ideas. You have to get into the weeds and try things without being entirely sure what the outcome might be, what insight you might gain, or when that insight might become useful.
A friend of mine has a hilarious method for breaking through procrastination. His one trick is to spend money on the task/project: buy all kinds of things to make the job easier. It has to be useful, but it is more about paying the unlock fee.
GitHub is full of half-forgotten saved games waiting for money to be thrown at them.
I'm now using an LLM to write a voice note organisation application that I have been dreaming about for two decades.
I did vibe code the first version. It runs, but it is utterly unmaintainable. I'm now rewriting it using the LLM as if it were a junior or outsourced programmer (not a developer, that remains my job) and I go over every line of application code. I love it, I'm pushing out decent quality code and very focused git commits. I write every commit message myself, no LLM there. But I don't even bother checking the LLM's unit and integration tests.
I would have never gotten to this stage of my dream project without AI tooling.
Not grandparent, but I'm in the same boat. I've been dreaming for almost 10 years of building a sort of digital bullet journal. I made some feeble attempts to start, but never got to the point where I could actually use it. Last year I started again, heavily LLM assisted. After 1-2 weeks (this was before agents), I had something usable, something I could benefit from, which made me want to improve it more, which made me want to use it more.
By now it's grown to 100k lines of code. I've not read all of them, but I do have a high-level overview of the app, and I've done several refactorings to keep it maintainable.
This would not have happened without AI agents. I don't have the time, period. With AI agents, I can kick off a task while I'm going to the park with my kids. Instead of scrolling HN, I look every now and then at what the agent is doing.
So, it's a personal pet project, and I've thrown in everything and the kitchen sink. There's a Telegram integration so I can submit entries via Telegram, and there's a chatbot integration so that I can "talk to my entries" and ask questions (about what I did when). It imports weather data, Garmin data, and so on.
So yes, it's around 100k lines of code (Python, HTML, JS and CSS).
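To give a flavour of what those lines look like: the Telegram piece boils down to a bot that turns each message into a timestamped entry. A simplified sketch, assuming python-telegram-bot (the names and the journal.jsonl file are just stand-ins; the real code stores entries in the app's database and does more than this):

    import datetime
    import json

    from telegram import Update
    from telegram.ext import ApplicationBuilder, ContextTypes, MessageHandler, filters

    async def save_entry(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
        # Every plain-text message becomes a timestamped journal entry.
        entry = {"ts": datetime.datetime.now().isoformat(), "text": update.message.text}
        with open("journal.jsonl", "a") as f:
            f.write(json.dumps(entry) + "\n")
        await update.message.reply_text("Logged.")

    app = ApplicationBuilder().token("YOUR_BOT_TOKEN").build()
    app.add_handler(MessageHandler(filters.TEXT & ~filters.COMMAND, save_entry))
    app.run_polling()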
> With AI agents, I can kick off a task while I'm going to the park with my kids. Instead of scrolling HN, I look every now and then at what the agent is doing.
How does that work? Are you running the agents on a server? Are you using GNU screen and Termux? Can you respond to prompts asking for permission to e.g. run ls or grep?
I have at least two projects that I estimated to take a week or two but aren't finished after years. There might be others that just got abandoned that should be included in the count.
Then there are things that work but aren't polished enough or should really have documentation.
I can’t (due to other priorities) give consistent time to a project unless it is very important. That lack of consistency means I have to spend time re-learning what I was thinking and doing which is both inefficient and not fun. Since the projects are either experimental or not that important, I’m generally more motivated to do something else.
Over time I’ve learned to not even start such projects, but LLMs have made it easier to complete them: they make the work faster (shrinking the time side of that time-versus-importance trade-off) and ease the refamiliarization problem, which adds to the set of such projects I’m willing to tackle.
Lack of character, distracted by other things for too long, drowning in unforeseen complexity, much slower progress than expected, bored with it, force majeure, etc.
However, I don’t think using LLMs has to be an all-or-none proposition. You can still choose to build the parts you most care about yourself (where the learning happens) and delegate the other aspects to AI.
In the case of the text justifier, it was a small nuisance I wanted solved with very little effort. I didn't care about the browser APIs, just the visual outcome, so I let the LLM do it all.
If I were building something more complex, I would use LLMs much more mindfully. The value is in having the choice to delegate the chores so you can focus on the craft where it matters to you.
While we might value the process differently, the broader point remains that these tools enable people to build things they otherwise wouldn't have the time or specific resources to create, and still feel a sense of agency and ownership.
I remember some of the early phases of home computing. The whole point of owning a home computer was that in addition to using other people's software, you could write your own and put the machine to whatever use you could think of. And it was a machine you owned, not time on some big company's machine which, ultimately, was controlled, and uses approved, by that company. The whole point of the home computing market was to create an environment where people managed the machines, not the other way around. (Wozniak has said that this was one of his motivations for creating the Apple I and II.)
Now we have people like this guy who say we finally have autonomy in computing—by purchasing time on some big company's machine doing numberwang to write the software for you. Ultimately the big company, not you, controls the machine and the uses to which it may be put. What's worse is these companies are buying up all the manufacturing capacity, starving the consumer market and making it more difficult to acquire computing hardware! No, this is not the autonomy envisioned by Wozniak, Jobs, or even a young shithead Bill Gates.
Hear, hear. The key word I feel, is autonomy. It's like that article says, the coming war on general-purpose computing. We must seize the means of computation. We've already lost control of mobile phones, whose major operating systems barely allow you to see files, or run software of your own choice. That corporate colonization is coming for the rest of the personal computing stack.
Large language models, and the resources and exploitative means it took to create them, are not "free": they carry serious social costs and a loss of personal freedom. I still use them, particularly local models, but even that is questionable. At least when the AI bubble bursts and the inevitable enshittification begins, I will be able to continue running them without further vendor lock-in or erosion of privacy.
In terms of bootstrappability and supply chain risk, LLMs fail because we the people are not able to re-create them from scratch.
The first time I saw a computer, I saw a machine for making things. I once read a quote from actor Noel Coward who said that television was "for appearing on, not watching", and I immediately connected it to my own relationship with computers.
I don't want an LLM to write software or blog posts for me, for the same reason I don't want to hire an intern to do that for me: I enjoy the process.
Everything else, I'm in agreement on. Writing software for yourself - and only for yourself - is a wonderful superpower. You can define the ergonomics for yourself. Lots of the things that make writing software painful go away when you're the only customer: UX learning curves flatten, security concerns diminish a little, subscription costs evaporate...
I actually consider the ability to write software for yourself a more profound and important right than anything the open source movement offers. Of course, I want an environment which makes that easier, so it's this that makes me more concerned about closed ecosystems.
I definitely made software for me with zero desire to learn, zero learning happening, just to scratch an itch.
That being said, calling it "perfect" is on the nose, at least for my own: it does a thing, it does it well enough, and that's all. It could be better, but it won't be, because it's not worth it; it's good enough.
Just today I gave an LLM the task of porting some Python modules to Rust. I then went back and learned enough Rust to understand those modules. This would have taken me days without the LLM. And I learned a lot.