I'm still chugging along on a Dell XPS Developer Edition that came with Ubuntu 20.04 preinstalled. It's not as repairable as a Framework but it's been very reliable.
If I had to get a new laptop for personal use today I'd probably go for an X1 Carbon. Those seem to have very good luck with Linux even without OEM installs.
Some of my favorite desktop environments:
NeXTSTEP
BeOS/Haiku
ChromeOS
Raspberry Pi OS
macOS
CDE
I like uniformity, simplicity and consistency, stability, few surprises, little guessing. I want to use the computer. I don't need to become an expert in computer interfaces. Like cars. I just want to drive the car. I don't want to have to build or customize my own automobile ergonomics. Much of my time is spent on the command line anyway, but when I have to use the GUI, please make it very simple.
I work on deeply embedded software that doesn't have what you'd commonly think of as a "UI". So, unless there are bugs or we ship faster or something like that, users will never have any idea how much of our code is AI generated.
indeed - my main use case is those kinds of "record everything" setups. I'm not even super privacy conscious per se, but it just feels too weird to send literally everything I'm saying, all of the time, to the cloud.
luckily for now whisper doesn't require too much compute, but the kind of interesting analysis I'd want would require at least a 1B parameter model, maybe 100B or 1T.
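For a rough sense of what those sizes mean for local hardware, here's a back-of-the-envelope sketch of the memory needed just to hold the weights (my assumptions, not the commenter's: ~2 bytes per parameter at fp16, ~0.5 bytes at 4-bit quantization; KV cache and activations add more on top):

```python
# Rough estimate of memory needed to hold model weights alone.
# Ignores KV cache, activations, and runtime overhead.
def weight_gb(params_billions: float, bytes_per_param: float) -> float:
    return params_billions * 1e9 * bytes_per_param / 1e9

# 4-bit quantization: ~0.5 bytes per parameter.
print(weight_gb(1, 0.5))     # 1B params  -> 0.5 GB
print(weight_gb(100, 0.5))   # 100B params -> 50.0 GB
print(weight_gb(1000, 0.5))  # 1T params  -> 500.0 GB
```

So a 1B model is trivial for a laptop, 100B already wants a serious workstation, and 1T is out of reach for most home setups even heavily quantized.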
Autonomy generally, not just privacy. You never know what the future will bring, AI will be enshittified and so will hubs like huggingface. It’s useful to have an off grid solution that isn’t subject to VCs wanting to see their capital returned.
> You never know what the future will bring, AI will be enshittified and so will hubs like huggingface.
If anyone wants to bet that future cloud hosted AI models will get worse than they are now, I will take the opposite side of that bet.
> It’s useful to have an off grid solution that isn’t subject to VCs wanting to see their capital returned.
You can pay cloud providers for access to the same models that you can run locally, though. You don't need a local setup even for this unlikely future scenario where all of the mainstream LLM providers simultaneously decide to make their LLMs poor quality and none of them sees this as a market opportunity to provide good service.
But even if we ignore all of that and assume that all of the cloud inference everywhere becomes bad at the same time at some point in the future, you would still be better off buying your own inference hardware at that point in time. Spending the money to buy two M3 Ultras right now to prepare for an unlikely future event is illogical.
The only reason to run local LLMs is if you have privacy requirements or you want to do it as a hobby.
> but further enshittification seems like the world's safest bet.
Are you really, actually willing to bet that today's hosted LLM performance per dollar is the peak? That it's all going to be worse at some arbitrary date (necessary condition for establishing a bet) in the future?
Would need to be evaluated by a standard benchmark, agreed upon ahead of time. No loopholes or vague verbiage allowing something to be claimed as "enshittification" or some similarly fuzzy term.
Sorry, didn't realize what you were actually referring to. Certainly I'd assume the models will keep getting better from the standpoint of reasoning performance. But much of that improved performance will be used to fool us into buying whatever the sponsor is selling.
That part will get worse, given that it hasn't really even begun ramping up yet. We are still in the "$1 Uber ride" stage, where it all seems like a never-ending free lunch.
Real machine learning research has promise, especially over long time scales.
Imminent AGI/ASI/God-like AI/end of humanity hawks are part of a growing AI cult. The cult leaders are driven by insatiable greed and the gullible cult followers are blinded by hope.
And I say this as a developer who is quite pleased with the progress of coding assistant tools recently.
Github only for now. Out of curiosity, is yours on gitlab? Something else?
We should be able to find something interesting in most codebases, as long as there's some plausible way to build and test the code and the codebase is big enough. (Below ~250 files the results get iffy.) We've just tested it a lot more thoroughly on app backends, because that's what we know best.
> Out of curiosity, is yours on gitlab? Something else?
Something else, it's a self-hosted Git server similar to GitHub, GitLab, etc. We have multiple repos well clear of 1k files. Almost none of it is JavaScript or TypeScript or anything like that. None of our own code is public.
I think that's just the name they picked. I don't mind it. Taking a glance at what it actually does, it just looks like another command line coding assistant/agent similar to Opencode and friends. You can use it for whatever you want, not just "vibe coding", including high quality, serious, professional development. You just have to know what you're doing.
RTX Pro 6000, ends up taking ~66GB when running the MXFP4 native quant with llama-server/llama.cpp and max context, as an example. Guess you could do it with two 5090s with slightly less context, or different software aimed at memory usage efficiency.
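The ~66GB figure lines up with rough arithmetic, assuming MXFP4 stores 4-bit values plus a shared 8-bit scale per 32-element block (~4.25 bits/parameter); the model size of 120B here is my illustrative assumption, and the gap up to ~66GB would be the KV cache at max context plus runtime overhead:

```python
# Back-of-the-envelope weight footprint for an MXFP4-quantized model.
# MXFP4 blocks: 32 four-bit values share one 8-bit scale,
# so effective storage is 4 + 8/32 = 4.25 bits per parameter.
def mxfp4_weight_gb(params_billions: float) -> float:
    bits_per_param = 4 + 8 / 32  # 4.25
    return params_billions * 1e9 * bits_per_param / 8 / 1e9

print(mxfp4_weight_gb(120))  # ~63.75 GB of weights for a 120B model
```

That leaves roughly 30GB of the RTX Pro 6000's 96GB for context, which is why splitting across two 32GB 5090s forces a smaller context window.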
That's the nice thing about completely unsubstantiated, baseless claims on the Internet, if it ever happens, you can always point at it like you're Nostradamus.
My predictions:
Actual zombie president in 2044.
New COVID in 2061.
Dinosaurs come back in 2123, reveal they've been steadily populating hidden Nazi underground bunkers and have declared peace with the yeti.