
I'm still chugging along on a Dell XPS Developer Edition that came with Ubuntu 20.04 preinstalled. It's not as repairable as a Framework but it's been very reliable.

If I had to get a new laptop for personal use today I'd probably go for an X1 Carbon. Those seem to have very good luck with Linux even without OEM installs.


Arduino was finished the moment it was acquired by Qualcomm.

Someone should do a conspiracy board showing why evil companies doing acquisitions have names that end in com.

Some of my favorite desktop environments: NeXTSTEP, BeOS/Haiku, ChromeOS, Raspberry Pi OS, macOS, CDE.

I like uniformity, simplicity and consistency, stability, few surprises, little guessing. I want to use the computer; I don't need to become an expert in computer interfaces. Like cars: I just want to drive the car. I don't want to have to build or customize my own automobile ergonomics. Much of my time is spent on the command line anyway, but when I have to use the GUI, please make it very simple.

I work on deeply embedded software that doesn't have what you'd commonly think of as a "UI". So, unless there are bugs, or we ship faster, or something like that, users will never have any idea how much of our code is AI-generated.

But it's happening.


The only reason to run local models is privacy, never cost, and not even latency.

Indeed - my main use case is those kinds of "record everything" setups. I'm not even super privacy-conscious per se, but it just feels too weird to send literally everything I'm saying all of the time to the cloud.

Luckily, for now Whisper doesn't require too much compute, but the kind of interesting analysis I'd want would require at least a 1B-parameter model, maybe 100B or 1T.
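
For a sense of scale, the transcription side can be as simple as the sketch below, using the openai-whisper Python package (the model size and file name are placeholders, not my actual setup):

    # Minimal local transcription sketch using the openai-whisper package.
    # Assumes `pip install openai-whisper` and ffmpeg on the PATH; the file name is a placeholder.
    import whisper

    model = whisper.load_model("base")           # small model, fine on modest hardware
    result = model.transcribe("recording.wav")   # returns a dict with "text" plus segment info
    print(result["text"])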


> it just feels too weird to send literally everything I'm saying all of the time to the cloud

... or your clients' codebases ...


Autonomy generally, not just privacy. You never know what the future will bring, AI will be enshittified and so will hubs like huggingface. It’s useful to have an off grid solution that isn’t subject to VCs wanting to see their capital returned.

> You never know what the future will bring, AI will be enshittified and so will hubs like huggingface.

If anyone wants to bet that future cloud hosted AI models will get worse than they are now, I will take the opposite side of that bet.

> It’s useful to have an off grid solution that isn’t subject to VCs wanting to see their capital returned.

You can pay cloud providers for access to the same models that you can run locally, though. You don’t need a local setup even for this unlikely future scenario where all of the mainstream LLM providers simultaneously decide to make their LLMs poor quality and none of them sees this as a market opportunity to provide good service.

But even if we ignore all of that and assume that all of the cloud inference everywhere becomes bad at the same time at some point in the future, you would still be better off buying your own inference hardware at that point in time. Spending the money to buy two M3 Ultras right now to prepare for an unlikely future event is illogical.

The only reason to run local LLMs is if you have privacy requirements or you want to do it as a hobby.


> If anyone wants to bet that future cloud hosted AI models will get worse than they are now, I will take the opposite side of that bet.

OK. How do we set up this wager?

I'm not knowledgeable about online gambling or prediction markets, but further enshittification seems like the world's safest bet.


> but further enshittification seems like the world's safest bet.

Are you really, actually willing to bet that today's hosted LLM performance per dollar is the peak? That it's all going to be worse at some arbitrary date (a necessary condition for establishing a bet) in the future?

It would need to be evaluated by a standard benchmark, agreed upon ahead of time. No loopholes or vague verbiage allowing something to be claimed as "enshittification" or other fuzzy terms.


Sorry, didn't realize what you were actually referring to. Certainly I'd assume the models will keep getting better from the standpoint of reasoning performance. But much of that improved performance will be used to fool us into buying whatever the sponsor is selling.

That part will get worse, given that it hasn't really even begun ramping up yet. We are still in the "$1 Uber ride" stage, where it all seems like a never-ending free lunch.


Yes, I agree. And you can add security to that too.

Real machine learning research has promise, especially over long time scales.

Imminent AGI/ASI/God-like AI/end of humanity hawks are part of a growing AI cult. The cult leaders are driven by insatiable greed and the gullible cult followers are blinded by hope.

And I say this as a developer who is quite pleased with the progress of coding assistant tools recently.


How does this work if your repos aren't on GitHub? And what if your code has nothing to do with backend web apps?


GitHub only for now. Out of curiosity, is yours on GitLab? Something else?

We should be able to find something interesting in most codebases, as long as there's some plausible way to build and test the code and the codebase is big enough. (Below ~250 files the results get iffy.) We've just tested it a lot more thoroughly on app backends, because that's what we know best.


> Out of curiosity, is yours on gitlab? Something else?

Something else, it's a self-hosted Git server similar to GitHub, GitLab, etc. We have multiple repos well clear of 1k files. Almost none of it is JavaScript or TypeScript or anything like that. None of our own code is public.


I think that's just the name they picked. I don't mind it. Taking a glance at what it actually does, it just looks like another command-line coding assistant/agent similar to Opencode and friends. You can use it for whatever you want, not just "vibe coding", including high-quality, serious, professional development. You just have to know what you're doing.


> run locally for agentic coding. Nowadays I mostly use GPT-OSS-120b for this

What kind of hardware do you have to be able to run a performant GPT-OSS-120b locally?


An RTX Pro 6000; it ends up taking ~66GB when running the native MXFP4 quant with llama-server/llama.cpp at max context, as an example. I guess you could do it with two 5090s with slightly less context, or with different software aimed at memory efficiency.
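
Since llama-server exposes an OpenAI-compatible HTTP API, talking to the local model can look roughly like the sketch below (the port is llama-server's default; the prompt and settings are just illustrative assumptions, not my exact setup):

    # Rough sketch of querying a local llama-server instance over its
    # OpenAI-compatible API; port, prompt, and token limit are assumptions.
    import requests

    resp = requests.post(
        "http://localhost:8080/v1/chat/completions",   # llama-server's default port
        json={
            "messages": [{"role": "user", "content": "Explain this stack trace: ..."}],
            "max_tokens": 256,
        },
        timeout=120,
    )
    print(resp.json()["choices"][0]["message"]["content"])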


That has 96GB GDDR7 ECC, to save people looking it up.


The model is 64GB (int4 native), add 20GB or so for context.

There are many platforms out there that can run it decently.

AMD Strix Halo, Mac platforms. Two (or three without extra RAM) of the new AMD AI Pro R9700 (32GB of RAM, $1200), setups with multiple consumer GPUs, etc.
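
As a rough sanity check on those numbers (ballpark arithmetic only, not measurements):

    # Back-of-the-envelope memory estimate for a ~120B-parameter model at ~4 bits per weight.
    # All numbers are ballpark assumptions.
    params = 120e9
    bytes_per_weight = 0.5                 # int4/MXFP4 is roughly 4 bits per weight
    weights_gb = params * bytes_per_weight / 1e9
    context_gb = 20                        # KV cache + activations at a large context
    print(f"weights ~{weights_gb:.0f} GB, total ~{weights_gb + context_gb:.0f} GB")
    # weights ~60 GB, total ~80 GB: within reach of a 96GB card or a multi-GPU split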


MBP with 128GB.


How do you know?


If it happens today, OP is right, and if it happens in a century they are too.


What about if it's in a millennium?


That's the nice thing about completely unsubstantiated, baseless claims on the Internet: if it ever happens, you can always point at it like you're Nostradamus.

My predictions:

Actual zombie president in 2044.

New COVID in 2061.

Dinosaurs come back in 2123, reveal they've been steadily populating hidden Nazi underground bunkers and have declared peace with the yeti.


I've connected the dots.

