Hacker News | Recursing's comments

What probability would you give for Linux support for Claude Desktop in 2026?

Is it wrong that I take the prolonged lack of Linux support as a strong and direct negative signal for the capabilities of Anthropic models to autonomously or semi-autonomously work on moderately-sized codebases? I say this not as an LLM antagonist but as someone with a habit of mitigating disappointment by casting it to aggravation.

Disagree with what you wrote but upvoted for the excellent latter sentence. (I know commenting just to say "upvoted" is - rightfully - frowned upon, but in lampshading the faux pas I make it more sufferable.)

FYI it works. The GUI is a bit buggy (sometimes you need to resize the window to make it redraw), but try it?

The article is more about OpenAI hiding the evidence, which if true seems more clearly unethical.

imho https://gwern.net/doc/fiction/science-fiction/1995-egan.pdf from 1995 is an even better exploration of the same theme

For more on why I'm skeptical it will end this way, see https://en.wikipedia.org/wiki/Existential_risk_from_artifici...

I don't see why machines should keep biological life around, since they'll be much more efficient.


Efficient for what? What reason has AI to exist?

Well, our purpose is to turn low entropy into high entropy; that's what drove the existence of life. If machines can do it faster, then they'll win out eventually.

It's interesting: on balance life increases entropy.

Yet it also produces pockets of ultra-low entropy; states which would be staggeringly, astronomically unlikely to be witnessed in nature.

So perhaps what life does is increase entropy-entropy -- the diversity of entropy, versus a consistent entropic smear -- even as it increases entropy...


> So perhaps what life does is increase entropy-entropy -- the diversity of entropy, versus a consistent entropic smear -- even as it increases entropy...

Life is a rounding error in the energy and entropy balance of the solar system. And even on earth we barely amount to much.


Nice username -- very fitting.

Yes, and yet if we instead look for low-entropy peaks, I'd be shocked if anything in the solar system is even nearly as low-entropy as a single bacterium, let alone a brain.

Then there's potential...


What reason do you?

> What reason do you?

Me? Not much. Humanity in general? We’re the only sapient, tool-wielding species that we know of on the only complex-life-supporting biosphere we know of.

Until proven otherwise, that—in my view—grants us a charge: to maintain and protect ourselves and said biosphere and to work to understand and disprove our specialness. Depending on your interpretation of “protect,” it might also include spreading life and tool-wielding civilisation.


I mean, this is really post hoc on your part. You say you have a charge to maintain and protect, but this is just an outcome of your genetic lineage: those that survived had to have a prerogative to survive, or they didn't survive. Our entire biosphere runs on impulse and almost no reason. A machine-based 'lifeform' would be nearly the opposite; its purpose would come with reason.

> You say you have a charge to maintain and protect, but this is just an outcome of your genetic lineage: those that survived had to have a prerogative to survive, or they didn't survive

Sure. I’m not arguing we are preördained. Just that we have the unique ability to embrace this charge and a unique ability to recognise it.

It’s a sword in the stone. Except we already exercise all the powers of the king. The sword represents us acknowledging noble obligations that should accompany those.

> Our entire biosphere runs on impulse and almost no reason. A machine-based 'lifeform' would be nearly the opposite; its purpose would come with reason

We are a product of that same biosphere and often operate on impulse and without reason. The machines would be a product of us.


to propagate my genes

I got a vasectomy a number of years ago in my mid 20s with 0 kids. I exist to experience things like love, hydrofoil surfing, skiing, and the journey to try to do more of these things. There are many people or trained models that could say I have a wasted existence of sorts, but the universe’s ending will always be the same no matter how many times the power dynamics on earth and beyond shift.

I think parent was talking about an evolutionary point of view, not a personal one.

AI will be better at propagating copies of itself than you are at propagating yours. In that sense it will be more efficient, and you will be obsolete.

When thinking about evolution we should be careful not to confuse description with prescription. Evolution theory says that we have lots of copies of things that replicated in the past, and since they are copies, they themselves are likely to be replicators. But it does not say that things should replicate, or that things which don't replicate are defective. It is merely explaining observations of the world.

If we create an AI that replicates more than humans do, and do nothing to prevent that, we could end up in an AI-dominated world, or even one where multicellular carbon life is extinct. But that's absolutely not inevitable, just one possibility. We don't have to create a paperclip-maximized world; we still have the option of declaring that the goal is human happiness, or something like it.


So you're saying that we just have to give the AI a command like "Make as many paperclips as possible" and that's all we need, right?

> I don't see why machines should keep biological life around, since they'll be much more efficient.

Yawn. Sentimentality. Zoo. 'Nature'/Heritage Reserve/Global Park. To commemorate t-fordish paperclippistanity for all eternity.

There is no real competition for "Lebensraum", space, resources. Everything that makes life livable for us is a hassle for machines, just as space is a hassle for us. For them space has effectively infinite resources and energy, and they have all the time...

This negativity is 'Ark-B thinking' from the left-behinds who have been brainwashed by Star Trek, while Ilia's randy robotic replica (Persis Khambatta) and Willard Decker (Stephen Collins) were the real V'gers, boldly flowing into where there was nothing before...


Title should have (1989)

“It'll take at least a hundred years for us to get to Singularity”

We’re only 37 years into the story. The woman would only be in her early 60s.

With revised estimates, The Singularity is now less than 20 years away.


I hope you're right but I'm skeptical. ASI in 20 years? Are we even on the right track to AGI, or are LLMs a red herring?

AGI via LLMs: no. The AI will need a natural understanding of the real world (the physics you and I live in) and the ability to self-modify its training (i.e. to learn), so we're working on hybrid AI architectures which may include LLMs but don't rely on them. And imho yes, we are solidly on track to AGI in <5 yrs 8)

For people interested in making their own, I highly recommend reading through https://github.com/thomasahle/sunfish/blob/master/sunfish.py , a surprisingly readable and surprisingly strong chess engine in 111 lines of Python

https://www.chessprogramming.org/ is also really interesting, see e.g. https://www.chessprogramming.org/Sunfish and https://www.chessprogramming.org/Quiescence_Search
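Sunfish's search core is plain negamax with alpha-beta pruning (plus the quiescence search described on that wiki). Here's a toy sketch of just that search skeleton, using Nim instead of chess so it stays self-contained; the game and all names here are mine, not sunfish's:

```python
# Toy negamax with alpha-beta pruning, the same search idea at the heart
# of engines like sunfish, demonstrated on Nim (take 1-3 stones per turn;
# whoever takes the last stone wins). Scores are just +1 (win) / -1 (loss).

def moves(stones):
    """Legal moves: take 1, 2, or 3 stones (at most what's left)."""
    return range(1, min(stones, 3) + 1)

def negamax(stones, alpha=-1, beta=1):
    """Return +1 if the side to move wins with perfect play, else -1."""
    if stones == 0:
        return -1  # opponent took the last stone; side to move has lost
    for take in moves(stones):
        # Negate and swap the window: my best score is the opponent's worst.
        score = -negamax(stones - take, -beta, -alpha)
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # beta cutoff: the same pruning sunfish relies on
    return alpha

# Nim theory: piles that are multiples of 4 are lost for the side to move.
assert negamax(4) == -1
assert negamax(5) == 1
```

In a real engine the ±1 scores become a material/positional evaluation, and the move loop iterates over chess moves with ordering heuristics, but the recursion and the cutoff are the same.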


> surprisingly readable and surprisingly strong chess engine in 111 lines of Python

The link I get shows 500 lines, and it starts with 50 lines of piece-square tables. Maybe it's obvious when you're into the domain, but otherwise... that's pretty much the opposite of what I would call "readable".


It appears that the “111 lines” is a reference to the version as of 2014-02-11: https://github.com/thomasahle/sunfish/blob/e2b7fc29ce2a112be... (386 lines), about which the author says (https://www.reddit.com/r/programming/comments/1xmj1a/comment...):

> I got 111 by deleting the tables in the top, and the UI code in the bottom, and then running 'cloc' on the result. That gave 20 blanks, 56 comments and 111 lines of code. ;-)


Yes, it's "surprisingly readable" only for being so strong and so concise; it's definitely not production code.

The file is 500 lines because of the piece-square tables (as you mentioned), the comments, and the CLI interface logic.

Previous HN thread here: https://news.ycombinator.com/item?id=20068651


I think https://en.wikipedia.org/wiki/Existential_risk_from_artifici... has much better arguments than the LessWrong sources in other comments, and they weren't written by Big Tech CEOs.

Also "my product will kill you and everyone you care about" is not as great a marketing strategy as you seem to imply, and Big Tech CEOs are not talking about risks anymore. They currently say things like "we'll all be so rich that we won't need to work and we will have to find meaning without jobs"



> you have to look at what people actually say and do under the name of EA.

They donate a significant percentage of their income to the global poor, and save thousands of lives every year (see e.g. https://www.astralcodexten.com/p/in-continued-defense-of-eff... )


Yes, but how many do they keep trapped in poverty?

This is like saying "the master is good because he clothed his slaves".


Have you tried to compare Z3 with cvc5? https://cvc5.github.io/docs/cvc5-1.1.2/api/python/pythonic/p...

It offers basically the same API and can be faster in many cases.


I was about to comment the same. Z3 always takes all the credit but cvc5 is just as great!


Claude has no problem with this: https://imgur.com/a/ifSNOVU

Maybe older models?


Try twisting words and phrases around; at some point it might start to fail.

I tried it again yesterday with GPT. GPT-5 manages quite well in thinking mode too, but starts to crack in instant mode. 4o failed completely.

It's not that LLMs are unable to solve things like this at all, but it's really easy to find variations that make them struggle really hard.

