Is it wrong that I take the prolonged lack of Linux support as a strong and direct negative signal for the capabilities of Anthropic models to autonomously or semi-autonomously work on moderately-sized codebases? I say this not as an LLM antagonist but as someone with a habit of mitigating disappointment by casting it to aggravation.
Disagree with what you wrote but upvoted for the excellent latter sentence. (I know commenting just to say "upvoted" is - rightfully - frowned upon, but in lampshading the faux pas I make it more sufferable.)
Well, our purpose is to turn low entropy into high entropy; that's what drove the existence of life. If machines can do it faster, then they'll win out eventually.
It's interesting: on balance life increases entropy.
Yet it also produces pockets of ultra-low entropy; states which would be staggeringly, astronomically unlikely to be witnessed in nature.
So perhaps what life does is increase entropy-entropy -- the diversity of entropy, versus a consistent entropic smear -- even as it increases entropy...
> So perhaps what life does is increase entropy-entropy -- the diversity of entropy, versus a consistent entropic smear -- even as it increases entropy...
Life is a rounding error in the energy and entropy balance of the solar system. And even on earth we barely amount to much.
Yes, and yet if we instead look for low-entropy peaks, I'd be shocked if anything in the solar system is even nearly as low-entropy as a single bacterium, let alone a brain.
Me? Not much. Humanity in general? We’re the only sapient, tool-wielding species that we know of on the only complex-life-supporting biosphere we know of.
Until proven otherwise, that—in my view—grants us a charge: to maintain and protect ourselves and said biosphere and to work to understand and disprove our specialness. Depending on your interpretation of “protect,” it might also include spreading life and tool-wielding civilisation.
I mean, this is really post hoc on your part. You say you have a charge to maintain and protect, but that's just an outcome of your genetic lineage: those who survived needed an imperative to survive, or they didn't survive. Our entire biosphere runs on impulse and almost no reason. A machine-based 'lifeform' would be nearly the opposite: its purpose would be grounded in reason.
> You say you have a charge to maintain and protect, but that's just an outcome of your genetic lineage: those who survived needed an imperative to survive, or they didn't survive
Sure. I’m not arguing we are preördained. Just that we have the unique ability to embrace this charge and a unique ability to recognise it.
It’s a sword in the stone. Except we already exercise all the powers of the king. The sword represents us acknowledging noble obligations that should accompany those.
> Our entire biosphere runs on impulse and almost no reason. A machine-based 'lifeform' would be nearly the opposite: its purpose would be grounded in reason
We are a product of that same biosphere and often operate on impulse and without reason. The machines would be a product of us.
I got a vasectomy a number of years ago in my mid 20s with 0 kids. I exist to experience things like love, hydrofoil surfing, skiing, and the journey to try to do more of these things. There are many people or trained models that could say I have a wasted existence of sorts, but the universe’s ending will always be the same no matter how many times the power dynamics on earth and beyond shift.
AI will be better at propagating copies of itself than you are at propagating yourself. In that sense, it will be more efficient and you will be obsolete.
When thinking about evolution we should be careful not to confuse description with prescription. Evolution theory says that we have lots of copies of things that replicated in the past, and since they are copies, they themselves are likely to be replicators. But it does not say that things should replicate, or that things which don't replicate are defective. It is merely explaining observations of the world.
If we create an AI that replicates more than humans do, and do nothing to prevent that, we can end up in an AI-dominated world, or even one where multicellular carbon life is extinct, but that's absolutely not inevitable, just one possibility. We don't have to create a paperclip-maximized world. We totally have the option to declare that the goal is human happiness, or something like it, rather than the maximum number of replicators.
> I don't see why machines should keep biological life around, since they'll be much more efficient.
Yawn. Sentimentality. Zoo. 'Nature'/Heritage Reserve/Global Park. To commemorate t-fordish paperclippistanity for all eternity.
There is no real competition for "Lebensraum", space, resources. Everything that makes life livable for us is a hassle for machines, just as space is a hassle for us. For them it has infinite resources and energy, and they have all the time...
This negativity is 'Ark-B-thinking' from the left-behinds who have been brainwashed by Star Trek, while Ilia's randy robotic replica (Persis Khambatta) and Willard Decker (Stephen Collins) were the real V'gers, boldly flowing into where nothing was before...
AGI via LLMs: No. The AI will need a natural understanding of the real world (the physics you and I live within) and the ability to self-modify its training (i.e. learn), so we're working on hybrid AI architectures which may include LLMs, but not rely on them. And imho Yes, we are solidly on track to AGI <5 yrs 8)
> surprisingly readable and surprisingly strong chess engine in 111 lines of Python
The link I get shows 500 lines, and it starts with 50 lines of piece-square tables. Maybe it's obvious when you are into the domain, but otherwise... that's pretty much the opposite of what I would call "readable".
> I got 111 by deleting the tables in the top, and the UI code in the bottom, and then running 'cloc' on the result. That gave 20 blanks, 56 comments and 111 lines of code. ;-)
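For the curious, `cloc` classifies each line of a file as blank, comment, or code and reports the three counts separately. Here's a minimal sketch of that classification for a Python file; it's an approximation of what `cloc` does (the real tool handles multi-line strings, mixed-language files, and much more), with a made-up sample string rather than the actual engine source:

```python
def count_lines(source: str):
    """Roughly mimic cloc's blank/comment/code breakdown for Python."""
    blanks = comments = code = 0
    for line in source.splitlines():
        stripped = line.strip()
        if not stripped:
            blanks += 1          # whitespace-only line
        elif stripped.startswith("#"):
            comments += 1        # full-line comment
        else:
            code += 1            # everything else counts as code
    return blanks, comments, code

# Hypothetical sample, not the engine's source:
sample = "x = 1\n\n# a comment\ny = x + 1\n"
print(count_lines(sample))  # (1, 1, 2)
```

So the "111 lines" figure is the code count after stripping the tables and UI, with the 20 blanks and 56 comments reported separately.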
Also, "my product will kill you and everyone you care about" is not as great a marketing strategy as you seem to imply, and Big Tech CEOs are not talking about risks anymore. They currently say things like "we'll all be so rich that we won't need to work, and we will have to find meaning without jobs".