"People are falling in love with LLMs" and "P(Doom) is fearmongering" so close to each other is some cognitive dissonance.
The 'are LLMs intelligent?' discussion should be retired at this point, too. It's academic; the answer doesn't matter for businesses and consumers, it matters for philosophers (which everyone is, at least a little bit). The answer to 'Are LLMs useful for a great variety of tasks?' is a resounding 'yes'.
If the DIY work wasn't the cause of the fire, it shouldn't matter, but I half-expect someone to inform me that US insurance companies can (legally) deny coverage for reasons unrelated to the accident.
Not so fast. Have you very carefully read the full small print of the insurance policy? Did you review it with a lawyer? It's incredible how differently "normal" people and lawyers can understand a contract.
I'm pretty sure there is a clause stating that you have to disclose whether you have (and/or are allowed to have) fire loads, or anything that could cause a fire or make it worse, or something along those lines in legalese. These formulations are always there because of people hoarding fuel in the basement, for example, or O2 tanks, or whatever. They are formulated in the most generic way possible to catch anything you do "wrong". Failing to follow such clauses, even when they aren't explicitly spelled out, means dropping your obligations under the contract. And then there will of course be a clause saying that your failure to follow the contract exempts the company from paying.
Note also that there are clauses that are very softly specified, like "use rooms for their intended purpose", which may be a problem if you store, say, paint (which may be flammable) in the garage, in which case a fire in the garage will not be (at least fully) covered.
Gemini 3 in particular is very good. I haven't made a serious attempt with GPT 5.2 yet, but I expect it to also be good (previous versions were surprising at times, e.g. using a recursive CTE instead of window functions). Sonnet 4.5 sucks. Haven't tried Opus for SQL at all.
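To illustrate why a recursive CTE is a surprising choice there: both can compute a running total, but the window-function version is the idiomatic one. A minimal sketch using Python's stdlib sqlite3 (requires SQLite ≥ 3.25 for window functions; table and column names are made up for the example):

```python
import sqlite3

# Toy table: daily sales amounts.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (day INTEGER, amount INTEGER);
    INSERT INTO sales VALUES (1, 10), (2, 20), (3, 30);
""")

# Idiomatic: a window function over the ordered rows.
window = conn.execute(
    "SELECT day, SUM(amount) OVER (ORDER BY day) FROM sales"
).fetchall()

# Equivalent but convoluted: a recursive CTE walking day by day.
cte = conn.execute("""
    WITH RECURSIVE rt(day, total) AS (
        SELECT day, amount FROM sales WHERE day = 1
        UNION ALL
        SELECT s.day, rt.total + s.amount
        FROM sales s JOIN rt ON s.day = rt.day + 1
    )
    SELECT day, total FROM rt
""").fetchall()

print(window)  # [(1, 10), (2, 30), (3, 60)]
print(cte)     # same result, twice the SQL
```

Both queries return the same rows; the recursive version also silently depends on the days being consecutive, which is exactly the kind of fragility you don't want from generated SQL.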
I haven't been listening to any promises, I'm simply trying out the models as they get released. I agree with the article wholeheartedly - you can't pretend these tools are not worth learning anymore. It's irresponsible if you're a professional.
The next breakthrough might happen in 2030, or it might happen next Tuesday; it might have already happened, and the lab that did it is just too scared to release it. It doesn't matter: until it happens, you should work with what you've got.
We need a hardware attestation vendor who isn’t also selling ads on the same device. Something like, I dunno, an identity module which you could maybe insert into the phone?
We never had one on desktop; no real issues. Hardware attestation is primarily in the interest of the vendor, not the user. The user relies on chains of trust. This is how the world works.
This is because of legacy. And even now lots of people assemble and build their own PCs.
My worry is that one fine day Microsoft, Samsung, Apple, and Google (plus the rest of the SV media companies, like Netflix etc.) will join hands in the name of security and force a ChromeOS- or macOS-style "we decide everything for you" model.
But that's exactly why I advocate that the hardware attestation module be separate from the computing device, so that I am in control of what and when I attest, not the vendor.
Can you elaborate? Say I buy the parts myself and install a fully FOSS OS on my machine. Let's say I want to access my bank, and they demand attestation. You propose I'd buy an off-the-shelf, universal attestation module of my choosing (free market). But how would that work from an implementation standpoint? How would the module put, e.g., my bank at ease?
Those actually exist. YubiKeys, Nitrokeys (completely FOSS firmware), or bank-approved code generators (for Germany these exist: https://www.reiner-sct.com/tan-generatoren/) are basically that. They provide an independent assessment, so regardless of the OS or the browser, both parties can make secure transactions.
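The core idea behind such tokens is a challenge-response over a secret that never touches the host machine. A minimal sketch of that flow in Python, with stdlib hmac; the key provisioning, code length, and transaction format are all illustrative assumptions, not how any particular vendor does it:

```python
import hmac
import hashlib
import os

# Shared secret, provisioned into the token at issuance; it exists
# only inside the token's hardware and in the bank's records, never
# on the (untrusted) host computer. (Illustrative, not a real scheme.)
SECRET = os.urandom(32)

def token_sign(challenge: bytes, tx_details: bytes) -> str:
    """What the token computes, independent of the host OS/browser."""
    mac = hmac.new(SECRET, challenge + tx_details, hashlib.sha256)
    return mac.hexdigest()[:8]  # short code the user types back

def bank_verify(challenge: bytes, tx_details: bytes, code: str) -> bool:
    """The bank recomputes the MAC and compares in constant time."""
    expected = hmac.new(SECRET, challenge + tx_details,
                        hashlib.sha256).hexdigest()[:8]
    return hmac.compare_digest(expected, code)

# Flow: bank issues a fresh challenge, token binds it to the
# displayed transaction, bank checks the returned code.
challenge = os.urandom(16)
tx = b"transfer 100 EUR to DE89..."
code = token_sign(challenge, tx)
print(bank_verify(challenge, tx, code))           # True
print(bank_verify(challenge, b"tampered tx", code))  # False
```

Because the code is bound to both the fresh challenge and the transaction details shown on the token's own display, malware on the PC can relay messages but can't forge or redirect a transfer, which is why the OS doesn't need to be trusted.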
Ah, so the computer doesn't need to be trusted at all; it's just an untrusted medium, like a channel carrying encrypted data. All the trust would sit with the vendor and inside the external hardware device.
So basically anything we don't know how to write an algorithm for? I see where you're coming from, but at the same time that's basically the classic AI-effect meme, and it smells of permanently moving goalposts.