Hacker News | atmosx's comments

Oh, good point. I mixed it up: UTM uses qemu under the hood, but as someone mentioned, OpenBSD snapshots now boot with qemu seamlessly. It's still virtualised, though.
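
For anyone curious, here's a rough sketch of booting an OpenBSD install image under plain qemu - the ISO/disk file names and sizes below are placeholders, not the exact setup UTM generates:

    import os
    import subprocess

    # Placeholder names - grab the current snapshot's installXX.iso yourself.
    disk = "openbsd.qcow2"
    iso = "install77.iso"

    # Create a blank disk image on the first run (qemu-img ships with qemu).
    if not os.path.exists(disk):
        subprocess.run(["qemu-img", "create", "-f", "qcow2", disk, "20G"],
                       check=True)

    # Boot the installer; -nographic keeps everything on the serial console,
    # so type "set tty com0" at the OpenBSD boot> prompt.
    subprocess.run([
        "qemu-system-x86_64",
        "-m", "2G",
        "-smp", "2",
        "-cdrom", iso,
        "-drive", f"file={disk},format=qcow2,if=virtio",
        "-nographic",
    ], check=True)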

Do sales back up these claims that come up so often, or not? Does anyone have any data?

Although the OP is not wrong, maybe their decisions are data-driven and pay off?


“Welcome to the real world, Neo!”

“There is no cloud, it’s just somebody else’s computer”

etc etc…


> Nothing happening in the federal government or the Middle East or Eastern Europe affects me from a local standpoint, and it's easy to stay informed on those events through a variety of sources.

This is something that - for whatever reason - takes a surprising amount of time for people to understand.


> The problem with local journalism is simple: the product it produces is not worth what it costs to produce it.

I find this approach superficial and dangerous.

Maybe local journalism has been superseded, or no longer looks important to the locals. But IMO the lack of local journalism will end up costing any community a lot more in the long run, for obvious reasons.


I think the nuance is that it isn't that it doesn't produce what it's worth - it's that its value to society is more than what people are willing to pay for it (and also more than what it costs to produce).

Of course there will be exceptions to the rule, but these dynamics seem pretty strong.


Absolutely.

And as someone who’s seen some condo boards, I can tell you that when presented with “we all need to pay a small amount of money now to avoid a big bill later” the response will generally be “no way!”

It’s a tragedy of the commons issue, mixed with people who don’t agree on the value of it in the first place.


Sure, but the community has to somehow decide to pay the people doing that good thing. There are a lot of projects that would likely be a net benefit that aren't being paid for.

Externalities, coordination failure...

It's simultaneously worth vastly more to the community as a whole than the cost of producing it, and yet, to any single individual, the marginal benefit of having it is not enough to justify paying for it.

The naïve solution might be to collectively subsidize it, but then that creates its own moral hazards and perverse incentives.

...It's a bit scary how much of democracy relies on institutions that were only able to form because we lucked into social conditions making them sustainable.


Just make sure you have a local and remote backup server.

From time to time, test the restore process.
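
A minimal sketch of what such a periodic restore test can look like, assuming restic as the backup tool (swap in borg or whatever you actually use; the sample path is arbitrary):

    import hashlib
    import subprocess
    import tempfile
    from pathlib import Path

    def sha256(path: Path) -> str:
        """Hash a file so the restored copy can be compared to the live one."""
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def test_restore(live_file: Path) -> bool:
        """Restore one known file into a scratch dir and verify its checksum.
        Assumes RESTIC_REPOSITORY/RESTIC_PASSWORD are set in the environment."""
        with tempfile.TemporaryDirectory() as scratch:
            subprocess.run(
                ["restic", "restore", "latest",
                 "--include", str(live_file), "--target", scratch],
                check=True,
            )
            # restic recreates the absolute path under the target directory.
            restored = Path(scratch) / live_file.relative_to("/")
            return sha256(restored) == sha256(live_file)

    if __name__ == "__main__":
        ok = test_restore(Path("/etc/hosts"))
        print("restore OK" if ok else "restore MISMATCH")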


I haven't tried it yet, but the evil twin of this practice is to nuke everything periodically, to ensure that your agent isn't relying on any filesystem state that it hasn't specified builds for (see https://grahamc.com/blog/erase-your-darlings/).

They tend to slip out of declarative mode and start making untracked changes to the system from time to time.
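
A toy sketch of that "nuke it periodically" idea (the workspace path and spec file are made up): wipe everything, then replay only the declared build steps - any state the agent relied on but never declared simply won't come back, and the failure is loud.

    import json
    import shutil
    import subprocess
    from pathlib import Path

    WORKSPACE = Path("/srv/agent-workspace")  # hypothetical scratch area
    SPEC = Path("build-spec.json")            # hypothetical declared steps,
                                              # e.g. [["git", "clone", ...], ["make"]]

    def nuke_and_rebuild() -> None:
        # Erase all filesystem state the agent may have quietly accumulated.
        shutil.rmtree(WORKSPACE, ignore_errors=True)
        WORKSPACE.mkdir(parents=True)

        # Replay only what was declared; undeclared state stays gone.
        steps: list[list[str]] = json.loads(SPEC.read_text())
        for step in steps:
            subprocess.run(step, cwd=WORKSPACE, check=True)

    if __name__ == "__main__":
        nuke_and_rebuild()
        print("rebuilt from the declared spec only")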


Claude with root access will ensure there's "motivation" to run the restore process regularly.

You don’t have to run the control plane, and you don’t have to manage DNS & SSL keys for the DNS entries. Additionally, the RBAC is pretty easy.

All of these are manageable through other tools, but it’s a more complicated stack to keep up with.
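
As a concrete (made-up) example of the RBAC point, here's a read-only role via the official kubernetes Python client - the namespace and role name are placeholders:

    from kubernetes import client, config

    # Managed providers hand you a ready-made kubeconfig; no control plane to run.
    config.load_kube_config()
    rbac = client.RbacAuthorizationV1Api()

    # Hypothetical read-only role scoped to a "staging" namespace.
    role = client.V1Role(
        metadata=client.V1ObjectMeta(name="pod-reader", namespace="staging"),
        rules=[client.V1PolicyRule(
            api_groups=[""],
            resources=["pods"],
            verbs=["get", "list", "watch"],
        )],
    )
    rbac.create_namespaced_role(namespace="staging", body=role)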


It’s not against AI. It’s against privacy issues arising through data mining & doublespeak.

I agree. LLMs cannot and should not replace professionals, but there are huge gaps that can be filled by the introduction they provide, and the fact that you can dig deeper into any subject is huge.

This is probably a field where MistralAI could use privacy and GDPR as leverage to build LLMs around.


One of the big issues I have with LLMs is that when you start a prompting session with an easy question, it all goes great. It brings up points you might not have considered and appears very knowledgeable. Fact-checking at this stage will show the LLM is invariably correct.

Then you start "digging deeper" on a specific sub-topic, and this is where the risk of an incorrect response grows. But it is easy to continue with the assumption that the text you are getting is accurate.

This has happened so many times with the computing/programming-related topics I usually prompt about that there is no way I would trust a response from an LLM on health-related issues I am not already very familiar with.

Given that the LLM will give incorrect information (after lulling people into a false sense of it being accurate), who is going to be responsible for the person who makes themselves worse off by self-diagnosing, even with a privacy-focused service?


That's a good point - and I have probably fallen victim to it as well: the "sliding scale" of an LLM's authority.

Like you, I fact-check it (well, I search the internet to see if others validate the claims/points), but I don't do so with every response.


The responsibility always falls to the patient. That’s true with doctors as well: you visit two doctors and they give you different diagnoses; one tells you to go for surgery, the other tells you it’s not worth the hassle. Who decides? The patient does.

LLMs are yet another powerful tool in our belt; you know they hallucinate, so be careful. That said, even asking for specialized info about this or that medical topic can be a great thing for patients. That’s why I believe it’s a good thing to have specialized LLMs that can tailor responses to individual health situations.

The problem is the framework and the implementation’s end goal. IMO, state-owned health data is a goldmine for any social welfare system, and now with AI it can be put to use in novel ways.


In most cases, that’s the opposite of chill.

As do remote work, responsibilities, and autonomy.
