Hacker News | jwr's comments

I was also surprised to read this. I have terrible problems with all Google UIs. I can never find anything and it's an exercise in frustration to get anywhere.

This makes perfect sense and is so much better than getting a flood of half-baked "issues" and then closing them automatically with a bot for "inactivity".

It's a nice intro! One thing I would respectfully suggest is using more precise terminology than "ACID". Jepsen has a great resource for looking at various consistency models (https://jepsen.io/consistency/models) and while every database these days says it is "ACID", very few can guarantee Strict Serializability in a distributed setting, like FoundationDB does.

Good suggestion. I take serializability for granted.

I used to work at a company developing an independent H.264 decoder implementation. We would have killed for this kind of source content, especially if the license allowed showing it at trade shows.

It has become fashionable to s*t on GnuPG. I just wish all the crypto experts doing that would point me to an alternative that is functionally equivalent.

Something that will encrypt using AES-256 with a passphrase, but also using asymmetric crypto. Oh, and I want my secret keys printable if needed. And I want to store them securely on YubiKeys once generated (https://github.com/drduh/YubiKey-Guide). I want to be able to encrypt my backups to multiple recipients. And I want the same keys (stored on Yubikeys, remember?) to be usable for SSH authentication, too.
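
For concreteness, the kind of invocations I mean (a rough sketch; key IDs and file names are obviously placeholders, and the SSH bit assumes you have set up an authentication-capable subkey and gpg-agent's enable-ssh-support):

    # symmetric, AES-256 with a passphrase
    gpg --symmetric --cipher-algo AES256 backup.tar

    # asymmetric, encrypted to multiple recipients at once
    gpg --encrypt -r alice@example.org -r bob@example.org backup.tar

    # printable (paper) backup of the secret key
    gpg --armor --export-secret-keys alice@example.org

    # the SSH public key derived from the same (YubiKey-resident) authentication subkey
    gpg --export-ssh-key alice@example.org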

And by the way, if your fancy tool is written using the latest language du jour with a runtime that changes every couple of years or so, or requires huge piles of dependencies that break if you even as much as sneeze (python, anyone?), it won't do.

BTW, in case someone says "age", I actually followed that advice and set it up just to be there on my systems (managed by ansible). Apart from the fact that it really slowed down my deployments, the thing broke within a year. And I didn't even use it. I just wanted to see how reliable it would be in the most minimal of ways: by having it auto-installed on my systems.

If your fancy tool has less than 5 years of proven maintenance record, it won't do. Encryption is for the long term. I want to be able to read my stuff in 15-30 years.

So before you go all criticizing GnuPG, please understand that there are reasons why people still use it, and are actually OK with the flaws described.


> I just wish all the crypto experts doing that would point me to an alternative that is functionally equivalent.

The entire point of every single valid criticism of PGP is that you cannot make a single functionally equivalent alternative to PGP. You must use individual tools that are good at specific things, because the "Swiss Army knife" approach to cryptographic tool design has yielded empirically poor outcomes.

If you have an example of how age broke for you, I think its maintainers would be very interested in hearing that -- I've been using it directly and indirectly for 5+ years and haven't had any compatibility or runtime issues with it, including when sharing encrypted files across different implementations of age.


Point of order: there are valid and important criticisms of PGP that have nothing to do with its jack-of-all-trades philosophy. There's no modern cryptosystem in the world you would design with PGP's packet scheme.

Yeah, that was just the low-hanging fruit I reached for.

(I think you can make a tie-in argument here, though: PGP's packet design, and the state machine that falls out of it, are a knock-on effect of how many things PGP tries to do. PGP would maybe not have such a ridiculously complicated packet design if it didn't try to do so many things.)


> Apart from the fact that it really slowed down my deployments, the thing broke within a year. And I didn't even use it. I just wanted to see how reliable it will be in the most minimal of ways: by having it auto-installed on my systems.

I'm very curious about this. Tell me more.


I didn't even catch this the first read. `age` is a command line program written in Go. It's not a system service. Simply "having it installed" on your system can't do anything.

If it fails to build when the system is updated?

Poster says:

> slowed down my deployments

I take that to mean the _deployment_ step, not the deployed system.


There is a downloadable binary, I doubt many people recommending age are recommending every server using it also download a Go compiler and build it themselves.

What I meant was that the ansible recipe for building and installing age broke within a year. I didn't investigate why, I just switched it off, but it was a data point.

Yes, I know this can surely be explained and isn't a "fair" comparison. But then again my time is limited and I need predictable, reliable tools.


> Apart from the fact that it really slowed down my deployments

Is this really a comparable complaint worth mentioning, and if it is, are you sure you actually need cryptography? It slowed things down a bit, so you don't really want to move on from GnuPG, which is demonstrably too complex not to have bugs?


Asking for an equivalent to GPG is like asking for an equivalent of a Swiss knife with unshielded chainsaws and laser cutters.

Stop asking for it, for your own good, please. If you don't understand the entire spec you can't use it safely.

You want special purpose tools. Signal for communication, Age for safer file encryption, etc.

What exact problems did you have with age? You're not explaining how it broke anything. Are you compiling yourself? Age has yubikey support and can do all you described.

> if your fancy tool has less than 5 years of proven maintenance record, it won't do. Encryption is for the long term. I want to be able to read my stuff in 15-30 years.

This applies to algorithms; it does not apply to cryptographic software in the same way. The state of the art changes fast, and while algorithms tend to stand for a long time these days, there are significant changes in protocol designs and attack methods.

Downgrade protection, malleability protection, sidechannel protection, disambiguation, context binding, etc...

You want software to be implemented by experts using known best practices with good algorithms and audited by other experts.


If you haven't checked out sequoia (sq), you should! I think it ticks your boxes.

https://book.sequoia-pgp.org/about_sequoia.html


There's something to be said perhaps for preferring tools that do one of those things, rather than all of those things, and doing them well.

Not to say you can't then make an umbrella interface for interacting with them all as a suite, but perhaps the issue has become that gpg has not appropriately followed the Unix philosophy to begin with.

Not that I've got the solution for you. Just calling out the nature of your demands somewhat being at odds with a key design principle that made Unix and Unix-likes great to begin with.


There isn't an alternative that is functionally equivalent because what PGP does is dumb. It's a Swiss Army Knife. Nobody who wants to design an excellent saw sets out to design the Swiss Army Knife saw[†]. Nobody who needs shears professionally buys a Swiss Army Knife for the scissors.

The cryptographic requirements of different problems --- backup, package signing, god-help-us secure messaging --- are in tension with each other. No one design adequately covers all the use cases. Trying to cram them all into one tool is a sign that something other than security is the goal. If that's the case, you're live action roleplaying, not protecting people.

I'd be interested in whether you could find a cryptographer who disagrees with that. I've asked around!

[†] I am aware that SAK nerds love the saw.


So what toolbag or workshop of excellent specialized tools would provide the same capability as GnuPG?

Ask me a question about a specific realistic problem (ie, not "how do I replicate this behavior of PGP", but rather "how do I solve this real-world problem") and I'll give an answer (or someone else will).

I think I described (though perhaps too briefly or not clearly enough) the very specific realistic problems?

I'm somewhat amused that every time this kind of discussion comes up, the answer is "you are holding it wrong". I have a feeling the world of knowledgeable crypto folks is somewhat detached from user reality.

If a single tool isn't possible, give me three tools. But if those three tools each require separate sets of keys with their own key management systems, I'm not sure if the user's problem is being addressed.


I only counted one problem:

> I want to be able to encrypt my backups to multiple recipients.

Presumably this means you want to encrypt the backup once and have multiple decryption keys or something?

The rest of your original comment is constraints around how you want it to work.


Which three problems? This isn't a trick question.

I believe all the criticism of GnuPG is due to the fact that most people grew up with Microsoft or Apple, so they are used to hand-holding.

If you read the various how-tos out there, it is not that hard to use; people just do not want to read anything more than 2 lines. That is the main issue.

My only complaint is Thunderbird now uses its own homegrown encryption, thus locking you into their email client. Seems almost all email clients have their own way of encryption, confusing the matter even more. I now use mutt because it can be easily linked to GnuPG and it does not lock me into a specific client.


> If you read the various how-tos out there it is not that hard to use, just people do not want to read anything more than 2 lines. That is the main issue.

The video linked above contains multiple examples of people using GnuPG's CLI in ways that it was seemingly intended to be used. Blaming users for holding it wrong seems facile.


Seconded — "AI" is a great teaching resource. All bigger models are great at explaining stuff and being good tutors, I'd say easily up to the second year of graduate studies. I use them regularly when working with my kid and I'm trying to teach them to use the technology, because it is truly like a bicycle for the mind.


Explaining the wrong stuff...


to people that are clueless, perhaps…


Polish person here. Don't try to learn Polish. It's insanely difficult, the "rules" make no sense whatsoever, and almost anybody that you'll want to talk to will be able to communicate with you in English.

As for Russian, I also don't see any point in learning it. I was forcefully taught Russian in primary school back when Poland was under Russian yoke. The general idea here is that we'd like not to be in that situation ever again. Learning the language of a nation where a significant percentage of population supports war and killing is not something I'd consider.


Polish is great because there is a lot of content to learn from. And it is a gateway to other West Slavic languages in the region. I basically forced myself to learn it because manga was all in Polish at the time. Their movie industry is great as well.


Being able to read Stanisław Lem in the original I'd consider possibly the biggest perk.


As someone studying Polish, and making excellent progress, I mostly agree with your take. If you want to explore other languages, something like Spanish will get you much more mileage. Polish is difficult and the community of speakers isn't exactly warm to foreigners or people acquiring the language. On the other hand, if you truly enjoy languages and are passionate about them, I have found Polish to be really interesting and beautiful in its own way. Definitely not recommended, but still enjoyable to read/write/speak.


> Learning the language of a nation

A nation doesn't own the language.


> Learning the language of a nation where a significant percentage of population supports war and killing is not something I'd consider.

So, when do you plan to unlearn English?


Do you believe that a significant proportion of native English speakers support the idea of imperialistic invasion and occupation, and the rape and torture of women and children?


Yes. This has been incredibly consistent throughout the existence of the US and UK in particular.


That seems like a pretty wild thing to say. What is your source for this?


Which war? The Iraq War started with around 62% support. When the US started its involvement in the Korean War (one of the biggest mass atrocities we'd carried out since, well, about 5 years earlier when we atom bombed Japan...), around 78% supported it. Around 71% of Americans supported a large scale troop invasion of Afghanistan when it started.

Honestly, even for the wars with bad public perception, like Vietnam, it was mostly because Americans were tired of our guys being drafted just to be turned into dogfood on the other side of the world, not because we were occupying and brutalizing them.


Let's just pick one, so as to not get distracted.

> The Iraq War started with around 62% support.

I think the essence of my question is what did "support" look like here?

I can empathise with the position that the invasion of Iraq was warranted (which is not to say that I agree with it), in the context of the September 11 attacks. What I haven't seen is any popular support for the slaughter of civilians or the annexation of territory — there is no grand narrative that the USA is actually liberating its historical lands in Iraq. I think the support was conditional, and based on claims that later collapsed. The end goal was withdrawal after regime change.

What I haven't seen is any analogue to egregious instances like this[0], of which there are many in russia's war against Ukraine.

[0]: https://iwpr.net/global-voices/go-ahead-and-rape-ukrainian-w...


You're ignoring the mass atrocities committed by both sides in the Donbas throughout much of the 2010s that provided easy propaganda to achieve the same outcomes as the Iraq War propaganda.

I have another example of a "war" carried out recently with overwhelming support in its nation and in the US (initially) due to rapid propaganda around an attack that was likely intentionally intensified in effect by things like moving civilian events next to a military target the day before. But I won't post that one lol


> You're ignoring the mass atrocities committed by both sides in the Donbas throughout much of the 2010s

This is grossly misleading. It implies a scale and symmetry that reputable monitors do not support.

The question now is what caused you to write this. Was it ignorance? Or malicious dishonesty?


> it's different when we do it

You're not helping here.


Who are you quoting?

Is this what HN has become? Just blatant strawmanning?


Ah, the usual whataboutism that derails rational discussion. While the world isn't black and white, there are rare situations where things actually are totally black and white, and we are witnessing one of them right now.


Why is it that every time you see a Polish person, they have an inferiority complex and shit on their own country?


I… don't think that's what I did?


> Learning the language of a nation where a significant percentage of population supports war and killing is not something I'd consider.

Are you a european/white supremacist who doesn't consider the victims of the anglosphere to be human, or are you historically illiterate, even of extremely recent history?

I don't see a third option here since you learned english also, would appreciate an explanation for this special pleading rather than furious downvoting when identifying basic empirical discrepancies in the face of what looks to be materially false claims.


trolling is really an art ^^^^

the references were about russian federation waging an imperialistic type of a war to conquer land when they have the most land already


You're really deep into painting everyone with the same brush, aren't you?

Define russian federation first. Am I it? Is it land? Is it government? Is it those zombie mercenaries who execute criminal orders? Is it those who got jailed after protests against war? Is it those who got conscripted? Those who fled the country to avoid that? Those who struggle to make ends meet? Those cruising aboard 150 meter yachts?

Who is this elusive mrs. russian federation?


It's pretty simple, actually. Do you hold a Russian passport? You're Russian.

If you "don't support" the invasion and killing, either start changing the system, or get rid of the passport. Yes, it's inconvenient. So are the missiles and bombs falling on the heads of people in Ukraine for them.


> start changing the system, or get rid of the passport

Right after you, my friend, as soon as you singlehandedly stop the US special military operation in Venezuela and extrajudicial killing of people off its coasts, or jail the commanders and mercenaries of EU forces in the Syrian, Afghan and Libyan wars, depending on your passport. Or get rid of it.

And before you deploy your strawman about "terrorists" - that's exactly the same term that has been used by the kremlin to excuse the invasion into Ukraine.

Your illusion of possibility to change the system shall pass soon, rest assured.


https://en.wikipedia.org/wiki/Category:Invasions_by_Great_Br...

https://en.wikipedia.org/wiki/List_of_wars_involving_the_Uni...

This kind of historical blindness and hysterical hypocrisy has never ended well.

Is HN becoming a place where we should expect people to lie to us and promote trivially disprovably rationales in order to foment cultural and racial hatreds based on current political conveniences?

"Never believe they are completely unaware of the absurdity of their replies. They know that their remarks are frivolous, open to challenge. But they are amusing themselves, for it is their adversary who is obliged to use words responsibly, since he believes in words. By giving ridiculous reasons, they discredit the seriousness of their interlocutors. They delight in acting in bad faith, since they seek not to persuade by sound argument but to intimidate and disconcert."


When the russian military fired missiles and drones at my house, I should just accept it because some of my distant ancestors persecuted brown people. Is that your take?

Also, russia’s war against Ukraine enjoys popular support in russia today. Is your argument that the majority of UK and/or US citizens are eager for their respective countries to engage in war against former colonies today?

Fucking wild.


I think that if you clicked on the links and reviewed the original claim, you'd see that you removed every single word and concept and overwhelmingly mutually agreed upon fact and then replaced it with nonsense.

Russian and English are both languages of empires that have engaged in countless acts of violence and aggression. They are not equivalent, but to deny this or heavily qualify it (like dismissing acts of war and violence that happened literally yesterday as "distant") in either direction is inherently hypocritical and dehumanizing.

Honestly, I am starting to suspect you are a Kremlin agent designed to make europeans opposed to their war look so crazy that global opinion shifts against the Ukrainians by tying them to denial of and advocacy for the worst acts of europeans.


…Wow.

> Honestly, I am starting to suspect you are a Kremlin agent

Ok, I'll clarify my position, for the avoidance of doubt.

The terrorist state of russia's unprovoked invasion of Ukraine and ongoing genocide is the darkest chapter in European history since the Holocaust. The putin regime has no regard for human life, and russian soldiers brag about raping women, and murdering children, sometimes by shooting them in the head at point-blank range. Many of these rapes and murders are even encouraged by the wives of russian soldiers — thousands of kilometres away from the front lines. We have it all on tape.

While I am not a soldier, I have two medals from the Ukrainian military for volunteering, and I will continue to help Ukrainian soldiers protect civilians in Ukraine, and to put russian invaders in the ground where they belong.

Does that clear things up for you?


Unfortunately that just leads to more questions, since you did not answer the previous ones at all, and personally volunteering is what most double agents and saboteurs do in order to be in a position to cause more harm by first gaining trust.

https://en.wikipedia.org/wiki/Casualties_of_the_Iraq_War https://en.wikipedia.org/wiki/Mahmudiyah_rape_and_killings

https://en.wikipedia.org/wiki/Gaza_genocide https://www.bbc.com/news/articles/cy0kpd97qqko

Numerically, the numbers of civilians killed are far greater and we have substantive evidence of rape as military policy along with the murder of children.

In order to clear things up, you need to explain if you believe that either:

A) Are those lives less valuable by some measure? I.e., did they deserve it, is it all a hoax and no one died, or is there something about them that makes those lives inherently worth far less than yours?

B) You have reason to believe the Ukrainian government is lying about the casualty figures and that over 600,000 Ukrainian soldiers and over 200,000-500,000 Ukrainian civilians including ~50,000 Ukrainian children have already been killed.

Is it A or is it B?

If you can tell me if you agree with statements like this made by Ukrainian officials about Indians and Chinese being inferior races of lesser intelligence, I think that would clear things up also: https://www.livemint.com/news/world/ukrainian-official-says-...


> Those lives less valuable by some measurement you need to explain

I do not believe the lives of different races/ethnicities of humans are of different intrinsic value.

What an incredibly fucked up, sick question.

> Numerically, the numbers of civilians killed are far greater and we have substantive evidence of rape as military policy along with the murder of children.

Comparing death rates numerically like this is also incredibly fucked up, and you should be ashamed of yourself. I am disgusted by this.

> You have reason to believe the Ukrainian government is lying about the casualty figures

Where are you getting your figures? I have strong reason to believe that it's near enough impossible to determine accurate figures since so many civilians were slaughtered by russian soldiers and then buried in mass graves on territory that russian soldiers are still occupying. That, and the Ukrainian government explicitly does not divulge how many military casualties they've taken.

> If you can tell me if you agree with statements like this made by Ukrainian officials about Indians and Chinese being inferior races of lesser intelligence, I think that would clear things up also

I do not agree with this racist statement by one Ukrainian politician.

---

Nobody should take your geopolitical analysis seriously, since you cite kremlin apologists like Mearsheimer and Sachs. You just don't know what you're talking about.

https://news.ycombinator.com/item?id=43275551


My claim is that we see the same terrible violence and wars of aggression with subsequent public support (for a time) in both Russia and the West. I.e., there is a fundamental hypocrisy in either set of alliances condemning wars of aggression and the mass murder of children. It's quite clear that both sets of elites consider either one to be a policy option if they think it will get them results they want.

And your response is that things should "not be counted numerically" and that it is "incredibly fucked up" to consider human lives to be of equal value.

Altogether, it seems like you can only see things in terms of one ethnically european empire or another as morally righteous, with no other options. You cannot understand or imagine the perspective of someone who considers neither empire to be moral agents who deserve to have their crimes ignored or downplayed.

You have made no argument and your emotional appeal looks identical to eurocentric white supremacy which denies its nature but can only use emotional blackmail and threats when people point out the discrepancies.

It is not disgusting to ask why some raped and murdered civilians are "the worst thing since the holocaust" while others which preceded it that are of a larger scale are not merely forgotten but denied.

All of the current western leaders who forced Ukrainian denuclearization and talk openly about using Ukrainian lives as a "cheap" way to harm Russia are your true friends...

Meanwhile, people like Mearsheimer who said Ukraine should keep its nuclear weapons (https://www.mearsheimer.com/wp-content/uploads/2019/07/Mears...) and Sachs who helped Poland successfully transition to a market economy (https://www.earth.columbia.edu/sitefiles/file/Sachs%20Writin...) are "Kremlin apologists."

Why did one "Kremlin apologist" argue persuasively that Ukraine must keep its nuclear weapons to prevent a situation exactly like this war, and why did the other do everything he could to make Poland a stronger country? You have left reality behind.

Your "support" is so irrational that when Putin and Lavrov dishonestly argue there is no one credible to negotiate with on the other side, people around the world who want a lasting peace will reluctantly conclude that while they often lie, this time they are telling the truth.

I continue to think you are being paid by Russia or Russian proxies or that you are functionally equivalent to someone who is. Your rhetorical tactics and emotive language are so similar to RT and other Kremlin propaganda outlets that collusion seems more likely than linguistic convergence at this point.

All that said, Russia was in the wrong to invade and as someone with many Ukrainian friends who are now refugees, I hope you can understand why I hope the Ukrainian authorities are able to identify you and access your personal devices and documents.

An investigation seems warranted to find out if you're really this mentally ill or if you're being paid to make it seem like most Ukraine supporters are, especially since you're a decorated volunteer in a military conflict.


This is the most unhinged load of drivel I have ever read on this website. Ever.

---

> And your response is that things should "not be counted numerically" and that it is "incredibly fucked up" to consider human lives to be of equal value.

You have very clearly twisted my words.

I do not think it is moral to turn human suffering into a pissing contest.

I very clearly stated that it is incredibly fucked up to compare human suffering in the way that you're doing. In fact, my first sentence was "I do not believe the lives of different races/ethnicities of humans are of different intrinsic value". You are framing your attack as though I said the complete opposite of what I actually said. What you are doing here is dishonest, and frankly, disgusting.

> Meanwhile, people like Mearsheimer … and Sachs … are "Kremlin apologists."

Yes, they are.

- https://www.newstatesman.com/world/europe/2023/10/john-mears...

- https://www.russiamatters.org/analysis/whats-missing-mearshe...

- https://cepa.org/article/sympathy-with-the-devil-the-lie-of-...

> An investigation seems warranted

I'll cross the border into Ukraine again on January 16th.

Happy to provide my personal identification and details of my medals here. Are you happy to provide yours?

Would you like to contact the authorities? Or shall I?


I am not a party to the conflict and I now restrict my efforts to assisting refugees and deserters on both sides. I supported Ukraine's July offensive because it was still capable of changing the strategic balance, and I hoped for a peace with significant russian concessions to be made at the high-water mark.

Now, because of people like you who are bloody-minded and impossibly idealistic when it's not your blood and you can always walk away, it's far too late.

I regard both sides' maximal war aims as impossible in the short and medium term, and all further loss of life is for nothing other than to accelerate the demographic collapse of both Russia and Ukraine in exchange for a few hundred kilometers of nearly worthless and already-depopulated land.

I think you should contact the Ukrainian authorities and ask them if they believe your advocacy is contributing to their goals. Furthermore, you should consider how things will change if there is a peace deal, at which point it seems like you will be someone who will, from a safe location, be working to undermine the Ukrainian government and to restart a losing conflict.

You are part of a larger conflict and you do not set policy, and when it changes, if you don't change with it, you become an enemy of the Ukrainian government and the majority of Ukrainians. This majority and the Ukrainian government have stated they would like to have a democratic election, without martial law and press censorship, in order to decide their future.

Are you against democratic elections? Would you support a coup against a civilian government in order to continue the war?

You said the invasion of Ukraine was worse than the invasion of Iraq, but you reject all quantitative measures. You also have fatal anomalies in your argument you have not refuted by citing opinion pieces that also ignore this information.

Why did John Mearsheimer say Ukraine should keep its independent nuclear deterrent? Because he regarded a war like this as inevitable and that regardless of the outcome, both sides would lose and a large number of people would die, in addition to strengthening China significantly. And so it is.

I care about the average person who is stuck in this geopolitical clash between military blocs that have no regard for russian or ukrainian lives. You seem to care about achieving a military solution with little or no diplomatic consideration.

You are unwilling or unable to comprehend that people in Venezuela and the Middle East are not in fact, members of a lesser race of humans to whom acts of war and the mass murder of civilians "don't count" and don't fundamentally change the way that 90% of people on earth see western claims of moral principle.

I think you should contact the Ukrainian authorities and prove your commitment to your beliefs by volunteering to serve on the front lines: men willing to kill and die for a field are what is most needed now. Being a propagandist trying to get other people to give their lives while refusing to risk your own shows exactly how you feel about things: your life is more valuable than anyone else's and other people's sons, brothers, husbands, should die for your beliefs.

Putin doesn't care how many Russians from rustbelt towns in central asia and small towns get killed and the strategic military balance is in his favor. It is in his interests that diplomacy be seen to fail but not be his fault, because he does care about the willingness of other countries trying to make sense of the current situation to disregard and circumvent western sanctions. So yes, every word you speak and your point of view aligns perfectly with Russian strategy.

Maybe you're simply a dupe and part of an FSB influence operation, but you could make up for it by serving on the front. Anything else is chickenhawk cowardice or a false friend with murky motives.

Age is no restriction, Ukrainian men in their 50s and 60s are on the front lines. Will you fight for the cause you believe is both realistic and a moral necessity? Or perhaps... their lives are worth less than yours?

Why is it appropriate for a Ukrainian man in his late 50s to be drafted (in a way which resembles kidnapping) to kill and die for what you say you believe in but aren't willing to risk your own life for?

If you're working for the FSB you should be ashamed, and if you're not, you should be even more ashamed!

As for myself, I am an enemy of pointless, unwinnable wars, dictatorship, and coercion, so I am an enemy of both governments and a friend of the common person who had no say in this and is trapped between two corrupt cliques that get other people's families killed while vacationing safely in luxury: https://www.kyivpost.com/post/11648

I hope you provide your personal information to a Ukrainian recruiter and put your own skin in the game, because without that, you are functionally identical to an FSB functionary.


>Honestly, I am starting to suspect you are a Kremlin agent

Posting that kind of paranoid delusion should be a wake up call that you are propagandized.


People who alienate the 90-92% of the non-white/european global population against Ukraine by repeatedly asserting that only white european lives should have any kind of moral impact or even be remembered at all are feeding into Putin's international propaganda operation. Could it be by accident? Perhaps, but it fits so perfectly with the Kremlin's diplomatic strategy to win over the rest of the world...


Occam's Razor is that they just have a different worldview than you.


Perhaps you're right and I just don't want to believe that the goons in the Kremlin are telling the truth about how ignorant and mendacious the worldviews of many of their (very real) victims are.


The author seems unaware of how well recent Apple laptops run LLMs. This is puzzling and puts into question the validity of anything in this article.


If Apple offered a reasonably-priced laptop with more than 24GB of memory (I'm writing this on a maxed-out Air) I'd agree. I've been buying Apple laptops for a long time, and buying the maximum memory every time. I just checked, and I see that now you can get 32GB. But to get 64GB I think you have to spend $3700 for the MBP Max, and 128GB starts at $4500, almost 3x the 32GB Air's price.

And as far as I understand it, an Air with an M3 would be perfectly capable of running larger models (albeit slower) if it had the memory.


You’re not wrong that Apple’s memory prices are unpleasant, but also consider the competition: in this context (running LLMs locally), that means laptops with large amounts of fast memory that can be purposed for the GPU. This limits you to Apple or one specific AMD processor at present.

An HP Zbook with an AMD 395+ and 128GB of memory apparently lists for $4049 [0]

An ASUS ROG Flow z13 with the same spec sells for $2799 [1] - so cheaper than Apple, but still a high price for a laptop.

[0] https://hothardware.com/reviews/hp-zbook-ultra-g1a-128gb-rev...

[1] https://www.hidevolution.com/asus-rog-flow-z13-gz302ea-xs99-...


Yeah, I'm by no means saying that Apple is uniquely bad here -- it's just an issue I've been frustrated by since the first M1 chip, long before local LLMs made it a serious issue. More memory is always a good idea, and too much is never enough.


You can get any low-spec laptop whose memory isn't soldered and just replace the DIMMs with the maximum supported capacity.

You don't necessarily need to go the maxed up SKU.


Would that be unified memory? Where the gpu and cpu can share the memory? Which is key for performance.


Right, no, it wouldn't, I appreciate that in this particular context my comment was entirely wrong.

Thanks for helping me see it!


No, it wouldn’t. You’d be limited to using the CPU and the lower bandwidth system memory.


The Framework desktop will get you the 395+ and 128GB of RAM for $2k.


The trick here is buying used. Especially for something like the M1 series, there is tremendous value to be had in high-memory models: the memory hasn't changed significantly over generations compared to the CPUs, and even M1s are quite competent for many workloads. I got an M1 Max with 64GB of RAM recently for, I think, $1400.


I think pricing is just one dimension of this discussion — but let's dive into it. I agree it's a lot of money. But what are you comparing this pricing to?

From what I understand, getting a non-Apple solution to the problem of running LLMs in 64GB of VRAM or more has a price tag that is at least double of what you mentioned, and likely has another digit in front if you want to get to 128GB?


It's astonishing how Apple gouges on the memory and SSD upgrade prices (I'm on an M1 w/ 64GB/4TB).

That said they have some elasticity when it comes to the DRAM shortage.


The M-series unified memory is built into the chip package itself, not separate components. Of course Apple is going to maintain their margins, but it’s easy to see why, with this design, more memory is more expensive than with ordinary DRAM modules. Well, maybe not with the current market pricing, which hopefully is temporary.

They gouge you on RAM and SSD but provide a far better overall machine for the price than Windows laptops.


I think the author is aware of Apple silicon. The article mentions the fact Apple has unified memory and that this is advantageous for running LLMs.


Then idk why they say that most laptops are bad at running LLMs, Apple has a huge marketshare in the laptop market and even their cheapest laptops are capable in that realm. And their PC competitors are more likely to be generously specced out in terms of included memory.

> However, for the average laptop that’s over a year old, the number of useful AI models you can run locally on your PC is close to zero.

This straight up isn’t true.


Apple has a 10-18% market share for laptops. That's significant but it certainly isn't "most".

Most laptops can run at best a 7-14b model, even if you buy one with a high spec graphics chip. These are not useful models unless you're writing spam.

Most desktops have a decent amount of system memory but that can't be used for running LLMs at a useful speed, especially since the stuff you could run in 32-64GB RAM would need lots of interaction and hand holding.

And that's for the easy part, inference. Training is much more expensive.


My laptop is 4 years old. I only have 6GB of VRAM. I run, mostly, 4B and 8B models. They are extremely useful in a variety of situations. Just because you can't replicate what you do in ChatGPT doesn't mean they don't have their use cases. It seems to me you know very little about what these models can do. Not to speak of trained models for specific use cases, or even smaller models like functiongemma or TTS/ASR models. (BTW, I've trained models using my 6GB of VRAM too)


I’ll chime in and say I run LM Studio on my 2021 MacBook Pro M1 with no issues.

I have 16GB ram. I use unsloth quantized models like qwen3 and gpt-oss. I have some MCP servers like Context7 and Fetch that make sure the models have up to date information. I use continue.dev in VSCode or OpenCode Agent with LM Studio and write C++ code against Vulkan.

It’s more than capable. Is it fast? Not necessarily. Does it get stuck? Sometimes. Does it keep getting better? With every model release on huggingface.

Total monthly cost: $0


A few examples of useful tasks would be appreciated. I do suffer from a sad lack of imagination.


I suggest taking a look at /r/localLLaMa and see all sorts of cool things people do with small models.


A Max cpu can run 30b models quantized, and definitely has the RAM to fit them in memory. The normal and pro CPUs will be compute/bandwidth limited. Of course, the Ultra CPU is even better than the Max, but they don't come in laptops yet.


So I'm hearing a lot of people running LLMs on Apple hardware. But is there actually anything useful you can run? Does it run at a usable speed? And is it worth the cost? Because the last time I checked the answer to all three questions appeared to be no.

Though maybe it depends on what you're doing? (Although if you're doing something simple like embeddings, then you don't need the Apple hardware in the first place.)


I was sitting in an airplane next to a guy on a MacBook Pro something who was coding in Cursor with a local LLM. We got talking and he said there are obviously differences, but for his style of 'English coding' (he described basically what code to write/files to change, but in English, more sloppy than code obviously, otherwise he would just code) it works really well. And indeed that's what he could demo. The model (which was the OSS GPT, I believe) did pretty well in his Next.js project, and fast too.


Thanks. I call this method Power Coding (like Power Armor), where you're still doing everything except for typing out the syntax.

I found that for this method the smaller the model, the better it works, because smaller models can generally handle it, and you benefit more from iteration speed than anything else.

I don't have hardware to run even tiny LLMs at anything approaching interactive speeds, so I use APIs. The one I ended up with was Grok 4 Fast, because it's weirdly fast.

ArtificialAnalysis has an "end to end" time section, and it was the best there for a long time, tho many other models are catching up now.


The speed is fine, the models are not.

I found only one great application of local LLMs: spam filtering. I wrote a "despammer" tool that accesses my mail server using IMAP, reads new messages, and uses an LLM to determine if they are spam or not. 95.6% correct classification rate on my (very difficult) test corpus, in practical usage it's nearly perfect. gpt-oss-20b is currently the best model for this.
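
Roughly, the shape of it (a minimal sketch, not the actual tool: it assumes an OpenAI-compatible endpoint on localhost, e.g. LM Studio or llama.cpp serving gpt-oss-20b, and a "Spam" folder to move things into):

    import imaplib, email, json, urllib.request

    IMAP_HOST, USER, PASSWORD = "mail.example.com", "me", "secret"  # placeholders

    def is_spam(subject, body):
        # Ask the local model for a one-word verdict.
        req = urllib.request.Request(
            "http://localhost:1234/v1/chat/completions",  # OpenAI-compatible local endpoint (assumption)
            data=json.dumps({
                "model": "gpt-oss-20b",  # model name as exposed by your local server (assumption)
                "messages": [
                    {"role": "system", "content": "Answer with exactly one word: SPAM or HAM."},
                    {"role": "user", "content": f"Subject: {subject}\n\n{body[:4000]}"},
                ],
                "temperature": 0,
            }).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            answer = json.load(resp)["choices"][0]["message"]["content"]
        return "SPAM" in answer.upper()

    imap = imaplib.IMAP4_SSL(IMAP_HOST)
    imap.login(USER, PASSWORD)
    imap.select("INBOX")
    _, data = imap.search(None, "UNSEEN")
    for num in data[0].split():
        _, msg_data = imap.fetch(num, "(RFC822)")
        msg = email.message_from_bytes(msg_data[0][1])
        if msg.is_multipart():
            part = next((p for p in msg.walk() if p.get_content_type() == "text/plain"), None)
            body = part.get_payload(decode=True) if part else b""
        else:
            body = msg.get_payload(decode=True)
        if is_spam(msg.get("Subject", ""), (body or b"").decode(errors="replace")):
            imap.copy(num, "Spam")                  # assumes a "Spam" folder exists
            imap.store(num, "+FLAGS", "\\Deleted")
    imap.expunge()
    imap.logout()

The real thing needs more care around charsets, HTML-only mail, and not re-classifying messages twice, but that's the gist.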

For all other purposes models with <80B parameters are just too stupid to do anything useful for me. I write in Clojure and there is no boilerplate: the code reflects real business problems, so I need an LLM that is capable of understanding things. Claude Code, especially with Opus, does pretty well on simpler problems, all local models are just plain dumb and a waste of time compared to that, so I don't see the appeal yet.

That said, my next laptop will be a MacBook pro with M5 Max and 128GB of RAM, because the small LLMs are slowly getting better.


I've tried out gpt-oss:20b on a MacBook Air (via Ollama) with 24GB of RAM. In my experience its output is comparable to what you'd get out of older models and the OpenAI benchmarks seem accurate https://openai.com/index/introducing-gpt-oss/ . Definitely a usable speed. Not instant, but ~5 tokens per second of output if I had to guess.


This paper shows a use case running on Apple silicon that’s theoretically valuable:

https://pmc.ncbi.nlm.nih.gov/articles/PMC12067846/

Who cares if the result is right/wrong etc., as it will all be different in a year … just interesting to see a test of desktop-class hardware go OK.


I have an MBP Max M3 with 64GB of RAM, and I can run a lot at useful speed (LLMs run fine, diffusion image models run OK although not as fast as they would on a 3090). My laptop isn't typical though, it isn't a standard MBP with a normal or pro processor.


I can definitely write code with a local model like Devstral Small or a quantized Granite, or a quantized DeepSeek, on an M1 Max w/ 64GB of RAM.


Of course it depends what you’re doing.

Do you work offline often?

Essential.


Most laptops have 16GB of RAM or less. A little more than a year ago I think the base model Mac laptop had 8GB of RAM which really isn't fantastic for running LLMs.


By “PC”, they mean non-Apple devices.

Also, macOS only has around 10% desktop market share globally.


It's actually closer to 20% globally. Apple now outsells Lenovo:

https://www.mactech.com/2025/03/18/the-mac-now-has-14-8-of-t...


I meant market share in terms of installed base: https://gs.statcounter.com/os-market-share/desktop/worldwide...


macOS and OS X are split on this graph, and “Unknown” could be anything? This might actually show Apple install base close to 20%.


> Apple has a huge marketshare in the laptop market

Hello, from outside of California!


Global Mac marketshare is actually higher than the US: https://www.mactech.com/2025/03/18/the-mac-now-has-14-8-of-t...


Less than 1 in 5 doesn’t feel like huge market share,

but it’s more than I have!


Apple outsells Lenovo, if that puts it in a different perspective.


But economically, it is still much better to buy a lower-spec laptop and pay a monthly subscription for AI.

However, I agree with the article that people will run big LLMs on their laptop N years down the line. Especially if hardware outgrows best-in-class LLM model requirements. If a phone could run a 512GB LLM model fast, you would want it.


Are you sure the subscription will still be affordable after the venture capital flood ends and the dumping stops?


100% yes.

The amount of compute in the world is doubling over 2 years because of the ongoing investment in AI (!!)

In some scenario where new investment stops flowing and some AI companies go bankrupt all that compute will be looking for a market.

Inference providers are already profitable, so cheaper hardware will mean even cheaper AI systems.


You should probably disclose that you're a CTO at an AI startup, I had to click your bio to see that.

> The amount of compute in the world is doubling over 2 years because of the ongoing investment in AI (!!)

All going into the hands of a small group of people that will soon need to pay the piper.

That said, VC backed tech companies almost universally pull the rug once the money stops coming in. And historically those didn't have the trillions of dollars in future obligations that the current compute hardware oligopoly has. I can't see any universe where they don't start charging more, especially now that they've begun to make computers unaffordable for normal people.

And even past the bottom dollar cost, AI provides so many fun, new, unique ways for them to rug pull users. Maybe they start forcing users to smaller/quantized models. Maybe they start giving even the paying users ads. Maybe they start inserting propaganda/ads directly into the training data to make it more subtle. Maybe they just switch out models randomly or based on instantaneous hardware demand, giving users something even more unstable than LLMs already are. Maybe they'll charge based on semantic context (I see you're asking for help with your 2015 Ford Focus. Please subscribe to our 'Mechanic+' plan for $5/month or $25 for 24 hours). Maybe they charge more for API access. Maybe they'll charge to not train on your interactions.

I'll pass, thanks.


I'm no longer CTO at an AI startup. Updated, but I don't actually see how that is relevant.

> All going into the hands of a small group of people that will soon need to pay the piper.

It's not very small! On the inference side there are many competitive providers as well as the option of hiring GPU servers yourself.

> And historically those didn't have the trillions of dollars in future obligations that the current compute hardware oligopoly has. I can't see any universe where they don't start charging more, especially now that they've begun to make computers unaffordable for normal people.

I can't say how strongly I disagree with this - it's just not how competition works, or how the current market is structured.

Take gpt-oss-120B as an example. It's not frontier-level quality but it's not far off, and it certainly gives a strong baseline that open source models will never get less intelligent than.

There is a competitive market in hosting providers, and you can see the pricing here: https://artificialanalysis.ai/models/gpt-oss-120b/providers?...

In what world is there a way in which all the providers (who want revenue!) raise prices above the premium price Cerebras is charging for their very high speed inference?

There's already Google, profitably serving at the low end at around half the price of Cerebras (but then you have to deal with Google billing!)

The fact that Azure/Amazon are pricing exactly the same as 8(!) other providers, as well as the same price that https://www.voltagepark.com/blog/how-to-deploy-gpt-oss-on-a-... gives for running your own server, shows how the economics work on NVIDIA hardware. There's no subsidy going on there.

This is on hardware that is already deployed. That isn't suddenly going to get more expensive unless demand increases... in which case the new hardware coming online over the next 24 months is a good investment, not a bad one!


Datacenters full of GPU hosts aren't like dark fiber - they require massive ongoing expense, so the unit economics have to work really well. It is entirely possible that some overbuilt capacity will be left idle until it is obsolete.


The ongoing costs are mostly power, and aren't that massive compared to the investment.

No one is leaving an H100 cluster not running because the power costs too much - this is why remnants markets like Vast.ai exist.


They absolutely will leave them idle if the market is so saturated that no one will pay enough for tokens to cover power and other operational costs. Demand is elastic but will not stretch forever. The build out assumes new applications with ROI will be found, and I'm sure they will be, but those will just drive more investment. A massive over build is inevitable.


Of course!

But the operational costs are much lower than some people in this thread seem to think.

You can find a safe margin for the price by looking at aggregators.

https://gpus.io/gpus/h100 is showing $1.83/hour lowest price, around $2.85 average.

That easily pays running costs - an H100 server with cooling etc. is around $0.10/hour to keep running

And a massive overbuild pushes prices down not up!


> Inference providers are already profitable.

That surprises me, do you remember where you learned that?


Lots of sources, and you can do the math yourself.

Here's a few good ones:

https://github.com/deepseek-ai/open-infra-index/blob/main/20... (suggests Deepseek is making 80% raw margin on inference)

https://www.snellman.net/blog/archive/2025-06-02-llms-are-ch...

https://martinalderson.com/posts/are-openai-and-anthropic-re... (there's a HN discussion of this where it was pointed out this overestimates the costs)

https://www.tensoreconomics.com/p/llm-inference-economics-fr... (long, but the TL;DR is that serving Llama 3.3 70B costs around $0.28/million tokens input, $0.95 output at high utilization. These are close to what we see in the market: https://artificialanalysis.ai/models/llama-3-3-instruct-70b/... )
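
To make "do the math yourself" concrete, here's the back-of-the-envelope version (the throughput figure is an assumption for illustration, not a measurement):

    # Rough unit economics: rented GPU cost vs. tokens served.
    GPU_PRICE_PER_HOUR = 2.00          # roughly the going rate for a rented H100 (see gpus.io)
    GPUS_PER_NODE = 8                  # a typical inference node for a ~70B model
    ASSUMED_TOKENS_PER_SECOND = 5000   # assumed aggregate output throughput at high utilization

    node_cost_per_hour = GPU_PRICE_PER_HOUR * GPUS_PER_NODE
    tokens_per_hour = ASSUMED_TOKENS_PER_SECOND * 3600
    cost_per_million = node_cost_per_hour / (tokens_per_hour / 1_000_000)
    print(f"~${cost_per_million:.2f} per million output tokens")   # ~$0.89 with these inputs

With those assumptions you land around $0.89 per million output tokens, right next to the ~$0.95 figure above.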


> The amount of compute in the world is doubling over 2 years because of the ongoing investment in AI (!!)

which is funded by the dumping

when the bubble pops: these DCs are turned off and left to rot, and your capacity drops by a factor of 8192


> which is funded by the dumping

What dumping do you mean?

Are you implying NVidia is selling H200s below cost?

If not, then you might be interested to see that Deepseek has released their inference costs here: https://github.com/deepseek-ai/open-infra-index/blob/main/20...

If they are losing money it's because they have a free app they are subsidizing, not because the API is underpriced.


Doesn't matter now. GP can revisit the math and buy some hardware once the subscription prices actually grow too high.


You have to remember that companies are kind of fungible in the sense that founders can close old companies and start new ones to walk away from bankruptcies in the old companies. When there's a bust and a lot of companies close up shop, because data centers were overbuilt, there's going to be a lot of GPUs being sold at firesale prices - imagine chips sold at $300k today being sold for $3k tomorrow to recoup a penny on the dollar. There's going to be a business model for someone buying those chips at $3k, then offering subscription prices at little more than the cost of electricity to keep the dumped GPUs running somewhere.


I do wonder how usable the hardware will be once the creditors are trying to sell it - as far as I can tell it seems the current trend is more and more custom, no-matter-the-cost, super expensive, power-inefficient hardware.

The situation might be a lot different than people selling ex-crypto mining GPUs to gamers. There might be a lot of effective scrap that is no longer usable when it is no longer part of a some companies technological fever dream.


They will go down. Or the company will be gone.


Running an LLM locally means you never have to worry about how many tokens you've used, and also it allows for a lot of low latency interactions on smaller models that can run quickly.

I don't see why consumer hardware won't evolve to run more LLMs locally. It is a nice goal to strive for, which consumer hardware makers have been missing for a decade now. It is definitely achievable, especially if you just care about inference.


Isn't this what all these NPUs are created for?


I haven’t seen an NPU that can compete with a GPU yet. Maybe for really small models, I’m still not sure where they are going with those.


> economically, it is still much better to buy a lower spec't laptop and to pay a monthly subscription for AI

Uber is economical, too; but folks prefer to own cars, sometimes multiple.

And how there's a market for all kinds of vanity cars, fast sportscars, expensive supercars... I imagine PCs & laptops will have such a market, too: in probably less than a decade, maybe a £20k laptop running a 671B+ LLM locally will be the norm among pros.


> Uber is economical, too

One time I took an Uber to work because my car broke down and was in the shop and the Uber driver (somewhat pointedly) made a comment that I must be really rich to commute to work via Uber because Ubers are so expensive


Most people don't realise the amount of money they spend per year on cars.


Paying $30-$70/day to commute is economical?


if you calculate depreciation and running costs on a new car in most places - I think it probably would be.


If Uber were cheaper than the depreciation and running costs of a car, what would be left for the driver (and Uber)?


a big part of the whole "hack" of Uber in the first place is that people are using their personal vehicles. So the depreciation and many of the running costs are sunk costs already. Once you paid those already it becomes a super good deal to make money from the "free" asset you already own.


My private car provides less than one commute per day, on average.

An Uber car can provide several.


While your car is sitting in the parking lot, the Uber driver is utilizing their car throughout the day.


If you’re using uber to and from work, presumably you would buy a car that’s worth more than the 10 year old Prius your uber driver has 200k miles on.


The depreciation would be amortized to cover more than one person. I only travel once or twice per week, it cost me less to use an Uber than to own a car.


> Paying $30-$70/day to commute is economical?

When LLM use approaches this number, running one locally would be, yes. What you and other commentator seem to miss is, "Uber" is a stand-in for Cloud-based LLMs: Someone else builds and owns those servers, runs the LLMs, pays the electricity bills... while its users find it "economical" to rent it.

(btw, taxis are considered economical in parts of the world where owning cars is a luxury)


any "it's cheaper to rent than to own" arguments can be (and must be) completely disregarded due to experience of the last decade

so stop it


You still need ridiculously high spec hardware, and at Apple’s prices, that isn’t cheap. Even if you can afford it (most won't), the local models you can run are still limited and they still underperform. It’s much cheaper to pay for a cloud solution and get significantly better result. In my opinion, the article is right. We need a better way to run LLMs locally.


> You still need ridiculously high spec hardware, and at Apple’s prices, that isn’t cheap.

You can easily run models like Mistral and Stable Diffusion in Ollama and Draw Things, and you can run newer models like Devstral (the MLX version) and Z Image Turbo with a little effort using LM Studio and Comfyui. It isn't as fast as using a good nVidia GPU or a cloud GPU but it's certainly good enough to play around with and learn more about it. I've written a bunch of apps that give me a browser UI talking to an API that's provided by an app running a model locally and it works perfectly well. I did that on an 8GB M1 for 18 months and then upgraded to a 24GB M4 Pro recently. I still have the M1 on my network for doing AI things in the background.
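
The pattern is simple, by the way; a minimal sketch of the kind of client I mean (this assumes Ollama's default local endpoint and a model you've already pulled):

    # Thin client against a locally running model server (Ollama here).
    import json, urllib.request

    req = urllib.request.Request(
        "http://localhost:11434/api/generate",   # Ollama's default local API
        data=json.dumps({"model": "mistral", "prompt": "Say hello", "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["response"])

The browser UI is then just a page that talks to a small backend like this (or straight to the local server, if you don't mind some CORS fiddling).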


You can run newer models like Z Image Turbo or FLUX.2 [dev] using Draw Things with no effort too.


I bought my M1 Max w/ 64gb of ram used. It's not that expensive.

Yes, the models it can run do not perform like chatgpt or claude 4.5, but they're still very useful.


I’m curious to hear more about how you get useful performance out of your local setup. How would you characterize the difference in “intelligence” of local models on your hardware vs. something like chatgpt? I imagine speed is also a factor. Curious to hear about your experiences in as much detail as you’re willing to share!


Local models won't generally have as much context window, and the quantization process does make them "dumber" for lack of a better word.

If you try to get them to compose text, you'll end up seeing a lot less variety than you would with a chatgpt for instance. That said, ask them to analyze a csv file that you don't want to give to chatgpt, or ask them to write code and they're generally competent at it. the high end codex-gpt-5.2 type models are smarter, may find better solutions, may track down bugs more quickly -- but the local models are getting better all the time.


$749 for an M4 Air at Amazon right now


Try running anything interesting on these 8GB of RAM.

You need 96GB or 128GB to do non-trivial things. That is not yet $749.


Fair enough, but they start at 16GB nowadays.


The M4 starts with 16GB, though that can also be tight for local LLMs. You can get one with 24GB for $1149 right now though, which is good value.


$899 at B&H, started today 12/24


64GB is fine.


This subthread is about the Macbook Air, which tops out at 32 GB, and can't be upgraded further.

While browsing the Apple website, it looks like the cheapest Macbook with 64 GB of RAM is the Macbook Pro M4 Max with 40-core GPU, which starts at $3,899, a.k.a. more than five times more expensive than the price quoted above.


I have an M1 Max w/ 64gb that cost me much less than that -- you don't have to buy the latest model brand new.

if you are going for 64GB, you need at least a Max CPU or you will be bandwidth/GPU limited.


I was pleasantly surprised at the speed and power of my second-hand M1 Pro 32GB running Asahi & Qwen3:32B. It does all I need, and I don't mind the reading-pace output, although I'd be tempted by an M2 Ultra if the secondhand market hadn't also exploded with the recent RAM market manipulations.

Anyway, I'm on a mission to have no subscriptions in the New Year. Plus it feels wrong to be contributing towards my own irrelevance (GAI).


Yeah, any Mac system specced with a decent amount of RAM since the M1 will run LLMs locally very well. And that’s exactly how the built-in Apple Intelligence service works: when enabled, it downloads a smallish local model. Since all Macs since the M1 have very fast memory available to the integrated GPU, they’re very good at AI.

The article kinda sucks at explaining how NPUs aren’t really even needed; they just have the potential to make things more efficient in the future, compared with the power consumption involved in running your GPU.


This article specifically talks about PC laptops and discusses changes in them.


Only if you want to take all the proprietary baggage and telemetry that comes with Apple platforms by default.

A Lenovo T15g with a 16gb 3080 mobile doesn’t do too badly and will run more than just Windows.


I just got a Framework desktop with 128 GB of shared RAM just before the memory prices rocketed, and I can comfortably run many of the bigger OSS models locally. You can dedicate 112GB to the GPU and it runs Linux perfectly.


The M-series chips really changed the game here


This article is to sell more laptops.


Why the "a few million in revenue" cutoff?

In a solo-bootstrapped business you don't need a few million (USD or EUR) in revenue to run a successful business and live a really good life. In fact, depending on where you mostly live, much less than a single million might be plenty.


I'm looking for companies that are small by intent, but also looking to scale sustainably and grow beyond the capabilities of a single person. There are a lot of small businesses / solopreneurs / freelancers who work enough to get by or do a few hundred $K as a salary replacement, but aren't looking to build anything bigger. A lot of those companies also go by the wayside when the founder retires, gets hired, or something happens. And there's already a lot of content and sources for those types of solopreneur / single founder freelancers / side hustle type businesses.

I'm looking for the other stories of small teams scaling big. I'm basically separating side-hustles and solopreneurs as freelancers from a more sustainable business. Revenue is a cutoff as a way to differentiate, but doesn't have to be the only one.

For sure however, a team of 5 doing $20M implies something significant is happening at scale versus a solopreneur making what would otherwise be salary-replacement level money. Nothing wrong with that, of course, I love solopreneurship. Just trying to find those other stories, which are much harder to find.


Revenue isn’t the correct metric for what the grandparent wants to measure to begin with.


Fellow (solo) bootstrapper here. Congratulations to the author(s) of the article, and to all the other bootstrappers here. Remember, even if your business is smaller or doesn't have the incredible stellar growth numbers that these people posted, it's still YOUR BUSINESS. You are doing right by your customers (because they keep paying you), you have your own business, you do not depend on VCs, and you don't have to fake anything. This is incredibly liberating and should be cherished!


Totally! That’s the most important message. The numbers we’ve managed to achieve are beyond even the most optimistic outlook we had ten years ago, and we would have been (and were) genuinely happy and satisfied with what we were doing even with a tenth of the revenue. You definitely don't need this much to be proud of your company.

