It's disappointing, but not surprising, that people thought the president would make any real impact on inflation. That said, with global conditions improving it looks like we could've actually seen a drastically larger reduction in inflation if not for the tariffs. The goals of the tariffs seem so misaligned with what the country needs - again not surprising that we're doing something the opposite of what we need - and again also not surprising that his supporters don't seem to care.
I think the real issue is that for the powers that be, inflation is seen as either neutral or a good thing. The only people it hurts are the working class, and the blame is nebulous. So it is used as a tool to increase taxes without changing laws, lower the cost of debt, and cut labor wages, since workers don't get pay raises commensurate with inflation. So I think it is a trick played upon the working class to screw them over in the long term, while the wealthy are protected because all their assets simply go up in value with inflation. I think the target inflation rate should be 0%, not 2%. I simply don't believe the justification for the 2% target.
We're well above 2% anyway, and I doubt they will ever hit that again - they are already having to cut rates because the job market is frozen, and that will increase inflation pressure.
I track my spend each year, and my personal actual inflation rate has averaged about 4.5% over the past 5 years. And I'm pretty low income, so my spending is all core stuff.
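For anyone who wants to reproduce that kind of number from their own records, the annualized figure is just a geometric mean over the yearly cost of the same basket. A minimal sketch, with made-up dollar amounts (not the commenter's actual data):

```python
# Estimate a personal inflation rate from the yearly cost of the same
# basket of goods. The dollar figures below are hypothetical.
yearly_basket_cost = [100.0, 104.0, 110.0, 115.0, 120.0, 124.6]  # 6 year-end totals

years = len(yearly_basket_cost) - 1
# Annualized (geometric mean) inflation over the period:
annualized = (yearly_basket_cost[-1] / yearly_basket_cost[0]) ** (1 / years) - 1
print(f"average personal inflation: {annualized:.1%}")  # average personal inflation: 4.5%
```

Note that the geometric mean is the right average here: simply averaging the five year-over-year percentages would slightly overstate the compounded rate.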
>It's disappointing, but not surprising, that people thought the president would make any real impact on inflation
Except that a president, in normal times, COULD make an impact on inflation, both directly and indirectly.
What is surprising is that after a completely failed presidency that saw a marked decrease in middle class prosperity, people thought that Donald Trump, of all people, could bring inflation down.
Ask more widely. People want reasonable services from their government, and tighter regulation of markets, with elimination of profit-taking middlemen.
They want democratic socialism.
Meanwhile, the right wing has been telling them that public libraries and public schools and everything good except profit -- is communism.
> People want reasonable services from their government
Yes, though the definition of "reasonable" is a real sticking point
> and tighter regulation of markets
This is less clear to me, but I would agree people want less fraud and deception in markets
> with elimination of profit-taking middlemen
I don't think many people think about this at all, and it's another very nebulous term
> They want democratic socialism.
No, democratic socialists want democratic socialism. Most Americans do not.
> Meanwhile, the right wing has been telling them that public libraries and public schools and everything good except profit -- is communism.
I disagree with basically everything the current incarnation of the Republican party is doing or stands for, and silly statements like this aren't helpful.
People don't "essentially want communism" by advocating for socialist policy. Serious economists will tell you that it is impossible to transition America's free market into a planned economy. We're capitalist through thick and thin.
Yet there is a sizeable number of us who seriously consider promises to "lower prices of X" like it's a thing that can be done by decree. It's disappointing is all.
It's about the dream of being able to have capital, though, not actually about having capital. Most people do not like the idea of a death tax, even though most people will never have enough wealth for it to matter.
Exactly. People didn't use to even imagine there was any way to change that, nor think free enterprise should be compromised for any special interests; the outcome had always been negative when lobbyists got their way too often with either party.
Remember why Ronald Reagan and the bulk of the American people from both parties absolutely hated Communism so much?
It wasn't mainly the economic differences from a free-market system; that barely made it onto the radar and was largely academic.
It was the dictatorship aspect that was so disgusting and anti-American as can be.
Dismal economic considerations under Communist governments were well-recognized as a logical result of dictatorship, that had been obvious for centuries.
Otherwise there wouldn't have been as much ambition for subjects to withdraw from dictator/monarchy regimes and settle in America to begin with.
After hearing his vitriol over the years I do see his comics and writing very differently now. As someone else said, he views everyone as idiots or below him, and needs an out group to target. Dilbert read in that light just seems hateful more than insightful or relatable. I never plan on reading any Scott Adams material for the rest of my life or introducing anyone else to it.
Cancel culture is simply social consequence. That's it. It can be harsh and at times probably too harsh. But I don't see how you can get rid of cancel culture without also greatly limiting free speech.
I don't think this is true. "Cancel culture" is distinguished from normal social consequences by many things, including the perpetrators going to others outside of the perpetrators' and victim's social group to attack the victim.
If I say something racist at home, my friends and family will shame me - that is social consequence. If I say something racist at home and the person I invited over publicly posts that on Twitter and tags my employer to try to get me fired, that's cancel culture, and there's clearly a difference.
There are virtually no social groups where it's socially acceptable to get offended by what an individual said and then seek out their friends, family, and co-workers to specifically tell them about that thing to try to inflict harm on that individual. That would be extremely unacceptable and rude behavior in every single culture that I'm aware of, to the point where it would almost always be worse and more ostracizing than whatever was originally said.
We don't have to accept or reject all manner of social consequence as a single unit. That would be absurd.
> without also greatly limiting free speech.
Indeed it would be exceedingly difficult to legislate against it. But something doesn't need to be illegal for us to push back against it. I'm not required to be accepting of all behavior that's legal.
For example, presumably you wouldn't agree with an HN policy change that permitted neo nazi propaganda despite the fact that it generally qualifies as protected speech in the US?
I wouldn't agree with this change. And I'd stop using HN and I'd tell others to also not use it. I'd implement cancel culture on it.
> But something doesn't need to be illegal for us to push back against it.
This is exactly what cancel culture is. It's pushing back on something (usually legal behavior that we strongly disagree with).
And it's absurd to me how the right acts like cancel culture is a left movement. The right has used it too. Look at all the post-Charlie Kirk canceling that happened, at huge scale -- even the government got involved in the canceling there. Colin Kaepernick is probably one of the most high-profile examples of canceling. The big difference is that the right has more problematic behaviors, although more of it is being normalized. Jan 6 being normalized is crazy to me, but here we are.
Not even close to true. You're all over this thread posting strange takes and carrying water for this guy. Actually, I remember you doing exactly that in many, many threads before. And you're always trying to protect unsavory characters with awful views and no one else.
It's pretty simple to deal with cancel culture without limiting speech:
First, speak out about it and shame those engaging in it. If it's not socially acceptable to ruin someone's life over their opinions, then fewer people will go along with the mob and it becomes less of a problem.
Second, make sure that people's livelihoods are not ruined by people being mad at them. That's essentially what anti-discrimination laws do; we just need to make sure they cover more kinds of discrimination. Essentially, large platforms should not be allowed to ban you, and employers should not be able to fire you, just because a group of people is upset with something you expressed outside the platform/company.
> First, speak out about it and shame those engaging in it.
Ah, fight cancel culture with cancel culture.
So you're going to legislate that employers can't fire people because of something they've done outside of work (presumably as long as it's legal)? Many professions have morality clauses -- we'd ban those, presumably? And if you had a surgeon who said on Facebook that he hated Jews and hated when he operated on them (but that he would comply with the laws) -- as a hospital, you'd tell the people who raised this to you that they had no ground to stand on? That they should just sue if they feel they got substandard treatment?
It's clear they don't have the in-house expertise to do it themselves. They aren't an AI player. So it's not a mistake, just a necessity.
Maybe someday they'll build their own, the way they eventually replaced Google Maps with Apple Maps. But I think they recognize that that will be years away.
Apple has been using ML in their products for years, to the point that they dedicated parts of their custom silicon for it before the LLM craze. They clearly have some in-house ML talent, but I suppose LLM talent may be a different question.
I’m wondering if this is a way to shift blame for issues. It was mentioned in an interview that what they built internally wasn’t good enough, presumably due to hallucinations… but every AI does that. They know customers have a low tolerance for mistakes and any issues will quickly become a meme (see the Apple Maps launch). If the technology is inherently flawed, where it will never live up to their standards, if they outsource it, they can point to Google as the source of the failings. If things get better down the road and they can improve by pivoting away from Google, they’ll look better and it will make Google look bad. This could be the long game.
They may also save a fortune in training their own models, if they don’t plan to directly try to monetize the AI, and simply have it as a value add for existing customers. Not to mention staying out of hot water related to stealing art for training data, as a company heavily used by artists.
I agree that they don't appear poised to do it themselves. But why not work with Meta or OpenAI (maybe a bit more questionable with MS) or some other player, rather than Google?
The optics of working with Meta make it a non-starter. Apple symbolizes privacy, Meta the opposite.
With OpenAI, will it even be around 3 years from now, without going bankrupt? What will its ownership structure look like? Plus, as you say, the MS aspect.
So why not Google? It's very common for large corporations to compete in some areas and cooperate in others.
I didn't see your 41-day-old reply until it was too late to comment on it. So here's a sarcastic "thanks" for ignoring what I wrote and for telling me that exactly what I was complaining about is the solution to the problem I was complaining about.
1) I told you my household can't use Target or Amazon for unscented products, without costly remediation measures, BECAUSE EVEN SCENT-FREE ITEMS COME SMELLING FROM PERFUME CROSS-CONTAMINATION THANKS TO CLEANING, STORAGE, AND TRANSPORTATION CONDITIONS. SOMETIMES REALLY BADLY.
FFS. If you are going to respond, first read.
I also mentioned something other than "government intervention to dictate how products are made" as a solution to this issue, namely adequate segregation between perfumed and non-perfumed products.
And I care less about my wallet than I do about my time and actual ability to acquire products that are either truly scent free, or like yesteryear, don't have everlasting fragrance fixatives.
For people in my position, which make up a small percentage of the population (that still numbers in the millions), the free market has failed. We are a specialized niche that trades tips on how to make things tolerable.
For who? Regular people are quite famously not clamouring for more AI features in software. A Siri that is not so stupendously dumb would be nice, but I doubt it would even be a consideration for the vast majority of people choosing a phone.
Web search is a core part of browsing and Apple is Google's biggest competitor in browsers. Google is paying Apple about 25x as much for integrating Google Search into Safari as Apple will be paying Google to integrate Google's LLMs into Siri. If you think depending on your competitor is a problem, you should really look into web search, where all the real money is today.
My Sony TV has android and is fairly responsive. Maybe a second lag, but definitely not 10-20 secs. I do need to give it time to “warm up” when I start it, though. I use it so rarely it’s generally turned off from wall outlet.
I still prefer Apple TV for various reasons, though, responsiveness being one of them.
Torrenting is easy, but what are you going to do with the torrented files then? Without additional external hardware you probably won't be able to play your downloaded files on your large TV, and most people prefer a laggy simple route over having to do more work. I do torrent from time to time, but the hassle associated with the whole process really highlights why streaming apps took over.
Sony TVs are some of the most sane options in the TV market right now. Generally decent, and they don't fight you if you want to use them without connecting them to the internet. Still not perfect and they'll cost you more, but it's a worthwhile trade to me.
I'd love to see a thorough breakdown of what these local NPUs can really do. I've had friends ask me about this (as the resident computer expert) and I really have no idea. Everything I see advertised for (blurring, speech to text, etc...) are all things that I never felt like my non-NPU machine struggled with. Is there a single remotely killer application for local client NPUs?
I used to work at Intel until recently. Pat Gelsinger (the prior CEO) had made one of the top goals for 2024 the marketing of the "AI PC".
Every quarter he would have an all company meeting, and people would get to post questions on a site, and they would pick the top voted questions to answer.
I posted mine: "We're well into the year, and I still don't know what an AI PC is and why anyone would want it instead of a CPU+GPU combo. What is an AI PC and why should I want it?" I then pointed out that if a tech guy like me, along with all the other Intel employees I spoke to, cannot answer the basic questions, why would anyone out there want one?
It was one of the top voted questions and got asked. He answered factually, but it still wasn't clear why anyone would want one.
Also professionals that need powerful computers ("workstations") in their jobs, like video editing
A lot of them are incorporating AI in their workflow, so making local AI better would be a plus. Unfortunately I don't see this happening unless GPUs come with more VRAM (and AI companies don't want that, and are willing to spend top dollar to hoard RAM)
Pretty much the same as what you see in the comments here. For certain workloads, NPU is faster than CPU by quite a bit, and I think he gave some detailed examples at the low level (what types of computations are faster, etc).
But nothing that translated to real world end user experience (other than things like live transcription). I recall I specifically asked "Will Stable Diffusion be much faster than a CPU?" in my question.
He did say that the vendors and Microsoft were trying to come up with "killer applications". In other words, "We'll build it, and others will figure out great ways to use it." On the one hand, this makes sense - end user applications are far from Intel's expertise, and it makes sense to delegate to others. But I got the sense Microsoft + OEMs were not good at this either.
> In theory, its your math coprocessor for your 386.
A math coprocessor, AFAIK, can do much more than matrix multiplications.
From my POV, having a separate chip doing a single math operation is (so 1970s) lame.
The problem is essentially memory bandwidth, AFAIK. Simplifying a lot in my reply, but most NPUs (all?) do not have faster memory bandwidth than the GPU. They were originally designed when ML models were megabytes, not gigabytes. They have a small amount of very fast SRAM (4MB I want to say?). LLM models _do not_ fit into 4MB of SRAM :).
And LLM inference is heavily memory bandwidth bound (reading input tokens isn't though - so it _could_ be useful for this in theory, but usually on device prompts are very short).
So if you are memory bandwidth bound anyway and the NPU doesn't provide any speedup on that front, it's going to be no faster. But has loads of other gotchas so no real "SDK" format for them.
Note the idea isn't bad per se, it has real efficiencies when you do start getting compute bound (eg doing multiple parallel batches of inference at once), this is basically what TPUs do (but with far higher memory bandwidth).
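To make the bandwidth-bound point concrete, here's a rough back-of-envelope in Python. The model size and bandwidth figures are illustrative assumptions, not specs for any particular chip:

```python
# During LLM decode, generating each token requires streaming roughly
# all model weights from memory once, so memory bandwidth sets a hard
# ceiling on tokens/sec regardless of how much compute the NPU adds.
model_bytes = 4e9      # hypothetical ~4 GB of weights (e.g. a 7B-class model, quantized)
mem_bandwidth = 100e9  # hypothetical 100 GB/s of shared LPDDR bandwidth

max_tokens_per_sec = mem_bandwidth / model_bytes
print(f"upper bound: {max_tokens_per_sec:.0f} tokens/sec")  # upper bound: 25 tokens/sec
```

Swapping the CPU or GPU for an NPU doesn't move this ceiling at all under these assumptions; only faster memory does, which is the point above.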
NPUs are still useful for LLM pre-processing and other compute-bound tasks. They will waste memory bandwidth during LLM generation phase (even in the best-case scenario where they aren't physically bottlenecked on bandwidth to begin with, compared to the iGPU) since they generally have to read padded/dequantized data from main memory as they compute directly on that, as opposed to being able to unpack it in local registers like iGPUs can.
> usually on device prompts are very short
Sure, but that might change with better NPU support, making time-to-first-token quicker with larger prompts.
Yes I said that in my comment. Yes they might be useful for that - but when you start getting to prompts that are long enough to have any significant compute time you are going to need far more RAM than these devices have.
Obviously in the future this might change. But as we stand now dedicated silicon for _just_ LLM prefill doesn't make a lot of sense imo.
You don't need much on-device RAM for compute-bound tasks, though. You just shuffle the data in and out, trading a bit of latency for an overall gain on power efficiency which will help whenever your computation is ultimately limited by power and/or thermals.
The idea that tokenization is what they're for is absurd - you're talking a tenth of a thousandth of a millionth of a percent of efficiency gain in real world usage, if that, and only if someone bothers to implement it in software that actually gets used.
NPUs are racing stripes, nothing more. No killer features or utility, they probably just had stock and a good deal they could market and tap into the AI wave with.
Apple demonstrates this far better. I use their Photos app to manage my family pictures. I can search my images by visible text, by facial recognition, or by description (vector search). It automatically composes "memories" which are little thematic video slideshows. The FaceTime camera automatically keeps my head in frame, and does software panning and zooming as necessary. Automatic caption generation.
This is normal, standard, expected behavior, not blow-your-mind stuff. Everyone is used to having it. But where do you think the computation is happening? There's a reason that a few years back Apple pushed to deprecate older systems that didn't have the NPU.
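The search-by-description feature is standard embedding search; here's a minimal cosine-similarity sketch, with toy 3-dimensional vectors standing in for what an on-device image/text encoder (e.g. a CLIP-style model) would actually produce:

```python
import math

# Toy embeddings: in a real system these come from an image encoder
# (for photos) and a text encoder (for the query) trained into the
# same vector space, so related images and text end up close together.
photo_embeddings = {
    "beach_day.jpg": [0.9, 0.1, 0.0],
    "birthday.jpg":  [0.1, 0.9, 0.2],
    "snow_trip.jpg": [0.0, 0.2, 0.9],
}
query_embedding = [0.8, 0.2, 0.1]  # pretend this encodes the query "sunny beach"

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Rank photos by similarity to the query and return the best match.
best = max(photo_embeddings, key=lambda name: cosine(query_embedding, photo_embeddings[name]))
print(best)  # beach_day.jpg
```

The NPU's job in this pipeline is computing the embeddings in the first place; the similarity search over a personal photo library is cheap by comparison.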
I've yet to see any convincing benchmarks showing that NPUs are more efficient than normal GPUs (that don't ignore the possibility of downclocking the GPU to make it run slower but more efficient)
NPUs are more energy efficient. There is no doubt that a systolic array uses less watts per computation than a tensor operation on a GPU, for these kinds of natural fit applications.
Are they more performant? Hell no. But if you're going to do the calculation, and if you don't care about latency or throughput (e.g. batched processing of vector encodings), why not use the NPU?
Especially on mobile/edge consumer devices -- laptops or phones.
> NPUs are more energy efficient. There is no doubt
Maybe because they sleep all the time. To be able to use an NPU you need at least a compiler which generates code for this particular NPU and a CPU scheduler which can dispatch instructions to this NPU.
In theory NPUs are a cheap, efficient alternative to the GPU for getting good speeds out of larger neural nets. In practice they're rarely used, because for simple tasks like blurring, speech to text, noise cancellation, etc. you can usually do it on the CPU just fine. Power users doing really hefty stuff usually have a GPU anyway, so that gets used because it's typically much faster. That's exactly what happens with my AMD AI Max 395+ board. I thought maybe the GPU and NPU could work in parallel, but memory limitations mean that's often slower than just using the GPU alone. I think I read that their intended use case for the NPU is background tasks when the GPU is already loaded, but that seems like a very niche use case.
If the NPU happens to use less power for any given amount of TOPS it's still a win since compute-heavy workloads are ultimately limited by power and thermals most often, especially on mobile hardware. That frees up headroom for the iGPU. You're right about memory limitations, but these are generally relevant for e.g. token generation not prefill.
NPUs really just accelerate low-precision matmuls. A lot of them are based on systolic arrays, which are like a configurable pipeline through which data is "pumped" rather than a general purpose CPU or GPU with random memory access. So they're a bit like the "synergistic" processors in the Cell, in the respect that they accelerate some operations really quickly, provided you feed them the right way with the CPU and even then they don't have the oomph that a good GPU will get you.
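As a toy model of that "pumped" dataflow, here's a small Python simulation of an output-stationary systolic matmul (one of several systolic layouts; a real array does this in fixed-function hardware, with each PE performing one multiply-accumulate per tick as operands flow past it):

```python
def systolic_matmul(A, B):
    """Simulate an output-stationary systolic array computing C = A @ B.

    Each processing element (i, j) holds accumulator C[i][j]. Rows of A
    stream in from the left and columns of B stream in from the top,
    skewed so that element p of the dot product reaches PE (i, j) at
    tick p + i + j -- the diagonal "wavefront" that gives the array its name.
    """
    n, k, m = len(A), len(A[0]), len(B[0])
    C = [[0] * m for _ in range(n)]
    for t in range(n + m + k - 2 + 1):          # total pipeline ticks
        for i in range(n):
            for j in range(m):
                p = t - i - j                   # which operand pair arrives now
                if 0 <= p < k:
                    C[i][j] += A[i][p] * B[p][j]  # one MAC per PE per tick
    return C

print(systolic_matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

The simulation produces an ordinary matrix product; the point is that every PE only ever talks to its neighbors and its own accumulator, which is why the hardware needs no caches or random memory access while the data is being pumped through.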
It’s more like you need to program a dataflow rather than a program with instructions or vliw type processors. They still have operations but for example I don’t think ethos has any branch operations.
There are specialized computation kernels compiled for NPUs. A high-level program (that uses ONNX or CoreML, for example) can decide whether to run the computation using CPU code, a GPU kernel, or an NPU kernel or maybe use multiple devices in parallel for different parts of the task, but the low-level code is compiled separately for each kind of hardware. So it's somewhat abstracted and automated by wrapper libraries but still up to the program ultimately.
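A hypothetical sketch of that dispatch logic; the device names and kernel table below are invented for illustration and aren't any real runtime's API:

```python
# Sketch of how a wrapper library might pick a backend per operation:
# try the most specialized device that has a compiled kernel for the op,
# then fall back toward the CPU, which has a kernel for everything.
COMPILED_KERNELS = {
    "npu": {"conv2d", "matmul_int8"},
    "gpu": {"conv2d", "matmul_int8", "matmul_fp32", "softmax"},
    "cpu": {"conv2d", "matmul_int8", "matmul_fp32", "softmax", "topk"},
}

def pick_backend(op, preference=("npu", "gpu", "cpu")):
    """Return the first device in preference order that can run `op`."""
    for device in preference:
        if op in COMPILED_KERNELS[device]:
            return device
    raise ValueError(f"no compiled kernel for {op}")

print(pick_backend("matmul_int8"))  # npu
print(pick_backend("softmax"))      # gpu
print(pick_backend("topk"))         # cpu
```

Real runtimes like ONNX Runtime expose essentially this idea through an ordered list of "execution providers", partitioning the model graph so each subgraph runs on the most preferred device that supports its ops.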
Yes but your CPUs have energy inefficient things like caches and out of order execution that do not help with fixed workloads like matrix multiplication. AMD gives you 32 AI Engines in the space of 3 regular Ryzen cores with full cache, where each AI Engine is more powerful than a Ryzen core for matrix multiplication.
I thought SSE2 and everything that came after like AVX 512 or SSE4 were made for streaming, leveraging the cache only for direct access to speed things up?
Haven't used SSE instructions for anything other than fiddling around with them yet, so I don't know if I'm wrong in this assumption. I understand the lock-state argument about cores, due to at most 2 cores being able to access the same cache/memory at once... but doesn't this have to be identical for FPUs if we compare this with SIMD + AVX?
You definitely would use SIMD if you were doing this sort of thing on the CPU directly. The NPU is just a large dedicated construct for linear algebra. You wouldn't really want to deploy FPGAs to user devices for this purpose because that would mean paying the reconfigurability tax in terms of both power-draw and throughput.
> Everything I see advertised for (blurring, speech to text, etc...) are all things that I never felt like my non-NPU machine struggled with.
I don’t know how good these neural engines are, but transistors are dead-cheap nowadays. That makes adding specialized hardware a valuable option, even if it doesn’t speed up things but ‘only’ decreases latency or power usage.
I think a lot of it is just power savings on those features, since the dedicated silicon can be a lot more energy efficient even if it's not much more powerful.
ChatGPT has become an indispensable health tool for me. It serves as a great complement to my doctor. And there's been at least two cases in our house where it provided recommendations that were of great value (one possibly life saving and the other saving from an unnecessary surgery). I think that specialized LLMs will eventually be the front-line doctor/nurse.
Curious, does anyone know if this might also apply to tendons? I've had patella tendonitis for years (jumpers knee) and have tried everything (isometrics, shockwave, PRP injections, etc...).
Yep. Do those on a slant board. And knee extensions. And a few others. Plus drink collagen before it. I’m still working my way through it so it might work, but I’d love for something like this to work.
I didn’t plan on examining Elon’s ideology. He shoved it in my face. If other CEOs want to be coy with Nazi salutes and post the types of things he does on X, then let me know. I’ll happily treat them the same way.