
I'm so mad about this, I need DDR5 for a new mini-PC I bought and prices have literally gone up by 2.5x..

128GB used to be $400 in June, and now it's over $1,000 for the same 2x64GB set.

I have no idea if/when prices will come back down but it sucks.



DRAM alternates between feast and famine; it's the nature of a business where the granularity of investment is so huge (you have a fab or you don't, and they cost billions, maybe trillions by now). So it will swing back. Unfortunately it looks like maybe 3-5 years on average, from some analysis here: https://storagesearch.com/memory-boom-bust-cycles.html

(That's just me eyeballing it, feel free to do the math)


I am so glad that both the top-rated comment and the majority of comments on HN finally understand the DRAM industry, instead of the constant refrain that DRAM is a cartel and that is why things are expensive.

Also worth mentioning: Samsung's DRAM and NAND profits are what keep Samsung Foundry fighting TSMC. Especially relevant for those who think TSMC is somehow a monopoly.

Another thing to point out which has not been mentioned yet: China is working on both DRAM and NAND. Both LPDDR5 and stacked NAND are already in production, waiting on yield and scale. Higher prices will be the perfect timing for them to join the commodity DRAM and NAND race. Good for consumers, I suppose; not so good for a lot of other things which I won't go into.


DRAM manufacturers have literally been convicted of price fixing in the past, so why do you have to white knight for them?


Most of us who've been on Earth for a while know that courts often get it wrong. Even if the particular court decision you mention was correct, that does not mean price fixing is the main reason, or the underlying reason, DRAM prices sometimes go up.


They blatantly were doing it, admitted to it, and did it again later. What kind of crazy is this?

Is this the ‘but he loves me, he wouldn’t hit me again’ of the tech world?


He isn't even the only one on this thread who is making this argument lol. I'm guessing it's a paid information op.


And I am 100% sure a lot of other commodity industries would have been convicted of price fixing if we looked into it. And I say this as someone who has witnessed it first-hand.

Unfortunately the commodity business is not sexy; it doesn't get the press, nor does it get taught even in business schools. But a lot of the time, what gets called price fixing is a natural phenomenon.

I won't even go into how what gets decided in court isn't always right.

I will also add that we absolutely want the DRAM and NAND industries, or in fact any industry, to make profits, as much profit as they can. What is far more important is where they spend those profits. I haven't looked into SK Hynix, but both Samsung and Micron spend a significant amount on R&D to at least try to lower the total production cost of DRAM per GB. We want them to make a healthy margin selling DRAM at $1/GB, not to lose money and then go bankrupt.


Look man, I'm a PhD economist; I know the difference between monopolistic competition and collusion. All that price fixing does is transfer monopoly rents from you and me to the DRAM cartel (or whatever industry is doing the price fixing).


Both stories can be true.

The firms can coordinate by agreeing on a strategy they deem necessary for the future of the industry, and that strategy requires significant capital expenditures, and the industry does not get (or does not want) outside investment to fund it, and if any of the firms defects and keeps prices low the others cannot execute on the strategy, so they all agree to raise prices.

Then, after the strategy succeeds, they have gotten addicted to the higher revenues, they do not allow prices to fall as fast as they should, their coordination becomes blatantly illegal, and they have to get smacked down by regulators.


> The firms can coordinate by agreeing on a strategy they deem necessary for the future of the industry.. Then, after the strategy succeeds, they have gotten addicted to the higher revenues, they do not allow prices to fall as fast as they should, their coordination becomes blatantly illegal..

So said and did the infamous Phoebus cartel, to unnaturally "fix" the prices and quality of light bulbs.

https://spectrum.ieee.org/the-great-lightbulb-conspiracy

https://en.wikipedia.org/wiki/Phoebus_cartel

For more than a century, one strange mystery has puzzled the world: why do old light bulbs last for decades while modern bulbs barely survive a couple of years?

The answer lies in a secret meeting held in Geneva, Switzerland in 1924, where the world’s biggest light bulb companies formed the notorious Phoebus Cartel.

Their mission was simple but shocking: control the global market, set fixed prices, and most importantly… reduce bulb lifespan.

Before this cartel, bulbs could easily run for 2500+ hours. But after the Phoebus Cartel pact and actions, all companies were forced to limit lifespan to just 1000 hours. More failure meant more purchases. More purchases meant more profit. Any company who refused faced heavy financial penalties.

The most unbelievable proof is the world-famous Livermore Fire Station bulb in California, glowing since 1901. More than 120 years old. Still alive. While our new incandescent bulbs die in 1–2 years.

Though the Phoebus cartel was dissolved in the 1930s due to government pressure, its impact still shadows modern manufacturing. Planned obsolescence didn’t just begin here… but Phoebus made it industrial.

https://m.youtube.com/watch?v=0U5uU6nzgO8


The Phoebus cartel didn't collude just to make the light bulbs have a shorter lifespan. They upped the standard illumination a bulb emitted so that consumers needed fewer of them to see well. With an incandescent you have a kind of sliding scale of brightness:longevity (with curves on each end that quickly go exponential, hence the longest lasting light bulb that's so dim you can barely read by its light). The brighter the bulb, the shorter the lifespan.

https://www.youtube.com/watch?v=zb7Bs98KmnY


Also, incandescent lightbulb lifespan is reduced by repeated power cycling. Not only is the legendary firehouse bulb very dim, it has been turned off and back on again very few times. Leaving all your lights on all the time would be a waste of power for the average household, and more expensive than replacing the bulbs more frequently.


And the same quirk was also shared by fluorescent bulbs.

I still try to break the habit of avoiding unnecessary power cycling, even though all my lights are LED now.


Also lightbulb dimmers were a thing back in the day, so you could always buy more lightbulbs and lower the brightness of each to take advantage of that exponential curve in lifespan.
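
As a rough illustration of that trade-off: a commonly cited rule of thumb for incandescent lamps (my numbers, not from this thread) is that light output scales with roughly the 3.4th power of applied voltage while lifespan scales with roughly the inverse 13th power, so a small amount of dimming buys a huge increase in life. A quick sketch, assuming those approximate exponents and a 1000-hour rated life:

    # Rough rule-of-thumb exponents (approximations, not exact physics):
    # light output ~ (V / V_rated)^3.4, lifespan ~ (V_rated / V)^13.
    RATED_LIFE_HOURS = 1000.0   # assumed nominal life at rated voltage
    LIGHT_EXPONENT = 3.4
    LIFE_EXPONENT = 13.0

    def dimmed(voltage_fraction):
        """Relative brightness and expected life at a fraction of rated voltage."""
        brightness = voltage_fraction ** LIGHT_EXPONENT
        life = RATED_LIFE_HOURS / voltage_fraction ** LIFE_EXPONENT
        return brightness, life

    for frac in (1.00, 0.95, 0.90, 0.80):
        b, l = dimmed(frac)
        print(f"{frac:.0%} voltage -> {b:.0%} light, ~{l:,.0f} h life")

Running a bulb at 90% of rated voltage gives roughly 70% of the light but several times the life, which is the curve both the dimmer trick and the very dim Livermore bulb are riding.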


> The firms can coordinate by agreeing on a strategy they deem necessary for the future of the industry

As long as it doesn't fall into the "collusion" prohibitions of the relevant competition law.

> “People of the same trade seldom meet … but the conversation ends in a conspiracy against the public, or in some contrivance to raise prices.”

Adam Smith, The Wealth of Nations (1776)


re: other things, I bet I agree.


I wouldn't be so sure. I've seen analyses making the case that this new phase is unlike previous cycles and DRAM makers will be far less willing to invest significantly in new capacity, especially into consumer DRAM over more enterprise DRAM or HBM (and even there there's still a significant risk of the AI bubble popping). The shortage could last a decade. Right now DRAM makers are benefiting to an extreme degree since they can basically demand any price for what they're making now, reducing the incentive even more.

https://www.tomshardware.com/pc-components/storage/perfect-s...


The most likely direct response is not new capacity, it's older capacity running at full tilt (given the now higher margins) to produce more mature technology with lower requirements on fabrication (such as DDR3/4, older Flash storage tech, etc.) and soak up demand for these. DDR5/GDDR/HBM/etc. prices will still be quite high, but alternatives will be available.


> produce more mature technology ... DDR3/4

...except current peak in demand is mostly driven by build-out of AI capacity.

Both inference and training workloads are often bottlenecked on RAM speed, and trying to shoehorn older/slower memory tech in there would require a non-trivial amount of R&D to widen the memory bus on CPUs/GPUs/NPUs, which is unlikely to happen - those are in very high demand already.


Even if AI stuff does really need DDR5, there must be lots of other applications that would ideally use DDR5 but can make do with DDR3/4 if there's a big difference in price


I mean, AI is currently hyped, so the most natural and logical assumption is that AI drives these prices up primarily. We need compensation from those AI corporations. They cost us too much.


It is still an assumption.


> The shortage could last a decade.

Do we really think the current level of AI-driven data center demand will continue indefinitely? The world only needs so many pictures of bears wearing suits.


The pop culture perception of AI just being image and text generators is incorrect. AI is many things, they all need tons of RAM. Google is rolling out self-driving taxis in more and more cities for instance.


Congrats on engaging with the facetious part of my comment, but I think the question still stands: do you think the current level of AI-driven data center demand will continue indefinitely?

I feel like the question of how many computers are needed to steer a bunch of self-driving taxis probably has an answer, and I bet it's not anything even remotely close to what would justify a decade's worth of maximum investment in silicon for AI data centers, which is what we were talking about.


Data center AI is also completely uninteresting/non-useful for self-driving taxis, or any other self-driving vehicle.


Do you know comparatively how much GPU time training the models which run Waymo costs compared to Gemini? I'm genuinely curious, my assumption would be that Google has devoted at least as much GPU time in their datacenters to training Waymo models as they have Gemini models. But if it's significantly more efficient on training (or inference?) that's very interesting.


My note is specifically about operating them. For training the models, it certainly can help.


A decade is far from indefinitely.


AI is needed to restart feudalism?


No, even the 10% best-case return on AI won't make it. The bubble is trying to replace all human labor, which is why it is a bubble in the first place. No one is being honest that AGI is not possible with this kind of tech. And scale won't get them there.


There's not a difference between "consumer" DRAM and "enterprise" DRAM at the silicon level, they're cut from the same wafers at the end of the day.


Doesn't the same factory produce enterprise (i.e. ECC) and consumer (non-ECC) DRAM?

If there is high demand for the former due to AI, they can increase its production to generate higher profits. This cuts the production capacity of consumer DRAM and leads to higher prices in that segment too. Simple supply & demand at work.


Conceptually, you can think of it as "RAID for memory".

A consumer DDR5 module has two 32-bit-wide buses, which are both for example implemented using 4 chips which each handle 8 bits operating in parallel - just like RAID 0.

An enterprise DDR5 module has a 40-bit-wide bus implemented using 5 chips. The memory controller uses those 8 additional bits to store the parity calculated over the 32 regular bits - so just like RAID 4 (or RAID 5, I haven't dug into the details too deeply). The whole magic happens inside the controller, the DRAM chip itself isn't even aware of it.

Given the way the industry works (some companies do DRAM chip production, it is sold as a commodity, and others buy a bunch of chips to turn them into RAM modules) the factory producing the chips does not even know if the chips they have just produced will be turned into ECC or non-ECC. The prices rise and fall as one because it is functionally a single market.
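
If it helps to see the RAID-4 analogy concretely, here is a toy sketch (my own illustration, not how the actual silicon works): four "data chips" each store a byte and a fifth "check chip" stores their XOR, so the contents of any one known-bad chip can be rebuilt from the other four. Real ECC uses more sophisticated SECDED-style codes that can also locate which bit flipped on their own, but the principle of storing redundant check bits alongside the data is the same.

    from functools import reduce
    from operator import xor

    def write_word(data_bytes):
        """Store 4 data bytes plus 1 parity byte (what the extra DRAM chip would hold)."""
        assert len(data_bytes) == 4
        parity = reduce(xor, data_bytes)
        return list(data_bytes) + [parity]

    def recover_chip(stored, bad_index):
        """Reconstruct one known-bad chip's byte from the other four."""
        return reduce(xor, (b for i, b in enumerate(stored) if i != bad_index))

    word = write_word([0x12, 0x34, 0x56, 0x78])
    print([hex(b) for b in word])        # data + parity as spread across 5 chips
    print(hex(recover_chip(word, 2)))    # rebuilds 0x56 if chip 2 is known bad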


That makes sense, thank you.


At the silicon level, it is the same.

Each memory DIMM/stick is made up of multiple DRAM chips. ECC DIMMs have an extra chip for storing the error-correcting parity data.

The bottleneck is with the chips and not the DIMMs. Chip fabs are expensive and time consuming, while making PCBs and placing components down onto them is much easier to get into.


Got it now, thanks!


Yes, but if new capacity is also redirected to be able to be sold as enterprise memory, we won't see better supply for consumer memory. As long as margins are better and demand is higher for enterprise memory, the average consumer is screwed.


Does it matter that AI hardware has such a shorter shelf life/faster upgrade cycle? Meaning we may see the ram chips resold/thrown back into the used market quicker than before?


Is there still a difference? I have DDR5 registered ECC in my computer.


I mean, the only difference we care about is how much of it is actual RAM vs HBM (to be used on GPUs) and how much it costs. We want it to be cheap. So yes, there's a difference if we're competing with enterprise customers for supply.

I don't really understand why every little thing needs to be spelled out. It doesn't matter. We're not getting the RAM at an affordable price anymore.


Anytime somebody is making a prediction for the tech industry involving a decade timespan I pull out my Fedora of Doubt and tip my cap to m’lady.


Maybe we'll get ECC by default in everything with this?


A LOT of businesses learned during Covid they can make more money by permanently reducing output and jacking prices. We might be witnessing the end times of economies of scale.


The idea is someone else comes in that's happy to eat their lunch by undercutting them. Unfortunately, we're probably limited to China doing that at this point as a lot of the existing players have literally been fined for price fixing before.

https://en.wikipedia.org/wiki/DRAM_price_fixing_scandal


It seems more likely that someone else comes in and either colludes with the people who are screwing us to get a piece of the action or gets bought out by one of the big companies who started all this. Since the rare times companies get caught they only get weak slaps on the wrist where they only pay a fraction of what they made in profits (basically just the US demanding their cut) I don't have much faith things will improve any time soon.

Even China has no reason to reduce prices much for memory sold to the US when they know we have no choice but to buy at the prices already set by the cartel. I expect that if China does start making memory they'll sell it cheap within China and export it at much higher prices. Maybe we'll get a black market for cheap DRAM smuggled out of China though.


I think in part it is a system-level response to the widespread just-in-time approach of those businesses' clients. A just-in-time client is very "flexible" on price when supply is squeezed. After that back and forth I think we'll see a return to some degree of supply buffering (warehousing) to dampen the supply level/price shocks in the pipelines.


I thought that, too, but then the Nexperia shitstorm hit, and it was as if the industry had learned nothing at all from the COVID shortages.


In that case it's far simpler - even IF they wanted to meet the demand, building more capacity is hideously expensive and takes years.

So, it would happen even with best intentions and no conspiracies. AI boom already hiked GPU prices, memory was next in line.


Historically, yes. But we haven't had historical demand for AI stuff before. What happens when OpenAI and NVIDIA monopolize the majority of DRAM output?


Nothing costs trillions.


If you had a trillion dollars you might find some things are for sale that otherwise wouldn't be...


To be fair, nobody HAS a trillion dollar either. They have stuff that may be worth a trillion dollar when sold.


Have you seen our debt recently?..


Is this still the case in 2025, though?

In a traditional pork cycle there's a relatively large number of players and a relatively low investment cost. The DRAM market in the 1970s and 1980s operated quite similarly: you could build a fab for a few million dollars, and it could be done by a fab which also churned out regular logic - it's how Intel got started! There were dozens of DRAM-producing companies in the US alone.

But these days the market looks completely different. The market is roughly equally divided up between SK Hynix, Micron, and Samsung. Building a fab costs billions and can easily take 5 years, if not a decade, from start to finish. Responding to current market conditions is basically impossible; you have to plan for the market you expect years from now.

Ignoring the current AI bubble, DRAM demand has become relatively stable - and so has the price. Unless there's a good reason to believe the current buying craze will last over a decade, why would the DRAM manufacturers risk significantly changing their plans and potentially creating an oversupply in the future? It's not like the high prices are hurting them...


Also, current political turbulence makes planning for the long term extremely risky.

Will the company be evicted from the country in 6 months? A year? Will there be 100% tariffs on competitors' imports? Or 0%? Will there be an anti-labor gov't in effect when the investment might mature, or a pro-labor one?

The bigger the investment, the longer the investment timeframe, and the more sane the returns - the harder it is to make the investment happen.

High risk requires a correspondingly high potential return.

That everyone has to pay more for current production is a side effect of the uncertainty, because no one knows what the odds are of even future production actually happening, let alone the next fancy whiz-bang technology.

But people do need the current production.


My guess is that they will plummet down when the AI bubble bursts.


A wafer is a wafer. The cost is per square mm. It's pure supply and demand.


No, a wafer is very much not a wafer. DRAM processes are very different from making logic*. You don't just make memory in your fab today and logic tomorrow. But even when you stay in your lane, the industry operates on very long cycles and needs scale to function at any reasonable price at all. You don't just dust off your backyard fab to make the odd bit of memory whenever it is convenient.

Nobody is going to do anything if they can't be sure that they'll be able to run the fab they built for a long time and sell most of what they make. Conversely fabs don't tend to idle a lot. Sometimes they're only built if their capacity is essentially sold already. Given how massive the AI bubble is looking right now, I personally wouldn't expect anyone to make a gamble building a new fab.

* Someone explained this at length on here a while ago, but I can't seem to find their comment. Should've favorited it.


Sure, yes the cost of producing a wafer is fixed. Opex didn’t change that much.

Following your reasoning, which is common in manufacturing, the capex needed is already allocated. So, where does the 2x price hike come from if not supply/demand?

The cost to produce did not go up 100%, or even 20%

Actually, DRAM fabs do get scaled down, very similar to the Middle East scaling down oil production.


> So, where does the 2x price hike come from if not supply/demand?

It absolutely is supply/demand. Well, mostly demand, since supply is essentially fixed over shorter time spans. My point is that "cost per square mm [of wafer]" is too much of a simplification, given that it depends mostly on the specific production line and also ignores a lot of the stuff going on down the line. You can use it to look at one fab making one specific product in isolation, but it's completely useless to compare between them or when looking at the entire industry.

It's a bit like saying the cost of cars is per gram of metal used. Sure, you can come up with some number, but what is it really useful for?


DRAM/flash fab investment probably did get scaled down due to the formerly low prices, but once you do have a fab it makes sense to have it produce flat out. Then that chunk of potential production gets allocated into DRAM vs. HBM, various sorts of flash storage etc. But there's just no way around the fact that capacity is always going to be bottlenecked somehow, and a lot less likely to expand when margins are expected to be lower.


> Sometimes they're only built if their capacity is essentially sold already.

"Hyperscalers" already have multi-year contracts going. If the demand really was there, they could make it happen. Now it seems more like they're taking capacity from what would've been sold on the spot or quarterly markets. They already made their money.


I just looked at the invoice for my current PC parts that I bought in April 2016: I paid 177 EUR (~203 USD) for 32GB (DDR4-2800).

It's kinda sad when you grow up in a period of rapid hardware development and now see 10 years going by with RAM $/GB prices staying roughly the same.


Well, I've experienced both to some degree in the past. The previous long stretch with very similar hardware performance was when PCs were exorbitantly expensive and the Commodore 64 was the main "home computer" (at least in my country) through the late 80s and early 90s.

That period of time had some benefits. Programmers learned to squeeze absolutely everything out of that hardware.

Perhaps writing software for today's hardware is again becoming the norm rather than being horribly inefficient and simply waiting for CPU/GPU power to double in 18 months.

I was lucky. I built my AM5 7950X Ryzen PC with 2x48GB DDR5 two years ago. I just bought a 4x48GB kit a month ago with the idea of building another home server with the old 2x48GB kit.

Today my old G.Skill 2x48GB kit costs double what I paid for the 4x48GB.

Furthermore, I bought two used RTX 3090s (for AI) back then. A week ago I bought a third one for the same price (for VRAM in my server).


Olds remember the years around '95 when RAM stayed the exact same price per megabyte for what seemed like a decade.


I paid about GBP 20K for the 192MB RAM in a Sun SPARC 5 workstation in 1995. That’s maybe $27K USD in 1995 dollars. Gulp.


There is or was a website that would let you plug in an Apple computer, and then tell you what you'd be worth if instead you'd bought Apple stock.

I put my G4 PowerBook into it once, and then vowed never to look at it again.


> It's kinda sad when you grow up in a period of rapid hardware development and now see 10 years going by with RAM $/GB prices staying roughly the same.

But you’re cherry picking prices from a notable period of high prices (right now).

If you had run this comparison a few months ago or if you looked at averages, the same RAM would be much cheaper now.

We’re just consuming a lot of DRAM in general.


Aside, $203 USD back then would be about $276 USD after inflation. Not a primary effect, but contributory.


I think that goes to show that official inflation benchmarks are not very practical/useful in terms of buckets of things that people actually buy or desire. If the bucket that measured inflation included computer parts (GPUs?), food and housing - i.e. all the things that a geek really needs - inflation would be way higher...


> If the bucket that measured inflation included computer parts (GPUs?), food and housing - i.e. all the things that a geek really needs - inflation would be way higher...

A house is $500,000

A GPU is $500

You could put GPUs into the inflation bucket and it wouldn’t change anything. Inflation trackers count cost of living and things you pay monthly, not one time luxury expenses every 4 years that geeks buy for entertainment.


Also we’re likely comparing RAMs at different speeds and memory bandwidth.


Also need to account for the dollar decline vs other currencies (which yes is possibly somewhat factored into dollar inflation so you'd have to do the inflation calculation in Euros then convert to dollars accounting for the decline in value).


I bought a bunch of hard drives in 2021 (16TB Seagate Exos) that are now $50-$100 more expensive. It's depressing.


If the sticker price stayed the same since 2016, it got about 35% cheaper due to inflation.


Ordered some servers 6 months ago at ~12k USD per unit.

Same order, same bill of materials: 17.5k USD per unit today.

That is roughly a 5.5k USD increase for 768GB of DDR5 ECC memory and the four 2TB NVMe SSDs.


I just gave up and built an AM4 system with a 3090 because I had 128GB of DDR4 UDIMMs on hand; the whole build cost less than just the memory would have for an AM5/DDR5 build.

Really wish I could replace my old Skylake-X system, but even DDR4 RDIMMs for an older Xeon are crazy now, let alone DDR5. Unfortunately I need slots for 3x Titan Vs for the 7.450 TFLOPS each of FP64. Even the 5090 only does 1.637 TFLOPS for FP64, so I'm just hoping that old system keeps running.


If you don't need full IEEE 754 double precision, the Ozaki scheme (emulation with tensor cores) might do the trick. It's been added (just a little bit) to cuBLAS recently.
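
The underlying trick, as I understand it, is splitting each high-precision operand into pieces with fewer mantissa bits so that products of the pieces are exact at lower precision, then summing the partial products; the Ozaki scheme applies this to whole matrices so the pieces can be multiplied on tensor cores. A scalar sketch of that general idea (the classic Dekker split/two-product, not the actual cuBLAS API):

    def split(a, bits=27):
        """Split a double into hi + lo, where hi carries roughly half the mantissa bits."""
        c = (2.0 ** bits + 1.0) * a
        hi = c - (c - a)
        lo = a - hi
        return hi, lo

    def two_product(a, b):
        """Return (p, err) such that a*b == p + err exactly (p is the rounded product)."""
        p = a * b
        a_hi, a_lo = split(a)
        b_hi, b_lo = split(b)
        err = ((a_hi * b_hi - p) + a_hi * b_lo + a_lo * b_hi) + a_lo * b_lo
        return p, err

    p, err = two_product(1.0 / 3.0, 3.0)
    print(p, err)   # err captures the rounding that a single multiply throws away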


My 64gb DDR5 kit started having stability issues running XMP a few weeks out of warranty. I bought it two years ago. Looked into replacing it and the same kit is now double the price. Bumping the voltage a bit and having better cooling gets it through memtest thankfully. The fun of building your own computer is pretty much gone for me these days.


Doubled in the last 4 months https://www.youtube.com/watch?v=o5Zc-FsUDCM

Upgraded by adding 64GB.. last Friday I sold the 32 GB I took out for what I paid for the 64 GB in July... insane


Time to start scouring used-PC sales to reclaim the RAM and sell it for a profit?


Have you not noticed the domain of the submitted article? Others are way, way ahead on that already.

(Including the submitter. In their comment history is "Tip: You can sell used server RAM or desktop modules through BuySellRam to recover value from old hardware." at https://news.ycombinator.com/item?id=45800881 and all of the submissions of this domain are from this user: https://news.ycombinator.com/from?site=buysellram.com )


but why wouldn't that used-PC simply increase in price due to the components becoming more expensive?


Information asymmetry


If you can find used PCs being liquidated with DDR4 RAM that is fast enough for a modern build, then you might.

Old RAM that comes out of the PCs being sold at fire sale prices isn’t really in demand though. Even slower DDR4 grades aren’t seeing much demand.


You should use OBS to screen record rather than video your computer screen with your phone


Such is life. I suggest finding a less volatile hobby, like crocheting.

Actually, the textile market is pretty volatile in the US these days with Joann out of business. Pick a poison, I guess? There's little room for stability in a privately-owned world.


Why do we all need 128GB now? I was happy with 32.

Close a few Chrome tabs, and save some DDR5 for the rest of us. :-)


Last night, while writing a LaTeX article, with Ollama running for other purposes, Firefox with its hundreds of tabs, multiple PDF files open, my laptop's memory usage spiked up to 80GB RAM usage... And I was happy to have 128GB. The spike was probably due to some process stuck in an effing loop, but the process consuming more and more RAM didn't have any impact on the system's responsiveness, and I could calmly quit VSCode and restart it with all the serenity I could have in the middle of the night. Is there even a case where more RAM is not really better, except for its cost?


> Is there even a case where more RAM is not really better, except for its cost?

It depends. It takes more energy, which can be undesirable in battery powered devices like laptops and phones. Higher end memory can also generate more heat, which can be an issue.

But otherwise more RAM is usually better. Many OS's will dynamically use otherwise unused RAM space to cache filesystem reads, making subsequent reads faster and many databases will prefetch into memory if it is available, too.


Firefox is particularly good at having lots of tabs open and not using tons of memory.

    $ ~/dev/mozlz4-tool/target/release/mozlz4-tool \
        "$(find ~/Library/Application\ Support/Firefox/Profiles/ -name recovery.jsonlz4 | head -1)" | \
        jq -r '[.windows[].tabs | length] | add'
    5524
Activity monitor claims firefox is using 3.1GB of ram.

    Real memory size:      2.43 GB
    Virtual memory size: 408.30 GB
    Shared memory size:  746.5  MB
    Private memory size: 377.3  MB
That said, I wholeheartedly agree that "more RAM less problems". The only case I can think of when it's not strictly better to have more is during hibernation (cf sleep) when the system has to write 128GB of ram to disk.


In my experience firefox is "pretty good" about having lots of tabs and windows open if you don't mind it crashing every week or two.


I've not had a crash on Firefox in like a decade, basically since the Quantum update in like 2016.


Try living like I do. I currently have 1,838 tabs open across 9 different windows. On second thought, maybe don't live like I do...


I've got ~5k+ tabs, and I've also seen basically zero crashes in the last decade. I'm on Macos, not very many extensions though one of them is Sidebery (and before that Tree Style Tabs) which seems to slow things down quite a lot.


Why do you need all of these tabs open? How do you find what you need?


I likely don't need all the tabs. Some were opened only because they might be useful or interesting. Others get opened because they cover something I want to dig into further later on, but in this case it's the buildup of multiple crash>restore cycles. Eventually I'll get to each tab and close it or save the URL separately until it's back to 0, but even in that process new tabs/windows get opened so it can take time.


On consumer chips the more memory modules you have the slower they all run. I.e. if you have a single module of DDR5 it might run at 5600MHz but if you have four of them they all get throttled to 3800MHz.


Mainboards have two memory channels, so you should be able to reach 5600MHz on both, and dual-slot mainboards have better routing than quad-slot mainboards. This means the practical limit for consumer RAM is 2x48GB modules.
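
For a rough sense of what those numbers mean, here is a back-of-the-envelope peak bandwidth calculation (assuming 64 data bits per channel, i.e. a DDR5 DIMM's two 32-bit subchannels combined, and ignoring real-world efficiency):

    def peak_bandwidth_gbps(channels, mt_per_s, bus_bits=64):
        """Theoretical peak in GB/s: channels * bus width in bytes * transfers per second."""
        return channels * (bus_bits / 8) * mt_per_s * 1e6 / 1e9

    print(peak_bandwidth_gbps(2, 5600))   # dual-channel DDR5-5600 -> ~89.6 GB/s
    print(peak_bandwidth_gbps(2, 3800))   # throttled to 3800 MT/s -> ~60.8 GB/s
    print(peak_bandwidth_gbps(4, 3200))   # quad-channel DDR4-3200 -> ~102.4 GB/s

Which is also why the quad-channel DDR4 platforms discussed in the replies below can land in the same ballpark as dual-channel DDR5 on raw bandwidth.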


Intel's consumer processors (and therefore the mainboards/chipsets) used to have four memory channels, but around the year 2020 this was suddenly limited to two channels, since the 12th generation (AMD's consumer processors have always had two channels, with the exception of Threadripper?).

However this does not make sense, as for more than a decade the processors have only grown by increasing the number of threads, so two channels sounds like a negligent, deliberately imposed bottleneck on memory access if one uses all those threads (say, 3D rendering, video post-production, games, and so on).

And if one wants four channels to get past that imposed bottleneck, the mainboards that nowadays have four channels aren't aimed at consumer use, so they come with one or two USB connectors, three or four LAN connectors, and prohibitive prices.

We are talking about ten-year-old consumer quad-channel DDR4 machines, widely available, that remain competitive with current consumer ones, if not better. It is as if everything had been frozen all these years (and who knows what this pattern leads to).

Now it is rumoured that AMD may opt for four channels for its consumer lines due to the increased number of pin connectors (good news if true).

It is a bad joke what the industry is doing to customers.


> Intel's consumer processors (and therefore the mainboards/chipsets) used to have four memory channels, but around the year 2020 this was suddenly limited to two channels, since the 12th generation (AMD's consumer processors have always had two channels, with the exception of Threadripper?).

You need to re-check your sources. When AMD started doing integrated memory controllers in 2003, they had Socket 754 (single channel / 64-bit wide) for low-end consumer CPUs and Socket 940 (dual channel / 128-bit wide) for server and enthusiast desktop CPUs, but less than a year later they introduced Socket 939 (128-bit) and since then their mainstream desktop CPU sockets have all had a 128-bit wide memory interface. When Intel later also moved their memory controller from the motherboard to the CPU, they also used a 128-bit wide memory bus (starting with LGA 1156 in 2009).

There's never been a desktop CPU socket with a memory bus wider than 128 bits that wasn't a high-end/workstation/server counterpart to a mainstream consumer platform that used only a 128-bit wide memory bus. As far as I can tell, the CPU sockets supporting integrated graphics have all used a 128-bit wide memory bus. Pretty much all of the growth of desktop CPU core counts from dual core up to today's 16+ core parts has been working with the same bus width, and increased DRAM bandwidth to feed those extra cores has been entirely from running at higher speeds over the same number of wires.

What has regressed is that the enthusiast-oriented high-end desktop CPUs derived from server/workstation parts are much more expensive and less frequently updated than they used to be. Intel hasn't done a consumer-branded variant of their workstation CPUs in several generations; they've only been selling those parts under the Xeon branding. AMD's Threadripper line got split into Threadripper and Threadripper PRO, but the non-PRO parts have a higher starting price than early Threadripper generations, and the Zen 3 generation didn't get non-PRO Threadrippers.


At some point the best "enthusiast-oriented HEDT" CPUs will be older-gen Xeon and EPYC parts, competing fairly in price, performance and overall feature set with top-of-the-line consumer setups.


Based on historical trends, that's never going to happen for any workloads where single-thread performance or power efficiency matter. If you're doing something where latency doesn't matter but throughput does, then old server processors with high core counts are often a reasonable option, if you can tolerate them being hot and loud. But once we reached the point where HEDT processors could no longer offer any benefits for gaming, the HEDT market shrank drastically and there isn't much left to distinguish the HEDT customer base from the traditional workstation customers.


I'm not going to disagree outright, but you're going to pay quite a bit for such a combination of single-thread peak performance and high power efficiency. It's not clear why we should be regarding that as our "default" of sorts, given that practical workloads increasingly benefit from good multicore performance. Even gaming is now more reliant on GPU performance (which in principle ought to benefit from the high PCIe bandwidth of server parts) than CPU.


I said "single-thread performance or power efficiency", not "single-thread performance and power efficiency". Though at the moment, the best single-thread performance does happen to go along with the best power efficiency. Old server CPUs offer neither.

> Even gaming is now more reliant on GPU performance (which in principle ought to benefit from the high PCIe bandwidth of server parts)

A gaming GPU doesn't need all of the bandwidth available from a single PCIe x16 slot. Mid-range GPUs and lower don't even have x16 connectivity, because it's not worth the die space to put down more than 8 lanes of PHYs for that level of performance. The extra PCIe connectivity on server platforms could only matter for workloads that can effectively use several GPUs. Gaming isn't that kind of workload; attempts to use two GPUs for gaming proved futile and unsustainable.


You have a processor with more than eight threads, at the same bus bandwidth: what do you choose, a dual-channel or a quad-channel processor?

That number of threads will hit a bottleneck accessing memory through only two channels.

I don't understand why you brought up single-threading in your response to the user, given that processors reached a frequency limit of 4 GHz, or 5 GHz with overclocking, a decade ago. This is why they increased the number of threads, but if they reduce the number of memory channels for consumer/desktop...


What is the best single-thread performance possible right now? With overclocked fast RAM.


But you can easily have 128GB and still be on 2 modules.


Larger-capacity modules are usually slower though. The fastest RAM typically comes in 16GB or 32GB capacities.

The OP is talking about a specific niche of boosting single-thread performance. It's common with gaming PCs, since most games are single-thread bottlenecked. A 5% difference may seem small, but people are spending hundreds or thousands for smaller gains… so buying the fastest RAM can make sense there.


> Is there even a case where more RAM is not really better, except for its cost?

RAM uses power.


It also consumes more physical space. /s


Not really /s, since it is a limited resource in e.g. Laptops.


It depends on what you are doing.

If you are working on an application that has several services (database, local stack, etc.) as docker containers, those can take up more memory. Especially if you have large databases or many JVM services, and are running other things like an IDE with debugging, profiling, and other things.

Likewise, if you are using many local AI models at the same time, or some larger models, then that can eat into the memory.

I've not done any 3D work or video editing, but those are likely to use a lot of memory.


640K ought to be enough for anyone.


Why did you waste all your money on 32gb when 4gb is enough? Why did we all need 32gb?


Bloated OS loaded with things the buyer does not need and bloated JS ecosystem probably.


Get this. Pen and paper. No need for silicon at all.

You're welcome.


Having recently upgraded to 192GB from 96GB, I'm pretty happy. I run many containers, have 20 windows of VSCode open, and so on. Plus AI inference on the CPU when 48GB of VRAM is not enough.


Exactly. I recently doubled my RAM and have now 4GB.


I like to tell people I have 128GB. It's pretty rare to meet someone like me that isn't swapping all the time.


I also tell people that. It’s not true, but it’s free.


Wow, no kidding. I checked my BOM for the 9950 build I did a year ago, RAM price has doubled for the exact same DDR5-6000 sticks.


> I have no idea if/when prices will come back down but it sucks.

Usually after the companies are fined for price-fixing

https://en.wikipedia.org/wiki/DRAM_price_fixing_scandal


Damn I bought a whole computer with 128GB RAM & 16-core Ryzen CPU for £325 a few months ago.


and the guy that sold it to you stole it from where?


new?


Interesting that Samsung put their prices up 60% today, and a retailer who bought their stock at the old price feels compelled to put their prices up 2.5x.

When the AI bubble bursts we can get back to the old price


The cost of inventory on the shelves basically doesn’t matter. The only thing that matters is the market rate.

If those retailers didn’t increase their prices when the price hike was announced, anyone building servers would have instantly purchased all of the inventory anyway at the lower prices, so there wouldn’t actually have been weeks of low retail RAM prices for everyone.

Every once in a while you can catch a retailer whose pricing person missed the memo and forgot to update the retail price when the announcement came out. They go out of stock very rapidly.


> If those retailers didn’t increase their prices when the price hike was announced, anyone building servers would have instantly purchased all of the inventory anyway at the lower prices

But that retailer would have made a lot of money in a very short time.


In the scenario where they don't raise prices, they sell out immediately. In the scenario where they do raise prices, it's too expensive so you don't buy it. In the scenario where they keep prices low, and do a lottery to see who can buy them, you don't get picked.

No matter what, you are not getting those modules at the old price. There are few things that trip up people harder than this exact scenario, and it happens everywhere. Concert tickets, limited releases, water during crises, hot Christmas gift, pandemic GPUs, etc.

Once understood you can stop getting mad over it like it's some conspiracy. It's fundamental and natural market behavior.


> I have no idea if/when prices will come back down but it sucks.

Years, or when the AI bubble pops, whatever comes first.

Similar situation with QLC flash and HDDs btw.


I guess I lucked out. I bought a 768GB workstation (with 9995wx CPU and rtx 6000 Pro Blackwell GPU) in August. 96GB modules were better value than 128GB. That build would be a good bit pricier today looks like.


what's your usecase for such a build?


Yeah, you are not alone in being annoyed here. I think we need to penalise all who drive the prices up - that includes the manufacturers but also AI companies etc...

Those price increases are not normal at all. I understand that most of it still comes from market demand, but it is also skewing the market in unfair ways. Such increases smell of criminal activity too.


> I think we need to penalise all who drive the prices up - that includes the manufacturers but also AI companies etc...

You want to penalize companies for buying things and penalize companies for selling things at market rate?

There are a lot of good examples through history about how central planning economics and strict price controls do not lead to good outcomes. The end result wouldn’t be plentiful cheap RAM for you. The end result would be no RAM for you at all because the manufacturers choose to sell to other countries who understand basic economics.


I think there's a case for banning the sale of services well below the marginal cost of supplying that service - loss leaders, or "dumping" - when it's done on such a scale as AI marketing.


I'm still on DDR3 :)


it's cyclical. just wait 10 years


Good advice for the immortal. For everyone else, "do something else instead" is more practical.


I think it's somewhat useful long term advice, and I would add that parts prices tend to be asynchronous.

Building a PC in a cost efficient manner generally requires someone to track parts prices over years, buy parts at different times, and buy at least a generation behind.

The same applies to many other markets/commodities/etc...


That is terrible. It has been less than half a year! If all countries keep building AI data centers, it will take a long time for prices to get back to a reasonable level.



