Cell phones have been through how many generations between the 80s and now? All the past generations are obsolete, but the investment in improving the technology (which is really a continuation of WWII era RF engineering) means we have readily available low cost miniature comms equipment. It doesn’t matter that the capex on individual phones was wasted.
Same for GPUs/LLMs? At some point things will mature and we’ll be left with plentiful, cheap, high end LLM access, on the back of the investment that has been made. Whether or not it’s running on legacy GPUs, like some 90s fiber still carries traffic, is meaningless. It’s what the investment unlocks.
I'm worried all that cheap, easily accessible LLM capacity will be serving us ads if we're lucky, and subtly pushing us toward brands that pay, if we're not.
If AI says "don't buy a Subaru, it's not worth the money," then Subaru pays attention, and they're willing to pay to get a better rec. Same for universities. Students who see phrases like "if the degree is from Brown, flush it down" (ok, hyperbole, but still) are going to pick different schools.
I think people have more memetic immunity than you're giving them credit for. We're in the early days, people don't fully understand how to treat ChatGPT's outputs.
Soon enough, asking an LLM a question and blindly trusting the answer will be seen as ridiculous, like getting all your news from Fox News or Jacobin, or reading ads on websites. Human beings can eventually tell when they're being manipulated, and they just... won't be.
We've already seen how this works. Grok gets pushed to insert some wackjob conservative talking point, and then devolves into a mess of contradictions as soon as it has to rationalize it. Maybe it's possible to train an LLM to actually manipulate a person towards a specific outcome, but I do not think it will ever be easy or subtle.
You mention Fox News and people knowing when they're being manipulated, and I struggle to see how that squares with current reality: Fox News is the most popular news network, and rising populism relies very much on manipulation.
It’s a tried and true method of Silicon Valley VCs. Produce something as a loss leader. Build a moat. Then extract rent. Not only can you stop having to produce anything of value, you can even degrade your product and people won’t be able to leave thanks to lock-in.
We wonder why the US has lost, or is losing, competitiveness with China in most industries. Their government has focused on public investment and public ownership of natural monopolies, preventing rent extraction and keeping the cost of living lower. That means employers don’t have to pay workers as much, so their businesses can be more competitive. Contrast with the US, whose working class is parasitized by various forms of rent extraction: land, housing, medicine, subscription models, etc. US employers effectively finance these inefficiencies. It’s almost like the US wants to fall behind.
> At some point things will mature and we’ll be left with plentiful, cheap, high end LLM access, on the back of the investment that has been made.
Okay, and? What if the demand for LLMs plummets because people get disillusioned due to the failure to solve issues like hallucinations? If nobody is willing to buy, who cares how cheap it gets?
The problem with generative AI is that its economics are more like steel's. Metals used to be extremely valuable due to the effort needed to extract them from ores, but eventually the steel industry matured and steel turned into a commodity with hardly any differentiation. You won't be able to back up your trillion-dollar valuations if a competitor can undercut you.
You're comparing a toy that had no knowledge exchange, no learning or improvement capability, and no cult-like enterprise adoption with LLMs... You might want to rethink your counterexample.
Thank you for proving my point. People waiting for a big, sudden crash must insist that the growth is driven solely by hype and FOMO. This is absurd. We’re talking about professionals with decades of experience using these things for hours a day in their area of expertise.
Have you seen the "average" users of LLMs? They mostly don't care about hallucinations. It's like that joke[1], "Computer says no": it doesn't matter what's true or real, only that the computer said it, so now it's true in their mind.
Personal anecdote: I work with beginner IT students, and a new, fun (<- sarcasm, not fun) thing for me is the amount of energy they spend arguing with me about basic, easily proven Linux functionality. It's infuriating: the LLM is more believable than the paid professional who's been doing it for 30 years...
I find it highly doubtful that hallucinations, if unsolved, will have any real negative effect on LLM adoption.
It won't happen, thanks to Gell-Mann amnesia. It "helps" that LLMs never admit to not knowing something. So from the average user's point of view, it looks like the agents know everything and are only under-educated in the one domain where the user is actually the specialist.
> It's infuriating, the LLM is more believable than the paid professional who's been doing it for 30 years...
Swallow your fury, and accept the teachable moments; AI isn’t going away and beginners will continue to trust it when they lack the skills to validate its information on their own.
LLMs are less of a miraculous new world than the hype machine says they are, but they are not nothing. They have some actual uses. They will fulfill those uses. There will be some people who still buy.