Hacker News

People think that because AI cannot replace a senior dev, it's a worthless con.

Meanwhile, pretty much every single person in my life is using LLMs almost daily.

Guys, these things are not going away, and people will pay more money to use them in the future.

Even my mom asks ChatGPT to make a baking applet from a picture of a recipe she uploads, one that creates a simple checklist for adding ingredients (she forgets ingredients pretty often). She loves it.

This is where LLMs shine for regular people. She doesn't need it to create a 500k LOC, turn-key, baking-tracking, SaaS-AWS-back-end, 5-million-recipes-on-tap kitchen assistant app.

She just needs a bespoke, one-off checklist.



Is she going to pay enough to fund the multi-trillion-dollar cost of running the current AI landscape?


Yeah, she is, because when reality sets in, these models will probably cost about as much per month as a cellphone or internet plan. And training is the main money sink, whereas inference is cheap.

500,000,000 people paying $80/mo is $480B a year in revenue, roughly a 5-yr ROI on a $2T investment.
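A quick back-of-envelope check of that claim. All the figures here are the commenter's hypotheticals (subscriber count, price point, total investment), not real financials, and the payback is gross revenue, ignoring inference and operating costs:

```python
# Back-of-envelope: years for 500M subscribers at $80/mo to
# recoup a $2T investment (the thread's hypothetical numbers).
subscribers = 500_000_000
price_per_month = 80              # USD, hypothetical price point
investment = 2_000_000_000_000    # USD, the $2T figure from the comment

annual_revenue = subscribers * price_per_month * 12
years_to_recoup = investment / annual_revenue

print(f"Annual revenue: ${annual_revenue / 1e9:.0f}B")          # $480B
print(f"Years to recoup (gross, ignoring costs): {years_to_recoup:.1f}")  # ~4.2
```

So the "roughly 5-yr ROI" holds up arithmetically, as long as you treat revenue as pure payback.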

I cannot believe that on a tech forum I need to explain the "get them hooked on the product, then jack up the price" business model that probably keeps 40% of the people here employed.

Right now they are (very successfully) getting everyone dependent on LLMs. They will pull the rug, and people will pay to get it back. And none of the labs care if 2% of people use local/Chinese models.


> And training is the main money sink, whereas inference is cheap.

False. Training happens once for a time period, but inference happens again and again every time users use the product. Inference is the main money sink.

"according to a report from Google, inference now accounts for nearly 60% of total energy use in their AI workloads. Meta revealed something even more striking: within their AI infrastructure, power is distributed in a 10:20:70 ratio among experimentation, training, and inference respectively, with inference taking the lion’s share."

https://blogs.dal.ca/openthink/the-hidden-cost-of-ai-convers...


They get paid for inference; those tokens might as well be monetary tokens.


This is exactly the problem.

Companies are currently being sold on the idea that they can replace employees with little agents that cost $20 to $200 a month.

But then they realize that the $200 lasts about 3.5 hours on day 1 of the month, and the rest is charged by the token, which then costs as much as or more than the employee did, but with a nice guaranteed quota of non-determinism and failure rate included.
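To illustrate the commenter's point, a rough sketch of how quickly a flat allowance runs out under per-token pricing. The burn rate and token price below are made-up assumptions for the example, not any vendor's actual rates:

```python
# Hypothetical numbers: assume an always-on agent burns 2M tokens/hour
# at a blended $0.03 per 1K tokens. Neither figure is a real vendor rate.
allowance = 200.0            # USD/month flat allowance
tokens_per_hour = 2_000_000
price_per_1k_tokens = 0.03   # USD

cost_per_hour = tokens_per_hour / 1000 * price_per_1k_tokens  # $60/hour
hours_until_exhausted = allowance / cost_per_hour

print(f"Cost per hour: ${cost_per_hour:.2f}")
print(f"Allowance exhausted after {hours_until_exhausted:.1f} hours")  # ~3.3
```

Under those assumed rates the $200 is gone in roughly a morning; everything after that is metered per token.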


I personally don't know a single person who would pay $80 for some LLM. Most people I know pay nothing, or got a 1-year sub with a phone purchase or similar.

Also, everyone here conveniently forgets the huge upfront hardware and datacenter investment that MS has already made. That cost alone will never be recouped at current prices.

If you can't even run the thing close to profitable, then how will you ever actually profit?

But don't worry guys, your robotaxi will recoup your tesla purchase within a year while you sleep.


The problem is when $20 or $40 a month is expected to cover inference that costs $50 or $80 a month to provide. Electricity is not going to get cheaper.


> 500,000,000 people paying $80/mo

Simply not going to happen


> 500,000,000 people paying $80/mo

Or better yet, you just need 100 people paying 400 million each to get the same amount!

> "Get them hooked on the product, then jack up the price"

That only works if the product is actually good. The average person isn't going to be paying EIGHTY dollars a month to generate recipes or whatever, that's just delusional


For $960 a year, you could probably buy a recipe-ingredients-tracker-app.


I could sell my car and get my groceries delivered and save a ton of money.

Thankfully, like LLMs, cars have wide utility and use cases. I don't use my car solely for driving to the grocery store.


I think there are 2 things at play here. LLMs are, without a doubt, absolutely useful/helpful, but they have shortcomings and limitations (though often still worth the cost). That said, businesses trying to bolt "AI" onto their products have a much lower success rate than people using LLMs directly.

I dislike almost every AI feature in software I use but love using LLMs.


This false dichotomy is still frustratingly all over the place. LLMs are useful for a variety of benign everyday use cases; that doesn't mean they can replace a human for anything. And if those benign use cases are all they're good at, then the entire AI space right now is maybe worth $2B/year, tops. Which is still a good amount of money! Except that's roughly what OpenAI spends every minute, and it's definitely not "the next invention of fire" like Sam Altman says.


Even these everyday use-cases are infinitely varied and can displace entire industries. E.g. ChatGPT helped me get $500 in airline delay compensation after multiple companies like AirHelp blew me off: https://news.ycombinator.com/item?id=45749803

For reference, AirHelp alone had revenue of $153M last year (even without my money ;-P): https://rocketreach.co/airhelp-profile_b5e8e078f42e8140

This single niche industry as a whole is probably worth billions alone.

Now multiply that by the number of niches that exist in this world.

Then consider the entire universe of formal knowledge work, where large studies (from self-reported national surveys to empirical randomized controlled trials on real-world tasks) have already shown significant productivity boosts, in the range of 30%. Now consider those workers' salaries, and how much companies would be willing to pay to make their employees more productive.

Trillions is not an exaggeration.


Use case == next iteration of "You're fired" is more like it.


Sure, as a search engine replacement it's totally fine and works reasonably well, but that's also because Google as a search engine has regressed dramatically, aggressively pushing products at the top of the search results instead of answering questions.

But "a slightly better search engine" sounds much less interesting to investors than "will completely transform human civilization" ;)


Search engines did completely transform human civilization, and will continue to be needed.


That was the internet; search engines were just the logical consequence, making the information stored on the internet more accessible. And today's AI is also more or less 'just' a lossy compression of the information accumulated on the internet over the last 50 years. Without the internet, no AI.


The internet without search engines is like the printing press without distribution.


It's exactly the same situation as Tesla "self driving". It's sold and marketed, in no uncertain terms and VERY EXPLICITLY, as something that will replace senior devs.

As you admit, it can't do that. And everyone involved knows it.

How is that anything other than a con?


Because (as per OpenAI at least) only 4% of tokens generated are for writing software. Don't lose sight of the world because you swim in tech 24/7.


> People think that because AI cannot replace a senior dev, it's a worthless con.

Quite the strawman. There are many points between “worthless” and “worth 100s of billions to trillions of investment”.


Are your mother's cooking recipes going to cover the billions and even trillions being spent here? I somehow doubt it, and it's funny to me that the killer use case the hypesters reach for is stupid inane shit like this (no offense to your mom, but a recipe generator isn't something we should be speedrunning global economic collapse for).


is this really the best use case you could come up with? if so, that says it all really.



