Not sure about that; they're all losing money. OpenAI recently raised $6.5 billion but is losing $5 billion a year; that's just not sustainable. In fact, it's the local LLMs that are thriving, predictably, because they run on your own hardware.
The mistake in that article is the assumption that these companies collecting those gigantic VC funding rounds are looking to stay ahead of the pack and be there even 10 years down the road.
That's a fundamental misunderstanding of the (especially) US startup culture in the last maybe 10-20 years. Only very rarely is the goal of the founders and angel investors to build an actual sustainable business.
In most cases the goal is to build up enough perceived value, through wild growth financed by VC money and by fueling hype, that a subsequent IPO will let the founders and initial investors recoup their investment plus some profit on top. Or, failing that, find someone to acquire the company before it reaches the end of its financial runway.
And then the poor schmucks who bought the business are left holding the bag (and footing the bill). Nobody cares if the company becomes irrelevant or even goes under at that point; everyone involved has already recouped their expense. If the company stays afloat, great: that's a bonus, but not required.
Well, yes. Replace the various music and book publishing mills with LLMs for even more low quality drivel filling the marketplaces because now even the already low barrier of having to actually pay someone to produce it will be removed.
That's definitely going to be an improvement. Not.
Normal? Absolutely not. It signals loud and clear that your time is not worth even a reply from the recruiter/HR and that you are being treated as a disposable piece of crap. You can imagine how they would treat you if you actually got hired.
There is zero excuse for wasting the candidate's time like this. Even less so today, with all the "AI" automation that recruiting companies and HR departments use: sending a form rejection e-mail is literally a one-click affair.
It is good that you are rooting for the poor industry because they are "being screwed".
Sad that you didn't consider the content creators, the people whose faces, voices, writings, art, personal information, etc. are being used without consent and without any compensation, as the ones "being screwed" here.
Gatekeeping the information that is learned from this data is not a right they have been granted.
I do not see an AI model/weights as a derivative work of the individual works used to train it, provided that it can produce new works that would not have violated the copyright of the originals had they been made manually.
Copyright supports creativity by limiting creativity. It's paradoxical. But more recently we have seen open source, wikipedia and open scientific publication pushing this trend back.
They've generally already been screwed by distributors, they're being screwed further by gen AI, and in the case where large corporations which own their works are allowed to extract further rent from those works from those creators, they will be triply-screwed. It's a shit situation but the existing copyright system is not a good solution for the average creator.
It's kind of like the history of the web with regular human consumers and various attempts to allow microtransactions, which receded in favor of ads and tracking.
If the pattern holds, the "cost" of training non-paywalled data will be attempts to hack/influence the model, as opposed to hacking humans.
Content creators and artists will have to find a way to deal with AI (as a broad term): both in how they will or won't use it, and in how it affects or threatens their jobs and livelihoods.
People in the AI business (using this term very loosely here) are worried that the biggest players will effectively monopolize AI with the help of badly thought-out (or plainly corrupt) regulations that may be put in place in the near future.
These two issues are not mutually exclusive problems, and these two groups are not fighting against each other. It feels like typical divide-and-conquer: with your view, regular people end up pitted against each other, when they should be working together toward a solution to both of their problems and fighting bad regulation (and monopolies in AI).
Generative "AI" enthusiasts build their models by ripping off creative human beings for profit.
Generative "AI" enthusiasts fear that bigger fish will squeeze them out of the niches they created for themselves through theft.
Suggesting that artists and genAI enthusiasts should "work together" to defend the regulatory environment of the latter is ludicrous; it would be like a burglar breaking into my house, stealing my appliances, and asking for my support in his campaign for mayor!
I agree that web scraping is a shady business in many cases, but there is definitely a difference between setting up a few mobile proxies for yourself and using devices and networks that belong to other people without their knowledge, right up until they cannot access some websites because a bot was detected on their network.
Some of the affected people do know they are being used for this; some companies pay them based on the number of requests they pass through. But they ignore the potentially harmful consequences for themselves, as well as the consequences for the scraped sites (from loss of performance to being unable to serve content at all) and for those sites' intended users.