I’m surprised Hacker News isn’t questioning this in the slightest?
Is anyone really buying they laid off 4k people _because_ they really thought they’d replace them with an LLM agent? The article is suspect at best and this doesn’t even in the slightest align with my experience with LLMs at work (it’s created more work for me).
The layoff always smelled like it was because of the economy.
Hmm, actually lines up for me at least. It was a pretty big news item a few months ago when Salesforce did this drastic reduction in their Customer Service department, and Marc Benioff raved about how great AI was (you might have just missed it).
I’m beginning to doubt very much that will happen. AI/LLMs are already trained on 99% of all accessible text in the world (I made that stat up, but I think I’m not far off). Where will the additional intelligence come from that Salesforce needs for the long tail, the nuance, and the tough cases? AI is good at what it’s already good at - I predict we won’t see another order of magnitude improvement with all the current approaches.
Hmm, am no LLM expert, but agree with you that the models themselves, in the individual subject domains, seem like they're starting to reach their peaks (writing, solving math, coding, music gen...), and the improvements are becoming a lot less dramatic than a couple of years ago.
But combining LLMs with other AI techniques seems like it could do so much more...
... As mentioned, am no expert, but it seems like one of the next major focuses for LLMs is verification of their answers, and, adding to this, giving LLMs a sense for when their results are right or wrong. The ability for an LLM to introspect on how it got its answer might help it know whether that answer is right (think Anthropic has been working on this for a while now), as would scoring the reliability of the information sources.
And they could also mix in a formal verification step, using some form of proof to check that the results are right (for those answers that lend themselves to formal verification).
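To make the idea concrete, here's a toy sketch of what such a verification step could look like for a machine-checkable answer. Everything here (the function names, the toy "LLM claim") is hypothetical illustration, not any real API: the point is just that for answers with a formal ground truth, a deterministic checker can recompute the result independently of the model.

```python
# Toy sketch: independently verify an LLM's arithmetic claim.
# A safe mini-evaluator recomputes the expression; no eval() on raw strings.
import ast
import operator

_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def _eval(node):
    """Evaluate a parsed arithmetic expression, rejecting anything else."""
    if isinstance(node, ast.Expression):
        return _eval(node.body)
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
    raise ValueError("unsupported expression")

def verify_claim(expression: str, claimed_result: float) -> bool:
    """Recompute the expression and compare against the LLM's claimed result."""
    actual = _eval(ast.parse(expression, mode="eval"))
    return abs(actual - claimed_result) < 1e-9

# Imagine the LLM answered "17 * 23 = 391"; the verifier checks it:
print(verify_claim("17 * 23", 391))  # True: the claim checks out
print(verify_claim("17 * 23", 381))  # False: a hallucinated result is rejected
```

Of course this only works when the question has a formal ground truth to recompute against, which is exactly the caveat above.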
Am sure all of this is currently being tried. So any AI experts out there, feel free to correct me. Thanks!
The idea of formal verification works great for code or math, where clear rules exist, but in customer support there is no formal specification. You can't write a unit test for empathy, or for "did we correctly understand that the customer actually wants a refund even though they're asking about settings." This is the neuro-symbolic AI problem: to verify an LLM answer, you need a rigid ontology of the world (a knowledge graph or rules), but the real world of customer interaction is chaos that cannot be fully formalized.
Ah yes, agreed (as mentioned, formal verification is only possible for "those answers that lend themselves to it").
Interesting that you mentioned knowledge graphs; haven't heard about these in a long time. Just looked up the "Commonsense knowledge" page on Wikipedia, and it seems like they're still being added to. Would you happen to know if they're useful yet and can do any real work? Or whether they're good enough to integrate with LLMs?
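For what an integration might look like at its simplest: one common framing is to store the graph as (subject, predicate, object) triples and check an LLM's factual claim against it before surfacing the answer. This is a deliberately tiny, hypothetical sketch (the two-triple "graph" and function name are made up); real pipelines query something like Wikidata and handle entity linking, which is the hard part.

```python
# Hypothetical sketch: ground an LLM claim against a tiny knowledge graph
# of (subject, predicate, object) triples before trusting the answer.
KG = {
    ("Salesforce", "headquartered_in", "San Francisco"),
    ("Salesforce", "founded_by", "Marc Benioff"),
}

def supported_by_kg(subject: str, predicate: str, obj: str) -> bool:
    """Return True only if the claimed triple exists in the knowledge graph."""
    return (subject, predicate, obj) in KG

# A claim extracted from an LLM answer gets checked against the graph:
print(supported_by_kg("Salesforce", "founded_by", "Marc Benioff"))  # True
print(supported_by_kg("Salesforce", "founded_by", "Elon Musk"))     # False
```

The catch, per the comment above, is that this only rejects claims the graph can express at all; anything outside the ontology is invisible to the check.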
I mean, this might be a case where it’s actually sort of credible. It was a _very_ deep cut (basically half the workforce), the Salesforce guy is a particularly over-the-top AI true believer, and if they are now reversing course and re-hiring, well, nothing has happened to the economy in the last couple of months that would suggest that, if it was related to the economy. If anything, things are looking even more uncertain/ominous.