They shouldn't have tried to force LLMs into something current models aren't built for: making sense of "unknown unknowns". Tier-2/3 support isn't just about picking an answer from a knowledge base; it demands deduction, empathy, and inventing solutions that don't exist yet. Models excel at generating relevant text for FAQs, but the moment a task requires understanding novel context, correlating non-obvious facts, or picking up on a customer's subtle emotional cues, current LLM architectures break down.
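
To make the FAQ-versus-novel distinction concrete, here's a minimal sketch of the "pick an answer from a knowledge base" pattern with a human-escalation fallback. Everything in it is a hypothetical stand-in: the `FAQ` entries, the crude token-overlap metric (a real system would use embedding similarity), and the `ESCALATION_THRESHOLD` cutoff. The point is that anything scoring below the threshold is exactly the territory the paragraph argues models can't handle on their own:

```python
# A minimal sketch with hypothetical names and a toy similarity metric:
# an FAQ matcher that escalates to a human when no knowledge-base entry
# is a confident match. Real systems would use embedding similarity;
# token overlap stands in here to keep the example self-contained.

FAQ = {
    "how do i reset my password": "Use the 'Forgot password' link on the login page.",
    "where can i download my invoice": "Invoices are under Account > Billing.",
}

ESCALATION_THRESHOLD = 0.5  # arbitrary cutoff chosen for this sketch


def token_overlap(a: str, b: str) -> float:
    """Jaccard similarity over lowercase word sets -- a crude confidence proxy."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0


def answer(query: str) -> str:
    # Pick the best-matching FAQ entry and its similarity score.
    best_q, score = max(
        ((q, token_overlap(query, q)) for q in FAQ),
        key=lambda pair: pair[1],
    )
    if score >= ESCALATION_THRESHOLD:
        return FAQ[best_q]  # Tier-1 territory: a known question, a known answer.
    # Below the threshold, the query is an "unknown unknown" to this system:
    # no amount of fluent text generation substitutes for a human who can
    # actually reason about it.
    return "Escalating to a human agent."


if __name__ == "__main__":
    print(answer("How do I reset my password"))  # confident FAQ hit
    print(answer("my account was charged twice and support closed my ticket"))  # escalates
```

The design choice worth noting is the explicit escalation path: the system never pretends to answer a query it can't confidently match, which is precisely the failure mode described above when LLMs are pushed into Tier-2/3 work without one.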