Integrations and inference costs aren't necessarily 1:1. Integrations can use more AI over time, reasoning models can cause token explosion, Jevons Paradox can drive more inference demand, and big businesses and government agencies (around the world, not just the US) can begin using more LLMs. I'm not sure integrations are that simple; a lot of the integrations I know of are very basic.
> Oh and the ChatGPT consumer app is seeing slowing growth: https://techcrunch.com/2025/10/17/chatgpts-mobile-app-is-see...
While I haven't read the article yet, if this is true then it could be an indication that consumer-app-style inference (ChatGPT, Claude, etc.) is waning, which would put more pressure on industrial/tool inference to sustain demand and justify the costs.