The "Generative AI services popularity" [1] chart is surprising. ChatGPT is being #1 makes sense, but Character.AI being #2 is surprising, being ahead of Anthropic, Perplexity, and xAI. I suspect this data is strongly affected by the services DNS caching strategies.
The other interesting chart is "Workers AI model popularity" [2]. `llama-3-8b-instruct` has been leading at 30% to 40% since April. That makes it hands down the most popular weights-available small "large language model". I would have expected Meta's `m2m100-1.2b` to be more used, and Alphabet's `Gemma 3 270M` to start appearing. People are likely using the most powerful model that fits on a CF Worker.
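For context, here is a minimal sketch of what calling `llama-3-8b-instruct` from a Worker looks like; the `AI` binding name and handler shape follow the standard Workers AI setup and would need to match your own wrangler config:

```ts
// Minimal Workers AI sketch (assumes an `AI` binding configured in wrangler.toml).
export interface Env {
  AI: Ai; // `Ai` type comes from @cloudflare/workers-types
}

export default {
  async fetch(_request: Request, env: Env): Promise<Response> {
    // Run the open-weights 8B Llama 3 chat model hosted on Workers AI.
    const result = await env.AI.run("@cf/meta/llama-3-8b-instruct", {
      messages: [{ role: "user", content: "Say hello in one short sentence." }],
    });
    return Response.json(result);
  },
};
```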
As a shameless plug: for more popularity analysis, check out my "LLM Assistant Census" [3].
With a lot of characters/scenarios of a sexual nature, they are the market leader for NSFW LLM experiences. Or maybe it's more accurate to call them "dating" experiences.
1.1.1.1 will see the query regardless of caching by upstream servers. Downstream and client caching probably average out quite nicely with enough volume.
If the TTLs of one domain's records are all shorter than the TTLs of another domain's, what would make downstream and client caching cancel out? Do clients not respect TTLs these days?
(In this particular case, I don’t think the TTLs are actually different, but asking in general)
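To make the concern concrete, here is a rough back-of-envelope model of how TTLs cap the queries a resolver like 1.1.1.1 actually sees. The client behavior assumed here (TTLs respected, requests spread evenly) is my own simplification, not Cloudflare's methodology:

```ts
// A TTL-respecting client re-queries the resolver at most once per TTL window,
// so resolver-visible queries per client per day are roughly capped at 86400 / TTL.
function resolverQueriesPerClientPerDay(requestsPerDay: number, ttlSeconds: number): number {
  return Math.min(requestsPerDay, Math.ceil(86400 / ttlSeconds));
}

// Two services with identical real usage (2000 requests/client/day) but different TTLs:
console.log(resolverQueriesPerClientPerDay(2000, 60));  // TTL 60s  -> ~1440 queries/day
console.log(resolverQueriesPerClientPerDay(2000, 300)); // TTL 300s -> ~288 queries/day
```

Under that model, a 5x TTL difference shows up as roughly a 5x difference in resolver-side volume even when actual usage is identical.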
The "Generative AI services popularity" [1] chart is surprising. ChatGPT is being #1 makes sense, but Character.AI being #2 is surprising, being ahead of Anthropic, Perplexity, and xAI. I suspect this data is strongly affected by the services DNS caching strategies.
The other interesting chart is "Workers AI model popularity" [2]. `llama-3-8b-instruct` has been leading at 30% to 40% since April. That makes it hands the most popular weights available small "large language model". I would have expected Meta's `m2m100-1.2b` to be more used, as well as Alphabet's `Gemma 3 270M` starting to appear. People are likely using the most powerful model that fits on a CF worker.
As shameless plug, for more popularity analysis, check out my "LLM Assistant Census" [3].
[1] https://radar.cloudflare.com/ai-insights#generative-ai-servi...
[2] https://radar.cloudflare.com/ai-insights?dateRange=24w#worke...
[3] https://aleyan.com/blog/2025-llm-assistant-census/