Just as the title says: a very sanitized keyhole. Google does not return billions of results in its output. The number of results it actually shows is very small, and that is a problem.
This might be another 'dead internet rant' [1]. It might be common knowledge, but Google shows only a handful of result pages, with about 10 links per page (31 pages for the keyword 'news', for example). The common argument is that Google returns only 'relevant' data and that 'nobody looks beyond the first link'.
I ran a small experiment today.
Example 1. Google
If I type 'covid' into Google, it shows me 19 pages of results. With the News filter applied, it shows 30 pages for the same keyword.
Example 2. My link database. I hoard links [2] from various sources, mainly RSS feeds, and store them in an SQL database. I have about 1,095,264 links in total.
Of those, 12,566 links have 'covid' in the title. 'covid' is an important and very broad term. Some links in my database are unimportant, and some are trash. Still, there is a big difference between the number of results Google provides and the number of links found in my database.
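For reference, the check is nothing sophisticated. Here is a minimal sketch of the kind of query involved, assuming a SQLite database; the 'links' table, 'title' column, and file name are hypothetical and my actual schema may differ:

    import sqlite3

    conn = sqlite3.connect("links.db")   # hypothetical database file name
    cur = conn.cursor()

    # Case-insensitive substring match on the title, as used for the count above.
    cur.execute("SELECT COUNT(*) FROM links WHERE title LIKE '%covid%'")
    print(cur.fetchone()[0])             # prints the number of matching links

    conn.close()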
Example 3. Reddit. If I search Reddit for the 'covid' keyword using the search box on the Reddit web page, I find around 246 links, which is also not a very big number.
Analysis of the tests.
Google focuses on 'recent' links. Doesn't applying a 'recent' filter effectively erase history? I know you can set a time range in Google's search tools, but then you have to manually scan for the desired time frame; you would already have to know when an event occurred in order to find news about it. To me that feels like hiding history.
I myself cannot quickly search 1,095,264 links in my database, but Google presumably has far more resources, and I think it should do better.
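For what it's worth, even a collection of this size could be made quickly searchable with a full-text index. The sketch below assumes SQLite with the FTS5 extension available; the 'links', 'title', and 'url' names are again hypothetical:

    import sqlite3

    conn = sqlite3.connect("links.db")
    cur = conn.cursor()

    # One-time step: build a full-text index over the existing titles and URLs.
    cur.execute("CREATE VIRTUAL TABLE IF NOT EXISTS links_fts USING fts5(title, url)")
    cur.execute("INSERT INTO links_fts (title, url) SELECT title, url FROM links")
    conn.commit()

    # Full-text queries against the index should return quickly even at ~1 million rows.
    cur.execute("SELECT url, title FROM links_fts WHERE links_fts MATCH 'covid' LIMIT 20")
    for url, title in cur.fetchall():
        print(url, "-", title)

    conn.close()

The point is not the tooling, just that scale alone does not explain a small result set.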
It is easy to find something useful on Google for terms like compiler errors, but for a keyword like 'covid' I would expect Google to surface billions of links.
I am not even sure ChatGPT would be better, because it gives answers rather than sources.
What do you think?
[1] https://en.wikipedia.org/wiki/Dead_Internet_theory
[2] The links are available at https://github.com/rumca-js/RSS-Link-Database-2023