
My take is that it's primarily a smart way to quickly gather lots of Elo-style A/B feedback on LLM responses for training, whilst also reducing the number of people switching to ChatGPT. OpenAI has a significant first-mover advantage here, and that's why they're so worried about distillation: it threatens the moat.
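
For a sense of what that A/B data buys you: each "which response do you prefer?" vote can feed a simple Elo-style update over competing responses or models. A minimal sketch, assuming the standard Elo formula (the K-factor and ratings are illustrative, not anything the labs have published):

    # Elo-style rating update from one pairwise A/B preference vote.
    # Numbers (1500/1520, K=32) are illustrative defaults only.
    def expected_score(r_a: float, r_b: float) -> float:
        return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

    def update_elo(r_a: float, r_b: float, a_won: bool, k: float = 32.0) -> tuple[float, float]:
        e_a = expected_score(r_a, r_b)
        s_a = 1.0 if a_won else 0.0
        # Winner's rating rises by k * (actual - expected); loser's falls by the same amount.
        return r_a + k * (s_a - e_a), r_b + k * ((1 - s_a) - (1 - e_a))

    # Example: the user prefers response A (rated 1500) over response B (rated 1520).
    new_a, new_b = update_elo(1500, 1520, a_won=True)

Aggregated over millions of votes, that kind of signal is exactly the preference data you want for ranking models and for training reward models.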

Google, on the other hand, has a huge moat of its own: access to probably the best search index available and the infrastructure already built around it. Not to mention the integration with Workspace - emails, docs, photos - all sorts of data that can be used for training. But what they (presumably) lack is the feedback-derived data that OpenAI has been collecting from the start.

ChatGPT does not use search grounding by default, and the issues there are obvious: anything not already baked into the weights comes back stale or confidently wrong. Both Gemini and ChatGPT make similar errors even with grounding, but you would expect that to improve over time. It's an open research question what knowledge should be innate (in the weights) and what should be "queryable", but I do think the future will be an improved version of "intelligence" + "data lookup".
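
Concretely, by "intelligence" + "data lookup" I mean a loop like the one below. A minimal sketch where both the search index and the model call are stubbed out - hypothetical stand-ins, not any vendor's real API:

    # Sketch of the "data lookup" + "intelligence" split (search grounding).
    # search() and generate() are stubs standing in for a real index and a real LLM call.
    from dataclasses import dataclass

    @dataclass
    class Snippet:
        source: str
        text: str

    def search(query: str, top_k: int = 3) -> list[Snippet]:
        # Stand-in for a real index lookup (web search, Workspace docs, etc.).
        corpus = [Snippet("doc1", "Example fact A."), Snippet("doc2", "Example fact B.")]
        return corpus[:top_k]

    def generate(prompt: str) -> str:
        # Stand-in for the model call; a real system would send this prompt to an LLM.
        return f"(grounded answer based on)\n{prompt}"

    def answer(question: str) -> str:
        snippets = search(question)                                   # data lookup: fetch fresh facts
        context = "\n".join(f"[{s.source}] {s.text}" for s in snippets)
        prompt = ("Answer using only the sources below and cite them.\n"
                  f"Sources:\n{context}\n\nQuestion: {question}")
        return generate(prompt)                                       # intelligence: reason over the sources

The interesting part is how much of the work you push into the weights versus into the retrieval step - that boundary is exactly the open question.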


