> "Perhaps the middle ground is to have the LLM write the classifier..."
There was a time when I'd read this comment and then go looking for a tutorial on building a basic "Bayesian classifier". Invariably, I'd find several, which I'd line up like soldiers in tabs, and go through them until I found one that explained the what, why, and how of it in a way that spoke to my use case (or near enough).
Of course, now ChatAI does all that for you in a few chat sessions. One does wonder, though: if ChatAI is trained on text, and that was the history of what text was available, then 10 years from now, after everyone has stopped writing ten blog posts about the same "Bayesian classifier", where will the ChatAI fodder come from? I don't even know that fewer blog posts would be the outcome [1]. It just strikes me as interesting, because it would be _a very slow process_.
[1]: Not that this is necessarily true. People write blogs for all sorts of reasons, and having knockout-quality competition from ChatAI does not KO all of them.