Some perspectives from someone working in the image space.
These tests don't feel practical - that is, they seem intended to collapse the model rather than demonstrate "in the wild" performance.
The assumption is that all content is black or white - AI or not AI - and that all content is equally worth retraining on.
They leave no room for data augmentation, human-guided quality discrimination, or anything else that might alter the set of outputs and mitigate the "poison".
As someone also working in the imaging space, I find AI-generated data useful so long as it's used carefully.
Specifically, we're implementing AI-culled training sets that contain some generated data, which gets reviewed manually for a few specific things and then pushed into our normal training workflows. This makes for a huge speedup versus 100% manual culling, and the metrics don't lie: the models continue to improve steadily.
There may be a point where they're poisoned and will collapse, but I haven't seen it yet.
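For anyone curious what that kind of workflow looks like, here is a minimal sketch, assuming an automatic pre-filter plus a human reviewer; `generate`, `auto_score`, and `reviewer` are hypothetical placeholders, not anyone's actual pipeline:

```python
# Hypothetical sketch of an AI-assisted culling workflow: generated candidates
# are pre-filtered automatically, then a human reviews the survivors before
# anything enters the normal training set. All names here are placeholders.

def build_training_batch(prompts, generate, auto_score, reviewer, threshold=0.8):
    candidates = [generate(p) for p in prompts]                         # synthetic samples
    pre_culled = [c for c in candidates if auto_score(c) >= threshold]  # cheap automatic cull
    approved = [c for c in pre_culled if reviewer(c)]                   # manual review of survivors
    return approved                                                     # feeds the usual training workflow
```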
This is exactly right. Model collapse does not exist in practice. In fact, LLMs trained on newer web scrapes show increased capabilities thanks to the generated output in their training data.
For example, "base" pretrained models trained on scrapes that include generated outputs can follow instructions zero-shot and score higher on reasoning benchmarks.
Intentionally produced synthetic training data takes this a step further. For SoTA LLMs, the majority of their training data, or even all of it, is generated - Phi-2 and Claude 3, for example.
Granted, one could argue that this behavior (a model claiming to be ChatGPT) only happens because the API version of Claude doesn't appear to use a system prompt. If that's the case, then the LLM lacks the identity otherwise defined by the initial system prompt and, thus, kind of makes one up.
Nonetheless, the point remains: it's kind of interesting to see that in the years since the launch of ChatGPT, we're already seeing a tangible impact on publicly available training data. LLMs "know" what ChatGPT is, and may even claim to be it.
that is the meat the article tries to cook. the impacts so far aren’t all that negative.
but time flows like a river, and the more shit that gets into it…
poison does not need to be immediately fatal to be fatal. some take a frighteningly long time to work. by the time you know what’s happening, not only is it too late, you have already suffered too much.
does this sound like anything more than a scary story to tell around campfires? not yet.
Claude 3 does use publicly available data. Not everything is synthetically generated. Look at the training-data section in the link below. It has a quote from the paper stating that Claude 3 was trained on a mix of public data, data from labelers, and synthetic data.
I can't find a link to the actual Claude paper to verify the above, but a few other places mention the same thing about the training data. We don't know whether the improved performance is because of the synthetic data or something else. I'm guessing even Anthropic might not know this either.
Wouldn't reinforcement learning just weight any nonsense data very low, so spammy garbage doesn't really affect the model much in the end? If the model and human experts can't tell the difference, then it's probably pretty good AI-generated data.
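As a rough illustration of that intuition (a sketch only, not anything a lab has published; `score` is a stand-in for some reward/quality model):

```python
# Sketch: down-weight training samples by a reward/quality score so spammy or
# nonsensical data contributes little to the next round of training.
# `score` is a hypothetical reward model, not a real API.

def weighted_examples(samples, score, floor=0.05):
    out = []
    for text in samples:
        w = max(score(text), 0.0)   # quality score, assumed to be in [0, 1]
        if w < floor:               # effectively drop obvious garbage
            continue
        out.append((text, w))       # per-sample loss gets multiplied by w downstream
    return out
```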
Why would you limit a model to be like a brain in a vat? Instead, let the model out so people use it, then use the chat logs to fine-tune. A chat room is a kind of environment: there is a human, maybe some tools. The LLM's text will generate feedback, and right there is a learning signal.
Even without a human, if an LLM has access to code execution it can practice solving coding tasks with runtime feedback. There are many ways an LLM could obtain useful learning signals. After all, we got all our knowledge from the environment as well; in the end there is no other source for knowledge and skills.
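A minimal sketch of that code-execution loop, assuming a stubbed-out model call; `model_complete` and the per-task `test_code` are hypothetical placeholders:

```python
import subprocess
import tempfile

# Sketch: use runtime feedback (did the generated program pass its tests?) as a
# learning signal. `model_complete` stands in for whatever API produces candidate
# solutions; only passing solutions are kept as fine-tuning examples.

def collect_passing_solutions(tasks, model_complete, n_samples=4):
    keep = []
    for task in tasks:
        for _ in range(n_samples):
            code = model_complete(task["prompt"])              # candidate program
            with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
                f.write(code + "\n\n" + task["test_code"])     # append the task's own tests
                path = f.name
            try:
                result = subprocess.run(["python", path], capture_output=True, timeout=10)
            except subprocess.TimeoutExpired:
                continue                                       # treat hangs as failures
            if result.returncode == 0:                         # tests passed: positive signal
                keep.append({"prompt": task["prompt"], "completion": code})
    return keep
```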
Dude what? That's a pretty absurd claim. Most generally available models specifically curate their inputs for the express purpose of avoiding collapse induced by AI garbage. It's literally one of their cited reasons for avoiding AI-generated data as inputs.
This is the part that I don't really understand. Isn't this basically an evolutionary algorithm, where the fitness function is "whatever people like the most" (or at least enough to post it online)?
People rarely generate 10 pieces of content with AI and then share all 10 with the world. They usually only share the best ones. This naturally filters for better output.
Are they saying that evolutionary algorithms don't work?
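For what it's worth, that "share only the best one" behaviour does look like a selection step. A toy sketch of the dynamic, with `generate` and `quality` as hypothetical stand-ins for a model and for human preference:

```python
# Toy sketch of the selection dynamic described above: generate N candidates,
# publish only the best one, and let only published content re-enter the pool
# that future models train on. `generate` and `quality` are stand-ins.

def one_generation(corpus, generate, quality, n_candidates=10):
    candidates = [generate(corpus) for _ in range(n_candidates)]
    best = max(candidates, key=quality)   # people share the output they like most
    return corpus + [best]                # only the selected output is published
```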
> Use the model to generate some AI output. Then use that output to train a new instance of the model and use the resulting output to train a third version, and so forth. With each iteration, errors build atop one another. The 10th model, prompted to write about historical English architecture, spews out gibberish about jackrabbits.
That this happens doesn't surprise me, but I'd love to see a curve of how different organic vs. machine content mix ratios result in model collapse over N generations.
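Something like the loop below is presumably where such a curve would come from - a sketch only, with `train`, `sample`, and `evaluate` as placeholders for a real training stack and `mix` controlling the synthetic fraction:

```python
# Sketch of the iterated-retraining experiment from the quoted passage, with a
# knob for the organic vs. machine mix ratio. `train`, `sample`, and `evaluate`
# are hypothetical placeholders, not a real framework.

def collapse_curve(organic_data, train, sample, evaluate, mix=0.5, generations=10):
    scores = []
    data = list(organic_data)
    for _ in range(generations):
        model = train(data)                                     # train this generation's model
        synthetic = [sample(model) for _ in range(len(organic_data))]
        n_synth = int(mix * len(organic_data))                  # how much synthetic data to keep
        data = list(organic_data[: len(organic_data) - n_synth]) + synthetic[:n_synth]
        scores.append(evaluate(model))                          # quality per generation
    return scores                                               # plot against `mix` to see the curve
```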