I mean, I have to verify stuff in human-written tutorials too. Humans are wrong all the time.
A lot of it is just, are its explanations consistent? Does the code produce the expected result?
Like, if you're learning ray tracing and writing code as you go, either the code works or it doesn't. If the LLM is giving you wrong information, you're going to figure that out really fast.
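For instance, a quick sanity check on a ray-sphere intersection routine (just a sketch; the names and setup here are mine, not from any particular tutorial) makes a wrong explanation obvious almost immediately:

```python
# Minimal sanity check for a ray-sphere intersection test -- the kind of thing
# an LLM might walk you through while learning ray tracing.

def hit_sphere(center, radius, origin, direction):
    # Solve |origin + t*direction - center|^2 = radius^2 for t;
    # the ray hits the sphere iff the quadratic has a real root.
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    return b * b - 4.0 * a * c >= 0.0

# A ray aimed straight at the sphere should hit; one aimed away should miss.
assert hit_sphere((0, 0, -1), 0.5, (0, 0, 0), (0, 0, -1))
assert not hit_sphere((0, 0, -1), 0.5, (0, 0, 0), (0, 1, 0))
print("sanity checks pass")
```

If the explanation of the math were wrong, those two asserts (or the rendered image, once you get that far) would blow up on you right away.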
In practice, it's just not really an issue. It's the same way I find mistakes in textbooks -- something doesn't quite add up, you look it up elsewhere, and discover the book has a typo or error.
Like, when I learn with an LLM, I'm not blindly memorizing isolated facts it gives me. I'm working through an area, often with concrete examples, pushing back on whatever seems confusing, until I get to a point where things make sense. Errors tend to reveal themselves very quickly.