I’m not sure how LLM output is indistinguishable from Wikipedia or World Book.
Maybe? And if the question is “did the student actually write this?” (which is different from “do they understand it?”), there are lots of ways to assess whether a given student understands the material…ways that don’t involve submitting typed text but still involve communicating clearly.
If we allow LLMs, the way we allow calculators, just how poor LLMs are will become far more obvious.
Do you really not see the problem? A student who pastes an essay prompt into an input box and copies out the response has learned nothing. Even direct plagiarism from Wikipedia would typically need to be reworked; there will rarely be a Wikipedia page corresponding to your teacher's specific essay prompt.
Students are also poor writers. LLM-generated essays can often be spotted in elementary school because they’re written too well for that grade level. A good student will eventually surpass a chatbot, but not if they use it as a crutch for as long as it remains a stronger writer than they are.