I'd like to think they'd at least look for some evidence, rather than just ask a crystal ball whether the person is innocent or not.
For a supposedly educated, thinking person like a professor: if they don't understand what "AI" is and can't reason that it can most certainly be wrong, they shouldn't be allowed to use it.
Threatening someone like the people in the article with consequences if they're flagged again, after false flags have already happened, is barbaric. The tool is clearly discriminating against their writing style, so more false flags are likely for that person.
I can't imagine what a programming-heavy course would be like these days. At university, I was once accused of plagiarism alongside colleagues of mine (people I'd never spoken to in my life) because our code assignments were being scanned by some tool (before AI), and it found some double-digit percentage similarity. But there are only so many ways to achieve the simple tasks they were setting; I'm not surprised a handful out of a hundred code projects solving the same problem looked similar.
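To illustrate the point: here's a minimal sketch (the exercise and both solutions are hypothetical, invented for this example) showing how two independently written answers to a trivial assignment can score high on a naive textual similarity check, simply because the obvious solution is the same for everyone.

```python
import difflib

# Two hypothetical, independently written solutions to the same
# beginner exercise: "sum the even numbers in a list".
solution_a = """
def sum_evens(numbers):
    total = 0
    for n in numbers:
        if n % 2 == 0:
            total += n
    return total
"""

solution_b = """
def sum_evens(values):
    result = 0
    for v in values:
        if v % 2 == 0:
            result += v
    return result
"""

# difflib's SequenceMatcher gives a rough character-level similarity
# score between 0 and 1; these differ only in variable names, so the
# score comes out well into "double-digit percentage" territory.
ratio = difflib.SequenceMatcher(None, solution_a, solution_b).ratio()
print(f"similarity: {ratio:.0%}")
```

Real plagiarism scanners are more sophisticated than this, but the underlying problem is the same: when the solution space is tiny, honest students converge on near-identical code.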
Judges and police officers aren't 100% accurate either.