
> Maybe it would be possible to design labs with LLMs in such a way that you teach students how to evaluate the LLM's answer? This would require them to have knowledge of the underlying topic. That's probably possible with specialized tools / LLM prompts, but it is not going to help against them using a generic LLM like ChatGPT or a cheating tool that feeds into a generic model.

What you are describing is that they should only use the LLM after they already know the topic. A dilemma.



Yeah, I kinda like the method siscia suggests downthread [0], where the teacher grades based on the questions students ask the LLM during the test.

I think you should be able to use the LLM at home to help you better understand the topic (they have endless patience, and you can usually keep asking until you actually grok it), but during the test I think it's fair to expect that basic understanding to be there.

[0] https://news.ycombinator.com/item?id=46043012



