
This is not an unreasonable mental model. You can ask ChatGPT to give you a program, ask it "is this correct?" and it'll find and fix bugs. To a layperson it looks like it is capable of double checking its work and finding an error. Why would it be any different here?

(the answer, of course, is that the LLM doesn't actually search the internet and/or doesn't have access to a law database it can query)


