
The LLM codegen at Google isn't unsupervised. It's integrated into the IDE as both autocomplete and a prompt-based assistant, so you get a lot of feedback from a) which suggestions the human accepts and b) how they fix the suggestion when it's not perfect. So future iterations of the model won't be trained on raw LLM output, but on a mixture of human-written code and human-corrected LLM output.

As a dev, I like it. It speeds up writing easy but tedious code. It's just a slightly smarter version of the refactoring tools already common in IDEs...
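(Not how Google's pipeline actually works, just a minimal sketch of how that accept/correct feedback could be turned into training examples; every name and field below is hypothetical.)

    # Illustrative sketch only: an IDE plugin logs completion events so that
    # accepted and human-corrected suggestions can be folded back into
    # training data. Names and fields are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class CompletionEvent:
        prompt: str       # surrounding code / instruction shown to the model
        suggestion: str   # what the LLM proposed
        final_code: str   # what was actually committed after human edits

    def to_training_example(event: CompletionEvent) -> dict:
        """Turn an IDE event into a supervised (input, target) pair."""
        if event.final_code == event.suggestion:
            # Human accepted the suggestion verbatim: keep it as a positive example.
            return {"input": event.prompt, "target": event.suggestion, "source": "accepted"}
        # Human rewrote part of it: train on the corrected version instead,
        # so the model learns from the fix rather than from its own raw output.
        return {"input": event.prompt, "target": event.final_code, "source": "corrected"}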



What about (c) the human doesn't realize the LLM-generated code is flawed, and accepts it?


I mean, what happens when a human doesn't realize that human-written code is wrong, accepts the PR, and it becomes part of the corpus of 'safe' code?


Presumably, in both scenarios, someone will notice the bug at some point and the code will no longer be treated as safe.


Do you ask a junior to review your code, or someone experienced in the codebase?



