
Have you tried reading AI-generated code? Most of the time it's painfully obvious, so long as the snippet isn't short and trivial.


To me it is not obvious. I work with junior level devs and have seen a lot of non-AI junior level code.


You mean, you work with devs who are using AI to generate their code.


I saw a lot of unbelievably bad code when I was teaching in university. I doubt that my undergrad students who couldn't code had access to LLMs in 2011.


Not saying where, but well before transformers were invented, I saw an iOS project that had huge chunks of uncompiled Symbian code kept in the project "for reference", an entire pantheon of God classes, entire files duplicated rather than changing access modifiers, 1000 lines inside an always-true if block, and 20% of the 120,000 lines were just:

//

And no, those were not generally followed by a real comment.


And yet, I have an unfortunately clear mental picture of the human who did this. That is, in itself, a very specific coding style, and I don't imagine an LLM would produce it. Chat would instead take a couple of the methods from the Symbian codebase and call them where they didn't exist. The God classes would merely be mined for more non-existent functions. The always-true if block would become a function. And the // lines would have comments on them. Useless comments, but there would be text following every last one of them. Totally different styles.


Depends on the LLM.

I've seen exactly what you describe and worse *, and I've also seen them keep to one style until I got bored of prompting for new features to add to the project.

* One standard test I have is "make a Tetris game as a single-page web app", and one model started out wrong and then suddenly flipped from Tetris in HTML/JS to ML in Python.


Actually some of us have been in the industry for more than 22 months.



