
It sounds worthwhile. I just wonder how you envision the author encoding their reasoning state. If it's encoded as (terse) text, how would the author know the LLM successfully unpacked its meaning without interrogating it in detail and then fine-tuning the prompt? At that point, it would probably be faster to just write more verbose docs or comments.

What about a tool that simply lets other developers hover over some code and see any relevant conversations the developer had with a model? Version the chat log and attach it to the code, basically.
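
Roughly what that could look like (a minimal sketch only; the `.chatlog.json` sidecar convention and all names here are made up): a small file versioned with the repo that maps line ranges to saved transcripts, which an editor hover provider could read.

```typescript
// chat-annotations.ts — hypothetical "chat log sidecar" lookup.
// The .chatlog.json convention and the ChatAnnotation shape are assumptions,
// not any existing tool's format.

import { readFileSync } from "node:fs";

interface ChatAnnotation {
  file: string;        // path relative to the repo root
  startLine: number;   // 1-based, inclusive
  endLine: number;     // 1-based, inclusive
  chatLogPath: string; // versioned transcript, e.g. "docs/chats/2024-05-parser.md"
  summary: string;     // one-line gist to show in the hover
}

// Load the sidecar file committed alongside the code.
function loadAnnotations(sidecarPath: string): ChatAnnotation[] {
  return JSON.parse(readFileSync(sidecarPath, "utf8")) as ChatAnnotation[];
}

// Return every conversation whose range covers the hovered line.
function annotationsAt(
  annotations: ChatAnnotation[],
  file: string,
  line: number
): ChatAnnotation[] {
  return annotations.filter(
    (a) => a.file === file && line >= a.startLine && line <= a.endLine
  );
}

// Example: what a hover on src/parser.ts:42 might surface.
const annotations = loadAnnotations(".chatlog.json");
for (const a of annotationsAt(annotations, "src/parser.ts", 42)) {
  console.log(`${a.summary} -> ${a.chatLogPath}`);
}
```

Because the sidecar is just a committed file, the annotations get versioned, diffed, and reviewed along with the code they describe.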


