ChatGPT is very useful for adding softness and politeness to my sentences. Would you like more straightforward text, which would probably read as rude to the average American?
If we can detach content from presentation, then the reader can choose tone and length.
At some point we will stop making decisions about what future readers want. We will just capture the concrete inputs, and the reader's LLM will explain them.
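In code terms, that might look like storing the content once and letting the reader's side do the rendering. A toy sketch, where `ask_llm` and the tone/length knobs are made up for illustration:

```python
def ask_llm(prompt: str) -> str:
    """Placeholder: swap in whatever model API you actually use."""
    raise NotImplementedError("wire up your model of choice here")

def render(content: str, tone: str = "neutral", length: str = "short") -> str:
    """Re-present stored content according to the reader's preferences."""
    prompt = (
        f"Rewrite the following for a reader who prefers a {tone} tone "
        f"and {length} answers, without changing the facts:\n\n{content}"
    )
    return ask_llm(prompt)
```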
I don't think form and function can be separated so cleanly in natural language. However you encode what's between your ears into text, you've made (re)presentational choices.
A piece of text does not have a single, inherently correct interpretation. Its meaning is a relation constructed at run- (i.e. read-)time between the reader, the writer, and (possibly) the things the text refers to, assuming both sides are aligned well enough to agree on what those are.
My LLM workflow involves a back-and-forth clarification (verbose input -> LLM asks questions) that results in a rich context representing my intent. Generating comments/docs from this feels lossy.
What if we could persist this final LLM context state? Think of it like a queryable snapshot of the 'why' behind code or docs. Instead of just reading a comment, you could load the associated context and ask an LLM questions based on the original author's reasoning state.
Yes, context is model-specific, a major hurdle, and there are many other challenges. But ignoring the technical implementation for a moment: is this concept – capturing intent via persistent, queryable LLM context – a problem worth solving? I feel it is.
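For what it's worth, here's roughly how I picture the minimal version, sidestepping the model-specific-state problem by storing the dialogue as a plain transcript rather than model-internal state. Everything here (`ask_llm`, the snapshot schema) is invented for illustration:

```python
import json
from pathlib import Path

def ask_llm(messages: list[dict]) -> str:
    """Placeholder: swap in your model API of choice."""
    raise NotImplementedError

def save_context(code_path: str, messages: list[dict]) -> Path:
    """Persist the author's clarification dialogue next to the code."""
    snapshot = Path(code_path).with_suffix(".context.json")
    snapshot.write_text(json.dumps(
        {"code_file": code_path, "messages": messages}, indent=2))
    return snapshot

def query_context(code_path: str, question: str) -> str:
    """Reload the author's dialogue and ask a new question against it."""
    snapshot = Path(code_path).with_suffix(".context.json")
    messages = json.loads(snapshot.read_text())["messages"]
    # Replay the original reasoning state, then append the reader's question.
    return ask_llm(messages + [{"role": "user", "content": question}])
```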
It's no accident most of the software development world gravitated toward free and open source tooling: proprietary tool-specific code and build recipes have the same gotchas as the model-specific context would have today.
So perhaps switching to open-source models of sufficient "power" will obsolete that particular concern (they would be a "development dependency", just like a linter, compiler or code formatter are today).
It sounds worthwhile. I just wonder how you envision the author encoding their reasoning state. If as (terse) text, how would the author know the LLM successfully unpacks its meaning without interrogating it in detail and then fine-tuning the prompt? And at that point, wouldn't it be faster to just write more verbose docs or comments?
What about a tool that simply allows other developers to hover over some code and see any relevant conversations the developer had with a model? Version the chat log and attach it to the code, basically.
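Something as simple as a sidecar file could do it: chat logs live next to the source and get committed to git, so they version with the code. A rough sketch, with an invented schema:

```python
import json
from pathlib import Path

def conversations_for(file: str, line: int) -> list[dict]:
    """Find attached conversations covering a given line,
    e.g. to back an editor hover tooltip."""
    sidecar = Path(file).with_suffix(".chats.json")
    if not sidecar.exists():
        return []
    # Each entry records the line range it annotates plus the chat log.
    return [entry for entry in json.loads(sidecar.read_text())
            if entry["start_line"] <= line <= entry["end_line"]]
```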