I've already asked a number of colleagues at work who produce insane amounts of gibberish with LLMs to just pass me the prompt instead: if an LLM can produce verbose text from limited input, I just need that concise input too (the rest is simply made-up crap).


Something I’ve found very helpful is when I have a murky idea in my head that would take a long time for me to articulate concisely, and I use an LLM to compress what I’m trying to say. So I type (or even dictate) a stream of consciousness with lots of parentheticals and semi-structured thoughts and ask it to summarize. I find it often does a great job at saying what I want to say, but better.

(See also the famous Pascal line: "I would have written a shorter letter, but I did not have the time.")

P.s. for reference I’ve asked an LLM to compress what I wrote above. Here is the output:

When I have a murky idea that’s hard to articulate, I find it helpful to ramble—typing or dictating a stream of semi-structured thoughts—and then ask an LLM to summarize. It often captures what I mean, but more clearly and effectively.


Like the linked article, I’d rather read your original text, even if it’s less structured and rough


Agreed, the messiness of the original text has character and humanity that is stripped from the summarized text. The first text is an original thought, exchanged in a dialogue, imperfectly.

Elsewhere in this comment section there's discussion of the importance of having original thought, which is exactly what the summarized text isn't and what it has leeched away.

The parent comment has actually made the case against the summarized text being "better" (if we're measuring anything that isn't word count).


Learning to articulate your thoughts is pretty vital in learning to think though.

An LLM can make something sound articulate even if your input is useless rambling containing the keywords you want to think about. Having someone validate a lack of thought as something useful doesn't seem good for you in the long term.


Yeah, so the problem I’m solving is not that I don’t think enough about something, or even that I don’t think about it in the right way. “Murky” was maybe the wrong word. It’s more that I often find my audience does not have the longest attention span or forgiveness for sloppy writing; thus, the onus is on me to make my thoughts as easy to digest as possible.


As someone in a similar position, I have found I benefit from practicing - but also, LLMs are a really useful tool for that practice!

Learning how to condense what I say makes me focus on what is and isn't important - and it also forces me to think in terms of "style" and "audience".

(My natural writing style is much more verbose - I want to address all sorts of branching objections and tangential concepts. I find parentheses really useful, because I can dump a bunch of stuff there and it's a clear marker that you can safely skip it all.)

LLMs are also useful, because I can ramble, work out my own summary, and then compare to the LLM. Or, when I was just starting out, ramble, get an LLM to summarize, and then try to work out my own summary that captures what it missed.

Aside from practice being inherently beneficial, I also find that being able to form my own summaries helps me catch when the LLM has misunderstood, hallucinated, or just subtly changed the emphasis - for instance, your original example was indeed much cleaner, but I wouldn't have felt like you were really truly a fellow rambler just from reading that.

Hopefully you don't mind a rambling post. If you want a TL;DR an LLM can probably do a decent job ;)

(ChatGPT Summary: Practicing summarization improves clarity, audience awareness, and writing focus—especially for naturally verbose thinkers. LLMs are helpful tools for this, both as a comparison point and a learning aid. Writing your own summaries sharpens understanding and helps catch LLM misinterpretations or emphasis shifts.)

(Yeah, that seems pretty accurate)


Your original here is distinctly better! It shows your voice and thought patterns. All character is stripped away in the "compressed" version, which unsurprisingly is longer, too.


What do you mean it's longer? It's shorter.


“Someone sent me this AI-generated message. Please give me your best shot at guessing the brief prompt that originated the text.”

Done, now AI is just lossy pretty-printing.


An incredible use of such advanced technology and gobs of energy.


What will we make the rocks with lightning in them do next?!


Jokes aside, this happens all the time.

I have it write doc strings. I later ask it to explain a section of code, wherein it uses the doc strings to understand and explain the code to me.

A less lossy way to capture this will probably emerge at some point.



Recently I wasted half a day trying to make sense of story requirements given to me by a BA that were contradictory and far more elaborate than we had previously discussed. When I finally got ahold of him he confessed that he had run the actual requirements through ChatGPT and "didn't have time to proofread the results". Absolutely infuriating.


This is how I've felt about using LLMs for things like writing resumes and such. It can't possibly give you more than the prompt since it doesn't know anything more about you than you gave it in the prompt.

It's much more useful for answering questions that are public knowledge since it can pull from external sources to add new info.


The one case where this doesn't work is if the prompt is, say, 3 ideas, which the LLM expands to 20, and the colleague then trims down to 10.

Ideally some selection has been done, and the fact that you're receiving it means it's better than the average answer. But sometimes they haven't even read the LLM output themselves :-(


ChatGPT is very useful for adding softness and politeness to my sentences. Would you like more straightforward text, which would probably come across as rude to a regular American?


Yes. I can't stand waffle from native or non-native speakers. Waste of electrons and oxygen :-) that might just be me however. Know your audience ;-)


If we can detach content and presentation, then the reader can choose tone and length.

At some point we will stop making decisions about what future readers want. We will just capture the concrete inputs and the reader's LLM will explain it.


I don't think form and function can be separated so cleanly in natural language. However you encode what's between your ears into text, you've made (re)presentational choices.

A piece of text does not have a single inherently correct interpretation. Its meaning is a relation constructed at run- (i.e. read-)time between the reader, the writer, and (possibly) the things the text refers to, that is if both sides are well enough aligned to agree on what those are.

Words don't speak, they only gesture.


My LLM workflow involves a back-and-forth clarification (verbose input -> LLM asks questions) that results in a rich context representing my intent. Generating comments/docs from this feels lossy.

What if we could persist this final LLM context state? Think of it like a queryable snapshot of the 'why' behind code or docs. Instead of just reading a comment, you could load the associated context and ask an LLM questions based on the original author's reasoning state.

Yes, context is model-specific, a major hurdle. And there are many other challenges. But ignoring the technical implementation for a moment, is this concept – capturing intent via persistent, queryable LLM context – a valuable problem to solve? I feel it would be.


It's no accident most of the software development world gravitated toward free and open source tooling: proprietary tool-specific code and build recipes have the same gotchas as the model-specific context would have today.

So perhaps switching to open-source models of sufficient "power" will obsolete that particular concern (they would be a "development dependency", just like a linter, compiler or code formatter are today).


It sounds worthwhile. I just wonder how you envision the author encoding their reasoning state. If as (terse) text, how would the author know the LLM successfully unpacks its meaning without interrogating it in detail, and then fine-tuning the prompt? And at that point, it would probably be faster to just write more verbose docs or comments?

What about a tool that simply allows other developers to hover over some code and see any relevant conversations the developer had with a model? Version the chat log and attach it to the code basically.
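
To make that concrete, here's a rough sketch of what "version the chat log and attach it to the code" could look like. Everything in it is hypothetical: the .chatlogs/ directory, the attach/lookup helpers, and the JSON layout are just one possible convention, not an existing tool; an editor plugin would presumably call lookup() on hover.

    # Sketch: attach a versioned LLM chat log to a region of code.
    # Assumptions: logs live in a .chatlogs/ directory as JSON, one file per
    # annotated region, keyed by file path, line range, and the commit they
    # were written against. None of this is a standard format or tool.
    import json
    import subprocess
    from pathlib import Path

    LOG_DIR = Path(".chatlogs")

    def current_commit() -> str:
        """Return the HEAD commit hash of the enclosing git repository."""
        return subprocess.run(
            ["git", "rev-parse", "HEAD"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()

    def attach(file: str, start: int, end: int, messages: list[dict]) -> Path:
        """Save an LLM conversation next to the code it explains."""
        LOG_DIR.mkdir(exist_ok=True)
        record = {
            "file": file,
            "lines": [start, end],
            "commit": current_commit(),
            "messages": messages,  # e.g. [{"role": "user", "content": "..."}]
        }
        out = LOG_DIR / f"{Path(file).name}.{start}-{end}.json"
        out.write_text(json.dumps(record, indent=2))
        return out

    def lookup(file: str, line: int) -> list[dict]:
        """Return every saved conversation whose line range covers `line`."""
        hits = []
        for path in LOG_DIR.glob("*.json"):
            record = json.loads(path.read_text())
            start, end = record["lines"]
            if record["file"] == file and start <= line <= end:
                hits.append(record)
        return hits

    if __name__ == "__main__":
        attach("parser.py", 10, 42, [
            {"role": "user", "content": "Why recursive descent instead of a parser generator?"},
            {"role": "assistant", "content": "Mainly for error recovery; see the notes on sync points."},
        ])
        for rec in lookup("parser.py", 20):
            print(rec["commit"][:8], rec["messages"][0]["content"])

The annotations would go stale as the code changes, but because each record stores the commit it was written against, a reader can at least tell how old a conversation is before trusting it.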


>which probably will be rude

as long as the text isn't at risk of being written up by HR, I don't particularly care about the tone of the message.


yes


And you do what with the prompt once you have it?


Get all the information of value that was hidden behind 2-10k words generated by an LLM.



