Hacker News

Even with GPT-4, it hallucinates enough that it's not reliable, forgetting to open/close brackets and quotes. This sounds like it'd be a big improvement.


Not that it matters now, but just doing something like this works 99% of the time or more with GPT-4 and 90% with 3.5.

It is VERY IMPORTANT that you respond in valid JSON ONLY. Nothing before or after. Make sure to escape all strings. Use this format:

{"some_variable": [describe the variable purpose]}
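A minimal sketch of wiring that instruction in as a system message and strictly checking the reply. The message list shape is the common chat-completion format; `build_messages` and `is_valid_json` are hypothetical helper names, and the actual API client call is left out since any chat API will do:

```python
import json

# The instruction text is taken from the comment above.
SYSTEM_PROMPT = (
    "It is VERY IMPORTANT that you respond in valid JSON ONLY. "
    "Nothing before or after. Make sure to escape all strings. "
    'Use this format:\n{"some_variable": [describe the variable purpose]}'
)

def build_messages(user_text):
    # Chat-completion style message list; most chat APIs accept this shape.
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_text},
    ]

def is_valid_json(reply):
    # Strict check: either the whole reply parses, or it doesn't.
    try:
        json.loads(reply)
        return True
    except ValueError:
        return False
```

Pair this with a validity check on every reply, since even with the prompt the model occasionally slips.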


99% of the time is still super frustrating when it fails, if you're using it in a consumer-facing app. You have to clean up the output to avoid getting an error. If it goes from 99% to 100% valid JSON, that's a big deal for me; much simpler.


Except it says in the small print to expect invalid JSON occasionally, so you have to write your error-handling code either way.


If you're building an app based on LLMs that expects higher than 99% correctness from them, you are bound to fail. Workarounds for negative scenarios, plus retries, are mandatory.


Yup. Is there a good/forgiving "drunken JSON parser" library that people like to use? Feels like it would be a useful (and separable) piece?
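There are libraries for this, but the core idea is simple enough to sketch. A hypothetical "drunken" parser, assuming the usual LLM failure modes (surrounding prose, curly quotes, trailing commas); the function name is made up:

```python
import json
import re

def drunken_json_loads(text):
    """Best-effort parse of LLM output that is almost-valid JSON (sketch)."""
    # Strip surrounding prose/markdown: keep the outermost {...} span.
    start, end = text.find("{"), text.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found")
    s = text[start:end + 1]
    # Curly quotes -> straight quotes (a classic GPT mistake).
    s = s.replace("\u201c", '"').replace("\u201d", '"').replace("\u2019", "'")
    # Drop trailing commas before a closing brace/bracket.
    s = re.sub(r",\s*([}\]])", r"\1", s)
    return json.loads(s)
```

This won't fix every failure (e.g. a genuinely unclosed bracket), but it covers a lot of the cheap ones before you fall back to a retry.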


Honestly, I suspect asking GPT-4 to fix your JSON (in a new chat) is a good drunken JSON parser. We are only scratching the surface of what's possible with LLMs. If token generation were free and instant, we could come up with a giant schema of interacting model calls that generates 10 suggestions, iterates over them, ranks them, and picks the best one, as silly as it sounds.


That's hilarious... if parsing GPT's JSON fails, keep asking GPT to fix it until it parses!


It shouldn't be surprising though. If a human makes an error writing JSON, what do you do? You make them look over it again. Unless their intelligence is the bottleneck, they might just be able to fix it.


It works. Just be sure to build a good error message.


I already do this today: I create domain-specific, knowledge-focused prompts, have them iterate back and forth, with a 'moderator' that chooses what goes in and what doesn't.


Wouldn't you use traditional software to validate the JSON, then ask chatgpt to try again if it wasn't right?
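That validate-then-retry loop is easy to sketch. `ask_model` below is a stand-in for whatever chat-completion call you use (a hypothetical signature, not a real API), and the retry prompt follows the advice above about building a good error message:

```python
import json

def ask_until_valid(ask_model, prompt, max_retries=3):
    """Parse the model's reply with plain json.loads; on failure, feed the
    broken output and the parser's error back to the model. Sketch only."""
    reply = ask_model(prompt)
    for _ in range(max_retries):
        try:
            return json.loads(reply)
        except ValueError as e:
            # A good error message: quote the parser error and the bad output.
            reply = ask_model(
                f"This was supposed to be valid JSON but failed with "
                f"'{e}'. Output ONLY the corrected JSON, nothing else:\n"
                f"{reply}"
            )
    return json.loads(reply)  # final attempt; raises if still broken
```

The validation itself stays in traditional software; the model is only consulted when `json.loads` says something is wrong.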


In my experience, telling it "no, that's wrong, try again" just gets it to be wrong in a new, different way, or to restate the same wrong answer slightly differently. I've had to explicitly guide it to correct answers or formats at times.


Try different phrasing, like "Did your answer follow all of the criteria?".


It forgets commas too


Nah, this was solved by most teams a while ago.


I feel like I’m taking crazy pills with the amount of people saying this is game changing.

Did they not even try asking gpt to format the output as json?


> I feel like I’m taking crazy pills....try asking gpt to format the output as json

You are taking crazy pills. Stop.

gpt-? is unreliable! That is not a bug in it, it is the nature of the beast.

It is not an expert at anything except natural language, and even then it is an idiot savant.



