This showed me that people don’t yet understand how to practice good “hygiene” when using these tools.
This apparent doubling down is (usually) the product of asking it to verify something it previously output in the same chat session. It tends toward consistency with its earlier answers unless directly told something is wrong. As such, asking it “were you correct when you told me X?” is bad hygiene.
You can “sanitize” the validation process by opening a new chat session and asking it if something is correct. You can also ask it to be adversarial and attempt to prove its prior output is wrong.
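To make the fresh-session idea concrete, here is a minimal sketch assuming the OpenAI Python SDK. The key point is that each API call with a brand-new `messages` list is effectively a new chat session: the model sees none of the prior conversation, so it has no earlier answer to stay consistent with. The function name and model choice are my own illustrations, not anything from the original comment.

```python
# Sketch of "sanitized" verification: build a context-free, adversarial
# prompt instead of asking the model to confirm its own prior output.

def fresh_verification_messages(claim: str) -> list[dict]:
    """Build a message list with no prior context that asks the model
    to attack a claim rather than confirm it."""
    return [
        {
            "role": "user",
            "content": (
                "Act as an adversarial reviewer. Attempt to prove the "
                f"following claim wrong: {claim}"
            ),
        }
    ]

# Usage (requires an API key; call shape per the OpenAI SDK):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-4o",  # hypothetical model choice
#     messages=fresh_verification_messages("The claim you want checked"),
# )
```

Because the verification request carries no chat history, a "looks fine" answer is at least not an artifact of the model defending its own transcript. It still isn't confirmation, as noted above.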
Even then, this is just a quick way to see whether its output was garbage. A positive result is not a confirmation, and independent verification is still necessary.
Also, especially with ChatGPT, you have to understand that its role has been fine-tuned to be helpful and, to some extent, positively affirmative. In my experience, this means that if you at all “show your hand” with a leading question or any (even unintended) indication of the answer you’re seeking, it is much more likely to output something that affirms the biases in your prompt.
People keep saying that because it’s trained on human conversations, texts, etc., everything it outputs is a reflection of that material. But that’s not quite true:
ChatGPT in particular, unless you run up against firm guardrails (hate speech and the like), appears to be fine-tuned to a very large degree to be non-confrontational. It generally won’t challenge your assumptions, so if your prompts have underlying assumptions in them (they almost always will), ChatGPT will play along.
If you’re going to ask it for anything resembling factual information, you have to be as neutral and open-ended in tone as possible in your prompts. And if you’re trying to do something like check a hunch, you should probably not be neutral and instead ask it to be adversarial. Don’t ask “Is X true?”; ask “Analyze the possibility that X is false.”
Those are overly simplistic prompt formulations, but that’s the attitude you need going in if you’re doing anything research-ish.
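The reframing above can be captured as a tiny helper; this is just an illustrative sketch of the habit (the function name is mine), not a claim that any particular wording is optimal.

```python
def adversarial_prompt(hypothesis: str) -> str:
    """Reframe 'Is X true?' as a request to argue against X,
    so an agreeable model can't simply affirm the premise."""
    return f"Analyze the possibility that the following is false: {hypothesis}"

def neutral_prompt(topic: str) -> str:
    """For factual queries, avoid embedding a preferred answer."""
    return f"What is known about {topic}? Include competing views and open questions."
```

The point is not these exact strings; it's that the prompt should never reveal which answer you're hoping for.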