>“Synthetically generated pornography of a real person created without their consent is bad.”
I would agree with the premise, but not the line of reasoning that follows. Personally, I would say that freedom of speech will always be more important than the subjective experience of perceived harm.
Much like I don’t think the government should be able to dictate to a woman whether she’s allowed to have a medical procedure or not, I don’t think a corrupt group of politicians should be the ones to dictate what kind of images my computer is allowed to make, assuming that I’m starting with an entirely ethically-sourced dataset.
I think it’s also worth considering why laws like these exist in the first place. They’re good and important, but I don’t think the root reason for them is to stop these kinds of images from being distributed. I think the root reason for these laws is to stop the production of CSAM using real children, which absolutely is incredibly harmful and disgusting. That’s a good thing to stop, and if some guy in his basement looking at AI porn is thereby kept from supporting material that actually hurts real, living, breathing kids, then I think I’m conceptually okay with it. If it protects children and reduces the CSAM market, both consumption and production, isn’t that an overall good thing?
The reason it's illegal to distribute and possess CSAM is, to my knowledge, not to mitigate demand for its production, but specifically because we have every indication that its distribution (really, knowledge of its distribution) does inflict subjective harm on the victims.
The reason anything is illegal is because of the subjective experience of perceived harm. There is no harm that exists other than the "subjective", "experienced", and "perceived" kind. The question is just whether the rest of society empathizes enough with that harm to protect against it.
I do find the argument around allowing (fully) synthetically produced imagery in order to satiate demand for CSAM without harming real people an interesting one. I'm not sure where I land, but probably somewhere near "if it actually reduces demand for harm-producing imagery, then good, and if it does not reduce that demand, then bad." That said, allowing it would probably increase everyone's chances of coming across synthetic CSAM, which in turn might actually increase the market for "authentic" material. Not sure how this would play out.