I am honestly amazed by how uninterested people are in the obvious counterfactuals. Are we seriously so uninterested in how many people were helped by AI that we don't even feel the need to ask that question when looking at potential harm?
Concerned that AI companies, like social media companies, exist in an unregulated state
that is likely to be detrimental to most minors and many adults? Absolutely.
What does the second sentence have to do with the article in question, or with why it makes no effort to discuss the number of people being helped by AI in this context?
What if multiple teenagers were convinced not to commit suicide by AI? The story says that ChatGPT urged the teen to seek help from loved ones dozens of times. It seems plausible to me that other people out there actually listened to ChatGPT’s advice and received care as a result, when they might have attempted suicide otherwise.
We simply don’t know the scale of either side of the equation at this point.
And what if someone sold heroin to a bunch of people and many of them died but one of them made some really relatable tortured artist music?
As with other tools, we regulate heroin, and we should regulate AI in a way that attempts to maximize the utility it provides to society.
Additionally, with things like state lottery systems, we have decided that certain things should be regulated so that the profits are distributed to society, rather than letting a few rent seekers exploit the intrinsically addictive nature of the human mind to the detriment of all of society.
We should consider these things when developing regulations around technology like AI.
If a therapist tells 10 people not to kill themselves, and convinces 5 patients to kill themselves, would you say "this therapist is doing good on the whole"?
You can just ask "What flavor of ethics do you prefer: utilitarianism, deontology, egoism, and/or contractualism?" instead.
From what I can gather, a lot of ML people are utilitarians, for better or worse. Which is why you're seeing this discussed at all, if we all agreed on the ethics it would have been a no-brainer.
It seems like a misunderstanding of utilitarianism to use it as an excuse to shut down any complaints about something as long as the overall societal benefit is positive. If we actually engage with these issues, we can potentially shift things into an even more positive direction. Why would a utilitarian be against that?
I don't think anyone here tried to use it that way, but it's useful to have some framing around things. It might seem macabre to see people argue "Well, if it killed 10 kids but saved 10,000, doesn't that mean it's good?" without understanding the deeper perspective a person like that would hold, which is essentially utilitarian.
And I don't think any utilitarian would be against "something with some harm but mostly good can be made to do even less harm".
But the person I initially replied to obviously doesn't see it as "some harm vs. much good" (which we could argue is true or not), and would say that with any harm plus any good, it is still worth asking whether the harm is worth it, regardless of the good it could do.
>I don't think anyone here tried to use it that way
That's certainly the impression you gave with your response. You didn't engage with the clear example of harmful behavior or debate what various numbers on either side of the equation would mean. Your approach was to circumvent OP's direct question to talk philosophy. That suggests you think there are some philosophical outlooks that could similarly sidestep the question of a therapist pushing people to kill themselves, which is a rather simple and unambiguous example of harm that could be reduced.
"We have AI therapists, most of the time it helps but sometimes it convinces the patients to kill themselves instead. Is the AI good or evil? Should it be allowed to continue to exist knowing it will inevitably kill again?"
Sounds more like the plot line of a 60s sci-fi novel or old Star Trek episode. What would the prime directive say? Haha
I think a therapist telling teenagers to kill themselves is always bad and should lead to revocation of license and prosecution, even if they helped other people.
AI solved a medical issue bothering my sister for over 20 years. Did what $20k in accumulated doctors bills couldn't.
I'm not even a utilitarian, but if there are many, many people with stories like hers, at some point you have to consider it. Some people use cars to kill themselves, but cars help and enrich the lives of 99.99% of people who use them.
Because almost everything you can do has both positive and negative effects. Focusing on only one side of the coin, and boosting or rejecting the thing through that view, misses the full picture. You end up either over-idealizing or unfairly condemning it, instead of understanding the trade-offs involved and making a balanced, informed judgment.
I'm pretty sure that many people with no instant access to doctors have also been helped by AI to diagnose illnesses and know when to consult. As with any technology, you can't evaluate it by focusing only on the worst effects.
Well, when you think the only thing that matters in life is money, you want to pursue it. Such wealth concentrations are a purely human sickness that can easily be cured with redistribution.
Look at how much is being invested in this garbage then look at the excuses when they tell us we can't have universal medicare, free school lunches, or universal childcare.
Isn't it obvious? If ChatGPT has convinced more teenagers not to commit suicide than it has convinced to commit suicide, then the net contribution is positive, isn't it?
Then the question becomes more if we're fine with some people dying because some other people aren't.
But AFAIK, we don't know (and probably can never know) the exact ratio of people AI has helped stay alive today versus people it has contributed to no longer being alive, which makes the whole thing a bit moot.
Yes. It pulls people towards normality, since it gives the average words for every answer. Meanwhile social media encouraged people to be different enough to surface, and therefore encouraged abnormality.
It's an over-simplification, that's for sure, one bordering on incorrect. But for people who don't care about the internals, I don't think it's a harmful perspective to keep.
It's harmful because in this context it leads to an incorrect conclusion. There's no reason to believe that LLMs' "averaging" behavior would cause a suicidal person to be "pulled toward normal".
It's a philosophical argument more than anything, I think. And it does raise the question: does your mind form itself around the humans (entities?) you converse with? So if you talk with a lot of smart people, you'll end up a bit smarter yourself, and if you talk with a lot of dull people, you'll end up dulling yourself. If you agree with that, I can see how someone would believe that LLMs pull people closer to the material they were trained on.
"It" being ChatGPT, in that case. I guess most people know, but not all AI is the same as all other AI, the implementation in those cases matter more than what weights are behind it, or that it's actually ML rather than something else.
With that said, like most technology, it seems to come with a ton of drawbacks and some benefits. While most people focus on the benefits, we're surely about to find out all the drawbacks shortly. Better than social media or not, it's being deployed at wide scale, so it's less about what each person believes and more about what we're ready to deal with and how we want to deal with it.
There are currently no realistic ways to temper or enforce public safety on these companies. They have achieved full regulatory capture. Any call for public safety will be set aside, and if it's not, someone will pay an exec to grant them an exception.
> There are currently no realistic ways to temper or enforce public safety on these companies
There is: general strikes usually do the trick if the government stops listening to the people. Of course, this doesn't apply to countries that have spent decades handicapping unions, syndicates, and other movements, but in modern countries that still pride themselves on democracy, it is possible, given enough people who care to do something about it.
Yes, I'm well aware; I mentioned the US not by name but by other properties in my earlier comments... I think once a country moves into authoritarianism there isn't much left but violence to achieve anything. General strikes and unions won't matter much once the military gets deployed against civilians, and you guys are already beyond that point. GLHF, and I hope things don't get too messy; you're welcome to re-join the modern world once you've cleaned house.
I mean, what you say is not really wrong, but it's also not really relevant to the post (or thread) you're replying to.
It doesn't matter what government is in control: LLMs cannot be made safe from the problems that plague them. Those problems are fundamental to their basic makeup.
It's more about whether we, the citizens, even want this deployed and under what legal framework, so that it will fit our collective view of what society is.
The "if" is very much on the table at this stage of the political discussion. Companies are trying to railroad everybody past this decision stage by moving too fast. However, this is a moment where we need to slow down instead and have a good long ponderous moment thinking about whether we should allow it at all. And as the peoples of our respective countries, we can force that.
Yeah, that's not how technology deployments work, nor have they ever worked. Basically, there is a "cat is out of the bag" moment, and after that it's basically a free-for-all until things get organized enough for someone to eventually start pushing back on too much regulation. Since we're just past this "cat is out of the bag" moment and way too early for "over-regulation", companies of course ignore all of it and focus on what they always focus on: making as much money while spending as little as possible.
Besides general strikes, there isn't much one can do to stop, pause or otherwise hold back companies and individuals from deploying legal technology any way they see fit, for better or worse.
Well, you're very much wrong about that. The cat can be put back into the bag if we want to. It certainly happened before.
Right now, companies are working extremely hard to give the impression that AI technology is essential. But that is a purposefully manufactured illusion. It's a set of ideas planted in people's heads. Marketing in those megacompanies that introduce new technologies like LLMs and AR glasses to end users is very much focused on reshaping society around their product. They think BIG. We need more awareness that this is happening so that we can push back in a coordinated and meaningful way. And then we can support politicians that implement that agenda.
> Well, you're very much wrong about that. The cat can be put back into the bag if we want to. It certainly happened before.
Name a single technology that was invented, where people figured out the drawbacks were bigger than the benefits, and then humanity just stopped caring about it altogether. Not even the technology with the biggest drawback we've created so far (it would literally make the earth inhospitable if deployed at scale) has apparently been important enough to do that with, so I'm eager to hear what specific cats have been put back in what bags, if you'd entertain me.
There are plenty of ways. For example, the technology would die completely the moment companies were barred from creating or running it. End users don't have the means to update those models, and they would age and become useless.
Is there any reason to believe AI will be any better than social media when it comes to mental health?
https://www.washingtonpost.com/technology/2025/12/27/chatgpt...