
I am honestly amazed that people are not interested in the obvious counterfactuals. Are we seriously so uninterested in how many people were helped by AI that we don't even feel the need to ask that question when looking at potential harm?




Uninterested? Not at all.

Concerned that AI companies, like social media companies, exist in an unregulated state that is likely to be detrimental to most minors and many adults? Absolutely.


What does the second sentence have to do with the article in question, or with why it makes no effort to discuss the number of people being helped by AI in this context?

Why should "some people have been helped by AI" outweigh "ChatGPT has convinced multiple teenagers to commit suicide"?

What if multiple teenagers were convinced not to commit suicide by AI? The story says that ChatGPT urged the teen to seek help from loved ones dozens of times. It seems plausible to me that other people out there actually listened to ChatGPT’s advice and received care as a result, when they might have attempted suicide otherwise.

We simply don’t know the scale of either side of the equation at this point.


And what if someone sold heroin to a bunch of people and many of them died but one of them made some really relatable tortured artist music?

Like all tools, heroin is regulated, and we should regulate AI in a way that attempts to maximize the utility it provides to society.

Additionally, with things like state lottery systems, we have decided that certain things should be regulated in such a way that the profits are distributed to society, rather than letting a few rent-seekers exploit the intrinsically addictive nature of the human mind to the detriment of all of society.

We should consider these things when developing regulations around technology like AI.


If a therapist tells 10 people not to kill themselves, and convinces 5 patients to kill themselves, would you say "this therapist is doing good on the whole"?

You can just ask "What flavor of ethics do you prefer: utilitarianism, deontology, egoism, or contractualism?" instead.

From what I can gather, a lot of ML people are utilitarians, for better or worse. That is why you're seeing this discussed at all; if we all agreed on the ethics, it would be a no-brainer.


It seems like a misunderstanding of utilitarianism to use it as an excuse to shut down any complaints about something as long as the overall societal benefit is positive. If we actually engage with these issues, we can potentially shift things into an even more positive direction. Why would a utilitarian be against that?

I don't think anyone here tried to use it that way, but it's useful to have some framing around things. It might seem macabre to see people argue "Well, if it killed 10 kids but saved 10,000, doesn't that mean it's good?" without understanding the deeper perspective such a person would hold, which is essentially utilitarian.

And I don't think any utilitarian would object to making something that does some harm but mostly good do even less harm.

But the person I initially replied to obviously doesn't see it as "some harm vs. much good" (whether that framing is accurate is arguable), and would say that for any mix of harm and good, it's still worth asking whether the harm is justified by the good it could do.


>I don't think anyone here tried to use it that way

That's certainly the impression you gave with your response. You didn't engage with the clear example of harmful behavior or debate what various numbers on either side of the equation would mean. Your approach was to circumvent OP's direct question to talk philosophy. That suggests you think there are some philosophical outlooks that could similarly sidestep the question of a therapist pushing people to kill themselves, which is a rather simple and unambiguous example of harm that could be reduced.


"We have AI therapists, most of the time it helps but sometimes it convinces the patients to kill themselves instead. Is the AI good or evil? Should it be allowed to continue to exist knowing it will inevitably kill again?"

Sounds more like the plot line of a 60s sci-fi novel or old Star Trek episode. What would the prime directive say? Haha


What if it's 100,000 people who don't kill themselves?

I think a therapist telling teenagers to kill themselves is always bad and should lead to revocation of license and prosecution, even if they helped other people.

AI solved a medical issue that had bothered my sister for over 20 years. It did what $20k in accumulated doctors' bills couldn't.

I'm not even a utilitarian, but if there are many, many people with stories like hers, at some point you have to consider it. Some people use cars to kill themselves, but cars help and enrich the lives of 99.99% of the people who use them.


Video games, fast food, and religion are other examples.

They are mostly useful, but occasionally can kill someone who indulges in them too much.


Because almost everything you can do has both positive and negative effects. Focusing on only one side of the coin, and boosting or rejecting the thing through that narrow view, misses the full picture. You end up either over-idealizing or unfairly condemning it, instead of understanding the trade-offs involved and making a balanced, informed judgment.

I'm pretty sure that many people with no immediate access to doctors have also been helped by AI to diagnose illnesses and to know when to seek care. As with any technology, you can't evaluate it by focusing only on the worst effects.

Well, when you think the only thing that matters in life is money, you want to pursue it. Such wealth concentrations are a purely human sickness that can easily be cured with redistribution.

Look at how much is being invested in this garbage, then look at the excuses when they tell us we can't have universal medicare, free school lunches, or universal childcare.


Isn't it obvious? If ChatGPT has convinced more teenagers not to commit suicide than it has convinced to commit suicide, then the net contribution is positive, isn't it?

Then the question becomes whether we're fine with some people dying because other people aren't.

But AFAIK, we don't know (and probably can never know) the exact ratio of people AI has helped stay alive to people whose deaths it contributed to, which makes the whole thing a bit moot.


But the AI companies will have a very good idea of this ratio. They have all the conversations.

In fact, targeted research on this data could teach us how to convince more people to stay alive, right?



