[flagged] Ask HN: Why was my last post flagged
4 points by andai on Sept 28, 2024 | 10 comments
I posted a link to volunteer opportunities in AI safety and mentioned that I found it on an Effective Altruism website (i.e. a list with similar opportunities, which seemed relevant to mention). I got one comment saying "Effective Altruism is a cult" (with no further explanation) and the thread was flagged.

I found this a bit odd and I'd love some more information. I've been attending a local EA discussion group since last week, and it seems pretty reasonable so far. (As far as I can tell, AI safety isn't a cult either, but apparently opinions differ strongly on at least one of these issues!)

So I'd love it if someone could give me some actual information instead of just flagging the thread this time. Thanks in advance



EA is a way for wealthy and powerful people to feel good about being wealthy and powerful. It's a tool to resolve the cognitive dissonance of them taking a bigger share of the national pie while others are struggling to survive, by saying "See? I give back!"

Like many other "rich people" movements of the past, it comes with a veneer of logic and altruism, but it is ultimately self-serving and does nothing to resolve the wealth inequality problem (which is what the rich feel guilty about and why they need to salve their conscience somehow). In the old days, the poor would just rise up and kill them. Societies used to get around this problem by redistributing wealth (potlatch, jubilees, etc.), but we don't have that anymore; hence the many philanthropic movements that target a specific tiny thing rather than the fundamental problem (which would threaten their wealth).

It gets flagged on HN because this whole thing already played out over the past 10 years, so we're over that.


Actually I brought up this argument at the EA meeting. Someone mentioned that a disproportionate percentage of CEOs are psychopaths, and I said we should utilize that. If extremely powerful people have a need to look good for selfish reasons, why not help them achieve this selfish goal by saving lives? It sounds like a highly optimal outcome to me.

Then this morning I read the New Yorker article on EA, and it sounds like they're already way ahead of me in that line of thinking...


Yep, the ends justify the means.

Except that nobody ever checks the ends.

Techies are attracted to EA's siren song because it offers such a simple and orderly view of the world. Kinda reminds me of Technocracy back in the day...


This is the weirdest argument. Psychopaths aren't exactly known for caring about the lives of others or for embracing being manipulated.

“Hi, Mr. Psychopath CEO, sir. A bunch of other people and I were discussing openly on a forum that we should manipulate you into doing the right thing by stroking your ego. What do you think? Surely as a psychopath you care about the opinions of others.”


If there is a job that attracts psychopaths, we should seriously reconsider the existence of that job.

If, on the other hand, there is a job that makes psychopaths, we should really seriously reconsider the existence of that job.


I have no particular opinion on the matter (though, full disclosure, I joined the mailing lists of both groups while trying to start a Master's degree program, and found the resulting messaging, and an AI safety study group in particular, sufficiently high-noise and ill-considered that I unsubscribed). I would point you at the "Controversies" section of the Wikipedia article on EA, but "it seems pretty reasonable so far" and "as far as I can tell [xyz] isn't a cult" are pretty classically things someone just entering a cult would say.


Not me. Just a wild guess:

1. People are just sick of hearing yet another thing about AI.
2. Combine AI with EA and even more buzzwords, and you quickly multiply that effect.
3. It comes off as advertising, something that is usually highly frowned upon here, whether or not the cause is noble. The crowd here wants to digest interesting content.


Here's a more interesting question: is Effective Altruism wrong in idea or in execution? What would it look like if something like that were done right?

(To me, so far, it seems more right than wrong, but feedback from people who disagree is very valuable.)

My own value system here is "minimize suffering". EA emphasizes more measurable metrics like death and disability, which is a fair proxy. But I've also heard criticisms of that fundamental idea, and of utilitarianism more broadly.


People abuse the flagging functionality by flagging submissions that go against their ideology or prejudice. I think it’s part of the current suppress/censor/ban/deplatform/demonetize/unperson zeitgeist.


EA may very well be a cult, but a link to volunteer opportunities in AI safety seems harmless to me.



