Hacker News
[flagged] Deplatforming Backfired (reason.com)
34 points by sbuttgereit 4 days ago | hide | past | favorite | 42 comments




If deplatforming didn't work, why is the CBS 60 Minutes special being pulled? Why does the US have such an elaborate and far reaching network of financial sanctions, and corresponding anti-BDS laws trying to prevent private organizations from maintaining sanctions of their own? Why do most platforms and payment providers deplatform adult content? And so on.

(The article appears to complain that the John Birch Society were wrongly deplatformed, if you want to know how far out the author is)


The thesis of the article would mean that if people are not allowed to express the views in the 60 Minutes story, it would create an opportunity for an ungated, stronger expression of those views in the future.

That's actually an interesting argument. I wonder if it helps explain the growing support for the far-right political party in Germany (the AfD), considering that they've been effectively blacklisted by the other political parties.

Did you read the article or are you responding to the headline?

I read it enough to get to the defense of Bircherism.

They literally aren't defending Bircherism; they say it would have been more productive to argue against the Birchers in public to discredit their ideas rather than letting them fester off in some dark corner. They're talking about how pushing bad ideas out of public view rather than arguing against them can exacerbate negative polarization and draw more people into bad ideas.

You have completely missed the point of the article. So you didn't actually read the article and you're making a dumb claim based on a misunderstanding.


> they say that it would have been more productive to argue against them in public to discredit their ideas rather than letting them fester off in some dark corner.

That doesn't work very well either. There are countless examples like the anti-vax nonsense.

I'll agree with the statement that deplatforming doesn't work very well. But it could work better than the alternatives in some cases.


Anti-vaxxers were removed from every platform for more than two years during the pandemic, and that didn't work. I rarely see anyone actually going into a public forum and clearly laying out the evidence for vaccine safety, rather than just making an appeal to authority. Clearly it's a hard job, but I think it's worthwhile.

It's an impossible job. It takes many hours of work to properly debunk a post that can be written in 30 seconds. Even less if you use a bot.

> I rarely see anyone actually going into a public forum to try to clearly communicate the evidence for vaccine safety

Then you haven’t looked. There are endless examples of qualified people explaining the actual risks and benefits of vaccines in clear and honest terms.

Perhaps what you actually mean is that you don’t see this happen within the insular communities that embrace antivaccine rhetoric. You don’t see it there because such efforts are blocked. Go explain vaccines in an antivax subreddit and watch as you get downvoted into invisibility and probably banned from the sub.


> Then you haven’t looked. There are endless examples of qualified people explaining the actual risks and benefits of vaccines in clear and honest terms.

Yes, there's plenty of that in some places, like TikTok or the NYT. I mean that people need to address it in places where the people engaging with anti-vax content will actually see it and engage with it. There was a successful example a few years back where public health officials engaged with Chabad community leaders in Brooklyn and got them to encourage everyone to get measles vaccines, but I think this is all too rare.


I think this is both inaccurate and unreasonable.

From what I’ve seen there is a lot of effort placed on trying to reach out and correct these misplaced views (or at least there was under the previous administration). You are saying that the issue is that outreach is not being attempted when in fact it is.

> Chabad community leaders in Brooklyn

Was this a case of actual vaccine hesitancy? Most of the antivax stuff is not mere hesitancy but hostility. If you have an audience willing to listen you can potentially sway them. An audience who refuses to listen and assumes you are an evil liar is hard to work with.

> actually address it in places where people who are engaging in anti-vaxx content

And I explained why this is so difficult. Internet echo chambers are a huge source of this stuff and it’s extremely hard to pierce because participants actively block participants who dissent.


No, in the Chabad case RFK's bullshit nonprofit had been leafleting and making phone calls in Yiddish to convince mothers not to vaccinate their children. However, the community was receptive to arguments about the benefits after an outbreak.

> Go explain vaccines in an antivax subreddit and watch as you get downvoted into invisibility and probably banned from the sub.

See, deplatforming works!

(like any tactic, it can be used for good or evil)


> See, deplatforming works!

Again that's not what the article here that you didn't read is talking about. The article is about negative polarization and preference falsification.


Why do I never hear the "it would be better to let pornography onto major platforms so people could debate against it in the comments" argument?

So you still didn't read the article and you're changing the subject to cover for the fact that you made up that the article defends bircherism. Nice attempt at a deflection, but you're still reacting to something you didn't read based on basically just the headline.

He didn’t say the article is correct. He said that arguing against the article without reading it is dumb.

I'm saying both that it is correct and the other poster should read it or move on.

Removing Trump from Twitter didn't stop people from voting for him.

Removing 60 Minutes from CBS doesn't stop people like you from discussing it elsewhere.

Instead, it fuels them to post elsewhere.

Deplatforming didn't work then, and it doesn't work now.


What I'm saying is that this is survivorship bias: there are plenty of cases where deplatforming does work, it's just not 100% effective, and so we have a situation like antibiotic resistance where pathologies have evolved around the defenses. It's kind of incredible that viruses have managed to evolve around vaccines to install a pro-virus person at the top of the US department of health to ensure better spread of viruses, but I guess life finds a way.

Also: this is entirely anglocentric. I don't think you'd find anyone claiming that the Chinese government's censorship system backfired or is completely ineffective. It's an even stronger system there than anything the billionaires run here.


That wasn't what you were saying, but those are good observations. This behavior of Americans goes back to Tocqueville's observations about newspapers and the role that discussing them played in our political outcomes, allowing certain types of populist candidates to bubble up. There are analogues in English politics. The article had a continental example, but it was just an analogy. That said, it's reasonable for Americans to want to understand and adjust their strategies for quirks in their culture and political process; they can't simply transplant Chinese government and culture here to please you, can they?

It’s frustrating that this is flagged. I don’t agree with the article but I think there’s good context here for actual discussion.

Perhaps hacker news isn't the right place for that discussion.

HN seems like exactly the place for a discussion on the unintended effects of speech suppression on social media platforms.

If anything, the federal government became even more aggressive in its censorship efforts under the current administration: banning individual words in federal reports and grantees' publications, pressuring networks to fire program hosts, filing performative lawsuits, threatening to pull broadcast licenses, and now even censoring an individual CBS story. Things have distinctly escalated.

It doesn't seem like it's working very well for them, and if anything it polarizes people further, as the article suggests.

It does seem almost performative, doesn't it? Censorship would be more effective if the machinations weren't so public.

@dang - why is this flagged down, but the flags on the 60 Minutes story were overridden?

We occasionally turn off the flags on political/ideological stories when certain conditions are met, such as: (1) there aren't too many of them; (2) the story contains significant new information (https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...); (3) there's some overlap with intellectual curiosity; (4) we think HN can maybe discuss it substantively; or last but not least, (5) the community is insisting on discussing it. The latter can show up in various ways, such as when the story keeps getting reposted (often from different URLs) or we get lots of emails about it.

Political or ideological opinion pieces rarely meet any of these conditions. That doesn't mean they're bad articles, but it does mean we would reserve the turning-off-flags move (which ought to be fairly rare) for articles that do.

Does that answer your question?


Thank you for the explanation. As a long-time community member, I have seen this principle and agree with it. I am asking how the principle applied to the 60 Minutes story, such that multiple articles were on the front page despite all being flagged. That story doesn't seem very relevant: the segment was not even killed, just delayed to await additional investigation. Seems like it fails the conditions you mentioned: (1), (3), (4), and (5)?

Sounds like deplatforming is a variant of the Streisand effect.

This article is predicated on an unfounded counterfactual. Who knows what would have happened if Trump's twitter account hadn't been banned? Also, it seems a bit off to describe Musk's leadership of Twitter as triggering the return of "the free and open internet".

> This article is predicated on an unfounded counterfactual.

I think it's just evaluating the claim that removing these people from a public platform removes their ideas from popular discourse, which obviously didn't work. The article is arguing that failing to engage bad ideas head-on leads to increasingly insular and polarized groups within society.


> which obviously didn't work

But... how obvious is that? Perhaps it did significantly reduce those ideas when it was active. Like, if Musk hadn't reinstated Trump's account we could be looking at a different presidency.


Again you didn’t read the article. Read it or fuck off and stop arguing about the headline.

Yeah, I read the article. It doesn't address the possibility that the causality could be the other way around.

The stuff about Trump and Bhattacharya is just odd. Trump rose back to power after Musk bought Twitter and gave him a platform to spread lies again. Then Trump appointed RFK, who appointed Bhattacharya as a sort of token gesture.

The Fuentes stuff is just as odd - his popularity waned while he was censored, but after being reinstated to X he grew his base to a million followers. Again, how does this support the claim that deplatforming was a negative move?

I guess there's two competing narratives: deplatforming never worked, vs deplatforming was working until Musk stepped in and undid it. The article does not give any compelling arguments for the former.


Did you read it, though? The article is about negative polarization spirals. You keep missing that entirely.

There's a bit about that in the "Tyranny of the Intolerant" section. Imo the article isn't so much making a case for that as it is lifting quotations wholesale from Timur Kuran as a sort of appeal to authority to justify its own narrative. It makes out like it's obvious that Kuran's work explains the rise of Trump and Fuentes, whilst Musk's hijacking of Twitter strikes me as a simpler, more natural explanation.

Funny, the Musk comment tripped me up as well, since it is clear the Twitter algos and Grok are sycophantic. Yet the article resonates with me. Deplatforming. Canceling. Suppressing. None of it has worked. Moderation. Healthy engagement. Acknowledgement but not acceptance. Can that work?

It resonates with people who aren’t personally affected by the Overton window shifting towards extremism. It’s now more normal to fling racist slurs at people online, and that behaviour is coming offline as well. When we used to “deplatform” racism this sort of talk wasn’t within the Overton window. Of course if you’re not personally affected you’ll say this is fine, marketplace of ideas etc. Let the marketplace sort out if racism should be normalised or not.

It doesn’t even lead to better discourse. We’re both here, commenting on this forum right? It’s because the level of discourse here is higher than elsewhere, certainly much better than “free speech” platforms like Musk’s. How can that be, when HN has extraordinarily strict rules on acceptable speech? Even calling someone an idiot can get you banned here, let alone a pajeet or Paki. If you truly believed in freedom of speech, you’d quit a forum moderated like this.


Agreed. The internet is a wild ride of walled-garden algorithms, dead internet theory, bot comments replying to other bot comments, like farming, sway-the-masses influence campaigns, and scam-laden AI-generated nightmares, with major platforms requiring IDs and biometric verification where you're fingerprinted, scanned, identified, and crapped on.

Deplatforming cuts a voice off from its audience. People have entire livelihoods taken from them and their viewpoints suppressed, and are forced onto other platforms with questionable userbases, each offering its own infinite scrolls, dopamine hits, and cancel cultures.

It is what it is.



