
I think it would be wise to listen to Nobel Prize-winning journalist Maria Ressa of the Philippines regarding unchecked social media.

"You and I, if we say a lie we are held responsible for it, so people can trust us. Well, Facebook made a system where the lies repeated so often that people can't tell."

"Both United Nations and Meta came to the same conclusion, which is that this platform Facebook actually enabled genocide that happened in Myanmar. Think about it as, when you say it a million times... it is not just the lie but also it is laced with fear, anger and hate. This is what was prioritized in the design and the distribution on Facebook. It keeps us scrolling, but in countries like Myanmar, in countries like Philippines, in countries where institutions are weak, you saw that online violence became real world violence."

"Fear, anger, hate, lies, salaciousness, this is the worst of human nature... and I think that's what Big Tech has been able to do through social media... the incentive structure is for the worst of who we are because you keep scrolling, and the longer you keep scrolling the more money the platform makes."

"Without a shared reality, without facts, how can you have a democracy that works?"

https://www.cnn.com/2025/01/12/us/video/gps0112-meta-scraps-...



"Beware of he who would deny you access to information for in his heart he dreams himself your master." - Commissioner Pravin Lal, U.N. Declaration of Rights


Full quote: "As the Americans learned so painfully in Earth's final century, free flow of information is the only safeguard against tyranny. The once-chained people whose leaders at last lose their grip on information flow will soon burst with freedom and vitality, but the free nation gradually constricting its grip on public discourse has begun its rapid slide into despotism. Beware of he who would deny you access to information, for in his heart he deems himself your master."

(Alpha Centauri, 1999, https://civilization.fandom.com/wiki/The_Planetary_Datalinks... )


"I sit here in my cubicle, here on the motherworld. When I die, they will put my body in a box and dispose of it in the cold ground. And in the million ages to come, I will never breathe, or laugh, or twitch again. So won't you run and play with me here among the teeming mass of humanity? The universe has spared us this moment."

~Anonymous, Datalinks.






You can watch YouTube without watching any channels from an American person. What do you mean?


Weird American ads from crazy American Christians convinced about the rapture.


That's why you buy a $20,000 GPU for local inference for your AI ad-blocker, geez.

Orrrrr you pay $20 per month for either the left- or right-wing one in the cloud.


There is a difference between free flow of information and propaganda. Much like how monopolies can destroy free markets, unchecked propaganda can bury information by swamping it with a data monoculture.

I think you could make a reasonable argument that the algorithms that distort social media feeds actually impede the free flow of information.


> Much like how monopolies can destroy free markets, unchecked propaganda can bury information by swamping it with a data monoculture.

The fundamental problem here is exactly that.

We could have social media that no central entity controls, i.e. it works like the web and RSS instead of like Facebook. There are a billion feeds, every single account is a feed, but you subscribe to thousands of them at most. And then, most importantly, those feeds you subscribe to get sorted on the client.

Which means there are no ads, because nobody really wants ads, and so their user agent doesn't show them any. Ads are the source of the existing incentive for the monopolist in control of the feed to fill it with rage bait, so that incentive goes away.

The cost is that you either need a P2P system that actually works or people who want to post a normal amount of stuff to social media need to pay $5 for hosting (compare this to what people currently pay for phone service). But maybe that's worth it.
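The client-side model described above can be sketched in a few lines: every subscription is just a feed of posts, and merging, ranking, and filtering all happen in the user's own agent. This is a toy sketch, not any real protocol; the `Post` type, the feed data, and the pure reverse-chronological ranking are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: int  # unix seconds
    text: str

def build_timeline(feeds, blocked_authors, limit=50):
    """Merge many per-account feeds into one timeline.

    All ranking and moderation happens here, on the client:
    no server decides what you see or who is blocked.
    """
    merged = [p for feed in feeds for p in feed
              if p.author not in blocked_authors]
    # Pure reverse-chronological sort; no engagement optimization.
    merged.sort(key=lambda p: p.timestamp, reverse=True)
    return merged[:limit]

# Hypothetical feeds: in a real system each would be fetched from a
# URL the user chose to subscribe to, like RSS.
alice = [Post("alice", 100, "hello"), Post("alice", 300, "later post")]
spam = [Post("spammer", 200, "rage bait")]

timeline = build_timeline([alice, spam], blocked_authors={"spammer"})
```

Since the sort key is chosen by the client, nothing in the pipeline has an incentive (or the ability) to boost rage bait.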


>We could have social media that no central entity controls, i.e. it works like the web and RSS instead of like Facebook. There are a billion feeds, every single account is a feed, but you subscribe to thousands of them at most. And then, most importantly, those feeds you subscribe to get sorted on the client.

The Fediverse[1] with ActivityPub[0]?

[0] https://activitypub.rocks/

[1] https://fediverse.party/


Something along those lines, but you need it to be architected in such a way that no organization can capture the network effect in order to set up a choke point. You need all moderation to be applied on the client, or you'll have large servers doing things like banning everyone from new/small independent servers by default so that people have to sign up with them instead. The protocol needs to make that impossible or the long-term consequences are predictable.


>but you need it to be architectured in such a way that no organization can capture the network effect in order to set up a choke point.

How is that not the case now?

>You need all moderation to be applied on the client, or you'll have large servers doing things like banning everyone from new/small independent servers by default so that people have to sign up with them instead.

I suppose. There are ActivityPub "clients" which act as interfaces that allow the former and act as agents for a single user interacting with other ActivityPub instances, which I'd expect can take us most of the way you say we should go.

I haven't seen the latter, as there's really no incentive to do so. Meta tried doing so by federating (one-way) with Threads, but that failed miserably, as the incentives are exactly the opposite in the Fediverse.

I suppose that incentives can change, although money is usually the driver for that and monetization isn't prioritized there.

>The protocol needs to make that impossible or the long-term consequences are predictable.

Impossible? Are you suggesting that since ActivityPub isn't perfect, it should be discarded?

ActivityPub is easily 75% of where you say we should go. Much farther along that line than anything else. But since it's not 100% it should be abandoned/ignored?

I'm not so sure about your "long-term consequences" being predictable. Threads tried to do so and failed miserably. In fact, the distributed model made sure that it would fail, even though the largest instances did acquiesce.

ActivityPub is the best you're going to get right now, and the best current option for distributed social media.

Don't let the perfect be the enemy of the good.

Edit: I want to clarify that I'm not trying to dunk on anyone here. Rather, I'm not understanding (whether that's my own obtuseness or something else) the argument being made against ActivityPub in the comment to which I'm replying. Is there some overarching principle or actual data which supports the idea that all social media is doomed to create dystopian landscapes? Or am I missing something else here?


> How is that not the case now?

The protocol allows servers, rather than users, to ban other servers. Servers should be only the dumbest of pipes.
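To make "dumbest of pipes" concrete, here's a toy sketch of that division of labor (all names hypothetical, not part of ActivityPub): the server only stores and returns posts, while every filtering decision lives in the individual user's client, so there is no hook for server-level defederation.

```python
# A dumb-pipe server just stores and forwards; policy lives in the client.

def server_fetch(store, author):
    """Server side: return every post by an author, with no
    filtering, ranking, or ban list of its own."""
    return [p for p in store if p["author"] == author]

def client_view(posts, user_blocklist):
    """Client side: this user's own agent decides what to hide.
    Another user with a different blocklist sees different posts."""
    return [p for p in posts if p["author"] not in user_blocklist]

store = [
    {"author": "alice", "text": "hi"},
    {"author": "mallory", "text": "spam"},
]

# The server hands over everything it has for the subscribed authors...
raw = server_fetch(store, "mallory") + server_fetch(store, "alice")
# ...and only this user's client drops mallory.
view = client_view(raw, user_blocklist={"mallory"})
```

The key property is that `server_fetch` has no policy parameter at all: a server written this way has nothing to abuse even if it grows large.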

> Are you suggesting that since ActivityPub isn't perfect, it should be discarded?

I'm saying that by the time something like this has billions of users the protocol is going to be a lot harder to change, so you should fix the problems without delay instead of waiting until after that happens and getting deja vu all over again.

> Threads tried to do so and failed miserably.

Threads tried to do that all at once.

The thing that should be in your threat model is Gmail and Chrome and old school Microsoft EEE. Somebody sets up a big service that initially doesn't try to screw everyone, so it becomes popular. Then once they've captured a majority of users, they start locking out smaller competitors.

The locking out of smaller competitors needs to be something that the protocol itself is designed to effectively resist.


>> How is that not the case now?

>The protocol allows servers, rather than users, to ban other servers. Servers should be only the dumbest of pipes.

A fair point. A good fix for this is to have individual clients that can federate/post/receive/moderate/store content. IIUC, there is at least one client/server hybrid that does this. It's problematic for those who don't have the computing power and/or network bandwidth to run such a platform. But it's certainly something to work towards.

>> Are you suggesting that since ActivityPub isn't perfect, it should be discarded?

>I'm saying that by the time something like this has billions of users the protocol is going to be a lot harder to change, so you should fix the problems without delay instead of waiting until after that happens and getting deja vu all over again.

I'm still not seeing the "problems" with server usage you're referencing. Federation obviates the need for users to be on the same server and there's little, if any, monetary value in trying to create mega servers. Discoverability is definitely an issue, but (as you correctly point out) should be addressed. It is, however, a hard problem if we want to maintain decentralization.

>The thing that should be in your threat model is Gmail and Chrome and old school Microsoft EEE. Somebody sets up a big service that initially doesn't try to screw everyone, so it becomes popular. Then once they've captured a majority of users, they start locking out smaller competitors.

Given the landscape of the Fediverse, that seems incredibly unlikely. Perhaps I'm just pie in the sky on this, but those moving to ActivityPub platforms do so to get away from such folks.

Adding to that the ability to manage one's own content on one's own hardware with one's own tools, it seems to be a really unlikely issue.

Then again, I could absolutely be wrong. I hope not. That said, I'm sure that suggestions for changes along the lines you suggest to the ActivityPub protocol[0][1][2] as a hedge against making it fall into a series of corporate hell holes, as you put it, "impossible," would be appreciated.

[0] https://github.com/w3c/activitypub

[1] https://activitypub.rocks/

[2] https://w3c.github.io/activitypub/

Edit: Clarified my thoughts WRT updates to the ActivityPub protocol.


There is no generally accepted definition of propaganda. One person's propaganda is another person's accurate information. I don't trust politicians or social media employees to make that distinction.


There are definitely videos that are propaganda.

Like those low-quality AI videos about Trump or Biden saying things that didn't happen. Anyone with critical thinking knows that those are either propaganda or engagement farming.


Or they're just humorous videos meant to entertain and not be taken seriously. Or they are meant to poke fun at the politician, e.g. clearly politically motivated speech, literally propaganda, but aren't meant to be taken as authentic recordings and deception isn't the intent.

Sometimes it's clearly one and not the other, but it isn't always clear.


'I'm just a comedian guys' interviewing presidential candidates, spouting how we shouldn't be in Ukraine, then the second they get any pushback 'I'm just a comedian'. It's total bullshit. They are trying to influence, not get a laugh.


Downvoted... yet here is the Vice President claiming that the FCC Commissioner's 'we can do this the hard way or the easy way' remark about censoring Jimmy Kimmel was 'just telling a joke':

https://bsky.app/profile/atrupar.com/post/3lzm3z3byos2d

You 'it's just comedy' guys are so full of it. The FCC Head attacking free media in the United States isn't 'just telling jokes'.


What you think is propaganda is irrelevant. When you let people unnaturally amplify information by paying to have it forced into someone’s feed that is distorting the free flow of information.

Employees choose what you see every day you use most social media.


Congrats! You are 99% of the way to understanding it. Now you just have to realize that "whoever is in charge" might or might not have your best interests at heart, government or private.

Anyone who has the power to deny you information absolutely has more power than those who can swamp out good information with bad. It's a subtle difference yes, but it's real.


Banning algorithms and paid amplification is not denying you information. You can still decide for yourself who to follow, or actively look for information, actively listen to people. The difference is that it becomes your choice.


Well, this is about bringing back creators banned for (in YouTube's eyes) unwarranted beliefs stemming from distrust of political or medical authorities, and promoting such distrust. They weren't banned because of paid amplification.

I don't quite understand how the Ressa quote in the beginning of this thread justifies banning dissent for being too extreme. The algorithms are surely on YouTube and Facebook (and Ressa's!) side here, I'm sure they tried to downrank distrust-promoting content as much as they dared and had capabilities to, limited by e.g. local language capabilities and their users' active attempts to avoid automatic suppression - something everyone does these days.


Just regulate the algorithm market. Let people see, decide, share, compare
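One reading of an "algorithm market" can be sketched as interchangeable ranking functions that the user, not the platform, chooses between and can compare. A toy sketch; the ranker names and post fields are made up for illustration.

```python
# Toy "algorithm market": ranking functions are interchangeable
# plug-ins and the user picks which one orders their feed.
RANKERS = {
    "chronological": lambda posts: sorted(
        posts, key=lambda p: p["ts"], reverse=True),
    "most_liked": lambda posts: sorted(
        posts, key=lambda p: p["likes"], reverse=True),
}

def rank_feed(posts, choice):
    """Apply whichever ranking algorithm the user selected."""
    return RANKERS[choice](posts)

posts = [
    {"ts": 1, "likes": 9, "text": "old but popular"},
    {"ts": 2, "likes": 1, "text": "newest"},
]
```

Because the same posts can be run through every ranker, users could see side by side how each algorithm reshapes their feed, which is the "see, decide, share, compare" part.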


What is the "algorithm market"? Where can I buy one algorithm?


Isn't one yet; that would be the role of government: to create a market on these large platforms.


OK, but that's an argument against advertising, and maybe against dishonest manipulation of ranking systems.

It's not an argument for banning doctors from YouTube for having the wrong opinions on public health policy.


> distorting the free flow of information

There is no free flow of information. There never was. YouTube and FB and Google saying "oh, it's the algorithm" is complete BS. It was always manipulated, boosting whomever they saw fit.


And propaganda by definition isn’t false information. Propaganda can be factual as well.


So many people have just given up on the very idea of coherent reality? Of correspondence? Of grounding?

Why? No one actually lives like that when you watch their behavior in the real world.

It's not even post modernism, it's straight up nihilism masquerading as whatever is trendy to say online.

These people accuse everyone of bias while ignoring that their position comes from a place of such extreme bias that it irrationally, presuppositionally rejects the possibility of true facts in their chosen, arbitrary cut-outs. It's special pleading as a lifestyle.

It's very easy to observe, model, and simulate node-based computer networks that allow for coherent and well-formed data with high correspondence, and very easy to see networks destroyed by noise and data drift.

We have this empirically observed in real networks; it's pragmatic and why the internet and other complex systems run. People rely on real network systems and the observed facts of how they succeed or fail, then try to undercut those hard-won truths from a place of utter ignorance. While relying on them! It's absurd ideological parasitism: they deny the value of the things they demonstrably value just by posting! Just the silliest form of performative contradiction.

I don't get it. Facts are facts. A thing can be objectively true in what for us is a linear global frame. The log is the log.

Wikipedia and federated text content should never be banned, logs and timelines, data, etc... but memes and other primarily emotive media are case by case; I don't see their value. I don't see the value in allowing people to present unprovable or demonstrably false data using a dogmatically, confidently true narrative.

I mean present whatever you want but mark it as interpretation or low confidence interval vs multiple verified sources with a paper trail.

Data quality, grounding and correspondence can be measured. It takes time though for validation to occur, it's far easier to ignore those traits and just generate infinite untruth and ungrounded data.

Why do people prop up infinite noise generation as if it was a virtue? As if noise and signal epistemically can't be distinguished ever? I always see these arguments online by people who don't live that way at all in any pragmatic sense. Whether it's flat earthers or any other group who rejects the possibility of grounded facts.

Interpretation is different, but so is the intentional destruction of a shared meaning space by turning every little word into a shibboleth.

People are intentionally destroying the ability to even negotiate connections to establish communication channels.

Infinite noise leads to runaway network failure, and in human systems the inevitability of violence. I for one don't like to see people die because the system has destroyed message passing via attentional DDoS.


Fortunately your biased opinion about what information has value is utterly worthless and will have zero impact on public policy. Idealized mathematical models of computer networks have no relevance to politics or freedom of expression in the real world.


There isn't. Yet everybody knows what I mean by "propaganda against immigration" (it's just that some would discredit it and some would fight for it), and nobody claims that the Hungarian government's "information campaign" about migrants is not fascist propaganda (except the government, obviously, but not even their followers deny it). So, yes, the edges are blurred, yet we can clearly identify some propaganda.

Also, accurate information (like "here are 10 videos about blacks killing whites") with distorted statistics (when there is twice as much white-on-black murder) is still propaganda. But these are difficult to identify, since they clearly affect almost the whole population. Not many people have even tried to fight against it, especially because the propaganda's message is created by you. // The example is fiction - but the direction exists, just look at Kirk's Twitter for example - I have no idea about the exact numbers off the top of my head.


Propaganda wouldn't be such a problem if content wasn't dictated by a handful of corporations, and us people weren't so unbelievably gullible.


indeed, didn't YT ban a bunch of RT employees for undisclosed ties? I bet those will be coming back.


Oh, but can you make an argument that the government, pressuring megacorporations with information monopolies to ban things they deem misinformation, is a good thing and makes things better?

Because that's the argument you need to be making here.


You don't even need to make the argument. Go copy paste some top HN comments on this issue from around the time the actions we're discussing youtube reversing happened.


I think those arguments sound especially bad today, actually. They got the suppression they wanted, but it did not give the outcome they wanted.


Not really. You can argue that the government should have the right to request content moderation from private platforms and that private platforms should have the right to decline those requests. There are countless good reasons for both sides of that.

In fact, this is the reality we have always had, even under Biden. This stuff went to court. They found no evidence of threats against the platforms, the platforms didn't claim they were threatened, and no platform said anything other than they maintained independent discretion for their decisions. Even Twitter's lawyers testified under oath that the government never coerced action from them.

Even in the actual letter from YouTube, they affirm again that they made their decisions independently: "While the Company continued to develop and enforce its policies independently, Biden Administration officials continued to press the company to remove non-violative user-generated content."

So where does "to press" land on the spectrum between requesting action and coercion? Well, one key variable would be the presence of some type of threat. Not a single platform has argued they were threatened either implicitly or explicitly. Courts haven't found evidence of threats. Many requests were declined and none produced any sort of retaliation.

Here's a threat the government might use to coerce a platform's behavior: a constant stream of subpoenas! Well, wouldn't you know it, that's exactly what produced the memo FTA.[1]

Why hasn't Jim Jordan just released the evidence of Google being coerced into these decisions? He has dozens if not hundreds of hours of filmed testimony from decision-makers at these companies he refuses to release. Presumably because, like in every other case that has actually gone to court, the evidence doesn't exist!

[1] https://www.politico.com/live-updates/2025/03/06/congress/ji...


The key problem with the government "requesting" a company do something is that the government has nigh infinite unrelated decisions that can be used to apply pressure to that company.

It's unreasonable to expect some portion of the executive branch to reliably act counter to the President's stated goals, even if they would otherwise have.

And that opportunity for perversion of good governance (read: making decisions objectively) is exactly why the government shouldn't request companies censor or speak in certain ways, ever.

If there are extenuating circumstances (e.g. a public health crisis), then there need to be EXTREMELY high firewalls built between the part of the government "requesting" and everyone else (and the President should stay out of it).


The government has a well-established right to request companies to do things, and there are good reasons to keep it.

For example, the government has immense resources to detect fraud, CSAM, foreign intelligence attacks, and so on.

It is good, actually, that the government can notify employers that one of their employees is a suspected foreign asset and request they do not work on sensitive technologies.

It is good, actually, that the government can notify a social media platform that there are terrorist cells spreading graphic beheading videos and request they get taken down.

It's also good that in the vast majority of cases, the platforms are literally allowed to reply with "go fuck yourself!"

The high firewall is already present, it's called the First Amendment and the platforms' unquestioned right to say "nope," as they do literally hundreds of times per day.


How does any of that prevent the government from de facto tying unrelated decisions to compliance by companies? E.g. FCC merger approval?


None of it de facto prevents anything, but if a corporation feels they're being bullied in this way they can sue.

In the Biden admin, multiple lawsuits (interestingly none launched by the allegedly coerced parties) revealed no evidence of such mechanics at play.

In the Trump admin, the FCC Commissioner and POTUS have pretty much explicitly tied content moderation decisions to unrelated enforcement decisions.

Definitely there's possibility for an admin to land in the middle (actually coercive, but not stupid enough to do it on Truth Social), and in those scenarios we rely on the companies to defend themselves.

The idea that government should be categorically disallowed from communicating and expressing preferences is functionally absurd.


That sounds great in the context of a game, but in the years since its release, we have also learned that those who style themselves as champions of free speech also dream themselves our master.

They are usually even more brazen in their ambitions than the censors, but somehow get a free pass because, hey, he's just fighting for the oppressed.


I'd say free speech absolutism (read: early-pandemic Zuckerberg, not thumb-on-the-scales Musk) has always aged better than the alternatives.

The trick is there's a fine line between honest free speech absolutism and 'pro the free speech I believe in, and silent about the freedom of the speech I don't.' Usually when ego and power get involved (see: Trump, Musk).

To which, props to folks like Ted Cruz for vocally calling out the dissonance of, and opposing, FCC speech policing.


Anything that people uncritically see as good attracts the evil and the illegitimate, because they cannot build power on their own, so they must co-opt things people see as good.


Not in the original statement, but as it's referenced here, the word 'information' is doing absolutely ludicrous amounts of lifting. Hopefully it bent at the knees, because in my book it broke.

You can't call the phrase "the sky is mint chocolate chip pink with pulsating alien clouds" information.


While this is true, it's also important to realize that during the great disinformation hysteria, perfectly reasonable statements like "This may have originated from a lab", "These vaccines are non-sterilizing", or "There were some anomalies of Benford's Law in this specific precinct and here's the data" were lumped into the exact same bucket as "The CCP built this virus to kill us all", "The vaccine will give you blood clots and myocarditis", or "The DNC rigged the election".

The "disinformation" bucket was overly large.

There was no nuance. No critical analysis of actual statements made. If it smelled even slightly off-script, it was branded and filed.


The mRNA based COVID-19 vaccines literally did cause myocarditis as a side effect in a small subset of patients. We can argue about the prevalence and severity or risk trade-offs versus possible viral myocarditis but the basic statement about possible myocarditis should have never been lumped into the disinformation bucket.

https://www.cdc.gov/vaccines/covid-19/clinical-consideration...


Doesn't detract from my point. "These vaccines are correlated with an N% increased risk of myocarditis" is a different statement from "These vaccines will give you myocarditis".

BOTH of them were targeted by the misinformation squad, as if equivalent.


But it is because of the deluge that that happens. We can only process so much information. If the amount of "content" coming through is orders of magnitude larger, it makes sense to just reject everything that looks even slightly like nonsense, because there will still be more than enough left over.


So does that justify the situation with Jimmy Kimmel? After all there was a deluge of information and a lot of unknowns about the shooter but the word choice he used was very similar to the already debunked theory that it was celebratory gunfire from a supporter.

Of course not.


That sentence from Kimmel was IMO factually incorrect, and he was foolish to make the claim, but how is it offensive towards the dead, and why is it worth a suspension?

But as we know, MAGA are snowflakes and look for anything so they can pull out their Victim Card and yell around...


MAGA are badasses when they're out of power, yet apparently threatened enough by an escalator stopping so as to call for terrorism charges.

The doublethink is real.


You can call it data and have sufficient respect for others that they may process it into information. Too many have too little faith in others. If anything, we need to be deluged in data, and we will probably work it out ourselves eventually.


Facebook does its utmost to subject me to Tartarian, Flat Earth and Creationist content.

Yes, I block it routinely. No, the algo doesn't let up.

I don't need "faith" when I can see that a decent chunk of people disbelieve modern history, and aggressively disbelieve science.

More data doesn't help.


This is a fear of an earlier time.

We are not controlling people by reducing information.

We are controlling people by overwhelming them in it.

And when we think of a solution, our natural inclination to “do the opposite” smacks straight into our instinct against controlling or reducing access to information.

The closest I have come to any form of light at the end of the tunnel is Taiwan’s efforts to create digital consultations for policy, and the idea that facts may not compete on short time horizon, but they surely win on longer time horizons.


The problem is that in our collective hurry to build and support social networks, we never stopped to think about what other functions might be needed with them to promote good, factual society.

People should be able to say whatever the hell they want, wherever the hell they want, whenever the hell they want. (Subject only to the imminent danger test)

But! We should also be funding robust journalism to exist in parallel with that.

Can you imagine how different today would look if the US had leveraged a 5% tax on social media platforms above a certain size, with the proceeds used to fund journalism?

That was a thing we could have done. We didn't. And now we're here.


Beware of those who quote videogames and yet attribute them to "U.N. Declaration of Rights".


They're not wrong; the attribution is part of the quote. In-game, the source of the quote is usually important, and is always read aloud (unlike in Civ).


I would argue that they are, if not wrong, at least misleading.

If you've never played Alpha Centauri (like me) you are guaranteed to believe this to be a real quote by a UN diplomat. It also doesn't help that searching for "U.N. Declaration of Rights" takes me (wrongly) to the (real) Universal Declaration of Human Rights. I only noticed after reading ethbr1's comment [1], and I bet I'm not the only one.

[1] https://news.ycombinator.com/item?id=45355441


Hence my reply.

Also, you missed a great game.


Beware he who would tell you that any effort at trying to clean up the post apocalyptic wasteland that is social media is automatically tyranny, for in his heart he is a pedophile murderer fraudster, and you can call him that without proof, and when the moderators say your unfounded claim shouldn't be on the platform you just say CENSORSHIP.


The thing is that burying information in a firehose of nonsense is just another way of denying access to it. A great way to hide a sharp needle is to dump a bunch of blunt ones on top of it.


Sure, great. Now suppose that a very effective campaign of social destabilisation propaganda exists that poses an existential risk to your society.

What do you do?

It's easy to rely on absolutes and pithy quotes that don't solve any actual problems. What would you, specifically, with all your wisdom, do?


Let's not waste time on idle hypotheticals and fear mongering. No propaganda campaign has ever posed an existential threat to the USA. Let us know when one arrives.


Have you seen the US recently? Just in the last couple of days, the president is standing up and broadcasting clear medical lies about autism, while a large chunk of the media goes along with him.


I have seen the US recently. I'm not going to attempt to defend the President but regardless of whether he is right or wrong about autism this is hardly an existential threat to the Republic. Presidents have been wrong about many things before and that is not a valid justification for censorship. In a few years we'll have another president and he or she will be wrong about a whole different set of issues.


I hope I’m wrong, but I think America is fundamentally done, because it turns out the whole “checks and balances” system turned out to be trivial to steamroll as president, and future presidents will know that now.

By done I don’t mean it won’t continue to be the worlds biggest and most important country, but I don’t expect any other country to trust America more than they have to for a 100 years or so.


A lot of people thought that America was fundamentally done in 1861, and yet here we are. The recent fracturing of certain established institutional norms is a matter of some concern. But whether other countries trust us or not is of little consequence. US foreign policy has always been volatile, subject to the whims of each new administration. Our traditional allies will continue to cooperate regardless of trust (or lack thereof) because mutual interests are still broadly aligned and they have no credible alternative.


> whether other countries trust us or not is of

some consequence. Not all consuming, but significant.

> Our traditional allies will continue to cooperate regardless of

whether they continue to include the US within that circle to the same degree, or indeed at all.

Trump's tariffs have been a boon for China's global trade connections: they continue to buy soybeans, but from new partners, whereas before they sourced mainly from the US.


> turned out to be trivial to steamroll as president, and future presidents will know that now

... when the Presidency, House, and Senate are also controlled by one unified party, and the Supreme Court chooses not to push back aggressively.

That rarely happens.


"You cannot trust basic statements of fact coming from POTUS, HHS, FDA, CDC, DOD" is absolutely an existential risk.


I won't attempt to defend the current administration's incompetent and chaotic approach to public health (or communications in general) but it's hardly an existential crisis. The country literally existed for over a century before HHS was even created.


Among other major problems, the logic in your comment implicitly assumes that the worst a badly-run (incompetent, malevolent, or some combination) central authority can be is equal to the effect of no central authority.

Another important error is the implicit assumption that public health risks are constant, and do not vary with changing time and conditions, so that the public health risk profile today is essentially the same as in the first century of the US’s existence.


They are spreading this nonsense in part to distract from the fact that they refuse to release the Epstein files, something that seems to implicate a rather large number of high-profile, high-importance officials in potentially really bad things.

It's called flooding the zone, and it is a current Republican strategy: misinform, sow defeatism in the political opposition, and break all of the existing systems for handling politics, with the final goal of manipulating the next election. And they publicized this, yet people like you claim it's a non-issue.


It doesn't have to be a national threat. Social media can be used by small organisations or even sufficiently motivated individuals to easily spread lies and slander against individuals or groups, and it's close to impossible to prevent. (I've been fighting some trolls threatening a group of friends on Facebook lately, and I can attest to how much the algorithm favors hate speech over reason.)


That's a non sequitur. Your personal troubles are irrelevant when it comes to public policy, social media, and the fundamental human right of free expression. While I deplore hate speech, its existence doesn't justify censorship.


It is of course subjective. For you hate speech does not justify censorship, but for me it does. Probably because we make different risk assessments: you might expect hate speech to have no consequences in general and censorship to lead to authoritarianism, whereas I expect hate speech to have actual consequences on people's lives that are worse and more likely than authoritarianism. When I think about censorship and authoritarianism, I think about having to hide, but when I think about hate speech I picture war propaganda and genocides.


There are twin goals: total freedom of speech and holding society together (limiting polarization). I would say you need non-anonymous speech, reputation systems, traceable moderation (who upvoted what), etc. You can say whatever you want, but be ready to stand by it.

One could say the problem with freedom of speech was that there weren't enough "consequences" for antisocial behavior. The malicious actors stirred the pot with lies, the gullible and angry encouraged the hyperbole, and the whole US became polarized and divided.

And yes, this system chills speech, as one would be reluctant to voice extreme opinions. You would still have the freedom to say it, but the additional controls exert a pull back toward the average.


Is your point that any message is information?

Without truth there is no information.


That seems to be exactly her point, no?

Imagine an interface that reveals the engagement mechanism by, say, having an additional iframe. In this iframe an LLM clicks through its own set of recommendations picked to minimize negative emotions at the expense of engagement.

After a few days you're clearly going to notice the LLM spending less time than you clicking on and consuming content. At the same time, you'll also notice its choices are part of what seems to you a more pleasurable experience than you're having in your own iframe.

Social media companies deny you the ability to inspect, understand, and remix how their recommendation algos work. They deny you the ability to remix an interface that does what I describe.

In short, your quote surely applies to social media companies, but I don't know if this is what you originally meant.


Raising the noise floor of disinformation to drown out information is a way of denying access to information too.


Facebook speaks through what it chooses to promote or suppress and they are not liable for that speech because of Section 230.


Not quite: prior to the Communications Decency Act of 1996 (which contained Section 230), companies were also not liable for the speech of their users, but lost that protection if they engaged in any moderation. The two important cases at hand are Stratton Oakmont, Inc. v. Prodigy Services Co. and Cubby, Inc. v. CompuServe Inc.

The former moderated content and was thus held liable for posted content. The latter did not moderate content and was determined not to be liable for user generated content they hosted.

Part of the motivation of Section 230 was to encourage sites to engage in more moderation. If Section 230 were to be removed, web platforms would probably choose to go the route of not moderating content in order to avoid liability. Removing Section 230 is a great move if one wants misinformation and hateful speech to run unchecked.


You say "Not quite" but it looks to me like you're agreeing?


We must dissent.


There's a special irony in this being the top comment on a site where everyone has a rightthink score and people routinely and flagrantly engage in "probably bad faith, but there's plausible deniability so you can't pin it on them" communication to crap on whatever the wrongthink on an issue is.

As bad as Facebook and its opaque algorithms that favor rage bait are, the kind of stuff you get by keeping score is worse.


>"You and I, if we say a lie we are held responsible for it, so people can trust us."

I don't know how it works in The Philippines, but in the USA the suggestion that media outlets are held responsible for the lies that they tell is one of the most absurd statements one could possibly make.


How about InfoWars?


I was referring more to established media that people consider credible, like NBC, CBS, The Guardian, The New York Times, The Wall Street Journal, The Atlantic, etc. The fact that the only person in "media" who has been severely punished for their lies is a roundly despised figure (without any credibility among established media or the ruling class) is not a ringing endorsement for the system. While the lies of Jones no doubt caused untold hardship for the families of the victims, they pale in comparison to the much more consequential lies told by major media outlets with far greater influence.

When corporate media figures tell lies that are useful to the establishment, they are promoted, not called to account.

In 2018 Luke Harding at The Guardian lied and published a story that "Manafort held secret talks with Assange in Ecuadorian embassy" (headline later amended with "sources say" after the fake story was debunked) in order to bolster the Russiagate narrative. It was proven without a shadow of a doubt that Manafort never went to the embassy or had any contact at all with Assange (who was under blanket surveillance), at any time. However, to this day this provably fake story remains on The Guardian website, without any sort of editor's note that it is false or that it was all a pack of lies!(1) No retraction was ever issued. Luke Harding remains an esteemed foreign correspondent for The Guardian.

In 2002, Jeffrey Goldberg told numerous lies in a completely false article in The New Yorker, "The Great Terror", that sought to establish a connection between the 9/11 attacks and Saddam Hussein.(2) This article was cited repeatedly during the run-up to the war as justification for the subsequent invasion and greatly helped contribute to an environment where a majority of Americans thought that Iraq was linked to Bin Laden and the 9/11 attackers. More than a million people were killed, in no small part because of his lies. And Goldberg? He was promoted to editor-in-chief of The Atlantic, perhaps the most prestigious and influential journal in the country. He remains in this position today.

There are hundreds, if not thousands, of similar examples. The idea suggested in the OP that corporate/established media is somehow more credible or held to a higher standard than independent media is simply not true. Unfortunately there are a ton of lies, falsehoods, and propaganda out there, and it is up to all of us to be appropriately skeptical no matter where we get our information, and to do our due diligence.

1. https://www.theguardian.com/us-news/2018/nov/27/manafort-hel...

2. https://www.newyorker.com/magazine/2002/03/25/the-great-terr...


A sympathetic jury can be an enemy of justice.

I'm not an Alex Jones fan, but I don't understand how a conspiracy theory about the mass shooting could be construed as defamation against the parents of the victims. And the $1.3B judgement does seem excessive to me.


You should read up on some details. The defamation claim is because Alex Jones accused the parents of being actors who are part of staging the false flag. The huge judgement is partly because Alex Jones failed to comply[1][2] with basic court procedure like discovery in a timely way so a default judgement was entered.

Despite his resources, Alex Jones completely failed to get competent legal representation and screwed himself. He then portrayed himself as the victim of an unjust legal system.

[1] https://www.npr.org/2021/11/15/1055864452/alex-jones-found-l...

> Connecticut Superior Court Judge Barbara Bellis cited the defendants' "willful noncompliance" with the discovery process as the reasoning behind the ruling. Bellis noted that defendants failed to turn over financial and analytics data that were requested multiple times by the Sandy Hook family plaintiffs.

[2] https://lawandcrime.com/high-profile/judge-rips-alex-jones-c...

> Bellis reportedly said Jones' attorneys "failure to produce critical material information that the plaintiffs needed to prove their claims" was a "callous disregard of their obligation," the Hartford Courant reported.


> The huge judgement is partly because Alex Jones failed to comply with basic court procedure like discovery in a timely way so a default judgement was entered.

Yeah. Refusing to cooperate with the court has to always be at least as bad as losing your case would have been.


The specific conspiracy theory implied fraud and cover up on behalf of the parents. Lmao.


Ever watched Fox News?


This is why China bans western social media.


Say what you will about the CCP, but it's naive to let a foreign nation have this much impact on your subjects. The amount of poison and political manipulation imported from these platforms is astronomical.


Well when the local media bends a knee and outright bribes the President (Paramount, Disney, Twitter, Facebook), why should we trust the domestic media?


Like when the Biden administration pressured social media to take down information/accounts that went against their narrative.


Is there a meaningful difference between pressuring and taking or threatening regulatory action? I think so.


Wait, are you saying that the person you are replying to is a hypocrite, or are you saying that the Biden admin set the standard for responsible government handling of media relations, or are you saying that if one administration does something bad it is ok for any other administration to do something bad, like a tit-for-tat tally system of bad things you get for free after the inauguration?


Biden admin's bad behavior certainly allows Trump to act the same way.

If it was bad for Biden admin, it's much worse for Trump admin - he campaigned against it.


You don’t see a difference between that and outright bribery?


Note that your statement is true and relevant to the conversation, yet downvoted.

It’s shameful that this happens. Is it bot voting? Partisan cheering over productive conversation? It’s troubling.


Instead of implementing government information control, why not invest those resources in educating and empowering one's citizenry to recognize disinformation?


To me this is sort of like saying why do we need seat belts when we could just have people go to the gym so they're strong enough to push back an oncoming car. Well, you can't get that strong, and also you can't really educate people well enough to reliably deal with the full force of the information firehose. Even people who are good at doing it do so largely by relying on sources they've identified as trustworthy and thus offloading some of the work to those. I don't think there's anyone alive who could actually distinguish fact from fiction if they had to, say, view every Facebook/Twitter/Reddit/everything post separately in isolation (i.e., without relying on pre-screening of some sort).

And once you know you need pre-screening, the question becomes why not just provide it instead of making people hunt it down?


With modern safety design and human factors, we do both and more. A car can have an automated braking system, a manual brake, and an information system that tells the driver about the surroundings so the driver is better informed. We don't remove any of those in the false belief that one of them should be enough.

Applying that to information and propaganda, users should have some automated defenses (like ad blockers), but also manual control of what should or should not be blocked, and also education and tools to be better informed when taking manual control.

In neither system should we remove manual control, education, or automated help. They all act in unison to make people safer.


Perhaps a better analogy from recent HN discussion would be auto-lock-on-drive doors.

Some people die (often children) by opening doors while a vehicle is moving or before it is safe to do so.

However, this also impedes the ability of rescuers to extract people from crashed vehicles (especially with fail-dangerous electric car locks).

Is it safer to protect citizens from themselves or empower them to protect themselves?

In my perfect US, both would be done:

"Dealing with disinformation" as a semester-long required high-school level course and federally mandating the types of tools that citizens could use to better (read: requiring all the transparency data X and Meta stopped reporting, from any democracy-critical large platform).

While also mandating measures to limit disinformation where technically possible and ethically appropriate (read: not making hamfisted government regulations, when mandating data + allowing journalists / citizens to act is a better solution).


Children often lack the experience and education to prevent harm, as that is one of the distinguishing aspects between adults and children. We also know from biology that children are more prone to poor impulse control. Children in general are dependent on an adult for safety, and children's agency is occasionally removed in favor of security. Auto-lock-on-drive doors are a prime example of this. The adult driver is also liable for their passengers, especially children, so they have multiple incentives to ensure good security.

Treating children as children is fine and expected. Treating adults as children is not. Protecting children from disinformation, under the assumption that they lack the experience, education, impulse control, and expectation to handle information security themselves, is fine. The government can also be an acceptable party to define this for children, even if some parents will object to not carrying that role themselves. An alternative could also be to make parents liable if they fail in their role to protect their children from information harm.

Going back to auto-lock-on-drive doors, giving the government remote control of the car doors with no override, including the driver's door, is unlikely to be acceptable to the adult driver who owns the car.


Because you want to use it yourself. You can't vaccinate if you rely on the disease to maintain power. You can't tell people not to be afraid of people different than themselves if your whole party platform is being afraid of people different than yourself.


That’s hundreds of millions of people in the US, of varying ages and mostly out of school already. Seems like a good thing to try but I’d imagine it doesn’t make a tangible impact for decades.


Instead of investing resources in education, why not let people discover by themselves the virtues of education?

Sarcasm aside, we tend to focus too much on the means and too little on the outcomes.


Because no one person can fight against a trillion-dollar industry that has decided misinformation makes the biggest profit.

How am I supposed to learn what’s going on outside my home town without trusting the media?


Because it doesn't seem to work?


Because in that case you wouldn't be able to use disinformation yourself.


'An ounce of prevention is worth a pound of cure.'

It's so much easier to stop one source than it is to (checks notes) educate the entire populace?!? Gosh, did you really say that with a straight face? As if education isn't also under attack?


I never defended the authoritarianism of the CCP. I only said it makes sense to block foreign platforms, regardless of whether the state is a tyranny or not. Framing it as if it's some kind of tactic to help keep the populace indoctrinated is a very simplistic take.

Take Reddit, for example. It's filled with blatant propaganda, from corporations and politicians. It's a disgustingly astroturfed platform run by people of questionable moral character. What's more, it also has porn. All you need is an account to access 18+ "communities". Not exactly "enlightening material" that frees the mind from tyranny.


Because it isn't that simple.

If we could just educate people and make sure they don't fall for scams, we'd do it. Same for disinformation.

But you just can't give that sort of broad education. If you aren't educated in medicine and can't personally verify qualifications of someone, you are going to be at a disadvantage when you are trying to tell if that health information is sound. And if you are a doctor, it doesn't mean you know about infrastructure or have contacts to know what is actually happening in the next state or country over.

It's the same with products, actually. I can't tell if an extension cord is up to code. The best that I can realistically do is hope the one I buy isn't a fake and meets all of the necessary safety requirements. A lot of things are like this.

Education isn't enough. You can't escape misinformation and none of us have the mental energy to always know these things. We really do have to work the other way as well.


Why is this being downvoted?


Sorry, 'recognizing disinformation'? You must have meant 'indoctrination'.

(They don't necessarily exclude each other. You need both positive preemptive and negative repressive actions to keep things working. Liberty is cheap talk when you've got a war on your hands.)


China reflexively bans anything that could potentially challenge Chairman Xi's unchecked authority and control over the information flow.


>unchecked social media

Passive voice. Who exactly is supposed to do the "checking" and why should we trust them?


Citizens. Through lawsuits. Currently we can't because of Section 230.


Nonsense. If social media users engage in fraud, slander, or libel then you can still hold them accountable through a civil lawsuit. Section 230 doesn't prevent this.


Will/can Facebook tell you the real identity of the user? If not, then Facebook has to take responsibility for the fraud/slander/libel. Currently, Section 230 means they can't be held responsible.


Yes. A plaintiff can file a civil lawsuit in a US court against a "John Doe" defendant and ask the court to order Facebook (or any online service) to turn over any data they have on the user's real identity. If the court agrees and issues the order then Facebook will comply: this is quite routine and happens all the time. The plaintiff can then amend the lawsuit to name specific defendants.

Section 230 is largely irrelevant to this process so I don't know why you'd bring it up. Have you ever even read the Communications Decency Act of 1996?


The "editorializing" may possibly be applied i think (not a lawyer) when the platform's manipulation of what a user sees is based on content. And the Youtube's banning of specific Covid and election content may be such an "editorializing", and thus Youtube may not have Section 230 protection at least in those cases.


Have you even read Section 230? Editorializing is irrelevant.


The problem is not the content, the problem is people believing things blindly.

The idea that we need to protect people from “bad information” is a dark path to go down.


I don't see it so much as protecting people from bad information as protecting people from bad actors, among whom entities like Facebook are prominent. If people want to disseminate quackery they can do it like in the old days by standing on a street corner and ranting. The point is that the mechanisms of content delivery amplify the bad stuff.


It’s a terrible idea and creates more problems than it solves.

You eliminate the good and the bad ideas. You eliminate the good ideas that are simply "bad" because they upset people with power. You eliminate the good ideas that are "bad" simply because they are deemed too far outside the Overton window.

And worst of all, it requires some benevolent force to make the call between good and bad, which attracts all sorts of psychopaths hungry for power.


Have you been living under a rock these past few years? The "bad" ideas outnumber the "good" ones ten to one. The current secretary of health lets internet conspiracies dictate his policies, such that vaccines are getting banned, important research is getting defunded, and now they're even going after paracetamol (!!). People will die.

Cue the quote that says it takes 30 minutes to debunk 30 seconds of lying.


You chose to ignore all the points I made.

Why?


Censorship works both ways. When I tried speaking out against the violence and genocide perpetrated by Russia in Ukraine, I was shut down on LinkedIn.

Even here on HN, I was almost banned when I spoke about the abduction of children by Russia https://news.ycombinator.com/item?id=33005062 - the crime for which, half a year later, the ICC issued an arrest warrant against Putin.


You know how this used to work in the old days? Instead of publishing allegations yourself, you would take your story to a newspaper reporter. The reporter would then investigate and, if there was solid evidence, the story would be published in the newspaper. At that point the newspaper company was standing behind the story, and citizens knew the standing of the newspaper in their community, and how much credence to give the story, based on that. Social media destroyed this process: now anyone can spread allegations at lightning speed on a massive scale without any evidence to back them up. This has to stop. We should return to the old way; it wasn't perfect, but it worked for hundreds of years. Repealing Section 230 will accomplish this.


I remember a story that was investigated and then published...it was spread far and wide. The current president of the US stole the election and our biggest adversary has videos of him in compromising positions. Then debunked. (Steele dossier) https://www.thenation.com/article/politics/trump-russiagate-...

I remember a story that was investigated and then published...for some reason it was blocked everywhere and we were not allowed to discuss the story or even link to the news article. It "has the hallmarks of a Russian intelligence operation."(Hunter Biden Laptop) Only to come out that it was true: https://www.msn.com/en-us/news/politics/fbi-spent-a-year-pre...

I would rather not outsource my thinking or my ability to get information to approved sources. I have had enough experience with Gell-Mann amnesia to realize they often have little to no understanding of the situation either. I may not be an expert in all domains, but while I am still free, at least I can do my best to learn.


W.r.t. the Steele Dossier: it was purported from the beginning to be a "raw intelligence product", which is understood by everyone involved in that process to mean it is not 100% true -- the intelligence is weighted at different levels of confidence. Steele has said he believed his sources were credible, but he did not claim the dossier was 100% accurate. He weighed it at 50/50, and expected that investigators would use it as leads to verify information, not as proof in itself.

And on that point, the FBI investigations didn't even start on the basis of the Steele Dossier; they started on the basis of an Australian diplomat, Alexander Downer, who during a meeting with top Trump campaign foreign policy advisor George Papadopoulos became alarmed when Papadopoulos mentioned that the Russian government had "dirt" on Hillary Clinton and might release it to assist the Trump campaign. Downer alerted the Australian government, who informed the FBI. The Steele Dossier was immaterial to the investigation's genesis.

So any claim that the dossier as a whole has been "debunked" is not remarkable. Of course parts of it have been debunked, because it wasn't even purported to be 100% true by the author himself. It's not surprising things in it were proven false.

Moreover, that also doesn't mean everything in it was untrue. The central claims of the dossier -- that Donald Trump and his campaign had extensive ties to Russia, and that Russia sought to influence the 2016 U.S. election in Trump’s favor -- were proven to be true by the Mueller Report Vols. I and II, and the Senate Select Intelligence Committee Report on Russian Active Measures Campaigns and Interference in the 2016 Election, Vols. I - VI.

> The current president of the US stole the election

Not a claim made in the dossier.

> and our biggest adversary has videos of him in compromising positions.

This hasn't been debunked. The claim in the dossier was that Russia has videos of Trump with prostitutes peeing on a bed Obama slept in, not peeing on Trump himself. The idea that it was golden showers is a figment of the internet. Whether or not the scenario where people peed on a bed Obama slept in happened as laid out in the dossier is still unverified, but not "debunked".


> Russiagate

It was never “debunked”, that is far too strong a word. Is it true? Who knows! Should we operate as if it was true without it being proven? Definitely not.

> Hunter’s laptop

In what way was that story buried or hidden? It was a major news story on every news and social network for over half a year. There was only consternation about how the laptop was acquired and who or what helped with that endeavor. The “quieting” of the story is BS and only came about a long time after the fact. Biden’s people sought (unsuccessfully) to have images removed from platforms but there was never an effort to make it seem like the allegations that stemmed from the laptop were misinformation.


You are spreading misinformation. According to Mark Zuckerberg, Facebook actively buried and hid posts related to the Hunter Biden laptop story in 2020. We can argue about whether Facebook did the right thing based on the information they had at the time but let's be clear about the facts: the CEO literally stated that they did it, so it's not BS.

https://www.bbc.com/news/world-us-canada-62688532

Twitter's Vijaya Gadde also admitted that they blocked users from sharing the story. That's not BS either.

https://www.bbc.com/news/technology-54568785


I stand by what I said and I think you are interpreting my words uncharitably in order to “win” some argument I’m not a part of.

I am not going into a semantic argument with you over whether my exact wordings match whatever you think I said.

I will however say that both theses put forth by the comment I replied to are false. If you read either article you linked, you'll see they actually support my point: the Hunter Biden news was extremely widely shared on Facebook and only throttled due to suspicions on Facebook's part that it may have been inorganic. A particular article (but not the news) was blocked on Twitter based on an existing policy; discussion was still allowed, and it was definitely widely discussed and shared.


[flagged]


Forest for the trees.

Don't take my comment as a declaration for Trump and all he stands for.

My parent had posted "You know how this used to work in the old days? Instead of publishing allegations yourself, you would take your story to a newspaper reporter. The reporter will then do the investigations then, if there is solid evidence, the story will be published in the newspaper. At that point the newspaper company is standing behind the story, and citizens know the standing of the newspaper in their community, and how much credence to give to the story, based on that."

Rather than call it an argument from authority, which it is very close to, I decided to highlight two cases where this authority we are supposed to defer to was wrong.

Perhaps a better and more direct argument would be to point out that during the COVID pandemic, YouTube, Facebook, and Twitter were all banning and removing posts from people who had heterodox opinions, with those leading the charge crying "Trust the Science".

This runs contrary to what science and the scientific process are; Carl Sagan said it better than I: "One of the great commandments of science is, 'Mistrust arguments from authority.' ... Too many such arguments have proved too painfully wrong. Authorities must prove their contentions like everybody else."

Now that I have quoted a famous scientist in a post to help prove my point about how arguments from authority are invalid, I shall wait for the collapse of the universe upon itself.


It never worked. Newspapers in the old days frequently printed lies and fake news. They usually got away with it because no one held them accountable.


William Randolph Hearst and the Spanish-American war come to mind.


>At that point the newspaper company is standing behind the story

the newspaper company is a bottleneck that censors can easily tighten, as in, say, the USSR. Or even the FCC today with the media companies, as in the case of Kimmel.

Social media is our best tool so far against censorship. Even with all the censorship that we do have in social media, the information still finds a way due to the sheer scale of the Internet. That wasn't the case in the old days, when, for example, each typewriter could be identified by unique micro-details of the shapes of its characters.

>Social media destroyed this process, now anyone can spread allegations at lightning speed on a massive scale without any evidence to back it up.

Why believe anything not accompanied by evidence? The problem here is with the news consumer. We teach children not to stick their fingers into the electrical wall socket. If a child sticks their fingers in anyway, are you going to hold the electric utility company responsible?

>This has to stop. We should return to the old way, it wasn't perfect, but it worked for 100s of years.

The same can be said about modern high-density human populations, transport connections and the spread of infectious disease. What you suggest is to decrease the population and confine the rest, preventing any travel like in the "old days" (interestingly, it took the Black Death years to spread instead of the days it would take today, yet it still spread across the entire known world). We've just seen how that works in our times (and if you say it worked back then, why aren't we still doing it today?). You can't put the genie back in the bottle and stop progress.

>Repealing Section 230 will accomplish this.

Yes, and it's a good thing people back then didn't decide to charge the actual printing houses with the lies present in the newspapers they printed.


Social media is also a bottleneck. In places like India, Facebook will comply with censorship or get blocked.


What happens when the press refuses to publish anything which doesn't align with their financial or political interest?


There is no way to go back to this. It’s about as feasible as getting rid of vehicles.


I am not saying we should go back to physical newspapers printed on paper. News can be published online... but whoever is publishing it has to stand behind it, and be prepared to face lawsuits from citizens harmed by false stories. This is feasible, and it is the only solution to the current mess.


It's horrifying that anyone would believe that censorship and control over news would be a solution to anything. The naivety of your comment is in itself an indictment of our collective failure to properly educate the polity in civics.


A determined instigator could easily continue pushing modern yellow journalism with little problem under the system you propose.

They simply need to choose which negative stories they print and which opinions they run. How do you distinguish misrepresentation from a differing point of view? How do you call out mere emphasis in which true stories get run? Truths are still truths, right?

It's not infrequent today to see political opinions laundered through language to provide plausible deniability to those using it.

Hell, it's not infrequent to see racism, bigotry and hate wrapped up to avoid the key phrases of yesteryear, instead smuggling their foulness through carefully considered phrases, used specifically to shield those repeating them from being called out.

'No no no. Of course it doesn't mean _that_, you're imagining things and making false accusations.'


> We should return to the old way, it wasn't perfect, but it worked for 100s of years

At this stage you are clearly just trolling. Are you even aware of the last few hundred years? From Luther to Marx? You are not acting in good faith. I want nothing to do with your ahistorical worldview.


I can think of another hot-potato country that will get posts nerfed from HN and many others


That's the evil genius behind the general movement in the world to discredit democratic institutions and deflate the government.

Who would hold Meta accountable for the lies it helps spread and capitalizes upon, if not the government?

So by crippling democratic institutions and dwarfing the government to the point of virtual non-existence, all in the name of preserving freedom of speech and liberalism -- and in the process subverting both concepts -- elected leaders have managed to neutralize the only check in the way of big corps to ramp up this misinformation machine that the social networks have become.


I think it would be even wiser to start by holding to account the politicians, corporations, and government institutions regarding their unchecked lies, corruption and fraud.

But no, yet again the blame is all piled on to the little people. Yes, it's us plebs lying on the internet who are the cause of all these problems and therefore we must be censored. For the greater good.

I have an alternative idea: let's first imprison or execute (with due process) politicians, CEOs, generals, heads of intelligence and other agencies, and regulators found to have engaged in corrupt behavior, lied to the public, committed fraud or insider trading, fabricated evidence to support invading other countries, engaged in undeclared wars, ordered extrajudicial executions, colluded with foreign governments to hack elections, evaded taxes, etc. Then, after we try that out for a while, if it has not improved things, we could try ratcheting up the censorship of plebs. One might argue that taking such measures against those people would violate their rights, but that is a sacrifice I'm willing to make. Since We Are All In This Together™, they would be willing to make that sacrifice too. And really, if they have nothing to hide then they have nothing to fear.

When you get people like Zuckerberg lying to congress, it's pretty difficult to swallow the propaganda claiming that it's Joe Smith the unemployed plumber from West Virginia sharing "dangerous memes" with his 12 friends on Facebook that is one of the most pressing concerns.


I don't think "breadwinner" is blaming the little people.


No, the ruling class is. breadwinner I guess has bought into the propaganda, but hasn't made the connection that it's basically putting all the blame on the little people and proposes to put all the burden of "fixing" things onto them, with measures that will absolutely not actually fix anything except handing more power to the ruling class.


breadwinner is clearly putting the blame on Big Tech for putting perverse incentive structures in place.


Exactly what are you trying to say about unbanning YouTubers here?


That it could be dangerous to readmit people who broadcast disinformation? The connection seemed pretty clear to me.


I certainly guessed that was what you wanted to say. Funny how polarization makes everything predictable.

But what I just realized is that you don't explicitly say it, and certainly make no real argument for it. Ressa laments algorithmic promotion of inflammatory material, but didn't say "keep out anti-government subversives who spread dangerous misinformation" - which is good, because

1. We can all see how well the deplatforming worked - Trump is president again, and Kennedy is health secretary.

2. In the eyes of her government, she was very much such a person herself, so it would have been a pretty bizarre thing for her to say.

Ironically, your post is very much an online "go my team!" call, and a good one too (top of the thread!). We all understand what you want and most of us, it seems, agree. But you're not actually arguing for the deplatforming you want, just holding up Ressa as a symbol for it.


> We can all see how well the deplatforming worked - Trump is president again

Not a compelling argument...

Jan 2021 - Twitter bans Trump (for clear policy violations)

Oct 2022 - Musk buys Twitter

Nov 2022 - Twitter reinstates Trump's account

Nov 2024 - Trump re-elected, gives Musk cabinet position


So deplatforming works, unless people become so unhinged at the efforts to shape them that they do crazy stuff like buy major media platforms? Guess what, they do!

But at least the Covid dissenter deplatforming worked, right? Or was the problem Musk there again?

One of my mantras is that powerful people believe all the crazy things regular people believe in, they just act differently on them. I think both Musk and Kennedy are great examples that you'd appreciate, as are Xi and Putin with their open mic life extension fantasies.

It wasn't long ago that Musk, and even Trump himself, were aligned with your competent technocrats wielding the "suppression of irresponsible speech" powers.


I'm saying that Trump's re-election is not a compelling counter-example to the general argument for banning disinformation, because he was "re-"platformed for over a year by the time of the election.


You underestimate how much seeing a sitting president deplatformed affected the voting public. It wasn’t just Musk: all this talk of “deplatforming” people on the right was a clear erosion of free speech that pushed many moderates like myself rightward.

It wasn’t just banning Trump either, tbh one of the biggest ones was the banning of the Babylon Bee for a pretty tame joke. There’s a long list of other right-leaning accounts which were banned during that time as well.


I mean, who knows how well Trump would have done had he not been re-admitted to Twitter. It's a counter-factual. For what it's worth, I'm not advocating de-platforming right-wing voices. I just think there's an argument to be made that social media platforms have a responsibility to mitigate misinformation and incitements to violence. It should be done in a transparent and impartial manner. There are high-profile right-wing accounts that spread a lot of misinformation trying to whip up a frenzy. In the UK, Musk's un-banning brought accounts like Katie Hopkins, Andrew Tate, and Tommy Robinson back online, a consequence of which was a series of violent riots last summer fuelled by false claims and Islamophobia. I hear people arguing that as long as anyone can share their ideas, then the truth will bubble to the top. Well, that's not how it's playing out.


Having private companies label things as misinformation or incitements to violence themselves is a slippery slope which has never worked well in practice. As soon as a company has a person whose job it is to decide whether something is misinformation or not, they will immediately apply their own personal biases.

The approach of allowing everything that is _legal_ to say is much better. If it is allowed by a court of law then companies should not be trying to apply their own additional filters. It can be downranked in the algorithm but at least allowing legal speech is important.

Even just looking at your statement, lumping Andrew Tate in with Tommy Robinson is a completely subjective thing, they are two wildly different people. Everything Tommy Robinson has said is true, he regularly states that he doesn’t care about race, he rejects white supremacists, and has a movement filled with peaceful normal Brits. Nothing he says or does is violent or illegal, his claims about Pakistani rape gangs are supported by evidence and first hand testimony. And more generally: not wanting to become a hated minority in your own country is not an extremist position. It doesn’t mean you hate others for their skin color or whatever type of “phobic” label you care to apply. People vote repeatedly for a government to stop the boats and every government that gets elected decides not to try for some mysterious reason, people are justifiably angry that their elected officials are doing the opposite of what they voted for.

Andrew Tate is yes of course a controversial dumb guy who does say things which are pretty out there, but the principle of allowing him to say everything which is legal in a court of law is important. Most normal people recognize that he’s outside the Overton window on many topics and it’s generally easy to counter his speech with better speech. But lumping crazies like Tate in with legitimate people like Robinson is a common tactic to delegitimize the people you disagree with.


> a slippery slope which has never worked well in practice

Yet if we do it your way we get violent nationwide riots fuelled by misinformation on social media.

> Everything Tommy Robinson has said is true ... Nothing he says or does is violent or illegal

Tommy Robinson who was tried and jailed for repeatedly spreading libellous allegations about a Syrian refugee? Are you being serious?


Yes the guy who was sent to jail for saying words that the government didn’t like. He was further persecuted for making a film about it as well. Put in solitary confinement for nothing other than _saying words_.

He is completely correct about Pakistani rape gangs, the growth of Sharia courts and laws in the UK, and growing violence against the native British.

Side note: the UK government fines and jails more people for speech than most authoritarian dictatorships, including Russia.


Robinson was explicitly jailed for making libellous accusations. He invented claims that the refugee had bullied other kids, supplemented by unrelated photos that he stole. He admitted it was all fake, but then repeated the claims in a film. This is all clearly documented.

A 15 year old refugee boy had been assaulted, had water forced into his mouth, had had his arm broken, his sister had been assaulted. He's now terrified of going back to the school because of the hate that Robinson has filled other kids' heads with. Robinson's behaviour was utterly shameful, and it's shameful that you defend him in this instance.

> the UK government fines and jails more people for speech than most authoritarian dictatorships

This is another false claim that Robinson peddles about. I've addressed it previously here: https://news.ycombinator.com/item?id=41488099


“Making libelous accusations” = “saying words you don’t agree with”

“A photo he stole” - didn’t he just repost something he found on TikTok? How is that stealing?

Again all these things you say he did amount to him saying things you don’t like. He didn’t commit any violence or hurt anyone, just said words. You are trying to justify locking people up for saying things and that is what real authoritarian government looks like.

>Approximately 12,000 people are arrested annually in the UK for offensive online messages

>In 2023, specifically 12,183 people were arrested for sending or posting offensive messages on social media [3]

>Police are making around 30 arrests per day for offensive online messages [1]

>The trend shows significant increases: arrests have risen by 121% since 2017 [1] and by almost 58% since before the pandemic

https://factually.co/fact-checks/justice/uk-social-media-arr...


A few posts above you said:

> Everything Tommy Robinson has said is true ... Nothing he says or does is violent or illegal

Given that he committed libel - which involves making false statements and is illegal - will you at least admit that you were wrong?

Perhaps you take the niche position that libel should be legal. But at least have the decency to concede that it is currently illegal.


You realise I didn’t make the original post, right?


No, it looked as if you did. Whatever.



