People Want Platforms, Not Governments, to Be Responsible for Moderating Content
Posted 3 months ago · Active 3 months ago
Source: reutersinstitute.politics.ox.ac.uk · Other · story
Sentiment: heated / mixed
Debate: 85/100
Key topics
Content Moderation
Free Speech
Social Media Regulation
A survey found that most people want social media platforms, not governments, to be responsible for moderating content, sparking a debate about the role of platforms and governments in regulating online speech.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 1h after posting
Peak period: 131 comments (0-12h)
Avg / period: 24 comments
Comment distribution: 144 data points (based on 144 loaded comments)
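The distribution above groups comments into fixed time windows after posting (e.g. the 0-12h peak). A minimal sketch of how such bucketing might be computed — the function and field layout are hypothetical, not the site's actual code:

```python
from collections import Counter

def bucket_comments(post_time, comment_times, hours_per_bucket=12):
    """Count comments in fixed-width windows after the post time.

    Times are UNIX timestamps in seconds; bucket 0 covers the first
    `hours_per_bucket` hours, bucket 1 the next window, and so on.
    """
    width = hours_per_bucket * 3600
    counts = Counter((t - post_time) // width for t in comment_times)
    return dict(sorted(counts.items()))

# Toy example: post at t=0, three comments in the first 12h, one later.
print(bucket_comments(0, [3600, 7200, 40000, 50000]))
```

With real data, the peak period is simply the bucket with the largest count, and the average per period is the total divided by the number of non-empty buckets.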
Key moments
1. Story posted — Oct 1, 2025 at 7:55 AM EDT (3 months ago)
2. First comment — Oct 1, 2025 at 9:10 AM EDT (1h after posting)
3. Peak activity — 131 comments in 0-12h (hottest window of the conversation)
4. Latest activity — Oct 7, 2025 at 3:36 PM EDT (3 months ago)
ID: 45436664 · Type: story · Last synced: 11/20/2025, 4:56:36 PM
If a platform lies or spreads malicious content, it seems people want the platform to bear liability and consequences for the malfeasance. That is what most people mean by "responsible".
Government sets the rules, and if someone fails to comply, there are consequences for those responsible. Government isn't responsible; it is holding them responsible.
Does the government then set up a ministry of truth? Who gets to decide that?
If I state here plain and as a fact that golieca eats little children for breakfast and slaughters kittens for fun, could @dang not look at both a statement from you and one from me and see if I have sufficient proof?
Nah, he would just (shadow)ban you.
But in general we had that debate long and broad on what truth means with Covid. Who decides what the scientific consensus is for instance. (I don't remember a crystal clear outcome, though). But in case of doubt, we still have courts to decide.
Censorship!
There’s a lot of grey areas - statement of fact vs opinion, open scientific consensus, statements about public figures vs. private individuals, … But the post I’m responding to basically says “there is no truth, let’s give up.” and that’s just as false.
Currently, in the US, internet companies get a special exemption from the laws that apply to other media companies via Section 230 of the Communications Decency Act. If traditional media companies publish libelous material, they get sued. Facebook and Google get a "Case Dismissed" pass. Most people look at the internet and conclude that hasn't worked out very well.
Cite?
Whereas the creation of laws and the interpretation of laws are powers that the executive branch does not have, and are held separately by the legislative and judicial branches.
In a, well, y’know “functioning” democracy. Apparently.
People complaining about building a "ministry of truth" in countries with anything resembling a functioning legal system are just as clueless as people who cry about "government death panels" while private insurance already denies people lifesaving medicine right freaking now
I personally prefer an emphasis on the first solution because it's better to combat the widespread lack of civility in social media, which I believe to harm society substantially, but I also understand the people who prefer the second model.
The platform isn't lying, any more than the mail system is lying if I write "1 equals 2" in a letter and send it through the mail to someone else.
You didn't send it out into the void.
If however some of your junk mail included mass mailings of brochures to join the KKK or some neo-nazi group, I could see why people would want the postal service to crack down on that. That is a fair analogy.
But if the platform didn't verify all the content users placed on it, does it count as "spreading" it?
I mean, there's nothing stopping anyone from publishing a book which spreads lies and malicious content - book banning is looked down upon these days. Why are the publishers not asked to be held to this same level? What makes a web platform different?
The men running these companies live like wannabe kings but have absolutely no backbone when it comes to having a stance about moderating for basic decency, and they are profiting greatly off of festering trash heaps. Additionally, they're complete cowards in the face of their nihilistic shareholders.
The "but" is really throwing me for a loop, because to me it feels like one follows from the other.
Part of that is they don't want responsibility. The phone companies in the US are classified as "common carriers," which in part means they are not held responsible for misuse (drug deals, terrorist plots, whatever, discussed on their system). The flip side is that they are not allowed to discriminate - their job is to complete calls.
Online "platforms" want no responsibility for the content people post. They want to be common carriers, but that would also prevent them from discriminating (even algorithmically) what people see. Since they aren't properly classified/regulated yet they're playing a game trying to moderate content while also claiming no responsibility for content. It does not make sense to let them have it both ways.
The survey could say, “given that the existence of corporate monopolies demonstrates weak and non functional governments, should governments a) cede more power to the monopolies, or b) pretend to claw power back from the monopolies?”
Who defines what "problematic content" is?
YC is free to censor on their own platform, the only issue is when the government is involved in censoring speech.
Anyway my point was "problematic content" is often used as a buzzword by censorship happy people, and ends up being synonymous with "something I disagree with." We have just literally seen the real life consequences of pushing for censorship — that it will eventually be used against speech that one agrees with — and nobody quite seems to care.
There are platforms that have much less strict standards with regards to that - yet you consider this the more agreeable platform. Maybe the reason is that it’s actually nicer to have a conversation in a place where you don’t need to deal with an asshole that starts yelling and insulting everyone at the table.
Think about this in real life: would you want to frequent a place where the loudest asshole gets to insult everyone present or would you rather go to a bar where at some point the Barkeeper steps in and sorts things out?
The modern web example of your bar scenario is more like this: the bartender doesn't want to hear [opposing political/societal issue opinion] at the bar and starts kicking out everyone he disagrees with. The kicked out people go start their own bar. Now there's two neighboring bars, MAGABar and LibBar; customers are automatically filtered into attending either bar by an algorithm. If you say anything that the bartender disagrees with, you're permanently banned. The fun part is that you can be permanently banned from BOTH bars if your viewpoints don't fall in line 100% with what the bartender wants to hear.
Oh and you can't go to TechBar anymore either, the bartender heard you said something critical of furries at another bar, so now you're banned and not allowed to talk about computers.
> If there is, it's certainly not a profitable product.
I think this is the main issue: that we walled up our discussion plazas to make them 'profitable products'.
I know I am a bit of an idealist here, but I miss the old-timey Usenet, basically an agora where you could filter for yourself (with the appropriately named killfile), and which was not controlled by any one institution. I had some hope for federated systems - but these are often built with censorship mechanisms written right into them, and again give operators too much influence over what their users may or may not see.
I remember the time when it was en vogue for subreddits to ban people for participating in subreddits they personally disagreed with (automated, regardless of how that participation took place).
You cannot have a free exchange of ideas with a centralised thought police. You can only have truly free communication if you yourself decide what you read, and what you block out.
For bigger platforms which operate as a public forum I think the case is stronger for weak moderation, but even in those situations a bad faith actor (say perhaps a state or corporate actor with a lot of money to blow on bots) can completely undermine the purpose of those forums. I really can't imagine how a transparent moderation policy in such a situation isn't at least practically useful. In the end you cannot have a free exchange of ideas if some parties are intentionally manipulating, trolling, or flooding the zone of exchange.
Congress isn't just a free-for-all of people yelling at each other. There are rules, not to moderate free speech, but simply to make cooperation among hundreds of people possible.
I'm glad you're both a reasonable person, and available to identify all others, so that the set of reasonable people want the same thing.
Assuming that we are talking about platforms of user-generated content, should the users be punished for what they post? The kind of punishment a government can impose is different from what a platform can do, and users want to feel free to express themselves. These are factors users take into account when making decisions.
On the other hand, what the platform does (through algorithms, weights, etc.) in selecting, prioritizing, and making users' content visible is something happening at the platform level. There the government may have a role to play, and here we are talking about the platform's decisions.
There is a middle ground: coordinating with or gaming the algorithms to make your content visible to users, or groups that in one way or another control many user accounts. There might be some government and platform involvement in this case.
What kind of punishment exactly?
If governments punish users for content, assuming the content isn't clearly illegal, that's government censorship and may be a free speech violation depending on jurisdiction.
Platforms censoring or prioritizing content is a private entity enforcing rules for what they're willing to host and distribute on the platform. I'm not sure that's punishment at all, people don't have to post there and there's no use of force or detention being threatened.
What becomes of the public square under this doctrine, though? There cannot be more than one public square, as it's a natural monopoly, so a platform that turned itself into a public square (and extracts rents from doing so) gets to control the narrative displayed in public (presumably one suitable to its agenda). And there's no secondary public square, due to the network effect.
You do not get to say whatever you want in the local pub, the local library, the local mall, the local grocery store (even if it is the only one!) or the local town hall! You can be trespassed explicitly for your speech in any of them, or even for no reason. The only grocery store in town can say "You aren't allowed here anymore because fuck you", and as long as you cannot prove in court that they actually banned you for being a protected class, no more groceries for you!
The closest we have is "common carrier" concepts. The electric company has to serve even the nazis. So did the telephone company. I think the railroads were required to serve all comers?
Each of those were done for the purpose of encouraging a functioning market, actions taken against organizations that you cannot build competitors to, since the rights of way they use are basically gone and would cost absurd prices to replace the infrastructure for even a single customer.
None of this is true for Facebook or Twitter. Anyone can spin up a replacement in a day for dirt cheap, and can self host. The internet was originally built from people self hosting, and it's only gotten easier and cheaper, so the "But you can't replace it" argument has never held water.
If you are upset that a lot of internet companies are megacorps and don't have competition and that self hosting would leave you kind of lonely, I agree! Let's break up these absurdly stupid companies.
Don't claim to be advocating for "freedom of speech" if what you are actually advocating for is "I want to compel speech that I agree with from everyone else"
There is STILL no day to day requirement to be on facebook or youtube or whatever. There are a glut of youtube replacements springing up, because the "serving video on the internet is hard and expensive" narrative is mostly wrong, and because Youtube sucks for anyone who isn't Mr Beast, and the style of content they want that drives maximum profit.
You can be trespassed from public property like a library but the bar is very high and simply saying something they don't like will not hold up in court.
Electric companies, social media companies, grocery stores, etc can do whatever they want with regards to limiting speech. The only issue would be if the government compelled those organizations to limit speech, see the Twitter files for an example of government overreach almost certainly violating free speech protections or finding just how far they can go before legal liability.
Which one platform for user generated content do you think holds the natural monopoly? I agree that network effects limit competition, absent government intervention, but among my acquaintances people are using:
- Discord
- Snapchat
- Instagram
- TikTok
- Bluesky
- YouTube
- Twitter/X
- Reddit
I think that enforcing anti-trust would be enough to keep any one platform or corporate owner from monopolizing public discourse.
As for falsehoods: some people will be mistaken, some people will lie, and sometimes sarcasm will be misunderstood. Why should anyone be liable? It is on each individual to inform themselves, and to decide what to believe and what to disregard.
Article 19 of the Universal Declaration of Human Rights: "Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers."
It doesn't say "if your opinion is approved by the government". It doesn't say "if your opinion is correct". It makes no exceptions whatsoever, and that is what we need to strive for.
Opinions cannot be right or wrong.
> It makes no exceptions whatsoever, and that is what we need to strive for.
It certainly does. See libel / defamation / perjury / false representation / fraud / false advertising / trademark infringement.
I’d say if you can be jailed for a particular opinion, someone has certainly made a judgement call that your opinion is wrong!
Can you give an example of someone in a modern democracy jailed for their "opinion"?
To wit, are the examples you're thinking of "statement of opinion", "statement of fact", "pejorative insult", or "incitement"?
Saying "I think <public figure> is an idiot" is an opinion. "The earth is flat" or "The holocaust never happened" are not opinions; neither is, "Kick out all the <insert pejorative here>."
And yeah, in North Korea you'll absolutely be jailed for expressing some opinions. That may make them illegal, but it doesn't make them no longer opinions.
But would you dare state out loud in Germany that, in your opinion, the official number of Holocaust victims is actually much less than what's been widely reported? Even if you had what you believed was solid evidence supporting your argument? I bet you wouldn't.
> neither is, "Kick out all the <insert pejorative here>."
How about voicing your opinion that <people from some country> should be barred from emigrating to <European country> because <crime statistics>? Bet you wouldn't try that either, because your opinion is in "hate speech" territory now.
Ugh, what?
https://www.bbc.com/news/articles/c5yl7p4l11po
https://en.wikipedia.org/wiki/Detention_of_R%C3%BCmeysa_%C3%...
https://en.wikipedia.org/wiki/Detention_of_Mahmoud_Khalil
This isn't an opinion, it's incitement to a crime.
> Detention of Rümeysa Öztürk / Mahmoud Khalil
These clearly are for opinion, and are widely criticized as being vindictive and unconstitutional.
Not really an opinion but it can be a belief. I'm not sure why we are okay with people believing that Earth is ~6000 years old, but not with someone believing that we are in a simulation and everything before e.g. year 1999 is just a collective memory fabrication.
Yes. You can believe this fact to be false but you might also be lying. How do you show this? By showing why you believe it to be false
If you want to visit that idiotic Noah's Ark museum, go.
If you want to prohibit teaching about evolution in schools, go to hell.
People might not have gone to jail, but they did have voices and access to society limited or removed because of their opinions.
Immoral, unethical, impractical, or contrary to human rights, perhaps.
I'm not sure that's a helpful distinction. In some sense, everything we classify as a "fact" is a judgement call: is the sun a giant ball of fusing hydrogen? I mean, probably, but maybe we're all living in some sort of simulation and it doesn't really exist at all; or maybe you are living in your own personal "Truman Show," being fed lies by everyone who shows you scientific "evidence" about the sun's nature.
But "the sun is a giant ball of fusing hydrogen" is a different type of statement than "chocolate ice cream is better than vanilla", or "Mozart is better than Beethoven".
"The sun is a giant ball of fusing hydrogen" has the possibility of being proven false. That makes it a factual claim: either true or false.
If I said "NYC is the capital of the United States"* I'm either lying or mistaken
What makes it a lie vs mistaken? Whether it's a genuine belief, that I have a reason to have the belief. For example if I made the assumption it's the capital because it's the biggest city then I'm mistaken.
It's a lie if I know it's not true, if I ignore information that falsifies the fact.
*To avoid semantics I mean the official capital of the country not like "it's important"
And even for claims which are in the realm of "fact", which are false, but which are truly believed, we need to be careful about suppressing truth. There was a time when "the sun goes around the earth" was accepted "scientific fact". Lots of flat-earthers genuinely believe the falsehoods they're spreading. Where do we draw the line between "healthy skepticism" and "dangerous falsehood"?
I don't have a clear answer, but I do think there needs to be a line.
The line is whether the person is genuine in belief and the potential harm. There's no direct harm if someone believes the sun revolves around the earth.
No direct harm; but it may be comorbid with other things that cause harm, like vaccine skepticism.
There is a question about what the best response is. Just censoring disinformation like this may cause people who notice or experience the censorship to give more credence to the disinformation. But as is apparent from the whole "flat earth" fiasco, there are a large number of people who seem simply incapable of understanding basic math or scientific principles. The earth can be proven round by personal observations anyone can make. If people still cannot be convinced the earth is round, how are they to be convinced about things for which they cannot collect personal observations, like vaccines, or the Holocaust, or January 6th?
At any rate, I'm glad I'm not running a platform like YouTube; it's not an easy problem.
I still say there is a difference between "Africa exists" and "gwd's statement about the lack of 'facts' is heretical and they should be imprisoned".
Although I mostly agree, I just wanted to make explicit that nuance.
GP brought up people being jailed for social media posts, but didn't reference any specifically. In the handful of cases I found via a web search, the charges were related to inciting violence.
GP also brought up the Universal Declaration of Human Rights. Article 30 reads:
Nothing in this Declaration may be interpreted as implying for any State, group or person any right to engage in any activity or to perform any act aimed at the destruction of any of the rights and freedoms set forth herein.
When one exercising a freedom restricts another's ability to exercise theirs, it is reasonable to expect courts to get involved to sort it out.
I think quite a few Europeans have lasting and direct experience with totalitarian, oppressive regimes. Which might also explain why they have stricter (or simply more precise) laws governing expression – not as an oppressive tool, but as a safety valve for the society.
I think it is good and healthy to have conversations as to what should and should not be protected speech, but I think that there is this rote reaction that kinda boils down to free speech absolutism. But of course, all the free speech absolutists find at some point or another there is some speech they want made illegal.
A great example of this is in the US, where Republicans often outwardly took such a stand when they weren't in power, but recently tried to use the FCC to take a comedian who made light criticism of the regime off the air.
So, silencing speech might not always be the oppressive regime, but it sometimes is.
EDIT: OK, I get the fire/theatre example is a bad one. Instead, consider incitement more broadly. For example incitement to discrimination, as prohibited by Article 20 of the International Covenant on Civil and Political Rights.
This is from an overturned US Supreme Court opinion, has no basis in anyone's jurisprudence, yet keeps coming up as an example of speech that's permissible to suppress for some reason.
Oliver Wendell Holmes created that example to support jailing a socialist for speaking out against the World War I draft.
Because they don't actually have an example of non-fraudulent speech, not imminently causing violence, that SCOTUS has upheld a ban on. And then when you call them out they'll say "but wait, it's metaphorical." If they had a better example they'd be using it.
Regardless, incitement remains an exception to free speech the world over to some degree. Article 20 of the International Covenant on Civil and Political Rights holds that incitement to discrimination is prohibited, for example [0].
My point stands, people of most societies globally believe certain speech should not be protected.
[0] https://www.ohchr.org/en/instruments-mechanisms/instruments/...
No, its dicta which neither was part of the substantive ruling nor an accurate description of pre-existing law from the Court’s opinion (which was unanimous, so there was no “dissenting opinion”) in a case that has since been overruled and is notorious for having allowed an egregious restriction on core political speech.
This person would be able to provide evidence as to why they thought there was a fire, show why their belief is genuine and not a lie.
Indeed. But one should realize that thorny words are precisely what replaces physical violence.
Human nature doesn't change in a democratic society that allows free dialogue, what changed is the way it is expressed.
If you erase the horrible parts of ourselves we worked hard to banish onto paper, they will eventually remanifest themselves in reality.
"You're fired" is just words; "your health insurance is denied" is just words; "we don't accept your type here" is just words; "you're being sued by someone with effectively infinite means" is just words. But those words will drastically change the course of your life.
While I abhor physical violence, I do also realize some words are also a type of violence in and of themselves.
I could say I'm going to deny your health insurance, or deny entry of your type to my group, or sue you for something. But notice how me saying any of these things don't actually have any immediate effect on you, because I don't control your health insurance or moderate a group you want to be in, or know who you are to sue you.
I can use words to convince people who do control those things to do things to you, but you can convince them not to, and convince others to do the same thing to me. The value of free speech is in replacing these conflicts that would otherwise be physical violence with words. Human nature didn't change. We still fight all the time, but with words.
None of these is "just words" lol. The words just convey something that will or won't be done. All of these examples are overly dramatic too. I too wish I lived in a world where nobody could tell me "no" but that'll never happen. If someone has lots of money and you don't, they probably won't sue you. Especially for a petty reason. There's not enough to gain from that.
>While I abhor physical violence, I do also realize some words are also a type of violence in and of themselves.
Violence is physical. People are only trying to claim a connection because they want to censor their enemies using one of the exceptions to free speech, which is when there is a threat of imminent violence. As nasty or unpleasant as words may be, they bear no resemblance to actual violence. And no, you don't get to censor people because they say stuff that you feel bad about. The whole point of free speech is to allow the expression of unpopular and unpleasant words. Please get your language right and stop trying to gaslight the rest of us into a censorship program. Thank you for your attention to this matter lol
No this is bullshit. The Nazis didn't kill the jews because they couldn't say mean things about them. The Nazis didn't purposely target trans people and gay people and mentally challenged people and political opponents because they couldn't slag them publicly.
Germany did not become Nazis because of any lack of free speech. People were talking about how horrifying the Nazis were right up until they were put in camps.
Christ.
The Civil War didn't happen because people weren't able to say black people are lesser (which they were always able to say and still are)
This take is detached from history.
How much violence did Native Americans avoid by getting to say how awful they were being treated? They were never muzzled, so why did they still end up basically ethnically cleansed?
The difference is oppression used to be physical and involved a lot of killing, now it is done through non-violent means through words. That's what I meant by words replacing violence.
Southern preachers insisted that being enslaved was the black man's rightful place, as god intended, because they were naturally less intelligent and "savage" and needed good guidance from the white man.
I'm tired, after hundreds of years, of people still insisting "no no no, just a little more information freedom and humans will magically fix all their natural biases and magically stop acting like humans and magically stop believing what is comfortable instead of what is provably correct"
It's absolutely good to be much closer to the "Freer" side of that spectrum than the "government enforced muzzle" side, but I'm so tired of people insisting that we can't possibly wiggle around a little bit on the spectrum to find maybe a better place.
Oppression does not come from what laws you have. Oppression comes from how power works. It doesn't matter what laws you have on the books if you put people in charge who do not give a shit about them. It doesn't matter if you have the first amendment if you elect enough people to just disregard it and even change it if you want.
Rules aren't real. Rules don't matter unless you can enforce them. If you allow oppressive people into power, it doesn't matter how many times you write "don't oppress people"
What oppression has free speech demonstrably stopped?
But we are fixing our natural biases over time to get to the technological civilization we have today. Our beliefs align better with reality today than 500 years ago. That's why we can build computers which we're using to talk right now, but couldn't 500 years ago. Everybody is better off compared to 500 years ago. Information sharing accelerates this process.
> What oppression has free speech demonstrably stopped?
Free speech doesn't stop oppression, it replaces violence. Oppression is in human nature, or rather, in nature in general. When two individuals that share a local region of reality have misaligned wishes, they interfere with each other. But how they interfere matters. Free speech changes the method of interaction, but not the essence of competition.
Two perfectly rational people can agree on a shared model of reality yet not agree on what actions to take next. People, although more similar than different, have different preferences. A modern democratic society simply places the majority's wishes first and oppresses minorities non-violently. It allows open negotiation to balance these wishes without resorting to violence.
Attacking one of the essential pillars of this society doesn't stop oppression, it just risks bringing back a worse form of it.
What does this mean? That if people aren't able to express or relieve themselves of some horrible act then some people will be more likely to do something bad?
Like if a person can't be racist against Muslims on Facebook (due to it being illegal) they will be more likely to harm Muslims physically?
Okay but that's a big "IF". I suspect a regime attempting to do that might be promulgating a significant amount of propaganda, but I doubt that they're able to be oppressive "because of speech".
What about loss of upward mobility for the middle class, or loss of living wages, mismanaged public institutions, corruption, bribery, collapse of democratic process?
All of this enables or sustains oppressive regimes and doesn't require any kind of speech from citizens. And without these kinds of serious problems, citizens barking nonsense won't result in much. Hindering free speech only makes it easier for a regime to continue to exacerbate these serious problems and continue oppression without being called out.
And a lot of speech is like this; nearly no speech is consequence-free. I am not saying we should ban any speech that has negative consequences. What I am saying is that with other rights we also have to weigh the active freedoms of one person ("the freedom to do a thing") against the passive freedoms of all the others ("the freedom to not have a thing done to you").
With other rights it is the same: you may have a right to carry a firearm and even shoot it. But if you shoot it, for example, in a church, other people's right not to have to deal with you shooting that gun in that church outweighs your right to do so.
In the German speaking part of the EU we decided that the right of literal Nazis to carry their insignia doesn't outweigh the right of the others to not have to see the insignia that have brought so much pain and suffering in these lands. To some degree this is symbolic, because it only bans symbols and not ideologies, but hey, I like my government to protect my state from a fascist takeover, because they are kind of hard to reverse without violence.
This strikes me as just incorrect. What example from history shows totalitarianism being successfully avoided because of controls on speech?
The first item in the totalitarian playbook is controlling speech, and there are historical examples of that in every single totalitarian regime that I'm aware of.
This has worked well for more than half a century here, and I assure you that Germany hasn’t succumbed to a totalitarian regime yet. Quite the opposite to some, erm, land of the free that seems to struggle a lot with freedom lately.
That's where the conundrum lies, requiring individual responsibility for protecting a whole society of potential bad actors using this freedom to break society apart.
How is it solved? No one knows. What we do know is that relying on individuals to each act on their own won't work; it never works. We also see the effects on society of losing any social cohesion around what "truth" is. Even though vehicles to spread lies and manipulate people existed before the age of the Internet and social media, this has been supercharged in every way: speed of spread, number of influential voices, size of followings, etc.
Anything that worked before probably doesn't work now. We don't know how to proceed, but using platitudes from before these times is also a way to cover our eyes to what is actually happening: fractures in society becoming larger rifts, supercharged by new technologies, being wielded as a weapon.
I don't think government censorship is the answer, nor do I think that just letting it be and requiring every single person to be responsible for critically analysing the insurmountable amount of information we are exposed to every day is the answer either.
It is solved by a democratic system that defines truth as "mutually observable phenomena", defines good as "the wishes of the people", and allows individuals to engage in free dialogue as a replacement for violence.
Good outnumbers bad, so the good will win, unless both sides think they're good in a 50/50 split.
This can happen even in that ideal society, because 50% of the individuals will eventually decide to have fundamentally different goals as the other 50%. In which case, I don't think we should hold that society together by force, but rather provide a mechanism for it to peacefully split into two, precisely to uphold the democratic principle of respecting the wishes of every individual.
Suppose half of those people are mistaken in a collective delusion, and their goals are in actuality aligned with the other half, but the other half have just failed so spectacularly at enlightening them (or perhaps the delusional half are so spectacularly delusional that they're impossible to enlighten). In this rare case of a perfect failure, they will quickly realize after the split and want to get back together, because reality is a harsh judge, and its judgements are ultimate.
This argument can be made for government in general, although granted technology does make it easier for a smaller group to overreach. I'm a European and do hear your concern, but I feel comfortable supporting restrictions on speech _as long as_ there is also a functioning and just legal system that those restrictions operate within. Though there does seem to be a worrying trend towards technology bypassing the legal system and just giving enforcement agencies blanket access of late.
We all also have our own cultural biases and blind spots. I offer this not as whataboutism but as a different perspective: I'm _way_ more frightened by the authoritarian police culture in the US (I base this on interactions with the police during a period I lived there) than I am of the UK government's internet censorship. The internet censorship could do a lot of harm, but I think not as much potential harm as a large militarised police force willing to bust down doors on command from above.
In a statement on the Dubai-headquartered company, Durov claimed that the French intelligence services asked him “through an intermediary” to help the Moldovan government to censor “certain Telegram channels” before the vote on Oct. 20, in which incumbent President Maia Sandu secured a second term in office following a runoff held on Nov. 3.
He said a few channels were identified to have violated Telegram’s rules following reviews of the channels concerned and were subsequently removed.
“The intermediary then informed me that, in exchange for this cooperation, French intelligence would ‘say good things’ about me to the judge who had ordered my arrest in August last year,” Durov said, describing this as “unacceptable on several levels.”
“If the agency did in fact approach the judge — it constituted an attempt to interfere in the judicial process. If it did not, and merely claimed to have done so, then it was exploiting my legal situation in France to influence political developments in Eastern Europe — a pattern we have also observed in Romania,” he further said.
Durov also said that Telegram later received a second list of "Moldovan channels," which he noted were “legitimate and fully compliant with our rules,” unlike the initial list.
CONTINUED...
https://www.aa.com.tr/en/europe/telegram-head-accuses-france...
There are only a few European countries that jail people for wrongspeak, and I can't think of a single one of those countries whose population in general is in favor of such laws.
You do realize that this includes the freedom of people who get harassed online by others.
German journalist Dunja Hayali’s rights were violated by hate comments after social media and news sites misquoted her reporting on Charlie Kirk’s funeral.
But I suppose that wouldn't apply to disinformation. So your made up bullshit is yours to keep!
It’s a kind of reframing (or consequence of) the bullshit asymmetry principle.
They hand-wave tremendously complex questions, in such a way that the respondent is free to interpret them any way they wish.
A similar survey question might be:
> I am thinking of a number. Is it, a) too high or b) too low?
But what changed in the last two decades or so is the newsfeed as well as other forms of recommendation (eg suggested videos on Youtube). Colloquially we tend to lump all of these together as "the algorithm".
Tech companies have very successfully spread the propaganda that even with "the algorithm" they're still somehow "content neutral". If certain topics are pushed to more users because ragebait = engagement, then that's just "the algorithm". But who programmed the algorithm? Why? What were the explicit goals? What did and didn't ship to arrive at that behavior?
The truth is that "the algirthm" reflects the wishes of the leaders and shareholders of the company. As such, for purposes of Section 230, it's arguable that such platforms are no longer content neutral.
So what we have in the US is really the worst of both worlds. Private companies are responsible for moderation but they kowtow to the administration to reflect the content the administration wants to push or suppress.
Make no mistake, the only reason Tiktok was banned and is now being sold is because the government doesn't have the same control they have over FB, IG or Twitter.
So a survey of what people want here is kind of meaningless because people just don't understand the question.
I think they understand perfectly well. They look at an internet where internet companies aren't held responsible, conclude it's largely corrosive, and prefer a different approach. I'm not sure it's important that they don't understand the elements of a libel claim or that internet companies get a special get-out-of-jail-free card that traditional media doesn't.
Moderators can block individual posts, accounts, or entire instances if they have objectionable alternate rules.
Don't like the moderation on some instance? Move to another.
neither is acceptable.
the US made a good start by disallowing government censorship completely. europe could do the same, perhaps with a carve out for outright hate speech, and obvious falsehoods like holocaust denial. but these exceptions need to be very clearly defined, which currently is not the case.
what is missing is a restriction on private businesses, to only allow them to moderate content that is obviously illegal or age restricted, or, for topical forums, off topic, for the latter they must make clear which content is on topic.
The problem with group moderation is that it will form disparate, isolated groups that get larger over time. This gradually reduces the effectiveness of democracy and destroys social cohesion. The people in the same group see other groups as more and more evil as the groups get larger, so they don't talk to each other as much. The effort required to talk to people in a different group becomes larger as the groups get larger, so fewer and fewer individuals are capable of dialogue, until eventually there are just two echo chambers that absolutely hate each other. And what follows is either violence or oppression, or both, because the conflict has reached the government level.
There used to be a cap on practical group size before the Internet (except in rare cases, those become authoritarian states), so this went unnoticed in democratic societies. But now there isn't. We ought to consciously realize that the base unit of a democratic society is an individual, and create policies that enable decisions at the individual level.
We ought to talk to people that we think are evil. And let them talk, too.
(Bias: I’m lucky to live in a country that has a relatively good government, so I find it “easy” to imagine that it could work.)
In the UK (like OP), they are arresting people for thought crimes. An unexpected consequence of Brexit was the loss of the free speech protection of Article 10.
Opinion polling has Labour in a steady, steep decline. Given the unprecedented attack on freedom, presumptive decimation in the next election is guaranteed at this point. There's no future for the Labour party beyond 2029; it would be absurd for them to do this to their party unless they had a plan.
You obviously don't play your cards down like they have if you're intending to have a fair election in 2029, or one at all.
"Q17D. In your opinion, should each of the following platforms be held responsible or not responsible for showing potentially false information that users post? Base: Total sample in each country ≈ 2000."
Around the world, approximately 70% said yes. The rub, of course, is coming up with a framework. The poll suggests that the DMCA approach of no duty is widely unpopular. However, strict liability would ensure that these industries go away, and even a reasonableness standard seems like a headache.
None of these are one-size-fits-all solutions, and there should be a mix. We have a working patchwork of laws in physical space for a reason: it allows flexibility and adjustments as we go, as the world changes. We should extend that to virtual space as well.
Age/content labeling and opt-in/opt-out for some content. Outright bans on other kinds of content. A similar "I sue when you abuse my content" approach for copyright, impersonation, etc.
One size does not fit all, and that is not how the real world works. Online shouldn't work much differently.
Ultimately authors should be held responsible for content; the government's role here is setting the laws and funding the law enforcement mechanisms (police, courts, etc.), and the platform's role is to enable enforcement (doing takedowns or enabling tracing of perpetrators).
Obviously one of the challenges here is that the platforms are transnational and the laws are national, but that's just a cost of doing business.
However, this doesn't absolve the platforms from responsibility for content if they are involved in promoting it. If a platform actively promotes content, then in my view it shifts from a common carrier to a publisher, and thus all the normal publisher responsibilities apply.
Pretending that it's not possible to be technically responsible for platform amplification is not the answer. You can't create something you are responsible for, that creates harm, and then claim it's not your problem because you can't fix it.