Anti-Cybercrime Laws Are Being Weaponized to Repress Journalism
The article discusses how anti-cybercrime laws are being used to repress journalism in various countries, and the HN discussion highlights the concerns and criticisms of such laws and their potential for abuse.
Snapshot generated from the HN discussion
Key moments
- Story posted: Nov 2, 2025 at 1:12 PM EST
- First comment: Nov 2, 2025 at 1:54 PM EST (42m after posting)
- Peak activity: 56 comments in the first 6 hours of the thread
- Latest activity: Nov 4, 2025 at 7:32 PM EST
Source?
Rolling Stone’s investigation: ‘A failure that was avoidable’: https://www.cjr.org/investigation/rolling_stone_investigatio...
"Last July 8, Sabrina Rubin Erdely, a writer for Rolling Stone, telephoned Emily Renda, a rape survivor working on sexual assault issues as a staff member at the University of Virginia. Erdely said she was searching for a single, emblematic college rape case that would show “what it’s like to be on campus now … where not only is rape so prevalent but also that there’s this pervasive culture of sexual harassment/rape culture,” according to Erdely’s notes of the conversation"
I mean this with all sincerity: So what? What bearing does that have on the journalist and what they are writing?
I am also curious about that claim the other guy asked you about, “Guiding” sources and such.
Fraud is bad, and it should be illegal, but why have different punishments based on what technology someone used?
Laws like this go outside of fraud, and often are clearly unconstitutional, like the Unlawful Internet Gambling Enforcement Act of 2006, which made lawful gambling illegal too, until it was effectively overturned with Murphy v. National Collegiate Athletic Association in 2018.
So first, as a foundation: I see no reason to pretend that the law is always perfectly thought through and logical, particularly when it comes to crime. And even when a law was well thought through for its time, that doesn't mean circumstances haven't changed over the decades while the law remained static.
That said, in principle punishment embodies multiple components, and a major one is deterrence. The deterrence value in turn interplays with factors like barrier to entry, the scaling of potential harm, and the likelihood of getting caught. Use of technology can significantly affect all of these. It's far more challenging and expensive to prosecute crimes that stretch across many jurisdictions. Technology can also have a multiplier effect, allowing criminal actors to go after far more people, both in raw numbers and in finding the small percentage who are vulnerable, and perceived anonymity/impunity can further increase the number of actors and their activity levels. It has also often implied a higher degree of sophistication.
All of that weighs towards a higher level of punishment even as pure game theory. That doesn't mean the present levels are correct or shouldn't be only a part of other aspects of fighting fraud that depressingly frequently get neglected, but it's not irrational to punish more when criminals are generating more damage and working hard to decrease the chance of facing any penalties at all.
You will of course never reach perfection, but considering that when a law is applied a lot of violence (police, jail, ...) gets involved, a politician who does not dedicate his life to making the laws as perfect as humanly possible (with the ideal of finding an imperfection in the laws being as big a human breakthrough as the discovery of quantum physics or general relativity) clearly does not deserve to be elected.
Oh, sweet summer child. Not attempting to make laws "as perfect as humanly possible" is the least of our worries with politicians!
Most of them actively dedicate their lives to the opposite, making the laws as bad as possible: out of ideology, out of being paid by lobbies and monopolies, out of personal interest, and so on.
Part of the history of CFAA was that it was passed because the state of the law preceding it didn't comfortably criminalize things like malicious hacking and denial of service; you can do those things without tripping over wire fraud.
That's a problem with it, but another big one is that it's inherently ambiguous.
The normal way you know if you're authorized to do something with a computer is that it processes the request. They're perfectly capable of refusing; you get "forbidden" or "access denied" but in that case you're not actually accessing it, you're just being informed that you're not allowed to right now. So for there to be a violation the computer would have to let you do something it isn't supposed to. But how are you supposed to know that then?
On a lot of websites -- like this one -- you go to a page like https://news.ycombinator.com/user?id=<user_id> and you get the user's profile. If you put in your user there then you can see your email address and edit your profile etc. If the server was misconfigured and showing everyone's email address when it isn't supposed to, how is someone supposed to know that? Venmo publishes their users' financial transactions. If you notice that and think it's weird and write a post about it, should the company -- possibly retroactively -- be able to decide that the data you're criticizing them for making public wasn't intended to be, and therefore your accessing it was a crime? If you notice this by accident when it's obvious the data shouldn't be public -- you saw it when you made a typo in the URL -- should there be a law that can put you in jail if you admit to this in the process of making the public aware of the company's mistake, even if your access resulted in no harm to anyone?
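The point in the comment above can be made concrete: from the client's side, "authorization" is only ever observable through the server's response. A minimal sketch (the function name and categories are illustrative, not from any real library):

```python
def access_outcome(status_code: int) -> str:
    """Map an HTTP status code to what the client can actually know.

    Nothing in the protocol tells the client whether a 200 response
    was *intended* by the site operator -- a misconfigured server
    returns "served" for a page it shouldn't expose, and the client
    sees exactly what it would see for a page it may access.
    """
    if 200 <= status_code < 300:
        return "served"       # the server chose to fulfil the request
    if status_code in (401, 403):
        return "refused"      # explicitly not authorized right now
    if status_code == 404:
        return "not found"
    return "other"

print(access_outcome(200))  # served
print(access_outcome(403))  # refused
```

This is why the "you knew you weren't authorized" element is hard to infer from the protocol alone: both the intended and the unintended page produce the same "served" outcome.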
The wording is too vague and it criminalizes too much. "Malicious hacking" might not always be wire fraud but in other cases it could be misappropriation of trade secrets etc., i.e. whatever the actual act of malice is. The problem with the CFAA is that it's more or less attempting to be the federal computer law against burglary (i.e. unlawful entry with the intent to commit a crime) except that it makes the "unlawful" part too hard to pin down and failed to include the part about intent to commit a crime, which allows it to be applied against people it ought not to.
When was the last CFAA prosecution where the perpetrator literally didn't know they were doing something unauthorised?
Asking for precedent is not the same as "total reliance on prosecutorial discretion." It's asking whether a hypothetical is grounded.
> moment that federal prosecutors stop being obsessed with 100% conviction rates, the whole weaponized process becomes tyrannical overnight
This is an orthogonal problem. Prosecutors can bring bullshit cases with zero basis in the law if they want to.
So then you get cases like Sandvig v. Barr where the researchers are assuming the thing they want to do isn't authorized even though that would be unreasonable and then they have to go to court over it. Which is how you get chilling effects, because not everyone has the resources to do that, and companies or the government can threaten people with prosecution to silence them without charges ever being brought because the accused doesn't want to experience "the process is the punishment" when the law doesn't make it sufficiently clear that what they're doing isn't illegal.
Sandvig "was brought by researchers who wished to find out whether employment websites engage in discrimination on the basis of race, gender or other protected characteristics" [1]. It was literally the researchers asking the question you asked and then getting an answer.
"The Court interpreted CFAA’s Access Provision rather narrowly to hold that the plaintiffs’ conduct was not criminal as they were neither exceeding authorized access, nor accessing password protected sites, but public sites. Construing violation of ToS as a potential crime under CFAA, the Court observed, would allow private website owners to define the scope of criminal liability – thus constituting an improper delegation of legislative authority. Since their proposed actions were not criminal, the Court concluded that the researchers were free to conduct their study and dismissed the case."
Nobody was prosecuted. Researchers asked a clarifying question and got an answer.
[1] https://globalfreedomofexpression.columbia.edu/cases/sandvig...
Let's remember how the process works. First they threaten you, then if you don't fold they do a more thorough investigation to try to find ways to prove their case which makes you spend significant resources, then they decide whether to actually prosecute you. They don't actually do it if they can't find a way to make you look like a criminal, but that's why it needs to be unambiguous from the outset that they won't be able to.
Otherwise people will fold at the point of being threatened because you'd have to spend resources you don't have and the deal you're offered gets worse because you made them work for it.
Suppose some researchers are trying to collect enough data to see if a company is doing something untoward. They need a significant sample in order to figure it out, but the company has a very aggressive rate limit per IP address before they start giving HTTP 429 to that IP address for the rest of the day. If the researchers use more than one IP address so they can collect the data in less than 20 years, is that illegal? It shouldn't require a judge to be able to know that.
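As an aside, the well-behaved version of the scenario above, a single client backing off when it receives HTTP 429 rather than rotating IP addresses, can be sketched as follows. This is a minimal illustration only (the function name is invented, and only the seconds form of the `Retry-After` header is handled, not the HTTP-date form):

```python
def retry_delay(headers: dict, default_delay: float = 60.0) -> float:
    """Return how many seconds to wait before retrying after an HTTP 429.

    Honors the Retry-After header when it carries a number of seconds;
    falls back to a fixed default when the header is missing or
    unparseable.
    """
    raw = headers.get("Retry-After")
    if raw is None:
        return default_delay
    try:
        return max(0.0, float(raw))
    except ValueError:
        # Could be the HTTP-date form; a real client would parse it.
        return default_delay

print(retry_delay({"Retry-After": "120"}))  # 120.0
print(retry_delay({}))                      # 60.0
```

Of course, the comment's point stands: with an aggressive per-IP daily limit, a compliant client like this may take years to gather a meaningful sample, which is exactly why researchers face the multi-IP question in the first place.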
Reality is infinitely complex. The law, meanwhile, is a construct.
One can always come up with anxious apparitions of hypothetical lawbreaking. (What if I’m murdered by a novel ceramic knife. The killer might get away!)
> If the researchers use more than one IP address so they can collect the data in less than 20 years, is that illegal? It shouldn't require a judge to be able to know that
It doesn’t. It requires a lawyer.
No, we lived with that ambiguity because the US system of laws purposely chooses to let Judges in the court system decide those ambiguities (and create "precedent") but only after something has happened, only after that happening leads to a court case, and only if that court case is not settled or dismissed.
That means everyone can just settle cases that would lead to a precedent they don't want.
US law ambiguity is purposeful. The solution is judges and courts that emphasize outcomes for normal people and endeavor to improve justice for normal people, but the people who get law degrees seem somewhat sociopathic and prefer instead to waste millions setting precedents on what individual words mean (precedents that don't match at all what a normal, reasonable person would understand) and policing syntax.
Judges who try to do just that are labelled "Activist" by politicians.
Meanwhile, when we have agencies who take it upon themselves to take a vague law and turn it into much less vague rules and clear recommendations, they are accused of being unelected bureaucrats writing laws.
If you want less ambiguity, you need to elect people that don't punish agencies for putting out clear documentation, and you need to reform the entire justice system to prioritize clear readings of plain language law over our stupid system of treating english as a programming language for law, which it can never be.
Human language is ambiguous. Law will always be ambiguous. If you suggest instead we should use more strict language in law on a forum full of programmers, you should hopefully understand how that is a cure far worse than the disease. You will end up with law exactly as unambiguous as it can be to an army of specialized lawyers and nobody else.
This is no different than zillions of other criminal statutes, the majority of which hinge on intent.
A common one is to apply the requirement narrowly. You didn't intend to hurt anyone or steal anything, but you intended to visit that URL, so that's the prosecution's burden satisfied.
Which is why you need it to hinge on more than just intent. Otherwise why do we even have different laws? Just pass one that says it's illegal to be bad, right?
The problem here is that accessing a computer is a completely normal and unproblematic thing to do on purpose, so the intent requirement isn't doing much without knowing what "authorization" is supposed to mean but that's the part that isn't clear.
I don’t know if it’s that tech people are just predisposed to overcomplicate, or that legal terminology tends to have definitions separate from the tech/colloquial usage of terms. But looking at contemporary use of the CFAA, I don’t actually think “was this hacking or just using the computer like normal” is that hard to figure out.
The problem is that if message board nerds and other ordinary people can't figure out what the law requires in the cases that probably won't be prosecuted, but could be, then it deters people from doing things that shouldn't be -- and maybe even aren't -- against the law. And it gives the government a weapon it shouldn't have, because the lack of clarity can be used to coerce plea bargains.
Message boards are constantly debating insane shit.
If someone feels—beyond generalized anxiety—they’re on the edge of the law, there are plenty of private and public resources they can consult. If they want to shoot shit, as, to be clear, we’re doing here, they can ponder on a message board. The former presages real work. The latter entertainment.
If you think that’s not true, can you give some examples of ambiguous fact patterns?
Fraud via phone or computer is harder to catch. So the US follows its established pattern: instead of increasing law-enforcement effort, it increases punishment.
https://www.atlanticcouncil.org/blogs/new-atlanticist/the-un...
> states parties are obligated to establish laws in their domestic system to “compel” service providers to “collect or record” real-time traffic or content data. Many of the states behind the original drive to establish this convention have long sought this power over private firms.
(7) Unlawful access to systems
(8) Interception and wiretapping
(9) Interfering with data (presumably: encrypting and ransoming databases)
(10) DOS attacks
(11) Knowingly selling hacking tools to criminals
(12) Forging online documents
(13) Online wire fraud
(14) CSAM
(15) Solicitation and grooming
(16) Revenge porn
Articles 14-16 are the closest you get to something not "according to Hoyle" cybercrime. I wouldn't want them in my cybercrime treaty, but I'd be pretty chill about them being standalone domestic laws.
A reminder: no matter what a UN convention says, treaties don't preempt the US Constitution. We could not enforce a treaty that includes Nigeria's misinformation terms --- it would violate the First Amendment. (Also useful to know, contrary to widespread belief online, that a self-executing treaty is itself preempted by statutes passed after it).
I mean how is this surprising to anyone?
Grossly offensive is in the eye of the beholder
Quite right. However, certain media outlets have knowingly published false information, and when pushed on this they claim that those reports happened as part of the "opinion" side of their reporting. Before you get smug, your side does it too (as does mine). I am less concerned with blaming people than with coming up with a mitigation of these issues.
So I think we need a 2-class system of reporting: a factual part where knowingly reporting false information has consequences, and an opinion part where it doesn't. Journalists would claim they already do this, but here is the new policy: reporting must consistently and clearly show to which class the report belongs. So maybe a change in background color on websites, or a change in the frame color for videos. Something that makes it visually and immediately clear to which class the reporting belongs. That way people can more accurately assess the level of credibility the reporting should have.
The Fairness Doctrine is irrelevant today because of the way news is published/broadcast, but was effective in my humble opinion
From Wikipedia: “The fairness doctrine had two basic elements: It required broadcasters to devote some of their airtime to discussing controversial matters of public interest, and to air contrasting views regarding those matters.”
And without getting too political, the beginning of a lot of our media woes in terms of news correlates nicely with when the doctrine was revoked
If only someone, anyone, could have foreseen this /s. I read so many HN comments about the "slippery slope fallacy," back when the powers that be were censoring the people that they didn't like. I bet they'll be right back where they were next time the government is going after the "misinformation" they don't like.
Two years ago, I was sued for $10,000 in copyright infringement for embedding a YouTube video on my website. They filed a lawsuit by describing the word “embed” as if it were “upload.” But they are two different things. I won the case. But I realized that others didn't.
I learned that the company filed lawsuits against dozens of websites, especially Blogspot sites. I even heard a rumor.
They share content on social media and community sites in a way that entices people, focusing on areas that remain in a gray zone and where few people know it's illegal.
For example, “Embed movies from YouTube and share them on your website. You'll make a lot of money. If I knew how to program, I would do it.” This is just one example. There are many different examples. By the way, my site wasn't a movie site.
They apparently file lawsuits like clockwork against anyone who triggers their radar with the right keywords via Google Alerts.
Cybercrimes are just another reflection of this. If I could, I'd share more, but I don't want to go to jail. Freedom of expression isn't exactly welcomed everywhere on the internet.
Since the other side may be doing this commercially, they may be thinking in terms of mass production. In other words, they file lawsuits and earn as much as they can. If they can't win, they keep filing lawsuits against others. They don't bother.
They might not be making it a matter of pride; they might just be thinking about making money.
Secondly, which countries does the article mention? Nigeria, Pakistan, Georgia, Turkey, and Jordan. Such countries strain the definition of "government" let alone "law".
Find me a government and I'll find you corruption.
The EIU, V-Dem, CPI, World Bank, and various other benchmarks highlight this as well.
I take an Occam's Razor to the usual arguments against it - the problems these create (fake news, slander, etc) are already prominent in regulated media platforms, which also rely on community moderation as a result. The solutions it enables (space for fearless/citizen reporting, Streisand effects for censorship rather than big-tech powered banhammers) are wholly absent in regulated media.
Besides tech, and going by the press freedom index, one's only hope for good journalism today would be to incorporate in New Zealand. But you'd still have to face the odds of your content being banned in the countries they report on.
The other issue with the anti-press efforts by governments is that it weaponises the state against on-ground journalism and ends up encouraging out-of-country reporting as a result.
Diaspora way back when had a chance to take hold of the flame and do some damage to Meta. Even after it was released, I moved to it immediately. Nobody from any other social platform would join me.
It's one thing to create a decentralized platform; it's quite another to overcome the network effect, where friends and family have other friends and family and can't move because none of their friends or family will move. This is why there was a very narrow window, before Meta became the juggernaut it is today, to get people to move to a more decentralized platform.
Now? Close to 100% impossible to win that game - regardless of the opportunity for freedom from censorship and government overreach. There will be small pockets of people moving to them, but there's a good chance we will never see the kind of numbers that Meta, X, YouTube or other platforms have right now. They are just so entrenched at this point.
I wish people could handle such platforms responsibly. In practice, the first decentralised and un-censorable platform will be immediately overrun with CP and drug emporia. And I say this as someone who appreciates civic freedoms and libertarianism... Some people are genuinely shitty and have an "anti-Midas touch" which turns everything they touch to shit.
Source: I was arrested by my ex-wife's boyfriend, who denied all my human rights in detention. All his ridiculous charges were thrown out, but he and his police partner were allowed to investigate themselves as to whether they had violated any laws. I then received a threatening letter from the Attorney-General telling me I would be charged if I brought up the particulars of it with anyone.
But in this case it may be designed for that purpose.
yeah, right
> "Any proposal must be viewed as follows. Do not pay overly much attention to the benefits that might be delivered were the law in question to be properly enforced, rather one needs to consider the harm done by the improper enforcement of this particular piece of legislation, whatever it might be."
-Lyndon B. Johnson
18 more comments available on Hacker News