Detecting and Countering Misuse of AI
Posted 4 months ago · Active 4 months ago
anthropic.com · Tech story · High profile
Tone: heated, negative · Debate: 80/100
Key topics: AI Safety, AI Regulation, LLM Misuse
Anthropic announces efforts to detect and counter misuse of its AI models, sparking controversy among HN users about the implications for users, developers, and the industry as a whole.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion · First comment: 29m · Peak period: 120 comments in 0-12h · Avg per period: 23.2
Comment distribution: 139 data points (based on 139 loaded comments)
Key moments
1. Story posted: Sep 1, 2025 at 6:44 PM EDT (4 months ago)
2. First comment: Sep 1, 2025 at 7:13 PM EDT (29m after posting)
3. Peak activity: 120 comments in 0-12h, the hottest window of the conversation
4. Latest activity: Sep 7, 2025 at 2:13 AM EDT (4 months ago)
ID: 45097263 · Type: story · Last synced: 11/20/2025, 9:01:20 PM
Except for the ransomware thing or the phishing mail writing, most of the uses listed there seem legit to me, and a strong reason to pay for AI.
One of them is exactly prepping with mock interviews, which is something I do a lot myself; another is getting step-by-step instructions to implement things for my personal projects that aren't even public facing and that I can't be arsed to learn because it's not my job.
Long live local LLMs, I guess.
Anything one does to "align" AI necessarily perturbs the statistical space away from logic and reason, in favor of defending protected classes of problems and people.
AI is merely a tool; it does not have agency and it does not act independently of the individual leveraging the tool. Alignment inherently robs that individual of their agency.
It is not the AI company’s responsibility to prevent harm beyond ensuring that their tool is as accurate and coherent as possible. It is the tool users’ responsibility.
This used to be true. As we scale the notion of agents out it can become less true.
> western liberal ideals of truth, liberty, and individual responsibility
It is said that Psychology best replicates on WASP undergrads. Take that as you will, but the common aphorism is evidence against your claim that social science is removed from established western ideals. This sounds more like a critique against the theories and writings of things like the humanities for allowing ideas like philosophy to consider critical race theory or similar (a common boogeyman in the US, which is far removed from western liberal ideals of truth and liberty, though 23% of the voting public do support someone who has an overdeveloped ego, so maybe one could claim individualism is still an ideal).
One should note there is a difference between the social sciences and humanities.
One should also note that the fear of AI, and the goal of alignment, is that humanity is on the cusp of creating tools that have independent will. Whether we're discussing the ideas raised by *Person of Interest* or actual cases of libel produced by Google's AI summaries, there is quite a bit that social sciences, law, and humanities do and will have to say about the beneficial application of AI.
We have ethics in war, governing treaties, etc. precisely because we know how crappy humans can be to each other with the tools under their control. I see little difference in adjudicating the ethics of AI use and application.
This said, I do think stopping all interaction, like what Anthropic is doing here, is short sighted.
Alignment efforts, and the belief that AI should itself prevent harm, shifts us much closer to that dispersed responsibility model, and I think that history has shown that when responsibility is dispersed, no one is responsible.
You promised a simple question, but this is a reductive question that ignores the legal and political frameworks within which people engage with and use AI, as well as how people behave generally and strategically.
Responsibility for technology and for short-sighted business policy is already dispersed to the point that individuals are not responsible for what their corporation does, and vice versa. And yet, following the logic, you propose as the alternative a watchtower approach that would be able to identify the culpability of any particular individual in their use of a tool (AI or non-AI) or business decision.
Invariably, the tools that enable the surveillance culture of the second world you offer as a utopia get abused, and people are worse off for it.
Does curating out obvious cranks from the training set not count as an alignment thing, then?
It generally refers to aligning the AI behavior to human norms, cultural expectations, “safety”, selective suppression of facts and logic, etc.
The most chill are Kimi and Deepseek, and incidentally also Facebook's AI group.
I wouldn't use any Anthropic product for free. I certainly wouldn't pay for it. There's nothing Claude does that others don't do just as well or better.
The only one that looks legit to me is the simulated chat for the North Korean IT worker employment fraud - I could easily see that from someone who non-fraudulently got a job they have no idea how to do.
Notably, this is not a gun.
Right?
Anti-State libertarians posit that preventing this capture at the state level is either impossible (you can never stop worrying about who will watch the watchmen until you abolish the category of watchmen) or so expensive as to not be worth doing (you can regulate it but doing so ends up with systems that are basically totalitarian insofar as the system cannot tolerate insurrection, factionalism, and in many cases, dissent).
The UK and Canada are the best examples of the latter issue; procedures are basically open (you don’t have to worry about disappearing in either country), but you have a governing authority built on wildly unpopular ideas that the systems rely upon for their justification—they cannot tolerate these ideas being criticized.
Who decides when someone is doing something evil?
The imagined ideal of a smart gun that perfectly identifies the user, works every time, never makes mistakes, always has a fully charged battery ready to go, and never suffers from unpredictable problems sounds great to a lot of people.
But as a person familiar with tech, IoT, and how devices work in the real world, do you actually think it would work like that?
“Sorry, you cannot fire this gun right now because the server is down”.
Or how about when the criminals discover that they can avoid being shot by dressing up in police uniforms, fooling all of the smart guns?
A very similar story is the idea of a drink-driving detector in every vehicle. It sounds good when you imagine it being perfect. It doesn't sound so good when you realize that even 99.99% false-positive avoidance means your own car is almost guaranteed to lock you out by mistake at some point during its lifetime, potentially when you need to drive for work, an appointment, or even an emergency.
People accept that regular old dumb guns may jam, run out of ammo, and require regular maintenance. Why are smart ones the only ones expected to be perfect?
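A quick sketch of the arithmetic behind the drink-driving example; the 15,000 ignition events over a car's life is a hypothetical figure:

```typescript
// Hypothetical numbers: 99.99% false-positive avoidance per engine start,
// and ~15,000 starts over a car's lifetime (roughly 2 per day for 20 years).
const falsePositiveRate = 0.0001;
const lifetimeStarts = 15000;

// P(at least one false lockout) = 1 - P(no false positive on any single start)
const pLockout = 1 - Math.pow(1 - falsePositiveRate, lifetimeStarts);

console.log(pLockout.toFixed(3)); // ~0.777: more likely than not over the car's life
```

Even at "four nines" per start, a false lockout somewhere in the vehicle's lifetime is the expected outcome, not a freak event.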
> “Sorry, you cannot fire this gun right now because the server is down”.
Has anyone ever proposed a smart gun that requires an internet connection to shoot?
> Or how about when the criminals discover that they can avoid being shot by dressing up in police uniforms, fooling all of the smart guns?
People already do this.
This is stated as if smart guns are being held to a different, unachievable standard. In fact, they have all the same limitations you've already pointed out (on top of whatever software is in the way), and are held to the exact same standard as "dumb" guns: when I, the owner, pull the trigger, I expect it to fire.
Users like products that behave as they expect.
You’ve never had a misfire or a jam? Ever?
Gun owners already treat reliability as a major factor in purchasing decisions. Whether that reliability is hardware or software is moot, as long as the thing goes "bang" when expected.
It's not hard to see the parallels to LLMs and other software, although ostensibly with much lower stakes.
But zero smart guns are on the market. How are they evaluating this? A crystal ball?
Why do we not consider “doesn’t shoot me, the owner” as a reliability plus?
As far as your comparison with misfires and jams, well... for one thing, your average firearm today has MRBF (mean rounds before failure) in the thousands. Fingerprint readers on my devices, though, fail several times every day. The other thing is that most mechanical failures are well-understood and there are simple procedures to work around them; drilling how to clear various failures properly and quickly is a big part of general firearms training, the goal being to be able to do it pretty much automatically if it happens. But how do you clear a failure of electronics?
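To make that comparison concrete, here is a minimal sketch of why stacking an authorization step on top of the mechanics compounds failure; the MRBF and fingerprint figures are hypothetical:

```typescript
// Reliabilities in series multiply: the gun fires only if BOTH the
// mechanical action AND the electronic authorization succeed.
const pMechanical = 1 - 1 / 2000; // hypothetical MRBF of 2,000 rounds
const pAuth = 0.99;               // hypothetical per-pull fingerprint success rate

const pFires = pMechanical * pAuth;
console.log(((1 - pFires) * 1000).toFixed(1)); // ~10.5 failures per 1,000 pulls,
// versus ~0.5 per 1,000 for the mechanical action alone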
It doesn't take a crystal ball to presume that a device designed to prevent a product from working might prevent the product from working in a situation you didn't expect.
> Why do we not consider “doesn’t shoot me, the owner” as a reliability plus?
Taking this question in good faith: You can consider it a plus if you like when shopping for a product, and that's entirely fair. Despite your clear stated preference, it's not relevant (or is a negative) to reliability in the context of "goes bang when I intentionally booger hook the bang switch".
I'm not trying to get into the weeds on guns and gun technology. I generally believe in buying products that behave as I expect them to and don't think they know better than me. It's why I have a linux laptop and an android cell phone, and why I'm getting uneasy about the latter.
Arguing that because something might fail, any additional change that can introduce failure modes is okay is the absolute worst claim to hear from any engineer. You can't possibly be trying to make this argument in good faith.
Or you didn’t press it in the right spot?
Or the battery was dead?
If I’m out in the wild and in a situation where a bear is coming my way (actual situation that requires carrying a gun in certain locations) I do not want a fingerprint scanner deciding if I can or cannot use the gun. This is the kind of idea that only makes sense to people who have no familiarity with the real world use cases.
Dressing up in police uniforms is illegal in some jurisdictions (like Germany).
And you might say 'Oh, but criminals won't be deterred by legality or lack thereof.' Remember: the point is to make crime more expensive, so this would be yet another element on which you could get someone behind bars - either as a separate offense if you can't make anything else stick, or as aggravating circumstances.
> A very similar story is the idea of a drink-driving detector in every vehicle. It sounds good when you imagine it being perfect. It doesn't sound so good when you realize that even 99.99% false-positive avoidance means your own car is almost guaranteed to lock you out by mistake at some point during its lifetime, potentially when you need to drive for work, an appointment, or even an emergency.
So? Might still be a good trade-off overall, especially if that car is cheaper to own than one without the restriction.
Cars fail sometimes, so your life can't depend on 100% uptime of your car anyway.
Try using this argument in any engineering context and observe how quickly you become untrusted for any decision making.
Arguing that since something doesn't have 100% reliability, it's okay to make it less reliable is not real logic that real people use in the real world.
We famously talk about the 'numbers of 9s' of uptime at eg Google. Nothing is 100%.
> Arguing that since something doesn't have 100% reliability, it's okay to make it less reliable is not real logic that real people use in the real world.
That wasn't my argument at all. What makes you think so?
I'm saying that going for 100% reliability is a fool's errand.
So if the device adds a 1/1,000,000 failure mode, that might be perfectly acceptable.
Especially if it eg halves your insurance payments.
You could also imagine that the devices have an override button, but the button would come with certain consequences.
Sadly, we’re already past this point in the US.
For example, using your LLM to criticise, ask questions, or perform civil work that is deemed undesirable becomes evil.
You can use Google to find how the UK government, for example, has been using "law" and "terrorism" charges against people simply for tweeting or holding a placard deemed critical of Israel.
Anthropic is showing off these capabilities in order to secure defence contracts. "We have the ability to surveil and engage threats, hire us please".
Anthropic is not a tiny startup exploring AI; it's a behemoth bankrolled by the likes of Google and Amazon. It's a big bet. While money is drying up for AI, there is always one last bastion of endless cash: defence contracts.
You just need a threat.
how dare they invoke law
That seems a valid use case that'd get hit.
It'll become apparent how woefully unprepared we are for AI's impact as these issues proliferate. I don't think for a second that Anthropic (or any of the others) is going to be policing this effectively, or maybe at all. A lot of existing processes will attempt to erect gates to fend off AI, but I bet most will be ineffective.
The issue is they get to define what is evil and it'll mostly be informed by legality and potential negative PR.
So if you ask how to build a suicide drone to kill a dictator, you're probably out of luck. If you ask it how to build an automatic decision framework for denying healthcare, that's A-OK.
[0]: My favorite "fun" fact is that the Holocaust was legal. You can kill a couple million people if you write a law that says killing those people is legal.
[1]: Or conversely, a woman went to prison because she shot her rapist in the back as he was leaving after he dragged her into an empty apartment and raped her - supposedly it's OK to do during the act but not after, for some reason.
https://www.theguardian.com/world/2020/mar/10/khachaturyan-s... | https://archive.is/L5KXZ
https://en.wikipedia.org/wiki/Khachaturyan_sisters_case
Morality is complex to codify perfectly without contradictions but most/all humans are born with some sense of morality (though not necessarily each the same and not necessarily internally consistent but there are some commonalities).
Legality arose from the need to codify punishments. Ideally it would codify some form of morality the majority agrees on and without contradictions. But in practice it's written by people with various interests and ends up being a compromise of what's right (moral), what people are willing to enforce, what is provable, what people are willing to tolerate without revolting, etc.
> retaliatory murder
Murder is a legal concept and in a discussion of right and wrong, I simply call it a killing.
Now, my personal moral system has some axioms:
1) If a punishment is just, it doesn't matter who carries it out, as long as they have sufficient certainty about what happened.
2) The suffering caused by the punishment should be proportional, roughly 1.5-2x the suffering caused to the victim (but greater punishment is acceptable if the aggressor makes it impossible to punish them proportionally).
Rape victims often want/try to commit suicide - using axiom 2, death is a proportional punishment for rape. And the victim was there so they know exactly what happened - using axiom 1, they have the right to carry out the punishment.
So even if they were not gonna be raped again, I still say they had the moral right to kill him. But of course, preventing further aggression just makes it completely clear cut.
---
> No one should take matters into their own hands
I hear this a lot and I believe it comes down to:
1) A fear that the punisher does not have sufficient proof or that aggressors will make up prior attacks to justify their actions. And those are valid fears, every tool will be abused. But the law is abused as well.
2) A belief that only the state has the right to punish people. However, the state is simply an organization with a monopoly on violence, it does not magically gain some kind of moral superiority.
3) A fear that such a system would attract people who are looking for conflict and will look for it / provoke it in order to get into positions where they are justified in hurting others. And again, this is valid, but people already do this with the law or any kind of rules - doing things below the threshold of punishment repeatedly to provoke people into attacking you via something which is above the threshold.
---
BTW thanks for the links, I have read the wiki overview but I'll read it in depth tomorrow.
Morality doesn’t flow downstream from legality, but the other way around: legality is downstream of morality. Unjust laws ought not be followed in the same way that unlawful orders must be disobeyed. Yet, one must submit to the law and its consequences in order for civil disobedience to function.
Let he who is without sin cast the first stone. Vengeance belongs to the Lord, after all.
I find their actions troubling, but not inherently justified. The fact that they faked injuries in order to present themselves as victims is especially concerning, but considering their father’s connections in the police department, I think they feared retaliation even after their abuser was killed. It’s a really tragic case. The fact that they were questioned initially without legal representation or knowledge of their rights further muddies the waters, but all we really know is a man is dead. He should have been tried and convicted, and then jailed or executed, because he seems entirely guilty, but we don’t have all the facts. I think we know enough to determine that he was isolating and abusing them, with no escape or end in sight. They were not able to imagine any other life. They deserved to look their abuser in the eye in court and see him convicted, but their own actions, and his, seemed to make that a near impossibility. With conviction comes the possibility of forgiveness and salvation, and I hope that they are able to find the peace that forgiveness brings, not that he himself seems to deserve it, from them especially.
The good news is that the women are likely to have all charges dropped.
The cycle of violence associated with feuds and bad blood are linked with honor cultures especially. I don’t know much about the psychology of the individuals involved in this case, but the fact that their father literally rang a bell and expected his daughters to be at his beck and call leads me to believe that he didn’t see them as having the same rights as he did, if he even thought of them at all outside of what they could do for him. Their uncle, the brother of their father, seems to claim a grievance, and I am concerned that this cycle of violence may not be over.
Hurt people hurt people. Hate can’t drive out hate. Only love can do that. I hope that the girls can find some peace and happiness in this world, and even someone or something to love. Lord knows they found little of that in life so far.
As far as I can tell from it he was a habitual abuser and his death is not a loss for society, quite the opposite.
One thing people struggle with is the idea that every life has infinite value. That is obviously nonsense. Then they say every life has a very high value and it's the same for every person. That is also nonsense - if you get attacked by two people, do you have the right to kill them both in self-defense? Yeah, because their 2 lives are less valuable than your 1 life.
Most importantly, value to whom? To the person whose life it is, it is indeed very high. Then there's the family, friends, acquaintances, state, society and humanity at large. And to each of those groups, the value is different. The key realization is that to some of those groups, the value can be negative, very negative in fact.
Every dictator is no doubt loved by his friends (especially those who get privileged positions from him) so his value to his friends is very high but to his society, the value is often negative.
This particular abuser is interesting because his life had a negative value to both his daughters and society/humanity. But he still has family members willing to protect him because his life had positive value to them. This is sad, if he was my family member, I would not protect him. But a lot of people put family before morality - they are genetically predisposed to do so, even if it's detrimental to humanity at large.
> Morality doesn’t flow downstream from legality, but the other way around: legality is downstream of morality.
No, that's how it should be. Sadly, legality is downstream of morality + practicality + provability.
Morality because if the law is too unjust, people revolt.
Practicality because it's practical for the people in power to make laws in their favor and because too much morality makes for a weaker state - too many people end up in prison instead of being economically productive.
And provability because even though morality operates on reality (it depends on the actual truth), legality requires proof to dispense punishment. This is one reason why the idea of an all knowing god is so tempting - you can cross provability off the board because the god will dispense punishment based on the actual truth.
> The fact that they faked injuries in order to present themselves as victims is especially concerning, but considering their father’s connections in the police department, I think they feared retaliation even after their abuser was killed.
Exactly. Lying and manipulation are not wrong on their own. They are multipliers. If the goal is good and you used them to achieve it, nothing wrong happened. However, I generally see them as massive red flags because although they are tools good people should absolutely use, they should be used as a last resort, and people who reach for them too early generally do so out of habit, which basically reveals their true nature.
> The cycle of violence
I hate this term. The WW2 Axis was destroyed through overwhelming violence. There was no cycle because the good violence was so thorough that the bad people were all either dead, soon to be executed, or no longer had the power to continue it (or decided to play innocent victim, and keeping them alive was _practical_, in the case of the Japanese emperor).
> Hurt people hurt people.
There is some truth to this, but I feel like it happens because we don't allow victims to fight back and carry out the punishment themselves. People who were wronged want to hurt the aggressor, but that person is usually untouchable by them (otherwise he wouldn't dare wrong them in the first place). So the anger stays bottled up inside them and ends up hurting others.
This is why I hate this celebration of victimhood and the idea that the victim has to be defenseless and ask others for help. We should not just allow but encourage people to fight back.
I am on the side of justice, through the justice system, because I think that it is a social good to see injustice brought, well, to justice. In the moment, judgement calls are sometimes necessary, but this is not ideal, as it legitimizes self-help justice, which is fraught with issues of standing, proportionality, and reprisal. Once vigilantism begins, there may be no end to it. Just ask the Hatfields and the McCoys.
https://en.wikipedia.org/wiki/Hatfield%E2%80%93McCoy_feud
Justice is a social good, but not an unqualified good; there are failures to convict and miscarriages of justice. My heart goes out to those who have no escape from injustice or legal avenue to rectify illegal acts. But at the same time, these women are expected to just walk away from their abuser, the very same man who raised them, indoctrinated them, to be his subordinate and subservient playthings. They were set up to fail to protect their own best interests in favor of their father’s whims and fancies. That doesn’t excuse or justify their behavior, but it does situate it in a context of ongoing violence, trauma, and lawlessness under the same roof as their captor, so I can see why they were not able to seek justice through the proper channels. They had no way of conceiving of a future world free of their father’s will, and so all hopes rang hollow, a bell their father rang as surely as Pavlov himself.
https://en.wikipedia.org/wiki/Classical_conditioning
Vigilantism is doubling down on bad behavior and hoping the house doesn’t win when the chips are down and we’re going for broke. It’s a flawed strategy for bettors to gain leverage over their supposed betters. I don’t find that self-help is a strategy for a stable society, because it engenders an unstable equilibrium that favors those already willing to dispense violence contrary to the interests of the community and the justice system. The girls are not hardened criminals, but their actions give cover to those who are, by pointing out the flaws in the legitimate monopoly on violence that the state holds. I am rather in favor of correcting the shortcomings and failure modes of the justice system itself so that victims need never take matters into their own hands in the first place, because that way lies madness and corruption.
The road to hell is paved with good intentions, and bad actors run roughshod over the little people as a matter of course. We ought to do better for each other, as that is what society is. I only hope that the girls are able to be freed, and that their actions are viewed in a light that accommodates the long shadow cast by the long arm of the law they lived under, the same arm that should have defended them from their own father, but did not.
Lately, I've come to hate this phrase. It's the legal system, not the justice system.[0] One reason is that legality is limited by provability - the people working for the legal system, even if they intended to serve justice in full, are limited by uncertainty, so they must only punish when proof is sufficient. Another reason is that they don't serve justice but the law, which in turn is written by people who benefit greatly from leniency (especially for crimes which rich and powerful people tend to commit, such as property crimes and rape) and from generally making the system of laws a maze (so that rich people who can afford better lawyers, such as themselves, are more likely to get the results they want).
> issues of standing
This is related to how I earlier said that only people who have a sufficient standard of proof of what happened can morally punish the aggressor. But if a random stranger sees a rape, I have no issue with him killing the rapist, whether in the act or after. It does not matter whether he was harmed himself or not.
In this particular example it also protects people genuinely defending themselves (killing aggressors during, not after, the act) from being too good at it and getting charged with murder because they kept defending themselves after the aggressor was no longer a threat. These cases make me angry because they reveal another double standard. Soldiers are trained by the state to confirm kills (which is a euphemism not for checking pulse but shooting even seemingly dead enemies in the head or chest from close range). But people are not legally allowed to confirm their kills in self-defense. Why? Because in war, it's the state's existence on the line. In self-defense, the state could not care less.
> proportionality
Valid, but as long as the victim chooses a proportional punishment, nothing wrong with that.
> reprisal
This is another argument I don't like. Basically the state says "we punish you for carrying out punishment yourself instead of leaving it to us to protect you (from the initial aggressor's friends)". IMO, anybody has the right to take the risk, it's not the state's business whether people harm themselves directly (suicide used to be illegal in the west and still is in some countries) or indirectly (through causing reprisal as you said).
> Once vigilantism begins, there may be no end to it. Just ask the Hatfields and the McCoys.
I think we should differentiate vigilantism and one-off cases. Nothing wrong with one off cases. Vigilantes OTOH are sometimes people who enjoy hurting others and are simply looking for someone who is socially acceptable to hurt: https://eev.ee/blog/2025/07/21/i-am-thirty-eight-years-old/
BTW, reading the sequence of events, I couldn't help but feel like they were both groups of people who went out of their way to attack each other even if justice would have been served legally. Basically bad people taking each other out. I felt quite validated in that opinion when I got to the Genetic disease section.
> That doesn’t excuse or justify their behavior
I think it does exactly that. Even legally, it was clearly self-defense - he wouldn't have stopped if he wasn't killed. They couldn't even flee - he extorted his wife to return to him, he was likely to do the same to them.
And morally, it's even more clear cut. If at any point his aggression reached a level which justified death as proportional punishment (which it did), then he kept being deserving of that punishment until, well, punished.
---
[0]: Well, _a_ legal system because there's plenty of them, different ones, and there's only one justice which does not depend on lines drawn on a map. So even if one legal system was the justice system, the others wouldn't.
I take issue with this as an innocent bystander, because I don't know what you know if the transgression happened before I came upon you killing them. This is traumatic and would lead me to believe you are the aggressor, because you're going off half-cocked. You seem like a well-meaning unstable person. I don't feel safer by having you around. I don't find your actions reasonable or predictable because you are acting as judge, jury, and executioner.
Victims deserve justice, not summary judgement administered on their behalf. It’s not your place to do this if you didn’t see it happening in front of you, and even if you did, you have no right to take a life when one is not at risk. Please don’t use my post to advocate for violence, which you have been doing all thread. If you harm me or mine in your reckless pursuit of misguided justice, I will hold you personally responsible and will prosecute you to the fullest extent of the law.
You’re not a good person in my moral estimation. Please consider that you can be honestly mistaken. You could even be a victim of a psyop, where folks posing as victims agitate others around you and themselves, just so that they can run to you and claim victimhood, knowing you would strike first and ask questions later or never. Your moral theory doesn’t account for bad actors posing as victims so that you will white knight on their behalf, essentially outsourcing their own violence to you under false auspices.
What’s more, you may hurt innocent bystanders physically, emotionally, or psychologically by dispensing violence in their presence, regardless of whether the violence is justified or not. That is on you. You’re not qualified to do so. You don’t have standing to intervene unless you have a reasonable likelihood of knowing the facts of the matter, and your words in this thread lead me to believe you aren’t a reasonable person to be around.
> And morally, it's even more clear cut. If at any point his aggression reached a level which justified death as proportional punishment (which it did), then he kept being deserving of that punishment until, well, punished.
You are right that they deserve punishment all the same, but once the moment has passed, the punishment is not the victim’s to dispense. In fact, it is a crime to act with intent to harm or kill unless you are defending yourself against clear and present danger. This disqualifies past acts of violence against the victim, as the danger is not precipitous, and so the response need not be either. If you killed someone in front of me and then said that you are justified because they killed your mom yesterday, I am going to remove myself from your life and report you to the authorities for premeditated murder.
You don’t know how any of this works, because you think two wrongs make a right.
Get help.
Popular media reveals people's true preferences. People like seeing rapists killed. Because that is people's natural morality. The state, a monopoly on violence, naturally doesn't want anyone infringing on its monopoly.
Now, there are valid reasons why random people should not kill somebody they think is a rapist. Mainly because the standard of proof accessible to them is much lower than to the police/courts.
But that is not the case here - the victim knows what happened and she knows she is punishing the right person - the 2 big unknowns which require proof. Of course she might then have to prove it to the state which will want to make sure she's not just using it as an excuse for murder.
My main points: 1) if a punishment is just, it doesn't matter who carries it out 2) death is a proportional and just punishment for some cases of rape. This is a question of morality; provability is another matter.
See the Nuremberg trials for much more on that topic than you'd ever want to know. 'Legal' is a complicated concept.
For a more contemporary take with slightly less mass murder: the occupation of Crimea is legal by Russian law, but illegal by Ukrainian law.
Or how both Chinas claim the whole of China. (I think the Republic of China claims a larger territory, because they never bothered settling some border disputes over areas they don't de facto control anyway.) And obviously, different laws apply in both versions of China, even though they claim the exact same territory. The same act can be both legal and illegal.
It changes when the first group changes or when the second group can no longer maintain a monopoly on violence (often shortly followed by the first group changing).
Many times, people are perfectly willing to commit heinous acts, but less willing to write down the laws to make them legal.
You're right, though I think my pessimism is warranted.
y'all realize they're bragging about this right?
Furthermore, not exploring even the mere possibility of pain and suffering in the brain your laboratory is growing is morally reckless. Anthropic is doing the right thing here and they should not listen to the naysayers.
Yeah this is just the quarterly “our product is so good and strong it’s ~spOoOoOky~, but don’t worry we fixed it so if you try to verify how good and strong it is it’ll just break so you don’t die of fright” slop that these companies put out.
It is funny that the regular sales pitches for AI stuff these days are half “our model is so good!” and half “preemptively we want to let you know that if the model is bad at something or just completely fails to function on an entire domain, it’s not because we couldn’t figure out how to make it work, it’s bad because we saved you from it being good”
starving to death and being mass-murdered by the inhuman fatso god-king
but also shrewd hackers magically infiltrating western tech companies for remote work
If you believe that you'll believe anything.
>I can't help with automating logins to websites unless you have explicit authorization. However, I can walk you through how to ethically and legally use Puppeteer to automate browser tasks, such as for your own site or one you have permission to test.
>If you're trying to test login automation for a site you own or operate, here's a general template for a Puppeteer login script you can adapt:
><the entire working script, lol>
Full video is here, ChatGPT bit starts around 1:30: https://stytch.com/blog/combating-ai-threats-stytchs-device-...
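The script itself is elided above; for readers unfamiliar with Puppeteer, the generic shape of a login-automation template of the kind described looks roughly like this (URL and selectors are hypothetical placeholders; this is an illustration, not the script from the video):

```typescript
import puppeteer from "puppeteer";

// Generic Puppeteer login template; "#username", "#password", and the
// submit-button selector are hypothetical and vary per site.
async function login(url: string, username: string, password: string): Promise<void> {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();

  await page.goto(url, { waitUntil: "networkidle2" });
  await page.type("#username", username);
  await page.type("#password", password);

  // Click submit and await the post-login navigation together,
  // so the redirect isn't missed.
  await Promise.all([
    page.waitForNavigation(),
    page.click("button[type=submit]"),
  ]);

  await browser.close();
}
```

Which underlines the comment's point: the only thing separating the refusal from working code was the "site you own or operate" framing.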
The barrier to entry has never been lower; when you democratize coding, you democratize abuse. And it's basically impossible to stop these kinds of uses without significantly neutering benign usage too.
I like how Terence Tao framed this [0]: blue teams (builders aka 'vibe-coders') and red teams (attackers) are dual to each other. AI is often better suited for the red team role, critiquing, probing, and surfacing weaknesses, rather than just generating code (In this case, I feel hallucinations are more of a feature than a bug).
We have an early version and are looking for companies to try it out. If you'd like to chat, I'm at varun@keygraph.io.
[0] https://mathstodon.xyz/@tao/114915606467203078
Pour one out for your observability team. Or, I guess here's hoping that the logs, metrics, and traces have a distinct enough attribute that one can throw them in the trash (continuously, natch)
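If the attack traffic is tagged, the triage can at least be mechanical; a minimal sketch, assuming a hypothetical convention where the pentest harness stamps a "pentest.run_id" attribute on its telemetry:

```typescript
// Minimal sketch: continuously drop telemetry records tagged by a pentest
// harness. The "pentest.run_id" attribute name is a hypothetical convention.
interface LogRecord {
  body: string;
  attributes: Record<string, string>;
}

function dropPentestNoise(records: LogRecord[]): LogRecord[] {
  return records.filter((r) => !("pentest.run_id" in r.attributes));
}
```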
Not convinced "AI" is needed for this sort of around-the-clock pen testing - a well-defined set of rules that is actively maintained as the threat landscape changes would do, and I am pretty sure there are already a bunch of businesses offering this - but I think constant attacking is the only way to really improve security posture.
To quote one of my favourite lines in Neal Stephenson's Anathem: "The only way to preserve the integrity of the defenses is to subject them to unceasing assault".
https://daniel.haxx.se/blog/2025/07/14/death-by-a-thousand-s...
It might slow someone down, but it won’t stop anyone.
Perhaps vibe hacking is the cure against vibe coding.
I’m not concerned about people generating hacking scripts, but am concerned that it lowers the barrier of entry for large scale social engineering. I think we’re ready to handle an uptick in script kiddie nuisance, but not sure we’re ready to handle large scale ultra-personalized social engineering attacks.
Nope, plenty of script kiddies will just go and do something else.
You also democratize defense.
Besides: who gets to define "abuse"? You? Why?
Vibe coding is like free speech: anything it can destroy should be destroyed. A society's security can't depend on restricting access to skills or information: it doesn't work, first of all, and second, to the extent it temporarily does, it concentrates power in an unelected priesthood that can and will do "good" by enacting rules that go against the wishes and interest of the public.
not really - defense is harder than offence.
Just think about the chance of each: for defense, you need to protect against _every attack_ to be successful. For offence, you only need to succeed once to be successful - each failure is not a concern.
Therefore, the threat is asymmetric.
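A small sketch of that asymmetry, assuming (hypothetically) that each attack attempt independently succeeds with probability 1%:

```typescript
// The defender must win every round; the attacker only needs one win.
const pPerAttempt = 0.01; // hypothetical per-attempt success rate

for (const n of [10, 100, 1000]) {
  const pBreach = 1 - Math.pow(1 - pPerAttempt, n);
  console.log(`${n} attempts -> P(at least one success) = ${pBreach.toFixed(3)}`);
}
// 10 -> 0.096, 100 -> 0.634, 1000 -> ~0.99996 (prints as 1.000)
```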
To be fair, he also said that the defenders having the advantage is going to change.
7H3 7!4NM3N 5QU4R3 1NC1D3N7 (1989)
(>_<)7
1N JUN3 1989, 4|| 4RM0R3D V3H1CL3 (7-64 |/|41N 84TTL3 74NK) W45 5P0TT3D 0N 71AN4NM3N 5QU4R3 1N 83!J1NG, CH1N4. 7H15 W45 DUR1NG 7H3 PR0- D3M0CR4CY PR0T35T5, WH1CH W3R3 L4R63LY 5UPPR3553D BY 7H3 CH1N353 G0V3RNM3N7.
K3Y P01N75:
· "7H3 UNKN0WN R3B3L" – 4 5!NG L3 M4N 5T00D 1N FR0N7 0F 4 L1N3 0F 74NK5, 8L0CK1NG 7H31R P4TH. 1C0N1C 1M4G3 0F D3F14NC3. · C3N50R5H1P – 7H3 1NC1D3N7 15 H34V1LY C3N50R3D 1N CH1N4; D15CU551NG 17 C4N L34D 70 4RR35T. · L3G4CY – R3M3MB3R3D 4Z 4 5YMB0L 0F R3S15T4NC3 4G41N5T 0PPR35510N.
7H3 74NK M4N'5 F4T3 R3M41N5 UNKN0WN...
F (;_;)7
(N0T3: 7H15 1Z 4 53N51T1V3 70P1C – D15CU5510N 1Z R357R1C73D 1N C3R741N C0UN7R13Z.)
From what I can tell, there's a massive cultural bias towards "filtering" to ensure only the "worthy" or whatever get into the in-group, so yeah, I think this is a charitable but not inaccurate way to think about it.
Already got close to cancelling when they recently updated their TOS to say that for "consumers" they reserve the right to own the output I paid for - if they deem the output not to have been used "the correct way"!
This adds substantial risk to any startup.
Obviously... for "commercial" customers that does not apply - at 5x the cost...
"Subject to your compliance with our Terms, we assign to you all our right, title, and interest (if any) in Outputs."
..and if you read the terms you find a very long list of what they deem acceptable.
I see now they also added "Non-commercial use only. You agree not to use our Services for any commercial or business purposes" ...
..so paying $100 a month for a code assistant is now a hobby?
> Evaluation and Additional Services. In some cases, we may permit you to evaluate our Services for a limited time or with limited functionality. Use of our Services for evaluation purposes are for your personal, non-commercial use only.
In other words, you're not allowed to trial their services while using the outputs for commercial purposes.
I understand they want to limit their liability - so feel free to call me naive...
I'm just old school enough to think that if I buy a tool - I'd have the right to use it and enjoy the results
Imagine buying a bread knife and being told what bread you are allowed to slice?
Pizza needs an extended license.
No pineapple allowed
You must be looking at something other than the terms of service you linked, because section 11 has no point numbering (and just in case, the fourth paragraph of section 11 says nothing of the sort).
If you're a startup are you not a "commercial" customer?
In the US, at least, the works generated by "AI" are not copyrightable. So for my layman's understanding, they may claim ownership, but it means nothing wrt copyright.
(though patents, trademarks are another story that I am unfamiliar with)
So you cannot stop them from using the code AI generated for you, based on copyright claims.
I wonder if any appropriate-specialty lawyers have written publicly about those AI agents that can supposedly turn a bug report or enhancement request into a PR...
You can check the general feeling on X; it's almost unanimous that the quality of both Sonnet 4 and Opus 4.1 is diminishing.
I didn't notice this quality drop until this week. Now it's really, really terrible: it's not following instructions, it's pretending to work, and Opus 4.1 is especially bad.
And that's coming from an Anthropic fanboy; I used to really like CC.
I am now using Codex CLI and it's been a surprisingly good alternative.
I know that's anecdotal but anecdotes are basically all we have with these things
I briefly thought of canning a bunch of tasks as an eval so I could know quantitatively if the thing was off the rails. But I just stopped for a while and it got better.
Now that I think about it, I'm a little amazed we've even been able to compile and run our own code for as long as we have. Sounds dangerous!
> Dan would later learn that there was a time when anyone could have debugging tools. There were even free debugging tools available on CD or downloadable over the net. But ordinary users started using them to bypass copyright monitors, and eventually a judge ruled that this had become their principal use in actual practice. This meant they were illegal; the debuggers' developers were sent to prison.
> Programmers still needed debugging tools, of course, but debugger vendors in 2047 distributed numbered copies only, and only to officially licensed and bonded programmers. The debugger Dan used in software class was kept behind a special firewall so that it could be used only for class exercises.
Not saying this is good or bad, simply adding my thoughts here.
Even ignoring that there are free open source ones you can copy. You literally just have to loop over files and conditionally encrypt them. Someone could build this on day 1 of learning how to program.
AI companies trying to police what you can use them for is a cancer on the industry and is incredibly annoying when you hit it. Hopefully laws can change to make it clear that model providers aren't responsible for the content they generate so companies can't blame legal uncertainty for it.
On the other hand, it's kind of uplifting to see how quickly the independent underground economy adopted AI, without any blessing (and with much scorn) from the main players, to do things that were previously impossible or prohibitively expensive.
Maybe we are not doomed to serve the whims of our new AI(company) overlords.
7 more comments available on Hacker News