Sora Might Have a 'Pervert' Problem on Its Hands
Mood
controversial
Sentiment
negative
Category
other
Key topics
Sora, OpenAI's text-to-video model, may be vulnerable to generating fetish content, raising concerns about its potential misuse and the need for stricter content moderation; the discussion highlights the challenges of balancing creative freedom with responsible AI development.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 37m after posting
Peak period: 86 comments in Day 1
Avg / period: 22.5 comments
Based on 90 loaded comments
Key moments
- 01 Story posted: Oct 25, 2025 at 7:32 PM EDT (about 1 month ago)
- 02 First comment: Oct 25, 2025 at 8:09 PM EDT (37m after posting)
- 03 Peak activity: 86 comments in Day 1 (hottest window of the conversation)
- 04 Latest activity: Oct 30, 2025 at 7:02 PM EDT (27 days ago)
>The internet is for porn
I actually think this is what's going to happen with AI once the easy money dries up. They'll quickly race to the bottom selling porn generators. AI slop porn already seems like the majority usage after homework generation.
This is all independent of what is and isn't a legitimate business model; it's a social dynamic. It's also a pretty familiar one: it shows up everywhere from nightclub bouncer policies to the dynamics of early-2000s IRC rooms.
… and this is probably where the article should’ve ended. Or in fact, where the author should’ve realized there didn’t need to be an article at all.
People are weird and gross. They do weird things that we would often prefer they didn't. Sora provides a tool to avoid that weirdness. Use it.
If we know anything about software in general (and AI specifically), it's that getting around roadblocks is often fairly simple.
Probably not. AI slop doesn't really "go viral" except when it's super ridiculous, like shrimp Jesus. Most AI slop porn is likely seen by audiences in the tens of people. If someone generates porn with my face on it and I never even know, how does this harm me? Why should I care?
I am not, which is why I'm leaving it to the replies to convince me otherwise.
>what if it's someone you know?
Why not start with: what if it's me? If I don't know about it, then implicitly I don't care. But if I do know and it bothers me, AI companies have already been given free rein over copyright law, so I don't really have any recourse here. Can I sue Sam Altman? No, because if that were possible, someone like Studio Ghibli would have done it and made a billion dollars by now.
>what if they want to then sell the generated content?
If it's generated by AI, then the courts have already held that it can't be copyrighted, which makes it effectively unsellable: the first person you sell it to can redistribute it for free, and there's no way you can stop them.
>what if it was your political enemies?
I'm not political, which upsets a lot of people. Just tell people "I'm not voting, it doesn't matter" and watch them lose their minds.
>what if it was your boss?
I'm not really sure why I should be fighting my boss's battles.
>what if it was someone who was stalking you or made you feel unsafe?
People can stalk me without AI. I'm not sure how AI changes this.
You would presumably care if one of those "tens of people" was a family member or peer.
Maybe not caring is enlightened of you, but it shouldn't stretch your imagination to consider why others would.
This is an interesting angle. Maybe someone would try to blackmail me with AI porn of me cheating on my spouse? This could also be a great use case for divorce attorneys. But it would be easy enough to do that without AI. There's the old joke:
>Once a year I send out 1000 Valentine Day cards signed "Guess Who? XOXO".
>It's a cheap and easy marketing plan for my divorce attorney law group.
The fact that the AI industry is apparently littered with incredibly immature guys who perceive themselves to be Randian superheroes does not reassure me that this tool is going to be better.
(I really don't care that this suggestion typically provokes what amounts to libertarian screeching)
It's a complicated issue that I've considered many times before. If we deem deepfake pornography unethical because it creates images/videos that look like real people, what does this mean for "lookalike" pornography, featuring actors done up (or who just naturally look) like famous people?
For example: Let's say Person A has a friend, Person B, who looks like Person C. Person B consents to Person A using artificial intelligence to generate pornographic images of them, which in turn look like pornographic images of Person C. Should Person A need consent from Person C?
Surely you'd support the use of artificial intelligence in porn, then, since no real people are engaging in sexual exploitation?
However, using a wannabe AI social media platform to engage with this stuff (and said platform encouraging you to do so) crosses several uncomfortable lines for me.
https://www.businessinsider.com/threads-meta-engagement-rage...
A journalist has done good work if they report on their ability to smuggle a replica bomb onto a plane. It's a bit hazier if they smuggle a real bomb on board because it puts people at risk. They shouldn't blow up a plane to show how easy it is.
I didn't offer any judgement in my comment as to the ethics of this particular reporter, just noting that this is the style they work in.
The claim that she purported to represent Meta in public statements would, if true, count as unethical journalism. I don't know how accurate that claim is, so at this stage I remain undecided but wary.
Clearly the author has never visited 4chan... but I think seeing others make such content with your appearance should be taken as flattery.
More seriously, I hope this flood of easy video generation will cause people to more easily realise how they can be persuaded, and increase skepticism of evidence in general.
I think maybe you're an adult male.
Having actually talked to some female friends about this, I'm pretty sure that women in general don't take so well to the idea of tools that might be used to encourage the fantasies of the men that already have a dangerous interest in harming them sexually.
Whether the Valley thinks that's their problem to solve, I doubt. But making a joke out of it is pretty fucked up, dude.
ETA: even women who have done some modelling and are a bit more aware of the way those images are used are at least somewhat concerned about content that can make them act and speak like puppets. This is at least as much about consent as it is about content.
ETA2: I am rate-limited for being an argumentative sod in the past so I will finally edit this to note that 1) I am replying only to the sentence I quoted which has very troubling connotations, and 2) I really think a lot of people here seem not to have read Julian Dibbell's crucial 1993 article "A Rape In Cyberspace" and it really shows.
This assumption sounds like it was taken from some feminist manifesto.
Much like when that AI executive started talking about layers of hidden reality, or whatever it was, that some LLM had helped him "find": people were clear that, whatever his problems, he might not have blurted that out loud, or even developed those thoughts as far, were it not for the reassurance loop of the tool helping him go a bit more mad.
We understand what happened in his case, right? Perhaps he was keeping that under control and then wasn't, because it was all so plausible.
Now imagine it being video of some young woman realistically depicted doing things she has not consented to do, in the hands of a man who is obsessed and is just keeping that under control. An obsessive fan, for example.
I get your point, but I’ve never seen any research into whether this material makes people more or less likely to actually perpetrate crimes related to it.
A chat loop is a bit different from a static video/photo.
(As commented elsewhere, the author expressly opted-in, apparently with the intent of generating ragebait to write an article about.)
Post body or gtfo
(for those lacking context, this is a callback to a 4chan trope that is inextricable from OP's argument)
I can't figure out the tone here. Is the author suggesting we should stop people from creating fetish content of purely AI-generated characters? OpenAI might want to for business reasons, but surely there's nothing inherently wrong with using AI for fetish content. Should we also stop people from drawing fetish content with pencil and paper?
A lot of social media is a sex platform, and it got mixed up in this way because there’s no talking adults out of being lewd in public.
Social media is still immature. We'll develop norms around what is appropriate.
> Is the author suggesting we should stop people from creating fetish content of purely AI-generated characters?
Presumably not, but she's farming outrage rather than suggesting any fix. In the setup suggested above, people could then generate fetish content from the much smaller set of users who consented to having fetish content generated from them. But then of course those users might expect royalties or revenue-sharing, or at least identification/attribution/watermarking so the depicted user could drive traffic to their social media. OpenAI is skirting around not just segmented consent but any concept of revenue-sharing (i.e., OpenAI wants to dip its toe into OnlyFans territory, but without any revenue-sharing or licensing deal with creators).
The only parts of the article that seem to be news are: a) OpenAI's blanket consent is very broad and doesn't warn users what might be done with cameos, nor does it segment consent into different types of content use (as even 25-year-old modeling sites do); b) a subset of users will bypass the guardrails; and c) OpenAI doesn't close the feedback loop by notifying the users in a) what the users in b) are doing, let alone allow revising or revoking consent.
But why is the conversation only about b) (the predictable bad behavior by users) rather than a) and c) (feasible solutions)?
Notopoulos correctly remarks: "part of an overall pattern of how OpenAI has approached the concept of copyright and intellectual property: asking forgiveness, not permission."
By the way, previous (non-sexual-content) incarnations of this sort of issue include 2019, when Clearview scraped 3 billion images non-consensually from people's social media and state DMVs [https://news.ycombinator.com/item?id=35421117].
Or the 2024 OpenAI Scarlett-Johansson-sounding-voice shenanigans. Or the existing proliferation of AI porn elsewhere.
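To make the segmented-consent and feedback-loop fixes from points a) and c) above concrete, here is a minimal sketch of what per-category cameo consent with revocation and notification could look like. This is purely illustrative: every name in it (ContentCategory, CameoConsent, mayGenerate, notifyDepictedUser) is an assumption for the sake of the sketch, not OpenAI's actual API.

```typescript
// Purely hypothetical sketch of segmented cameo consent; none of these
// types or functions reflect OpenAI's actual implementation.

type ContentCategory = "comedy" | "romance" | "political" | "fetish";

interface CameoConsent {
  userId: string;
  // Per-category opt-in instead of a single blanket "everyone" toggle.
  allowedCategories: Set<ContentCategory>;
  // Consent can be withdrawn; requests after this instant are refused.
  revokedAt?: Date;
}

interface GenerationRequest {
  requesterId: string;
  depictedUserId: string;
  category: ContentCategory;
}

function mayGenerate(consent: CameoConsent, req: GenerationRequest): boolean {
  if (consent.revokedAt && consent.revokedAt.getTime() <= Date.now()) {
    return false; // consent was revoked
  }
  return consent.allowedCategories.has(req.category);
}

// Closing the feedback loop (point c): tell the depicted user what was
// generated with their likeness, so they can revise or revoke consent.
function notifyDepictedUser(consent: CameoConsent, req: GenerationRequest): void {
  console.log(
    `notify ${consent.userId}: ${req.requesterId} generated "${req.category}" content from your cameo`
  );
}
```

Under a model like this, the author's blanket "open to everyone" choice would become an explicit set of allowed categories, and the weird pregnancy videos would either require a "fetish" opt-in or at least trigger a notification.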
OpenAI is still running basic experiments to see which product offerings are well received by users and/or work well and which are not. If some data provided by users (e.g., photos depicting the user) turns out to be so essential to the success of the AI-created content that OpenAI would likely lose an insane amount of money if those users left (I think this is rather unlikely, but not impossible), then OpenAI will think about some concept of revenue-sharing, but not before (why should they?).
It's taking work from OnlyFans, artists who draw fetish content on commission (usually by copying other people's art styles), fanfic writers (who copy the writing style, characters, and settings of other people), and other organic, free-range fetish content producers.
You have had no expectation of privacy in public spaces since forever; that is not the problem. Nobody could photograph you stabbing someone and upload it to social media unless you actually stabbed someone. That is what's different now: anyone can fabricate that photograph of you stabbing someone and post it.
That must be your argument.
And it must be on the social media side, because in X months some open model on GitHub is gonna make every watermark or cloud-based safety feature meaningless anyway.
> I've allowed anyone to make "cameos" using my face. (You don't have to do this: You can choose settings that make your likeness private, or open to just your friends — but I figured, why not? And left my likeness open to everyone, just like Sam Altman.)
So the author specifically allowed their face to be used for content and then is surprised people acted on it? This is silly imo.
People should just never even allow their face to be used for this.
This reminds me of the "Joan Is Awful" episode of Black Mirror. Exact same story.
So yes, this is silly, but people can still easily make deepfake porn of you from any available photo using other OSS tools.
The app allows you to control whether other people can generate photos of you. If the author doesn't want other people to make these photos, disable public video generation...
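For context, the visibility tiers quoted from the article (private / friends / everyone) amount to a simple permission gate. Here is a minimal sketch under assumed names (CameoVisibility, canUseCameo); this is an illustration, not OpenAI's actual implementation:

```typescript
// Hypothetical sketch of the cameo visibility tiers the article describes;
// not OpenAI's actual implementation.

type CameoVisibility = "private" | "friends" | "everyone";

function canUseCameo(
  visibility: CameoVisibility,
  requesterId: string,
  ownerId: string,
  ownerFriendIds: ReadonlySet<string>
): boolean {
  if (requesterId === ownerId) return true; // owners can always use their own likeness
  switch (visibility) {
    case "private":
      return false; // nobody else may generate content of this person
    case "friends":
      return ownerFriendIds.has(requesterId);
    case "everyone":
      return true; // the author's setting, hence strangers could respond
  }
}
```

The author explicitly chose "everyone", which is why the commenters above consider the outcome unsurprising.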
My mental test for deciding whether something should be illegal or unacceptable is asking whether anyone would see it the same way if religion had never existed.
During #metoo I remember reading an article where the author was uncomfortable with the “drug fueled sex parties in Silicon Valley.” They basically didn’t want consenting adults to do drugs or engage in group sex. The argument against fetish content with AI generated characters reminded me of the #metoo author’s discomfort with the drug/sex freedom of the Bay Area. The article about Sora sounds like the author is uncomfortable with people generating fetish content, regardless of the content featuring real people or not.
It’s sad that the liberals now include the prudes/conservatives.
I also don’t think one is a prude or a conservative for thinking there are consent and power issues around anything that commingles sex and the workplace.
Things can be unacceptable without being illegal. Things can even be unacceptable without needing to be banned or privately controlled.
My bar for what should be unacceptable is a lot lower than my bar for what should be illegal or privately banned.
Making weird pregnancy fetish videos of real people without their permission is definitely unacceptable. I have no issue with the idea that anyone doing that should be shamed.
Edit: the fact that the author is right to be uncomfortable with their face on generated fetish content doesn't make their stance on fetish content with generated characters less prudish. They can be right about one thing and prudish about the other.
> How do you stop people from making fetish content of purely AI-generated characters that aren't cameos of real people? Does OpenAI want to stop that?
I'd say a resounding no. Didn't Sam Altman announce a little while back that they're exploring allowing ChatGPT to be used for erotica / NSFW generation?