Sora 2 Makes Convincing Fake Crime Footage
Posted 3 months ago · Active 3 months ago
bsky.app · Tech · story
Key topics: AI-Generated Content · Deepfakes · Misinformation
Sora 2, OpenAI's video-generation model, can produce convincing fake crime footage, raising concerns about misinformation and manipulation; the discussion highlights the need for critical thinking and verification.
Snapshot generated from the HN discussion
Discussion Activity
Moderate engagement · First comment: 7m after posting · Peak period: 6 comments in 0-2h · Avg per period: 2.6
Based on 13 loaded comments
Key moments
- Story posted: Oct 1, 2025 at 11:36 AM EDT (3 months ago)
- First comment: Oct 1, 2025 at 11:43 AM EDT (7m after posting)
- Peak activity: 6 comments in 0-2h (hottest window of the conversation)
- Latest activity: Oct 2, 2025 at 11:56 AM EDT (3 months ago)
ID: 45439012 · Type: story · Last synced: 11/20/2025, 3:50:08 PM
You can try regulating, banning, and censoring models, adding silly invisible watermarks, and requiring Gen AI content to be labeled as such, and live with a complete false sense of security. You'd only be making it easier to deceive people.
No, we apply appropriate skepticism by considering context, history, motivations, and prior knowledge of both the source and the persons or entities involved. The uncomfortable reality that no news source was ever worthy of our full trust didn't begin with the rise of AI, or even with digital editing. So, to me, it's a net positive that many more people are now aware of it.
AI-generated media, like the slightly more labor-intensive manual digital manipulation that preceded it (e.g., Photoshop), is almost quaintly mild by comparison, because it at least leaves digital artifacts that can be fairly easily detected, disproven, or otherwise countered. The far more subtle but no less deceptive techniques, like reordering interview questions in editing or selectively excerpting answers, are essentially undetectable and have been widely used to skew reporting at mainstream national news outlets since at least the 1970s.
About 20 years ago I was professionally involved behind the scenes in the creation of mainstream news content at a national level. Seeing how the sausage was made was pretty shocking. Subtle systemic bias was constant and affected almost everything, in ways that would be hard for non-insiders to detect (like motivated editorial curation or pre-aligned source selection). Blatantly overt bias was slightly less common but hardly infrequent. Seeing it happen first-hand disabused me of the notion that there were ever "reliable sources of record" that could be trusted. While it's true the better outlets tended to be mostly correct and mostly complete on many topics, even the very best were still heavily shaped by internal and external partisan influences - and, of course, bias tended to be exerted on the things that mattered.
Like a lot of "journalists". It is called propaganda and is not new.