Is Sora the Beginning of the End for OpenAI?
Posted 3 months ago · Active 2 months ago
Source: calnewport.com · Tech story · High profile
Heated, mixed debate · 80/100
Key topics: Artificial Intelligence, OpenAI, Sora, Video Generation
The article discusses whether OpenAI's release of Sora, a video generation AI, marks the beginning of the end for the company, with commenters debating the implications and potential consequences of such technology.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion · First comment: 6m after posting
Peak period: 109 comments in 0-3h · Avg per period: 17.8
Comment distribution based on 160 loaded comments
Key moments
1. Story posted: Oct 21, 2025 at 12:01 PM EDT (3 months ago)
2. First comment: Oct 21, 2025 at 12:07 PM EDT (6m after posting)
3. Peak activity: 109 comments in 0-3h, the hottest window of the conversation
4. Latest activity: Oct 23, 2025 at 6:06 AM EDT (2 months ago)
ID: 45657428 · Type: story · Last synced: 11/20/2025, 6:51:52 PM
It wasn’t that long ago that Sam Altman was still comparing the release of GPT-5 to the testing of the first atomic bomb, and many commentators took Dario Amodei at his word when he proclaimed 50% of white collar jobs might soon be automated by LLM-based tools.
That's the thing: this has all been predicated on the notion that AGI is next. That's what the money is chasing, and why it's sucked in astronomical investments. It's cool, but that's not why Nvidia is a multi-trillion-dollar company. It has that value because it was promised to be the brainpower behind AGI.
What we got next: porn
1. Red states are way ahead on porn consumption, based on past annual reports by Aylo.
Porn (visual and written erotic expression) has been a normal part of the human experience for thousands of years, across different religions, cultures, and technological capabilities. We're humans.
There will always be a market for it, wherever there is a mismatch between desire for and access to sexual activity.
Generate your own porn is definitely a huge market. Sharing it with others, and then the follow-on concern of what's in that shared content, could lead to problems.
Attractive people in sexually fulfilling relationships still look at porn.
It's just human.
How do you know those relationships are "sexually fulfilling"?
You either believe what people report, which is the clearly stated position on erotic material of the American Association of Sexuality Educators, Counselors and Therapists (1), or you can just imagine in your head what you think other people’s sex lives are like and believe whatever you come up with.
1. https://www.aasect.org/our-mission.html
Buddy when it comes to strangers’ sex lives, those are pretty much the only two options.
I’m trying to figure out how “healthy scepticism” in this context means anything other than making something up in your head and then believing it. Do you mean that your imaginative process is really good?
> some organization's rather clunky website
Trying to figure out if “I don’t know what AASECT is and don’t like scrolling down a simple webpage” was meant to be a complaint or meant to be a brag.
2. I had no idea what AASECT is (and I'm still not sure if it's a good source), and I think that puts me in a vastly bigger share of the world's population than those who know, and especially than those who believe that everybody knows.
3. This "Our mission" page is just that, you know: a statement of their mission/values. I found neither proof nor further links there. Maybe I couldn't find anything important because I'm inattentive or dumb. But I self-report as a genius, and according to your theory, that must be accepted as truth.
Say it louder for those who skipped scientific reasoning classes.
Can you complete the statement “I am able to ascertain the truth of the sex lives of strangers by ignoring self-reported data or imagining things by the following process through which I reach my conclusions about strangers’ sex lives”
Now keep in mind if you want to sort of vaguely gesture at any scientific study of sexuality you have to exclude it due to your own rules. There is no such thing as a study of sexual relationships that doesn’t rely on self-reported data. I certainly hope you have the self awareness here to not attempt to imagine in your head that some self-reported data is valid (because it sort of feels right to you) but not other data (that feels icky) and then believe it. That would be kind of embarrassing.
I’m going to have to guess that by ignoring the basis of every scientific study of sexuality and not imagining things, your insight comes from your… heart? Divine revelation? Messages decoded from a numbers station? Sections of the digits in pi converted into ascii? Anime?
Re: payment systems, Visa and MC are notoriously unfriendly to porn vendors, sending them into the arms of crooked payment processors like Wirecard. Paypal grew to prominence because it was once the only way to buy and sell on Ebay. Crypto went from nerd hobby to speculative asset, skipping the "medium of exchange for porn purchases" stage entirely.
As for broadband adoption, it's as likely to have occurred for MP3 piracy and being 200X faster than dialup, as it was for porn.
A lot of it was clearly exaggerated for dramatic effect for a comedy TV show, and people run with it as fact.
https://www.computerworld.com/article/1688383/porn-industry-...
Promise: AI will change the world.
Delivery: 1000 year old vice.
You need to stretch the definitions of 'porn', 'thousands of years', and certainly 'normal' really hard to believe it. Even my granddad's exposure to porn (not thousands of years ago) was a one-time event when he served in the army. My nephew's exposure is every day, which he at one point realized was an addiction far stronger than nicotine.
I had to work for a bit with SDXL models from there and the amount of porn on the site, before the recent cleanse, was astonishing.
In fact a fun thing to think about is what signals we could observe in markets that specifically call out AGI as the expectation as opposed to simple bullish outlook on inference usage.
AI is already integrated into every single Google search, as well as Slack, Notion, Teams, Microsoft Office, Google Docs, Zoom, Google Meet, Figma, Hubspot, Zendesk, Freshdesk, Intercom, Basecamp, Evernote, Dropbox, Salesforce, Canva, Photoshop, Airtable, Gmail, LinkedIn, Shopify, Asana, Trello, Monday.com, ClickUp, Miro, Confluence, Jira, GitHub, Linear, Docusign, Workday
.....so where is this 100X increase in inference demand going to come from?
Oh and the ChatGPT consumer app is seeing slowing growth: https://techcrunch.com/2025/10/17/chatgpts-mobile-app-is-see...
> Oh and the ChatGPT consumer app is seeing slowing growth: https://techcrunch.com/2025/10/17/chatgpts-mobile-app-is-see...
While I haven't read the article yet, if this is true then yes this could be an indication of consumer app style inference (ChatGPT, Claude, etc) waning which will put more pressure on industrial/tool inference uses to buoy up costs.
I view OpenAI like a pyramid scheme: taking in increasing amounts of money to pursue ever-growing promises that can be dangled like a carrot in front of the next investor.
If you owe investors $100 million, that's your problem. If you owe investors $100 billion, that's their problem.
I'm old enough to remember it, but young enough not to remember it well.
The app is fun to use for about 10 minutes then that is it.
Same goes for Grok imagine. All people want to do is generate NSFW content.
What happened to improving the world?
I would love to have Bob Ross, wielding a crayon, add some happy little trees to the walls of a Target.
Using the cybernetic to information theory to cog science to comp sci lineage was an increasingly limited set of tools to employ for intelligence.
Cybernetics should have been ported expansively to neurosci, then neurobio, then something more expansive like eco psychology or coodination dynamics. Instead of expanding, comp sci became too reductive.
The idea a reductive system that anyone with a little math training could A/B test vast swaths of information gleaned from existing forms and unlock highly evolved processes like thinking, reasoning, action and define this as a path to intelligence is quite strange. It defies scientific analysis. Intelligence is incredibly dense in biology, a vastly hidden, parallel process in which one affinity being removed (like the emotions) and the intel vanished into zombiehood.
Had we looked at that evidence, we'd have understood that language/tokens/embedded space couldn't possibly be a composite for all that parallel.
I'd suggest comp sci caught the low-hanging fruit, taking whatever comes out of the keyboard as a basis, none too smart.
People will be swayed by AI-generated videos while also being convinced real videos are AI.
I'm kinda terrified of the future of politics.
On the other side we want to believe in something, so we'll believe in the video that will suit our beliefs.
It's an interesting struggle.
I don't fully believe anything I see on the internet that isn't backed up by at least two independent sources. Even then, I've probably been tricked at least once.
Would that change, maybe not, but maybe it would lessen the power grabs that some small few seem to gravitate towards.
I know if I wanted to influence the major elections, OpenAI, Google and Meta would be the first places I would go. That's a very small group of points of failure. Elections recently seem to be quite narrow, maybe they were before too though, but that kind of power is a silent soft power that really goes unchecked.
If people are more in tune with being misled, that power can slowly degrade.
That doesn't scale.
During campaign season, they're already running as many rallies as they can. Outside the campaign train, smaller Town Hall events only reach what, a couple hundred people, tops? And at best, they might change the minds of a couple dozen people.
EDIT: It's also worth mentioning that people generally don't seek to have their mind changed. Someone who is planning on voting for one candidate is extremely unlikely to go to a rally for the opposition.
Which are inapplicable today.
> We will adjust
Will we? Maybe years later... per event. It's only now dawning on the majority of Britons that Brexit was a mistake they were lied to about.
It is a concern... it took a few centuries for the printing press to spur the Catholic/Protestant wars and then finally resolve them.
No, they are not.
What matters is how many people will suffer during this adjustment period.
How many Rwandan genocides will happen because of this technology? How many lynchings or witch burnings?
We will collectively understand that pixels on a screen are like cartoons or Photoshop on steroids.
About 40 years later the Rwandan genocide took place, and many scholars attribute a key role in increasing ethnic violence in the area to a preceding radio-based propaganda campaign.
Since then the link between radio and genocide seems to have decreased over time but it's likely that this isn't so much because humans have a better understanding of the medium but more so because propaganda has moved to more effective mediums like the internet.
Given that we didn't actually solve the problems with radio before moving onto the next medium it isn't likely that we'll figure out the problems with these new mediums before millions die.
And apropos radio, the War of the Worlds radio drama in 1938 is known to have made quite a few people afraid that it was real. And plenty of people in communist Hungary collected money for the sake of the enslaved Isaura (protagonist of a Brazilian soap opera). But most people adjusted and understood that radio dramas are a thing, movies are a thing, and they will adjust to the fact that pixels on a screen are just that.
Is that a fair assessment of your comment? Is there a way to test your assertion?
It will take some time but it's in fact quite easy to explain it to older relatives if you make a few custom examples.
The bigger point is that realism is a red herring. You can spread propaganda with hand drawn caricatures just as well or even better. It's a panic over nothing. The real lever of control is what news to report on and how to frame it, what quotes to use, which experts to ask and which ones not to. The bottleneck never was at HD realism.
They do not, which is why a reality TV star who is 'good at business' is the current US President.
Reality TV is the old media and people are still falling for it and the consequences of them falling for it will be felt for decades. It will be the same with newer technologies but worse.
The novel threat that something like Sora poses isn't just from realism, it's also from the fast turn around and customized messaging. It will enable the exact things you caution about but at an unprecedented scale.
The idea that all new media is going to be just another case of 'meet the new boss, same as the old boss' is ahistorical and shortsighted.
I can't see how functioning democracy can survive without truth as shared grounds of discussion.
> I don't think the US was a monarchy for its first hundred years.
Did the US not have truth as shared grounds of discussion for its first hundred years?
Prior to the Internet the range of opinions which you could gain access to was far more limited. If the media were all in agreement on something it was really hard to find a counter-argument.
We're so far down the rabbit hole already of bots and astroturfing online, I doubt that AI deepfake videos are going to be the nail in the coffin for democracy.
The majority of the bot, deepfake and AI lies are going to be created by the people who have the most capital.
Just like they owned the traditional media and created the lies there.
> People gossiped all sorts of stuff, spread malicious rumors and you had to guess what's a lie and what's not.
And there were things like witch trials where people were burnt at the stake!
The resolution was a shared faith in central authority. Witness testimony and physical evidence don't scale to populations of millions, you have to trust in the person getting that evidence. And that trust is what's rapidly eroding these days. In politics, in police, in the courts.
Just consider how a screenshot of a tweet or made-up headline already spreads like a wildfire: https://x.com/elonmusk/status/1980221072512635117
Sora involves far more work than what is required to spread misinfo.
Finally, people don't really care about the truth. They care about things that confirm their world view, or comfortable things. Or they dismiss things that are inconvenient for their tribe and focus on things that are inconvenient for other tribes.
How could all of this wind up leading to a much more fair, kind, sustainable and prosperous future?
Acknowledging risks is important, but where do YOU want all this to go?
But the kids who grow up with this stuff will just integrate it into their lives and proceed. The society that results from that will be something we cannot predict, as it will be alien to us. Whether it will be better or not: probably not.
Humans evolved to spend most of their time with a small group of trusted people. By removing ourselves from that we have created all sorts of problems that we just aren't really that equipped to deal with. If this is solvable or not has yet to be seen.
Moreover, I think it's really hard overall to imagine a better future as long as all of this technology and power is in the hands of massively wealthy people who have shown their willingness to abuse it to maintain that wealth at our expense.
The optimistic future effectively requires some means of reclaiming some of that power and wealth for the rest of us.
There is a concept in racing when taking a corner to "keep your eyes off the wall", and instead look where you want the car to go.
Imo the most scary part of the problems we face isn't what you or GP are talking about, it's everyone else's reactions to them. The staring at the wall while screaming or giving up, and refusing to look where you want to go.
It's harder to satisfy our wants if we can't articulate them.
But there's a huge difference between (a) "given that this thing exists that seems very bad, can you imagine a way to a better future?" and (b) "can you imagine ways that this thing that seems very bad could actually be very good?"
The ways to a better future are in spite of these developments, not because of them, and I don't think it's at all helpful to act like that's not the case or be all disappointed (and, frankly, a bit condescending) at people who refuse to play along with attempts to do so.
And it's possible that (a) above is what you meant, but your wording very much sounded like (b).
What I find curious is that no one has really engaged with any of these questions yet. Not even to reflect personally on why. That’s not a criticism, it’s an observation. I think it’s worth asking what makes this kind of conversation so difficult.
When I said that declining to imagine a better future was telling, I didn’t mean it as a put-down. I meant it as a challenge. Because when we stop trying to define what better looks like, we give up our power to those who will define it for us. History shows where that leads. That’s how authoritarianism takes root; not only through force, but through the quiet surrender of imagination and personal responsibility.
If my earlier tone came across as condescending, that wasn’t my intent. My intention is tough love. I believe that acknowledging problems matters, but it’s not enough. If we stop there, we trade agency for frustration. I’d rather see us wrestle with what we want, even if it’s hard, than resign ourselves to cynicism.
So I’ll ask again: what kind of future would you actually want?
EDIT: I just realized that I missed part of an answer in your earlier comment, which I commend you for now. I apologize for not recognizing it before.
You said:
The optimistic future effectively requires some means of reclaiming some of that power and wealth for the rest of us.
Kudos. That's a start.
The Chinese Farmer Story
Once upon a time there was a Chinese farmer whose horse ran away. That evening, all of his neighbors came around to commiserate. They said, “We are so sorry to hear your horse has run away. This is most unfortunate.” The farmer said, “Maybe.”
The next day the horse came back bringing seven wild horses with it, and in the evening everybody came back and said, “Oh, isn’t that lucky. What a great turn of events. You now have eight horses!” The farmer again said, “Maybe.”
The following day his son tried to break one of the horses, and while riding it, he was thrown and broke his leg. The neighbors then said, “Oh dear, that’s too bad,” and the farmer responded, “Maybe.”
The next day the conscription officers came around to conscript people into the army, and they rejected his son because he had a broken leg. Again all the neighbors came around and said, “Isn’t that great!” Again, he said, “Maybe.”
The whole process of nature is an integrated process of immense complexity, and it’s really impossible to tell whether anything that happens in it is good or bad — because you never know what will be the consequence of the misfortune; or, you never know what will be the consequences of good fortune.
— Alan Watts
On a personal level, I have experienced some pretty catastrophic failures that taught me important lessons which I was able to leverage into even greater future success.
So honestly, I am fine with (a) or (b) and I think either are reasonable questions. Really all I am trying to do is encourage you to aim up and articulate that aim. I am not doing a great job, but I am trying.
The way I phrased that was patronizing. It wasn't my intention, but I see now how it comes across.
It seems to me like the attention economy's bias towards threatening novel news is pushing everyone into a negative, cynical, feedback loop, and I am trying clumsily to resist that. There are many real problems and many things seem to be going in the wrong direction, but I don't see how we all get ourselves out of this mess if we can't start talking about what the other side (of the despair) looks like.
I suspect that another mistake I made was the timing/context. For some reason, in the moment, I thought redirecting the cynicism at its source (a Sora thread) was a good idea. It probably wasn't. I guess there is a time and place to try to inspire hope, and this wasn't it. And judging you for not engaging in it deserves a facepalm in hindsight.
Please accept my apology, and if you think my stance itself is misguided (not just my tone and timing), I would like to understand why.
I feel your response was misguided because, by framing it as my responsibility to see a future with some benefit and casting the refusal to do so under the current terms as a failure of character, you are doing the equivalent of telling a bottom-rung MLM seller that they just aren't trying hard enough.
If the system is skewed in such a way as to prevent a person from being able to gain in it, then making it their fault for not seeing a way through makes you appear manipulative and tends to make your motives suspect.
People have always been this way though. The tribes are just organized differently in the internet age.
Reality is specific. Actions, materials. Words and language are arbitrary, they're processes, and they're simulations. They don't reference things, they represent them in metaphors, so sure they have "meaning" but the meanings reduce the specifics in reality which have many times the meaning possibility to linearity, cause and effect. That's not conforming to the reality that exists, that's severely reducing, even dumbing down reality.
Or at least, words had meaning. As we become post-lexical, it becomes harder to tell how well any sequence of words corresponds to reality. This is post truth - not that there is no reality, but that we no longer can judge the truth content of a statement. And that's a huge problem, both for our own thought life, and for society.
Can you provide a verified proof of this statement please?
Then fast forward to a man being born from a virgin that rose from the dead three days after being crucified.
"What is truth?" (Pontius Pilate)
[0] Trump as "King Trump" flying a jet that dumps shit onto protesters https://truthsocial.com/@realDonaldTrump/posts/1153982516232...
[1] https://www.snopes.com/news/2025/09/30/medbed-trump-ai-video...
That same link has two “reader notes” about truth.
The lie is half way around the world etc, but that can also be explained by people’s short term instincts and reaction to outrage. It’s not mutually exclusive with caring about truth.
Maybe I’m being uncharitable — did you mean something like “people don’t care about truth enough to let it stop them from giving into outrage”? Or..?
While there were some debris instances IRL, the freeway was completely shut down per the governor's orders and nobody was harmed. (Had he not done this, that same debris may have hit motorists, so this was a good call on his part.)
You could see the "Sora" watermark in the video, but it was still popular enough to make it in my reels feed that is normally always a different kind of content.
In this case whoever made that was sloppy enough to use a turnkey service like Sora. I can easily generate videos suitable for reels using my GPU and those programs don't (visibly) watermark.
We are in for dark times. Who knows how many AI-generated propaganda videos are slipping under the radar because the operator is actually half-skilled.
Curious what you used. I have an RTX 5090 and I've tried using some local video generators and the results are absolute garbage unless I'm asking for something extremely simple and uncreative like "woman dancing in a field".
I am pretty sure what you want is doable on a 5090 with some effort but it will not be just a text prompt to video. More like input key frames as images and interpolate video between them.
For over a year now we've been at the point whereby a video of anyone saying or doing anything can be generated by anyone and put on the Internet, and it's only becoming more convincing (and rapidly)
We've been living in a post-truth world for almost ten years, so it's now become normalized
Almost half of the population has been conditioned to believe anything that supports their political alignment
People will actually believe incredibly far-fetched things, and when the original video has been debunked, will still hold the belief because by that point the Internet has filled up with more garbage to support something they really want to believe
It's a weird time to be alive
Honestly it goes right back to philosophy and what truth even means. Is there even such a thing?
Truth absolutely is a thing. But sometimes, it's nuanced, and people don't like nuance. They want something they can say in a 280-character tweet that they can use to "destroy" someone online.
I find it a bit more concerning that anyone would not already understand how deeply we exist in a "post-truth" world. Every piece of information we've consumed for the last few decades has increasingly been shaped by algorithms optimizing someone else's target.
But the real danger of post-truth is when there is still enough of a veneer of truth that you can use distortions to effectively manipulate the public. Losing that veneer is essentially a collapse of the whole system, which will have consequences I don't think we can really understand.
The pre and early days of social media were riddled with various "leaks" of private photos and video. But what does it mean to leak a nude photo of a celebrity when you can just as easily generate a photo that is indistinguishable? The entire reason leaks like that were so popular is precisely because people wanted a glimpse into something real about the private life of these screen personalities (otherwise 'leaks' and 'nude scenes' would have the same value). As image generation reaches the limit, it will be impossible to ever really distinguish between voyeurism and imagination.
Similarly we live in an age of mass surveillance, but what does surveillance footage mean when it can be trivially faked. Think of how radicalizing surveillance footage has been over the past few decades. Consider for example the video of the Rodney King beating. Increasingly such a video could not be trusted.
> I'm kinda terrified of the future of politics.
If you aren't already terrified enough of the present of politics, then I wouldn't be worried about what Sora brings us tomorrow. I honestly think what we'll see soon is not increasingly more powerful authoritarian systems, but the breakdown of systems of control everywhere. As these systems over-extend themselves they will collapse. The peak of social media's power would have been to not let it go further than it was a few years ago; Sora represents a larger breakdown of these systems of control.
People forget, or didn’t see, all the staged catastrophes in the 90s that were pulled off the channel shortly after someone pointed out something obvious (e.g. dolls instead of human victims, wrong-location footage, and so on).
But if you were there, and if you saw that, and then saw them pull it off and pretend like it didn’t happen for rest of the day, then this AI thing is a nothing burger.
Everything is manipulated or generated until proven otherwise.
It smells of e/acc and effective-altruist ethics, which are not my favorite, but I don't work at OpenAI, so I don't have a say; I can only interpret.
I agree, but we will likely continue down this road...
At that moment, it simultaneously became possible to create "deep fakes" by simply forging a signature and tricking readers as to who authored the information.
And even before that, just with speaking, it was already possible to spread lies and misinformation, and such things happened frequently, often with disastrous consequences. Just think of all the witch hunts, false religions, and false rumors that have been spread through the history of mankind.
All of this is to say that mankind is quite used to dealing with information that has questionable authorship, authenticity, or accuracy. And mankind is quite used to suffering as a result of these challenges. It's nothing particularly new that it's moving into a new media format (video), especially considering that this is a relatively new format in the history of mankind to begin with.
(FWIW, the best defense against deep fakes has always been to pay attention to the source of information rather than just the content. A video about XYZ coming from XYZ's social media account is more likely to be accurate than if it comes from elsewhere. An article in the NYTimes that you read in the NYTimes is more likely to be authentic than a screenshot of an article you read from some social media account. Etc. It's not a perfect measure -- nothing is -- but I'd say it's the main reason we can have trust despite thousands of years of deep fakes.)
IMO the fact that social media -- and the internet in general -- have decentralized media while also decoupling it from geography is less precedented and more worrisome.
I think we may revert back to trusting only smaller groups of people, being skeptical of anything outside that group, becoming a bit more tribal. I hope without too many deleterious effects, but a lot could happen.
But humans, as a species, are survivors. And we, with our thinking machines will figure out ultimately how to deal with it all. I just hope the pain of this transition is not catastrophic.
Fakery isn't new, only the product of scale and quality at which it is becoming possible.
From the writing, through organized religion, printing press, radio and tv, internet and now ai.
The printing press and the Reformation wars are the obvious pairing; radio and totalitarianism is less known; the internet and the new populism is just starting to be recognized for what it is.
Eventually we'll get it regulated and adjust to it. But in the meantime it's going to be a wild ride.
I consider myself pretty on the ball when it comes to following this stuff, and even I've been caught off guard by some videos. I've seen videos on Reddit I thought were real until I realised what subreddit I was on.
Our ways of thinking and our courts understand that you can’t trust what people say and you can’t trust what you read. We’ve internalized that as a society.
Looking back, there seems to have been a brief period of time when you could actually trust photographs and videos. I think in the long run, this period of time will be seen as a historical anomaly, and video will be no more trusted than the printed or spoken word is today.
This will simply take us back about 150 years to the time before the camera was common.
The transition period may be painful though.
Then you can see any conversation about the video will be even more divorced from reality.
None of this requires video manipulation.
The majority of people are idiots on a grand scale. Just search any social media for PEMDAS and you will find hordes of people debating the value of 2 + 3 / 5 on all sorts of grounds. “It’s definitely 1. 2+3 =5 then by 5 is 1” stuff like that.
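For what it's worth, standard operator precedence makes this unambiguous in any mainstream language; a quick sanity check in Python:

```python
# Division binds tighter than addition, so 3 / 5 evaluates first.
result = 2 + 3 / 5
print(result)  # 2.6

# The "it's 1" camp is implicitly parenthesizing the addition:
wrong = (2 + 3) / 5
print(wrong)  # 1.0
```

The debate only exists because people read the expression left to right instead of applying precedence rules.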
Like cars making horse manure in cities a non-issue (https://www.youtube.com/watch?v=w61d-NBqafM)
Maybe the solution to everybody lying would be some way to directly access a person's actual memories from their brains..
It's like the old saying: they create their own ecosystem. Circular stock market deals are the most obvious example, but WorldCoin has been in the making for years, and Altman often described it as the only alternative in a post-truth world (the one he himself is making, of course).
[0] https://www.forbes.com.au/news/innovation/worldcoin-crypto-p...
[0] https://www.404media.co/openai-sam-altman-interview-chatgpt-...
It seems true that no company has used frontier models to create a product with business value commensurate with the cost it takes to train and run them. That's what OpenAI is trying to do with Sora, and with Codex, Apps, "Agent" flows, etc. I don't think there's more to read into it than that.
Anthropic has said that every model they've trained has been profitable. Just not profitable enough to pay to train the next model.
I bet that's true for OpenAI's LLMs too, or would be if they reduced their free tier limits.
It's similar to the process of electrification. Every existing machine/process needed to be evaluated to see if electricity would improve it: dish washing, clothes drying, food mixing, etc.
OpenAI is not alone. Every one of their products has an (sometimes superior) equivalent from Google (e.g. Veo for Sora) and other competitors.
It makes them look desperate though. Nothing like starting tons of services at once to show you have a vision
Also because they have the funding to do it.
Reminds me a bit of the early days of Google, Microsoft, Xerox, etc.
This is just what the teenage stage of the top tech startup/company in an important new category looks like.
I think he's being a bit harsh here. And there are some confounding factors why.
Yes, we have an AI bubble. Yes, there's been a ton of hype that can't be met with reality in the short term. That's normal for large changes (and this is a large technological change). OpenAI may have some rough days ahead of it soon, but just like the internet, there's still a lot of signal here and a lot of work still to be done. Going through Suna+Sora videos just last night was still absolutely magical. There's still so much here.
But, OpenAI is also becoming, to use a Ben Thompson term, an aggregator. If it's where you go to solve many problems, advertising and more is a natural fit. It's not certain who comes out on top of the space (or if it can be shared), but there are huge rewards coming in future years, even after a bubble has popped.
Cal is having a very strong reaction here. I value it, but I wish it was more nuanced.
Ads destroy... pretty much everything they touch. It's a natural fit, but a terrible one.
Your jumping off point is a cliff into a pile of leaves. It looks correct and comfy but will hurt your butt taking it for granted. You’re telling people to jump and saying “it’ll get better eventually just keep jumping and ignore the pain!”
What if my reason isn’t, “I like typing code” but instead, “we don’t need more Googles doing Google things and abusing privacy.”
Then personally the whole thing is spam.
Whatever the reasons for and against LLMs may be, it doesn't change the fact that the primary use case for generated content has been to scam and spam people.
That spam/scam can be as simple as getting people to pour their personal private info into an LLM, or it can be ads or a generated lie. Regardless, it's unwanted by a lot of people. And the history of that tech and the attempts at normalizing it are founded in spam techniques.
Even the gratuitous search for AGI is spammed on us as these companies take taxpayer money and build out infrastructure that's actually available to 0% of the public for use.
Like, I discredit in my mind anyone who cites ChatGPT as a source.
39 more comments available on Hacker News