How People Use ChatGPT [pdf]
Posted 4 months ago · Active 3 months ago
cdn.openai.com · Tech story · High profile
Key topics: ChatGPT, AI Adoption, LLM Usage
The OpenAI report reveals how people use ChatGPT, with non-work queries dominating (70%) and writing being the main work task, sparking discussions on AI adoption, usage trends, and potential applications.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion · First comment: 2h after posting · Peak period: 52 comments in 0-12h · Avg per period: 16.3
Comment distribution: 98 data points
Based on 98 loaded comments
Key moments
1. Story posted: Sep 15, 2025 at 3:14 PM EDT (4 months ago)
2. First comment: Sep 15, 2025 at 5:34 PM EDT (2h after posting)
3. Peak activity: 52 comments in 0-12h, the hottest window of the conversation
4. Latest activity: Sep 22, 2025 at 2:21 PM EDT (3 months ago)
ID: 45253775 · Type: story · Last synced: 11/20/2025, 7:55:16 PM
Want the full context? Read the primary article or dive into the live Hacker News thread when you're ready.
Since many consumers are typically tight-fisted in the B2C market, I don't think this bodes well for the long-term economics of the market. This may explain the relatively recent pivot to attempting to "discover" uses.
I don't think this ends happily.
Rest of the market be damned -- combined with the poor customer mix (low to middle income countries) this explains why there has been such a push by the big labs to attempt to quantize models and save costs. You effectively have highly paid engineers/scientists running computationally expensive models on some of the most expensive hardware on the market to serve instructions on how to do things to people in low income countries.
This doesn't sound good, even for ad-supported business models.
Is there enough product differentiation between OAI and Gemini? Not that I can see. And even if it were a low price, that's not the point: people hate paying a penny for something they expect to be free.
By the time OAI has developed anything that enables them to acquire and exercise market power (profitably), they will have run out of funding (at least on favourable terms). Which could cause key talent to leave for competitors, and so on. Essentially a downward spiral to death.
Still, 700 million users, and they can still add a lot of products within ChatGPT. Ads will also be slapped on answers.
If all fails, Sam will start wearing "Occupy Jupiter" t-shirts.
Once OpenAI turns to ads, I think it's an indicator they are out of ideas.
They aren't pulling an Amazon and balancing cash flow with costs. They're just incinerating money for a low-value userbase. Even at FB ARPU, the economics are still very poor.
Okay, so still hundreds of millions of users
>They aren't pulling an Amazon and balancing cash flow with costs.
Nobody said they were. I said having hundreds of millions of completely free users would suck the profitability of any business, and that the remedy would be simple, should the need for it arise.
>They're just incinerating money for a low-value userbase.
If you don't see how implementing ads in a system designed for natural conversations with users whose most common queries are "Practical Guidance" and "Seeking Information" could be incredibly valuable, then you have no foresight and I don't know what to tell you.
>Even at FB ARPU, the economics are still very poor.
No they aren't and I honestly have no idea what you're talking about. Inference is cheap and has been for some time.
Implementing ads is a Hail Mary. It puts them in a knife fight with Google, which will likely result in a race to the bottom that OpenAI cannot sustain and win.
FB global ARPU is about 50 USD. At 700M customers, that's 35B in revenue annually. Compare that to a publicly stated expected cost of approximately 150B in computing alone over the next 5 years, i.e. roughly 30B per year (see: https://fortune.com/2025/09/06/openai-spending-outlook-115-b...). That leaves a profit of 5B per year, against 90B in expected R&D costs. Even if OpenAI develops a product and fires all employees, you are looking at a payback period on that R&D of about 18 years (90B / 5B per year).
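A quick worked version of the back-of-envelope arithmetic above; every input is the commenter's claimed figure, not a verified number:

```python
# Back-of-envelope sketch of the commenter's ARPU math.
# All inputs are figures quoted in the comment, not verified data.

arpu_usd = 50          # claimed FB global ARPU (USD/year)
users = 700e6          # claimed user count
compute_5yr = 150e9    # claimed 5-year compute commitment
rnd_5yr = 90e9         # claimed 5-year R&D cost

revenue_per_year = arpu_usd * users                     # 35B
compute_per_year = compute_5yr / 5                      # 30B
profit_per_year = revenue_per_year - compute_per_year   # 5B

payback_years = rnd_5yr / profit_per_year               # ~18 years to recoup R&D
print(f"revenue/yr: {revenue_per_year/1e9:.0f}B, "
      f"profit/yr: {profit_per_year/1e9:.0f}B, "
      f"payback: {payback_years:.0f} years")
```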
Fundamentally, OpenAI does not have the unit economics of a traditional SaaS. "Hundreds of millions of users" is hundreds of millions of people consuming expenses and not generating sufficient revenue to justify the line of business as a going concern. This, coupled with declining enterprise AI adoption (https://www.apolloacademy.com/ai-adoption-rate-trending-down...), paints an ugly picture.
They are gaining everywhere. Some more than others, but to say they are only gaining in poorer markets is blatantly untrue.
>FB global ARPU is about 50 USD. At 700M customers, they do 35B in revenue annually.
Yeah, and that would make them healthily profitable.
>This compares to a publicly stated expected cost of approximately 150B in computing alone over the next 5 years
Yes, because they expect to serve hundreds of millions to potentially billions more users. 'This leaves a profit of 5B per year' makes some very bizarre assumptions. You're conflating a future-scale spending projection with today's economics. That number is a forward-looking projection tied to massive scale; it doesn't prove current users alone justify that spend, and they clearly don't. There is no reality where they are spending that much if their userbase stalls at today's numbers, so the point is moot and '5B per year' is a made-up number.
>Fundamentally, OpenAI does not have the unit economics of a traditional SaaS.
Again, everything points to their unit economics being perfectly fine:
- Prices of API access for open models from third-party providers, who would have no motive to subsidize inference
- Google says their median query is about as expensive as a Google search
Thing is, what you're saying would have been true a few years ago. This would have all been intractable. But LLM inference costs have quite literally been slashed by several orders of magnitude in the last couple of years.
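To make the break-even question concrete, here is a hypothetical sensitivity sketch. The function and all of its parameters (the ARPU levels and queries-per-day figures) are illustrative assumptions, not figures from the thread or from any provider:

```python
# Hypothetical sensitivity check: at what per-query inference cost does
# ad-style ARPU cover a free user's usage? All parameters below are
# illustrative assumptions, not real provider or thread figures.

def breakeven_cost_per_query(arpu_usd: float, queries_per_day: float) -> float:
    """Max inference cost per query (USD) that annual ARPU can cover."""
    queries_per_year = queries_per_day * 365
    return arpu_usd / queries_per_year

for arpu in (10, 50):        # low ARPU vs FB-like ARPU (USD/year)
    for qpd in (5, 20):      # light vs heavy daily usage
        c = breakeven_cost_per_query(arpu, qpd)
        print(f"ARPU ${arpu}/yr, {qpd} queries/day -> "
              f"break-even at ${c:.4f} per query")
```

Under these assumed numbers, whether free users are profitable hinges almost entirely on whether a query costs closer to a tenth of a cent or several cents, which is exactly where the two commenters disagree.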
Imagine 700M users "doomchatting" with GPT-5 for several hours per day to justify the ROI of advertising.
Ads won't be slapped onto answers; my guess is that they will be subtly and silently inserted into them so that you don't even notice. It won't always be about what you see, either: companies, political groups, and others who seek to influence you will pay to have specific words/phrases omitted from answers as well.
AI at this point is little more than a toy that outright lies occasionally, yet we're already seeing AI hurt people's ability to think, be creative, use critical thinking skills, and research independently.
Reddit.
And if someone is using an LLM for either topic, my sympathy goes out the window. Same as with reddit. E.g. if you take the discourse on /r/worldnews seriously, you deserve to be propagandized.
On the other hand, I remember when BlackBerry had enterprise locked down and got wiped out by consumer focused Apple.
In any event, having big consumer growth doesn't seem like a bad thing.
It will be bad if it starts a race to the bottom for ad driven offering though.
It could indicate that many people find it more of an entertainment product than a tool, and those are often harder to monetize. You've got ads, and that's about it, which puts a probable cap on your monthly revenue per user that's less than most of the subscription prices these companies are trying to get (especially in non-USA countries).
(I find it way more of a tool and basically don't use it outside of work... but I see a LOT of AI pics and videos in discord and forums and such.)
When OpenAI sells a ChatGPT subscription, they incur large costs just to serve the product, shrinking margins.
Big difference in unit economics, hence the quantization push.
It’s the prodigal child of tech.
Businesses have higher friction: legal, integrations, access control, internal knowledge leaks (a document can be access-restricted, but its contents may leak into results for a more open query). Not to mention the typical general inertia. This friction works both ways.
Think capacitive vs. inductive electric circuits.
Similarly, if costs double (or worse, increase to the point of typical SaaS margins) and LLMs lose their shine, I don't think there will be friction on the way out. People (especially executives) will offer up ChatGPT as a sacrifice.
That said, I do think prices will eventually increase somewhat, unless SOTA models start becoming profitable at current prices (my knowledge here is at least 6 months old, so maybe they already have?)
People said the same thing about YouTube: "video is bandwidth-hungry, there's no way to make money off of it."
Consumer users are still feeding the LLM with training data.
They only analyzed the consumer plans and ignored the Enterprise, Teams, and Education plans.
> ChatGPT is widely used for practical guidance, information seeking, and writing, which together make up nearly 80% of usage. Non-work queries now dominate (70%). Writing is the main work task, mostly editing user text. Users are younger, increasingly female, global, and adoption is growing fastest in lower-income countries
Young moms with no money in poor countries use this product the most. I bet that was fun news to deliver up the chain.
In 2025, it's abundantly clear that the mask is off. Only the whales matter in video games. Only the top donors matter in donation funding. Modern laptops with GPUs are all $2k+ machines. Luxury condos are everywhere. McDonald's revenues and profits are up despite pricing out a lot of low-income people.
The poor have less of the nothing they already have. You can make a hundred affordable cars, or get as much profit, if not orders of magnitude more, from just one luxury vehicle sale.
Most political donors are $25/month Actblue donations, and it doesn't matter because the campaigns with the most donations regularly lose.
> McDonald's revenues and profits are up despite pricing out a lot of low-income people.
They didn't really raise prices, they just put coupons in the app.
> Luxury condos are everywhere.
Houses don't cost more because they have "luxury" features. A nicer countertop doesn't hypnotize people into paying more for a house. Prices are negotiated between buyer and seller and most of the development cost is the land price.
> The poor have less of the nothing they already have.
Wage inequality in the US is lower than it was in 2019. In general income inequality hasn't increased since 2014.
https://www.nber.org/papers/w31010
Is that you?
My initial assumption would be that there are a lot, likely a majority, of parents who have had next to no advice on how to raise kids. Furthermore, I would posit that many of them were not raised in particularly nurturing circumstances themselves.
As such, I would expect that the advice ChatGPT gives (i.e. an average from parenting advice blogs and forums), would on average result in better parenting.
That's obviously not to say that ChatGPT gives great advice, but that the bar is very low already.
Whether heeding ChatGPT advice would be better or worse than no advice at all, I honestly cannot say. On the one hand, getting some advice would probably help in many, many cases - there's a lot of low-hanging fruit here; on the other, low-quality advice has the potential to ruin the lives of multiple people at any moment. This is like medical or lawyer advice: very high stakes in many cases. Should we rely on a model that doesn't really understand the underlying logic for advice on such matters? The "average" of parenting blogs can be a mish-mash of different philosophies or approaches glued together, making up something that sounds plausible but leads to catastrophic results years or decades later.
I don't know. Parenting is a complex problem in itself; then you have people generally not looking for advice or being unable to recognize good advice. It doesn't look like adding a hallucinating AI model to the mix would help much, but I may be wrong on this. I guess we'll find out the hard way: through people trying (or not) it out and then living with consequences (if any).
Of course, ChatGPT goes in hard to sycophantically confirm all 'suggestive' leads with zero pushback.
This is true. However:
As someone who's done multiple assessments in a clinical setting for anxiety & depression: there is no special magic that requires a human to do it, and many providers are happy to confirm a diagnosis pretty quickly without digging in more. There are the GAD-7 and PHQ-9, respectively. While the interview is semi-structured and the interviewer has some discretion (how the patient presents in terms of affect, mood, etc.), they mostly go off the quiz.
The trouble you can run into is if there's another condition or differential diagnosis which could be missed. (By both an LLM and the interviewer alike.)
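For readers unfamiliar with the screeners mentioned above, a minimal scoring sketch; item wording is omitted, and the cutoffs follow the published GAD-7 (total 0-21) and PHQ-9 (total 0-27) scoring bands, with each item rated 0-3:

```python
# Minimal sketch of how the GAD-7 and PHQ-9 screeners are scored.
# Item wording omitted; cutoffs follow the published scoring rules.

def severity(total: int, cutoffs: list[int], labels: list[str]) -> str:
    """Map a total score to a severity band via ascending cutoffs."""
    band = labels[0]
    for cutoff, label in zip(cutoffs, labels[1:]):
        if total >= cutoff:
            band = label
    return band

def score_gad7(items: list[int]) -> tuple[int, str]:
    assert len(items) == 7 and all(0 <= i <= 3 for i in items)
    total = sum(items)
    return total, severity(total, [5, 10, 15],
                           ["minimal", "mild", "moderate", "severe"])

def score_phq9(items: list[int]) -> tuple[int, str]:
    assert len(items) == 9 and all(0 <= i <= 3 for i in items)
    total = sum(items)
    return total, severity(total, [5, 10, 15, 20],
                           ["minimal", "mild", "moderate",
                            "moderately severe", "severe"])

print(score_gad7([2, 1, 2, 1, 1, 2, 2]))           # (11, 'moderate')
print(score_phq9([1, 1, 2, 1, 0, 1, 1, 0, 0]))     # (7, 'mild')
```

The mechanical nature of this scoring is the commenter's point: an LLM can administer and total a fixed questionnaire just as easily as a human; the discretionary parts (affect, presentation, differential diagnosis) are where both can miss things.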
Some do, and they think that they are using it as a replacement. I've been doing research on its use among college students, and I've heard firsthand that some of them (especially students in non-STEM fields) think ChatGPT can be as useful as, if not better than, search engines at times for _seeking_ information.
You may be talking to a specific subset of the population, but once you branch out and observe/hear from broader demographics, you'd be surprised to learn about people's mental model of the genAI technologies.
I mean, how often do you make a fairly speculative claim and then, an hour later, see a just-published report validating it? Nuts.
I personally hate chatgpt's voice (writing style) but I guess that's a minority position.
https://openai.com/index/how-people-are-using-chatgpt/
12 more comments available on Hacker News