ChatGPT Pulse
Posted 3 months ago · Active 3 months ago
openai.com · Tech · story · High profile
heated · negative
Debate
85/100
Key topics
Artificial Intelligence
ChatGPT
Surveillance Capitalism
Mental Health
OpenAI introduces ChatGPT Pulse, a feature that allows the AI to proactively initiate conversations and provide updates, sparking concerns about data harvesting, mental health, and the potential for manipulation.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 16m
Peak period: 123 comments (0-12h)
Avg / period: 22.9
Comment distribution: 160 data points
Based on 160 loaded comments
Key moments
- 01 Story posted: Sep 25, 2025 at 12:59 PM EDT (3 months ago)
- 02 First comment: Sep 25, 2025 at 1:16 PM EDT (16m after posting)
- 03 Peak activity: 123 comments in 0-12h (hottest window of the conversation)
- 04 Latest activity: Oct 1, 2025 at 1:03 AM EDT (3 months ago)
ID: 45375477 · Type: story · Last synced: 11/22/2025, 11:47:55 PM
Want the full context?
Jump to the original sources
Read the primary article or dive into the live Hacker News thread when you're ready.
Bahhh, boring!!!
0 - https://www.tumblr.com/elodieunderglass/186312312148/luritto...
Now you would really be a weirdo not to have one, since enough people gave in for a small convenience to make it basically mandatory.
Very rich people buy life from other people to manage their information, so they have more of their own life left for other things. Less rich people can now increasingly employ AI for next to nothing to lengthen their net life, and that's actually amazing.
What makes me suspicious: Is there a certain number on their balance sheets at which the system turns sour, when the trap snaps? Because the numbers all seem absurdly large already and keep increasing. Am I to believe that it will all come down after the next trillion or 10? I mean, it's not unthinkable of course, I just don't know why. Even from the most cynical view: Why would they want to crash a system where everything is going great for them?
So I do wonder: Are large amounts of wealth in the hands of a few per se a real world problem for us or is our notion of what the number means, what it does to or for us, simply off?
The problem is that Western civilisation is so far up its own ass with capitalism that people think some corporation cares about them (the users).
What I am asking, to interrogate this viewpoint, is: why would it become a problem when a company reaches a certain size? If they have no other goal than making money, are the biggest assholes of all time, and make money through customers, then unless they can actually simply extort customers (monopolies), they will continue to want to do stuff that makes customers want to give them more money. Why would that fail as soon as the company goes from 1 trillion to 2 trillion?
I completely agree: the amount of money that corps wield feels obscene. I am just looking for a clear explanation of what the necessary failure mode is at a certain size, because that is something we generally just assume: unequal distribution is the problem and it always ends in doom. But that clashes with ever-improving living standards on any metric I find interesting.
So I think it's a fair question how and when the collapse is going to happen, to understand if that was even a reasonable assumption to begin with. I have my doubts.
How can the world continue to function this way if so few of us hold so much wealth that the rest of us effectively have no say in how the world works?
Our society is so cooked, man. We don’t even realize how over it is, and even people genuinely trying to understand are not able to.
Non-data-driven living is 1x
Therefore data-driven beings will outcompete
Same reasoning shows that 3.10 is better than 3.1
E.g. if I search for a site, it can link it to what I was working on at the time, the GitHub branch I was on, the areas of files I was working on, etc.
Sounds sexy to me, but it's obviously such a massive breach of trust/security that it would require fully local execution. Hell, it's such a security risk that I debate whether it's even worth it at all, since if you store this you now have a honeypot that tracks everything you do, say, search for, etc.
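A fully local version of this is easy to sketch. Below is a minimal, hypothetical example that records each search query alongside the current git branch in a local SQLite file; the schema and names are invented for illustration, not any real product's behavior.

```python
# Hypothetical local-only context log: record a search query together with
# the current git branch and a timestamp, in a SQLite file that never
# leaves the machine. Illustrative sketch only.
import sqlite3
import subprocess
from datetime import datetime, timezone

DB_PATH = "context_log.db"  # local file; also the "honeypot" this comment warns about

def current_branch() -> str:
    # Ask git which branch the working directory is on, if it is a repo at all.
    try:
        return subprocess.check_output(
            ["git", "rev-parse", "--abbrev-ref", "HEAD"], text=True
        ).strip()
    except (subprocess.CalledProcessError, FileNotFoundError):
        return "(no repo)"

def log_search(query: str) -> None:
    conn = sqlite3.connect(DB_PATH)
    conn.execute("CREATE TABLE IF NOT EXISTS searches (ts TEXT, branch TEXT, query TEXT)")
    conn.execute(
        "INSERT INTO searches VALUES (?, ?, ?)",
        (datetime.now(timezone.utc).isoformat(), current_branch(), query),
    )
    conn.commit()
    conn.close()

if __name__ == "__main__":
    log_search("nginx reverse proxy config")
```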
With great power... I guess.
Edit: Downvote all you want, as usual. Then wait 6 months to be proven wrong. Every. Single. Time.
> Downvote all you want
“Please don't comment about the voting on comments. It never does any good, and it makes boring reading.”
https://news.ycombinator.com/newsguidelines.html
- People who treat ChatGPT as a romantic interest will be far more hooked as it "initiates" conversations instead of just responding. It's not healthy to relate personally to a thing that has no real feelings or thoughts of its own. Mental health directly correlates to living in truth - that's the base axiom behind cognitive behavioral therapy.
- ChatGPT in general is addicting enough when it does nothing until you prompt it. But adding "ChatGPT found something interesting!" to phone notifications will make it unnecessarily consume far more attention.
- When it initiates conversations or brings things up without being prompted, people will all the more be tempted to falsely infer a person-like entity on the other end. Plausible-sounding conversations are already deceptive enough and prompt people to trust what it says far too much.
For most people, it's hard to remember that LLMs carry no personal responsibility or accountability for what they say, not even an emotional desire to appear a certain way to anyone. It's far too easy to attribute all these traits to something that says stuff and grant it at least some trust accordingly. Humans are wired to relate through words, so LLMs are a significant vector to cause humans to respond relationally to a machine.
The more I use these tools, the more I think we should consciously value the output on its own merits (context-free), and no further. Data returned may be useful at times, but it carries zero authority (not even "a person said this", which normally is at least non-zero), until a person has personally verified it, including verifying sources, if needed (machine-driven validation also can count -- running a test suite, etc., depending on how good it is). That can be hard when our brains naturally value stuff more or less based on context (what or who created it, etc.), and when it's presented to us by what sounds like a person, and with their comments. "Build an HTML invoice for this list of services provided" is peak usefulness. But while queries like "I need some advice for this relationship" might surface some helpful starting points for further research, trusting what it says enough to do what it suggests can be incredibly harmful. Other people can understand your problems, and challenge you helpfully, in ways LLMs never will be able to.
Maybe we should lobby legislators to require AI vendors to say something like "Output carries zero authority and should not be trusted at all or acted upon without verification by qualified professionals or automated tests. You assume the full risk for any actions you take based on the output. [LLM name] is not a person and has no thoughts or feelings. Do not relate to it." The little "may make mistakes" disclaimer doesn't communicate the full gravity of the issue.
They did handle the growth from search to email to integrated suite fantastically. And the lack of a broadly adopted ecosystem to integrate into seems to be the major stopping point for emergent challengers, e.g. Zoom.
Maybe the new paradigm is that you have your flashy product, and it goes without saying that it's stapled on to a tightly integrated suite of email, calendar, drive, chat etc. It may be more plausible for OpenAI to do its version of that than to integrate into other ecosystems on terms set by their counterparts.
However, I take your point - OpenAI has an interest in some other party paying them a fuckton of money for those tokens and then publicly crediting OpenAI and asserting the tokens would have been worth it at ten fucktons of money. And also, of course, in having that other party take on the risk that infinity fucktons of money worth of OpenAI tokens is not enough to make a gmail.
So they would really need to believe in the strategic necessity (and feasibility) of making their own gmail to go ahead with it.
In that case, there's some ancillary value in being able to claim "look, we needed a gmail and ChatGPT made one for us - what do YOU need that ChatGPT can make for YOU?"
Even at our small scale I wouldn’t want to be locked out of something.
Then again there’s also the sign in with google type stuff that keeps us further locked in.
The challenge in migrating email isn't that you have to move the existing email messages; any standard email client will download them all for you. The challenge is that there are thousands of external people and systems pointing to your email address.
Your LLMs memory is roughly analogous to the existing email messages. It's not stored in the contacts of hundreds of friends and acquaintances, or used to log in to each of a thousand different services. It's all contained in a single system, just like your email messages are.
Hide the notifications from Uber, which are just adverts, and leave the one from your friend sending you a message on the lock screen.
Gmail already does filter the noise through "Categories" (Social, Updates, Forums, Promotions). I've turned them off as I'm pretty good about unsubscribing from junk and don't get a ton of email. However, they could place an alert at the top of your inbox to your "daily report" or whatever. Just as they have started to put an alert on incoming deliveries (ex. Amazon orders). You can then just dismiss it, so perhaps it's not an email so much as a "message" or something.
Or more likely: `[object Object]`
https://www.youtube.com/watch?v=GCSGkogquwo
I tried to ask GPT-5 pro the other day to just pick an ambitious project it wanted to work on, and I’d carry out whatever physical world tasks it needed me to, and all it did was just come up with project plans which were rehashes of my prior projects framed as its own.
I’m rapidly losing interest in all of these tools. It feels like blockchain again in a lot of weird ways. Both will stick around, but fall well short of the tulip mania VCs and tech leaders have pushed.
I’ve long contended that tech has lost any soulful vision of the future, it’s just tactical money making all the way down.
It's nice to know my feelings are shared; I remain relatively convinced that there are financial incentives driving most of the rabid support of this technology
But again, how does this work? After twirling my moustache that I wax with Evil Villain brand moustache wax, I just go on HN and make up shit to post about companies that aren't even public but are in the same industry, and that'll drive the stock price up... somehow? Someone's going to read a comment from me saying "I use plan mode in Claude Code to make a Todo.md, and then have it generate code", and based on that, that's the straw that breaks the camel's back, and they rush out to buy stock in AI companies because they'd never heard of the stock market before I mentioned Claude Code.
Then, based on randos reading a comment from me about Claude Code, the share price goes up by a couple of cents, but I can't sell the handful of shares I have because of blackout windows anyway, but okay so eventually those shares do sell, and I go on a lavishly expensive vacation in Europe all because I made a couple of positive comments on HN about AI that were total lies.
Yeah, that's totally how that works. I also get paid to go out and protest on weekends to supplement the European vacation money. Just three more shitposts about Tesla and I get to go to South East Asia as well!
The LLM does not have wants. It does not have preferences, and as such cannot "pick". Expecting it to have wants and preferences is "holding it wrong".
The architectural limits will always be there, regardless of training.
CEOs are gonna CEO; it seems their job has morphed into creative writing to maximize funding.
IMO we're clearly there; GPT-5 would easily have been considered AGI years ago. I don't think most people really get how non-general the things that are now handled by the new systems used to be.
Now AGI seems to be closer to what others call ASI. I think the goalposts will keep moving.
The GPT model alone does not offer autonomy. It only acts in response to explicit input. That's not to say that you couldn't build autonomy on top of GPT, though. In fact, that appears to be exactly what Pulse is trying to accomplish.
But Microsoft and OpenAI's contractual agreements state that the autonomy must also be economically useful to the tune of hundreds of billions of dollars in autonomously-created economic activity, so OpenAI will not call it as such until that time.
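To make the "autonomy built on top of GPT" point concrete: the initiative can come entirely from an ordinary scheduler wrapped around a request/response call. Here is a minimal sketch using the OpenAI Python client; the model name, interval, and stored-interests string are assumptions for illustration, not how Pulse actually works.

```python
# Minimal sketch of "autonomy" layered on top of a request/response model:
# a plain scheduler that periodically asks the model to initiate an update.
# Illustrative only; this is not how ChatGPT Pulse is implemented.
import time
from openai import OpenAI  # assumes the official openai package is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

USER_INTERESTS = "rust async runtimes, home espresso"  # hypothetical stored context

def proactive_update() -> str:
    # The model still only responds to input; the loop supplies the initiative.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model would do here
        messages=[
            {"role": "system", "content": "Proactively surface one short, useful update."},
            {"role": "user", "content": f"Start a conversation based on these interests: {USER_INTERESTS}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    while True:
        print(proactive_update())
        time.sleep(24 * 60 * 60)  # once a day, like a morning briefing
```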
Every human, every day, has the choice not to go to work, the choice not to follow the law, the choice to... These AIs don't have nearly as much autonomy as that.
> The concept does not, in principle, require the system to be an autonomous agent; a static model—such as a highly capable large language model—or an embodied robot could both satisfy the definition so long as human‑level breadth and proficiency are achieved
Edit -
> That is ultimately what sets AGI apart from AI.
No! The key thing was that it was general intelligence rather than things like “bird classifier” or “chess bot”.
It says that one guy who came up with his own AGI classification system says it might not be required. And despite it being his own system, he still was only able to land on "might not", meaning that he doesn't even understand his own system. He can safely be ignored. Outliers are always implied, of course.
> No! The key thing was that it was general intelligence rather than things like “bird classifier” or “chess bot”.
I suppose if you don't consider the wide range of human intelligence as the marker of general intelligence, then a "bird classifier" plus a "chess bot" gives you general intelligence. We had that nearly a millennium ago!
But usually general intelligence expects human-like intelligence, which would necessitate autonomy — the most notable feature of human intelligence. Humans would not be able to exist without the intelligence to perform autonomously.
But, regardless, you make a good point: A "language classifier" can be no more AGI than a "bird classifier". These are narrow systems, focused on a single task. A "bird classifier" doesn't become a general intelligence when it crosses some threshold of being able to classify n number of birds just as a "language classifier" wouldn't become a general intelligence when it is able to classify n number of language features, no matter how large n becomes.
Conceivably these classifiers could be used as part of a larger system to achieve general intelligence, but on their own, impossible.
They have to do this manually for every single particular bias that the models generate that is noticed by the public.
I'm sure there are many such biases that aren't important to train out of responses, but exist in latent space.
What do you think humans have?
LLMs need a retrain for that.
Obviously you can get probability distributions and, in the economics sense of revealed preference, say that the model prefers whatever next token it assigns, say, a 0.70 probability to...
If a model has a statistical tendency to recommend python scripts over bash, is that a PREFERENCE? Argue it’s not alive and doesn’t have feelings all you want. But putting that aside, it prefers python. Saying the word preference is meaningless is just pedantic and annoying.
Try explaining ionic bonds to a high schooler without anthropomorphising atoms and their desires for electrons. And then ask yourself why you’re doing that? It’s easier to say and understand with the analogy.
Perhaps instead of "preference", "propensity" would be a more broadly applicable term?
[1]https://en.wikipedia.org/wiki/G%C3%B6mb%C3%B6c
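One way to make "propensity" operational is simply to sample: ask the same question many times and count the answers. A rough sketch is below; the prompt, model name, and keyword matching are assumptions chosen for illustration.

```python
# Sketch: estimate a model's propensity to recommend Python vs. bash by
# sampling the same prompt repeatedly and tallying the answers.
# Illustrative only; prompt, model name, and matching are assumptions.
from collections import Counter
from openai import OpenAI

client = OpenAI()
PROMPT = "I need to rename 500 files by date. What language should the script be in? Answer in one word."

def sample_choice() -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": PROMPT}],
        temperature=1.0,  # sample instead of always taking the most likely answer
    )
    answer = response.choices[0].message.content.lower()
    if "python" in answer:
        return "python"
    if "bash" in answer or "shell" in answer:
        return "bash"
    return "other"

if __name__ == "__main__":
    counts = Counter(sample_choice() for _ in range(50))
    print(counts)  # e.g. Counter({'python': 41, 'bash': 7, 'other': 2}): a measured propensity
```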
Whenever you message an LLM it could respond in practically unlimited ways, yet it responds in one specific way. That itself is a preference honed through the training process.
Outside that? If left to their own devices, the same LLM checkpoints will end up in very same-y places, unsurprisingly. They have some fairly consistent preferences - for example, in conversation topics they tend to gravitate towards.
It doesn't feel like blockchain at all. Blockchain is probably the most useless technology ever invented (unless you're a criminal or an influencer who makes ungodly amounts of money off of suckers).
AI is a powerful tool for those who are willing to put in the work. People who have the time, knowledge and critical thinking skills to verify its outputs and steer it toward better answers. My personal productivity has skyrocketed in the last 12 months. The real problem isn’t AI itself; it’s the overblown promise that it would magically turn anyone into a programmer, architect, or lawyer without effort, expertise or even active engagement. That promise is pretty much dead at this point.
Has your productivity objectively, measurably improved or does it just feel like it has improved? Recall the METR study which caught programmers self-reporting they were 20% faster with AI when they were actually 20% slower.
I am seeing a pattern here. It appears that AI isn't for everyone. Not everyone's personality may be a good fit for using AI. Just like not everybody is a good candidate for being a software dev, or police officer etc.
I used to think that it is a tool. Like a car is. Everybody would want one. But that appears not to be the case.
For me, I use AI every day as a tool, for work and home tasks. It is a massive help for me.
It's hard for me to imagine many. It's not doing the dishes or watering the plants.
If I wanted to rearrange the room I could have it mock up some images, I guess...
How can you verify the recommendations are sound, valid, safe, complete, etc., without trying them out? And trying out unsound, invalid, unsafe, incomplete, etc., recommendations might result in dead plants in a couple of weeks.
I've found it immensely helpful for giving real world recommendations about things like this, that I know how to find on my own but don't have the time to do all the reading and synthesizing.
Such an odd complaint about LLMs. Did people just blindly trust Google searches beforehand?
If it's something important, you verify it the same way you did anything else. Check the sources and use more than a single query. I have found the various LLMs to be very useful in these cases, especially when I'm coming at something brand new and have no idea what to even search for.
Use only actionable prompts; negations don't work on AI and they don't work on people.
Ok, so subjective
Task: Walk to the shops & buy some milk.
Deliverables: 1. Video of walking to the shops (including capturing the newspaper for that day at the local shop) 2. Receipt from the local store for milk. 3. Physical bottle of milk.
But a device that reaches out to you reminds you to hook back in.
By their own definition, it's a feature nobody asked for.
Also, this needs a cute/mocking name. How about "vibe living"?
This reads like the first step to "infinite scroll" AI echo chambers and next level surveillance capitalism.
On one hand this can be exciting. Following up with information from my recent deep dive would be cool.
On the other hand, I don't want it to keep engaging with my most recent conspiracy theory/fringe deep dives.
570 more comments available on Hacker News