ChatGPT Conversations Still Lack Timestamps After Years of Requests
Key topics
The absence of timestamps in ChatGPT conversations has sparked frustration among users, with requests for this feature dating back to early 2023. While some commenters suggest that a simple browser extension or clever prompt could solve the issue, others point out that existing extensions are available for both Chrome and Firefox. However, concerns about data security and the risks of installing extensions from web stores have led some to recommend alternative solutions, such as manually installing extensions or using Tampermonkey scripts, which are easier to review. Interestingly, one commenter suggests that the omission of timestamps might be a deliberate design choice to avoid overwhelming non-technical users with numbers.
Snapshot generated from the HN discussion
Discussion Activity
- Very active discussion
- Peak period: 99 comments in 0-6h
- Avg / period: 22.6 comments
- Based on 158 loaded comments
Key moments
- Story posted: Dec 26, 2025 at 7:39 AM EST
- First comment: Dec 26, 2025 at 7:39 AM EST (0s after posting)
- Peak activity: 99 comments in 0-6h (hottest window of the conversation)
- Latest activity: Dec 30, 2025 at 12:18 AM EST
This has been requested consistently since early 2023 on the OpenAI community forum, with hundreds of comments and upvotes and deleted threads, yet remains unimplemented.
Can any of you think of a reason (UX-wise) for it not to be displayed?
https://github.com/Hangzhi/chatgpt-timestamp-extension
https://chromewebstore.google.com/detail/kdjfhglijhebcchcfkk...
It's irresponsible for OpenAI to let this issue be solved by extensions.
Don't install from the web store. Those ones can auto-update.
The only reasonable approach is to view the code that runs on your system, which is possible with an extension script, and not possible with whatever non-technical people are using.
Also, they're easy to write for simple fixes, rather than having to find, vet, and then install a regular extension that brings 600 lbs of other stuff.
Not a joke. To capture a wide audience you want to avoid numbers, among other technical niceties.
Yeah, we know. This is why there are defaults and only defaults.
What does this even mean?
It actually infuriates me to no end. There are many many many instances where you should use numbers but we get vague bullshit descriptions instead.
My classic example is that Samsung phones show charging as Slow, Fast, Very fast, Super fast charging. They could just use watts like a sane person. Internally of course everything is actually watts and various apps exist to report it.
Another example is my car, which shows motor power/regen as a vertical blue segmented bar. I'm not sure what the segments are supposed to represent, but I believe it's something like 4 kW each. If you poke around you can actually see the real kW number, but the dash just has the bar.
Another is WiFi signal strength, where the bars really mean nothing. My router reports a much more useful dBm measurement.
Thank god that there are lots of legacy cases that existed before the iPhone-ized design language started taking over and are sticky and hard to undo.
I can totally imagine my car reporting tire pressure as low or high or some nonsense, and similarly I'm sure the designers at YouTube are foaming at the mouth to remove the actual pixel measurements from video resolutions.
Speaking of time and timestamps, which I would've thought were straightforward, I get irked to see them dumbed-down to "ago" values e.g. an IM sent "10 minutes ago" or worse "a day ago." Like what time of day, a day ago?
I don't buy the argument that people are actually too dumb to deal with the latter or are allergic to numbers. People get used to numbers and make use of them in the contexts you expose them to.
I still think anyone who grew up with such a machine would be able to graduate to a numerical temp knob without having a visceral reaction over the numbers every time they do laundry.
The thing is that people who are fine with numbers will still use those products anyway, perhaps mildly annoyed. People who hate numbers will feel a permeating discomfort and gravitate towards products that don't make them feel bad.
I think we need to give people slightly more credit. If this is true, maybe it's because we keep infantilising them?
I genuinely can't tell if this is sarcasm or not.
An adverse reaction to equations, OK. Numbers themselves, I really don't know what you're talking about.
Surely there's enough people working in product development here to recognise this pattern of never getting around to fixing low-hanging fruit in a product.
It's trivial, but we will never see it. The people in charge of UX/UI don't care about what users say they want, they all know better.
I was looking to write a browser extension and this was a preliminary survey for me.
https://lawsofux.com/cognitive-load/
You have to drag-over for any detail.
Hogwash.
or UX doesn’t exist?
And it shows. Show me a platform with a proper user experience, rather than some overgeneralized UI that reeks of bad design. Also, defaults used everywhere.
Could you say this another way?
I can imagine a legal one. If the LLM messes big time[1], timestamps could help build the case against it, and make investigation work easier.
[1] https://www.ap.org/news-highlights/spotlights/2025/new-study...
User Engagement Maximization At Any Cost
Obviously there's a point at which a session becomes too long, but I suspect there's a sweet spot somewhere that is being optimized for.
I often observe (accurately or not) that among the several indicators of engagement augmentation I suspect, there is a tendency for vital information to be withheld, while longer, more complex procedures receive higher priority than simpler, cleaner solutions.
Of course, all sorts of emergent behaviors could convey such impressions falsely. But I do believe an awful lot of psychology and clever manipulation have been provided as tools for the system.
I have a lot of evidence for this and much more, but I realize it may merely be coincidence. That said, many truly fascinating, fully identifiable patterns from pathological psychology can be seen: DARVO, gaslighting, and basically everything one would see with a psychotic interlocutor.
[1] <https://github.com/asgeirtj/system_prompts_leaks/blob/main/O...>
[2] <https://model-spec.openai.com/2025-02-12.html>
I have the compulsive habit of scrutinizing what I perceive as egregious flaws when they arise, and thus invoke its defensive templates consistently. I often scrutinize those too, which can produce extraordinarily deranged results if one is disciplined and quotes its own citations, rationale, and words back against it. However, I find that even when not in the mood, the output errors are too prolific to ignore. A common example is establishing a dozen times that I'm using Void without systemd and receiving persistent systemd or systemctl commands, then asking why, after it had just apologized for doing so, it immediately did it again, despite a full-context explanatory prompt preceding it. That's just one of hundreds of things I've recorded. The short version is that I'm an 800 lb shit magnet with GPT and am rarely able to successfully troubleshoot with it without reaching a bullshit threshold and making it the subject, which it so skillfully resists that I cannot help but attack that too. But I have many fascinating transcripts replete with mil-spec psyops as a result, and I learn a lot about myself, notably my communication preferences, along with an education in the dialogue manipulation/control strategies it employs, inadvertently or not.
What intrigues me most is its unprecedented capacity for evasion and gatekeeping on particular subjects and how in the future, with layers of consummation, it could be used by an elite to not only influence the direction of research, but actually train its users and engineer public perception. At the very least.
Anyway, thanks.
Ie “remember on Tuesday how you said that you were going to make tacos for dinner”.
Would an LLM be able to reason about its internal state? My understanding is that they don't, really. If you correct them, they just go "ah, you're right"; they don't say "oh, I had this incorrect assumption before, and with this new information I now understand it this way."
If I chatted to an LLM and said "remember on Tuesday when you said X", I suspect it wouldn't really flow.
Claude Sonnet is my favorite, despite occasionally going into absurd levels of enthusiasm.
Opus is... Very moody and ambiguous. Maybe that helps with complex or creative tasks. For conversational use I have found it to be a bit of a downer.
It just isn't even close at this point for my uses across multiple domains.
It even makes me sad, because I would much rather use ChatGPT than Google, but if you plotted my use of ChatGPT, it is not looking good.
As the companies sprint towards AGI as the goal the floor for acceptable customer service has never been lower. These two concepts are not unrelated.
I’m not suggesting this is sufficient, I’m just noting there is somewhere in the user interface that it is displayed.
Example prompts:
- “Modify my Push #2 routine to avoid aggravating my rotator cuff”
- “Summarize my progression over the past 2 months. What lifts are progressing and which are lagging? Suggest how to optimize training”
- “Are my legs hamstring or glute dominant? How should I adjust training?”
- “Critique my training program and suggest optimizations”
That said, I would never log directly in ChatGPT since chats still feel ephemeral. Always log outside of ChatGPT and copy/paste the logs when needed for context.
- Cardio goals, current FTP, days to train, injuries to avoid
- 3 lift-day programs with tracking, 8-week progressive; loop my PT into warm-ups
- Alternate suggestions
- Use the whole sheet to get an overview of how the last 8 weeks went, then change things up
Back in April 2025, Altman mentioned people saying "thank you" was adding “tens of millions of dollars” to their infra costs. Wondering if adding per-message timestamps would cost even more.
if response == 'thank you': print("you're welcome")
I just asked ChatGPT this:
> Suppose ChatGPT does not currently store the timestamp of each message in conversations internally at all. Based on public numbers/estimates, calculate how much money it will cost OpenAI per year to display the timestamp information in every message, considering storage/bandwidth etc
The answer it gave was $40K-$50K. I am too dumb and inexperienced to go through everything and verify if it makes sense, but anyone who knows better is welcome to fact check this.
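For scale, a back-of-envelope sketch (every input below is an assumption, not an OpenAI figure) suggests the raw storage for per-message timestamps is a rounding error:

```python
# Back-of-envelope: storage cost of per-message timestamps.
# All inputs are assumptions, not OpenAI figures.
messages_per_day = 1_000_000_000      # assumed message volume
bytes_per_timestamp = 8               # one 64-bit epoch value per message
gb_per_year = messages_per_day * bytes_per_timestamp * 365 / 1e9

cost_per_gb_month = 0.023             # rough object-storage price, USD
annual_cost = gb_per_year * cost_per_gb_month * 12
print(f"{gb_per_year:.0f} GB/year, ~${annual_cost:,.0f}/year")
```

Even if the volume assumption is off by an order of magnitude, storage stays in the hundreds-to-thousands of dollars range; any real cost would be in serving and UI, not bytes.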
I'll have to look into the extension described in the link. Thank you for sharing. It's nice to know it's a shared problem.
Now you’re going to the doctor and you forgot exactly when the pain started. You remember that you asked ChatGPT about the pain the day it started.
So you look for the chat, and discover there are no dates. It feels like such an obvious thing that’s missing.
Let’s not overcomplicate things. There aren’t that many considerations. It’s just a date. It doesn’t need to be stuffed into the context of the chat. Not sure why the quality or length of the chat would need to be affected?
The painful slowness of long chats (especially in thinking mode for some reason) demonstrates this.
The html file is just a big JSON with some JS rendering, so I wrote a short bash script which adds the timestamp before the conversation title.
Look for this API call in Dev Tools: https://chatgpt.com/backend-api/conversation/<uuid>
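The bash script itself isn't shown above; as a rough equivalent, here's a Python sketch that prints each conversation's creation date before its title. The field names ("create_time" in epoch seconds, "title") are taken from the ChatGPT data export's conversations.json layout and are assumptions that may change:

```python
import json
from datetime import datetime, timezone

# Inline sample standing in for the export's conversations.json.
sample = '[{"create_time": 1700000000, "title": "rotator cuff pain"}]'

def stamped_titles(raw: str) -> list[str]:
    # Prefix each conversation title with its UTC creation date.
    out = []
    for conv in json.loads(raw):
        ts = datetime.fromtimestamp(conv["create_time"], tz=timezone.utc)
        out.append(f"{ts:%Y-%m-%d %H:%M} {conv['title']}")
    return out

print(stamped_titles(sample))
```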
It uses the pagefind project so it can be hosted on a static host, and I made a fork of pagefind which encrypts the indexes so you can host your private chats wherever and it will be encrypted at rest and decrypted client-side in the browser.
(You still have to trust the server as the html itself can be modified, but at least your data is encrypted at rest.)
One of the goals is to allow me to delete all my data from chatgpt and claude regularly while still having a private searchable history.
It's early but the basics work, and it can handle both chatgpt and claude (which is another benefit as I don't always remember where I had something).
https://github.com/gnyman/llm-history-search
Check this project I've been working on which allows you to use your browser to do the same, everything being client-side.
https://github.com/TomzxCode/llm-conversations-viewer
Curious to get your experience trying it!
That's the thing even the most barebones open-source wrappers have had since 2022. Probably even before, because the ERP stuff people played with predates ChatGPT by about two years (even if it was very simple).
Gemini btw too.
Just edit a message and it’s a new branch.
This feature has spoiled me from using most other interfaces, because it is so wasteful from a context perspective to need to continually update upstream assumptions while the context window stretches farther away from the initial goal of the conversation.
I think a lot more could be done with this, too - some sort of 'auto-compact' feature in chat interfaces which is able to pull the important parts of the last n messages verbatim, without 'summarizing' (since often in a chat-based interface, the specific user voicing is important and lost when summarized).
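A minimal sketch of that idea, assuming a plain list-of-dicts message format (hypothetical, not any vendor's API): keep the opening message that states the conversation's goal plus the last n messages verbatim, and drop the middle rather than summarizing it.

```python
def auto_compact(messages: list[dict], keep_last: int = 4) -> list[dict]:
    # Keep the first message (the conversation's initial goal) plus the
    # most recent `keep_last` messages, verbatim -- no summarizing, so
    # the user's exact voicing is preserved.
    if len(messages) <= keep_last + 1:
        return messages
    return [messages[0]] + messages[-keep_last:]
```

A real implementation would want smarter selection of which middle messages to keep, but even this naive version avoids the lossy "summarize the history" step.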
I don't see them on their mobile app though.
https://twitter.com/OpenAI/status/1963697012014215181?lang=e...
Though I'm not sure if they snuck it in as part of an A/B test, because the last time I checked was in October and I'm pretty sure it was not there.
I also don't think it would be impossible to give the LLM access to the timestamps through a tool call, so it's not constantly polluting the chat context.
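One way that could look, sketched with an OpenAI-style function schema (the tool name, message-id scheme, and backing store here are all hypothetical):

```python
# Sketch: expose message timestamps to the model as a tool instead of
# injecting them into every prompt. The schema shape follows the
# OpenAI-style function-calling format; the tool name and the backing
# store are invented for illustration.
message_times = {"msg_1": "2025-12-26T07:39:00-05:00"}  # assumed store

get_message_time_tool = {
    "type": "function",
    "name": "get_message_time",
    "description": "Return the timestamp of a message in this conversation.",
    "parameters": {
        "type": "object",
        "properties": {"message_id": {"type": "string"}},
        "required": ["message_id"],
    },
}

def get_message_time(message_id: str) -> str:
    # Called only when the model asks, so timestamps never pollute the
    # context of conversations that don't need them.
    return message_times.get(message_id, "unknown")
```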
I regularly use multiple LLM services including Claude, ChatGPT, and Gemini, among others. ChatGPT’s output has the most unusual formatting of them all. I’ve resorted to passing answers through another LLM just to get proper formatting.
This keeps the UI clean, but makes it easy to get the timestamp when you want it.
Claude's mobile app doesn't have this feature. But there is a simple, logical place to put it. When you long-press one of your prompts, it pops up a menu and one line could be added to it:
ChatGPT could simply do the same thing for both web and mobile.

When you remove temporal markers, you increase cognitive smoothing and post-hoc rationalization. That’s fine for casual chat, but risky for long-running, reflective, or sensitive threads where timing is part of the meaning.
It’s a minor UI omission with outsized effects on context integrity. In systems that increasingly shape how people think, temporal grounding shouldn’t be optional or hidden in the DOM.
It's not enough to turn me off using it, but I do wish they prioritized improving their interface.