Dell Admits Consumers Don't Care About AI PCs
Key topics
Dell's admission that consumers don't care about AI PCs has sparked a lively debate, with some commenters pointing out that the company's honesty might be costly, as evidenced by its stock tanking 15%. While some see this as a refreshing dose of realism, others note that investors are still hooked on the AI hype, making it a tough sell for companies to ditch the trend. The discussion reveals a divide between those who see AI as a valuable tool, particularly for coding, and those who view it as an unnecessary fad, with some predicting that over-reliance on AI will lead to a loss of fundamental skills. As the AI bubble continues to be scrutinized, this conversation feels timely, highlighting the tension between technological innovation and investor expectations.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 43m after posting
- Peak period: 59 comments in 0-6h
- Avg per period: 17.8
- Based on 160 loaded comments
Key moments
- Story posted: Jan 7, 2026 at 10:46 AM EST (2d ago)
- First comment: Jan 7, 2026 at 11:28 AM EST (43m after posting)
- Peak activity: 59 comments in 0-6h (hottest window of the conversation)
- Latest activity: Jan 9, 2026 at 4:14 PM EST (2h ago)
Making consumers want things is fixable in any number of ways.
Tariffs?..
Supply chain issues in a fracturing global order?..
.. not so much. Only a couple ways to fix those things, and they all involve nontrivial investments.
Even longer term threats are starting to look more plausible these days.
Lot of unpredictability out there at the moment.
Unfortunately investors are not ready to hear that yet...
But when I come on HN and see people posting about AI IDEs and vibe coding and everything, I'm led to believe that there are developers that like this sort of thing.
I cannot explain this.
But the fact remains that I'm producing something for a machine to consume. When I see people using AI to e.g. write e-mails for them that's where I object: that's communication intended for humans. When you fob that off onto a machine something important is lost.
It's okay, you'll just forget you were ever able to know your code :)
But I wasn't talking about forgetting one language or another; I was talking about forgetting how to program completely.
That usually means you're missing something, not that everyone else is.
The guy coding in C++ still has a great job; he didn't miss anything. It's all fucking FOMO.
I've also had luck with it helping with debugging. It has the knowledge of the entire Internet and it can quickly add tracing and run debugging. It has helped me find some nasty interactions that I had no idea were a thing.
AI certainly has some advantages in certain use cases; that's why we have been using AI/ML for decades. The latest wave of models brings even more possibilities. But of course, it also brings a lot of potential for abuse and a lot of hype. I, too, am quite sick of it all and can't wait for the bubble to burst so we can get back to building effective tools instead of making wild claims for investors.
"This package has been removed, grep for string X and update every reference in the entire codebase" is a great conservative task; easy to review the results, and I basically know what it should be doing and definitely don't want to do it.
"Here's an ambiguous error, what could be the cause?" sometimes comes up with nonsense, but sometimes actually works.
I never use LLMs
this is their aim, along with rabbiting on about "inevitability"
once you drop out of the SF/tech-oligarch bubble the advocacy drops off
I can see a trend of companies continuing to use AI, but instead portraying it to consumers as "advanced search", "nondeterministic analysis", "context-aware completion", etc - the things you'd actually find useful that AI does very well.
Anyone technical enough to jump into local AI usage can probably see through the hardware fluff, and will just get whatever laptop has the right amount of VRAM.
They are just hoping to catch the trend chasers out, selling them hardware they won't use, mistaking it for a requirement for using ChatGPT in the browser.
What they don't want is an AI sticker slapped onto existing products with no real value. What does an 'AI PC' do that a regular PC does not? ChatGPT is a cloud app, it will run fine on any machine.
The return back to physical buttons makes the XPS look pretty appealing again.
I love my 2020 XPS.
The keyboard keys on mine do not rattle, but I have seen newer XPS keyboard keys that do rattle. I hope they fixed that.
It also looks like names are being changed, and the business laptops are going with a Dell Pro (essential/premium/plus/max) naming convention.
https://www.youtube.com/watch?v=J4yl2twJswM
I wish every consumer product leader would figure this out.
Consumer AI has never really made any sense. It's going to end up in the same category of things as 3D TVs, smart appliances, etc.
It's all optics, it's all grift, it's all gambling.
With more of the compute being pushed off of local hardware, they can cheap out on said hardware with smaller batteries, fewer ports and features, and weaker CPUs. This lessens the pressure they feel from consumers who were taught by corporations in the 20th century that improvements will always come year over year. They can sell less complex hardware and make up for it with software.
For the hardware companies it's all rent seeking from the top down. And the push to put "AI" into everything is a blitz offensive to make this impossible to escape. They just need to normalize non-local computing and have it succeed this time, unlike when they tried it with the "cloud" craze a few years ago. But the companies didn't learn the intended lesson last time when users straight up said that they don't like others gatekeeping the devices they're holding right in their hands. Instead the companies learned they have to deny all other options so users are forced to acquiesce to the gatekeeping.
I don't want AI involved in my laundry machines. The only possible exception I could see would be some sort of emergency-off system, but I don't think that even needs to be "AI". But I don't want AI determining when my laundry is adequately washed or dried; I know what I'm doing, and I neither need nor want help from AI.
I don't want AI involved in my cooking. Admittedly, I have asked ChatGPT for some cooking information (sometimes easier than finding it on slop-and-ad-ridden Google), but I don't want AI in the oven or in the refrigerator or in the stove.
I don't want AI controlling my thermostat. I don't want AI controlling my water heater. I don't want AI controlling my garage door. I don't want AI balancing my checkbook.
I am totally fine with involving computers and technology in these things, but I don't want it to be "AI". I have way less trust in nondeterministic neural network systems than I do in basic well-tested sensors, microcontrollers, and tiny low-level C programs.
Have some half-decent model integrated with the OS's built-in image editing app so the average user can do basic fixes to their vacation photos with a few prompts
Have some local model with access to files automatically tag your photos, maybe even ask some questions and add tags based on that, and then use the tags for search ("give me that photo of that person from last year's vacation"); a rough sketch of this idea follows below
Similarly with chat records
But once you start throwing it in cloud... people get anxious about their data getting lost, or might not exactly see the value in subscription
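A rough sketch of that photo-tagging idea, assuming a hypothetical `describeImage` wrapper around whatever on-device vision model is installed; none of these names are a real API:

```typescript
// Hypothetical sketch: tag photos with a local vision model, then search the tags offline.
import { readdirSync, writeFileSync } from "fs";
import { join } from "path";

type Tagger = (photoPath: string) => Promise<string[]>; // e.g. ["beach", "sunset", "two people"]

async function buildTagIndex(photoDir: string, describeImage: Tagger): Promise<Record<string, string[]>> {
  const index: Record<string, string[]> = {};
  for (const file of readdirSync(photoDir).filter((f) => /\.(jpe?g|png|heic)$/i.test(f))) {
    index[file] = await describeImage(join(photoDir, file)); // runs locally; nothing leaves the machine
  }
  writeFileSync(join(photoDir, "tags.json"), JSON.stringify(index, null, 2)); // plain file, no cloud, no subscription
  return index;
}

// Naive search: every query term must match some tag. A fuller version would let the
// same local model turn "that person from last year's vacation" into query terms.
function searchPhotos(index: Record<string, string[]>, terms: string[]): string[] {
  return Object.entries(index)
    .filter(([, tags]) => terms.every((t) => tags.some((tag) => tag.toLowerCase().includes(t.toLowerCase()))))
    .map(([file]) => file);
}
```

Keeping the index as a local file is the point: the value pitch collapses once the tags have to live in someone else's cloud.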
On the other hand everyone non-technical I know under 40 uses LLMs and my 74 year old dad just started using ChatGPT.
You could use a search engine and hope someone answered a close enough question (and wade through the SEO slop), or just get an AI to actually help you.
For example, if you close a youtube browser tab with a comment half written it will pop up an `alert("You will lose your comment if you close this window")`. It does this if the comment is a 2 page essay or "asdfasdf". Ideally the alert would only happen if the comment seemed important but it would readily discard short or nonsensical input. That is really difficult to do in traditional software but is something an LLM could do with low effort. The end result is I only have to deal with that annoying popup when I really am glad it is there.
That is a trivial example but you can imagine how a locally run LLM that was just part of the SDK/API developers could leverage would lead to better UI/UX. For now everyone is making the LLM the product, but once we start building products with an LLM as a background tool it will be great.
It is actually a really weird time: my whole career we wanted to obfuscate implementation and present a clean UI to end users, and we want them peeking behind the curtain as little as possible. Now everything is like "This is built with AI! This uses AI!".
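For what it's worth, the background-tool version of the YouTube example above might look roughly like this. The local endpoint, model name, response shape, and the `#comment` selector are all hypothetical placeholders, not any real API:

```typescript
// Hypothetical sketch of an LLM as a background UX tool: a small local model classifies
// the draft as worth protecting while the user types, and the cached verdict decides
// whether to show the "you will lose your comment" prompt.
let draftWorthKeeping = false;

async function classifyDraft(draft: string): Promise<boolean> {
  if (!draft.trim()) return false;
  const res = await fetch("http://localhost:8080/v1/classify", { // hypothetical local runtime
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "tiny-local-model", // hypothetical model name
      prompt: `Would a person mind losing this comment draft? Answer yes or no.\n\n${draft}`,
    }),
  });
  const { answer } = await res.json(); // assumed response shape
  return /yes/i.test(answer);
}

const box = document.querySelector<HTMLTextAreaElement>("#comment")!; // hypothetical selector
box.addEventListener("input", () => {
  // Classify in the background (a real version would debounce this);
  // beforeunload itself cannot await anything.
  classifyDraft(box.value)
    .then((v) => { draftWorthKeeping = v; })
    .catch(() => { draftWorthKeeping = true; }); // model unavailable: fail safe, keep the warning
});

window.addEventListener("beforeunload", (e) => {
  if (draftWorthKeeping) e.preventDefault(); // browser shows its generic "leave site?" prompt
});
```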
There isn't even an "I've watched this" or "don't suggest this video anymore" option. You can only say "I'm not interested", which I don't want to do because it seems like it will downrank the entire channel.
Even if that is the case, I rarely watch the same video, so the recommendation engine should be able to pick that up.
But that's the real problem. You can't just average everyone and apply that result to anyone. The "average of everyone" fits exactly NO ONE.
The US Navy figured this out long ago in a famous anecdote in fact. They wanted to fit a cockpit to the "average" pilot, took a shitload of measurements of a lot of airmen, and it ended up nobody fit.
The actual solution was customization and accommodations.
Convince them to sink their fortunes in, and then we just make sure it pops.
I don't think that's a great example, because you can evaluate the length of the content of a text box with a one-line "if" statement. You could even expand it to check for how long you've been writing, and cache the contents of the box with a couple more lines of code.
An LLM, by contrast, requires a significant amount of disk space and processing power for this task, and it would be unpredictable and difficult to debug, even if we could define a threshold for "important"!
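That non-LLM version is simple enough to spell out. A minimal sketch, mirroring the hypothetical LLM sketch earlier; the thresholds and the `#comment` selector are made up for illustration:

```typescript
// Heuristic "warn before discarding a draft": warn only when the draft looks like real work.
const MIN_CHARS = 40;            // made-up threshold
const MIN_SECONDS_TYPING = 10;   // made-up threshold

let firstKeystrokeAt: number | null = null;
const box = document.querySelector<HTMLTextAreaElement>("#comment")!; // hypothetical selector

box.addEventListener("input", () => {
  if (firstKeystrokeAt === null) firstKeystrokeAt = Date.now();
});

window.addEventListener("beforeunload", (e) => {
  const typedLongEnough =
    firstKeystrokeAt !== null && Date.now() - firstKeystrokeAt > MIN_SECONDS_TYPING * 1000;
  if (box.value.trim().length >= MIN_CHARS && typedLongEnough) {
    e.preventDefault(); // modern browsers then show a generic "leave site?" prompt
  }
});
```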
Sort of like how most of the time when people proposed a non-cryptocurrency use for "blockchain", they had either re-invented Git or re-invented the database. The similarity to how people treat "AI" is uncanny.
Likewise, when smartphones were new, everyone and their mother was certain that some random niche thing that made no sense as an app would be a perfect app, and that if they could just get someone to make the app they'd be rich. (And of course, ideally, the idea-haver of the misguided idea would get the lion's share of the riches, and the programmer would get a slice of pizza and perhaps a percentage or two of ownership if the idea-haver was extra generous.)
The funny thing is that this exact example could also be used by AI skeptics. It's forcing an LLM into a product with questionable utility, causing it to cost more to develop, be more resource-intensive to run, and behave in a manner that isn't consistent or reliable. Meanwhile, if there were an incentive to tweak that alert based on the likelihood of its usefulness, there could have always just been a check on the length of the text. Suggesting this should be done with an LLM as your specific example is evidence that LLMs are solutions looking for problems.
If the computer can tell the difference and be less annoying, it seems useful to me?
We should keep in mind that we're trying to optimize for user's time. "So, she cheated on me" takes less than a second to type. It would probably take the user longer to respond to whatever pop up warning you give than just retyping that text again. So what actual value do you think the LLM is contributing here that justifies the added complexity and overhead?
Plus that benefit needs to overcome the other undesired behavior that an LLM would introduce such as it will now present an unnecessary popup if people enter a little real data and intentionally navigate away from the page (and it should be noted, users will almost certainly be much more likely to intentionally navigate away than accidentally navigate away). LLMs also aren't deterministic. If 90% of the time you navigate away from the page with text entered, the LLM warns you, then 10% of the time it doesn't, those 10% times are going to be a lot more frustrating than if the length check just warned you every single time. And from a user satisfaction perspective, it seems like a mistake to swap frustration caused by user mistakes (accidentally navigating away) with frustration caused by your design decisions (inconsistent behavior). Even if all those numbers end up falling exactly the right way to slightly make the users less frustrated overall, you're still trading users who were previously frustrated at themselves for users being frustrated at you. That seems like a bad business decision.
Like I said, this all just seems like a solution in search of a problem.
Close enough for the issue, to me, and it can't be more expensive than asking an LLM?
If I want to close the tab of unsubmitted comment text, I will. I most certainly don’t need a model going “uhmmm akshually, I think you might want that later!”
Literally "T-shirt with Bluetooth", that's what 99.98% of "AI" stickers today advertise.
That doesn't sound ideal at all. And in fact highlights what's wrong with AI product development nowadays.
AI as a tool is wildly popular. Almost everyone in the world uses ChatGPT or knows someone who does. Here's the thing about tools - you use them in a predictable way and they give you a predictable result. I ask a question, I get an answer. The thing doesn't randomly interject when I'm doing other things and I asked it nothing. I swing a hammer, it drives a nail. The hammer doesn't decide that the thing it's swinging at is vaguely thumb-shaped and self-destruct.
Too many product managers nowadays want AI to not just be a tool, they want it to be magic. But magic is distracting, and unpredictable, and frequently gets things wrong because it doesn't understand the human's intent. That's why people mostly find AI integrations confusing and aggravating, despite the popularity of AI-as-a-tool.
The hard part is that the AI needs to be correct when it does something unexpected. I don't know if this is a solvable problem, but it is what I want.
I want reproducibility not magic.
If your "AI" light switch doesn't turn on the lights, you have to rephrase the prompt.
Sawstop literally patented this and made millions and seems to have genuinely improved the world.
I personally am a big fan of tools that make it hard to mangle my body parts.
If you want to tell me that llms are inherently non-deterministic, then sure, but from the point of view of a user, a saw stop activating because the wood is wet is really not expected either.
(Though, of course, there certainly are people who dislike sawstop for that sort of reason, as well.)
That is more what I am advocating for: subtle background UX improvements based on an LLM's ability to interpret a user's intent. We had limited abilities to look at an application's state and try to determine a user's intent, but it is easier to do that with an LLM. Yeah, like you point out, some users don't want you to try and predict their intent, but if you can do it accurately a high percentage of the time, it is "magic".
Rose-tinted glasses perhaps, but I remember it as a very straightforward and consistent UI that provided great feedback, was snappy, and did everything I needed. Up to and including little hints for power users, like underlining shortcut letters via the & prefix.
And even that's only because browsers ended up in a weird "windows but tabs but actually tabs are windows" state.
So yeah, I'd miss the UX of dragging tabs into their own separate windows.
But even that is something that still feels janky in most apps (Windows Terminal somehow makes this feel bad; even VS Code took a long time to make it feel okay), and I wouldn't really miss it that much if there were no tabs at all and every tab was forced into a separate window at all times with its own taskbar entry.
The real stuff not on Win95 that everyone would miss is scalable interfaces/high DPI (not necessarily HiDPI, just anything above 640x480). And this one does require A LOT of resources and is still wobbly.
You could have multiple windows, and you could have MDI windows, but you couldn't have shared task bar icons that expand on hover to let you choose which one to go to.
If you mean that someone could write a replacement shell that did that, then maybe, but at that point it's no longer really windows 95.
Microsoft seems not to believe that users want to use search primarily as an application launcher, which is strange because Mac, Linux, and mobile have all converged on it.
I'd wager it's more likely to be the opposite.
Older UIs were built on solid research. They had a ton of subtle UX behaviors that users didn't notice were there, but helped in minor ways. Modern UIs have a tendency to throw out previous learning and to be fashion-first. I've seen this talked about on HN a fair bit lately.
Using an old-fashioned interface, with 3D buttons to make interactive elements clear, and with instant feedback, can be a nicer experience than having to work with the lack of clarity, and relative laggyness, of some of today's interfaces.
Yes. For example, Chrome literally just broke middle-click paste in this box when I was responding. It sets the primary selection to copy, but fails to use it when pasting.
Middle click to open in new tab is also reliably flaky.
I really miss the UI consistency of the 90s and early 2000s.
And nobody relied on them when they were distracting and unpredictable. People only rely on them now because they are not.
LLMs won't ever be predictable. They are designed not to be. A predictable AI is something different from a LLM.
Like what? All those popups screaming that my PC is unprotected because I turned off windows firewall?
One of the episodes had them using Windows 98. As I recall, the reaction was more or less "this is pretty ok, actually".
No idea if they are AI; Netflix doesn't tell and I don't ask.
AI is just a toxic brand at this point IMO.
https://en.wikipedia.org/wiki/Netflix_Prize
It doesn’t fix the content problem these days though.
I read this post yesterday and this specific example kept coming back to me because something about it just didn't sit right. And I finally figured it out: Glancing at the alert box (or the browser-provided "do you want to navigate away from this page" modal) and considering the text that I had entered takes... less than 5 seconds.
Sure, 5 seconds here and there adds up over the course of a day, but I really feel like this example is grasping at straws.
Granted, it seems the even better UX is to save what the user inputs and let them recover if they lost something important. That would also help for other things, like crashes, which have also burned me in the past. But tradeoffs, as always.
I'm not sure we need even local AI's reading everything we do for what amounts to a skill issue.
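The save-and-recover approach mentioned above needs no model at all. A minimal sketch using localStorage; the selectors and storage key are illustrative:

```typescript
// Persist the draft locally as the user types and restore it when they come back,
// so closing the tab no longer destroys anything.
const DRAFT_KEY = "comment-draft"; // illustrative key
const box = document.querySelector<HTMLTextAreaElement>("#comment")!; // hypothetical selector

// Restore a previous draft, if any.
const saved = localStorage.getItem(DRAFT_KEY);
if (saved) box.value = saved;

// Persist on every keystroke.
box.addEventListener("input", () => {
  localStorage.setItem(DRAFT_KEY, box.value);
});

// Clear the draft once the comment is actually submitted.
document.querySelector<HTMLFormElement>("#comment-form")?.addEventListener("submit", () => {
  localStorage.removeItem(DRAFT_KEY);
});
```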
Wouldn't you just hit undo? Yeah, it's a bit obnoxious that Chrome for example uses cmd-shift-T to undo in this case instead of the application-wide undo stack, but I feel like the focus for improving software resilience to user error should continue to be on increasing the power of the undo stack (like it's been for more than 30 years so far), not trying to optimize what gets put in the undo stack in the first place.
The problem is that by agreeing to close the tab, you're agreeing to discard the comment. There's currently no way to bring it back. There's no way to undo.
AI can't fix that. There is Microsoft's "snapshot" thing but it's really just a waste of storage space.
Because:
1. Undo is usually treated as an application-level concern, meaning that once the application has exited there is no undo function, as it is normally thought of, available. The 'desktop environment' integration necessary for this isn't commonly found.
2. Even if the application is still running, it only helps if the browser has implemented it. You mention Chrome has it, which is good, but Chrome is pretty lousy about just about everything else, so... Pick your poison, I guess.
3. This was already mentioned as the better user experience anyway, so it is not exactly clear what you are trying to add. Did you randomly stop reading in the middle?
I tell the computer what to do, not the other way around.
So much of it nowadays is like the blockchain craze, trying to use it as a solution for every problem until it sticks.
there are definitely useful applications for end user features, but a lot of this is ordered from on-high top-down and product managers need to appease them...
I'd put this in "save 5 seconds daily" to be generous. Remember that this is time saved over 5 years.
No, ideally I would be able to predict and understand how my UI behaves, and train muscle memory.
If closing a tab would mean losing valuable data, the ideal UI would allow me to undo it, not try to guess if I cared.
This AI summer is really kind of a replay of the last AI summer. In a recent story about expert systems seen here on Hacker News, there was even a description of Gary Kildall from The Computer Chronicles expressing skepticism about AI that parallels modern-day AI skepticism. LLMs and CNNs will, as you describe, settle into certain applications where they'll be profoundly useful, become embedded in other software as techniques rather than an application in and of themselves... and then we won't call them AI. Winter is coming.
I've seen people argue that the goalposts keep moving with respect to whether or not something is considered AI, but that's because you can argue that a lot of things computers do are artificial intelligence. Once something becomes commonplace and well understood, it's not useful to communicate about it as AI.
I don't think the term AI will "stick" to a given technology until AGI (or something close to it).
I agree this would be a great use of LLMs! However, it would have to be really low latency, like on the order of milliseconds. I don't think the tech is there yet, although maybe it will be soon-ish.
It’s already there for Apple developers: https://developer.apple.com/documentation/foundationmodels
I saw some presentations about it last year. It’s extremely easy to use.
When "asdfasdf" is actually a package name, and it's in reply to a request for an NPM package, and the question is formulated in a way that makes it hard for LLMs to make that connection, you will get a false positive.
I imagine this will happen more than not.
Ideally, in my view, the browser asks you if you are sure regardless of content.
I use LLMs, but that browser "are you sure" type of integration is adding a massive amount of work to do something that ultimately isn't useful in any real way.
Are you sure about that? It will trigger only for what the LLM declares important, not what you care about.
Is anyone delivering local LLMs that can actually be trained on your data? Or just pre-made models for the lowest common denominator?
Google isn’t running ads on TV for Google Docs touting that it uses conflict-free replicated data types, or whatever, because (almost entirely) no one cares. Most people care the same amount about “AI” too.
I don't think an NPU has that capability.
People have more or less converged on what they want in a desktop computer over the last ~30 years. I'm not saying that there isn't room for improvement, but I am saying that I think we're largely at the state of "boring", and improvements are generally going to be more incremental. The problem is that "slightly better than last year" really isn't a super sexy thing to tell your shareholders. Since the US economy has basically become a giant Ponzi scheme based more on vibes than actual solid business, everything sort of depends on everything being super sexy and revolutionary and disruptive at all times.
As such, there are going to be many attempts from companies to "revolutionize" the boring thing that they're selling. This isn't inherently "bad", we do need to inject entropy into things or we wouldn't make progress, but a lazy and/or uninspired executive can try and "revolutionize" their product by hopping on the next tech bandwagon.
We saw this nine years ago with "Long Blockchain Ice Tea" [1], and probably way farther back all the way to antiquity.
[1] https://en.wikipedia.org/wiki/Long_Blockchain_Corp.
Do consumers understand that OEM device price increases are due to the AI-induced memory price spike of more than 100%?