Apple's Slow AI Pace Becomes a Strength as Market Grows Weary of Spending
Key topics
As the AI frenzy reaches a fever pitch, Apple's cautious approach is suddenly looking like a savvy move, with some commenters hailing its restraint as a strength in a market growing weary of AI hype. While some users aren't clamoring for more AI integration, others worry that Apple's lag could become a liability if customers start expecting more AI-powered features. The discussion reveals a nuanced debate, with some, like Lalabadie, praising Apple's focus on on-device SLMs, while others, like engcoach, lament a perceived cultural rot that's stifling innovation. As empath75 notes, Apple can still thrive as an AI consumer, not necessarily a producer, and jtbayly predicts they'll produce their own AI once the dust settles.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 6m after posting
- Peak period: 69 comments in 0-2h
- Avg / period: 13.3 comments
Based on 160 loaded comments
Key moments
- Story posted: Dec 9, 2025 at 10:08 AM EST (about 1 month ago)
- First comment: Dec 9, 2025 at 10:14 AM EST (6m after posting)
- Peak activity: 69 comments in 0-2h, the hottest window of the conversation
- Latest activity: Dec 10, 2025 at 8:01 PM EST (30 days ago)
I think the decision is first a self-serving one that's in line with how they want their devices and services to operate, but it also happens to be (in my opinion) the future-proof way of integrating consumer AI.
Apple has generally been a company that waits, gets criticized for being behind, and then produces a better version (more usable, better integrated, etc), claims it is new, and everybody buys it. Meanwhile a few people moan about how Apple wasn't actually the first to make it.
The golden goose is dead.
From a user perspective it may not be a strength: users / customers may expect certain functionality that works accurately and responsively.
I certainly never heard anyone complain in real life.
But admittedly, most of those people are established adults who've figured out an effective rhythm to their home and work life and aren't longing for some magic remedy or disruption. They're not necessarily weary, and they were curious at first, but it seems like they're mostly just waiting for either the buzz to burn off or for some "it just works" product to finally emerge.
I imagine there are younger people wowed by the apparent magic of what we have now and excited that they might use it to punch up the homework assignments or emails or texts that make them anxious, or who might enjoy toying with it as a novel tool for entertainment and creative idling. Maybe these are some of the people in your "real life".
There are a lot of people out there in "real life", bringing different perspectives and needs.
What I meant specifically was that I don't remember anyone complaining about AI features getting in the way or being shoehorned. That particular complaint seems popular only on Reddit or HN.
I work at a coworking space. Most of the folks I've worked alongside had active chats in ChatGPT for all sorts of stuff. I've also seen devs use AI copilots, like Copilot and Codex. I feel downright old when I drop into fullscreen vim on my Mac.
AI art is also used everywhere. Especially by bars and restaurants. So many AI happy hour/event promo posters now, complete with text (the AI art font is kind of samey for some reason). I've even seen (what look like) AI-generated logos on work trucks.
People are getting use out of LLMs, 100%. Yet the anti-AI sentiment is through the roof. Maybe it's like social media where the most vocal opponents are secretly some of its most active users. Idk.
Those of this group who use AI mostly ignore poor rebadges and integrations like MS Copilot and just use ChatGPT and Claude directly. They prefer it to remain intentional and contained within a box that they control the bounds of.
A few months ago, MCP-style tool calling seemed like the clear standard. Now even Anthropic is shifting toward "code-mode" and reusable skills.
For Apple, reliable tool calling is critical because their AI needs to control apps and the whole device. My bet: Apple's AI will be able to create its own Shortcuts on the fly and call them as needed, with OSA Script support on Mac.
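If Apple did go the Shortcuts-as-tools route, the host side would follow the generic tool-calling pattern sketched below (all names here are hypothetical, and this is not any Apple API): the model emits a structured call, and the host validates it against a registry of allowed actions before executing anything.

```python
# Minimal tool-calling dispatch sketch. The "registry" stands in for a set of
# user-created Shortcuts or OS actions; the model only ever names one of them.

def make_registry():
    """Host-side actions the model is allowed to invoke (hypothetical)."""
    return {
        "get_battery_level": lambda: 87,            # stand-in for a device query
        "open_app": lambda name: f"opened {name}",  # stand-in for an OS action
    }

def dispatch(call, registry):
    """Validate and run one model-proposed tool call."""
    name, args = call["name"], call.get("args", [])
    if name not in registry:
        raise ValueError(f"model requested unknown tool: {name}")
    return registry[name](*args)

registry = make_registry()
print(dispatch({"name": "open_app", "args": ["Maps"]}, registry))  # opened Maps
```

The key property is that execution stays deterministic and bounded: the model proposes, the host disposes, which is what makes on-the-fly Shortcuts plausible as a tool surface.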
As Google integrates Gemini into their Google Assistant and Google Home products, if it starts to become leaps and bounds better than Siri, customers are going to start wondering why Apple is falling behind. If Apple can't achieve those things without AI, that could cause problems. Customers aren't saying "I want AI features", but they are indirectly asking for them, because the features they want require AI to do what they expect.
(I realize Google and Apple have a deal happening to have Gemini integrated into Siri so this isn't the best example, but I think it illustrates the point I'm trying to make)
It would be like how MS is currently forcing their Copilot everywhere; it is totally useless and a nuisance.
It's certainly been useful in my organization.
Copilot can search even in PowerPoints. Being able to search your organisation's documents is kind of a killer feature, provided they make it work reliably.
Kati’s Research AI is genuinely great at search. It tries to answer your question, but also directly cites resources. This can help you when you’re not sure where the answer to a question lies, and it winds up being in multiple places.
Unless your query is super simple and of low consequence, you still need to open the files. But LLM-powered search is like the one domain (apart from coding) where these fuckers work.
I have yet to see AI functionality people are dying for.
Not to say Apple isn't also degrading their OS with bad design changes, but "more AI" is not something users are clamoring for.
Let everyone else pay for the research and make the mistakes, find out what works and what doesn't. Apple already has the consumers, they might as well save a few (hundred?) bn in the process and later deploy something which doesn't tell you to glue your cheese to your pizza.
When in reality, they _wanted_ to but have become so organizationally dysfunctional that they weren't able to. Kind of funny how that worked out.
I still think they're really dropping the ball. They could have local models running on devices, interfacing with a big cloud partner (Google, OpenAI, etc.) Make Siri awesome. But no.
See Gemini Nano. It is available in custom apps, but the results are so bad; factual errors and hallucinations make it useless. I can see why Google did not roll it out to users.
Even if it was significantly better, inference is still slow. Adding a few milliseconds of network latency for contacting a server and getting a vastly superior result is going to be preferable in nearly all scenarios.
Arguments can be made for privacy or lack of connectivity, but it probably does not matter to most people.
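The tradeoff in the last two comments can be made concrete with a toy latency model (the numbers below are illustrative assumptions, not benchmarks): generation time dominates, so a slow local model loses to a fast remote one even after paying a network round trip.

```python
# End-to-end time for generating a response: network adds a fixed round trip,
# but token generation speed dominates for any non-trivial output length.
def total_latency_ms(output_tokens, tok_per_s, network_rtt_ms=0.0):
    return network_rtt_ms + 1000.0 * output_tokens / tok_per_s

local = total_latency_ms(300, tok_per_s=15)                      # phone-class model
cloud = total_latency_ms(300, tok_per_s=150, network_rtt_ms=60)  # datacenter model
print(round(local), round(cloud))  # 20000 2060
```

At these assumed speeds the 60 ms round trip is noise next to a 10x difference in generation rate, which is the commenter's point about milliseconds of latency being preferable.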
Local model answers and reaches into the cloud for hard tokens.
Apple Music is an ecosystem play.
lmao, even
They are flat-out incompetent. Siri has somehow regressed over the years and visual intelligence only works in demos. They have the most abominable integration with ChatGPT imaginable.
At least the MLX team has been shipping an impressive product.
Me: Nah, it doesn't. I get fine-grained app permissions but there's a certain absurdity in using voice control for your CarPlay app, where Apple Maps is currently navigating you home, and you say "Find me the nearest Panera" and the reply is "Sorry, I don't know where you are."
The reason there was such a narrative is because Wall Street and Silicon Valley are both narrative machines with little regard for veracity, and they are also not that smart (at least according to people who successfully beat their system, such as Buffett).
"Warren, if people weren't so often wrong, we wouldn't be so rich." – the late great Charlie Munger.
You don't have to send all your thoughts to a third party. That's the big advantage.
I know it's fashionable to shit-talk AI and Google, and lord knows I dislike the latter, but Gemini works and is day-to-day useful.
It’s not the same, but PMs and VPs at my company think we can vibe code our way out of migrating a 1.6 million line codebase to a newer language / technology. Or that our problems can be solved by acquiring an AI startup, whose front end looks exactly the same as every other AI startup’s front page, and slapping a new CSS file that looks like that startup on top of our existing SPA because their product doesn’t actually do anything. It’s an absurd world out there.
I find a lot of the low-key things helpful: I use an app at the same time and place every day, and it’s nice to have a handy one-tap way to open it. It does a decent job organizing photos and letting me search text in screenshots.
Now, after a few months (!), reality sets in and those hyped-up investors realize that it's not as much of a short-term game as they told themselves it would be...
I hope they adopt the same model with AI - leverage whatever frontier model is best and provide their own privacy infrastructure in front.
At some point Apple will figure out a way to provide the right info from your calendar, messages, email etc as context and couple this with a bunch of secure tools for creating calendar entries, etc. Agentic AI will then be something I personally benefit from.
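A minimal sketch of the "secure tools" idea, assuming a host-validated proposal format (all field names hypothetical): the model proposes a calendar entry as plain data, and the host checks it before anything touches the real calendar.

```python
from datetime import datetime

def validate_event(proposal):
    """Reject malformed model proposals before they reach the calendar store."""
    required = {"title", "start", "end"}
    missing = required - proposal.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    start = datetime.fromisoformat(proposal["start"])
    end = datetime.fromisoformat(proposal["end"])
    if end <= start:
        raise ValueError("event must end after it starts")
    return {"title": proposal["title"], "start": start, "end": end}

event = validate_event({"title": "Dentist", "start": "2025-12-15T09:00",
                        "end": "2025-12-15T09:30"})
print(event["title"])  # Dentist
```

The design point is the same as for any agentic tool: the model supplies context-derived intent, while validation and the actual write stay on the host's side of the boundary.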
https://www.engadget.com/big-tech/judge-puts-a-one-year-limi...
Limits are now being placed on it as of a couple days ago
Historically the strength of Apple was that they didn't ship things until they actually worked. Meaning that the technology was there and ready to make an experience that was truly excellent.
People have been complaining for years that Apple isn't shipping fast enough in this area. But if anything I think that they have been shipping (or trying to ship) too fast. There are a lot of scenarios that AI is actually great at but the ones that move the needle for Apple just aren't there yet in terms of quality.
The stuff that is at a scale that it matters to them are integrations that just magically do what you want with iMessage/calendars/photos/etc. There are potentially interesting scenarios there but the fact is that any time you touch my intimate personal (and work) data and do something meaningful I want it to work pretty much all the time. And current models aren't really there yet in my view. There are lots of scenarios that do work incredibly well right now (coding most obviously). But I don't think the Apple mainline ones do yet.
Now the tides are turning, so they can go back to scheming behind closed doors without risking their top people leaving for Meta for a bazillion dollars.
In general I would agree, but Siri is honestly still so bad.
Tell that to almost anything they've shipped in the last 5-10 years. It's gotten so bad that I wait until halfway through an entire major OS version before upgrading. Every new thing they ship is almost guaranteed to be broken in some way, ranging from minor annoyance to fully unusable.
I buy Apple-everything, but I sure wish there were better options.
They dragged their feet on a host of technologies that other handset makers adopted, released and subsequently improved.
- USB C charging
- 90hz, 120Hz refresh rates
- wireless charging
- larger batteries (the iPhone 17 still lags behind Samsung and Google)
I'm not sure what happened, but the iPhone used to have the most fluid, responsive experience compared to Android. Now, both Google and Samsung have surpassed them in that regard.
I've used Android and have owned several iPhones, and it seems like it's not an issue of releasing something that isn't ready, but more that they aren't capable of releasing phones that compete with the ones regularly beating them in the specs race.
Great artists steal.
- Everyone else: "We mainly build huge AI compute clusters to process large amount of data and create value, at high cost for ramp-up and operation."
- Apple: "We mainly build small closed-down AI compute-chips we can control, sell them for-profit to individual consumers and then orchestrate data-processing on those chips, with setup and operational cost all paid by the consumer."
I can't think of any company which has comparable know-how and, most of all, a comparable scale of guaranteed sales to even consider Apple's strategy.
No matter what they do, they will sell hundreds of millions of compute devices for the foreseeable future. They use this to build out AI infrastructure they control, pre-paid by the future consumers.
THIS is their unique strength.
They roll out hardware that consumers can use for AI once their service is ready, with users paying for that rollout until then.
Meanwhile they have started to deploy a marketplace ecosystem for AI tasks on iOS, where Apple has the first right-to-refuse, allowing the user to select a (revenue-share-vetted) 3rd party provider to complete the task.
So until Apple is ready, the user can select OpenAI (or soon other providers) to fulfill an AI-task, and Apple will collect metrics on the demand of each type of task.
This will help them prioritize for development of own models, to finally make use of their own marketplace rules to direct the business away from third parties to themselves.
My guess is that they will offer a mixed on-device/cloud AI-service that will use the end-users hardware where possible, offloading compute from their clouds to the end-users hardware and energy-bill, with a "cheap" subscription price undercutting others on that AI-marketplace.
But for this to make economic sense, the "AI-bubble" may need to burst first, forcing the competitors to actually provide their services for-profit.
Until then it might be more profitable to just forward AI-tasks to OpenAI and others and let them burn more money.
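The brokering strategy described above can be sketched in a few lines (all task names and the local/cloud split are hypothetical assumptions): cheap, private tasks stay on-device, heavy ones are forwarded to a third-party provider, and the broker records per-task demand along the way.

```python
from collections import Counter

# Tasks assumed cheap enough for an on-device model (hypothetical names).
LOCAL_TASKS = {"summarize_notification", "classify_photo", "autocorrect"}
demand = Counter()

def route(task_type):
    """Record demand for the task type, then pick a backend."""
    demand[task_type] += 1
    return "on_device" if task_type in LOCAL_TASKS else "cloud_provider"

print(route("autocorrect"))    # on_device
print(route("deep_research"))  # cloud_provider
```

The demand counter is the interesting part: it is exactly the kind of signal that would tell the broker which forwarded task types are worth in-sourcing later.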
Do you have any evidence whatsoever that could back-up this claim? It feels like you're just saying this because you want it to be true, not because you have any concrete proof that Apple can sell competitive inference.
Sorry, I didn't mean to state that Apple A/M-series will be competitive on inference performance compared to other solutions. There is no sufficient data for this at the moment. But this is not the competition I expect to happen.
I expect them to stifle competition and set themselves up as the primary player in the Apple ecosystem for AI services, simply because they are making "Apple Intelligence" an ecosystem orchestration layer (and thus themselves the gatekeeper).
1. They made a deal with OpenAI to close Apple's competitive gap on consumer AI, allowing users to upgrade to paid ChatGPT subscriptions from within the iOS menu. OpenAI has to pay at least (!) the usual revenue share for this, but considering that Apple integrated them directly into iOS I'm sure OpenAI has to pay MORE than that. (also supported by the fact that OpenAI doesn't allow users to upgrade to the 200USD PRO tier using this path, but only the 20USD Plus tier) [1]
2. Apple's integration is set up to collect data from this AI digital market they created: Their legal text for the initial release with OpenAI already states that all requests sent to ChatGPT are first evaluated by "Apple Intelligence & Siri" and "your request is analyzed to determine whether ChatGPT might have useful results" [2]. This architecture requires(!) them to not only collect and analyze data about the type of requests, but also gives them first-right-to-refuse for all tasks.
3. Developers are "encouraged" to integrate Apple Intelligence right into their apps [3]. This will have AI tasks first evaluated by Apple.
4. Apple has confirmed that they are interested in enabling other AI providers via the same path [4].
--> Apple will be the gatekeeper to decide whether they can fulfill a task by themselves or offer the user to hand it off to a 3rd party service provider.
--> Apple will be in control of the "Neural Engine" on the device, and I expect them to use it to run inference models they created based on statistics of step#2 above
--> I expect that AI orchestration, including training those models and distributing/maintaining them on the devices, will be a significant part of Apple's AI strategy. This could cover a lot of text and image processing and already significantly reduce their datacenter cost for cloud-based AI services. For the remaining, more compute-intensive AI services, they will be able to closely monitor when it is most economic to in-source a service instead of "just" getting revenue share for it (via step #2 above).
[1] https://help.openai.com/en/articles/7905739-chatgpt-ios-app-...
[2] https://www.apple.com/legal/privacy/data/en/chatgpt-extensio...
[3] https://developer.apple.com/apple-intelligence/
[4] https://9to5mac.com/2024/06/10/craig-federighi-says-apple-ho...
see here: https://news.ycombinator.com/item?id=46210481
> Magic Cue - Magic Cue proactively surfaces relevant info and suggests actions, similar to how Apple's personalized Siri features were supposed to work. It can display flight information when you call an airline, or cue up a photo if a friend asks for an image.
https://www.macrumors.com/2025/08/20/google-pixel-10-ai-feat...
Google shipped it, despite it not working.
Apple announced that the Siri update didn't work well enough to ship.
I wish they did, but they don't. They have been stingy on RAM for the iPhone and iPad for a decade. At this point only a small percentage of their userbase has an iPhone or iPad with 8GB of RAM that can run any AI models at all, even open-source ones, and get any use out of them. Not to mention those don't compare to the big models.
They don't even offer an iPhone option with more RAM. The iPad tops out at 16GB, and the mainstream MacBook Air at 32GB.
And at the current price of cheap online AI (e.g. Perplexity runs so many promos for the Pro version at less than $10 per year, and all the AI providers give good free models with rate limits generous enough for many users), I don't see Apple hardware being bought specifically for its AI compute chips, at least not by non-pro users.
If they lose on AI, though, and because of that don't have good AI integrations, they will eventually lose on hardware too. E.g. Siri still doesn't support Polish, so my mum cannot use it. The open-source Whisper v3 turbo was available ages ago, but Apple still supports only a few languages. Third-party keyboards cannot integrate well with audio input, and everything suffers here because of platform limitations.
To me, it feels like Apple should have supported CUDA from the start. Sell the ARM-hungry datacenter some rackmount Macs with properly fast GPUs, and Apple can eventually bring the successful inference technology to cheaper devices. The current all-or-nothing strategy has produced nothing but redundant hardware accelerators.
Not because of engineering deficiencies, but because datacenters buy based on facts, not fluff.
Now their ARM silicon is top-notch, no doubt about that. But will they earn a higher margin if they put it in a datacenter instead of a consumer device which is then used to consume Apple Services? I don't think so.
Nvidia is a five trillion dollar business right now. The total sum of Apple's profits from services, hardware and servicing/repair costs all fail to crest Nvidia's total addressable market. We've been past the point of theorizing for almost two years now.
Apple has the means to break into that market, too. They don't need the silicon (iPhone/iPad are way overpowered, Vision Pro and Mac are low-volume), they have thousands of engineers with UNIX experience, and hundreds of billions of dollars in liquid cash waiting to be spent. If the China divestment and monopoly case happen, Apple needs a game plan that guarantees them protection from US politicians and secures an easy cash flow.
From the consumer perspective, it seems simple: stop shipping the latest silicon in the iPhone. Nobody uses it. They're not playing AAA games or inferencing the latest AI models, and the efficiency gains haven't been noticeable for a decade. You don't need TSMC 2nm to browse the App Store or watch Apple TV. The only opportunity cost comes from selling consumers hardware they can't appreciate.
From a vendor-perspective, ~200mn iPhones are sold each year, the end-user will pay for it. The scale of this is financing the entire development and supply-chain for the silicon itself, and it contributes not only to hardware but also service revenue of the entire company.
Nvidia owns 94% of the GPU market and shipped 11.6mn GPUs in Q2/2025; let's say they ship 60mn GPUs in 2025 total.
--> Why should I stop shipping the latest silicon in the iPhone?
Even without stopping production, why should I enter and compete in a market that is currently dominated by a single player, has a total size of ~60mn units/year, with each product deprecating almost instantly as soon as a more efficient product is announced?
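The volume comparison above can be checked quickly with the comment's own estimates (these are the commenter's figures, not audited numbers):

```python
# Annual unit volumes as stated in the comment.
iphones_per_year = 200e6   # roughly 200mn iPhones sold per year
gpus_per_year = 60e6       # commenter's 2025 extrapolation from 11.6mn in Q2
ratio = iphones_per_year / gpus_per_year
print(round(ratio, 1))  # 3.3
```

By these numbers, Apple's existing iPhone volume is already more than three times the size of the entire GPU unit market it would be entering, which is the crux of the "why bother" argument.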
Apple's silicon is not magically more efficient than everything else, their products are efficient because they are vertically integrated.
I doubt that Apple Silicon is competitive with Nvidia in a datacenter setting.
At the very least it's used by the Photos app[1]. Likely other Apple apps too.
[1] https://machinelearning.apple.com/research/recognizing-peopl...
IMO, it's a very Apple strategy: stuff just works and slowly becomes more accelerated/lower power.
How will that work out with the battery?
I mean, they could have mined crypto on our phones but that would have been a bad idea for the same reason.
That's a selective list. High RAM Macs are available. MBPro goes up to 128GB. Mac Studio goes up to 512GB. Not cheap, but available.
Consumer hardware chips will be plenty powerful to run “good enough” models.
If I'm an application dev, do I want to develop something on top of OpenAI, or Apple's on-device model that I can use as much as I want for free? On-device is the future.
The existential FEAR of the smartphone ecosystem players (Apple, Google) is that another ecosystem (!) may come along, one that is more tightly integrated into daily lives, is more predictive of the users' needs, requires less interaction, and is not under THEIR control.
Because this is not about devices, it's about owning the total userbase of that OS-ecosystem.
Replacing the smartphone has been attempted numerous times in the past decade, but no device was able to replace it as a consumption device. Now technology has reached a level of maturity where smart glasses may have a shot at this. AND they come along with their own ecosystem as well.
Whatever happens, they won't replace all phones within 5 years. But it's possible that such a device becomes a companion to an iOS/Android phone and, within 5 years, gradually eases users off their phones and into that other ecosystem.
And that's scary for Apple and Google.
Because this is not a device-war, this is an ecosystem-war.
Having piles of money when everyone else is lighting it on fire and a brand that would require quite the mistake to ruin gives you a long runway.
Is anyone really profiting from AI yet? I know Google basically saved their search monopoly but any one else?
In my view Apple is positioning themselves (once more) to win without needing to compete on fair ground. They are late to this party, but their biggest asset is control over the data and spending of their users.
The users WANT to use those services, and Apple is not ready to offer anything. But as long as they can be the "broker" between the user and such services (and, most of all, the deciding party!), they can sell the consumption of their entire userbase for revenue share to the service providers.
Their biggest risk (besides stock-market impacts) is that Apple users start to engage directly with such services without Apple as an intermediary party (using a browser or another device).
So their highest priority will be to keep the user entertained so they can continue profiting from their consumption until they themselves have arrived at the party.
Once they have arrived, they will start diverting profitable AI tasks from 3rd parties back to their own services, leaving unprofitable ones to the then-integrated 3rd party providers.
...Nvidia? Did you just step out of a cryogenic chamber from 2008?
The datacenter business is booming right now, cutting-edge and efficient hardware is needed more than ever. Nvidia and Apple are the only two companies in the world with the design chops and TSMC inroads to address that market. Nvidia's fully committed and making money hand over fist; Apple is putting 2nm silicon in the iPad Pro and asking fucking consumers to pay $1,500 for it. Do you not see the issue with this business model?
People will say Apple can't crack the datacenter market, I say bullshit. Apple drafted OpenCL. Every dollar Nvidia makes is money Apple pissed away on trinkets like smartwatches and TikTok tablets.
Now that it's cheap and easy, those kinds of photos will lose their signal.
Everyday Syndrome is proven right.
I'm not sure how Apple is enabling anything interesting around AI right now.
That's what this bland article is not even touching on. Yes, having missed the boat is great if the boat ends up sinking. That doesn't make missing boats a great strategy.
Building huge models and huge data centers is not the only thing they could have done.
They had some interesting early ideas on letting AI tap app functionality client-side. But that has gone nowhere, and now everything of relevance is happening on servers.
Apple's devices are not even remotely the best dumb terminals to tap into that. Even that crown goes to Android.
I don't want to imply that this is their only play or that it will even work out.
The EU (and others) already identified this general scheme of stifling competition by "brokering" between the consumer and the free market, so outside of the US I'm not even sure how much Apple will be able to rely on such a strategy (again)...
I'm not sure where you position Samsung or Xiaomi, Oppo etc. They're competitive on price with chipsets that can handle AI loads in the same ballpark, as attested by Google's features running on them.
They're not vertically integrated and don't have the same business structure, but does that matter regarding on-device AI?
- Apple owns more than 50% of this market segment; annual iPhone sales are roughly 200 million units. In comparison, the Samsung Galaxy S series sits at roughly 20-25 million.
- Apple is alone in the iOS ecosystem, while Samsung, Xiaomi, and Oppo have to compete within the Android space every year. iOS is extremely sticky, which makes a certain volume of iPhones almost guaranteed to sell every year, at a lofty profit margin.
In comparison, Samsung always has to consider that the next BAD Galaxy S might only sell a fraction of the previous one, because users might move horizontally to another Android brand (even to Pixel, a first-party product of their ecosystem provider). So Samsung cannot even make bets based on the sale of 20 million units; they are already at risk making bets on the initial shipment volume (~5 million), because if the device doesn't sell they will have to PAY money to the carriers to get them into the market.
Apple has a much lower risk here. If the next iPhone is not catching on, Apple will likely still sell 200mn iPhones in that year, because the ecosystem lock-in is so strong that there is little risk of losing customers to anything else than ANOTHER (then more-profitable) iPhone.
So even when assuming a MASSIVE annual drop of 25% in Sales, Apple can still make development bets based on a production forecast of 150 MILLION units.
For their supply-chain that's still an average production output of ~400k units per DAY for each component. With that volume you can get entire factories to only produce for you.
That's why I can't think of any company in a comparable position. Apple can add hardware to their device and sell the resulting product to the consumer for profit before delivering any actual value with it.
If any competitor in the Android space attempts that, the component costs alone risk making the device dead on arrival, because "some other Android device" delivers the same experience at lower cost.
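The per-day figure in the supply-chain argument above checks out against the commenter's own 150 million-unit forecast:

```python
# Checking the ~400k units/day claim from the 150M-unit annual forecast.
units_per_year = 150e6
per_day = units_per_year / 365
print(round(per_day))  # 410959
```

Roughly 411k units per day per component, which supports the claim that entire factories could be dedicated to a single customer at this volume.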
I read the original comment as positing that no other company is positioned to forgo building their own AI platform and instead sell pricey terminals that can run second-party local models.
From that POV, sure, Samsung and others have competition, and their margins won't be as large as Apple's, but they also have a larger market and can work with less money (Samsung as a whole will never struggle to find more).
> 50% of this market-segment
If we're limiting the segment to phones that have enough RAM to run decent models, Apple users who haven't upgraded to the upper models (no SE, no 16e, etc.) in recent years are all out of the picture. I didn't check the numbers, but I wouldn't expect them to be much ahead of Samsung and the beastliest Android devices.
I'm not following. What infrastructure? Pre-paid how?
Apple pays for materials and chips before it sells the finished product to consumers. Nothing is pre-paid.
And what infrastructure? The inference chips on iPhones aren't part of any Apple AI infrastructure. Apple's not using them as distributed computing for LLM training or anything, or for relaying web queries to someone's device -- nor would they.
The AI capabilities of the devices will be pre-paid, in that they will come with the product before delivering any significant value. The end user bears that cost before getting anything meaningful in return, because Apple's production volume is at such a scale that they can offset those investments without risking any meaningful loss of sales volume.
Other players can't do that because they don't sell 200mn units per year. If they added on-device inference chips, they would have to significantly increase the device price, risking not selling the product at all.
To my understanding, they market their ML stack as four layers [1]:
- Platform Intelligence: ready-made OS features (e.g., Writing Tools, Genmoji, Image Playground) that apps can adopt with minimal customization.
- ML-powered APIs: higher-level frameworks for common tasks—on-device Foundation Models (LLM), plus Vision, Natural Language, Translation, Sound Analysis, and Speech; with optional customization via Create ML.
- ML Models (Core ML): ship your own models on-device in Core ML format; convert/optimize from PyTorch/TF via coremltools, and run efficiently across CPU/GPU/Neural Engine (optionally paired with Metal/Accelerate for more control).
- Exploration/Training: Metal-backed PyTorch/JAX for experimentation, plus Apple’s MLX for training/fine-tuning on Apple Silicon using unified memory, with multi-language bindings and models commonly sourced from Hugging Face.
[1] https://developer.apple.com/videos/play/wwdc2025/360/
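The four layers above form a ladder from turnkey to fully custom. A tiny illustrative helper (not an Apple API; purely a restatement of the list) makes the decision order explicit:

```python
# Illustrative mapping from a project's needs to the four layers listed above,
# checked from most custom to most turnkey.
def pick_layer(needs_training=False, custom_model=False, custom_task=False):
    if needs_training:
        return "Exploration/Training (MLX, Metal-backed PyTorch/JAX)"
    if custom_model:
        return "ML Models (Core ML)"
    if custom_task:
        return "ML-powered APIs (Foundation Models, Vision, Speech, ...)"
    return "Platform Intelligence (Writing Tools, Genmoji, ...)"

print(pick_layer(custom_model=True))  # ML Models (Core ML)
```

In other words: adopt OS features if they fit, drop to the framework APIs for common task types, ship your own Core ML model when neither fits, and reach for MLX or Metal-backed training only at the bottom of the ladder.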
The revenue from AI is growing at a much slower rate than recurring capex and depreciation is accumulating. This will create distress opportunities that cash-rich companies like APPL may seize. Might be a private equity deal, might be in the public markets as some of the players dip hard after IPO.
As this plays out, APPL's silicon has unified memory, power consumption and native acceleration that gives it an edge running SLMs and possibly LLMs at scale. Wouldn't shock me to see APPL introduce a data-center solution.
APPL was the Type Code[0] for an Application, in classic MacOS (1984).
0: https://en.wikipedia.org/wiki/Resource_fork#Types
And this is the case across the board.
My friend's Fitbit works way better than my Apple watch.
A third and final example is how bad Apple's native dictation engine is. I can run OpenAI Whisper models on my Mac and get dramatically better output.
As a long time Apple fan who's had everything since before the first iPhone, I feel this apathy towards product quality cannot be disguised as some strategic decision to fast follow with AI.
That's odd because I've used both, along with a bunch other wearables (e.g. Whoop), and I wouldn't give up my Apple Watch for anything. Massively useful, can take calls, make payments, stream music from my Apple playlists, read and reply to messages, and a ton of other things.
I mention this because, at least for the functionalities you mention, I think the Pixel watches are catching up nicely.
... but they still haven't been able to make me feel less stupid talking into a watch for phone calls like some off-brand James Bond wannabe, even if it works great.
Siri isn't competing with Gemini yet. Siri is old tech; Gemini is the new tech.
Same with dictation.
Siri hasn't been updated generationally with SOTA tech to compete with Gemini; it simply hasn't been updated. This is part of the "slow pace" the post is talking about (part of it, not the entire slowness).
For example, Amazon updated my old Echo dots with Alexa+ beta, and it's pretty good. I have Grok in my Tesla, and though I don't like Grok or xAI, it's there and I use it occasionally.
Apple hasn't done their release of these things yet.
Overselling abilities is for sure a lack of quality.
My assertion is that Apple hasn't yet released a generational complement to Gemini or ChatGPT voice modes. That's a problem, but one specifically of availability and release, which, again (and despite the downvoters), matches the assertion of the post ("slow AI pace").
Trying and failing to make a SoTA foundational model is not a strategic move. It's similar to Amazon and Meta, they also have tried and not succeeded.
Microsoft has been criticized for investing heavily in AI. But it actually makes sense for Microsoft if you consider the nature of their business. The problem is not with the investment per se but with what they got out of it. Unfortunately, Microsoft sucks at product management, so instead of creating useful stuff that users want and are ready to pay for, they created stuff that no one understands, no one can use, and no one wants to pay for. GitHub Copilot is an exception, of course. I'm talking more about their Office 365 AI.
262 more comments available on Hacker News