We Melted iPhones for Science
Posted 4 months ago · Active 4 months ago
accelerateordie.com · Tech story
Sentiment: skeptical, negative
Debate: 80/100
Key topics
iOS
Apple
AI
Performance
A developer claims that their AI-powered video chat app caused iPhones to overheat, leading Apple to implement a new cooling system, but the community is skeptical of the story and its claims.
Snapshot generated from the HN discussion
Discussion Activity
Active discussion
First comment: 24m after posting
Peak period: 20 comments in 0-1h
Avg / period: 5.3
Comment distribution: 42 data points
Based on 42 loaded comments
Key moments
1. Story posted: Sep 11, 2025 at 2:44 PM EDT (4 months ago)
2. First comment: Sep 11, 2025 at 3:07 PM EDT (24m after posting)
3. Peak activity: 20 comments in 0-1h, the hottest window of the conversation
4. Latest activity: Sep 12, 2025 at 6:46 AM EDT (4 months ago)
ID: 45214784 · Type: story · Last synced: 11/20/2025, 8:37:21 PM
For the full context, read the primary article or the live Hacker News thread.
"Off-the-books" meetings are just friend-based connections. I got a bug in the Linux nvidia driver fixed that was affecting me by just hitting up an old friend. I could write that story as him saying "Keep this top-secret" if I wanted because that's just fun storywriting.
Obviously, there was more to the conversation than what I wrote, but these are the actual words that I remember being said.
For more context, the PM at Apple in question was a former colleague of my then girlfriend. I reached out to her for a friendly catch-up; it wasn't positioned as an official meeting with Apple. I was literally just going through LinkedIn trying to figure out who I knew at Apple. So I hit her up on LinkedIn and asked to catch up, then told her about the situation. And this is how she responded. Worth noting: English is not her native language.
I can’t imagine anyone treating me like that and I’ve dealt with billionaires. Hilarious to have some L3 FAANG act like Genghis Khan.
Unlike Apple's formal "developer evangelist" and several others I contacted, the guy actually took the time to talk to us, and I was/am grateful for that. He's a cog in a very large corporate machine. Apple is Apple. He's not the CEO. He was doing his job and did me a favor. I am grateful to him.
But he was pretty professional in the way he went about it, “no names, no company”. And “You found a security bug! Show me. But you won’t get credit”
E.g. maybe they actually said some variation of "your app is bricking iPhones? how did you get through app store review..." and the author interpreted it as "squashing his company like a bug".
It strikes me as troublesome that a company that found a bug could be banned from the App Store, and that the rep talks about it as killing the company.
Yes, all of that would be troublesome... if it were true. Given the rest of the post's content I'm leaning pretty heavily towards "made up". This whole thing reads like "Am I the Asshole" or similar subreddits which are 99% outlets for fiction writers.
I'm also trying to understand what OASIS was really supposed to do that was going to.... uh... matter? It's a video chat app where you can be someone else in the video. Ok, that's cool but I'm failing to see how this is groundbreaking.
> Her: "Wait, haven't we banned you from the App Store? Why haven't we killed your company already?"
> Me: "We... haven't exactly told anyone at Apple about this."
> Her: "You're a mosquito. Apple will just stomp on you and you will not exist."
Told Apple what? That they have a bug? Why would they ban you from the App Store? Why would someone say "You're a mosquito. Apple will just stomp on you and you will not exist"? It makes zero sense to me given the context laid out here.
Lastly, did Apple fix the problem? They made changes but we won't know anything for sure until next Friday at the very earliest.
Seems like a lot of name dropping (why should I care about a big name that didn't invest in you?) and big numbers ($10B, never explained) for a failed startup.
> You can be right about the future and still fail in the present.
Not clear at all what OASIS was "right" about really.
> Apple's A19 Pro isn't just a chip announcement. It's a confession. An admission. A vindication.
Ok, sure. If you say so.
Lastly, what were you "right" about? That iPhones can get hot?
Just none of this makes any sense or seems very interesting IMHO.
Apple adopted a new cooling technique on their highest end device to differentiate and give spec sheet chasers something to be hyped about. It should help reduce throttling for the very odd event where someone is running a mobile device at 100% continuously (which is actually super rare in normal usage). It's already in the Pixel 9 Pro, for instance, and is a new "must have". It has nothing to do with whatever app these guys were building.
The rest of the nonsense is just silly. If you are building an app for a mobile device and it pegs the CPU and GPU, you're going to have a bad time. That's the moment you realize it's time to go back to the drawing board.
We were just calling the iPhone's built-in face tracking system via the Vision Framework to animate the avatars. That's the thing that was running on GPU.
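For reference, a minimal sketch (not the app's actual code; the avatar-rig comment is illustrative) of what calling the Vision framework's face-landmark detection on a camera frame looks like:

```swift
import Vision
import CoreVideo

// Sketch: run Vision's built-in face landmark detection on one camera frame.
// In a real app this would be called from the capture output callback.
func detectFaceLandmarks(in pixelBuffer: CVPixelBuffer) {
    let request = VNDetectFaceLandmarksRequest { request, _ in
        guard let faces = request.results as? [VNFaceObservation] else { return }
        for face in faces {
            // Eye/nose/mouth points like these are what would drive an avatar rig.
            let count = face.landmarks?.allPoints?.pointCount ?? 0
            print("Detected face with \(count) landmark points")
        }
    }
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try? handler.perform([request])
}
```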
That is neither here nor there on CoreML -- which also uses the CPU, GPU, and ANE, and sometimes a combination of all of them -- or the weird thing about MLX.
The only reason to use CoreML these days is to tap into the Neural Engine. When building for CoreML, if one layer of your model isn't compatible with the Neural Engine, it all falls back to the CPU. Ergo, CoreML is the only way to access the ANE, but it's a buggy all-or-nothing gambit.
Have you ever actually shipped a CoreML model or tried to use the ANE?
This is nonsensical.
MLX and CoreML are orthogonal. MLX is about training models. CoreML is about running models, or ML-related jobs. They solve very different problems, and MLX patches a massive hole that existed in the Apple space.
Anyone saying MLX replaces CoreML, as the submission does, betrays that they are simply clueless.
>The only reason to use CoreML these days is to tap into the Neural Engine.
Every major AI framework on Apple hardware uses CoreML. What are you even talking about? CoreML, by the very purpose of its design, uses any of the available computation subsystems, which on the A19 will be the matmul units on the GPU. Anyone who thinks CoreML exists to use the ANE simply doesn't know what they're talking about. Indeed, the ANE is so limited in scope and purpose that it's remarkably hard to actually get a model to run on it at all.
>Have you ever actually shipped a CoreML model or tried to use the ANE?
Literally a significant part of my professional life, which is precisely why this submission triggered every "does this guy know what he's talking about" button.
https://github.com/ml-explore/mlx-swift
Maybe I am working on a different set of problems than you are. But why would you use CoreML if not to access the ANE? There are so many other, better, newer options like llama.cpp, MLX-Swift, etc.
What are you seeing here that I am missing? What kind of models do you work with?
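For reference, a minimal sketch of the MLX-Swift array API as I read the ml-explore/mlx-swift README; treat the exact initializers as an assumption rather than something verified against a specific release:

```swift
import MLX

// Sketch: lazy array ops that MLX evaluates on the GPU via Metal.
let a = MLXArray([1, 2, 3, 4], [2, 2])   // 2x2 array from a Swift array
let b = (a * 2) + 1                      // builds a lazy compute graph
eval(b)                                  // forces evaluation on-device
print(b)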
> But why would you use CoreML if not to access ANE?
The whole point of CoreML is hardware-agnostic operations, not to mention higher-level operations for most model touchpoints. If you went into this thinking CoreML = ANE, that's just fundamentally wrong at the beginning. The ANE is one extremely limited path for CoreML models. The vast majority of CoreML models will end up running on the GPU -- using Metal, it should be noted -- aside from some hyper-optimized models for core system functions, but if/when Apple improves the ANE, existing models will just use that as well. Similarly, when you run a CoreML model on an A19-equipped unit, it will use the new matmul instructions where appropriate.
That's the point of CoreML.
Saying other options are "better, newer" is just weird and meaningless. Not only is CoreML rapidly evolving and able to support just about every modern model feature, but in most benchmarks of CoreML vs. people's hand-crafted Metal, CoreML smokes them. And then you run it on an A19 or the next M# and it leaves them crying for mercy. That's the point of it.
Can someone hand-craft some Metal and implement their own model runtime? Of course they can, and some have. That is the extreme exception, and no one here should think that has replaced anything.
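To make that concrete, a minimal sketch (the function and URL parameter are illustrative, not anyone's shipping code): the compute-units setting is a preference Core ML is free to interpret, not a binding to any one unit:

```swift
import CoreML

// Sketch: the compute-unit setting is a *preference*. Core ML decides,
// per operation, whether the ANE, GPU (Metal), or CPU actually runs it.
func loadModel(at compiledModelURL: URL) throws -> MLModel {
    let config = MLModelConfiguration()
    config.computeUnits = .cpuAndNeuralEngine   // or .all, .cpuAndGPU, .cpuOnly
    return try MLModel(contentsOf: compiledModelURL, configuration: config)
}
```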
More recently, I personally tried to convert Kokoro TTS to run on ANE. After performing surgery on the model to run on ANE using CoreML, I ended up with a recurring Xcode crash and reported the bug to Apple (as reported in the post and copied in part below).
What actually worked for me was using MLX-audio, which has been great as there is a whole enthusiastic developer community around the project, in a way that I haven't seen with CoreML. It also seems to be improving rapidly.
In contrast, I have talked to exactly one developer who has ever used CoreML since ChatGPT launched, and all that person did was complain about the experience and explain how it inspired them to abandon on-device AI for the cloud.
___ Crash report:
A Core ML model exported as an `mlprogram` with an LSTM layer consistently causes a hard crash (`EXC_BAD_ACCESS` code=2) inside the BNNS framework when `MLModel.prediction()` is called. The crash occurs on M2 Ultra hardware and appears to be a bug in the underlying BNNS kernel for the LSTM or a related operation, as all input tensors have been validated and match the model's expected shape contract. The crash happens regardless of whether the compute unit is set to CPU-only, GPU, or Neural Engine.
*Steps to Reproduce:*
1. Download the attached Core ML models (`kokoro_duration.mlpackage` and `kokoro_synthesizer_3s.mlpackage`).
2. Create a new macOS App project in Xcode. Add the two `.mlpackage` files to the project's "Copy Bundle Resources" build phase.
3. Replace the contents of `ContentView.swift` with the code from `repro.swift`.
4. Build and run the app on an Apple Silicon Mac (tested on M2 Ultra, macOS 15.6.1).
5. Click the "Run Prediction" button in the app.
*Expected Results:* The `MLModel.prediction()` call should complete successfully, returning an `MLFeatureProvider` containing the output waveform. No crash should occur.
*Actual Results:* The application crashes immediately upon calling `model.prediction(from: inputs, options: options)`. The crash is an `EXC_BAD_ACCESS` (code=2) that occurs deep within the Core ML and BNNS frameworks. The backtrace consistently points to `libBNNS.dylib`, indicating a failure in a low-level BNNS kernel during model execution. The crash log is below.
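For context, a hypothetical sketch of the kind of call the report describes; the actual repro.swift is not reproduced here, and the input handling is illustrative rather than the real model's interface:

```swift
import CoreML

// Sketch: load the compiled model and call prediction(from:options:),
// which is where the EXC_BAD_ACCESS inside libBNNS.dylib is reported.
func runPrediction(modelURL: URL, inputs: [String: MLMultiArray]) throws -> MLFeatureProvider {
    let config = MLModelConfiguration()
    config.computeUnits = .cpuOnly   // report says the crash reproduces on CPU, GPU, and ANE
    let model = try MLModel(contentsOf: modelURL, configuration: config)

    let provider = try MLDictionaryFeatureProvider(
        dictionary: inputs.mapValues { MLFeatureValue(multiArray: $0) }
    )
    return try model.prediction(from: provider, options: MLPredictionOptions())
}
```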
CoreML is used pervasively throughout iOS and macOS, and more extensively than ever in the 2025 releases. Zero percent of the system uses MLX as the runtime. The submission's incredibly weird contention that because the ANE doesn't work for them, Apple is therefore admitting something, is just laughable silliness.
And FWIW, people's impressions of the tech world from their own incredibly small bubble is often deeply misleading. I've read so many developers express with utter conviction that no one uses Oracle, no one uses Salesforce, no one uses Windows, no one uses C++, no one uses...
I'm telling you what I was told. It's a true story. I was there. It happened to me.
Why would I make up a detail like that?
It also seems like one of those self-aggrandizing things that tries to spin everything as a reaction to themselves, instead of just technology progressing. No, vapour chamber cooling isn't some grand admission, it's something that a variety of makers have been adopting to reduce throttling as a spec-sheet item of their top end devices. It isn't all about you.
And given that the base 17 doesn't have VCC, I guess Apple isn't "admitting" it at all, no?
And the CoreML v MLX nonsense at the end is entirely nonsensical and technically ignorant. Like, wow.
No one should learn anything from this piece. The author might know what they're talking about (though I am doubtful), but this piece was "how to make an Apple event about ourselves" and it's pretty ridiculous.
It will be fun to see how hot the iPhone Air gets since it has the same chip as the 17 Pro (w/ one fewer GPU core), but a less thermally conductive metal and no vapor chamber.
In the real world I doubt anyone will ever notice the difference, VCC or not. VCC will only materially affect usage when someone is doing an activity that hits throttling, which is actually extraordinarily rare in normal use and usually only comes into play in benchmarking. The overwhelming majority of the time we peg those cores for a tiny amount of time and get a quick Animoji or text extraction from an image, and so on. Even the "AI" usage on a mobile device is extremely peaky.
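As an aside, a minimal sketch of how an app can watch for that thermal pressure directly via ProcessInfo instead of inferring it from frame drops:

```swift
import Foundation

// Sketch: observe the system thermal state; back off before the OS throttles.
let token = NotificationCenter.default.addObserver(
    forName: ProcessInfo.thermalStateDidChangeNotification,
    object: nil,
    queue: .main
) { _ in
    switch ProcessInfo.processInfo.thermalState {
    case .nominal, .fair:
        break   // full performance available
    case .serious, .critical:
        print("Thermal pressure rising; reduce frame rate or model resolution")
    @unknown default:
        break
    }
}
```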
It's really just a heat pipe - vapour trapped inside copper, not circulating liquid to a radiator.
https://www.youtube.com/watch?v=OR8u__Hcb3k
“at 60fps in HD resolution. In real-time. On iPhone. In 2021.”
“5ms latency”
“512 x 512 pixel resolution per video”
I don’t mean to be rude but I’m having trouble convincing myself this is a real story.
The whole thing is written in a bombastic storytelling style that is typical of LinkedIn threads. If this is entertaining to you, this is the link for you since it has actual image examples of their model output varying between platforms.
2 more comments available on Hacker News