Scientists Create Ultra Fast Memory Using Light
Key topics
The scientific community is abuzz with the news of ultra-fast memory created using light, sparking a lively debate about the technology's potential and limitations. Commenters quickly pointed out that the "300mm chips" mentioned in the headline likely referred to 300mm wafers, a standard size in the industry, rather than the chip size itself, which was actually around 0.1 mm2. As discussion unfolded, some experts questioned whether the size of the photonic circuit was a fundamental constraint or if it could be miniaturized, while others wondered if techniques like DWDM could be used to process multiple bits in parallel. Meanwhile, a few commenters took a step back, noting that similar "optical computing" demonstrations have been around for decades, and that other technologies like MRAM and memristors might be closer to making a significant impact.
Snapshot generated from the HN discussion
Discussion Activity
Active discussion
- First comment: 6 days after posting
- Peak period: 17 comments in the 144-156h window
- Average per period: 7.8 comments
- Based on 31 loaded comments
Key moments
- 01 Story posted: Dec 4, 2025 at 1:11 PM EST (29 days ago)
- 02 First comment: Dec 10, 2025 at 3:58 PM EST (6 days after posting)
- 03 Peak activity: 17 comments in the 144-156h window (hottest window of the conversation)
- 04 Latest activity: Dec 12, 2025 at 8:45 AM EST (21 days ago)
(I am sure they meant nm, but nobody is checking the AI output)
> footprint of 330 × 290 µm2 using the GlobalFoundries 45SPCLO
That’s a 45 nm process, but the units for the chip size probably should have been 330 µm? However, I’m not well versed enough in the details to parse it out.
https://arxiv.org/abs/2503.19544
The area is massive. 330 µm × 290 µm are the X and Y dimensions. The area is roughly 0.1 mm².
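For a rough sense of scale, a minimal arithmetic check in Python (the dimensions are the ones quoted from the paper above; nothing else is assumed):

    # Footprint quoted in the paper: 330 µm × 290 µm per photonic memory cell.
    width_um, height_um = 330.0, 290.0

    area_um2 = width_um * height_um     # 95,700 µm²
    area_mm2 = area_um2 / 1_000_000     # 1 mm² = 10^6 µm²

    print(f"cell area ≈ {area_mm2:.3f} mm^2")  # ≈ 0.096 mm², i.e. roughly 0.1 mm²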
This is the problem with photonic circuits. They are massive compared to electronics.
This is why modern semiconductor devices no longer use lithography with visible light or even with near ultraviolet, but they must use extreme ultraviolet.
The advantage of such optical devices is speed and low power consumption in the optical device itself (ignoring the power consumption of lasers, which might be shared by many devices).
Such memories have special purposes in various instruments, they are not suitable as computer memories.
To give a feeling: micro-ring resonators are anywhere between 10 and 40 micrometers in diameter. You also need a bunch of other waveguides. The process in the paper uses silicon waveguides with a 400 nm width, if I'm not wrong. So optical features unfortunately aren't scaling down the way CMOS technology has.
Fun fact: photolithography has the same limitation. They use all kinds of tricks (different optical effects to shrink the features) but are fundamentally limited by the wavelength used. This is why we are seeing a push to lower and lower wavelengths by ASML. That plus multiple patterning helps to scale CMOS down.
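The wavelength limit alluded to here is the usual Rayleigh resolution criterion, CD ≈ k1 · λ / NA. A hedged back-of-the-envelope comparison in Python (the k1 and NA values are typical textbook/industry figures, not taken from the article):

    # Rayleigh criterion: minimum printable feature CD ≈ k1 * λ / NA.
    def critical_dimension(k1, wavelength_nm, numerical_aperture):
        return k1 * wavelength_nm / numerical_aperture

    # Illustrative values: 193 nm immersion (ArF) vs 13.5 nm EUV lithography.
    duv = critical_dimension(k1=0.30, wavelength_nm=193.0, numerical_aperture=1.35)
    euv = critical_dimension(k1=0.30, wavelength_nm=13.5, numerical_aperture=0.33)

    print(f"193i single exposure: ~{duv:.0f} nm")  # ~43 nm; finer pitches need multi-patterning
    print(f"EUV single exposure:  ~{euv:.0f} nm")  # ~12 nm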
Specifically, this paper is based on simulations, and I've only skimmed it. The power efficiency numbers sound great because they quote 40 GHz read/write speeds, but these devices consume comparatively large power even when not reading or writing (the lasers have to be running constantly). I also think they did not include the contribution of the modulators and the required drivers (typically you need quite large voltages)? Somebody already pointed out that the size of these is massive, and that's again fundamental.
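To make the static-power point concrete, here is a minimal sketch; the laser and switching-energy figures below are hypothetical placeholders, not numbers from the paper. The idea is that energy per bit is total power divided by the achieved bit rate, so an always-on laser dominates whenever the memory isn't being driven at full speed:

    # Energy per bit when a static (always-on) laser power is amortized over the
    # achieved bit rate. All power/energy figures below are hypothetical.
    def energy_per_bit_fj(static_power_mw, dynamic_energy_fj, bit_rate_ghz):
        # mW -> W, GHz -> bits/s, J -> fJ
        static_fj = (static_power_mw * 1e-3) / (bit_rate_ghz * 1e9) * 1e15
        return static_fj + dynamic_energy_fj

    laser_mw, dyn_fj = 1.0, 1.0  # hypothetical: 1 mW always-on laser, 1 fJ/bit switching
    for rate_ghz in (40.0, 1.0, 0.01):
        print(f"{rate_ghz:>6} GHz -> {energy_per_bit_fj(laser_mw, dyn_fj, rate_ghz):.1f} fJ/bit")
    # At 40 GHz the always-on laser adds ~25 fJ/bit; at 10 MHz it balloons to ~100,000 fJ/bit.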
As someone working in the broad field, I really wish people would stop this type of publication. While these numbers might sound impressive at first glance, they really are completely unrealistic. There are lots of legitimate applications of optics and photonics; we don't need to resort to this sort of stuff.
> they really are completely unrealistic
Unrealistic only because they're power hungry? That sounds like a temporary problem, kind of like when we come up with a bunch of ML approaches we couldn't actually do in the 80s/90s because of the hardware resources required, but today work fine.
Maybe even if the solutions aren't useful today, they could be useful in the future? Or maybe these results will inspire more people to work specifically on the power usage?
"we don't need to resort to this sort of stuff" makes it sound like this is all so beneath you and not deserving of attention, but why are you then paying attention to it?
No, they are fundamentally power-hungry because you essentially need a nonlinear response, i.e. photons need to interact with each other. However, photons are bosons and really dislike interacting with each other.
Same thing about the size of the circuits: it is determined by the wavelength of light, so fundamentally they are much larger than electronic circuits.
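A rough way to see that size floor, as a sketch using textbook values rather than figures from the paper: a dielectric waveguide can't confine light much below roughly half a wavelength in the material, λ / (2n), while advanced CMOS features are an order of magnitude smaller.

    # Rough confinement floor for a dielectric waveguide: ~ λ / (2 * n).
    wavelength_nm = 1550.0   # typical telecom wavelength
    n_silicon = 3.48         # refractive index of silicon near 1550 nm

    min_feature_nm = wavelength_nm / (2 * n_silicon)
    print(f"~{min_feature_nm:.0f} nm")  # ≈ 223 nm, consistent with the ~400 nm waveguides
                                        # mentioned above, vs tens-of-nm CMOS features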
> "we don't need to resort to this sort of stuff" makes it sound like this is all so beneath you and not deserving of attention, but why are you then paying attention to it?
That's not what I said. In fact, they deserve my attention because they need to be called out, as the article clearly does not highlight the limitations.
There are promising avenues to use "bosonic" nonlinearity to overtake traditional fermionic computing, but they are basically not being explored by EE departments despite their oversized funding
The core paragraph from the paper linked below is here (GIANT voltages):
As shown in Figure 6(a), even with up to 1 V of noise on each Q and QB node (resulting in a 2 V differential between Q and QB), the pSRAM bitcell successfully regenerates the previously stored data. It is important to note that higher noise voltages increase the time required to restore the original state, but the bitcell continues to function correctly due to its regenerative behavior.
We are going to have abundant energy at some point.
https://arxiv.org/abs/2503.19544v1
The memory cell is huge in comparison with semiconductor memories, but it is very fast, with a 40 GHz read/write speed.
There are important applications for a very high speed small memory, e.g. for digital signal processing, but this will never replace a general-purpose computer memory, where much higher bit densities are needed.
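As a hedged order-of-magnitude comparison of bit densities (the DRAM figure is a rough typical value for current DDR-class dies, not taken from the article):

    # Order-of-magnitude density comparison. The DRAM figure (~0.2 Gbit/mm²)
    # is a rough typical value, not from the paper.
    photonic_cell_mm2 = 0.096          # area of one photonic bit cell (from above)
    die_area_mm2 = 300.0               # hypothetical large die
    dram_bits_per_mm2 = 2e8            # ~0.2 Gbit/mm², rough

    photonic_bits = die_area_mm2 / photonic_cell_mm2
    print(f"photonic: ~{photonic_bits:,.0f} bits per {die_area_mm2:.0f} mm² die")      # ~3,125 bits
    print(f"DRAM:     ~{dram_bits_per_mm2 * die_area_mm2:.2e} bits on the same area")  # ~6e10 bits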
Nice AI text again
Memristors are also probably coming after MRAM-CIM and before photonic computing.
For a second, I thought this headline was copied and pasted from the hallucinated 10-years-from-now HN frontpage that recently made the HN front page:
https://news.ycombinator.com/item?id=46205632