Researchers Develop a Camera That Can Focus on Different Distances at Once
Key topics
A team of researchers has developed a camera that can focus on multiple distances simultaneously, sparking a lively debate about its similarities to the now-defunct Lytro camera, a pioneering light field camera. Commenters weighed in on the differences between the two technologies, with some noting that the new camera uses a spatial light modulator, a feature Lytro lacked. While some dismissed Lytro's image quality as "trash," others countered that its processing was surprisingly simple and effective. The discussion highlights the ongoing quest for innovative camera technologies that can capture and manipulate light in new and useful ways.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 3d after posting
- Peak period: 25 comments in the 84-96h window
- Avg / period: 9.8
- Based on 39 loaded comments
Key moments
- Story posted: Dec 23, 2025 at 4:42 PM EST (10 days ago)
- First comment: Dec 26, 2025 at 11:42 PM EST (3d after posting)
- Peak activity: 25 comments in the 84-96h window, the hottest period of the conversation
- Latest activity: Dec 31, 2025 at 4:50 AM EST (2d ago)
Processing was as simple as "click on the thing you want in focus", and 4MP was just fine for the casual use it was targeting.
That’s a bad recipe for casual and professional users alike: you can’t ingest it into your workflow quickly, the images are low res, you can’t improve the image, and your smartphone was better apart from missing one, admittedly neat, feature. If that existed in phones, I imagine people would use it like crazy.
Too narrow a use case IMO, which is why it failed.
I remember a friend, who was a photography buff, was quite excited about the camera. But he didn't actually buy one.
Light loss, i.e. equivalent aperture, is a different matter, and I think this would imply quite a lot of light loss.
We’re talking about a specific camera, the Lytro, which was released in 2012 and had 4MP resolution. I’m not saying there was a limitation in the technology broadly speaking, just that this camera was not worth it at the time. It sacrificed too much for one feature, and at $400 it just didn’t sell.
While conventional cameras capture a single high-resolution focal plane and light field cameras sacrifice resolution to "re-focus" via software after the fact, the CMU Split-Lohmann camera provides a middle ground, using an adaptive computational lens to physically focus every part of the image independently. This allows it to capture a "deep-focus" image where objects at multiple distances are sharp simultaneously, maintaining the high resolution of a conventional camera while achieving the depth flexibility of a light field camera without the blur or data loss.
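As a rough illustration of what "physically focusing every part of the image independently" involves, here is a minimal sketch (my own toy model, not the paper's actual Split-Lohmann optics): given an estimated depth map, the thin-lens equation gives the focal power the computational lens would need at each pixel to bring that pixel's scene point into sharp focus on the sensor.

```python
import numpy as np

def per_pixel_focal_power(depth_m, sensor_dist_m=0.05):
    """Focal power (diopters) needed at each pixel so that the scene point at
    depth_m[y, x] is imaged sharply onto a sensor sensor_dist_m behind the lens.
    Thin-lens equation: 1/f = 1/z_object + 1/z_image.
    The sensor distance is an assumed illustrative value, not a figure from the paper."""
    return 1.0 / depth_m + 1.0 / sensor_dist_m

# Toy scene: a near subject (0.5 m) on the left half, a far one (10 m) on the right.
depth = np.where(np.arange(640)[None, :] < 320, 0.5, 10.0) * np.ones((480, 1))
power = per_pixel_focal_power(depth)  # ~22 D on the left, ~20.1 D on the right
```

The interesting part of the actual system is that this per-pixel focusing happens optically, in a single exposure, rather than in software afterwards.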
Something I find interesting is that while holograms and the CMU camera both manipulate the "phase" of light, they do so for opposite reasons: a hologram records phase to recreate a 3D volume, whereas the CMU camera modulates phase to fix a 2D image.
Lytro as I understand it, trades a huge amount of resolution for the focusing capability. Some ridiculous amount, like the user gets to see just 1/8th of the pixels on the sensor.
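A back-of-envelope sketch of that trade-off (toy numbers, not Lytro's actual specs; the exact fraction depends on the microlens design and on how much resolution the rendering pipeline can recover): a plenoptic sensor spends its pixels on angular samples behind each microlens, so the output image resolution scales roughly with the number of microlenses rather than the number of sensor pixels.

```python
# Hypothetical numbers for illustration only.
sensor_pixels = 40_000_000   # raw pixels on the sensor
angular_samples = 10 * 10    # assumed 10x10 sub-aperture grid behind each microlens

# Naively, roughly one output pixel per microlens; real pipelines (super-resolution
# rendering, focused-plenoptic designs) recover somewhat more than this suggests.
output_pixels = sensor_pixels / angular_samples
print(f"~{output_pixels / 1e6:.1f} MP usable image")  # ~0.4 MP in this toy example
```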
In a way, I'd say it was too late rather than too early, because autofocus was already quite good and getting better. You don't need to sacrifice all that resolution when you can just have good AF to start with. Refocusing in post is a very rare need if you got the focus right initially.
And time has only made that even worse. Modern autofocus is darn near magic, and people love their high resolution photos.
A pro will show up with a 300mm f/2.8, a tripod, a camera with good AF and high ISO, and the skills, plan and patience to catch birds in flight.
But all that stuff is expensive. The consumer way to compensate for the lack of a good lens is a small, high-res sensor. That only works in bright light, but you can get good results with affordable equipment in the right conditions. Greatly reducing the resolution is far from optimal when you can't have a big fancy lens to compensate.
And where is focus the hardest? Mostly where you want to have high detail. Wildlife, macro, sports.
It's also possible to generate a depth map from a single shot, to use as a starting point for a 3D model.
They're pretty neat cameras. The relatively low output resolution is the main downside. They would also have greatly benefited from consulting with more photographers on the UI of the hardware and software. There's way too much dependency on using the touchscreen instead of dedicated physical controls.
The more recent cameras can detect birds specifically and are great at tracking them.
> It's also possible to generate a depth map from a single shot, to use as a starting point for a 3D model.
That is true, but is a very niche need. Wonderful if you do need it, but it's a small market.
https://youtu.be/4qXE4sA-hLQ?si=QsEG2PtAmVjIfwDA
https://youtu.be/4qXE4sA-hLQ
https://imaging.cs.cmu.edu/svaf/static/pdfs/Spatially_Varyin...
1. https://www.laserfocusworld.com/optics/article/16555776/alva...
2. https://pdfs.semanticscholar.org/55af/9b325ba16fa471e55b2e49...
That would make it really useful, maybe replacing camera+lidar.
While this method has no post-processing, it requires a pre-processing step to pre-capture the scene, segment it, estimate depth, and compute the depth map.
When you reduce the aperture size, the depth of field increases. So, for example, at f/16 pretty much everything from a few feet to infinity is in focus.
Not doubting you, just asking to understand. Astrophotography doesn't always behave the same as terrestrial photography.
- focal length (wider is deeper)
- crop factor (higher is deeper)
- subject distance (farther is deeper)
Compared to your telescope, any terrestrial photography is likely at the opposite extreme of all of those, and at a disadvantage everywhere except subject distance, which mechanically is most sensitive near infinity focus.
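To make those factors concrete, here's a minimal depth-of-field calculator using the standard hyperfocal-distance approximations (illustrative values; the circle-of-confusion figure is an assumption that encodes sensor size / crop factor):

```python
def depth_of_field(focal_mm, f_number, subject_m, coc_mm=0.03):
    """Near/far limits of acceptable sharpness under the usual thin-lens
    approximations. coc_mm is the circle of confusion; ~0.03 mm is a common
    full-frame value, and it shrinks with crop factor, which is one reason
    smaller sensors give deeper fields for the same framing."""
    f = focal_mm
    s = subject_m * 1000.0                         # work in millimetres
    hyperfocal = f * f / (f_number * coc_mm) + f   # H = f^2 / (N * c) + f
    near = s * (hyperfocal - f) / (hyperfocal + s - 2 * f)
    far = s * (hyperfocal - f) / (hyperfocal - s) if s < hyperfocal else float("inf")
    return near / 1000.0, far / 1000.0             # back to metres

# A 28 mm lens at f/16 focused at 3 m: roughly everything from ~1 m to infinity
# is sharp, matching the "stop down and everything is in focus" intuition above.
print(depth_of_field(28, 16, 3.0))
```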
The last page of the paper has a comparison between their approach and f/32: https://imaging.cs.cmu.edu/svaf/static/pdfs/Spatially_Varyin...
- processing: while there is no post-processing, it needs scene depth information, which requires pre-computation, segmentation and depth estimation. Not a one-shot technique, and quality depends on the computational depth estimates being good
- no free lunch: the optical setup has to trade away some light for this cool effect to work. Apart from the prototype's limitations, how much loss is expected in theory, and how does this compare to a regular camera simply stopped down? f/36 seems excessive as a comparison point (a rough stop-count sketch follows this list)
- resolution: what resolutions have been achieved? (Maybe not the sensor's 12 MPixels? For practical or theoretical reasons?) And what depth range can the prototype capture, beyond a "photo of the Paris Arc de Triomphe displayed on a screen"? This is suspiciously omitted
- what does the bokeh look like when out of focus? At the edge of an object? Introducing weird or unnatural artifacts would seriously limit acceptance
Don't get me wrong, nice technique! But for my liking the paper omits fundamental properties.
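On the f/36 comparison above, a quick sketch of what stopping down that far costs in light (standard photographic stop arithmetic, not numbers from the paper):

```python
import math

def stops_between(n_wide, n_narrow):
    """Stops of light given up going from f/n_wide to f/n_narrow.
    Light gathered scales with aperture area, i.e. with 1/N^2, so the
    difference is 2 * log2(n_narrow / n_wide)."""
    return 2 * math.log2(n_narrow / n_wide)

# Stopping a fast f/2.8 lens down to f/36 for deep focus costs about 7.4 stops,
# i.e. roughly 165x less light reaching the sensor; the open question is how
# close the proposed optics come to that penalty.
print(stops_between(2.8, 36))
```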
This is a pretty good point, and it makes me wonder whether the developers of autonomous vehicles use variable focus adjustments as part of their ML stack, or simply set the focal point to infinity.