Smartphone Cameras Go Hyperspectral
Source: spectrum.ieee.org
Posted 3 months ago
Key topics: Hyperspectral Imaging, Smartphone Cameras, Spectroscopy
A new technique claims to enable hyperspectral imaging on smartphone cameras using a special color reference chart, but the community is divided on its feasibility and potential applications.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussionFirst comment
2h
Peak period
39
0-6h
Avg / period
10.9
Comment distribution87 data points
Loading chart...
Based on 87 loaded comments
Key moments
- Story posted: Sep 24, 2025 at 10:20 AM EDT (3 months ago)
- First comment: Sep 24, 2025 at 12:11 PM EDT (2h after posting)
- Peak activity: 39 comments in the 0-6h window
- Latest activity: Sep 26, 2025 at 9:35 PM EDT (3 months ago)
> “Every photo carries hidden spectral information waiting to be uncovered. By extracting it, we can turn everyday photography into science.”
And with our patent, extract rent from anyone who wants to do it!
You are just a dude, so the business grows slowly.
You gather enough attention that some corporation with a lot of bling just goes and copies your thing.
Your business fails.
Can you clarify this? Just curious about what you mean.
Often I say that if we phased out patents and other IP restrictions, then investments would not stop but they would change from less frequent large investments to more frequent small investments. As long as designs can stay secret until release, there will always be first mover advantage and brand recognition. But you might get smaller investments to build out manufacturing for the next year, rather than bigger longer term investments. The flip side is that stagnant innovators who got lucky once will be subsumed by more agile competitors who can better deliver those innovations to market.
Thus investment and innovation wouldn't stop (markets still ensure certain advantages for innovators), but the nature of investment and innovation would shift towards more incremental moves and more diverse actors. A major upside to this is that those best suited to scale an existing nascent technology would be free to compete at doing so.
It should be noted that even die hard capitalists are against IP restrictions [1] as they are a massive government intervention in the market. So proponents of IP restrictions must reconcile their arguments in favor of this government intervention with their professed interest in free markets.
[1] https://youtu.be/GZgLJkj6m0A
It's 2007. Just-a-dude has a great idea: he notices customers on his website often buy just one item, so he'll let them do that with one simple click. What's this, he's just received a cease and desist? Sorry bro, Amazon patented that 10 years ago.
how?
just-a-dude doesn't have a team of patent attorneys sitting in his back office waiting for work.
It’s very easy to declare that someone else “should” do a bunch of work and spend a bunch of money for the altruistic benefit of society.
Instead, anything you don’t like is a rent-seeking late stage capitalism narrative.
I ended up having to flash Lineage: there was some outrage that, in a highly limited set of circumstances, thin see-through T-shirts became slightly more see-through, and OnePlus disabled that camera in their later firmware updates.
Ironically, older iPhones have better depth resolving capability overall. Apple sacrificed depth sensing performance in favor of smaller unit size in the newer ones.
CMOS image sensors are naturally sensitive to near IR. Early feature phones had no IR filters on their cameras - you could see an IR remote light up through them. But as people became more and more obsessed with smartphone camera quality, smartphones started to ship with those filters too. You get more "lifelike" colors that way.
Although in some multi-camera smartphones, one of the secondary cameras may lack an IR filter.
Nowadays, there is a more mature ecosystem, with specialized drone mapping cameras tailored for the purpose.
For our use case, the MicaSense RedEdge would have been perfect.
White balance is hard, in part, because the sensitivity bands of our vision and the camera sensors do not align. Take a look at fluorescent (or, better still, sodium vapor) light spectra for clarity on why this is a massive pain.
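A minimal numpy sketch of that mismatch (all curves below are made-up Gaussians, not real sensor or lamp data):

    import numpy as np

    # Toy model: a camera channel reading is the integral over wavelength of
    # illuminant * reflectance * channel sensitivity, sampled every 10 nm.
    wl = np.arange(400, 701, 10)

    def gauss(center, width):
        return np.exp(-0.5 * ((wl - center) / width) ** 2)

    sens = np.stack([gauss(600, 40), gauss(540, 40), gauss(460, 40)])  # R, G, B
    grey = np.full(wl.shape, 0.5)              # spectrally flat "grey" object

    broadband = np.ones_like(wl, dtype=float)  # idealized daylight
    sodium = gauss(589, 3)                     # near-monochromatic street lamp

    for name, light in [("broadband", broadband), ("sodium", sodium)]:
        rgb = sens @ (light * grey)
        print(name, np.round(rgb / rgb.max(), 3))
    # Under the sodium lamp the grey object reads strongly red-biased; no
    # per-channel gain can restore colors the lamp never emitted.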
It's a cool trick if it works, but it seems very finicky and I guess would be limited to transparent/homogeneous liquids?
But that seems far more difficult. Precisely combining and applying combinations of inks to a mirrored surface sounds like a helluva manufacturing challenge.
Probably closer to what you're thinking about would be putting a bunch of tiny bandpass filters in front of a mirror, but in that case you can ditch the mirror entirely and just point the camera through the filter array.
A filter array right on top of the sensor is how the vast majority of CMOS cameras distinguish colour anyway.
> designed a special color reference chart that can be printed on a card
My rudimentary understanding of physics makes me suspect this sentence is a simplification.
A normal printer uses cyan, magenta, yellow, and black to print. A photo of such a print would already destroy a lot of spectral information, for the same reason the individual RGB sensors do.
So I suspect those colored dots are a very careful and deliberate concoction of very particular inks with very specific spectral color bands.
I suspect a lot of effort went into finding, mixing, and algorithmically combining the right inks.
I'm guessing it works similarly to how a narrow-band fluorescent lamp makes only materials that reflect a very specific frequency visible, which makes a lot of prints and pigments look weird. (If you do the opposite, using ink with a very specific spectral band, you can instead measure the lamp.)
Insanely clever. (Whatever they did)
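A toy numpy illustration of the information loss described above: two different spectra can produce identical RGB readings (metamers), so three filtered sensors can't tell them apart. The sensitivity curves are invented stand-ins, not real sensor data:

    import numpy as np

    # Made-up Gaussian channel sensitivities, sampled 400-700 nm.
    wl = np.arange(400, 701, 10)
    sens = np.stack([np.exp(-0.5 * ((wl - c) / 40.0) ** 2) for c in (600, 540, 460)])

    flat = np.full(wl.shape, 0.5)       # a smooth grey reflectance spectrum

    # Add a null-space component of the 3x31 sensor matrix: it changes the
    # spectrum but is invisible to all three channels. (Ignoring physical
    # non-negativity; this just shows the projection loses information.)
    _, _, vt = np.linalg.svd(sens)
    spiky = flat + 2.0 * vt[5]

    print(np.allclose(sens @ flat, sens @ spiky))  # True: same RGB triplet
    print(np.abs(flat - spiky).max() > 0.1)        # True: different spectra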
Perhaps there's some accounting for this, and I'm curious to learn what it is, because it's a phenomenally complex problem.
1. You might think the sun is a standard source, but it's usually modulated by the atmosphere[2].
2. Unless you are in space.
The slip itself is a calibration reference, so a clean photo of it could serve to compensate for the lamp and camera and calculate how accurate the readings are for different parts of the spectrum. (But a good wide-spectrum light would be ideal for high-precision readout.)
You're also still limited to visible light because of the camera's UV and IR filters, for which the sun is a decent reference.
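A minimal per-channel sketch of that compensation idea (illuminant and gain values are invented; real processing would work from linear RAW values):

    import numpy as np

    # measured = illuminant * camera_gain * reflectance, per channel.
    # Dividing a scene reading by the reading of a chart patch with known
    # reflectance cancels both the unknown lamp and the unknown camera.
    illuminant = np.array([0.9, 1.0, 0.6])         # lamp color cast (unknown)
    gain = np.array([1.1, 1.0, 1.3])               # camera gains (unknown)

    chart_reflectance = np.array([0.8, 0.8, 0.8])  # known from the card spec
    scene_reflectance = np.array([0.2, 0.5, 0.4])  # what we want to recover

    chart_reading = illuminant * gain * chart_reflectance
    scene_reading = illuminant * gain * scene_reflectance

    recovered = scene_reading / chart_reading * chart_reflectance
    print(np.round(recovered, 3))                  # [0.2 0.5 0.4]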
"Between around 10,000 nm (far infrared) and around 100 nm (deep ultraviolet), the spectrum of the Sun's spectral irradiance agrees reasonably well (though not perfectly) with that of a blackbody radiator at about 5,700K. That is about the temperature of the Sun's photosphere. The deviation from a perfect blackbody spectrum is due to many factors, including the absorption of light by constituents of the solar atmosphere, and the fact that the photosphere is not uniform, but has some hotter and some cooler regions, so that what is seen from the Earth is a composite spectrum of blackbody radiators at a range of different temperatures. About 99% of the total electromagnetic radiation coming from the Sun is in the ultraviolet-visible-infrared region."
https://acd-ext.gsfc.nasa.gov/anonftp/acd/daac_ozone/Lecture...
I hope there is a research paper on this I can read.
They are not four but twelve different (!) base colors. They are calibrated and very lightfast, i.e. they degrade much more slowly over time and "look the same" after years.
What's your background in understanding how printing works?
I'm not sure what happens on paper, but when you have ink dissolved in water the absorption is not linearly proportional to the concentration; transmission falls exponentially with concentration (the Beer-Lambert law). For example, consider a red ink and 5 magically selected frequencies, where the absorptions at 1% concentration are: 99%, 99%, 50%, 10%, 1%.
If you double the ink to 2% you get 99.99%, 99.99%, 75%, 19%, 1.99%.
So increasing or decreasing the ink concentration may give you information about different frequencies, even with only one ink and only one sensor. In this case it's mostly about the 3rd and 4th. With more concentration you may kill all the light in the 3rd and measure the absorption ratio between the 4th and the 5th.
One problem I see here is how to sort it all out, but I guess it's possible with a few sensors and a few inks. Each sensor sees all the frequencies, but with different weights. I'm not sure if it's possible to solve this, but perhaps you need some initial approximate model of the inks(???).
Now, when you put the inks on paper and have an unreliable light source and perhaps other technical problems, ...
In conclusion, I think it's possible to use different saturations and mixes of the inks to get different spectral distributions of the light that bounces off the card. Then use the three sensors to get three averages and pull out the big linear algebra book to reconstruct what happens in between. But I should read the paper to be sure.
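A quick numpy check of those numbers (the five absorption values are the made-up example above; the exponent follows from Beer-Lambert transmission, T(c) = T(1%) ** (c / 1%)):

    import numpy as np

    absorption_at_1pct = np.array([0.99, 0.99, 0.50, 0.10, 0.01])
    transmission_at_1pct = 1.0 - absorption_at_1pct

    for conc in (1, 2, 4):  # ink concentration in percent
        transmission = transmission_at_1pct ** conc
        print(conc, np.round(1.0 - transmission, 4))
    # conc=2 reproduces 99.99%, 99.99%, 75%, 19%, 1.99% from above.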
It won't be as accurate, but it might be enough to offer some insights into whether the liquid photographed in the article is in fact whisky, not urine (which to me seems to be a much more noble demonstration subject).
We utilized a professional photographic inkjet printer (ImagePROGRAF PRO-1000, Canon), equipped with 11 ink cartridges and a chroma optimizer (PFI-1000 LUCIA PRO Ink, Canon), and used the manufacturer-recommended genuine paper (Photo Paper Premium Fine Art Smooth, Canon) for printing. To reproduce the desired reference colors for the spectral color chart, we also implemented a customized printing calibration process while maintaining the International Color Consortium (ICC) profile. The actual printed colors (output) showed notable distortions compared with the intended colors (input), which were particularly influenced by the type of paper (print sheet). For customized printing calibration, we mapped the exact relationship of the CIE xy chromaticity values between the digital color input and printed output values. After the printing process was completed, we measured the reflectance spectra of all reference colors from the printed spectral color chart (Fig. S1) using a spectrometer and a diffuse reflectance standard (equivalent to using CIE illuminant E). We confirmed that the CIE xy chromaticity values obtained from these measurements were in excellent agreement with the desired input values within the SWOP v2 gamut (Fig. 1(e)).
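A rough sketch of what such a mapping could look like in practice: fit printed-vs-requested chromaticity pairs and invert the fit. The distortion model and polynomial features here are invented stand-ins, not the paper's actual procedure:

    import numpy as np

    rng = np.random.default_rng(1)
    requested = rng.uniform(0.2, 0.6, size=(50, 2))          # CIE xy inputs
    printed = 0.9 * requested + 0.03 + 0.05 * requested**2   # fake distortion

    # Fit printed -> requested with quadratic features, then evaluate at the
    # chromaticity we actually want on paper to find what to send.
    def features(xy):
        x, y = xy[:, 0], xy[:, 1]
        return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

    coef, *_ = np.linalg.lstsq(features(printed), requested, rcond=None)
    target = np.array([[0.35, 0.40]])
    print(np.round(features(target) @ coef, 4))  # input to request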
Using a known reflectance chart in-scene to recover spectral information is a standard calibration technique.
What "investment" is patent law protecting here?
I've always wanted to build a tricorder with my son, was just thinking about it last week when he was putting together a digital compass (with RasPi Nano, magnetic sensor, GPS, and LED light ring + OLED).
https://cat.smartwalkie.com/store/products/cats62pro
Custom processors to accelerate generation of AI text: Go ahead
Slightly thicker to fit a bigger battery in: How dare you
EDIT: Okay, after going to the actual paper I at least get transmission mode: you photograph the color chart through the sample, which imprints the absorption spectrum onto the known spectrum of the color chart, and you can then compare against the chart photographed without the sample in between. But I do not get the logic behind their reflectance mode.
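A minimal sketch of that transmission-mode logic with made-up per-patch linear RGB means (three patches, three channels):

    import numpy as np

    bare_chart = np.array([[0.80, 0.10, 0.05],    # patch 1: reddish ink
                           [0.10, 0.70, 0.15],    # patch 2: greenish ink
                           [0.05, 0.15, 0.75]])   # patch 3: bluish ink
    through_sample = np.array([[0.60, 0.09, 0.05],
                               [0.08, 0.55, 0.14],
                               [0.04, 0.12, 0.70]])

    # Per-patch, per-channel transmittance of the sample. Each patch probes
    # the sample's absorption weighted by that ink's known spectrum, which
    # is what lets many patches pin down a finer absorption curve.
    transmittance = through_sample / bare_chart
    print(np.round(transmittance, 3))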
Every now and then we discover new traces left by the original light field (by using a fuller picture to model the image formation process, instead of an easier oversimplified one).
It makes one wonder how much of the noise is actually misinterpreted signal.
First, you need to understand coded masks[1,2]. This is a way to use an LCD or other array to mask off parts of a scene so that very specific parts of it are sampled, but the rest aren't, yielding a single, high-resolution analog value. Then you switch to other masks and get more values. You can then work backwards in the math, through the known mask shapes, to get the original image with far fewer samples than would be required one at a time.
Think of the above as a 2D visual version of the Fourier transform[3,4]. This transform is used heavily to compress images, throwing away most of the bits in an image without losing its essence.
The analysis they're talking about uses a very specially printed card. It isn't just something generated with a standard 4-ink printer; each "dot" is a separate, unique ink with tightly controlled spectral curves, and these form a virtual version of the above masks. When you view these through a sample, it can give an idea of the spectral response of the camera and the liquid, by using the many different known response curves of the "dots" to work backwards and generate the one-dimensional, very tight response curve of a hyperspectral imager: you figure out where each "dot" is in the scene, then average that dot's intensity across the RGB values of the picture taken by the camera. Today's cameras have sufficient resolution and bit depth that you get the original bit depth (usually 8 bits) plus an additional bit for each doubling of the number of pixels in a given "dot". This is degraded by Bayer pattern filters[5,6] and the nature of cameras, but it's not unrecoverable.
Like with Coded Masks, and Fourier transforms, you then take your high resolution analog values, and work backwards to get the things you want to measure.
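A toy end-to-end version of that "work backwards" step as a linear inverse problem. The response matrix here is random; the real one would come from the measured ink spectra and camera curves:

    import numpy as np

    rng = np.random.default_rng(0)
    n_bins, n_patches = 20, 60        # spectral bins vs. chart "dots"

    # Unknown spectrum to recover: a smooth bump across the bins.
    x_true = np.exp(-0.5 * ((np.arange(n_bins) - 7) / 3.0) ** 2)

    # Known per-patch responses (ink transmission curves times camera
    # channel sensitivities); random stand-ins for the calibrated card.
    A = rng.uniform(0.0, 1.0, size=(n_patches, n_bins))

    # One averaged reading per patch, plus a little sensor noise.
    y = A @ x_true + rng.normal(0.0, 0.01, size=n_patches)

    # Work backwards with least squares; regularization helps when A is
    # ill-conditioned, as it is with real, overlapping ink spectra.
    x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
    print("max abs error:", float(np.abs(x_hat - x_true).max()))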
[1] https://www.youtube.com/watch?v=_ezhdhHNku0
[2] https://en.wikipedia.org/wiki/Coded_aperture
[3] https://en.wikipedia.org/wiki/Fourier_transform
[4] https://www.youtube.com/watch?v=spUNpyF58BY&pp=ygURZm91cmllc...
[5] https://www.youtube.com/watch?v=LWxu4rkZBLw&pp=ygUMYmF5ZXIgZ...
[6] https://en.wikipedia.org/wiki/Bayer_filter