Eye Prosthesis Is the First to Restore Sight Lost to Macular Degeneration
Posted 3 months ago · Active 2 months ago
Source: med.stanford.edu · Research story
Key topics
Medical Technology
Vision Restoration
Macular Degeneration
Stanford researchers have developed an eye prosthesis that can restore sight lost to macular degeneration, sparking excitement and discussion about its potential and implications.
Snapshot generated from the HN discussion
Discussion Activity
First comment: 7 days after posting
Peak period: 13 comments (Day 8)
Average: 5.8 comments per period
Comment distribution: 23 data points, based on 23 loaded comments (chart omitted)
Key moments
1. Story posted: Oct 22, 2025 at 12:36 PM EDT (3 months ago)
2. First comment: Oct 29, 2025 at 9:47 AM EDT (7 days after posting)
3. Peak activity: 13 comments in Day 8, the hottest window of the conversation
4. Latest activity: Nov 1, 2025 at 10:15 PM EDT (2 months ago)
ID: 45671677 · Type: story · Last synced: 11/20/2025, 1:42:01 PM
The phosphenes[0] patients sense will depend on what is left of the retina. People using earlier systems reported that some interpolation happened; maybe that is true of this device too.
[0] Phosphene is the name for the image the brain manifests in response to signals received by the visual cortex. Most of us experience them when we close our eyes and rub them, or just see things that are not really there.
Are there instances of single-eye outcomes where the subject has drawn the perceived image, so we can understand how this translates into conscious visual experience?
Even just a flash on the left == left object vs. a flash on the right == right object would be a useful signal compared to zero. But describing it as "vision" would be stretching it. 378 pixels is a few letters at 10x18, so it's 2-3 words at most (rough arithmetic in the sketch below). Again, massive gains over nothing, but it's beyond "large print"; it's "large print with a magnifying glass", and it might be phosphor-burn colour against black, or a foggy field, or a number of things.
To be clear, this is amazing stuff and hats off to anyone who helped make it happen, but let's not assume we're in "snow crash" territory just yet.
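A quick back-of-the-envelope for the pixel-budget claim above; the 10x18 glyph cell comes from the comment, while the coarser 5x7 comparison cell is my own assumption:

```python
# Back-of-the-envelope: how much text fits on a 378-pixel array?
# The 10x18 glyph cell is from the comment; the 5x7 cell is a
# hypothetical coarser font added for comparison.
PIXELS_TOTAL = 378

def glyphs_per_frame(cell_w: int, cell_h: int) -> int:
    """How many character cells fit in the total pixel budget."""
    return PIXELS_TOTAL // (cell_w * cell_h)

print(glyphs_per_frame(10, 18))  # 2 glyphs -> a couple of letters per frame
print(glyphs_per_frame(5, 7))    # 10 glyphs -> roughly two short words
# Reading 2-3 words therefore implies scanning or zooming across the scene.
```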
Interpolation would be more transparent, much like it is for you right now. There are no phosphors in tubes in any of this.
I made no such "snow crash" assumption.
Users of devices like this have described their experiences, and what they report is not generally big square pixels.
Think of them more like points the brain can do something with.
The chip stimulates the remaining signal-capable neural elements in the damaged retina. I doubt there is a 1:1 relationship between those and the signaling points on the chip.
When the company can do better than on/off brightness contrast, the overall experience should improve dramatically. There will be more signal points (1024-ish?), and giving those variable output levels will hand the user's visual cortex a whole lot more to work with.
About the only analogous thing I can come up with is cochlear implants. Those have a number of signal channels that seems a lot smaller than one would expect; that was certainly my take. The more of them there are, the more concurrent sounds can be differentiated. A greater sense of timbre, in other words, becomes possible.
The reason for it being a two-level (on/off) device at present is likely that it is still mostly research and not so much engineering.
They say their next chip will deliver grey scale and many more signal points.
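To make that concrete, here is a minimal sketch of how a camera frame might be mapped onto an electrode grid. This is my construction, not the published PRIMA pipeline; the grid shapes (18x21 for the current ~378-point chip, 32x32 for a hypothetical 1024-point successor) and the 16-level grey scale are assumptions:

```python
# Illustrative sketch: downsample a grayscale camera frame to an
# electrode grid, then quantize to the device's output levels.
import numpy as np

def to_electrodes(frame: np.ndarray, grid: tuple, levels: int) -> np.ndarray:
    """Average-pool `frame` down to `grid` and quantize to `levels`."""
    h, w = frame.shape
    gh, gw = grid
    # Crop so the frame divides evenly into gh x gw blocks, then box-filter.
    pooled = frame[:h - h % gh, :w - w % gw]
    pooled = pooled.reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))
    # Quantize: levels=2 gives on/off, levels=16 gives a coarse grey scale.
    return np.round(pooled / 255 * (levels - 1)).astype(np.uint8)

frame = np.random.randint(0, 256, (180, 240), dtype=np.uint8)  # stand-in camera frame
current_gen = to_electrodes(frame, grid=(18, 21), levels=2)    # ~378 on/off points
next_gen    = to_electrodes(frame, grid=(32, 32), levels=16)   # ~1024 grey-scale points
```

Moving from levels=2 to levels=16 is exactly the on/off-to-grey-scale jump described above.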
My guess on color is that one or more of the following is true:
[0] The color info is normally carried by the now-damaged color-sensitive cells, and we do not yet understand how that signal enters the nerves we can actually stimulate.
[1] It may be that we need a far smaller, more precise signal point to achieve color. Current tech stimulates many nerve endings at once. This was the basis for my interpolation comment above: each pixel stimulates an area of the damaged retina which contains a great many possible signal points, if only it were possible to stimulate them individually. Because so many are stimulated all at once, the subject perceives white phosphenes rather than colored ones (a toy model of this appears after the list).
An analogy would be the colors on a CRT. A broad beam would light them all up, yielding monochrome vision; a narrow beam can light up a few, or just one, yielding color.
One thing I just realized while writing this is that our blue-sensitive cells are scattered about, not well clustered like the green and red ones are.
Maybe current users see a bit of color at the very edge of the artificial visual field due to a failure to hit the necessary blue cells...
[2] It may be that some sort of pulse pattern is needed to encode colors, and perhaps the current signaling is continuous.
Hopefully, we get an answer from the team.
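To illustrate the white-phosphene argument in [1], here is a toy numeric model; the pathway fractions are invented illustration values, not measurements from the study:

```python
# Toy model (illustrative assumption, not from the paper): if one broad
# electrode drives a retinal patch containing red-, green-, and
# blue-pathway cells in roughly equal shares, the pooled response is
# balanced and should read as achromatic (white); a hypothetical narrow
# probe hitting mostly one pathway would skew the balance toward a color.
import numpy as np

broad_patch  = np.array([0.33, 0.33, 0.34])  # everything under the electrode fires
narrow_probe = np.array([0.80, 0.15, 0.05])  # mostly the red pathway

def pathway_balance(resp: np.ndarray) -> np.ndarray:
    """Normalize pathway responses; equal shares ~ white percept."""
    return resp / resp.sum()

print(pathway_balance(broad_patch))   # ~[0.33 0.33 0.34] -> near-white
print(pathway_balance(narrow_probe))  # [0.80 0.15 0.05] -> would look reddish
```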
> the PRIMA device provides only black-and-white vision
I got an ICL (Implantable Collamer Lens) implant at 22 (25 now), and that ruined my night vision with ghosting and glare.
I don’t know much about medical trials, but this seems surprisingly low to me, especially given that the study population is presumably predisposed to liking the device (since they opted in to an experimental study).
Did the implant not work in these cases, or were there other quality-of-life issues? I wish university press releases on science were less rah-rah and presented more factual information. I guess that’s what the NEJM article is for.
And it still uses your regular peripheral vision, so the experience of merging the two might be uncomfortable.
Not discounting the success at all, but anything messing with your senses is probably very hard to adapt to unless it’s pretty much a perfect match for the experience you’re used to.
It also means that there is the potential here for night vision.
Can't wait to see what advancements will be made in vision-related healthcare over the next 20 years.