Even Realities Smart Glasses: G2
Posted about 2 months ago · Active about 2 months ago
evenrealities.com · Tech · story
Tone: calm, mixed · Debate: 60/100
Key topics
Smart Glasses
Wearable Technology
AI
The Even Realities G2 is a new wearable device that offers a unique take on smart glasses, but commenters are divided on its usefulness and limitations.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 6d after posting
Peak period: 43 comments in the 132-144h window
Avg / period: 43
Based on 43 loaded comments
Key moments
- 01 Story posted: Nov 13, 2025 at 12:30 PM EST (about 2 months ago)
- 02 First comment: Nov 19, 2025 at 4:26 AM EST (6d after posting)
- 03 Peak activity: 43 comments in the 132-144h window (the hottest stretch of the conversation)
- 04 Latest activity: Nov 19, 2025 at 10:55 AM EST (about 2 months ago)
ID: 45917752 · Type: story · Last synced: 11/20/2025, 3:47:06 PM
Thanks, this was key in deciding whether to consider this brand at all.
Reminds me of when dumbphones were introduced and people said things like "why do I need to have a phone with me all the time?"
The crux of it for me:
- if it's not a person, it will be out of sync; you'll be stopping it every 10 sec to get the translation. One could as well use their phone, it would be the same, and there's a strong chance the media is already playing from there, so having the translation embedded would be an option.
- with a person, the other person needs to understand when your translation is going on and when it's over, so they know when to expect an answer or know they can go on. Having a phone in plain sight is actually great for that.
- the other person has no way to check if your translation is completely out of whack. Most of the time they have some vague understanding, even if they can't really speak. Having the translation in the glasses removes any possible control.
There are a ton of smaller points, but all in all the barrier for a translation device to become magic and just work plugged into your ear or glasses is so high that I don't expect anything to beat a smartphone within my lifetime.
However, you are limited in what you can do.
There are no speakers, which they pitch as a "simpler, quieter interface"; that's great, but it means that _all_ your interactions are visual, even when they don't need to be.
I'm also not sure about the microphone setup: if you're doing a voice assistant, you need beamforming/steering.
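To make the beamforming point concrete: the standard baseline is delay-and-sum, where each mic channel is time-shifted so that sound from the target direction lines up, then the channels are averaged. A minimal sketch (my illustration of the general technique, not anything from the G2's firmware):

```python
import numpy as np

def delay_and_sum(signals, mic_positions, direction, fs, c=343.0):
    """Delay-and-sum beamformer for a linear mic array.

    signals:       (n_mics, n_samples) array of recorded channels
    mic_positions: (n_mics,) positions along the array axis, in meters
    direction:     steering angle in radians (0 = broadside)
    fs:            sample rate in Hz
    c:             speed of sound in m/s
    """
    n_mics, n_samples = signals.shape
    # Plane-wave delay for each mic relative to the array origin
    delays = mic_positions * np.sin(direction) / c  # seconds
    out = np.zeros(n_samples)
    for sig, d in zip(signals, delays):
        shift = int(round(d * fs))       # delay in whole samples
        out += np.roll(sig, -shift)      # align channel to the target direction
    return out / n_mics                  # average: coherent sum, noise averaged down
```

Sounds arriving from the steered direction add coherently while off-axis noise partially cancels; real products add fractional-delay filters and adaptive weighting on top of this.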
However, the online context in "conversate" mode is quite nice; I wonder how useful it is. They hint at proper context control ("we can remember your previous conversations"), but that's a largely unsolved problem on large machines, let alone on-device.
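Part of why "remembering previous conversations" is hard on-device is that the model's context is finite, so something has to decide what history to keep. The crudest approach is a rolling window trimmed to a budget; a naive sketch (purely illustrative, not Even Realities' method):

```python
from collections import deque

def trim_history(messages, max_chars=2000):
    """Keep the most recent messages whose combined length fits a budget.

    Walks the history newest-first and stops once adding another message
    would exceed `max_chars`, so the oldest context is dropped first.
    """
    kept = deque()
    total = 0
    for msg in reversed(messages):
        if total + len(msg) > max_chars:
            break
        kept.appendleft(msg)   # preserve chronological order
        total += len(msg)
    return list(kept)
```

Real systems layer summarization and retrieval on top, because hard truncation silently forgets anything older than the window — which is exactly why the problem is described as unsolved.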
For people who are prone to motion sickness, it's also really useful to have it tied to the global frame. (I don't have that, fortunately.)
It's a cool form factor, but the built-in transcription, AI, etc. are not very well implemented, and I cannot imagine a user viewing this as essential rather than a novelty gadget.
Not really. You can build your own apps [1].
1.: https://github.com/even-realities/EvenDemoApp
I guess they could use a common "generic" form factor that would allow prescription lenses to be ordered.
That said, this is really the ideal form factor (think Dale, in Questionable Content), and all headsets probably want to be it, when they grow up.
What matters more is how they support different eye-distances (interpupillary distance, IPD).
For instance, the teleprompter is terrible and buggy when it tries to follow along based on voice. A simple clicker for moving forward in a text file would be better than how it currently works.
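The clicker idea above amounts to a trivial state machine: split the text file into pages and advance one page per button press, with no voice-following at all. A minimal sketch (hypothetical names; this is not the G2's teleprompter API):

```python
class TextPager:
    """Page through a script one chunk at a time on a button click."""

    def __init__(self, text, lines_per_page=3):
        # Drop blank lines, then group the rest into fixed-size pages.
        lines = [l for l in text.splitlines() if l.strip()]
        self.pages = [lines[i:i + lines_per_page]
                      for i in range(0, len(lines), lines_per_page)]
        self.index = 0

    def current(self):
        """Text of the page currently shown on the display."""
        return "\n".join(self.pages[self.index])

    def click(self):
        """Advance to the next page; stay on the last page at the end."""
        if self.index < len(self.pages) - 1:
            self.index += 1
        return self.current()
```

The appeal is that it cannot desync: the display only ever changes when the speaker explicitly asks it to.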
How many people say they lost interest due to ocular issues versus complaints that it’s just not useful?
Seriously. A simple file browser with support for text files only would be more useful than the finicky G1 apps.
Of course visual issues could occur for someone, but it's so aggravating that they can't just put in some sort of proper customization for content.
Now we're going to see people's eyes moving around like crazy.
If it were just a heads-up display for Android like the Xreal, but low power and wireless, that might be cool for when I'm driving. But everyone wants to make AI glasses locked into their own ecosystem. Everyone wants to displace the smartphone, from the Rabbit R1 to the new Ray-Bans. It's impossible.
In the end this tech will all get democratized and open sourced anyways, so I have to hand it to Meta and others for throwing money around and doing all this free R&D for the greater good.
24 more comments available on Hacker News