Fair Screen – Detect Cluely/Interview Coder-Style Interview Cheating Tools
Over the last year, "undetectable" interview-assistant tools have exploded. They overlay real-time AI prompts, code, or answers in transparent or non-shareable windows, run through virtual desktops, or hide inside remote sessions. Screen-sharing platforms like Zoom, Meet, and Teams can't see these windows because of OS sandboxing, so interviewers have no idea the answers are coming from an AI tool sitting just outside the captured screen.
Fair Screen takes a different approach. Instead of scanning processes or capturing screen data, it watches the behavior of the window system itself: invisible overlays, transparent windows, remote-desktop footprints, crosshair-style cursor changes, VM artifacts, and other innocuous, metadata-level signals that these tools unintentionally leave behind.
These signals are surfaced in real time to the interviewer in a simple dashboard. No recording, no screenshots, no process killing, no monitoring software. Just “this looks like an invisible window is present” or “this looks like RDP/VM behavior.”
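To make that concrete, here is a rough sketch of what a metadata-only signal payload could look like. This is my own illustration, not Fair Screen's actual wire format; the field names (`kind`, `detail`, `confidence`, `observed_at`) are hypothetical.

```python
# Hypothetical sketch of a metadata-only risk signal as the dashboard
# might receive it. No window titles, pixels, or keystrokes are ever
# included -- only the category of behavior that was observed.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class RiskSignal:
    kind: str          # e.g. "invisible_overlay", "rdp_session", "vm_artifact"
    detail: str        # human-readable summary shown to the interviewer
    confidence: float  # heuristic score in [0, 1]
    observed_at: str   # ISO 8601 timestamp

signal = RiskSignal(
    kind="invisible_overlay",
    detail="A window excluded from screen capture is present",
    confidence=0.9,
    observed_at=datetime.now(timezone.utc).isoformat(),
)
print(json.dumps(asdict(signal)))  # what the dashboard feed would carry
```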
Why I built it: I kept hearing the same story from interviewers:
- Answers that were too perfect
- Strange pauses
- Eyes scanning an invisible script
- The cursor turning into a crosshair
- Candidates reading off-screen in a way that video can't show
There were zero tools aimed at detecting this without spying on candidates or collecting their data. The only existing solutions were invasive proctoring suites, which nobody likes.
How it works (technical summary):
- Uses OS-level window enumeration (non-invasive, metadata only; see the sketch after this list)
- Identifies windows that are non-shareable, click-through, or overlaying the main screen
- Detects artifacts of remote sessions and VMs through display, compositor, and input characteristics
- Streams only these signals (not content) to the interviewer dashboard
- Interviewer sees a live feed of "risk indicators," not the actual screen
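As a concrete illustration of the first three points, here is a minimal sketch of metadata-only window enumeration on Windows via ctypes. This is my own approximation of the technique, not Fair Screen's code; it flags windows excluded from screen capture (the "non-shareable" trick overlay tools rely on), click-through layered overlays, and an active RDP session.

```python
# Minimal sketch, assuming Windows and Python 3 (ctypes only).
# Enumerates top-level windows and collects metadata-only signals:
# no titles, no pixels, no process inspection.
import ctypes
from ctypes import wintypes

user32 = ctypes.windll.user32

GWL_EXSTYLE = -20
WS_EX_LAYERED = 0x00080000
WS_EX_TRANSPARENT = 0x00000020   # click-through: clicks pass to windows below
WDA_EXCLUDEFROMCAPTURE = 0x11    # window is hidden from screen-capture APIs
SM_REMOTESESSION = 0x1000        # nonzero when running inside an RDP session

EnumWindowsProc = ctypes.WINFUNCTYPE(wintypes.BOOL, wintypes.HWND, wintypes.LPARAM)

def scan_for_signals():
    signals = []
    if user32.GetSystemMetrics(SM_REMOTESESSION):
        signals.append("remote (RDP) session detected")

    @EnumWindowsProc
    def on_window(hwnd, _lparam):
        if not user32.IsWindowVisible(hwnd):
            return True  # keep enumerating
        affinity = wintypes.DWORD(0)
        user32.GetWindowDisplayAffinity(hwnd, ctypes.byref(affinity))
        if affinity.value == WDA_EXCLUDEFROMCAPTURE:
            signals.append("window excluded from capture (invisible to screen share)")
        ex_style = user32.GetWindowLongW(hwnd, GWL_EXSTYLE)
        if ex_style & WS_EX_LAYERED and ex_style & WS_EX_TRANSPARENT:
            signals.append("click-through layered overlay present")
        return True

    user32.EnumWindows(on_window, 0)
    return signals

if __name__ == "__main__":
    for s in scan_for_signals():
        print("risk indicator:", s)
```

A real implementation would presumably debounce these checks, score them, and stream them as payloads like the one sketched earlier; on macOS the analogous metadata would come from CGWindowListCopyWindowInfo (window layer, alpha, and sharing state).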
What it does NOT do:
- No screen recording
- No screenshots
- No keylogging
- No process scanning
- No network monitoring
- No content analysis
It is intentionally privacy-first.
Live demo: https://fairscreen.co
(You can generate a session and see how the dashboard reacts.)
I would really appreciate feedback from the HN community on:
- The technical approach
- Privacy tradeoffs
- Edge cases I may have missed
- Ideas for making this more transparent and trustworthy
- Whether there's a better way to handle false positives
This is currently free to use while I gather feedback and refine the detection heuristics.
Happy to answer any technical questions!