We Broke AI-Assisted Interview Cheating [video]
youtube.com · posted 3 months ago
Topics: AI, interview cheating, education technology
A company claims to have developed a system to detect AI-assisted cheating in interviews, sparking discussion on the implications of AI in hiring processes.
Story posted: Oct 6, 2025 at 11:51 AM EDT
Here’s how we did it:
When interviewing candidates for summer internships, I had a nagging feeling that some of them were cheating. One candidate in particular would glance at the corner of his screen every time I asked a question. Maybe it was paranoia (fed by all the interview-cheating posts I was seeing on social media), but I was fairly sure he was getting outside help.
We looked at the cheating prevention/detection solutions on the market. Most of them rely on heuristics (eye tracking, measuring speech inflections) or spyware (keystroke loggers). These approaches are intrusive and fragile, and the chance of false positives is non-trivial. God forbid I get nervous during an interview and glance around.
We wanted to take a different approach from current solutions, drawing on our experience hacking ML models, specifically via adversarial examples. We make special “invisible” pixel changes so that when the AI cheating tool screenshots the interview question, those pixels force the underlying model to refuse to answer, or even to output an incorrect solution. For audio-based cheating, we make small, targeted perturbations in the spectral domain that cause the AI assistant to mistranscribe the question entirely.
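To make the idea concrete, here's a toy sketch of that kind of adversarial perturbation. Everything in it (the linear surrogate "model", the epsilon budget) is an illustrative assumption, not our actual attack, which targets closed vision models:

```python
import numpy as np

# Toy surrogate "model": a linear scorer over flattened pixels. A real
# attack targets the vision model behind the cheating tool; this linear
# stand-in just illustrates the mechanics.
rng = np.random.default_rng(0)
w = rng.normal(size=64 * 64)           # surrogate weights
img = rng.uniform(0.0, 1.0, 64 * 64)   # "screenshot" pixels in [0, 1]

def score(x):
    return float(w @ x)  # higher score = model confidently "answers"

# FGSM-style step: nudge every pixel by at most eps against the
# gradient's sign, so the score drops while the image looks unchanged.
eps = 2 / 255  # below typical perceptual thresholds
adv = np.clip(img - eps * np.sign(w), 0.0, 1.0)

print(score(img), score(adv))          # score drops after the attack
print(np.abs(adv - img).max() <= eps)  # perturbation stays "invisible"
```

The point is that the perturbation is bounded per pixel, so a human sees the same screenshot while the model's behavior changes.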
It took us a few weeks to implement the first prototype, and that's when we hit our first major hurdle: pixels that could break one cheating tool would not work against others. This was frustrating because we couldn't figure out why, and we almost called it quits. After a few more weeks of experiments, though, we found two culprits. (1) Different underlying LLMs: for example, Cluely likely uses Claude while InterviewCoder uses some variant of the GPT family, and each model requires a different pixel-change strategy. (2) System prompts: the pixel changes are affected by the system prompt the cheating tool uses, and since each tool has its own variation, each requires a different pixel-change method.
Our dream was to build a “one-size-fits-all” attack. It took months of iteration and hundreds of experiments to build something that worked against ALL cheating tools.
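In spirit, the fix resembles the classic ensemble trick for making adversarial examples transfer: optimize one perturbation against several surrogate models at once. A toy numpy sketch, where linear surrogates stand in (purely hypothetically) for the different LLM/system-prompt combinations:

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 64 * 64
# Three surrogate models standing in for different cheating-tool
# backends (hypothetical; the real targets are closed models).
surrogates = [rng.normal(size=dim) for _ in range(3)]
img = rng.uniform(0.0, 1.0, dim)

eps = 2 / 255
# Ensemble attack: follow the sign of the *averaged* gradient, so a
# single perturbation degrades every surrogate at once -- the standard
# recipe for improving transfer across models.
avg_grad = np.mean(surrogates, axis=0)
adv = np.clip(img - eps * np.sign(avg_grad), 0.0, 1.0)

for w in surrogates:
    print(float(w @ img), float(w @ adv))  # every surrogate's score drops
```

One perturbation that hurts all surrogates simultaneously is far more likely to carry over to an unseen model than one tuned to a single backend.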
Along the way, we extended our method to defeat audio cheating, where an AI assistant listens to the interviewer and writes answers to a hidden screen. Making those spectral changes in real time (milliseconds, not hours) was a technical nightmare, but we got there.
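For a sense of what "spectral changes in real time" means, here is a minimal sketch: perturb one narrow frequency band of a 20 ms audio frame via FFT and measure the latency. The band, gain, and frame size are illustrative assumptions, not the parameters of our system:

```python
import time
import numpy as np

sr = 16_000  # sample rate (Hz)
rng = np.random.default_rng(2)
chunk = rng.normal(scale=0.1, size=sr // 50)  # one 20 ms audio frame

def perturb_frame(x, band=(2000, 4000), gain=0.05):
    """Nudge a narrow spectral band -- a toy stand-in for the targeted
    perturbations that make an ASR model mistranscribe the question."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1 / sr)
    mask = (freqs >= band[0]) & (freqs < band[1])
    spec[mask] *= 1 + gain  # small multiplicative nudge, in-band only
    return np.fft.irfft(spec, n=len(x))

perturb_frame(chunk)  # warm-up so we time steady-state cost
t0 = time.perf_counter()
adv = perturb_frame(chunk)
elapsed_ms = (time.perf_counter() - t0) * 1e3

print(elapsed_ms)                    # must beat the 20 ms frame budget
print(np.max(np.abs(adv - chunk)))  # time-domain change stays tiny
```

The FFT of a short frame is cheap; the hard part in practice is keeping perturbations effective against the target ASR model while staying inside the per-frame latency budget.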
In short, after hundreds of experiments and a few months of stubborn engineering, we built a low-friction layer that breaks the “screenshot-and-ask” and audio-proxy workflows used by cheating tools without invading candidate privacy or relying on brittle behavior heuristics. We productized those defenses as BlindSpꙨts (https://blind-spots.ai) so teams can protect live interviews in real time, with minimal false positives and no keystroke or camera spyware. If you run technical interviews, ask us for a demo!
Check out the video here: https://www.youtube.com/watch?v=wJPfr5hIl10