We Hacked Burger King: How Auth Bypass Led to Drive-Thru Audio Surveillance
Posted 4 months ago · Last active 4 months ago
Source: bobdahacker.com · Tech story · High profile
Sentiment: heated, negative · Debate: 80/100
Key topics: Security, Surveillance, Vulnerability Disclosure
A security researcher discovered vulnerabilities in Burger King's drive-thru system, allowing unauthorized access to audio recordings, and published a blog post about it, which was later taken down due to a DMCA complaint.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 21m after posting
- Peak period: 93 comments in 0-6h
- Average per period: 16
- Comment distribution: 160 data points (based on 160 loaded comments)
Key moments
1. Story posted: Sep 6, 2025 at 9:04 AM EDT (4 months ago)
2. First comment: Sep 6, 2025 at 9:25 AM EDT (21m after posting)
3. Peak activity: 93 comments in 0-6h (hottest window of the conversation)
4. Latest activity: Sep 9, 2025 at 2:07 PM EDT (4 months ago)
ID: 45148944 · Type: story · Last synced: 11/20/2025, 8:28:07 PM
There is if it relegates you to shitty work environments and doesn’t afford a decent living as is generally the case in the US.
Pay people $30/hour and I bet they'll say it every time without software yelling at them. (With the software in place, I have never heard the line "you rule" at Burger King, but I also only go like twice a year. So why write it? It doesn't work.)
Ironically, the less a job pays, the harsher and more demanding the bosses tend to be.
Earning six figures as a software developer, working from home, and you have to take a week off sick? No problem, take as long as you like, hope you feel better soon.
Earning minimum wage at a call centre? Missing a shift without 48 hours advance notice is an automatic disciplinary. No, we don't pay sick leave for people on a disciplinary (which is all of them). Make sure you get a doctor's note, or you're fired.
At least you didn’t find that the bathroom rating tablets had audio as well!
I'm pretty sure someone was willing to pay for this, but at least the researchers acted responsibly.
I’m asking earnestly; it seems like if nobody actually cares about these gaps then there shouldn’t be an economic driver to find them, and yet (in many companies, but not Burger King) there is.
Is it all just cargo culting or are there cases where company vulnerabilities would be worth something?
To me it seems like quite a stretch for “don’t hack me” to get framed as “Burger King is leveraging their corporate power to tell me what to do against my will”.
And to be clear I actually do think that it would be better for Burger King to invite and reward responsible disclosure, in the same way that you’d want your bank to have a hotline for people to report problems like doors that won’t lock. But if the bank didn’t have that hotline it wouldn’t excuse breaking in.
The police and the judge and the jury don't care what colour fabric you put on your head this morning. They (in theory) care if you committed a crime and they can prove it. Which you did and they can, since you confessed. So you go to jail for a long time.
I doubt that would go down very well, nor would it if you did that with businesses instead of private homes.
It wasn't enough to just shove a folder through the gap of the doors; you had to ensure the folder opened up as it was falling, changing more of the pixels to get to the trigger threshold. It took me around 5 minutes to get it to consistently trigger. CEO was displeased dev & design team now knew how to bypass the door lock from the outside; he wasn't going to pay to fix it.
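The door trigger described above is classic frame-difference motion detection: count how many pixels changed between frames and fire when the fraction crosses a threshold. A minimal sketch, assuming hypothetical `pixel_delta` and `trigger_fraction` values (the real product's thresholds are unknown):

```python
import numpy as np

def motion_triggered(prev_frame: np.ndarray, frame: np.ndarray,
                     pixel_delta: int = 25,
                     trigger_fraction: float = 0.05) -> bool:
    """Return True when enough pixels changed between two grayscale frames.

    pixel_delta: per-pixel intensity change needed to count as "changed".
    trigger_fraction: fraction of changed pixels that fires the sensor.
    Both thresholds are illustrative guesses, not the product's real values.
    """
    diff = np.abs(prev_frame.astype(np.int16) - frame.astype(np.int16))
    changed_fraction = (diff > pixel_delta).mean()
    return changed_fraction >= trigger_fraction

# A falling closed folder changes only a thin sliver of pixels; one that
# opens mid-fall sweeps a much larger area, pushing the fraction over the
# trigger -- which is why the folder had to open to work.
h, w = 120, 160
prev = np.zeros((h, w), dtype=np.uint8)
closed = prev.copy()
closed[:, 78:82] = 255    # thin sliver: 4/160 columns, ~2.5% of pixels
opened = prev.copy()
opened[:, 60:100] = 255   # wide sweep: 40/160 columns, 25% of pixels

print(motion_triggered(prev, closed))  # below the 5% threshold
print(motion_triggered(prev, opened))  # well above it
```

This also illustrates why such sensors are hard to tune: a threshold low enough to see a person approaching is low enough to see a flapping folder.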
Maybe best to internalize Have It Your Way like BK's teams did.
https://www.darkreading.com/vulnerabilities-threats/dark-rea...
https://iowacapitaldispatch.com/2023/06/23/lawsuit-over-auth...
1) The court found that the county sheriff had the pentesters arrested and encouraged their prosecution _not_ because he believed there was any crime, but because he was angry at some state official. (Which, y'know, sounds like a pretty serious civil rights violation.)
2) However, the civil rights / 4th amendment claims were dismissed by the federal court due to "qualified immunity", the doctrine where, in any sufficiently "unique" or "specific" situation, the police have no liability whatsoever for their actions [2].
[1] https://storage.courtlistener.com/recap/gov.uscourts.iasd.84... [2] https://en.wikipedia.org/wiki/Qualified_immunity
Darknet Diaries has an episode on this (#59), with interviews from the parties involved.
https://www.youtube.com/watch?v=Y0AbHKcIQxk
https://darknetdiaries.com/episode/59/
Hacking is hacking. If they wish to risk it, what's your problem?
They know the risks. Everyone knows hacking is illegal. Same with selling drugs; illegal yet folk do. Same premise. Get caught; no sympathy given.
"People may get hurt"? $country throw folk in to war; it's a harsh world we live in.
Bug bounties are only the new norm because the younger audience wants validation and compensation for their skills, or because companies are being cheap about ensuring security.
During my era of the internet, bug bounties were non-existent. You either got hired or you went to jail.
In my case I got fired from a bank for accidentally boasting that I could replace printer status messages with "Out of Ink - please insert more blood". Granted, I was 17.
Being banned from using any computer at school for discovering a DCOM exploit via Windows 98 Help resulted in being denied my IT GCSE and turned away from two colleges.
Or being doxxed by another hacker group for submitting their botnet to an AntiVirus firm. Good times, a living nightmare for my parents.
The point of bug bounties isn’t “validation” (as if old-school hackers didn’t want validation!), it’s that companies with responsible disclosure programs explicitly allow you to pentest them as long as you follow their guidelines. That removes the CFAA indictment risk. The guidelines generally aren’t much stricter than common sense (don’t publish user data, don’t hurt people, give them time to patch before publishing).
Unfortunately, the existence of bug bounties has made some people forget that hacking a company without an agreement in place is still a crime, and publishing evidence of crimes to a wide audience on the internet is a bad idea.
Most of what you’re saying just seems like nostalgia talking. Isn’t it better that hackers today have a way to find real vulnerabilities without going to jail?
But it didn't come across as a warning. "You need to stop" is a demand, not a warning. And I would like to believe they would know this when posting online. If not, /shrug.
Maybe they're working on behalf of an organization, a country that doesn't follow CFAA; Russia, China? Maybe they're state sponsored or under protection. They're obviously not stupid if they can infiltrate Fast-Food chains and social engineer others but I've been wrong before.
> is a bad idea
I would be surprised if they didn't. If not, okay well if shit hits the fan; no sympathy for me. Unlucky. They're doing it at their own risk.
> Isn’t it better that hackers today have a way to find real vulnerabilities without going to jail?
A double-edged sword. I personally wouldn't count them as hackers. They're not hacking; they're penetrating based on the T&C of an agreement. Yes, it could be called "ethical hacking", but I still wouldn't call it hacking.
A hacker is one who gains unauthorized access to a computer. Hacking isn't such when you're granted restricted access on the basis of T&C.
> Isn’t it better that hackers today have a way to find real vulnerabilities without going to jail?
I don't disagree, if that's your skill then go for it. It's the safest route allowing you to harness your skills, and which may provide future prospects. A dispensary selling drugs is better than the dealer on the corner of the street.
"To hack a bank" is different from "hacking a bank based on some agreement". One carries more weight than the other. You're penetrating a bank under an agreement. You're not hacking.
Bug bounty hunters too have faced jail, lawsuits, or threats, even when acting in good faith; it doesn't make you invulnerable.
I admire the persona of whoever this is; their actions highlight concerns for those of us who use such conveniences. It exposes the truth and tackles the issue at hand before others can exploit you because of it. It casts a negative light on corporations that many folk rely on daily.
The title on their blog, "Ethical Hacker", is, I would say, a suitable description. It's not like they're siphoning money off folk with ransomware.
> Most of what you’re saying just seems like nostalgia talking.
I was demonstrating, as someone who got in trouble over misunderstood computer mishaps as a teen back when, and also establishing my point that I know what I am talking about.
Yeah, it turned into a nostalgia trip. I'd call myself more of a script kiddie, and one I'd see as white-hat.
Black-hat can be interesting, but my moral compass has caught up with me, and my life has enough worth that it would be jeopardous to do such things; besides, I don't have the time, among other things.
The US Constitution? (lot of assumptions of locations here, insert your charter of freedoms/other guarantor of rights here if parent comment OP is not in the US)
[1] https://www.vice.com/en/article/this-is-the-hacking-investig...
I’m curious about the legal/reputational implications of this.
I personally found some embarrassing security vulnerabilities in a very high profile tech startup and followed responsible disclosure to their security team, but once I got invited to their HackerOne I saw they had only done a handful of payouts ever and they were all like $2k. I was able to do some pretty serious stuff with what I found and figured it was probably more like a $10k-$50k vuln, and I was pretty busy at the time so I just never did all the formal write up stuff they presumably wanted me to do (I had already sent them several highly detailed emails) because it wouldn’t be worth a measly $2k. Does that mean I can make a post like this?
The comments and headlines will be a bit snarkier, more likely to go viral - more likely to go national on a light news day, along with the human interest portion of not getting paid which everyone can relate to.
Bad PR move
So I legitimately don’t know what the legalities of writing a “here’s how I hacked HypeCo” article are if you don’t have the express approval to write that article from HypeCo. Though in my case the company did have an established, public disclosure program that told people they wouldn’t prosecute people who follow responsible disclosure. TFA seems even murkier because Burger King never said they wouldn’t press charges under the CFAA…
Branding it as "responsible" puts a thumb on the scale, implying that not coordinating with the vendor is somehow irresponsible.
So yes, anyone who discloses before the company has had a reasonable chance to fix things is indeed irresponsible.
I'm so sick and tired of some companies that any vulnerability I find in their products going forward will be an immediate public disclosure. It's either that or no disclosure, and it would be irresponsible not to disclose it at all.
Cracked a thrift store IoT medical device. Contacted vendor. They sent me a one way NDA. Lol no.
The platform knows my identity, publishing the details would be against their terms, there's an implied threat that they could take legal action against me if I published the details, and they even low-balled the severity to avoid paying out the appropriate amount. Awesome experience overall.
I'm not suggesting in this thread that coordinating with vendors is bad. I'm suggesting that to frame any non-coordinated disclosure as inherently irresponsible is bad, and that is what is implied when we use the label "responsible disclosure" for "coordinated disclosure".
Maybe things are better now.
Years ago the only contact for many companies was through customer service. "What do you mean you're in our computer? You're obviously on the phone!"
Doing the right thing can be awfully unpleasant.
Near the bottom of the blog post it says:
> When | What Happened
> Day 1, same day | RBI fixes everything faster than you can say "code red"
> Credit where it's due – RBI's response time was impressive.
That's not putting my thumb on the scale so much as shouting my opinion. The rebrand puts its thumb on the scale specifically because it avoids saying "we think non-coordinated disclosure is irresponsible"; it sneaks it in under the name change.
Even the most security-aware companies have a process to fix vulnerabilities, which takes time.
I would never hire someone that doesn't responsibly coordinate with the vendor. In most cases it's either malicious or shows a complete lack of good judgement.
In the case of bobdajrhacker? Both.
But I find that this case is rare. Typically it would be something like many of the following being met:
- It is likely to be discovered by an attacker soon.
- History shows that the company is unlikely to fix it soon.
- Users have some way to protect themselves.
- Your disclosure is likely to reach a significant number of users.
It seems pretty reasonable to publish, given that?
"Day 1, same day: RBI fixes everything faster than you can say "code red""
Burger King is almost certainly going to experience no damage from this.
Their takeaway will likely be entirely non-existent. They’ll fix these bugs, they’ll probably implement zero changes to their internal practices, nor will they suddenly decide to spin up a bug bounty.
“The signal isn’t to pay white hats more, instead…”
And perhaps an addendum such as:
“…which will then, indirectly and in the long run, create the signal you were replying to.”
Appreciate your clarification despite the bluntness of my reply.
Also, you’re probably right, the signal will likely pass right over Burger King’s crown.
The screenshot of the email lacks detail so I don't know what part of the DMCA the author breached here, but this feels a lot like your standard DMCA abuse.
This AI-generated takedown was funded in part by Y Combinator: https://cyble.com/press/cyble-recognized-among-ai-startups-f...
When dealing with a company whose business is filing dmca complaints using an automated system, the business model isn’t a lawsuit - it’s a settlement where the influencer is made whole and you get paid. The risk to the company is existential if you have enough clients using you to push back and risking them getting a platform ban or an injunction against them filing automated dmca complaints. Say they file a thousand complaints a day against a thousand YouTube channels. If even 50 of those channels file a counter claim it’s going to set off alarm bells.
All that being said the most toxic part of this is the company calling itself a cyber security company and trying to obfuscate seemingly pretty responsible disclosures using dmca.
There are basically zero consequences for whatever fuckups you do, thus no incentive for companies to pay for vulnerabilities.
To change that calculus, the chance of that future cost needs to go up and the amount of it also needs to go up. If the choice is between a $100k bug bounty now and a $10-million-dollar penalty for a security breach, people will bite the bullet and pay the bounty. If the CEO knows he will lose his house if its discovered that he dismissed the report and benefited financially from doing so, he will pay the bounty.
The consequences need to be shifted to the companies that play fast and loose with customer data.
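The calculus above reduces to a back-of-the-envelope expected-value comparison. A sketch, where the breach probability `p` is an assumed free parameter and the dollar figures are the hypothetical ones from the comment, not real data:

```python
def expected_breach_cost(p_breach: float, penalty: float) -> float:
    """Expected cost of ignoring a reported vulnerability:
    probability of a breach times the penalty if it happens."""
    return p_breach * penalty

# Hypothetical numbers: a $100k bounty vs a $10M breach penalty.
bounty = 100_000
penalty = 10_000_000

# Paying the bounty beats ignoring the report whenever the expected
# breach cost exceeds it -- here, at any breach probability above 1%.
for p in (0.005, 0.01, 0.05):
    ignore = expected_breach_cost(p, penalty)
    verdict = "pay bounty" if ignore > bounty else "ignore report"
    print(f"p={p:.3f}: expected cost of ignoring = ${ignore:,.0f} -> {verdict}")
```

The point of the comment stands out in the numbers: with no penalty at all (today's status quo), `ignore report` wins at every probability, which is exactly the incentive that needs shifting.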
I hope people invent AI bots which uncover vulnerabilities and make them available publicly for free, in real-time. This would create the right incentives for companies.
Modern software has become a giant house of cards, under the control of foreign powers who possess asymmetric knowledge. This is because our overarching legal system protects mediocrity, and this gives nefarious skilled people a massive upper hand, while hurting well-intentioned skilled people who try to build software the right way.
The nefarious skilled people don't need to ask for permission and don't need to convince anyone to make money from their schemes... Well-intentioned skilled people build products which are impossible to sell or monetize because nobody cares enough about security... Companies mostly externalize the consequences of vulnerabilities to their users and leverage market monopolies to keep them.
No. Just because there's a blog post about a fixed vulnerability doesn't imply that it's ok to write a blog post about an unfixed vulnerability.
I'm not saying it's wrong to post a blog post about an unfixed vulnerability. I'm just saying that the existence of a blog post about a fixed vulnerability has no impact on whether it's ok or not to post a blog post about an unfixed vulnerability.
I guess they could argue shouting into a machine in public carries no expectation of privacy, but it seems like a liability to me.
But (in some states), it seems that it would be a very different can of worms if I were to elect to deliberately record the conversation I have with my friend without their consent. Even in a public space, that would appear to run directly afoul of the applicable laws.
Secretly recording voices is a felony in many places in 'merica.
Legally there is no “reasonable expectation of privacy” in public spaces and the only limit on that are extreme telephoto lenses looking from public spaces into private spaces.
Edit: Another commenter has made me aware that some states do ban non-consensual audio recordings in public: https://www.dmlp.org/legal-guide/massachusetts-recording-law
The laws prohibiting these recordings have neither been upheld nor overturned by the US Supreme Court.
[1] https://www.dmlp.org/legal-guide/massachusetts-recording-law
If you have an obvious security camera, or an obvious camera that normally would record audio, and you’re in public waving it around and it records audio? You are not secretly recording audio.
Same if someone is standing next to an obvious and clearly visible security camera which normally could also record audio: also not secretly recording audio.
A hidden mic in your jacket, or like in that case, hiding the camera under a jacket? That is hidden recording.
The general rule of thumb is - if everyone can clearly see what you’re doing, it’s not secret.
It's related to wiretapping laws that are very broad.
What was here was a link to a California statute that is apparently misinformation somehow. Who knows, I'm just some ignorant redneck apparently.
You may have a smudge on your optics, mr. sniper.
This is not confidential communications.
If there is a big obvious security camera staring at you, in a public place, that is the opposite of an expectation of privacy.
How would you reconcile your statement against state laws that require all-party consent for audio recordings, e.g. CISA or FSCA?
In the USA, there is no right or legal expectation of privacy in public spaces, which includes fast food restaurants that are open to the public (indoors or outdoors)
Edit: Another commenter has made me aware that some states do ban non-consensual audio recordings in public: https://www.dmlp.org/legal-guide/massachusetts-recording-law
The laws prohibiting these recordings have neither been upheld nor overturned by the US Supreme Court.
You can want things to be black and white but litigators are going to argue.
Glik v. Cunniffe (1st Cir. 2011)
This was well known amongst the sort of people who regularly got harassed by police (in my circle of friends, riders of sportsbikes). There was well-known legal advice saying to record every interaction you had with police, and if it turned out badly in any way, as soon as you got home write down the transcript of the conversation as "contemporaneous notes" and email them to your gmail account to establish a timestamp. But the only time you ever even mentioned your recording would be to your lawyer, so that if the cop challenged or contradicted your notes in court, your lawyer could then offer the recording as evidence.
These days, dashcams are pretty ubiquitous, and demonstrate that whatever the legal technicalities are, video recording in public without consent is not only widespread, but dashcam footage is also something police regularly request from the public.
> Video recording is permitted without consent in the public places.
I have no idea why you would think that these two statements are related, or why people would continue this conversation. Fishing and skateboarding are two other things that are often allowed, but neither are related to recording audio.
And for anyone who thinks this is a nitpick, please look it up.
edit: also, saying that you can record people when you're obviously recording people is also not relevant. The problem is recording people without their knowledge or consent. I cannot put an audio recorder in my pocket in many places (such as Illinois) and record you, whether in a private space or in a public space. If I put my audio recorder on the table, and you can choose whether you want to speak or not, it's legally an entirely different scenario, whether we are in a public space or not.
That is not how wiretapping laws work in every state.
At least here in Argentina, clean bathrooms were a huge selling point in the 1990s for Burger King and McDonald's.
For example, you could go study at one of them with a few friends and be there for hours because they have clean bathrooms, and from time to time one of the employees might come by to offer a coffee refill and ask if you want to buy something to eat with the coffee. [The free coffee refill policy changes from time to time. I'm not sure it's in effect now.]
1. Jane, a security researcher, discovers a vulnerability in Acme Corporation's public-internet-facing website in a legal manner
2. Jane is a US resident and citizen
3. Acme Corporation is a US company
... is it legal for Jane to post publicly about the vulnerability with a proof of concept exploit?
Relatedly:
Why do security researchers privately inform companies of vulnerabilities and wait for them to patch before public disclosure? Are they afraid of liability?
Most of these things are best done across non-cooperative international borders, just to reduce the incentive for 'throw them in jail' as an easy ass-covering measure.
Sandvig v. Barr tempers that a bit, with the DoJ now offering some guidance around good faith endeavors around security research.
I'd suggest Jane have a good lawyer on retainer, and a few years to spend in the tied up the legal system.
Because if they don’t inform the company and wait for the fix, their disclosure would make it easier for less ethical hackers to abuse the vulnerability and do real material harm to the company’s users/customers/employees. And no company would ever want to collaborate with someone who thinks it’s ok to do that.
It’s not even really a matter of liability IMO, it’s just the right thing to do.
(main exception: if the company refuses to fix the issue or completely ignores it, sometimes researchers will disclose it after a certain period of time because at that point it’s in the public’s best interest to put pressure on the company to fix it even if it becomes easier for it to be exploited)
You don't publish because you don't want to cause harm and you don't want to be liable for it.
You need to realize that vulnerabilities don't exist in a vacuum. They grant access to computer systems that control the life of people (millions of people) including their personal information, passwords, passport photos, card numbers, jobs, paychecks, transportation, food, etc... which is very likely to cover yourself, your mom, your family, your friends as you deal with larger companies.
When you publish a vulnerability, it will immediately be used by bad actors that intend to cause harm to all these people, including employees and customers.
The story is really about two things. Their poor information security is pathetic, but their actual surveillance tech is genuinely kind of politically concerning. Even if it is technically legal, it's unethical to record conversations without consent.
Good news! With AI programming assistance, this invasive technology--with the concomitant terrible security--will be available to even the smallest business so long as nephews "who are good with computers and stuff" exist!
The hilarious sarcasm throughout was the cherry on top for me.
https://web.archive.org/web/20250906150322/https://bobdahack...
E.g. their trademarks being put in the public domain and assets confiscated to compensate their victims.
Then watch in amazement at how actual security suddenly becomes a priority.
While pretty egregious, this is sadly common. I'm certain there's a dozen other massive companies making similar mistakes.
58 more comments available on Hacker News