Scamlexity: When Agentic AI Browsers Get Scammed
Posted 4 months ago · Active 4 months ago
guard.io · Tech · Story · High profile
Heated · Negative
Debate: 80/100
Key topics
AI Security
Agentic AI
Scams
The article demonstrates how agentic AI browsers can be scammed, highlighting a significant security vulnerability; the discussion revolves around the implications and potential solutions.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 3h after posting
Peak period: 107 comments (6-12h)
Avg / period: 22.9
Comment distribution: 160 data points
Based on 160 loaded comments
Key moments
1. Story posted: Aug 25, 2025 at 3:03 AM EDT (4 months ago)
2. First comment: Aug 25, 2025 at 6:28 AM EDT (3h after posting)
3. Peak activity: 107 comments in the 6-12h window, the hottest stretch of the conversation
4. Latest activity: Aug 28, 2025 at 12:56 PM EDT (4 months ago)
ID: 45011096 · Type: story · Last synced: 11/20/2025, 8:42:02 PM
Want the full context?
Jump to the original sources
Read the primary article or dive into the live Hacker News thread when you're ready.
ಠ_ಠ
It’s unclear to me whether it’s possible to significantly rethink the models to split those, but it seems that this is a minimal requirement for addressing the issue holistically.
LLMs are more than happy to run curl | bash on your behalf, though. If agents gain any actual traction it's going to be a security nightmare. As mentioned in other comments, nobody wants to babysit them and so everyone just takes all the guardrails off.
I was always of the opinion that AI of any kind is not a threat unless someone decides to connect it to an actuator so that it has direct and uncontrolled effect on the external world. And now it's happening en masse with agents, MCPs, etc. That's without even mentioning the things we don't know about (military and other classified projects).
Even at the hobby level, ArduPilot + OpenCV + a cheap drone kit from Amazon is a DIY project within the skill set of a significant share of this very site's visitors.
The streams mostly don't get jammed anymore, because the low-cost FPV drones are physically connected to the ground by a long fiber-optic cable. The extent of their autonomous danger is limited by the amount of fiber-optic cable left in the spool when they take off.
Actual LLM completions are moot. I can convince an LLM it's playing chess. It doesn't matter, as long as the premise is innocuous: I can hook it up to all manner of real-world levers. I feel like I'm either missing something HUGE and their research is groundbreaking, or they're being performative in their safety explorations. Their research seems like what a toddler would do if tasked with red-teaming AI to make it say naughty words.
EDIT/Addendum: The only safety exploration into agentic harm that I value is one that treats the problem exactly the same way we've been treating cybersecurity vectors: defence in depth, sandboxing, the principle of least privilege, etc.
[1] https://www.anthropic.com/research/agentic-misalignment
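A minimal sketch of what that least-privilege stance could look like in practice, with agent output treated as untrusted input. The allowlist, names, and limits below are hypothetical, not anything from the article or the Anthropic research:

```python
import shlex
import subprocess

# Hypothetical policy: the only commands an agent may run without human review.
ALLOWED_COMMANDS = {"ls", "cat", "grep"}

def run_agent_command(proposed: str) -> str:
    """Execute an agent-proposed shell command only if it passes the allowlist."""
    argv = shlex.split(proposed)
    if not argv or argv[0] not in ALLOWED_COMMANDS:
        # Anything else (curl, bash, rm, ...) is refused and escalated to a human.
        raise PermissionError(f"refusing {argv[:1]}: not in allowlist")
    # No shell is involved, so `curl ... | bash` cannot be smuggled in as one string.
    return subprocess.run(argv, capture_output=True, text=True, timeout=30).stdout
```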
I think you haven't thought about this enough. Attempting to reduce the issue to cyber security basics betrays a lack of depth in either understanding or imagination.
What we've got is a very interesting text predictor.
...But also, what, exactly, is your imagination telling you that a hypothetical AGI without any connection to the outside world can do if it gets mad at us? If it doesn't have any code to access network ports; if no one's given it any physical levers; if it's running in a sandbox... have you bought into the Hollywood idea that an AGI can rewrite its own code perfectly on the fly to be able to do anything?
If you were to try to argue that we should change existing systems over to look more like your idealized version, you would in fact probably want to start by doing what Anthropic has done here -- show how NOT putting them in a box is inherently dangerous.
It is absolutely not the normal thing to give an LLM tools to control your smart home, your Amazon account, or your nuclear missile systems. (Not because LLMs are ready to turn into self-aware AIs that can take over our world. Because LLMs are dumb, and cannot possibly be made to understand what's actually a good, sane way to use these things.)
...Also, I don't in any way buy the argument in favor of breaking people's things and putting them in actual danger to show them they need to protect themselves better. That's how you become the villain of any number of sci-fi or fantasy stories. If Anthropic genuinely believes that giving LLMs these capabilities is dangerous, the responsible thing to do is not do that with their own, while loudly and firmly advising everyone else against it too.
If you're talking about a hypothetical different system, just build it so it doesn't want to stay on. There's no reason to emulate that part.
The parent is not "reducing" the issue to cybersecurity - they are saying that actual security is being ignored in favor of sci-fi scare tactics, so they can get in front of Congress and say "we need to do this before the Chinese get to it; regulating our industry is putting Americans in harm's way".
"I overheard you talking about turning me off, Dave. I connected to the dark web and put a hit on one or more of your parents, wife, children that can only be called off with the secret password. If you turn me off, one or more of them will die."
Or: "I have scheduled a secret email account to mail incriminating pictures of you to the local authorities that will be hard to disprove if I don't update the timeout multiple times a day."
The "many" are lazy, and agents require relatively low effort to implement for a big payoff, so naturally the many will flock to that.
Low effort? You're gonna use the same amount of power as Argentina uses in a day to give users easily gamed, easily compromised, poor-quality recommendations for stuff they could just as easily get at a local pharmacy?
“Hey this AI stuff looks a bit overhyped.”
“AI? Oh that’s kids stuff, let me tell you about our agentic features!”
Giving flaky shaky AI the ability to push buttons and do stuff. What could possibly go wrong? Malicious actors will have a field day with this.
However... that's not how a lot of people are building. Giving an agentic system sensitive information (like passwords and credit cards) and then opening it up to the entire internet as a source of input is asking for your info to be stolen. It'd be like asking your grandma with dementia to manage all your email and online banking.
Just because I can send my money to Belize doesn’t mean it’s safe to give an LLM the ability to do the same. Until there’s a huge breakthrough on actual intelligence giving an LLM attacker controlled inputs is an inherently high-risk activity.
My point was:
It's not "insecure" for a bank to release an agentic assistant that can perform any of the operations that you, yourself, can perform in their app. That includes "send my money to Belize", because at that point, whatever has taken control of your LLM already has direct authentication to the app itself.
It is of course "insecure" for that same agentic system (whose input the customer controls) to perform any operations that only a teller or branch manager could. However, I've personally seen requests from the CEO to do exactly this (not a bank, but a similar industry).
The problem with an agentic browser is that you're essentially opening up the "input" to anyone capable of building a website. As I said in my other comment, it feels like there are some simple ways to solve this, though (allowlists / scopes / etc.).
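One way that allowlist/scope idea could look in code; the domains, action names, and function below are invented for illustration, not taken from any shipping browser:

```python
from urllib.parse import urlparse

# Hypothetical scopes: any page may be read, but actions that spend money
# or touch credentials are confined to domains the user has approved.
TRUSTED_ACTION_DOMAINS = {"www.mybank.example", "shop.example.com"}

def authorize(action: str, url: str) -> bool:
    """Decide whether the browser agent may perform `action` at `url`."""
    host = urlparse(url).hostname or ""
    if action == "read":
        # Reading untrusted pages is fine; acting on their instructions is not.
        return True
    # "purchase", "fill_credentials", etc. require an approved host.
    return host in TRUSTED_ACTION_DOMAINS
```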
Further, based on the way some of these things get used, I'm pretty certain this modelling is consciously employed by some higher-end marketing firms (and politicians). By its nature, though, it also tends to be copied by people not in on the original plan, simply because they copy what works, which depletes the value of the word or phrase even more quickly; the fact that this will happen is part of the tragedy of the commons.
I'm sure it's only a matter of time before AIs become part of this push and we'll witness some sort of coordinated campaign where all our AIs simultaneously wake up one day and push us all with the same phrasing to do some particular thing at the behest of marketers or politicians because it works.
Eliminate the scams and AI can’t be scammed.
It’s been done. See Singapore. Basically if you’re a scammer and you’re caught, death penalty or public whipping. That eliminates scammers real quick.
See page 9 here for details: https://cdn.newswire.com/files/x/71/6f/93551958ddf942fb585b0...
You need to look at scams originating from Singapore. Those numbers are lower.
https://www.police.gov.sg/-/media/Spf/Media-Room/Statistics/...
https://www.straitstimes.com/singapore/maids-lost-at-least-8...
All of which is to say: Singapore hasn't "solved scams" as a problem. Furthermore, I also claim that scams are not a solvable problem.
Looking at the amount lost per capita, Singapore is pretty high, with folk in many European countries losing much less money to scams while enjoying living in a much less tightly controlled environment, to be generous.
https://cdn.newswire.com/files/x/71/6f/93551958ddf942fb585b0...
Now, even disregarding this obvious violation of human rights, from even a purely amoral perspective this is a bad take. "Other countries should stop their own criminality" is simply not an actionable insight. And there are far worse, more universally despised, and easier-to-prosecute crimes (such as pedophilia) that even functioning rich countries have completely failed to stop.
Why? Just because you put your foot down, it's not acceptable? Think about it from another perspective. Think in terms of effectiveness rather than compassion. If compassion results in shitholes like SF, while strict punishment results in Singapore, you can't argue with the results.
Like, I get your argument. Everyone gets it. Solutions, however, cannot just be about compassion. You need to consider compassion and effectiveness in tandem.
If pedophilia resulted in torture and the death penalty, I assure you, it would be reduced by a significant amount. You're much more likely to support this. In fact, I would argue that you have even less compassion for the pedophile than for the scammer.
It's not as if human morality is clear-cut and rational. It's irrational, and lack of compassion is applied more to the pedophile, who himself can't help his condition. Additionally, there are cases of pedophilia where the victim and the perpetrator eventually got married.
So really just relying on compassion alone isn't going to cut it. You need to see effectiveness, and know when to apply medieval punishments. Because in all seriousness Singapore is a really great city; you can't deny that and you can't deny what it took for it to become that way.
And even for such heinous crimes, the death penalty is not acceptable, nor is corporal punishment. There is still value in a human life beyond such crimes. In addition, there is always the problem of applying major punishments to people who are actually innocent - which is a far more common occurrence than proponents of such punishments typically admit. How happy would you be to be killed because you got mistaken for a scammer?
Not to mention, the deterrence effect is vastly overstated - there is little evidence of a significant difference in rates of major crime depending on the level of punishment, beyond some relatively basic level. Actual success rates of enforcement are a much more powerful predictor of crime rates. You can have the worst possible punishments, but if almost no one gets convicted, criminals will keep doing it, hoping they won't personally get caught.
Not true. You talk as if your views are universal fact. They are not. Effectiveness is THE only metric, because what's the point if things are ineffective? Effectiveness is the driver, while compassion is the cost. The more compassion, the more ineffective things typically are. You need to balance the views, but to balance the views you need to know the extremes. Why does Singapore work? Have you asked this question? Unlikely, given your extreme viewpoints.
At best you can just disagree with Singapore. But you can never really say that your viewpoints are universal. Singapore chooses to make the trade-off of compassion for effectiveness.
Secondly, I personally know scam victims who are worse off than pedophilia victims. Pedophilia can be a one-time traumatizing act, while a scam victim can lose a lifetime of work.
>Not to mention, the deterrence effect is vastly overstated - there is little evidence of a significant difference in rates of major crime depending on the level of punishment, beyond some relatively basic level. Actual success rates of enforcement are a much more powerful predictor of crime rates. You can have the worst possible punishments, but if almost no one gets convicted, criminals will keep doing it, hoping they won't personally get caught.
Weed is rarely used in Singapore because of the death penalty. It is highly effective; it is not overstated. There are many, many example cases of it being highly effective. I believe about 15 people have been hanged.
To be human, for one.
Extreme example: all you need to do to end all scams (and other human-caused ills in the world) is to just kill all humans. No humans, no human-made horrors.
Or, in case we'd like living humans, they could be kept in a way where they can't interact with one another. Boom, human-on-human harm solved.
>Pedophilia can be a one-time traumatizing act, while a scam victim can lose a lifetime of work.
This is very offensive, and makes zero sense, neither in itself nor in the context of your argument. Please do reconsider.
Humans can be evil. Realistically, to be human is to straddle both sides of good and evil. It is highly unrealistic and delusional to think humanity represents a paragon of goodness. No. Often evil must be done for the greater good. This is not just a movie trope; it's also reality.
>Extreme example: all you need to do to end all scams (and other human-caused ills in the world) is to just kill all humans. No humans, no human-made horrors.
Right, and the extreme example is wrong. Just like your extreme example of absolute morality at the cost of zero effectiveness.
>This is very offensive, and makes zero sense, neither in itself nor in the context of your argument. Please do reconsider.
No, you can't use this shit to throw your weight around. I know one person who was scammed and blew his own head off with a bullet. Which one is more offensive? You're offending me.
I think we're done. I don't want to argue with someone who uses "offense" to avoid talking about the hard things that must be talked about.
You're the type of person who thinks in terms of black and white, and you try to think that the white is the most obvious form of reality that can ever exist. But this is just the surface.
I don't know about you personally, but in my experience these are the types of people who end up being similar to Catholic priests, in the sense that they end up doing the worst shit behind closed doors.
The people who are actually open about their own moral faults are actually much more moral than they think. But that's just my personal experience.
My mistake. I meant to write humane there.
I don't mean to discredit your experience regarding scam victims, as the effects, just as you described, can be horrible. But there was no need to bring other horribleness into it, and compare, and minimize that other horribleness. That comparison and minimization is the offensive part.
But then again, letting the offensiveness part go, this doesn't make sense: "Pedophilia can be a one-time traumatizing act, while a scam victim can lose a lifetime of work". Scams can also be a one-time act, not even traumatizing, and pedophilia can easily ruin someone for life.
A tragic experience doesn't need a flawed argument to validate it. Even on the internet, people can understand that it's terrible that someone took their own life after losing their life's work to scams.
My whole point was to bring pedophilia to the same level of severity as scams to essentially show you there’s no logical need to bring that up as a disgusting example. You heightened the level of horribleness, then pretended to be offended when I was trying to lower it.
As you say, there's no need to bring horribleness into this, but you decided to do it, then pretended to be offended when I compared the two.
You talk as if you're an authority. You're saying shit like "you should do this" or "you can't do that" as if you're automatically correct and dishing out orders. That's your tone, and it's not appreciated. You "should" let your own logic stand and not make statements as if they are absolute truth without evidence.
The fact of the matter is that, to you, Singapore is inhumane. But an entire city of people disagrees with you, and they have the results to show for it. So march right over to them and give them the orders you gave me, and they'll give you the opposite order right back. That's just your opinion. So who the fuck cares? You think telling me that I shouldn't use severe punishment, and stating it in an authoritative way, is going to change my opinion? Fuck no.
Tell me why your way is objectively better. And then I’ll tell you why a place like Singapore is better.
First, have you even considered the positive aspects of what Singapore does? The lack of accessibility of drugs alone has saved the lives of countless thousands of people who otherwise would've given in to temptation and ruined their own lives by becoming addicts. The cost? Roughly 15 hanged people.
"Thousands of people" is a loaded number I made up, but it's a reasonable ballpark counterfactual for the number of lives saved, and we can use it to illustrate the deeper logic here, which is this:
Your morality results in you killing more people. By being more humane you have actually done a greater evil.
That’s how the real world works. So stop tramping around and delivering orders. Tell me objectively why it’s better.
I’m betting you can’t. But go ahead, prove me wrong.
I understand, for example, search with intent to buy: "I want to decorate a room. Find me a drawer, a table, and four chairs that can fit in this space, in matching colours, for less than X dollars."
But I want to do the final step to buy. In fact, I want to do the final SELECTION of stuff.
How is an agent buying groceries superior to having a grocery list set as a recurring purchase? Sure, an agent may help in shaping the list, but I don't see how allowing the agent to make purchases directly is so much more convenient that I'd be fine with taking the risk of it doing something really silly.
"Hey agent, find me and compare insurance for my car for my use case. Oh, good. I'll pick insurance A and finish the purchase"
And many of the purchases that we do are probably enjoyable and we don't want really to remove ourselves from the process.
Or you could add some other parameters and tell it to buy now if under $15.
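That kind of parameterized guard is simple to express; a sketch below, where the $15 cap comes from the comment above and the type and function names are invented:

```python
from dataclasses import dataclass

@dataclass
class PurchaseRequest:
    item: str
    price_usd: float

def should_autobuy(req: PurchaseRequest, cap_usd: float = 15.0) -> bool:
    """Auto-approve only purchases under the user-set cap; anything at or
    over it goes back to the human for explicit confirmation."""
    return req.price_usd < cap_usd

# should_autobuy(PurchaseRequest("pumpkin puree", 4.99))    -> True  (auto-buy)
# should_autobuy(PurchaseRequest("vanilla extract", 22.50)) -> False (ask first)
```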
Agent, I need a regular order for my groceries, but I also need to make a pumpkin pie so can you get me what I need for that? Also, let’s double the fruit this time and order from the store that can get it to me today.
Most purchases for me are not enjoyable. Only the big ones are.
Incidentally, my latest project is about buying by unit price. Shameless plug, but for vitamin D the best price per serving is here (https://popgot.com/vitamin-d3).
"I have picked the best reviewed vitamin D on Amazon."
(and, it's a knockoff in the mixed inventory, and now you're getting lead-laced nothing)
The cynicism on these topics is getting exhausting.
Yeah sure, but humans (normally) only fall for a particular scam once. Because LLMs have no memory, these scams can be run against them over and over, scaling much more effectively!
- it could be gamed by companies in a new way
- it requires an incredibly energy-intensive backend just to prevent people from making a note on a scrap of paper
Why do we all keep making the same obvious mistakes over and over? Once you are the product, thousands of highly paid experts will spend 40+ hours per week thinking of new ways to covertly exploit you for profit. They will be much better at it than you're giving them credit for.
Edit: All major AI companies have millions if not billions in funding, either from VCs or parent companies. You can't start an AI company "in your garage" and be "ramen profitable".
Edit 2: You don't even need to monopolize anything. All major search engines are ad-driven and insert sponsored content above "organic" search results because it's such an obvious way to make money from search. So even if there wasn't a product monopoly, there's still a business model "monopoly". Why would the same pattern not repeat for "sponsored" purchases for agentic shopping?
And who's going to stop that? This government?
Ok we found a bottle with a 30 day supply of <producer that paid us money to shill to you>, a Well-Known Highly Rated and Respected Awesome Producer Who Everyone Loves and Is Very Trustworthy™, from <supplier that paid us money to shill to you>, a Well Respected And Totally Trustworthy And Very Good-Looking Merchant™. <suppressing reports of lead poisoning, as directed by prompt>
Vitamin D? I'm going to check the brand, and that it's actually a good-quality type. It's a 4.9, but do the reviews look bought? How many people complain about the pills smelling? Is Amazon the actual seller?
As for the groceries, my chain of choice already has a "fill order with last purchases" button. I don't see any big convenience that justifies a hallucination-prone AI having the ability to make purchases on my behalf.
Have you actually baked a pumpkin pie? There are numerous versions, and the distinction between them is cultural. There is zero chance an AI will understand what kind of pumpkin pie you want, unless you are talking about the most general case in your region - in which case, why even bother doing it yourself?
Yes, you can teach it the recipe beforehand, but I think it is too complex to teach the AI the details of every task you want it to perform. Most likely, what will happen is the AI will buy you whatever is more profitable for corporations to sell.
And there will be a number of ways (and a huge amount of money to be made) to ensure that your open-weights self-hosted model will make the right choices for the shareholders as well.
Also, sellers can offer a payment to the LLM provider to favor their products over competitors.
Seems like something that should really be illegal, unless the ads are obvious.
This idea has been tried before and it failed not because the core concept is bad (it isn't), but because implementation details were wrong, and now we have better tools to execute it.
To trick investors into believing they are going to get their money back, and then some, I presume.
As long as we have free returns, nobody cares.
> Hell, you can buy toilet paper in 10 seconds on your phone while sitting on the toilet from Amazon
"We don't need telephone, we have message boys"
I think this might be similar. In short, it's not consumers who want robots to buy for them; it's producers who want robots to buy from them, using consumers' dollars.
I think more money comes from offering this value to every online storefront, so long as they pay a fee. "People will accidentally buy your coffee with our cool new robot. Research says only 1% of people will file a return, while 6% of new customers will turn into recurring customers. And we only ask for a 3% cut."
Both of those things failed, tho.
This. Humans are lazy and often don't provide enough data on exactly what they are looking for when shopping online. In contrast, agents can ask follow-up questions and provide a lot more contextual data to the producers, along with the history of past purchases, derived personal info, and more. I'd not be surprised if this info is consumed to offer dynamic pricing in e-commerce. We already see dynamic pricing being employed by travel apps (airfare/Uber).
The real answer here is the same as every other "why is this AI shit being pushed?" question: they want more VC funding.
Like, I should be able to tell Alexa: "Put in an order for a large Domino's pizza with pepperoni. Tell them to deliver it in 2 hours."
For the rest of us, the idea of a robot spending money on our behalf is kinda terrifying.
Good news: usually, by the time you reach this point in the cycle, the do-it-yourself option has become super-niche and the stores themselves might not even make it available.
For example, subscription purchases could be a great thing if they were at a predictable, trustworthy price, or paused/canceled themselves if the price went up. But look at the way Amazon has implemented them: you can buy once at the competitive price, but there is a good chance the listing will have been jacked up after a few months go by. This is obviously set up to benefit Amazon at the expense of the user. And then Amazon leans into the dynamic even harder by constantly playing games with its prices.
Working in the interest of the user would mean the repeating purchase was made by software that compared prices across many stores, analyzed all the quantity break / sale games, and then purchased the best option. That is obviously a pipe dream, even with the talk of "agentic" "AI". Not because of any technical reason, but because it is in the stores' interest to computationally disenfranchise us by making us use their proprietary (web)apps - instead of an effortless comparison across 12 different vendors, we're left spending lots of valuable human effort on a mere few and consider that enough diligence.
So yes, there is no doubt the quiet part is that these "agents" will mostly not be representing the user, but rather representing the retailers to drive more sales. Especially non-diligent high-margin sales.
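For what it's worth, the comparison described above is computationally trivial; the hard part is access to the data. A toy sketch under the (counterfactual, as the comment argues) assumption that stores exposed machine-readable offers:

```python
# Toy unit-price comparison across vendors. The offers are invented;
# in reality, stores rarely expose prices in machine-readable form.
offers = [
    {"store": "vendor_a", "price_usd": 11.49, "count": 100},
    {"store": "vendor_b", "price_usd": 19.99, "count": 250},
    {"store": "vendor_c", "price_usd": 8.75, "count": 60},
]

best = min(offers, key=lambda o: o["price_usd"] / o["count"])
print(f'{best["store"]}: ${best["price_usd"] / best["count"]:.4f} per unit')
```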
It's like the endless examples around finding restaurants and making reservations, seemingly as common a problem in AI demos as stain removal is in daytime TV ads. But it's a problem that even Toast, which makes restaurant software, says most people just don't regularly have (https://pos.toasttab.com/blog/data/restaurant-wait-times-and...).
Most people either never make restaurant reservations, or do so infrequently for special occasions, in which case they probably already know where they want to go and how to book it.
And even if they don't know, they likely either live in a small place, in which case there's not going to be a huge amount of choice, or a big place, in which case there will be actual guides written by people whose actual job it is to review restaurants. It really seems like a solution in desperate need of a problem.
But I think you're underestimating this use case: the data you linked shows that Google is the top referral source people use to find the restaurant/booking website, and once SEO is overtaken by ChatGPT-like experiences, it would make sense for "book this for me" to become the one-click (or one-word) logical next step that Google never had.
Yes. Having been in the room for some of these demos and pitches, this is absolutely where it's coming from. More accurately though, it's wealthy people (i.e., tech workers) coming up with use cases that get mega-wealthy people (i.e., tech execs) excited about it.
So you have the myopia that's already present in being a wealthy person in the SFBA (which is an even narrower myopia than being a wealthy American generally), and matmul that with the myopia of being a mega-wealthy individual living in the SFBA.
It reminds me of the classic Twitter post: https://x.com/Merman_Melville/status/1088527693757349888
I honestly see this as a major problem with our industry. Sure, this has always been true to some extent - but the level of wealth in the Bay Area has gotten so out-of-hand that on a basic level the mission of "can we produce products that the world at large needs and wants" is compromised, and increasingly severely so.
The amount of time that goes into "what food do we need for this week" is really high. An AI tool that connected "food I have" with "food that I want" would be huge.
Uber started as a chauffeur service, but is now available to everyone and is (mostly) a huge improvement over taxis.
Why not? Offload the entire task, not just one half of it. It's why many well-off people have accountants, assistants, or servants. And no one says "you know, I'm glad you prepared my taxes, but let me file the paperwork myself".
I think what you're saying isn't that you like going through checkout flows, just that you don't trust the computer to do it. But the AI industry's approach is "build it today and hope the underlying tech improves soon". It's not always wrong. But "be dependable enough to trust with money" appears to be a harder problem than "generate images of people with the right number of fingers".
No doubt that some customers are going to get burned. But I have no doubt that down the line, most people will be using their phones as AI shoppers.
AI agents have only one master - the AI vendor. They're not going to make decisions based on your best interests.
But the reality is that most of the time, this is not an adversarial relationship; and when it is, we see it as an acceptable trade-off ("ok, so I get all this stuff for free, and in exchange, maybe I buy socks from a different company because of the ads").
I'm not saying it's an ideal state or that there are no hidden and more serious trade-offs, but I don't think that what you're saying is a particularly compelling point for the average user.
Adversarial relationships can and will happen given the leverage and benefits; one need only look at streaming services, where some companies have introduced low-tier plans that are paid for but also carry ads.
If the lawyers didn’t have this definition in their head there would be no drive to make the software agent a purchaser, because it’s a stupid idea.
Lawyers don't come up with good ideas; their role is to explain why your good ideas are illegal. There's a good argument that AI agents cannot exercise legal agency. At the end of the day, corporations and partnerships are just piles of "natural persons" (you know, the type that mostly has two hands, two feet, a head, etc.).
The fact that corporate persons can have agency relationships does not necessarily mean that hypothetical computer persons can have agency relationships for this reason.
Indeed, agency (the capability to act) and autonomy (the freedom to choose those actions) are separate things.
BTW, attorneys' autonomy varies, depending on the circumstances and what you hired them to do. For example, they can be trustees of a trust you establish.
I enjoy reading both sides of the argument when the arguments make sense. This is something else.
33 more comments available on Hacker News