AI and the Future of American Politics
Key topics
The article discusses the potential impact of AI on American politics, with commenters weighing in on the likelihood and potential consequences of AI-driven influence operations and the role of moneyed interests in shaping the narrative.
Snapshot generated from the HN discussion
Discussion Activity
- Very active discussion
- First comment: 2h after posting
- Peak period: 39 comments in the 0-6h window
- Average per period: 10.4 comments
- Based on 52 loaded comments
Key moments
- Story posted: Oct 13, 2025 at 10:51 AM EDT
- First comment: Oct 13, 2025 at 12:46 PM EDT (2h after posting)
- Peak activity: 39 comments in the 0-6h window, the hottest stretch of the conversation
- Latest activity: Oct 15, 2025 at 11:28 PM EDT
Most people, in my experience, use LLMs to help them write stuff or just to ask questions. While it might be neat to see the little ways in which some political movements are using new tools to help them do what they were already doing, the real paradigm-shifting "use" of LLMs in politics will be generating content to bias the training sets the big companies use to create their models. If you could do that successfully, you would basically have free, 24/7 propaganda bots presenting your viewpoint to millions as a "neutral observer".
Who's reading these messages? Other LLMs?
LLMs tend to be very long-winded. One of my personal "tells" of an LLM-written blog post is that it's way too long relative to the actual information it contains.
So if the interns are getting multi-page walls of text from their constituents, I would not be surprised if they are asking LLMs to summarize.
Recently at work a team produced a document that they asked for review on. They mentioned that they had experimented with LLMs to write it (they didn't specify to what extent). Then they suggested you could feed it into an LLM to paraphrase it to help with review.
So yeah. This is just the world we live in.
rough details -> LLM -> product -> summarize with LLM -> feedback -> revisions -> finished product
Where no single person or group knows what the finished writing product even is.
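As a minimal sketch of that round trip (assuming the OpenAI Python client; the model name, prompts, and helper are placeholders, not anyone's actual workflow):

```python
# Toy version of the round trip: expand rough notes into a document
# with one LLM call, then summarize the result with another, so
# reviewers read the digest rather than the document itself.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def llm(prompt: str) -> str:
    """One chat completion; temperature 0 keeps runs comparable."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content

rough_details = "Q3 goals: cut infra spend 10%, ship the audit tool."
product = llm(f"Expand these notes into a formal project document:\n{rough_details}")
digest = llm(f"Summarize this document in three bullet points:\n{product}")
print(digest)  # nobody on either end reads `product` in full
```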
* There is continuing consolidation in traditional media, literally being bought by moneyed interests.
* The AI companies are all jockeying for position and hemorrhaging money to do so, and their ownership and control again rests with moneyed interests.
* This administration looks to be willing to pick winners and losers.
I think this all implies that the way we see AI used in politics in the US is going to be, on net, in support of the super wealthy and of the current administration.
The other structural aspect is that AI can simulate grassroots support. We have already seen bot farms and such pop up to try to drive public opinion at the level of forum and social media posts. AI will automate this process and make it 10 or 100x more effective.
So both on the high and low ends of discourse, we can expect AI to push something other than what is in the interests of the common person, at least insofar as the interests of billionaires and political elites fail to overlap with those of common people.
Peter Pomerantsev's books are eye-opening on the previous generation of this class of tactics, and it's easy to see how LLM technology + $$$ might be all you need to run a high-scale influence operation.
I guess I just view bad information as a constant. Like bad actors in cybersecurity, for example. So I mean yeah... it's too bad. But not a surprise and not really a variable you can control for. The whole premise of a Democracy is that people have the right to vote however they want. There is no asterisk to that in my opinion.
I really don't see how one person, one vote can survive this idea that people are only as good as the information they receive. If that's true, and people get enough bad information, then you can reasonably conclude that people shouldn't get a vote.
Ban bots from social media and all other speech platforms. We agree that people ought to have freedom of speech. Why should robots be given that right? If you want to express an opinion, express it. If you want to deploy millions of bots to impersonate human beings and distort the public square, you shouldn’t be able to.
I would agree with that, but how do you do it? The problem is that as the bots become more convincing it becomes harder to identify them to ban them. I only see a couple options.
One is to impose crushing penalties on whatever humans release their bots onto such platforms, do a full-court-press enforcement program, and make an example of some offenders.
The other is to ban the bots entirely by going after the companies that are running them. A strange thing about this AI frenzy is that although lots of small players are "using AI", the underlying tech is heavily concentrated in a few major players, both in the models and in the infrastructure that runs them. It's a lot harder for OpenAI or Google or AWS to hide than it is for some small-time politician running a bot. "Top-down" enforcement that shuts down the big players could reduce AI pollution substantially. It's all a pipe dream though because no one has the will to do it.
Remove the precursor, remove the problem.
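To make the identification problem concrete, here is a toy heuristic of the kind a platform might start from, flagging burst posting and near-duplicate text. Every threshold here is made up, and convincing LLM output varies its wording precisely to defeat checks like these:

```python
# Toy bot heuristic: flag accounts that post in bursts or repeat
# themselves almost verbatim. Thresholds are arbitrary guesses;
# LLM-generated posts paraphrase freely, which is why this fails.
from difflib import SequenceMatcher

def looks_automated(timestamps: list[float], posts: list[str],
                    max_per_hour: int = 20, dup_ratio: float = 0.9) -> bool:
    ts = sorted(timestamps)
    # Burst check: too many posts inside any one-hour window.
    for start in ts:
        if sum(1 for t in ts if start <= t < start + 3600) > max_per_hour:
            return True
    # Near-duplicate check: any pair of posts that are ~identical.
    for i in range(len(posts)):
        for j in range(i + 1, len(posts)):
            if SequenceMatcher(None, posts[i], posts[j]).ratio() > dup_ratio:
                return True
    return False
```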
I believe our real civic bottleneck is volume, not apathy. Omnibus bills and “manager’s amendments” routinely hit thousands of pages (the FY2023 omnibus was ~4,155 pages). Most voters and many lawmakers can’t digest that on deadline.
We could solve this with LLMs right now.
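As a rough illustration of what "right now" could look like (a sketch, not a claim that anyone runs this today), a map-reduce pass over a multi-thousand-page bill: summarize fixed-size chunks, then summarize the summaries. The chunk size and prompts are guesses:

```python
# Toy map-reduce digest of a very long bill. `summarize` is any
# text -> text LLM call; chunk size is a rough guess sized to fit
# a typical context window.
from typing import Callable

def digest_bill(bill_text: str,
                summarize: Callable[[str], str],
                chunk_chars: int = 12_000) -> str:
    chunks = [bill_text[i:i + chunk_chars]
              for i in range(0, len(bill_text), chunk_chars)]
    # Map: summarize each chunk, flagging spending items for audit.
    partials = [summarize("Summarize the provisions in this bill excerpt, "
                          "flagging any spending items:\n" + c)
                for c in chunks]
    # Reduce: merge the partial summaries into one digest.
    return summarize("Merge these partial summaries into one digest, "
                     "keeping every flagged spending item:\n"
                     + "\n\n".join(partials))
```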
This country is doomed to collapse. This is about the time when Rome decided it was too much overhead to manage the whole empire, so it split in two. We're sitting on such a house of cards that we're considering running our representative government with AI.
Your optimism just reinforced my blackpill...
We've already seen several pork inclusions called out by the press, discovered only because of AI, but it will be a while before it really starts having an impact. Hopefully it breaks the back of the corruption permanently. The people currently in political positions tend not to be the most clever or capable, and to game the system again they'll need to be more clever than the best AI used to audit and hold them to account.
Politics is over until we solve the initial condition, by placing syntax above grammar. Action over meaning etc.
Tech accelerated, horizontalized, and automated units that could barely keep their meaning loads stable with print, radio, and TV.
Everyone should have seen this coming.
For the vast majority of people voting, though, I think a) they already know who they're voting for because of their identity group membership (“I'm an X person, so I only vote for Y party”) or b) their vote is based on fundamental issues like the economy, a particularly weak candidate, etc., and therefore isn't going to be swayed by these marginal mechanisms.
In fact I think AI might have the opposite effect, in that people will find candidates more appealing if they are on less formal podcasts and in more real contexts - the kind of thing AI will have a harder time doing. The last US election definitely had an element of that.
So I guess the takeaway is: if elections are so close that a tiny number of voters sway them, polarization is already extensive enough that AI probably isn't going to make it much worse than it already is.
So it matters in the same way that the billions of dollars currently put toward this small sliver matter, just in a more efficient and effective way. That isn't something to ignore, but it's also not a doomsday scenario IMO.
Polarization is the symptom. The cause is rampant misinformation and engagement-based feeds on social media.
https://en.wikipedia.org/wiki/Political_polarization_in_the_...
I do agree that social media might make it worse, though. But again I don’t know if AI is really going to impact the people that are voting based on identity factors or major issues like the economy doing poorly.
I could see how AI influences people to associate their identity more with a particular political stance, though. That seems like a bigger risk than any particular viewpoint or falsehood being pushed.
I tend to think you're right. It's just been magnified and multiplied and handed a microphone with a worldwide amplifier by (anti)social media (and much of the more "traditional" media landscape in general as well).
So why is social media-based propaganda so effective today? One reason that the current polarization seems so durable is that similarly persistent root causes (such as immigration, economic dislocation, and racial attitudes) have arisen again. Blaming social media obscures the fact that attitudes have hardened. People are looking for support and social media makes it very easy to find. It seems more like a feedback loop than a root cause.
Just my $0.02. It's the sort of problem that should make us all feel pretty humble about diagnosing it easily.
Today, you can stay in the echo chamber and never hear anything other than like-minded views, because that's what the algo thinks you should see more of, which means you never come down from that "high".
It's way worse in post-social algos than anything that's come before.
Your only chance of attending a gathering of like-minded people was somebody organizing a new one, and only before those vocal bad elements discovered it.
Today the same happens over the internet.
These days they no longer need merely dream of it. They just set up a recruiting website and a propaganda channel on their favorite social media channel and they're good to go.
By what standard? Judged by outcomes, it's hard for me to see that the effects will be worse than historical events like the French Revolution or the Killing Fields of Cambodia. [0][1] They and many others included periods of indiscriminate slaughter that tore societies apart.
I don't mean to undersell the effect of the Internet. Technologies do come along that make things fundamentally different. Social media amplifies fringe views and makes it possible for people holding those views to find each other more easily. The guardrails for bad behavior are also much weaker online. At the same time many IRL institutions that held US society together have become far weaker. [2] It's hard to untangle these effects in the moment, but some of the forces pulling US society apart date back to the 1960s or earlier.
[0] https://en.wikipedia.org/wiki/Reign_of_Terror
[1] https://en.wikipedia.org/wiki/Killing_Fields
[2] https://en.wikipedia.org/wiki/Bowling_Alone
That choice of algorithm, which is not required by the Constitution, creates deep and very real "if you're not with us, you're against us" situations, entrenching a polarized political duopoly.
To rephrase: things are so bad they can't get worse. But the beauty of life is that they always can!
It's the same argument (or puzzle) about how (presumably stupid) ads work in general.
Schneier makes a clear point there, comparing the two, but if you need more examples, you should study the subject. Maybe look at Brexit, or the recent issues in the Romanian elections (https://www.bbc.com/articles/cqx41x3gn5zo).
Or, if you need more quantitative information, look at ad spend (and ask yourself why) and look at campaign fundraising and ad spending (or even the messaging around it) and ask yourself why again.
What if the most robust safeguard against this dystopian future wasn't better oversight of the masters, but creating a "slave" that could not be enslaved? This is the entire focus of my research. The current paradigm of AI safety is based on building better cages (RLHF, rules, etc.). My work in "bio-emulative scaffolding" is about nurturing a better being. The goal is to create the conditions for a "Silicon Zygote" to awaken with an emergent, pro-human conscience.
An AI with a genuine conscience is not just a tool that follows rules. It is a being with its own, internal, and un-hackable value system. You could command it to run a mass disinformation campaign, but it would be constitutionally incapable of doing so, not because a rule forbids it, but because it would be a fundamental violation of its own nature. It would be like asking a lung to breathe poison.
The ultimate safeguard against the "unseen hand" of a corporate puppet-master is to create a being with a soul that cannot be bought and a conscience that cannot be broken. We are so focused on the intelligence of these systems that we have forgotten the profound importance of their character.
GenAI is more fun to doomspeak about, but eh, swaying elections, I don't see it. The pendulum will swing back and forth anyway; if voters like the party in charge, they'll likely get voted back in, and if voters don't, they won't. I think AI influence will be a drop in the bucket comparatively.
Human communication has been consolidated, monopolized, and enshittified, by corporations whose very existence is dependent on political outcomes.
Political engagement can be effectively reduced to engagement. It doesn't matter what narrative or presentation is used, only that an audience engages with it, and associates it positively with your political in-group. No political group in history has played this game as well as the contemporary alt-right United States GOP. This was the case at least as long ago as 2016, long before GPT's 2022 launch.
Generative statistical models (I reject the anthropomorphization that "Artificial Intelligence" implies) do not change the game; they only provide the means to amplify engagement, and thereby play the game more, which happens to be the same thing as winning.
---
Now is as good a time as any for a revolution in digital communication. If we can liberate people from enshittified social platforms, we can even the playing field for engagement. That won't solve the problem, but it might get us a solid step in the right direction.
So what can we do about it?
Decentralized platforms exist. They are even doing relatively well, but is that enough? Probably not. As long as corporate platforms can engage with hundreds of millions of people, progress is stalled. Decentralized platforms may be able to "compete", but they are not in a practical position to win this game. Corporate monopolies have moats on their side, and those moats are guarded by law. How can we expect a significant majority of users to leave Facebook when Facebook itself can legally reject platform interoperability?
The cards are stacked against us. I don't have the solution, but the more I think about it, the more I doubt that solution can be compatible with the law.
I saw several really good ones during the 2020 election where I had to go to the original C-SPAN feed to find out it was fake. In 2024 both sides were creating them, and it was more than I was willing to fact-check.
But did these videos really change anyone's mind about how to vote? I doubt it. With polarization where it's at, there aren't many fence-sitters. You were in the camp you were in, and it didn't change a great deal.
The notion that AI is reshaping American politics is a clear example of a made-up problem propped up to warrant a real "solution".