'Under Tremendous Pressure': Newsom Vetoes Long-Awaited AI Chatbot Bill
Posted 3 months ago · Active 3 months ago
sfgate.com · Other · story
skeptical · mixed
Debate
40/100
Key topics
AI Regulation
California Politics
Technology Governance
California Governor Gavin Newsom vetoed a bill aimed at regulating AI chatbots, sparking debate among commenters about the implications for technology governance and the role of government in regulating emerging technologies.
Snapshot generated from the HN discussion
Discussion Activity
Light discussion
First comment: 15m
Peak period: 3 comments in 0-1h
Avg / period: 1.7
Key moments
- 01 Story posted: Oct 14, 2025 at 6:40 PM EDT (3 months ago)
- 02 First comment: Oct 14, 2025 at 6:55 PM EDT (15m after posting)
- 03 Peak activity: 3 comments in 0-1h (hottest window of the conversation)
- 04 Latest activity: Oct 14, 2025 at 10:28 PM EDT (3 months ago)
ID: 45585901 · Type: story · Last synced: 11/20/2025, 4:02:13 PM
To a lot of people here, that's a feature. I don't think so. It would put California minors at a huge economic disadvantage compared with kids in other places. One state can't put AI back in the box. I think California has the right to run that experiment, but Newsom made a wise choice in stopping it.
This feels like conjecture. Can't we just as easily reason that kids with access to AI become complacent and reliant on non-authoritative sources?
I think we need a proper A/B test before we conclude these things for certain.
This "for the children" law would be widely flouted.
> is not foreseeably capable
I'm not sure how much consistency there is in "foreseeably" when it comes to LLMs these days. Even among programmers, let alone the general public.
> 22757.22.(a)(5) [It may not foreseeably be capable of:] Prioritizing validation of the user’s beliefs, preferences, or desires over factual accuracy or the child’s safety.
So if a kid says "I like chocolate", and it says "Everybody does, it's yummy", isn't that technically a violation? How should a court rule if a lawsuit occurs?
[0] https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml...