Why AI Code Fails Differently: What I Learned Talking to 200 Engineering Teams
I kept hearing the same pattern: some teams are shipping 10-15 AI PRs daily without issues. Others tried once, broke production, and gave up entirely.
The difference wasn't what I expected: it wasn't about model choice or prompt engineering.
---
One team shipped an AI-generated PR that took down their checkout flow.
Their tests and CI passed, but the AI had "optimized" their payment processing by changing `queueAnalyticsEvent()` to a synchronous `analytics.track()` call. The analytics call has a 2-second timeout, so when the analytics service is slow, payment processing waits on it and times out.
In prod, under real load, 95th-percentile latency went from 200ms to 8 seconds. They ended up with 3 hours of downtime and $50k in lost revenue.
Everyone on that team knew you queue analytics events asynchronously, but that wasn't documented anywhere. It's just something they learned when analytics had an outage years ago.
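The post only names those two calls; a minimal sketch of the difference, with hypothetical signatures, looks roughly like this:

```typescript
// Sketch of the two call patterns, assuming hypothetical signatures; only the
// names queueAnalyticsEvent() and analytics.track() come from the post.

interface AnalyticsClient {
  // Resolves only after the analytics service responds, or after its
  // 2-second timeout fires.
  track(event: string, props: Record<string, unknown>): Promise<void>;
}

const analyticsQueue: Array<{ event: string; props: Record<string, unknown> }> = [];

// Original pattern: enqueue and return immediately. A background worker drains
// the queue later, so a slow analytics service never blocks the caller.
function queueAnalyticsEvent(event: string, props: Record<string, unknown>): void {
  analyticsQueue.push({ event, props });
}

async function processPayment(analytics: AnalyticsClient, orderId: string): Promise<void> {
  // ...charge the card...

  // Safe: fire-and-forget, so checkout latency is independent of analytics health.
  queueAnalyticsEvent("payment_processed", { orderId });

  // The AI's "optimization" was effectively this: awaiting the analytics call
  // inline, so every payment inherits the analytics service's 2s timeout.
  // await analytics.track("payment_processed", { orderId });
}
```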
*The pattern*
Traditional CI/CD catches syntax errors, type mismatches, and test failures.
AI doesn't make those mistakes, or at least tests and linters catch them before they're committed. The real problem is that AI generates syntactically perfect code that violates your system's unwritten rules.
*The institutional knowledge problem*
Every codebase has landmines that live in engineers' heads, accumulated through incidents.
AIs can't know these, so they fall into the traps. It's then on the code reviewer to spot them.
*What the successful teams do differently*
They write constraints in plain English, and AI enforces them semantically on every PR. E.g. "All routes in /billing/* must pass requireAuth and include the orgId claim."
AI reads your code, understands the call graph, and blocks merges that violate the rules.
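The article doesn't show what such a rules file looks like; a hypothetical version, with made-up IDs and paths, might be as simple as:

```yaml
# .ai-review-rules.yml: hypothetical format and paths, for illustration only.
rules:
  - id: billing-auth
    description: >
      All routes in /billing/* must pass requireAuth and include the orgId claim.
    paths: ["src/routes/billing/**"]

  - id: async-analytics
    description: >
      Analytics events must be queued via queueAnalyticsEvent(), never awaited
      inline in payment or checkout code.
    paths: ["src/payments/**", "src/checkout/**"]
```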
*The bottleneck*
When you're shipping 10x more code, validation becomes the constraint, not generation speed.
The teams shipping AI at scale aren't waiting for better models. They're using AI to validate AI-generated code against their institutional knowledge.
The gap between "AI that generates code" and "AI you can trust in production" isn't about model capabilities; it's about bridging the institutional knowledge gap.
---
The post sparked a discussion on Hacker News about the best ways to encode this institutional knowledge into constraints. A few highlights from the thread:
Some teams use Claude or similar models in GitHub Actions to automatically review PRs. The rules are basically natural language encoded in a YAML file committed to the codebase. Pretty lightweight to get started.
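A minimal sketch of that wiring, assuming a hypothetical `ai-review.sh` script that sends the PR diff and the rules file to the model (rather than any specific published action):

```yaml
# .github/workflows/ai-review.yml: illustrative sketch; scripts/ai-review.sh is
# a hypothetical script, not a real published action.
name: AI rule review
on:
  pull_request:

jobs:
  review:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write   # needed to leave review comments on the PR
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0     # full history, so the diff against the base branch is available

      # Hypothetical step: send the diff and .ai-review-rules.yml to the model,
      # then post a comment for each rule the change appears to violate.
      - name: Check diff against plain-English rules
        run: ./scripts/ai-review.sh .ai-review-rules.yml
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```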
Other teams upgrade to dedicated tools like cubic. You can encode your rules in our UI, and we're releasing a feature that lets you write them directly in your codebase. We check them on every PR and leave comments when something violates a constraint.
The in-codebase approach is nice because the rules live next to the code they're protecting, so they evolve naturally as your system changes.
Instead of layering on another AI for validation, maybe code generation should be used as a catalyst to finally formalize these rules: turn them into custom linting rules, architectural tests (like with ArchUnit), or just well-written documentation that a model can be fine-tuned on. Using AI as a crutch for bad processes is a dangerous path.
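For example, the analytics rule from the incident above could be encoded today with ESLint's built-in `no-restricted-syntax` rule, no extra AI required. The globs and AST selector in this sketch are assumptions about a hypothetical codebase:

```js
// eslint.config.js: sketch of turning one unwritten rule into a lint rule.
// (TypeScript sources would additionally need typescript-eslint as the parser.)
export default [
  {
    files: ["src/payments/**/*.js", "src/checkout/**/*.js"],
    rules: {
      "no-restricted-syntax": [
        "error",
        {
          // Matches any analytics.track(...) call expression.
          selector:
            "CallExpression[callee.object.name='analytics'][callee.property.name='track']",
          message:
            "Don't call analytics.track() inline in payment/checkout paths; " +
            "use queueAnalyticsEvent() so a slow analytics service can't block checkout.",
        },
      ],
    },
  },
];
```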
If you're curious, you can check it out here: https://cubic.dev
Happy to answer any questions about what we've seen working (or not working) across different teams.