The Copilot Productivity Paradox
Key topics
The article examines the productivity paradox of AI coding assistants such as GitHub Copilot. The discussion centers on the capabilities and limitations of these tools in software development, with commenters sharing their experiences and concerns.
Snapshot generated from the HN discussion
Discussion Activity
Moderate engagement
- First comment: 10h after posting
- Peak period: 8 comments in 12-18h
- Avg per period: 2.7 comments
Based on 16 loaded comments
Key moments
- Story posted: Sep 6, 2025 at 8:08 PM EDT (4 months ago)
- First comment: Sep 7, 2025 at 5:55 AM EDT (10h after posting)
- Peak activity: 8 comments in 12-18h, the hottest window of the conversation
- Latest activity: Sep 9, 2025 at 9:39 AM EDT (4 months ago)
Want the full context?
Jump to the original sources
Read the primary article or dive into the live Hacker News thread when you're ready.
This seems unlikely to me. However, the disruption this would cause for developers would be very much secondary to the disruption it would cause for the several multinational companies that produce software as their primary product.
In fact though, you won't need that either.
You'll be able to give it a few drawings and/or pictures of objects and get it to output STL, STEP or whatever other CAD format you want directly, plus all the technical drawings. Refine from there.
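To make that scenario concrete: STL, the simplest of the formats the commenter names, is just a list of triangular facets in plain text. Below is a minimal sketch of the artifact such a text-to-CAD tool would ultimately have to emit; the single hard-coded triangle and the `write_stl` helper are illustrative placeholders, not anything from the thread or from a real model's output.

```python
# Minimal ASCII STL writer. In the commenter's scenario, a model would
# generate the facets from drawings or photos; the triangle below is
# hard-coded purely to show the target format.

def write_stl(path, name, facets):
    """facets: list of (normal, (v1, v2, v3)) tuples of 3D coordinates."""
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for normal, vertices in facets:
            f.write("  facet normal {} {} {}\n".format(*normal))
            f.write("    outer loop\n")
            for v in vertices:
                f.write("      vertex {} {} {}\n".format(*v))
            f.write("    endloop\n")
            f.write("  endfacet\n")
        f.write(f"endsolid {name}\n")

# One right triangle in the z=0 plane, normal pointing up.
write_stl("part.stl", "demo",
          [((0.0, 0.0, 1.0),
            ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)))])
```

Real parts are thousands of such facets (or, in STEP, parametric geometry), which is exactly the gap between "a few drawings" and a manufacturable file.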
This video of a dad making a PB&J sandwich based on instructions from his kids shows why this is impossible [1]. There is too much context you are assuming, and by the time you specify all the context, it becomes almost a programming language.
Communication assumes a lot of shared and hidden context. Contexts are shared world models, and even among humans those world models conflict. World models are shaped by beliefs, which in turn can be shaped by a variety of factors. Context can be cultural (same language, different cultural backgrounds), genetic (men and women), and so on.
[1] https://www.youtube.com/watch?v=j-6N3bLgYyQ
This does require giving the implementer full access to the original product.
People get around this via "clean room" reverse engineering where one engineer tears down the thing to be cloned and writes a detailed spec, and then a different engineer (who has never seen the internals of the thing in question, and so isn't "tainted" by that knowledge) implements it from that spec. You could do this with AI, but then all that spec writing/reading is done by a machine too, so you haven't really bought yourself anything legally.
To be clear, I'm not a lawyer. I'm just musing aloud.
This means they no longer hold copyright on their code.
My point above is just that this ("ChatGPT, make me a clone of Paint Shop Pro 4 but for modern Linux and in Rust. Here's a copy of the executable to get you started.") is a much more straightforward, old-fashioned kind of copyright infringement. I don't see why a court would treat it as different from "Let me decompile paintshoppro.exe to an IR and then recompile the IR for Linux."
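For reference, the forward version of that IR round-trip is routine with LLVM: source is lowered to IR, and the IR is then compiled for whatever target you like. The decompilation step the commenter describes runs this in reverse, recovering IR from the shipped executable. A sketch of the forward direction, assuming clang and llc are on the PATH and a hello.c exists (both are assumptions for illustration):

```python
import subprocess

# Lower C source to LLVM IR, then compile that IR to native assembly.
# A decompiler would instead recover IR from paintshoppro.exe itself,
# which is the step that carries the copyright problem.
subprocess.run(["clang", "-S", "-emit-llvm", "hello.c", "-o", "hello.ll"],
               check=True)
subprocess.run(["llc", "hello.ll", "-o", "hello.s"], check=True)
```

The legal point stands either way: whether the IR is produced by clang, a decompiler, or an LLM reading the binary, the output is still derived from the original work.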
"build me a Fusion 360 alternative. Skip all of the collaboration stuff though, I just want to store files locally."
Just like you don't usually say what kind of machine code the compiler is supposed to generate.
This is slowly becoming the case in SaaS products, with integration workflows driven by AI.
Of course, there are all kinds of discrete points between the current situation and "perfect LLMs / AI".
The author didn't tell us which model was used, and if we don't know the model, what use is the article? The model matters a lot.
My main complaint is with the ergonomics of the IDE integration; does that differ between models?
I've used whatever has been the model du jour from the first public beta until fairly recently, and I haven't experienced drastic quality changes between them.