Fast TypeScript (Code Complexity) Analyzer
Posted 2 months ago · Active 2 months ago
ftaproject.dev · Tech · story
Key topics
Code Analysis
TypeScript
Software Development
Complexity Metrics
The Fast TypeScript Analyzer is a tool for measuring code complexity, sparking discussion on its effectiveness and usefulness in improving code quality.
Snapshot generated from the HN discussion
Discussion Activity
Light discussion. First comment: 3h after posting. Peak period: 4 comments in hours 6-7. Avg per period: 2.3.
Comment distribution: 23 data points (based on 23 loaded comments).
Key moments
- 01 Story posted: Oct 25, 2025 at 1:51 AM EDT (2 months ago)
- 02 First comment: Oct 25, 2025 at 5:01 AM EDT (3h after posting)
- 03 Peak activity: 4 comments in hours 6-7, the hottest window of the conversation
- 04 Latest activity: Oct 25, 2025 at 2:53 PM EDT (2 months ago)
ID: 45701607 · Type: story · Last synced: 11/20/2025, 11:59:22 AM
I prefer redundancy analysis checking for duplicate logic in the code base. It’s more challenging than it sounds.
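One naive way to sketch that kind of redundancy analysis (my assumption, not any specific tool's approach: the `normalize` and `findDuplicates` names are hypothetical) is to group functions whose source is identical after collapsing whitespace and renaming every word to a placeholder:

```typescript
// Crude structural fingerprint: collapse whitespace and replace every
// identifier/keyword with "ID", so `return x + y` and `return p + q` match.
function normalize(src: string): string {
  return src
    .replace(/\s+/g, " ")                    // collapse all whitespace runs
    .replace(/\b[A-Za-z_$][\w$]*\b/g, "ID"); // rename all words uniformly
}

// Map each function name to its normalized body, then keep only groups
// that contain more than one structurally identical function.
function findDuplicates(fns: Record<string, string>): string[][] {
  const groups = new Map<string, string[]>();
  for (const [name, body] of Object.entries(fns)) {
    const key = normalize(body);
    groups.set(key, [...(groups.get(key) ?? []), name]);
  }
  return [...groups.values()].filter((g) => g.length > 1);
}
```

Real duplicate-logic detectors work on ASTs rather than raw text, which is part of why the problem is harder than it sounds: renamed variables, reordered statements, and inlined helpers all defeat simple fingerprints like this one.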
That's a failure to understand and interpret computational complexity in general, and cyclomatic complexity in particular. I'll explain why.
Complexity is inherent to a problem domain, which automatically means it's unrealistic to assume there's always a no-branching implementation. However, higher-complexity code is associated with a higher likelihood of both containing bugs and gaining new bugs when changes are made. Higher-complexity code is also harder to test.
Based on this alone, it's obvious that it is desirable to produce code with low complexity, and there are advantages to refactoring code to lower its complexity.
How do you tell if code is complex, and what approaches have lower complexity? You need complexity metrics.
Cyclomatic complexity is a complexity metric designed to output a complexity score based on an objective and very precise set of rules: the number of branching operations and independent code paths in a component. The fewer code paths, the easier the component is to reason about and test, and the harder it is for bugs to hide.
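As a rough sketch of how the counting works (a hypothetical example, not output from any particular analyzer): cyclomatic complexity is the number of decision points plus one, where each `if`, loop, `case`, `&&`, `||`, and `?:` adds a decision point.

```typescript
// Hypothetical example: one base path plus three decision points,
// giving a cyclomatic complexity of 4. Exhaustive testing needs a
// case for each independent path through the function.
function shippingCost(weightKg: number, express: boolean): number {
  let cost = 5;                 // base path        (1)
  if (weightKg > 10) cost += 4; // decision point   (+1)
  if (weightKg > 20) cost += 8; // decision point   (+1)
  if (express) cost *= 2;       // decision point   (+1)
  return cost;                  // total: 4
}
```

This is also why the metric correlates with test effort: a score of 4 is a lower bound on the number of test cases needed for full branch coverage.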
You use cyclomatic complexity to figure out which components are more error-prone and harder to maintain. The higher the score, the higher the priority to test, refactor, and simplify. If you have two competing implementations, in general you are better off adopting the one with the lower complexity.
Indirectly, cyclomatic complexity also offers you guidelines on how to write code. Branching increases the likelihood of bugs and makes components harder to test and maintain. Therefore, you are better off favoring solutions that minimize branching.
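A common instance of that guideline (a sketch with hypothetical names, assuming nothing about any particular codebase) is replacing a chain of conditionals with a data-driven lookup, which collapses several branches into one fallback:

```typescript
// Branchy version: four independent paths (complexity ~4).
function labelBranchy(status: number): string {
  if (status === 200) return "ok";
  if (status === 404) return "not found";
  if (status === 500) return "error";
  return "unknown";
}

// Lookup version: the branching moves into data; only the `??`
// fallback remains as a decision point (complexity ~2).
const LABELS: Record<number, string> = {
  200: "ok",
  404: "not found",
  500: "error",
};

function labelLookup(status: number): string {
  return LABELS[status] ?? "unknown";
}
```

Both behave identically, but the lookup version has fewer paths to test, and adding a new status code is a data change rather than a new branch.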
The goal is not to minimize cyclomatic complexity. The goal is to use cyclomatic complexity to raise awareness on code quality problems and drive your development effort. It's something you can automate, too, so you can have it side by side with code coverage. You use the metric to inform your effort, but the metric is not the goal.
It could be misapplied, of course, like every other principle. For example, DRY is a big one. Just like DRY, there are cases where complexity is deserved: if nothing else, simply considering that no code used in a real-world context can ever be perfect, it is useful to have another measure that can hint at what to focus on in future iterations.
You are free to interpret the score within the broader context of your own experience, the problem domain your code addresses, time constraints, etc.
This (score is 7):

```typescript
function get_first_user(data) { first_user = data[0]; return first_user; }
```

scores better than this (score is 8):

```typescript
function get_first_user(data: User[]): Result<User> { first_user = data[0]; return first_user; }
```
I mean, I know that the type annotations are what give the lower score, but I would argue that the latter has the lower cognitive complexity.
Not really. TypeScript introduces optional static type analysis, but how you configure TypeScript also has an impact on how your codebase is transpiled to JavaScript.
Nowadays there is absolutely no excuse to opt for JavaScript instead of TypeScript.
With source maps configured, debugging tends to work out of the box.
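For reference, source maps are a single compiler option; a minimal tsconfig sketch (the option names below are standard TypeScript compiler options, but the `outDir` value is just an assumed layout) looks like:

```json
{
  "compilerOptions": {
    "sourceMap": true,
    "outDir": "dist",
    "strict": true
  }
}
```

With `sourceMap` enabled, the compiler emits a `.js.map` file alongside each output file, and debuggers in Node.js and browsers map stack traces and breakpoints back to the original TypeScript.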
The only place where I personally saw this become an issue was a non-Node.js project that used an obscure barreler, and it only posed a problem when debugging unit tests.
> Just feels like an extra layer of complexity in the deployment process and debugging.
Your concern is focused on hypothetical tooling issues. Nowadays I think the practical pros greatly outnumber the hypothetical cons, to the point you need to bend yourself out of shape to even argue against adopting TypeScript.
No? `first_user = data[0]` assigns `User | undefined` to `first_user`, since the list isn't guaranteed to be non-empty. I expect `Return` to be implemented as `type Return<T> = T | undefined`, so `Return<User>` makes sense.
I assumed `Return<User>` was a mistake, not a custom type as you suggest. But your interpretation seems more likely anyway.
This scores 6:

```typescript
function a(b) { return b[0]; }
```

This scores 3:

```typescript
const a = (a) => a;
```
I don't know about transpiling or performance, but cyclomatic complexity is associated with both cognitive complexity and code quality.
I mean, why would code quality not reflect cognitive load? What would be the point, then?
I think you didn't bother to pay attention to the project's description. The quick start section is clear that the "score" is an arbitrary metric that "serves as a general, overall indication of the quality of a particular TypeScript file," and that the full metrics are available for each file. The Playground page showcases a very obvious, informative, and detailed summary of how a component was evaluated.
> Maybe as a very senior TypeScript developer it could be obvious how to fix some things, but this isn't going to help anyone more junior on the team be able to make things better.
Anyone can look at the results of any analysis run. They seem to be extremely detailed and informative.
https://github.com/whyboris/TypeScript-Call-Graph
It may not be perfect in its outputs, but I like it for bringing attention to emerging (or still existing) hotspots.
I've found that the output, at least at a high level, aligns well with my inner expectation of which files deserve work and which ones are fine. Additionally, it's given us measurable outcomes for code refactoring, which non-technical people like as well.