Next.js Is Infuriating
Key topics
The author expresses frustration with Next.js, a popular React framework, citing its complexity and abstraction issues, sparking a heated discussion among commenters with varying opinions on the framework's strengths and weaknesses.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion. First comment: 1h after posting. Peak period: 95 comments in 0-6h. Average per period: 17.8. Based on 160 loaded comments.
Key moments
- Story posted: Sep 2, 2025 at 2:57 AM EDT (4 months ago)
- First comment: Sep 2, 2025 at 4:25 AM EDT (1h after posting)
- Peak activity: 95 comments in 0-6h (hottest window of the conversation)
- Latest activity: Sep 4, 2025 at 6:03 PM EDT (4 months ago)
Saying this as someone who has been doing Web-related development since 1998, the glory days of Perl and CGI.
Because anything in Java, .NET or Python certainly requires configuration and related infrastructure.
Spring Boot doesn't provide a serious production quality deployment without configuration.
Bare bones logging into standard out, yes.
That isn't production quality.
Production quality means telemetry logging, log rotation and zipping, and forwarding logs to a Kibana or Datadog dashboard.
I think you haven't used .NET in a while. Nowadays, logging is absurdly easy to configure. Heck, you usually don't even need to configure it, because the basics are already included in most templates. You just use the Logger class and it works.
The only time you have to spend more than 30 minutes on it is when you use some external logging libraries. And most of them are quite sane and simple to use, because it's so easy to create a custom logging provider.
Java, .NET and nodejs are all over the place around here.
The point was without configuration.
The Logger class doesn't handle production monitoring without additional configuration to get its output onto the necessary production dashboards.
For an upgrade, someone has to pay for it anyway, so whatever pains there are, they are reflected in the project budget.
More devs should do the math of work hours to money.
I have not worked with older versions, but with V20 & signals, it has been pretty good.
https://github.com/angular/angular/pull/43529#issuecomment-9...
It's kind of funny in hindsight, but at least we didn't have to modify every project just to update such a minor thing which was working already anyway.
In this regard the thing that absolutely sucks is the migration tool. Your best course of action is to update the versions manually in package.json, read the documentation on breaking changes and act accordingly.
In my view Angular was always insane, but it's becoming saner with each subsequent version. We now have typed forms (that took a while), standalone components and, most importantly, signals, which do most of the stuff RxJS is doing, but without the junior-killing hidden state and memory leaks.
He still ended up sneaking some metaprogramming capabilities into it, though he stopped short of something recognizable like macros.
So when the highest grade vendors started taking front end extra seriously since I'd say 2008-2010, what they built basically bulldozed over the things that made JS per se tolerable. Instead, they built frameworks out of the standard imperative primitives in JS, which take things in more of a C++-inspired direction.
The "growth hack" is here, as with vaunted Apple, vertical integration. The only way to couple markup to state management to server/client flow does not amount to a framework; it amounts to no framework since the Web was not invented by VCs.
To get a framework out of it, i.e. couple developers' practical knowledge to your ecosystem (turning it from general to domain specific but fuck devs right), you also need to couple things in your stack at the wrong places, as exemplified by the already much-maligned misfeature explored in TFA.
I.e. for the onboarding flow of a hosting business to work in 2025, you first need to have been teaching bad architecture for a generation. (It's why Meteor.js didn't take.)
On the one hand, pragmatic of Vercel to exist in the long shadow of the React/TS monstrosities. On the other hand, it's just one more company whose mere existence in this world has contributed to my work and life becoming harder for no real reason, even though nobody within a handshake of me is even their customer.
Like many, I only learned of them when I googled "who the hell made this horrible thing the frontend team over there is now stuck with; it's 0.01x-ing their velocity and frying their brains besides" and, well, now they're here. I would like to remind them to try and measure their externalities.
So you need (1) the knowledge of what platform APIs exist (2) the ability to reason about existing abstractions (3) the ability to define abstractions.
In modern life all three are considered unsafe things. Therefore, to prevent people from entering invalid states, there exists The Framework: a useless abstraction layer that does nothing besides being conventional. That's a common enough pattern anywhere the population's above Dunbar's number; whether it's embodied by React or TypeScript or Windows or an entirely different order of lowest-common-denominator monoculture is immaterial.
Thankfully, software maintains an objective material component - the code, which they're trying to now turn into another wishy-washy thing you interact with using endless imprecise human language. Purely in terms of that, it remains possible to propose some ways to get a project off the ground more efficiently by virtue of dodging the toxic mainstream:
TL;DR conventional: native DOM manipulation; state management with Observable; Vite; TDD.
TL;DR forward-thinking: same but in proper static language: write in Rust, compile to WASM.
TL;DR exotic: ClojureScript or another Lispy language that compiles to JS or WASM.
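To make the "state management with Observable" item above concrete, here is a minimal sketch of the idea in plain JS. None of these names come from a library; `createStore`, `set`, and `subscribe` are illustrative, standing in for whatever observable primitive you'd actually pick (RxJS, the TC39 Observable proposal, or hand-rolled as here):

```javascript
// Minimal observable store: a value plus a set of subscribers.
// All names here are illustrative, not from any framework.
function createStore(initial) {
  let state = initial;
  const listeners = new Set();
  return {
    get: () => state,
    set(next) {
      state = next;
      listeners.forEach((fn) => fn(state)); // notify every subscriber
    },
    subscribe(fn) {
      listeners.add(fn);
      return () => listeners.delete(fn); // returns an unsubscribe function
    },
  };
}

// Usage: a "view" reacts to every state change (here it just records them).
const count = createStore(0);
const seen = [];
const unsubscribe = count.subscribe((n) => seen.push(n));
count.set(1);
count.set(2);
unsubscribe();
count.set(3); // no longer observed
console.log(seen); // [1, 2]
```

The point is that this is the entire "framework": native DOM manipulation in the subscriber callbacks, and a store small enough to read in one sitting.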
Ofc, unless totally cowboying it, you also gotta be able to counteract the illusion of social proof. Ideally you want to be actively shaming people for building and normalizing bloatware. Preferably packing 1-2 yesmen worth of social proof of your own, just to get basic parity with the "best practices" in close quarters.
As always, depends on what you're building, for what purpose, and, critically, with whom... fuck all that though, the objectively bestest solution is always https://github.com/Matt-Esch/virtual-dom /s
That repo is a milestone of where history took its next wrong turn. I remember it making some waves when it dropped, and it's certainly the first VDOM implementation I saw. I considered it a good, original idea.
IIRC, React came around about that time: to take the simple and sane "VDOM" optimization, apply it "judiciously" (i.e. where it doesn't help), and make it possible to bolt enough shit onto it (redux! teaching you to type the same thing thrice before TypeScript! SAGA! that one weird DB paper from '74 applied to frontends via JS' half-baked, half-async "generators", just to demonstrate to people that they should revere the old CS publications, and definitely not anything like read them or reason about them!)
Just so that people could hold bloody bootcamps about it, where they'd be able to weed out any juniors capable of reading MDN for themselves. ("Bootcamp" is another cosmic joke like "Instagram" and "Trump". That thing where they harass people into obedience then turn them into socially sanctioned murderers? OK, checks out, let's name our programming learning experience after it! It's nothing if not profoundly mission-driven!)
Back in those days or a little later, I remember Vue 1 and 2 being pretty great: it somehow managed to do its thing without first having to introduce three whole new dialects of JavaScript and counting.
Overall I'm glad to be out of the frontend space for a good while now and only learning about it from the confused screaming of those still trapped there. (Oh and also Next's "static build" is a Docker container I had to cook up myself, which is a next level of ridiculous; as with VDOM, you first need to have seen the past level in order to recognize it as regress, and I presume a lot of people simply haven't had the opportunity to pursue any form of informed comparison)
(For "agentic workflows" idk -- just don't use them ffs the externalities are not remotely priced in; otherwise you'd be the exact same problem as them -- the designed-by-corporate-committee frontend DX of the past decade certainly strike me as something that'd make more sense to a statistical model than to a human mind.)
Old Vue is nice; we still have some Vue apps and they're just running without major headaches. I do recall some distinct issues with properties introduced on objects after initializing the component not being reactive, but it has mostly been an acceptable experience.
That's probably the better outcome here. If you know enough things to find what I wrote entertaining, rather than vexing, chances are you're able to pick right tools for jobs just fine.
Meanwhile treadmills are gonna treadmill; if I've got one answer to everything it's to stay off them. Mighty difficult when everyone's trying to drag you onto one.
> I do recall some distinct issues with properties introduced on objects after initiating the component not being reactive
Wasn't that what they fixed using Proxy (making ES6 finally a hard requirement for anything at all)?
> Old Vue is nice, we still have some Vue apps and they're just running without major headaches.
> but it has mostly been an acceptable experience
Better than one could say about the current generation of stacks. Big vibes of vendors trying to cargo-cult ZIRP-associated patterns (as if those were what produced value in the pre-2020s Web, and totally not all the human creativity that used to be channeled into the medium before the masks started falling off).
Maybe, but I haven't touched Vue in a long time.
> Better than one could say about the current generation stacks.
For sure. I still have some projects that use what I like to call "jQueryScript", where there's just a bunch of unprocessed (or only minified) JS files with `$("#foo").click(function() { ... })` stuff everywhere. Looking back, it wasn't all that bad.
I just wish there were a pattern that is simple, works with plain, modern JS without any big dependencies (I don't want 600 node modules installing a frontend framework, please), is easily modular, and integrates nicely with other stuff (so likely either manipulating DOM nodes directly or using Web Components).
Out of protest, I have some smaller projects that keep their GUI templates in `<template>` tags and manipulate them with plain JavaScript. Anyone who opens the code will see it and think "wtf?" and then be like "oh... uh, sure". The largest one I wrote was thousands of lines of code like that, kind of like a classical MVC pattern where a view (a class manipulating DOM nodes) renders a model (a JS object or class). The controller would subscribe to custom events (defined by the view), update the model and call `render()` on the view. It had a lot of small classes, which was a bit too verbose for my liking.
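The pattern described above can be sketched in a few lines. This is not the parent's actual code; to keep the sketch runnable outside a browser, the view renders to a string where the real thing would clone a `<template>` and patch DOM nodes, and all the names are illustrative:

```javascript
// View: owns presentation and emits custom events for the controller.
// A plain Event subclass stands in for browser CustomEvent.
class AddEvent extends Event {
  constructor(text) {
    super("add");
    this.detail = text;
  }
}

class TodoView extends EventTarget {
  render(model) {
    // In the browser: clone the <template>, fill in the nodes.
    this.output = model.items.map((t, i) => `${i + 1}. ${t}`).join("\n");
    return this.output;
  }
  clickAdd(text) {
    // In the browser this would be wired to a button's click handler.
    this.dispatchEvent(new AddEvent(text));
  }
}

// Controller: subscribes to view events, updates the model, re-renders.
class TodoController {
  constructor(model, view) {
    view.addEventListener("add", (e) => {
      model.items.push(e.detail); // update the model...
      view.render(model);         // ...then ask the view to re-render
    });
    view.render(model);
  }
}

const model = { items: ["buy milk"] };
const view = new TodoView();
new TodoController(model, view);
view.clickAdd("write docs");
console.log(view.output); // "1. buy milk\n2. write docs"
```

The separation is the whole trick: the view never touches the model directly, and the controller never touches the DOM.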
Do you want to be a spirit haunting the threshold in half-agony, half-but-also-mockery? It's accomplished by the "Figuring Out How NPM Packaging Actually Works So You Can Write Proper Isomorphic Business Logic" ritual, a fell rite from the forbidden "Uncomfortable Truths About The Presentation Layer" grimoire, and one that a lot of them would be better off with you not knowing, and now you do ;-)
But whenever I work with Next, I feel like we lost the plot somewhere. I try a lot of frameworks and I like esoteric programming languages, but somehow Next.js, the frontier JavaScript framework embraced by React, is the only experience where half the time I have no idea what its error messages (if I get any to begin with) are trying to tell me. I can't even count the hours I've spent on weird hydration issues.
I was somewhat surprised when I noticed simple Next.js landing pages would break in Firefox. Worse yet, the failure mode was to overlay all of the content with a black screen and white text, "An application client side error has occurred". It was surprising in that a simple landing page couldn't render, but when I discovered that the cause was a JS frontend framework, I felt that it was par for the course.
Perhaps it makes sense to the advocates, but for those of us not on the bandwagon, it can be sincerely baffling.
If you don't mind my asking, what sort of applications have you worked on, how many contributors were there, how long was their lifespan, and how long did you work on them for? Personally, I've found the type of "vanilla" JS approach to be prohibitively difficult to scale. I've nearly exclusively worked on highly interactive SaaS apps. Using a substantial amount of JS to stitch together interactions or apply updates from the server has been unavoidable.
The engineering organizations at companies I've worked at have ranged in size from three devs to over 20,000. Projects I've worked on have ranged from three devs to maybe 500-1,000 (it's sometimes hard for me to keep track at a giant company). I've worked on projects using "vanilla" JS, Knockout, Backbone, Vue, and React[0]. The order in which I listed those technologies is also roughly how quickly the code became hard to maintain.
[0] This is not an exhaustive list of which frontend frameworks/libraries I've used, but it's the ones I have enough experience with to feel comfortable speaking of the long term support of[1]. For example, I used Ember heavily for about a year, but that year was split between two projects I spent six months each on. Similarly, I've used Next.js, but only for prototyping a few times and never deployed with it to anything other than a private server.
[1] Except Lightning Web Components, which I've used a lot but hate so much that I don't want to dishonor those other technologies by listing it alongside them.
I am happy for them and their money, but I can't use this anymore. I take Vite as the default option now, but I would prefer something more lightweight.
If I went back in time, I would have called it Routing Middleware or Routing Handler. A specific hook to intercept during the routing phase, which can be delivered to the CDN edge for specialized providers. It’s also a somewhat advanced escape hatch.
Since OP mentions logging, it’s worth noting that for instrumentation and observability we’ve embraced OpenTelemetry and have an instrumentation.ts convention[2]
[1] https://nextjs.org/blog/next-15-5#nodejs-middleware-stable
[2] https://nextjs.org/docs/app/api-reference/file-conventions/i...
> Since OP mentions logging, it’s worth noting that for instrumentation and observability we’ve embraced OpenTelemetry and have an instrumentation.ts convention
That makes it sound as though the answer to a clumsy logging facility is simply to add another heavy layer of complexity. Surely not every application needs OpenTelemetry. Why can’t logger().info() just work in a sensible way? This can't be such a hard problem, can it? Every other language and framework does it!
I think OTEL is pretty sensible as a vendor-free option, and if you want a console logger you can use the console exporter[0] in debug mode during local development. Also, if Next is designed as a framework for building production-grade apps, isn't a standardized way to implement o11y with OTEL a worthwhile tradeoff?
If you view that as being overkill, perhaps you're not the target audience of the framework
[0] https://opentelemetry.io/docs/languages/js/exporters/#consol...
Most frameworks have powerful loggers out of the box, like Monolog in the PHP world.
There's even a handler for monolog in PHP - they are not necessarily mutually exclusive
https://github.com/open-telemetry/opentelemetry-php/blob/mai...
The fact that Monolog has a handler for this tool isn't really the point; it just shows there is one more layer of complexity tacked on.
You can still log to a text file if you want to run locally, but for something like next.js where you're intended to deploy production to some cloud somewhere (probably serverless) the option of _just_ writing to a text file doesn't really exist. So having OTEL as an ootb supported way to do o11y is much better than the alternative of getting sucked into some vendor-specific garbage like datadog or newrelic
If you wanted "dead simple" text-based logging in a situation where a service is deployed in multiple places you'd end up writing a lot of fluff to get the same log correlation abilities that most OTEL drivers provide (if you can even ship your logs off the compute to begin with)
Which again comes back to the "maybe the framework isn't for you" if you're building an application that's a monolith deployed on a single VPC somewhere. But situations where you're working on something distributed or replicated, OTEL is pretty simple to use compared to past vendor-specific alternatives
People expect "middleware" to mean a certain thing and work a certain way.
I expect these things to be standardized by the framework and all the sharp edges filed off - thats why I go to a framework in the first place.
(My username has never been more appropriate!)
Here in this article, the author, failing to comprehend the domain differences, is applying the same approach to call a function everywhere. Of course it won't work.
The fallacy of nextjs is attempting to blend function domains that are inherently different. Stop doing that and you will be fine. Documentation won't work, it will be just more confusing. Blending edge and ssr and node and client-side into one is a mess, and the attempt to achieve that only results in layers upon layers of redundant framework complexity.
I think a big part of the negative sentiment derives from the fact that detailed reference documentation is almost non-existent. The documentation mostly tells you what exists, but not how to use things, how they get executed, common pitfalls and gotchas, etc.
The documentation is written to be easy and friendly to newcomers, but it is really missing the details and nuances of whatever execution context a given API runs in, and does not touch on the derived complexities of using React in a server environment, etc.
This is a trend across a lot of projects these days: the nuances and details are often missing. Writing good documentation is really hard; finding the balance between user-friendly and detailed is hard.
Keep it up
`npx @next/codemod@canary upgrade latest`
Thanks for the note! Indeed, it is also challenging when experience hides which things are not obvious, or which connections a reader still needs to make when going through the docs. It is an area of continuous improvement.
> The documentation is written to be easy and friendly to newcomers, but is really missing the details and nuances of whatever execution context a given api is in and does not touch on derived complexities of using react in a server environment etc.
I think on this particular topic an assumption was made on the docs side: that listing the Edge runtime (when middleware was introduced) as its own thing, which might as well run on another computer, would also communicate that it does not share the same global environment as the underlying rendering server.
I'll do some updates to narrow this down again.
> The documentation mostly tells you what exists, but not how to use them, how they get executed, common pitfalls and gotchas etc etc.
Do you have any more examples of this? I have been improving revalidateTag/revalidatePath, layouts, fetch, hooks like useSearchParams, gotchas with NextResponse.next, etc.
I know the OP post does talk about issues not being responded to, but that trend has been changing. If you do find/remember something as you describe, please do open a documentation issue, pointing to the docs page and the confusion/gotcha - we have been addressing these over the past months.
I really hate this stuff. Users raise feedback for something they need, the dev team considers the feedback, they spend a really long time thinking about the most perfect abstraction, scope the problem way out to some big fundamental system, and come up with an extremely complicated solution that is "best". The purist committee-approved solution could technically be used to address what the user asked for, with a lot of work, but that's no longer the focus. Pragmatism goes out the window; it's all about inventing fun abstract puzzles.
All the while, the user just wanted to log things.
Not saying that's the exact situation here, but the phrasing in the comment was all too real to me.
I spent a similar amount of time setting up opentelemetry with Next and while it would have been titled differently, I would have likely still written a blog post after this experience too.
This isn't your fault, but basically every OpenTelemetry package I had to set up is marked as experimental. This does not build confidence when pushing stuff to production.
Then, for the longest time I couldn't get the pino instrumentation working. I managed to figure it out eventually, but it was a pain.
First, pino has to be added to serverExternalPackages. If it's not, the OTel instrumentation does not work.
Second, the automatic instrumentation is extremely allergic to import order. And also for whatever reason, only the pino default export is instrumented. Again, this took a while to figure out.
Module-local variables don't work how I would expect; I had to use globalThis instead.
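For reference, the first fix mentioned above is a one-liner in config. This is a sketch of the shape described in the Next.js config docs, not the commenter's actual file:

```javascript
// next.config.js — sketch of the serverExternalPackages workaround above.
// Keeping pino external means Next doesn't bundle it, so the OpenTelemetry
// pino instrumentation can patch the real module when it is required.
/** @type {import('next').NextConfig} */
const nextConfig = {
  serverExternalPackages: ["pino"],
};

module.exports = nextConfig;
```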
And after all that I was still hit by this: https://github.com/vercel/next.js/issues/80445
It does work, but it was not great to set up. Granted, I went the manual route (e.g. not using @vercel/otel).
Also shout out to Pinia, I love you!
Everyone complains that React is slow and horrible. It isn't; their code is slow and horrible. React is snappy as hell when you use it properly.
Why use something that you have to use "properly" when there are things out there that enforce being used properly?
And just to be clear I'm not saying react is better than Vue. I don't know Vue. Maybe it is better. All I'm saying is react is alright in my experience, the problem is people overcomplicate and mess things up. I've seen that in pretty much every piece of software I've ever worked on, backend/frontend/whatever. So the claim that Vue just magically can't be messed up is difficult for me to believe.
With Vue I started using Pinia, which provides data stores, for my whole app's state management. Clean and centralized logic; with React, idk what a substitute would be.
I know React can be clean too, but that (in my opinion) requires a lot more depth of knowledge about the framework.
That's the problem though, it's hilariously easy to shoot yourself in the foot with React. If almost every project makes the same common mistakes, it ceases being an issue with the people using it, it's a broader problem. With Vue or Svelte you'd have to try damned hard and go out of your way to mess up in similar ways, because the idiomatic way of writing Vue, especially with Options API, is so simple and straightforward. How many articles do we have out there begging people to please stop using `useEffect`, for example?
Plus, React's reactivity model is terrible and a source of a lot of performance pitfalls. Vue's and Svelte's Proxy/Signals-based approach is much more performant out the gate, and in the case of Vue the diff reconciliation algorithm is a lot better than React's, which itself will already prevent a bunch of useless re-renders compared to 'standard' React apps.
This isn't just a react problem by the way, people write horrible messy backend code as well so I'm having a hard time believing that they wouldn't find a way to make a horrible mess of a Vue app as well.
But maybe you're right, maybe it is better. I wouldn't know.
I'd recommend giving it a try, and especially Svelte; in Vue, ideally with the Options API (but the Composition API is nice too). It's really clear early on how simple it is, despite it paradoxically having more to it than React does (like the event/prop system).
A RoR app will just sit comfortably wherever you deploy it, slowly doing its job like a good, reliable tractor.
A typical Next.js app is smeared all across its origin, some geographically convenient Edge and the frontend. It's a very different use case.
I am wondering about giving Remix a whirl for an upcoming forum/CMS rewrite with custom auth. Anybody else have experiences with Remix?
Stick with a single version and you'd probably be happy though.
FWIW, I've been doing webdev-related work for a living since 1998, and React since 2016.
New devs coming in and expecting the framework to be "batteries included", which it absolutely is not, will also have a bad time. Node APIs, ALS/context, handling app version changes on deploys, running the server app itself (if in cluster mode, e.g. with pm2, what that means in terms of a "stateless" app, wiring up all the connections and pools and events and graceful reloads and whatnot...), hell, even basic logging (instead of console.xxx)... all of that is up to you to handle. But the framework gives you space.
People new to React and/or Node will be confused as hell for quite a bit... in such cases I would add about 3 months of personal dev time and learning just to wrap your head around everything. The React docs themselves say that you should use a framework if you're using React in 2025, but it's not that easy. There is a cost, especially if you go the full-stack route (client + server + hydration) of having everything under one "typescript roof". The payoff is big, but definitely not immediate.
Having to know node (or other supported server) and needing to implement your server business logic from scratch (models, db access, caching layer with redis or something, etc. etc.) is not a weakness, it's a strength.
The thing that .net tried to do for like a decade and failed to (one stack for both the server and the client), see Blazor bullshit etc. is handled here. Imagine you need a modern web app or a shop or whatever and that means React. It just does. Your backend stack is PHP or Rails or Go or whatever. Now you have to somehow square the hydration circle which you can't do without it being a massive pita (as PHP renders a page on the backend in.. well.. PHP) so you end up passing the data manually in some blobs of jsons or whatever with each request (or even worse, side to it in some fetches) that then React initializes from on the client and it's a total mess. Not to mention that unless you render the page in both php and react (try to keep both outputs the same lmao), you'll only see the full app "on the client" meaning crawlers/bots/google have to run JS on all your pages or see nonsense ... yikes.
SSR with hydration is there for this reason. You render the same React tree both on the server and on the client. The hydration process itself is made much easier by RR and its infrastructure (automatic passing of data to the client). Hell, thanks to SSR you don't even have to have JS enabled on the client if you're sticking to basic web standards (see the RRv7 docs on progressive enhancement and state management), and the site will still work. This means even crawlers without JS runtimes will be able to index your site.
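The core idea is easy to show without any framework. This sketch is not React; it just illustrates why one shared render function avoids the PHP-plus-React drift problem described above: the server and client render from the same code and the same serialized state, so their markup cannot disagree.

```javascript
// One pure render function, shared by "server" and "client".
function render(state) {
  return `<ul>${state.items.map((i) => `<li>${i}</li>`).join("")}</ul>`;
}

// "Server": render HTML and serialize the state alongside it,
// the way SSR frameworks embed initial data into the page.
const state = { items: ["a", "b"] };
const serverHtml = render(state);
const payload = JSON.stringify(state);

// "Client": re-render from the serialized state. Matching markup is
// what lets hydration attach event handlers without repainting.
const clientHtml = render(JSON.parse(payload));
console.log(serverHtml === clientHtml); // true
```

With two languages, `render` has to be written twice and kept in sync by hand, which is exactly the mess the parent describes.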
It handles edge-cases extremely poorly, and when you have those scenarios, you either need to find a workaround (so code becomes ugly and painful to maintain) or give up.
I've worked with it for a project, gladly never again.
I'll extend my feedback to the Node.js backend. Look how many flavors: Deno, Bun... It's a mess.
Node.js is a terrible platform for the server side in comparison to Java/C# (performance/stability) and Ruby/Python (dev speed).
The only reason it is successful is that everybody knows JS/TS. But just because everybody knows it is rarely a reason to use it.
The event loop is quite nice and easy to reason about, but that's all there is to it. It is single-threaded and comes with gotchas, which in the end is, in my opinion, a net negative.
Many other languages/platforms also offer an event loop for concurrency, but it isn't often used. In the end, no matter how much we hate threads, they serve a great purpose that has been battle-tested for decades.
Unfortunately companies adopt programming languages based on hype and trends, rather than technical reasons.
How long until we have people hyped up about writing web servers in CSS, even though it doesn't make any sense?
TLDR: It isn't only Next.js; the whole Node.js thing isn't that great.
Everything can be a word if you know what you're trying to say. Don't let anyone tell you otherwise. If they try, say: фakdelengiчpolis!
Even though I'm a fan of looking for underlying motivations to what people do, I'd be wary of pointing out "only" reasons for anything. I know it's a rhetorical exaggeration, but still.
For example, I've found Node to do what Python does, only better: for instance, Node handles dependency resolution without involving the OS package manager. And ultimately, people end up using whatever they become comfortable with.
Which is not an exact science and people resist it being made into one. (Maybe if it was, it'd be much easier to sell people on working with the "least worst" tooling that is not actually good for anything.)
But the important thing is that on the backend they're all different. The baseline is the CPU, and you get to choose between real tools with real histories of real tradeoffs. Even if the tradeoff is "run JS" or "be written in C++" or smth else.
On the frontend, the baseline is JS, and whatever the browsers bolt to JS, and as us few sane keep pointing out, JS land is already such an "OS-within-an-OS" that there is very little point to building entire freaking frameworks between that and the application, just for the sake of having to swim through someone else's moat instead of invoking the APIs directly (which are also better designed for the most part).
So, in order to differentiate the market, one would need to build at least one "layer of layers" on top of JS, some products in which are gonna be more pointless, while others are gonna be less pointless (all of this to different people, of course). That way the user gets to choose between what sucks more and what sucks less, and one gets to feed on the attention paid to the choice of lesser evil; or, if insufficient attention is paid, to entirely direct the choice in whatever direction. It's a win-win.
Aside from the abysmal middleware api you also have the dubious decision to replace having a request parameter with global functions like cookies() and headers().
Perhaps there is some underlying design constraint that I'm missing where all of these decisions make sense but it really does look like they threw out every hard fought lesson and decided to make every mistake again.
[0] https://www.seangoedecke.com/good-api-design/
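The design complaint above can be illustrated without Next's actual API. This sketch contrasts an explicit request parameter with an ambient, framework-set global (a stand-in for what AsyncLocalStorage-style context does under the hood); all names are made up for illustration:

```javascript
// Style A: explicit request parameter. The dependency is visible in the
// signature, and the handler is trivially callable anywhere.
function handlerExplicit(request) {
  return `hello ${request.cookies.user}`;
}

// Style B: an ambient per-request global that only the framework sets.
// The dependency is invisible, and the handler only works "in context".
let currentRequest = null; // stand-in for AsyncLocalStorage-style context
function cookies() {
  if (!currentRequest) throw new Error("cookies() called outside a request");
  return currentRequest.cookies;
}
function handlerAmbient() {
  return `hello ${cookies().user}`;
}

console.log(handlerExplicit({ cookies: { user: "ada" } })); // "hello ada"

currentRequest = { cookies: { user: "ada" } }; // framework's job, normally
console.log(handlerAmbient()); // "hello ada"
currentRequest = null;
// Calling handlerAmbient() now throws: the hidden dependency bites as soon
// as code runs outside the exact context the framework prepared.
```

Both styles return the same result when everything lines up; the difference is what happens in tests, scripts, and background jobs, where only style A keeps working.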
I use Gleam's Lustre and am not looking back. Elm's founder had a really good case study keynote that Next.js is basically the opposite of:
https://www.youtube.com/watch?v=sl1UQXgtepE
https://discord.gg/Fm8Pwmy
There's some good stuff there
Just write your SPA the grown up way. Write your APIs in a language and framework well suited to such work (pick your poison, Rails, Spring, whatever Microsoft is calling this year's .NET web technology). And write your front-end in Typescript.
There's absolutely no reason to tightly couple your front-end and backend, despite how many Javascript developers learned the word "isomorphic" in 2015.
I used to use Django and there were so many issues that arose from having to duplicate everything in JS and Python.
The issue with mixing languages is that they have different data models, even simple things like strings and integers are different in Python and JS, and the differences only increase the more complex the objects get.
Sometimes I write some code and realise that it actually needs to execute on the client instead of the server (e.g. for performance), or on the server instead of the client (e.g. for security). Or both. Using one language means this can be a relatively simple change, whereas using two different languages guarantees a painful rewrite, which can be infectious and lead to code duplication.
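One concrete payoff of a single language: validation rules can live in one shared module that both the browser form and the server endpoint import, so the copies cannot drift apart the way a JS copy and a Python copy can. A minimal hand-rolled sketch (hypothetical module, no library):

```typescript
// shared/validate.ts -- hypothetical shared module, imported by both the
// client-side form and the server-side endpoint.
export type SignupForm = { email: string; age: number };

export function validateSignup(form: SignupForm): string[] {
  const errors: string[] = [];
  if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(form.email)) errors.push("invalid email");
  // Number.isInteger matters here: JS only has doubles, so "is this an int"
  // is exactly the kind of check that silently diverges between JS and Python.
  if (!Number.isInteger(form.age) || form.age < 18) errors.push("must be 18 or older");
  return errors;
}
```

With two languages, this function exists twice, and every rule change is a chance for the two copies to disagree.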
However, at a certain point, you're better off not writing a web app anymore, just an app with a somewhat wonky, imprecise runtime, one that lacks any sort of speed and has many drawbacks.
And you lose one of the most fundamentally important parts of the web: interop. I'm sure other languages can be made to speak your particular object dialect; however, the same problems that plague those other type systems will still plague yours.
Which circles back to my issue, no, sticking your head in the sand and proclaiming nothing else exists, does not, in fact, make things better.
You can write your front-end and back-end in the same language.
No shade to you for finding a productive setup, but Next.js tightly couples your front-end and back-end, no question.
I'd question that statement, since it's wrong. There's no requirement to connect your Next.js server to your backend databases; you can have it interact only with your internal APIs, which are the "real backends". You can keep your Next.js server in a monorepo alongside your APIs as standalone projects, with Next existing solely to optimize payloads or to perform render caching (being the head of a headless CMS). It seems like a weird choice, but you could also build almost a pure SPA and have Next serve only client components. The tightness of the coupling is entirely up to the implementor.
I'll give you one reason: Gel [1] and their awesome TypeScript query builder [2].
[1] https://www.geldata.com/ [2] https://www.geldata.com/blog/designing-the-ultimate-typescri...
If I take a look at other languages, these kinds of multi-threading issues are usually addressed by providing a separate context or sync package (handling mutexes and atomics) in the stdlib.
And I guess that's what's completely missing in Node.js and browser-side JS environments: a stdlib that keeps you from falling into these traps, and which is more or less enforced so that downstream packages and libraries end up with better quality.
If the handle() method of the middleware API would have provided, say, a context.Context parameter, most of the described debugging issues would have been gone, no?
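The suggestion can be sketched in TypeScript: thread an explicit context value through the middleware chain, the way Go threads context.Context through handlers. Everything below is illustrative (hypothetical names), not Next.js's actual middleware API:

```typescript
// An explicit per-request context: values and cancellation travel through the
// chain as a parameter, never as module-level globals.
type Ctx = {
  values: Map<string, unknown>; // request-scoped values
  signal: AbortSignal;          // cancellation/deadline propagation
};

type Handler = (req: string, ctx: Ctx) => string;
type Middleware = (next: Handler) => Handler;

// A middleware that attaches a request id (derived trivially, for illustration).
const withRequestId: Middleware = (next) => (req, ctx) => {
  ctx.values.set("requestId", `req-${req.length}`);
  return next(req, ctx);
};

const handler: Handler = (req, ctx) => {
  if (ctx.signal.aborted) return "aborted";
  return `handled ${req} as ${ctx.values.get("requestId")}`;
};

// Compose and run: every piece of per-request state is visible in the
// signatures, so "where did this value come from" is answerable by reading.
const app = withRequestId(handler);
const ctx: Ctx = { values: new Map(), signal: new AbortController().signal };
```

When the context is a parameter, debugging is a matter of following arguments; when it is ambient state, it is a matter of knowing which framework internals ran before you.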
I used to think Javascript everywhere was an advantage, and this is exactly why I now think it's a bad idea.
My company uses Inertia.js + Vue and it a significantly better experience. I still get all the power of modern frontend rendering but the overall architecture is so much simpler. The routing is 100% serverside and there's no need for a general API. (Note: Inertia works with React and Svelte too)
We tried Nuxt at first, but it was a shit show. You end up having _two_ servers instead of one: the actual backend server, and the server for your frontend. There was so much more complexity because we needed to figure out a bunch of craziness about where the code was actually being run.
Now it's dead simple. If it's PHP it's on the server. It's JS it's in the browser. Never needing to question that has been a huge boon for us.
After looking through the 20 different popular front end frameworks and getting confused by SSR, CSR, etc., I decided to use Nuxt. I thought: oh, this is great, Vue makes a lot of sense to me and this seems like it makes it easier to build Vue apps. I could not have been more wrong. I integrated it with Supabase + Vercel and had so many random issues I almost scrapped the entire thing to just build it with Squarespace.
In what way has that been a boon? Context switching between languages, especially PHP, seems like an even bigger headache. Is it strlen($var) or var.length or mb_strlen($var)?
Do you ever output JavaScript from PHP?
My biggest question though is how do you avoid ever duplicating logic between js and PHP? Validation logic, especially, but business logic leaks between the two, I've found. Doing it all in Next saves me from that particular problem.
Why would anyone send JavaScript from the PHP? And why care about duplicating a couple of JSON translations and null checks? That's all code is today anyway.
And switching languages? You can't use most of JS as it is. Even something as simple as split() has so many weird quirks that everyone just codes from a utils lib anyway.
spoken like someone who's not experienced enough to realize that duplicated code needs to be kept in sync, and then when it inevitably isn't, it'll lead to incidents, and also can't write JavaScript without using leftpad.
It's positioned as a ramp up for companies where frontend and backend devs work at loggerheads and the e-commerce / product teams need some escape hatch to build their own stateless backend functions
Annoying, obnoxious, and always trying to get your email but god damn do they get your attention.
They hire the core contributors of all major web frameworks to continue development under their roof. Suddenly, ongoing improvement of the web platform is largely dependent on the whims of Vercel investors.
They pretend to cater to all hosting providers equally, but just look at Next, which will always be tailored toward Vercel. When will it happen to Nuxt? Sveltekit? Vercel is in a position to make strategic moves across the entire SSR market now. Regardless of whether they make use of that power, it’s bad enough they wield it at all.
When has this ever been a good idea? When has it produced a good outcome? It never has, and it never will.
The fact that you use the term "SSR market" shows how effective Vercel's marketing via techfluencers has been. There isn't a market for SSR in the way you use it, only web hosting.
There is a world of engineering outside JS framework relevancy wars. It isn't only Vercel pushing the framing, btw. Other players are trying to replicate the Vercel marketing playbook: use/hire techfluencers and OSS devs to push frameworks/stacks on devs, who then push them up the company stack.
Usually it doesn't work for a startup, but Vercel proved it can.
Enter a million lite framework wrappers around well known tech, like supabase and postgres, upstash and redis, vercel and aws.
And I think their paid hosting was actually really good, up until they switched their $20/month plan to a whatever-it-may-cost and we-send-you-10-cryptic-emails-about-your-usage-every-month plan. That's when they lost me: not because it got more expensive, but because it became opaque, unpredictable, and annoying instead of carefree.
The Serverless framework attempted to build a self-hostable version of this stack for Next, but it is buggy due to Next's complexity. Open source includes being able to run the thing yourself. Releasing the framework and funding OSS that also enhances Next.js is nice, but it is a trap, because when the time comes to seriously run it, your only option is Vercel.
This is the exact problem with the App Router. It makes it extremely difficult to figure out where your code is running. The Pages Router didn't have this issue.
"use client" does NOT mean it only renders on the client! The initial render still happens on the server. Additionally, all imports and child components inherit the "use client" directive even when it's not explicitly added in those files. So you definitely cannot just look for "use client".
See what I mean now?
From the docs:
```
On the server, Next.js uses React's APIs to orchestrate rendering. The rendering work is split into chunks, by individual route segments (layouts and pages):
Server Components are rendered into a special data format called the React Server Component Payload (RSC Payload).
Client Components and the RSC Payload are used to prerender HTML.
```
HUH?
```
On the client (first load) Then, on the client:
HTML is used to immediately show a fast non-interactive preview of the route to the user. RSC Payload is used to reconcile the Client and Server Component trees.
```
HUH? What does it mean to reconcile the Client and Server Component trees? How does that affect how I write code or structure my app? No clue.
```
Subsequent Navigations On subsequent navigations:
The RSC Payload is prefetched and cached for instant navigation. Client Components are rendered entirely on the client, without the server-rendered HTML.
```
Ok... something something the initial page load is (kind of?) rendered on the server, then some reconciliation (?) happens, then after that it's client rendered... except it's not: it actually does prefetching and caching under the hood. Surprise.
It's insanely hard to figure out and keep track of what is happening when, and on what machine it's actually happening.
If you try to use browser functionality in a component without 'use client' or to use server functionality in a client component, you'll get an error.
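That inheritance rule is easier to hold in your head as a tree walk: "use client" marks a boundary, and everything imported below the boundary is a client component whether or not it says so itself. A toy model of the rule (not Next.js code; types and names are illustrative):

```typescript
// Toy model of the "use client" boundary rule: a component is a client
// component if it, or ANY ancestor in the import tree, declares "use client".
type Component = { name: string; useClient?: boolean; children?: Component[] };

function classify(
  node: Component,
  inheritedClient = false,
  out = new Map<string, string>(),
): Map<string, string> {
  const isClient = inheritedClient || node.useClient === true;
  out.set(node.name, isClient ? "client" : "server");
  for (const child of node.children ?? []) classify(child, isClient, out);
  return out;
}

const tree: Component = {
  name: "Page", // no directive: server component by default
  children: [
    { name: "Sidebar" }, // still server
    {
      name: "SearchBox",
      useClient: true,                // explicit boundary
      children: [{ name: "Icon" }],   // no directive, but inherits client
    },
  ],
};
```

This is why grepping for "use client" is not enough: Icon above never mentions the directive, yet it ships and renders as a client component.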
Hum... You make an entire app in node, load the UI over react, pile layers and more layers of dynamicity on top (IMO, if next.js didn't demonstrate those many layers, I wouldn't believe anybody made them work), eschew the standard CDN usage, and then want distributed execution to solve your latency issues?
With all the other crazy shit people are doing (multi-megabyte bundle sizes, slow API calls with dozens of round-trips to the DB, etc) doing the basics of profiling, optimizing, simplifying seems like it'd get you much further than changing to a more complex architecture.
But these solutions keep coming up because they bring one thing: self-contained, "batteries included". Just the other day there was a thread on Hacker News about Laravel vs Symfony, and it was the same thing: shit breaks once complexity comes in.
If you compare those solutions with the old model that made NodeJS / React SPA get so popular, so fast: Buffet-style tooling/libraries. You basically build your own swiss army knife out of spare parts. Since all the spare parts are self-contained they have to target really low abstraction levels (like React as a component library, HTTP+Express as a backend router, Postgres as DB).
This approach has many disadvantages but it really keeps things flexible and avoids tower-of-babel style over-engineering. As in a lot of layers stacked on top of each other. Not that the complexity goes away, but instead you have a lot of layers sibling to each other and it is more doable to replace one layer with another if things aren't working well.
It is understandable why "batteries included" is so popular, it is really annoying to stitch together a bunch of tools and libraries that are slightly incompatible with each other. It definitely needs people with more experience to set up everything.
This is my job. We're a small team and my job is to keep things up to date. Insanely time consuming. Packages with hard dependencies and packages that stopped being supported 5 years ago.
Fact is, the only way around this on the frontend, without a monolithic "batteries-included" all-encompassing all-knowing all-mighty framework, is through standardization, which can only be pushed by the browsers. Like if browsers themselves decided how bundlers should work instead of leaving them extensible.
And this tooling hell is not only a browser frontend problem either; it is also quite common in game development, where you have monstrosities like Unreal Engine that "include batteries" but make it really hard to troubleshoot problems because they are so massively big and complex. A game engine is basically a bundler too: it combines assets and code into a runnable system on top of a platform.
The previous model was that you simply have code that runs on a server when a request comes in, sends a response to the client, and then the code in that response is run on the client. Instead, now we have a situation where some bits run on the server, some of them on the client, which call out to some bits on the server again, and all of this can happen either before or after the server started sending the response. And then on e.g. Vercel, you also have edge functions as an additional permutation.
Which is kinda neat, but also massively complicates the mental model.
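For contrast, the previous model fits in a single function: the code here runs on the server when a request comes in, and the only client-side code is whatever script text the response carries. A deliberately naive sketch:

```typescript
// The old mental model in one function. Server-side: runs once per request,
// can touch the DB, secrets, etc. Client-side: shipped as text inside the
// response, executed only in the browser. The seam is one obvious handoff.
function renderPage(user: string): string {
  const html = `<h1>Hello, ${user}</h1>`;
  const clientScript = `<script>document.title = "Hello";</script>`;
  return `<!doctype html><html><body>${html}${clientScript}</body></html>`;
}
```

With server components, client components, server actions, and edge functions, that single seam becomes several, and some of them cut through the middle of a component tree.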
There are a lot of gotchas and considerations with Nextjs, but it is a framework and it is not unexpected that unless it is developed by yourself, frameworks require some getting used to.
There are some things that could be better, especially in non-Vercel-hosted scenarios, and if the backend for frontend part becomes too complicated, I would switch to Nest.js, but overall I am satisfied with Next.
Not Next though. We built a pretty large app on Next and it was painful from start to finish. Every part of it was either weird, slow, cumbersome or completely insane.
We still maintain the app and it is the only "thing" I hate with a passion at this point. I understand that the ecosystem is pretty good and people seem to be happy with the results given that it is extremely popular. But my own experience has been negative beyond redemption. It's weird.
419 more comments available on Hacker News