🔥 My NextJS Handbook

I am a developer creating open-source projects and writing about web development, side projects, and productivity.
Next.js is a React-based framework that allows you to build server-side-rendered applications with ease. With Next.js, you can create dynamic and fast-loading web pages that are optimized for search engines and social media platforms. Some of the key benefits of using Next.js include:
Automatic code splitting for faster page loads.
Server-side rendering for improved SEO and performance.
Built-in support for CSS modules and styled components.
Easy deployment with Vercel, the platform that was built specifically for Next.js.
Today, I’m going to explain some advanced concepts of Next.js that most developers don’t know. You can use them to optimize your App and improve the Developer experience.
Understanding Next.js Rendering Strategies — SSR, CSR, SSG, ISR, and RSC
Sources: https://www.patterns.dev/react/
One of the advantages of Next.js is its versatility in how pages are rendered. If you understand how these strategies work, you will have an easier time building a fast and efficient site.
Next.js supports five rendering strategies:
Server-Side Rendering (SSR)
Client-Side Rendering (CSR)
Static-Site Generation (SSG)
Incremental Static Regeneration (ISR)
React Server Components (RSCs)
I will explain each strategy, including the process behind the scenes, the typical use cases, and the pros and cons.
Server-side Rendering

Server-side rendering (SSR) is one of the oldest methods of rendering web content. SSR generates the full HTML for the page content to be rendered in response to a user request. The content may include data from a datastore or external API.
The connect and fetch operations are handled on the server. HTML required to format the content is also generated on the server. Thus, with SSR, we can avoid making additional round-trip requests for data fetching and templating. As such, rendering code is not required on the client, and the JavaScript corresponding to this need not be sent to the client.
With SSR, every request is treated independently and will be processed as a new request by the server. Even if the output of two consecutive requests is not very different, the server will process and generate it from scratch. Since the server is common to multiple users, the processing capability is shared by all active users at a given time.

Pros and Cons
Executing the rendering code on the server and reducing JavaScript offers the following advantages.
Less JavaScript leads to quicker FCP and TTI: In cases where there are multiple UI elements and application logic on the page, SSR involves considerably less JavaScript than CSR. The time required to load and process the script is thus shorter. FP, FCP and TTI come earlier, and FCP is essentially equal to TTI. With SSR, users are not left waiting for all the screen elements to appear and for the page to become interactive.
Provides additional budget for client-side JavaScript: Development teams are required to work with a JS budget that limits the amount of JS on the page to achieve the desired performance. With SSR, since you are directly eliminating the JS required to render the page, it creates additional space for any third party JS that may be required by the application.
SEO enabled: Search engine crawlers receive fully rendered HTML, so pages can be indexed without extra workarounds.
Slow TTFB: Since all processing takes place on the server, the response from the server may be delayed in one or more of the following scenarios: multiple simultaneous users causing excess load on the server, or server code that is not optimized.
Full page reloads required for some interactions: Since all code is not available on the client, frequent round trips to the server are required for all key operations causing full page reloads. This could increase the time between interactions as users are required to wait longer between operations. A single-page application is thus not possible with SSR.
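In the Pages Router, SSR is expressed through getServerSideProps, which runs on every request. Here is a minimal sketch; fetchProducts is a hypothetical stand-in for a database query or external API call:

```typescript
// pages/products.tsx (Pages Router sketch)
type Product = { id: number; name: string };

// Hypothetical stand-in for a database query or external API call.
async function fetchProducts(): Promise<Product[]> {
  return [{ id: 1, name: "Keyboard" }];
}

// Runs on the server for every incoming request: data fetching and
// HTML generation both happen before the response leaves the server.
export async function getServerSideProps() {
  const products = await fetchProducts();
  return { props: { products } };
}
```

Because the function executes server-side on each request, none of its code (or credentials) ships to the browser, but each request pays the full processing cost.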
Client-side Rendering

In Client-Side Rendering (CSR) only the barebones HTML container for a page is rendered by the server. The logic, data fetching, templating, and routing required to display content on the page is handled by JavaScript code that executes in the browser/client. CSR became popular as a method of building single-page applications. It helped to blur the difference between websites and installed applications.
As the complexity of the page increases to show images, display data from a data store and include event handling, the complexity and size of the JavaScript code required to render the page will also increase. CSR resulted in large JavaScript bundles, which increased the FCP and TTI of the page.

As shown in the above illustration, as the size of bundle.js increases, the FCP and TTI are delayed. This means the user sees a blank screen for the entire duration between FP and FCP.
Pros and Cons
With React most of the application logic is executed on the client and it interacts with the server through API calls to fetch or save data. Almost all of the UI is thus generated on the client. The entire web application is loaded on the first request. As the user navigates by clicking on links, no new request is generated to the server for rendering the pages. The code runs on the client to change the view/data.
CSR allows us to have a Single-Page Application that supports navigation without page refresh and provides a great user experience. As the data processed to change the view is limited, routing between pages is generally faster making the CSR application seem more responsive. CSR also allows developers to achieve a clear separation between client and server code.
Despite the great interactive experience it provides, CSR has a few pitfalls.
SEO considerations: Most web crawlers can interpret server rendered websites in a straight-forward manner. Things get slightly complicated in the case of client-side rendering as large payloads and a waterfall of network requests (e.g for API responses) may result in meaningful content not being rendered fast enough for a crawler to index it. Crawlers may understand JavaScript but there are limitations. As such, some workarounds are required to make a client-rendered website SEO friendly.
Performance: With client-side rendering, the response time during interactions is greatly improved as there is no round trip to the server. However, for browsers to render content on client-side the first time, they have to wait for the JavaScript to load first and start processing. Thus users will experience some lag before the initial page loads. This may affect the user experience as the size of JS bundles get bigger and/or the client does not have sufficient processing power.
Code Maintainability: Some elements of code may get repeated across client and server (APIs) in different languages. In other cases, clean separation of business logic may not be possible. Examples of this could include validations and formatting logic for currency and date fields.
Data Fetching: With client-side rendering, data fetching is usually event-driven. The page could initially be loaded without any data. Data may be subsequently fetched on the occurrence of events like page-load or button-clicks using API calls. Depending on the size of data this could add to the load/interaction time of the application.
The importance of these considerations may be different across applications. Developers are often interested in finding SEO friendly solutions that can serve pages faster without compromising on the interaction time. Priorities assigned to the different performance criteria may be different based on application requirements. Sometimes it may be enough to use client- side rendering with some tweaks instead of going for a completely different pattern.
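The event-driven flow described above can be sketched framework-free; apiGetUsers is a hypothetical stand-in for a fetch call to an API route, and view is a simplistic stand-in for rendered UI state:

```typescript
// Client-side rendering in miniature: the page ships nearly empty, then
// JavaScript fetches data and builds the view in the browser.
type User = { id: number; name: string };

// Hypothetical stand-in for fetch('/api/users').
async function apiGetUsers(): Promise<User[]> {
  return [{ id: 1, name: "Ada" }];
}

let view: string[] = []; // simplistic stand-in for rendered UI state

// Wired to an event such as page load or a button click.
export async function loadUsers(): Promise<void> {
  const users = await apiGetUsers(); // round trip after the page is on screen
  view = users.map((u) => u.name);   // templating runs on the client
}
```

Note that the data round trip and templating both happen after the initial page load, which is exactly why large bundles and request waterfalls delay meaningful content.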
Relevant resources: SPAs Are a Performance Dead End
Static-Site Generation (SSG)

Based on our discussion on SSR, we know that a high request processing time on the server negatively affects the TTFB. Similarly, with CSR, a large JavaScript bundle can be detrimental to the FCP, LCP and TTI of the application due to the time taken to download and process the script.
Static rendering or static generation (SSG) attempts to resolve these issues by delivering pre-rendered HTML content to the client that was generated when the site was built.
A static HTML file is generated ahead of time corresponding to each route that the user can access. These static HTML files may be available on a server or a CDN and fetched as and when requested by the client.
Static files may also be cached thereby providing greater resiliency. Since the HTML response is generated in advance, the processing time on the server is negligible thereby resulting in a faster TTFB and better performance. In an ideal scenario, client-side JS should be minimal and static pages should become interactive soon after the response is received by the client. As a result, SSG helps to achieve a faster FCP/TTI.

SSG - Key Considerations
A large number of HTML files: Individual HTML files need to be generated for every possible route that the user may access. For example, when using it for a blog, an HTML file will be generated for every blog post available in the data store. Subsequently, edits to any of the posts will require a rebuild for the update to be reflected in the static HTML files. Maintaining a large number of HTML files can be challenging.
Hosting Dependency: For an SSG site to be super-fast and respond quickly, the hosting platform used to store and serve the HTML files should also be good. Superlative performance is possible if a well-tuned SSG website is hosted on multiple CDNs to take advantage of edge caching.
Dynamic Content: An SSG site needs to be built and re-deployed every time the content changes. The content displayed may be stale if the site has not been built + deployed after any content change. This makes SSG unsuitable for highly dynamic content.
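In the Pages Router, SSG is expressed through getStaticPaths and getStaticProps, both of which run at build time. A minimal sketch, where the posts array stands in for a CMS or data store:

```typescript
// pages/blog/[slug].tsx (Pages Router sketch; `posts` stands in for a CMS)
const posts = [{ slug: "hello-world", title: "Hello World" }];

// Runs once at build time: one static HTML file per returned path.
export async function getStaticPaths() {
  return {
    paths: posts.map((p) => ({ params: { slug: p.slug } })),
    fallback: false, // unknown slugs get a 404
  };
}

// Also runs at build time, once per path from getStaticPaths.
export async function getStaticProps({ params }: { params: { slug: string } }) {
  const post = posts.find((p) => p.slug === params.slug);
  return { props: { post } };
}
```

Every path returned from getStaticPaths becomes one pre-rendered HTML file, which is why content changes require a rebuild.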
Incremental Static Regeneration (ISR)
Static Generation (SSG) addresses most of the concerns of SSR and CSR but is suitable for rendering mostly static content. It poses limitations when the content to be rendered is dynamic or changing frequently.
Think of a growing blog with multiple posts. You wouldn’t possibly want to rebuild and redeploy the site just because you want to correct a typo in one of the posts. Similarly, one new blog post should also not require a rebuild for all the existing pages. Thus, SSG on its own is not enough for rendering large websites or applications.
The Incremental Static Regeneration (ISR, also referred to as iSSG) pattern was introduced as an upgrade to SSG, to help solve the dynamic-data problem and help static sites scale for large amounts of frequently changing data. ISR allows you to update existing pages and add new ones by pre-rendering a subset of pages in the background even while fresh requests for pages are coming in.
ISR works on two fronts to incrementally introduce updates to an existing static site after it has been built:
Adding new pages: The lazy-loading concept is used to include new pages on the website after the build. A new page is generated on its first request; while generation takes place, a fallback page or loading indicator can be shown to the user. Compare this to the plain SSG scenario, where a 404 error page is shown for non-existent pages. With fallback: true, if the page corresponding to a specific product is unavailable, Next.js shows a fallback version of the page (e.g., a loading indicator) and generates the real page in the background. Once generated, it is cached and shown instead of the fallback, and subsequent visitors receive the cached version immediately. For both new and existing pages, we can set an expiration time for when Next.js should revalidate and update them, using the revalidate property. In ISR, the fallback option behaves as follows:
fallback: false returns a 404 for unknown pages.
fallback: true shows a loading state while the page is generated.
fallback: 'blocking' waits for the page to be generated and then returns the full HTML without a loading state.
Updating existing pages: To re-render an existing page, a suitable timeout is defined for it. This ensures the page is revalidated whenever the timeout period has elapsed; it can be set as low as 1 second. The user continues to see the previous version of the page until revalidation finishes. ISR thus uses the stale-while-revalidate strategy: the user receives the cached (stale) version while revalidation takes place entirely in the background, without a full rebuild.
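In the Pages Router, both fronts map onto two properties: fallback in getStaticPaths for new pages, and revalidate in getStaticProps for existing ones. A hedged sketch, where fetchProduct is a hypothetical stand-in for a database lookup:

```typescript
// pages/products/[id].tsx (Pages Router ISR sketch)
type Product = { id: string; name: string };

// Hypothetical stand-in for a product lookup in a database.
async function fetchProduct(id: string): Promise<Product | null> {
  return id === "1" ? { id, name: "Keyboard" } : null;
}

export async function getStaticPaths() {
  return {
    paths: [{ params: { id: "1" } }],
    fallback: "blocking", // new pages: generated on first request, no loading state
  };
}

export async function getStaticProps({ params }: { params: { id: string } }) {
  const product = await fetchProduct(params.id);
  if (!product) return { notFound: true };
  return {
    props: { product },
    revalidate: 60, // existing pages: stale-while-revalidate, at most once a minute
  };
}
```

With revalidate: 60, visitors keep getting the cached page; once it is older than 60 seconds, the next request triggers a background regeneration.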
React Server Components
React’s Server Components enable modern UX with a server-driven mental model. This is quite different from server-side rendering (SSR) of components and results in significantly smaller client-side JavaScript bundles.
Update (React 18+ / Next.js 13+): React Server Components are now a production reality in Next.js 13+ with the App Router. Unlike classic SSR, RSCs allow you to render part of your UI on the server ahead of time without sending the associated JS to the client—dramatically shrinking client bundles (early reports show 20%+ reductions). The Container/Presentational pattern is a great candidate for RSC: the “container” (data-fetching logic) can be a Server Component that fetches data and passes it as props to a presentational Client Component, meaning the fetching logic never ships to the browser.
In the Next.js App Router, you no longer use getServerSideProps; instead, any React component in the app/ directory can be async and fetch data on the server. React Server Components are not a replacement for SSR; they complement it. You typically use RSCs for the majority of the page (rendered and streamed as part of SSR) and add 'use client' directives to components that need interactivity. Server Actions (stable as of React 19) let you define form or event handlers on the server using the 'use server' directive and call them from client components, further blurring the line between client and server.
React’s new Server Components complement server-side rendering, enabling rendering into an intermediate abstraction format without adding to the JavaScript bundle. This both allows merging the server tree with the client-side tree without a loss of state and enables scaling up to more components.
Server Components are not a replacement for SSR. When paired together, they support quickly rendering in an intermediate format, then having Server-side rendering infrastructure rendering this into HTML enabling early paints to still be fast. We SSR the Client components which the Server components emit, similar to how SSR is used with other data-fetching mechanisms.
This time, however, the JavaScript bundle will be significantly smaller. Early explorations have shown that bundle-size wins could be significant (18-29%), but the React team will have a clearer idea of the wins in the wild once further infrastructure work is complete.
The RFC gives an example where migrating a feature to a Server Component keeps the exact same code but avoids sending it to the client: a code saving of over 240K (uncompressed).
Will Server Components replace Next.js SSR?
No. They are quite different. Initial adoption of Server Components will actually be experimented with via meta-frameworks such as Next.js as research and experimentation continue.
To summarize, a good explanation of the differences between Next.js SSR and Server Components from Dan Abramov:
Code for Server Components is never delivered to the client. In many implementations of SSR using React, component code gets sent to the client via JavaScript bundles anyway. This can delay interactivity.
Server Components enable access to the back-end from anywhere in the tree. When using Next.js, you’re used to accessing the back-end via getServerSideProps(), which has the limitation of only working at the top-level page; arbitrary npm components are unable to do this.
Server Components may be refetched while maintaining Client-side state inside of the tree. This is because the main transport mechanism is much richer than just HTML, allowing the refetching of a server-rendered part (e.g such as a search result list) without blowing away state inside (e.g search input text, focus, text selection)
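To make the container/presentational split with RSC concrete, here is a JSX-free sketch. The component names, the fetchUsers helper, and the string "markup" are all illustrative stand-ins, not real Next.js APIs:

```typescript
// app/users/page.tsx (sketch): an async Server Component. Its code and
// its data-fetching logic never ship to the browser.
export async function UsersPage(): Promise<string> {
  const users = await fetchUsers(); // runs only on the server
  return UserList({ users });       // passes the data down as props
}

// app/users/user-list.tsx (sketch): would carry a 'use client' directive
// only if it needed state or event handlers.
function UserList({ users }: { users: string[] }): string {
  return users.join(", ");          // string stands in for real JSX markup
}

// Hypothetical stand-in for a direct database query.
async function fetchUsers(): Promise<string[]> {
  return ["Ada", "Grace"];
}
```

The "container" (UsersPage) stays on the server with its data access; only the presentational part would ever need to become a Client Component.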
Next.js Server Actions Lessons Learned
I have been building web apps for years, and one thing that has always been a pain is managing the messy back-and-forth between the client and the server. Next.js made things easier, especially with server-side rendering, but it still felt like there was a missing piece. Then came Server Actions. When I first heard about them, I was skeptical: “Server code directly in React components? Sounds like a recipe for disaster.“ But after using them for a few projects, I’m a convert. This section is my brain dump on Server Actions: how they work, why they matter, and when they can be useful (and when they might be more trouble than they are worth).
What Are Next.js Server Actions?
So, what exactly are these Server Actions? Basically, they are a way to write server-side code (the stuff that used to live in separate API routes) right inside your React components. Instead of creating a separate file for server interactions or business logic, you can now put that logic directly where it is used.
The secret sauce is the “use server“ directive. Think of it as a tag that tells Next.js, “run this code on the server“. You can tag the entire file or just specific functions. Here’s a quick example.
```typescript
// app/actions.ts
"use server";

export async function addItemToCart(itemId: string, quantity: number) {
  // This runs on the server
  console.log(`Adding ${quantity} of item ${itemId} to the cart...`);
}
```
Now, if you’ve used Next.js before, you might be thinking, “Wait, isn’t that what API routes are for?” And yes, API routes have been the traditional way to handle server stuff. But Server Actions are different. They are more tightly integrated with your components. Instead of separate files and a bunch of fetch calls, you can have the logic right there, next to your UI.
Of course, Server Actions aren’t meant to replace API routes entirely. If you are building a public API or need to talk to external services, API routes are still the way to go. But for those common tasks that are deeply tied to your UI, especially data mutations, Server Actions can be a game-changer. They are like specialized tools in your toolbox, not a replacement for the whole toolbox.
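For contrast, the API-route version of the same cart mutation needs a standalone handler plus an explicit fetch call from the client. A sketch with loosely typed request/response objects (standing in for Next.js’s NextApiRequest/NextApiResponse) so it runs standalone:

```typescript
// pages/api/cart.ts (sketch). Request/response are modeled as plain
// objects here instead of Next.js's real API-route types.
type Req = { method: string; body: { itemId: string; quantity: number } };
type Res = { status: number; payload?: unknown };

export async function cartHandler(req: Req): Promise<Res> {
  if (req.method !== "POST") return { status: 405 };
  const { itemId, quantity } = req.body;
  // ...perform the mutation, e.g. write to a database...
  return { status: 200, payload: { itemId, quantity } };
}

// The client then has to call it explicitly, e.g.:
// await fetch("/api/cart", { method: "POST", body: JSON.stringify({ itemId, quantity }) });
```

That extra plumbing (route file, method check, fetch call, response handling) is exactly what a Server Action folds away.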
How Do Next.js Server Actions Work Under the Hood?
Understanding the underlying mechanisms is the key to using them effectively, and, of course, debugging them when things go wrong.
First up, that "use server" directive. As we touched upon, it’s your way of telling Next.js what code should run on the server. You can either put it at the top of the file, which makes every exported function in that file a Server Action, or you can add it to individual functions. Generally, it's cleaner to keep Server Actions in dedicated files. It makes things more organized. Here is an example of a file with multiple Server Actions:
```typescript
// app/actions/products.ts
"use server";

export async function addProduct(data: ProductData) {
  // ... runs on the server
}

export async function deleteProduct(productId: string) {
  // ... also runs on the server
}
```
Now, when you call a Server Action from a client component, it’s not just a regular function call. Next.js does some magic behind the scenes; it works like an RPC (Remote Procedure Call). Here’s the breakdown:
Your client code calls the Server Action function.
Next.js serializes the arguments you passed, converting them into a format that can be sent over the network.
A POST request is fired off to a special Next.js endpoint, with the serialized data and some extra info to identify the Server Action.
The server receives the request, figures out which Server Action to run, deserializes the arguments, and executes the code.
The server serializes the return value and sends it back to the client.
The client receives the response, deserializes it, and, this is the cool part, automatically re-renders the relevant parts of your UI.
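A toy model of that round trip can make it less magical. Here JSON stands in for Next.js’s richer wire format, and the registry stands in for how Next.js maps an action ID to the function:

```typescript
// Simplified model of the Server Action RPC cycle (illustrative only).
const registry: Record<string, (...args: any[]) => Promise<unknown>> = {
  addItemToCart: async (itemId: string, quantity: number) => ({ itemId, quantity }),
};

export async function invokeAction(actionId: string, args: unknown[]): Promise<unknown> {
  const request = JSON.stringify({ actionId, args });    // client serializes and POSTs
  const { actionId: id, args: a } = JSON.parse(request); // server deserializes
  const result = await registry[id](...a);               // server executes the action
  return JSON.parse(JSON.stringify(result));             // result is serialized back
}
```

The real implementation differs in the details (a richer serializer, stable action IDs emitted at build time), but the shape of the exchange is the same.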
The serialization part is where things get interesting. We’re not just dealing with simple strings and numbers here. What if you need to pass a Date object or a Map? Next.js handles the serialization and deserialization. Here is an example to demonstrate that:
```typescript
// app/actions/data.ts
"use server";

export async function processData(date: Date, data: Map<string, string>) {
  console.log("Date:", date); // Correctly receives the Date object
  console.log("Data:", data); // Correctly receives the Map object
  return { updated: true };
}
```
Server Actions are tightly integrated with React’s rendering. For instance, you can hook a Server Action directly to a form submission using the action attribute. Next.js handles all the messy details for you. Like this:
```typescript
// app/components/MyForm.tsx
"use client";

import { myServerAction } from '@/app/actions';

export default function MyForm() {
  return (
    <form action={myServerAction}>
      {/* Form fields */}
      <button type="submit">Submit</button>
    </form>
  );
}
```
Or, if you want more control, just call the Server Action from an event handler:
```typescript
"use client";

import { myServerAction } from '@/app/actions';

export default function MyComponent() {
  const handleClick = async () => {
    const result = await myServerAction();
    // Handle the result
  };
  return <button onClick={handleClick}>Click Me</button>;
}
```
And the best part? After the Server Action completes, Next.js automatically re-renders the parts of your UI that might have changed because of it. No more manually fetching data or updating state after a mutation; it just works. Thanks to progressive enhancement, if the user doesn’t have JavaScript enabled (or it’s still loading), forms wired to Server Actions work as regular HTML forms. Once JS is available, Next.js enhances them.
Now, Server Actions aren’t a magic bullet, and I’ve run into a few gotchas, which we’ll get to later. But they do streamline a lot of the tedious work involved in client-server communication.
Why Do Server Actions Matter in the Current Landscape?
Let’s be real, the world of web development is constantly throwing new things at us. So, why should we care about Server Actions? Here’s the deal: building modern web apps is complicated. We want these rich, interactive experiences, but managing the communication between the client and server can be a real pain. We often end up spending more time on the plumbing — API routes, data fetching, state management — than on the actual features users care about.
Server Actions tackle this problem head-on. By letting us put server-side code right in our React components, they drastically simplify things. Think about it: no more separate API route files, no more manually fetching data after a mutation. Your code becomes more concise and easier to follow, especially for smaller teams or solo developers. I’ve found that on smaller projects, Server Actions have cut down development time significantly.
And it’s not just about convenience. Server Actions can also boost performance. By reducing those back-and-forth trips between the client and server, especially for things like updating data, we can make our apps feel snappier. Fewer network requests mean faster loading times, and that’s a win for user experience. Plus, they play nicely with Next.js’s caching features, so you can optimize things even further.
Security is another big win. With Server Actions, sensitive operations (database queries, API calls with secret keys, etc.) stay on the server. That’s a huge relief in today’s world of increasing security threats. Also, they are always invoked with a POST request.
Server Actions are also part of a bigger trend. Full-stack frameworks like Next.js are blurring the lines between frontend and backend. Server Actions are a natural step in that direction, letting developers handle more of the application lifecycle without needing to be a backend guru. This doesn’t mean specialized roles are going away, but it does mean that full-stack developers can be more efficient and productive.
Now, I’m not saying Server Actions are perfect or that they should replace every other way of doing things. But they do offer a powerful new approach, especially for data-heavy applications. They’re a significant step forward for Next.js and, in my opinion, for full-stack development in general.
The Caveats and Criticisms of Server Actions: A Reality Check
Like any technology, they have their downsides, and it’s important to go in with eyes wide open. I’ve learned a few things the hard way, and I’m here to share them.
One of the biggest criticisms is the potential for tight coupling. When your server-side code lives right inside your components, it’s easy to end up with a less modular, harder-to-maintain codebase. Changes to your backend logic might force you to update your frontend, and vice versa. For complex projects or teams that need a strict separation of concerns, this can be a real problem. You need to be disciplined and organized to prevent your codebase from becoming a tangled mess.
Then there’s the learning curve. While the basic idea of Server Actions is simple, mastering all the nuances — serialization, caching, error handling — takes time. You need to really understand the difference between client and server code execution and how to structure your actions for optimal performance and security. The mental model is different, and it takes some getting used to.
Debugging can also be a pain. When something goes wrong in a Server Action, you can’t just rely on your trusty browser dev tools. You’ll need to get comfortable with server-side debugging techniques — logging, tracing, and so on. Next.js has improved its error messages, but it’s still more complex than debugging client-side code.
Performance is generally a plus with Server Actions, but if you overuse them, you can actually make things worse. Every Server Action call is a network request. Too many requests and your app will feel sluggish. Next.js’s caching helps, but you need to be strategic about it. They’re great for handling data mutations but might not be ideal for complex queries or aggregations.
Finally, there’s the issue of vendor lock-in. Server Actions are a Next.js thing. If you decide to move away from Next.js in the future, you’ll have to rewrite all your Server Actions. That’s something to consider, especially if you’re worried about long-term flexibility.
So, are Server Actions worth it despite these drawbacks? In my opinion, yes, but they’re not a magic solution. You need to use them thoughtfully and understand their limitations. They’re a powerful tool, but like any tool, they can be misused. They are best used for data mutations and operations that are tightly coupled to your UI and need to be on the server.
When to use Server Actions
Event handling / Performing Database Mutations
Server actions allow you to perform server operations and database mutations securely without exposing database logic or credentials to the client. They drastically reduce and simplify your code because they remove the need to write a specific API route for your operations.
```typescript
'use server';

export async function handleServerEvent(eventData: any) {
  // Process any server event
  const res = await someAsyncOperation(eventData);
  if (!res.ok) {
    throw new Error('Failed to handle event');
  }
  return { res, message: 'Server event handled successfully' };
}
```
Handling form submissions
Similar to the first point, Server Actions are particularly useful for processing form inputs that need server-side handling. They provide a straightforward way to handle form data on the server, ensuring data validation and integrity without exposing the logic to the client and without having to implement elaborate API endpoints.
```typescript
'use server';

export async function handleFormSubmit(formData: FormData) {
  const name = formData.get('name') as string;
  const email = formData.get('email') as string;
  const message = formData.get('message') as string;

  // Process the form data
  const res = await saveToDatabase({ name, email, message });
  if (!res.ok) {
    throw new Error('Failed to process the form data');
  }
  return { res, message: 'Form submitted successfully' };
}
```
Fetching data from client components
Server Actions can also be useful for quick data fetching, where a clean developer experience (DX) is crucial.
```typescript
// Simple server action to fetch data from an API
'use server';

export async function fetchData() {
  const res = await fetch('https://api.example.com/data');
  if (!res.ok) {
    throw new Error('Failed to fetch data');
  }
  return res.json();
}
```
Working with Next.js and its Server Components, you already have a very practical way of using server-side code to fetch data and pre-render the page on the server.
But Server Actions now introduce a brand new way to also do that from your client-side components! They can simplify the fetch process by tying data access directly to the component that needs it, without the need for useEffect hooks or client-side data-fetching libraries. Moreover, when using TypeScript, Server Actions make typing seamless because everything is within the same function boundary, providing a great developer experience overall.
Potential Pitfalls with Server Actions
- Don’t use server actions from your server-side components
The simplicity and great DX of Server Actions could make it tempting to use them everywhere, including from a server-side component, and it would work! However, it doesn't really make any sense. Indeed, since your code is already running on the server, you already have the means to fetch anything you need and provide it as props to your page. Using Server Actions here would delay data availability as it causes extra network requests.

For client-side fetching, Server Actions might also not be the best option. First of all, they always use POST requests, so they cannot be cached automatically like a GET request. Secondly, if your app needs advanced client-side caching and state management, tools like TanStack Query (React Query) or SWR are going to be far more effective. That said, I haven’t tested it myself, but it’s apparently possible to combine both and use TanStack Query to call your Server Actions directly.
- Server Actions Do Not Hide Requests
Be extremely careful when using server actions for sensitive data. Server Actions do not hide or secure your API requests. Even though Server Actions handle server-side logic, under the hood, they are just another API route, and POST requests are handled automatically by Next.js.
Anyone can replicate them by using a Rest Client, making it essential to validate each request and authenticate users appropriately. If there is sensitive logic involved, ensure you have proper authentication and authorization checks within your Server Actions.
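Here is a sketch of what that defensive check can look like inside an action. The 'use server' directive is omitted so the snippet runs standalone, and getSession and deleteRow are hypothetical helpers standing in for your auth layer and data access:

```typescript
type Session = { userId: string; role: "admin" | "user" } | null;

// Hypothetical stand-in for reading and verifying the auth cookie.
async function getSession(): Promise<Session> {
  return { userId: "u1", role: "admin" };
}

export const deleted: string[] = [];
// Hypothetical stand-in for a database mutation.
async function deleteRow(id: string): Promise<void> {
  deleted.push(id);
}

export async function deletePost(postId: string): Promise<{ ok: boolean }> {
  const session = await getSession();
  // Never trust the caller: anyone can hit this endpoint with a REST client.
  if (!session || session.role !== "admin") return { ok: false };
  await deleteRow(postId);
  return { ok: true };
}
```

The key point is that the authorization check lives inside the action itself, not in the UI that happens to call it.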
Note: Additionally, consider using the very popular next-safe-actions package, which can help secure your actions and also provide type safety.
- Every Action Adds Server Load
Using Server Actions might feel convenient, but every action comes at a cost. The more you offload onto the server, the greater the demand for server resources. You may inadvertently increase your app’s latency and cloud cost by using Server Actions when client-side processing would suffice. Lightweight operations that could easily run on the client, like formatting dates, sorting data, or managing small UI state transitions, should stay on the client side to keep your server load minimal.
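For example, currency formatting is pure client-side work; wrapping something like this in a Server Action would add a network round trip for nothing:

```typescript
// Formatting belongs on the client: no server resources, no added latency.
export const formatPrice = (cents: number): string =>
  new Intl.NumberFormat("en-US", { style: "currency", currency: "USD" })
    .format(cents / 100);

// formatPrice(1999) → "$19.99"
```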
- Classic API Routes Might Be More Appropriate
There are cases when sticking with traditional API routes makes more sense, particularly when you need your API to be accessible to multiple clients. If you need the same logic for both your web app and a mobile app, duplicating it between a Server Action and an API route doubles the work and maintenance. In these situations, a centralized API route that all clients can call is the better solution: it avoids redundancy and ensures consistency across your different clients.
- Next.js Dependency and the Moving Target
It’s important to note that Server Actions are closely integrated with Next.js, and both Next.js and React are evolving rapidly. This pace of development can introduce compatibility issues or breaking changes as these frameworks continue to update. If your application prioritizes stability and long-term support, relying heavily on cutting-edge features like Server Actions could result in unwanted technical debt. Weighing the stability of traditional, well-established methods against the appeal of new features is always advisable.
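One common way to avoid that duplication is to keep the core logic in a plain, framework-agnostic function and let both the Server Action and the API route wrap it. A sketch with hypothetical names:

```typescript
// lib/orders.ts — framework-agnostic core logic (hypothetical example)
type OrderInput = { items: { price: number; qty: number }[] };

// Pure business logic: callable from a Server Action, an API route, or tests.
function computeOrderTotal(order: OrderInput): number {
  return order.items.reduce((sum, i) => sum + i.price * i.qty, 0);
}

// Both entry points then stay as thin wrappers (sketched as comments):
//   'use server';  async function placeOrder(o: OrderInput) { return computeOrderTotal(o); }
//   app/api/orders/route.ts:  async function POST(req: Request) { /* parse body, call computeOrderTotal */ }
```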

Next.js Server Actions Mistakes
So, you have jumped on the Server Actions bandwagon. And why wouldn’t you? They are fantastic. The feeling of writing a simple async function right in your component and having it just work is pure magic.
It simplifies our code, reduces boilerplate, and feels like the future of data mutations in Next.js.
But here’s the thing: with great power comes great responsibility. It’s incredibly easy to fall into traps that silently kill your app’s performance and user experience.
I have made these mistakes myself, and I have seen them in code reviews.
Let’s break down the five biggest performance-killers, and more importantly, how to fix them.
- Not Using useTransition for Pending States
This is the most common rookie mistake (I was this rookie). You call your Server Action and… nothing happens. The user clicks the Submit button again and again, thinking it didn’t work. Behind the scenes, the action is running, but the UI provides zero feedback.
The Problem: No Immediate Feedback
Without wrapping your action in useTransition, you have no easy way to show a loading spinner, disable the button, or otherwise indicate that work is in progress. This leads to a poor user experience and often to duplicate submissions.
The Lazy Code:
// app/components/AddToCart.jsx
'use client';

import { addToCart } from '@/app/actions';

export function AddToCart({ productId }) {
  return (
    <button onClick={() => addToCart(productId)}>
      Add to Cart
    </button>
    // User has no idea if it worked or is working!
  );
}
The Performance-Friendly Fix:
// app/components/AddToCart.jsx
'use client';

import { useTransition } from 'react';
import { addToCart } from '@/app/actions';

export function AddToCart({ productId }) {
  const [isPending, startTransition] = useTransition(); // 👈 Hook here

  const handleClick = () => {
    startTransition(() => { // 👈 Wrap the action
      addToCart(productId);
    });
  };

  return (
    <button onClick={handleClick} disabled={isPending}>
      {isPending ? 'Adding...' : 'Add to Cart'}
    </button>
    // 👍 User now gets clear feedback!
  );
}
By using useTransition, you make the interface responsive and prevent the user from spamming your server action.
- Skipping Validation on the Client
Server Actions run on the server. This is where your final, absolute validation should happen. However, waiting for a full network round-trip to tell the user they forgot to fill in a required field is a waste of time and resources.
The Problem: Unnecessary Network Requests
You are sending a request to the server just for it to immediately send back a validation error. This consumes your server’s bandwidth and processing power for something that could have been caught instantly on the client.
The Lazy Code:
// A form with no client-side validation
<form action={submitForm}>
  <input name="email" type="email" />
  <button type="submit">Sign Up</button>
</form>
The Performance-Friendly Fix:
// Using simple HTML5 validation
<form action={submitForm}>
  <input name="email" type="email" required /> {/* 👈 `required` attribute */}
  <button type="submit">Sign Up</button>
</form>
// Or, using a more robust hook (pseudo-code)
'use client';

import { useForm } from 'react-hook-form';
import { zodResolver } from '@hookform/resolvers/zod';
import { z } from 'zod';
import { submitForm } from './actions';

// The schema the resolver checks against
const validationSchema = z.object({
  email: z.string().email(),
});

export function MyForm() {
  const { register, handleSubmit, formState: { errors, isSubmitting } } = useForm({
    resolver: zodResolver(validationSchema), // Catches errors on client first
  });

  const onSubmit = async (data) => {
    // This action only runs if client-side validation passes!
    await submitForm(data);
  };

  return (
    <form onSubmit={handleSubmit(onSubmit)}>
      <input {...register('email')} />
      {errors.email && <p>{errors.email.message}</p>}
      <button type="submit" disabled={isSubmitting}>Sign Up</button>
    </form>
  );
}
Catch errors on the client to save precious server cycles for the real work.
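Even without a form library, a tiny pre-flight check on the client avoids the wasted round-trip. A minimal sketch, with a deliberately simple (illustrative) email rule — the server must still re-validate:

```typescript
type SignUpFields = { email: string; password: string };

// Runs instantly in the browser, before the Server Action is ever called.
// This only saves the obvious round-trips; final validation stays on the server.
function validateSignUp(fields: SignUpFields): string[] {
  const errors: string[] = [];
  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(fields.email)) errors.push("Invalid email");
  if (fields.password.length < 8) errors.push("Password must be at least 8 characters");
  return errors; // an empty array means the submit can proceed
}
```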
- Forgetting to Add revalidatePath or revalidateTag
This mistake doesn’t make your app slow; it makes it wrong. Server Actions often change data (e.g., adding a todo, updating a name). If you don’t tell Next.js to refetch the data that was changed, your UI will show stale information.
The Problem: Stale Data After Mutation
The user adds a new item, the action succeeds, but the list doesn’t update. They refresh the page to see their changes. This breaks the immersive “app-like“ feel.
The Lazy Code:
// app/actions/todos.js
'use server';

import { createTodo } from './db-lib';

export async function addTodo(formData) {
  const todo = await createTodo(formData);
  // ❌ The `/` page still has the old list of todos!
  return todo;
}
The Performance-Friendly Fix:
Always revalidate the data cache after a mutation.
// app/actions/todos.js
'use server';

import { createTodo } from './db-lib';
import { revalidatePath } from 'next/cache'; // 👈 Import this!

export async function addTodo(formData) {
  const todo = await createTodo(formData);
  revalidatePath('/'); // 👈 Tell Next.js to re-fetch the home page
  // or revalidateTag('todos') if using fetch tags
  return todo;
}
This ensures your UI is always in sync with your database without requiring a full page reload.
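The failure mode is easy to picture with a toy model: a keyed render cache that keeps serving old output until something deletes the entry. This is a simplification for intuition only, not how Next.js's cache is actually implemented:

```typescript
// Toy model: a path-keyed render cache in front of a data store.
const todos: string[] = [];
const pageCache = new Map<string, string>();

function renderPage(path: string): string {
  // Served from cache until the entry is invalidated.
  if (!pageCache.has(path)) pageCache.set(path, `todos: ${todos.join(", ")}`);
  return pageCache.get(path)!;
}

function revalidate(path: string): void {
  pageCache.delete(path); // the next render re-reads fresh data
}

renderPage("/");               // warm the cache with an empty list
todos.push("buy milk");        // mutate the data without revalidating...
const stale = renderPage("/"); // ...the page still shows the old, empty list
revalidate("/");
const fresh = renderPage("/"); // now it reflects the mutation
```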
- Importing Heavy Client-Side Libraries in Server Actions
Server Actions are server-side code. They run in the Node.js environment. If you import a massive client-side library, like a full PDF generator, a browser-specific charting library, or even a component, you will bloat your server bundle and likely cause runtime errors.
The Problem: Massive Server Bundle & Runtime Crashes
You’ll see errors like window is not defined or document is not defined because those browser APIs don’t exist in Node. Even if it doesn’t crash, you are shipping a huge amount of unused JS to your server.
The Lazy Code:
// app/actions/generate-pdf.js
'use server';

import { heavyPdfLibrary } from 'client-side-pdf-lib'; // ❌ This will break!

export async function generatePdf(data) {
  // `heavyPdfLibrary` uses `window`, so this will throw on the server.
  return heavyPdfLibrary.generate(data);
}
The Performance-Friendly Fix:
Use server-side compatible libraries and be ruthless about your imports.
// app/actions/generate-pdf.jsx (JSX in an action file needs a .jsx/.tsx extension)
'use server';

// ✅ Choose a server-side compatible library
import { pdf } from '@react-pdf/renderer';
import { MyDocument } from './my-document'; // assumes a <MyDocument> defined elsewhere

export async function generatePdf(data) {
  // Use the server-safe library
  const pdfStream = await pdf(<MyDocument data={data} />).toBuffer();
  return pdfStream;
}
Always check if a library is designed for the server. When in doubt, look for ssr or node in its documentation.
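The underlying reason those crashes happen is simply that browser globals do not exist in Node. A small, runnable illustration of detecting the environment (the `assertServerSafe` helper is a made-up name, not an API):

```typescript
// In Node (where Server Actions run), the `window` global does not exist —
// which is exactly why browser-only libraries throw "window is not defined".
const isServer =
  typeof (globalThis as { window?: unknown }).window === "undefined";

// Made-up helper: fail fast with a clear message instead of a deep crash.
function assertServerSafe(libName: string): void {
  if (!isServer) {
    throw new Error(`${libName} should only be used in a server environment`);
  }
}
```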
- Making Sequential Instead of Parallel Calls
This is a classic async/await mistake that’s easy to make anywhere, including Server Actions. If you have multiple independent operations, waiting for each one to finish before starting the next is a huge waste of time.
The Problem: Adding Unnecessary Delay
If each operation takes 100ms, doing three of them sequentially takes at least 300ms. Doing them in parallel takes at most ~100ms.
The Lazy Code:
// app/actions/checkout.js
'use server';

export async function finalizeOrder(orderId) {
  // These operations don't depend on each other...
  const auditLog = await writeToAuditLog(orderId);    // Waits for this to finish...
  const email = await sendConfirmationEmail(orderId); // ...then waits for this...
  const analytics = await updateAnalytics(orderId);   // ...and finally this.
  return { auditLog, email, analytics };
}
// Total time: ~300ms+
The Performance-Friendly Fix:
Use Promise.all to run independent operations in parallel.
// app/actions/checkout.js
'use server';

export async function finalizeOrder(orderId) {
  // Kick off all three promises at once
  const [auditLog, email, analytics] = await Promise.all([
    writeToAuditLog(orderId),
    sendConfirmationEmail(orderId),
    updateAnalytics(orderId),
  ]);
  return { auditLog, email, analytics };
}
// Total time: ~100ms (the slowest single operation)
This simple change can dramatically reduce the response time of your Server Actions.
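The difference is easy to verify with stub operations standing in for the real work (the 100 ms delays and operation names are invented for the demo):

```typescript
const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

// Stub for any independent async operation (DB write, email, analytics...).
async function op(name: string, ms: number): Promise<string> {
  await sleep(ms);
  return name;
}

async function demo(): Promise<{ results: string[]; elapsedMs: number }> {
  const start = Date.now();
  // All three start immediately; total time ≈ the slowest one, not the sum.
  const results = await Promise.all([
    op("audit", 100),
    op("email", 100),
    op("analytics", 100),
  ]);
  return { results, elapsedMs: Date.now() - start };
}
```

`Promise.all` also preserves the order of results, so destructuring stays as readable as the sequential version.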
Real-World Example: Add to Cart
Let’s see how Server Actions can be applied in a real-world scenario. Imagine we’re building an e-commerce platform, and we need a feature to add products to a shopping cart. Here’s how we could implement it using a Server Action, incorporating some crucial best practices along the way.
// app/actions.ts
"use server";

import { db } from "@/lib/db"; // Your database client
import { revalidatePath } from "next/cache";

export async function addItemToCart(userId: string, productId: string, quantity: number) {
  try {
    // Input validation
    if (!userId || !productId || quantity <= 0) {
      throw new Error("Invalid input data");
    }

    // Check for product existence
    const product = await db.product.findUnique({
      where: { id: productId },
    });
    if (!product) {
      throw new Error("Product not found");
    }

    // Handle the cart item
    const existingCartItem = await db.cartItem.findFirst({
      where: { userId, productId },
    });
    if (existingCartItem) {
      await db.cartItem.update({
        where: { id: existingCartItem.id },
        data: { quantity: existingCartItem.quantity + quantity },
      });
    } else {
      await db.cartItem.create({
        data: { userId, productId, quantity },
      });
    }

    // Cache revalidation to reflect the changes on the pages
    revalidatePath(`/products/${productId}`);
    revalidatePath(`/cart`);

    return { success: true, message: "Item added to cart" };
  } catch (error) {
    console.error("Error adding item to cart:", error);
    // Handle errors gracefully
    return { success: false, message: "Failed to add item to cart" };
  }
}
// app/components/AddToCartButton.tsx
"use client";

import { useState } from "react";
import { addItemToCart } from "@/app/actions";
import { useSession } from "next-auth/react";

export default function AddToCartButton({ productId }: { productId: string }) {
  const { data: session } = useSession();
  const [loading, setLoading] = useState(false);
  const [message, setMessage] = useState("");

  const handleClick = async () => {
    // Guard: the action expects a string userId, so bail out if unauthenticated
    if (!session?.user?.id) {
      setMessage("Please sign in first");
      return;
    }
    setLoading(true);
    setMessage("");
    // Call the Server Action, passing data and handling the result
    const result = await addItemToCart(session.user.id, productId, 1);
    setLoading(false);
    if (result.success) {
      setMessage(result.message);
      // or other side effects
    } else {
      setMessage("Error adding item to cart");
    }
  };

  return (
    <div>
      <button onClick={handleClick} disabled={loading}>
        {loading ? "Adding..." : "Add to Cart"}
      </button>
      {message && <p>{message}</p>}
    </div>
  );
}
This example demonstrates a few key best practices:
Input Validation: The Server Action validates its input to prevent errors and security vulnerabilities.
Error Handling: The try...catch block ensures that errors are handled gracefully and informative messages are returned to the client.
Database Interaction: We use a hypothetical database client (db) to interact with the database. In a real app, you'd likely use an ORM like Prisma.
Cache Revalidation: We use revalidatePath to keep the product and cart pages up-to-date.
UI Logic Separation: The AddToCartButton component handles the UI and user interactions, keeping the Server Action focused on data and server-side logic.
This streamlined example showcases how Server Actions can simplify common e-commerce tasks while adhering to essential best practices. Remember to modularize your actions, keep UI logic separate, and always validate user inputs. While this provides a good starting point, more complex scenarios might require more sophisticated error handling, caching strategies, and database interactions.
Source: https://medium.com/@sureshdotariya/next-js-15-mastery-series-part-2-server-actions-a7939ca5514e
What “use client“ Really Does in React or Next.js
React’s 'use client' directive might look like a simple annotation at the top of your file, but it represents a profound shift in how we structure applications. Ever since Next.js 13 introduced the App Router and React Server Components, developers have been grappling with this two-word directive. On the surface, 'use client' marks a component to run in the browser. Under the hood, however, it opens a gateway between the server and client environments in a way that’s both elegant and technically sophisticated. In fact, React core team member Dan Abramov argues that the invention of 'use client' (and its counterpart, 'use server') is as fundamental as the introduction of async/await or even structured programming itself. That is a bold claim for a little string at the top of a file. So, what does 'use client' really do? And why is it so important for the future of React?
From Server to Client: Bridging Two Worlds
To understand the meaning of 'use client', it helps to consider the context in which it emerged. In the Next.js 13 App Router, components are server-first by default. This means that if you write a component without any directive, Next.js will render it on the server (producing static HTML) and send that HTML to the browser without any client-side JavaScript for that component. This is great for performance: your pages can load with minimal JS, but it poses challenges when you do need interactivity or state. How do we tell React that a certain component (say, a counter button or a dynamic form) needs to be interactive and run in the browser? That’s exactly what 'use client' is for.
When you add “use client“ to the top of the file (above any imports), you are declaring that this module and everything it imports should be treated as a Client Component, meaning it will execute on the client side and can use interactive features like state, effects, and browser APIs. In essence, “use client“ draws a boundary line in your app’s module graph, on one side of that line, components run on the server; on the other side, components run on the client. This directive flips the historical default. In traditional React apps (and in Next.js’s old Page Router), every component was a client-side component by default, and you opted into server rendering. Now, with server components, we default to running on the server and explicitly opt into the client side for interactive parts.
Crucially, 'use client' is more than a marker for “put this code in the browser”. It serves as a bridge between two environments: a way for the server to include client-run code in the app’s output in a controlled, declarative manner. Dan Abramov describes 'use client' as essentially a typed <script> tag. Just as a script tag in HTML tells the browser to execute some bundled JavaScript, 'use client' tells React’s tooling that “this module is UI code that the browser needs”. The server can import that module and hand off rendering to it, much like opening a door from the server world into the client world. In other words, 'use client' allows the server to reach into the client bundle and say, “I need this component to come alive in the browser.” It’s a formal, first-class way to intertwine server-rendered content with client-side interactivity.
How does 'use client' work under the hood?
The technical mechanics of “use client“ are fascinating. When you mark a module with “use client“, you are signaling to the build system and to React that this file (and its dependencies) belong in the client bundle. If a Server Component tries to import something from that file, the server won’t import the component’s implementation directly. Instead, it imports a stub or reference to it. Think of it like a placeholder or a token that stands in for the real component. The server-rendered output will include a pointer to that client component rather than the component’s HTML, indicating, “there is a client component here, which will be rendered on the client side“. React’s server component payload (often a special JSON behind the scenes) might include an identifier for the component, such as a module path and export name. For example, the server output could contain something like:
{
  "type": "/src/frontend.js#LikeButton",
  "props": { "postId": 42, "likeCount": 8, "isLiked": true }
}
This is not literal HTML, but a description. It says there should be a <LikeButton> there with those props, and it references the component by module (/src/frontend.js) and name (LikeButton). The React runtime uses this to generate actual script tags for the browser. When the response reaches the client, the framework knows it needs to load the /src/frontend.js module (the file where LikeButton is defined) as a separate JavaScript chunk. It injects a <script src="frontend.js"></script> for that file, and once loaded, it hydrates the component by calling LikeButton({...props}) on the client. In essence, the “use client“ directive allows the server to embed a reference to a client-side component in its output, and that reference is resolved into a real interactive UI in the browser.
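As a toy model of that resolution step: the payload carries a `modulePath#exportName` string, and the client maps it to real component code once the chunk has loaded. This is a drastic simplification of what the bundler and React runtime actually do, with made-up names:

```typescript
// Client-side module registry, populated as chunks load (simplified).
type Component = (props: Record<string, unknown>) => string;

const moduleRegistry: Record<string, Record<string, Component>> = {
  "/src/frontend.js": {
    LikeButton: (props) => `<button>❤ ${String(props.likeCount)}</button>`,
  },
};

// Resolve an RSC-style reference like "/src/frontend.js#LikeButton".
function resolveClientRef(ref: string): Component {
  const [modulePath, exportName] = ref.split("#");
  const mod = moduleRegistry[modulePath];
  if (!mod || !mod[exportName]) throw new Error(`Unknown client ref: ${ref}`);
  return mod[exportName];
}

const LikeButton = resolveClientRef("/src/frontend.js#LikeButton");
const html = LikeButton({ postId: 42, likeCount: 8, isLiked: true });
```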
One important nuance: marking a component with the 'use client' directive does not mean it won’t be rendered on the server at all. In fact, Next.js will still pre-render the initial HTML for Client Components in many cases (just as it did in the old Pages architecture) and then hydrate them on the client. The 'use client' directive simply ensures that the component’s JavaScript is sent to the browser and that React knows to hydrate it. That means you don’t lose the SEO and performance benefits of server-side rendering by using a Client Component; you are just opting into sending additional JS for interactivity. A common rookie mistake is thinking that adding 'use client' makes your entire page purely client-side rendered. In reality, a Client Component in Next.js 13+ is usually still rendered to HTML on the server first, then made interactive on the client, which is exactly how React pages have traditionally worked. The big difference is that now you have a choice: parts of the page with no 'use client' stay purely server-rendered (no hydration needed at all), and parts with 'use client' get the two-step treatment of SSR + hydration.
Because of how the boundaries work, you typically only need to put 'use client' at the top of entry points for interactive islands of your application. Once you mark a component as a Client Component, all of its children and imports automatically become part of the client bundle as well. You do not need 'use client' in every file that contains a hook or browser API call. For example, if you create a Counter.tsx component with 'use client' (so it can use useState and handle clicks) and then import it into a parent server-rendered page, that Counter and anything it imports will be bundled for the client. If Counter itself renders other components passed in as children, those can actually be Server Components if they don’t need interactivity: React will seamlessly render them on the server and slot their HTML into the client component’s output before hydration. This flexibility can be mind-bending. You can have a Server Component inside a Client Component, which is inside a Server Component, and so on. The framework’s job is to sort out which parts run where. As developers, our job is just to label the boundaries correctly. And thanks to 'use client', those boundaries are explicit and easy to reason about.
Why 'use client' Matters (More Than You Might Think)
The introduction of 'use client' has significant implications for how we architect React applications, especially in frameworks like Next.js. First and foremost, it enables fine-grained performance optimization. By defaulting everything to server-rendered and then opting specific pieces into client-side hydration, we send far less JavaScript to the browser than a traditional SPA would. A page that might have previously bundled the logic of every component can now ship only the code for truly interactive parts. This “eat your cake and have it too” approach, full server rendering for most of the UI, and rich interactivity where needed, is essentially an implementation of the elusive ideal of progressive hydration or the so-called “islands architecture.” You can think of each 'use client' component as an island of interactivity amid a sea of purely server-rendered HTML. If a part of your UI doesn’t need interactivity, simply leave out the directive and it remains an island of static content (no hydration overhead). This leads to better loading performance and less JavaScript bloat on the client. Next.js 13+’s architecture actively encourages this. It makes you consciously add 'use client' only where necessary, nudging you into keeping most of your UI logic on the server by default.
Second, 'use client' improves the developer experience and code maintainability in a full-stack React app. In the past, to make a client-side interactive widget that also fetched or updated data on the server, you had to write a lot of boilerplate. Define an API route or endpoint, call fetch from the client, handle state for loading or errors, and so on. Now, consider the new world with Server and Client Components. The server can render a component and pass it data directly as props, and the client component can, in turn, directly call back to server functions (using 'use server', which goes hand-in-hand with 'use client'). In Dan Abramov’s Like button example, instead of manually writing API endpoints for “like” and “unlike” and then writing client code to fetch those, you can simply write a server function likePost and import it into your client component with 'use server'. React will handle turning that into an API call for you. On the flip side, you write a LikeButton component with 'use client' and import it into your server-rendered UI; React will handle sending that component’s code to the browser and hydrating it. The connection is expressed through the module system (via import/export), not through ad-hoc API contracts. This means your editor and type system can understand the relationship. You can navigate to definitions, get type checking across the boundary, and treat the client–server interaction as a function call rather than a network call. As Abramov puts it, the 'use client' import “expresses a direct connection within the module system” between the part of the program that sends the <script> (server) and the part that lives inside that script (client), making it fully visible to tools and type-checkers. In practical terms, this can reduce bugs and make code more discoverable compared to the old way of string-typed API endpoints.
Using 'use client' also forces a clearer separation of concerns between your purely presentational/server-driven components and your interactive ones. In a large codebase, this can be a healthy discipline. You might designate most of a page (navigation bars, content sections, data displays) as server-rendered and free of client-side logic, and only sprinkle a few 'use client' components for things like forms, modals, or widgets that truly need it. Those client components can still leverage server-side data by receiving props or calling server actions, but they won’t inadvertently drag the entire page’s code into the client bundle. Many developers, upon first migrating to Next 13+, felt it was annoying to add 'use client' everywhere they used hooks. But this “annoyance” is intentional. It makes you stop and consider “Does this code really need to run on the client?”. If not, perhaps it could be refactored to a server component, leaving just a tiny client component for the interactive bit. In time, teams find that this leads to smaller, more purpose-driven client modules and a more robust rendering strategy. It’s a new mental model, but one that aligns with the performance needs of modern apps.
One caution: because 'use client' scopes an entire module to the client, you do have to be mindful about what you import inside a client module. Anything you import into a 'use client' file becomes part of the client-side bundle (unless it’s a purely type import or something that gets compiled away). This means you wouldn’t want to import a Node-only library or a huge server-only module inside a client component. It either won’t work (if it relies on Node APIs) or it will bloat your bundle. Next’s compiler will usually warn or error if you try to import server-only code into a client module. In short, keep client components focused and lean. Use them for UI and interactivity, not heavy data fetching or processing (those belong on the server side). Fortunately, the system makes this natural: heavy data fetching is easier to do in Server Components, and they can feed the results into Client Components as props. The end result is an app that is modularized by environment: server logic and rendering over here, client logic and interaction over there, both living in the same codebase but clearly delineated.
The Future of 'use client' and the React Ecosystem
It’s early days for React Server Components and the 'use client' directive, but the impact is already being felt. As of Next.js 13+ (and the evolving React 18+ ecosystem), we’re seeing a rethinking of how UI and backend logic intermingle. The success of these directives could influence other frameworks and the broader web platform in interesting ways. Dan Abramov suggests that the ideas behind 'use client'/'use server' are not limited to React; they are a generic approach to distributed applications, essentially a form of RPC (remote procedure call) built into the module system. Imagine a future where your codebase seamlessly spans multiple runtimes (web browser, server, maybe even mobile or worker contexts), with the boundaries declared in the code and handled by compilers and bundlers. The React team expects these patterns to “survive past React and become common sense” in web development. It’s a bold vision, a world where sending code to the client or calling into the server is as straightforward as calling a function, with tools taking care of the messy details of networking and serialization.
In practical terms, the ecosystem is already adapting. Libraries that provide React components are starting to consider how they’ll work in a Server Components world. For example, a date picker or charting library might mark its components with 'use client' so that if you use them in a Next 13+ app, the library’s code is correctly included on the client side. Tooling is also improving. Since these directives are just string literals, they rely on build tooling to do the right thing. We might see better ESLint rules or even language support to catch mistakes like forgetting to add 'use client' when needed, or conversely, adding it unnecessarily. There’s active discussion in the community about how to make the developer experience smoother. Could future React versions infer 'use client' automatically for certain components based on usage of hooks? Possibly, though the React team seems to prefer explicit boundaries for now, as automation might be error-prone. What’s more likely is continued guidance and patterns for structuring apps. Over time, using 'use client' may feel as natural as using useState, just another part of React’s vocabulary.
We should also watch for how other frameworks respond. The idea of partial hydration and islands of interactivity isn’t unique to React. Frameworks like Astro, Qwik, and Marko have been exploring similar territory, each with their own spin. React’s approach with 'use client' and 'use server' is distinctive in that it integrates deeply with JavaScript modules and bundlers, rather than introducing a completely new DSL. This means it could be adopted beyond React if standardized, for instance, a future build tool could allow any JavaScript project to designate certain modules for the client or server environment using similar directives. It’s not hard to imagine the concept spreading: the benefits of clarity, performance, and type safety at the boundary are not something any full-stack developer would want to pass up. On the other hand, React’s solution is opinionated: it assumes a single unified project that produces both server and client artifacts, which fits frameworks like Next.js perfectly. Not every project will have that shape, so there will continue to be alternatives and variations.
In the short term, we can expect the React community to establish best practices around 'use client'. Already, the recommendation is to use it sparingly and purposefully. The ideal React Server Components app uses 'use client' only in components that truly need it, and sometimes that means writing a small wrapper component just to hold some client state or effect while the rest stays on the server. This granularity might feel like extra work, but it pays off in load performance and gives you a clearer understanding of your app’s runtime behavior. There’s also an educational aspect: understanding 'use client' inevitably means understanding how the client–server continuum works in a React app, which makes one a better full-stack developer. It forces you to confront where state lives, where data comes from, and what code runs where. Those who embrace this mindset are likely to build apps that scale better and are easier to debug across environments.
The Yin-Yang of modern React architecture. Many have begun to view their React apps as a yin-yang symbol of server and client, two complementary halves of a single whole. The 'use client' and 'use server' directives are the two gates that let data and code flow between these halves in a controlled way, each gate opening in one direction. 'use server' lets the client safely invoke server-side functions (essentially turning an import into a network call), while 'use client' lets the server include interactive client-side UI (turning an import into a script reference). Together, they allow “seamless composition across the network”, meaning you can build features that feel like one cohesive program even though under the hood they involve browser code talking to server code. It’s a powerful abstraction that preserves practicality rather than hiding everything away: you still know which parts run where, but you no longer have to hand-stitch the plumbing every time you cross the boundary.
In conclusion, 'use client' is far more than a mere hint to “do this in the browser.” It is a cornerstone of React’s new architecture, enabling a new level of integration between server and client logic while preserving performance and clarity. Its importance will only grow as more of the React ecosystem adopts Server Components. Yes, it requires learning a new way of thinking about React, one where you occasionally have to pop open a different mental toolbox for client versus server concerns, but the payoff is an application that can be both highly performant and richly interactive. For busy developers working on complex apps, 'use client' offers a way to write code that is simultaneously efficient and expressive, bridging worlds that used to be separate. As we continue to refine these patterns, it’s likely that in a few years using 'use client' (and 'use server') will feel as natural as writing an async function. It’s a small change with big implications, and it’s pointing the way toward a future in which the line between front-end and back-end code is blurred by design, not by accident. In that future, the phrase “full-stack developer” might take on a more literal meaning, and 'use client' will have been one of the keys that opened the door.
Next.js 15 App Router — Architecture and Sequence Flow
Overview of Server vs. Client Components
Next.js App Router leverages React Server Components (RSC) by default for improved performance. By default, all page and layout files are Server Components, meaning they render on the server and their code is not sent to the client. Client Components (marked with the 'use client' directive) are used only when interactivity, state, or browser APIs are needed. In practice, a Server Component can include and import a Client Component (to add interactive parts), but not vice versa. This allows a single page’s component tree to interleave server-rendered UI with interactive client-side widgets. By pushing as much UI as possible into Server Components, Next.js reduces the amount of JavaScript that must hydrate on the client, improving performance.
Key Characteristics:
Server Components: Render on the server only (never in the browser). They can safely access databases and secrets, and perform data fetching with await (e.g. await fetch(...)) directly in the component code. They are never hydrated on the client and do not include React state or event handlers (they output static HTML).
Client Components: Render on the server and then hydrate on the client. These are needed for any stateful or interactive UI (hooks like useState, event handlers, browser-only APIs like window or localStorage). A file with 'use client' at the top is treated (along with all of its imports) as part of the client-side bundle. During the initial page load, Client Components are still server-side rendered to HTML (for faster first paint), but afterwards their JS code runs in the browser to handle interactivity.
Composition: Server and Client components can be mixed. For example, a Server Component page might import a <Navbar> that is mostly server-rendered but includes a <SearchBar> marked as a Client Component for interactivity. React will render the Server Components to an RSC Payload (a serialized representation), including placeholders for any Client components. The Client Component’s actual HTML will be injected on the client side during hydration. This lets heavy lifting (data fetching, markup generation) occur on the server, and interactive pieces will be added on the client without a full re-render of the entire page.
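A minimal sketch of this composition (the file names, API URL, and SearchBar internals are illustrative, not from any particular codebase):

```typescript
// app/page.tsx (Server Component by default: no 'use client' directive)
import { SearchBar } from './SearchBar';

export default async function Page() {
  // Runs only on the server; this code is never shipped to the browser.
  const res = await fetch('https://api.example.com/items'); // hypothetical endpoint
  const items: string[] = await res.json();

  return (
    <main>
      <h1>Items ({items.length})</h1>
      {/* Left as a placeholder in the RSC payload, hydrated in the browser */}
      <SearchBar />
    </main>
  );
}
```

```typescript
// app/SearchBar.tsx (Client Component: needs state and event handlers)
'use client';
import { useState } from 'react';

export function SearchBar() {
  const [query, setQuery] = useState('');
  return <input value={query} onChange={(e) => setQuery(e.target.value)} />;
}
```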
App Router Structure: Layouts, Templates, and Pages
Next.js organizes routes in the /app directory using nested folders. Each folder can contain special files that define the UI for that route segment. The primary ones are: Layout, Template, and Page.
Layouts (layout.tsx): A Layout is a wrapper UI that persists across pages. Layouts can be defined at any route segment and apply to all pages under that segment. On navigation, layouts do not unmount or re-run; they preserve state and remain interactive without re-rendering. This means if you have stateful Client Components in a layout (e.g. a sidebar or header), they won’t reset when moving between child pages. Layout files are hierarchical – a child segment’s layout is nested inside its parent layout. The top-level app/layout.tsx is the Root Layout (required, must include <html> and <body> tags) wrapping the entire app.
Templates (template.tsx): A Template is very similar to a layout in structure (it wraps child segments), but does not persist state across navigations. Instead, a template re-mounts afresh for each navigation, even if you stay in the same segment. In effect, it’s a “re-rendered layout” used when you want certain parent UI to reset or run again on each page change. For example, use a Template for an animation, or to reset scroll position or state whenever the user navigates between sibling pages. According to Next.js conventions, template files are rendered after layouts and before the page component, and a new instance is created when navigating between pages that share that template. (In contrast, layouts “always precede templates” and remain mounted.) Use-case tip: Use Layouts by default; use a Template only if you need to reset state or re-run effects on navigation.
Pages (page.tsx): A Page is the leaf component for a route – it defines the content for a specific URL. Pages are Server Components by default (unless explicitly made client) and can be asynchronous (e.g. to await fetched data). They are rendered as children of the nearest Layout/Template wrappers. Each folder typically contains a single page.tsx (except for dynamic routes). The page component’s output is what ultimately gets rendered inside all the surrounding layouts/templates for that route.
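As a sketch, a template that resets on every navigation might look like this (the fade-in class name is illustrative):

```typescript
// app/dashboard/template.tsx: remounts on each navigation within /dashboard
import type { ReactNode } from 'react';

export default function Template({ children }: { children: ReactNode }) {
  // Because a new instance mounts per navigation, any state or effects here
  // (e.g. an enter animation) run fresh on every page change.
  return <div className="fade-in">{children}</div>;
}
```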
How they compose: When a user navigates to a URL, Next.js matches a chain of layouts/templates down to the page. For example, consider a route /dashboard/profile with the following structure:
app/
├─ layout.tsx (Root Layout – e.g. site chrome)
└─ dashboard/
├─ layout.tsx (Dashboard Layout – persists for all /dashboard/* pages)
├─ template.tsx (Dashboard Template – re-renders on each navigation under /dashboard)
├─ page.tsx (Dashboard index page, e.g. /dashboard)
└─ profile/
└─ page.tsx (Profile Page, at route /dashboard/profile)
When loading /dashboard/profile, Next.js will:
Render app/layout.tsx (root layout) at the top,
Inside it, render app/dashboard/layout.tsx,
Then render app/dashboard/template.tsx (its output wrapping the page),
Finally, render the app/dashboard/profile/page.tsx content inside those wrappers.
The Root Layout might include the global HTML structure and navigation; the Dashboard layout might include the sidebar that remains persistent; the Dashboard template could ensure that navigating between sub-pages (the Dashboard index and Profile) resets certain state; and the Profile page provides the main page content. Next.js does this assembly automatically based on the folder structure. Notably, layouts and pages render in parallel, meaning the server doesn't wait for a parent layout to finish before rendering its children; this avoids sequential waterfalls and improves performance.
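Conceptually, the assembled tree for /dashboard/profile looks like the following sketch (the component names stand in for the default exports of the files above; Next.js performs this composition automatically):

```typescript
// Illustrative pseudo-JSX of what Next.js assembles for /dashboard/profile
<RootLayout>            {/* app/layout.tsx: persists across all routes */}
  <DashboardLayout>     {/* app/dashboard/layout.tsx: persists under /dashboard */}
    <DashboardTemplate> {/* app/dashboard/template.tsx: remounts per navigation */}
      <ProfilePage />   {/* app/dashboard/profile/page.tsx: route content */}
    </DashboardTemplate>
  </DashboardLayout>
</RootLayout>
```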
Initial Page Load: Request-Response Sequence
On the first load of a page (or a direct URL visit), the sequence involves both server-side rendering and client-side hydration:

Browser -> Next.js Server: The browser requests a page (say /dashboard/profile). This request hits the Next.js server (Node.js or Edge runtime). The App Router locates the matching route segments and loads the corresponding components: all required Layout(s), Template(s), and the Page component for /dashboard/profile.
Server Rendering with RSC: Next.js performs an SSR render using React’s server rendering pipeline. This happens in two phases:
Render to RSC Payload: React runs through the Server Components (layouts, page, and any nested Server Components) to produce a React Server Component Payload (RSC payload). The RSC payload is a compact serialized format containing the rendered output of server components, plus placeholders for Client Components and references to their JS bundles. Essentially, it’s a description of the UI: HTML for server-rendered parts, and instructions for where client-rendered parts go.
Generate HTML: Next.js then uses the RSC payload results along with the known client component boundaries to assemble the HTML for the response. Server Components’ output becomes HTML content, whereas each Client Component is left as a lightweight placeholder (often an empty container or loading hint) in the HTML. This HTML can be streamed to the browser (enabled by React Suspense boundaries), allowing the user to see partially rendered content sooner without waiting for all data to finish. At this stage, the server also includes scripts/tags to send the RSC payload to the client (for example, in a <script type="application/json"> tag or streamed over a network channel) so that the client-side React can pick it up.
3. Server Response: The Next.js server returns the initial HTML along with the RSC payload (and the necessary JS bundle references for any client components). The HTML already contains the fully rendered UI of all Server Components (e.g. text, markup, etc.), so the user sees meaningful content on first paint. This HTML, however, is non-interactive at first — any buttons or client-side UI controls won’t yet respond.
4. Browser Rendering & Static Content: The browser receives and parses the HTML. Immediately, it can display the server-rendered content. This gives a fast First Contentful Paint since no client-side code is needed yet to show the UI. At this point, the page looks complete but isn’t wired up to React on the client.
5. Hydration Phase (Browser): In the background, the Next.js client runtime (hydration script) takes over. It loads the JavaScript for any Client Components that were included on the page (as referenced in the RSC payload). React on the client uses the RSC payload to reconcile the Server and Client Component trees, injecting the actual Client Component UI and state into the DOM where the placeholders were. Then hydration attaches event handlers and reactivates the interactive parts. Essentially:
The RSC payload tells React what the server output was for each component, so React can create a virtual DOM tree matching it.
For each Client Component boundary, React will load its JS module and hydrate it: attach its event listeners, initialize state, etc. (using ReactDOM.hydrateRoot). Hydration makes the previously static HTML “live”.
All of this happens concurrently: while some Client Components hydrate, other parts of the page (Server Components) are already usable as static content, and any remaining streaming content can continue to load. React’s concurrency and Suspense allow hydration to be interleaved with any late-arriving chunks of the stream.
6. Interactive Page: Once hydration completes, the page is fully interactive. The user can now click buttons, use forms, open menus, etc. The initial load is now essentially a hydrated React app in the browser. Importantly, any purely server-rendered parts of the UI (Server Components without client logic) remain simply static DOM — they don’t incur additional JS overhead on the client beyond what’s needed to stitch them into React’s tree. Only the designated Client Components carry a client-side cost.
Client-Side Navigation Flow (App Router)
After the initial load, navigating between pages is typically done via client-side transitions (using <Link> or router APIs) to avoid full page reloads. The Next.js App Router handles these subsequent navigations efficiently:
When the user clicks a Next <Link> to another route (e.g. from /dashboard to /dashboard/profile), the browser does not perform a traditional page refresh. Instead, the Next.js client intercepts the click and triggers a fetch to the server for the new route’s data. Specifically, Next will request the RSC payload for the new route (this is often an HTTP call to an internal endpoint that returns the React Server Component payload for that page).
The Next.js Router on the client keeps a cache of previously fetched RSC payloads (the Router Cache). If the new route was preloaded or visited before, its RSC payload might already be cached, enabling near-instant navigation. (Next.js by default prefetches routes in the background when a <Link> is in the viewport, caching their RSC payload.)
The server generates the RSC payload for the new page (just like on initial load, but usually without needing to resend full HTML). This payload describes the portions of the UI that change. Because layouts can persist, Next.js will reuse any parent layout components that are common between the current page and the next page, and only fetch/render the segments that differ. For example, navigating between /dashboard and /dashboard/profile uses the same app/layout.tsx and app/dashboard/layout.tsx; those layouts stay mounted on the client. The server may only need to send the RSC payload for the profile/page.tsx content (and maybe a template, if present).
The browser receives the new RSC payload (as JSON or binary data). React then merges the new server-rendered content into the existing DOM. This is done by computing the differences between the current UI and the new one, based on the RSC payload. React will update the DOM to reflect the new page — injecting, updating, or removing elements as needed. Crucially, this happens without unloading the JavaScript environment: the React app remains running, so any state in persistent layouts or already-mounted client components can be preserved.
Any new Client Components required by the navigation will be loaded and hydrated as part of this process. Since no full page reload occurred, the already-mounted Client Components in parent layouts remain live (they do not re-mount). Any Client Components that are no longer needed (from the previous page) will be unmounted, and new ones will be initialized.
The result is a seamless SPA-like transition. Next.js also supports streaming in new content during navigation: you can use React Suspense boundaries with loading.tsx in the App Router to show a fallback UI while waiting for the new content to load. The RSC payload can stream, so pieces of the new page can progressively fill in. This provides a smooth UX for navigation, even if some data is slightly delayed.
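A minimal sketch of such a fallback (the markup is illustrative):

```typescript
// app/dashboard/loading.tsx
// Next.js automatically wraps the segment in a <Suspense> with this fallback.
export default function Loading() {
  return <p>Loading dashboard…</p>;
}
```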
In summary, subsequent navigations fetch and apply an RSC payload instead of a full document, using cached data when possible. According to Next.js: “On subsequent navigations, the RSC Payload is prefetched and cached for instant navigation, and Client Components are rendered entirely on the client, without the server-rendered HTML.” This means that after the first load, pages update via client-side React rather than a full SSR roundtrip (though the server still provides fresh data through RSC). The App Router intelligently preserves layout state (thanks to layouts not unmounting) and only changes what’s necessary, enabling fast transitions.
Data Fetching and Built-In Optimizations
Next.js v15 provides powerful built-in data fetching mechanisms that integrate with the RSC architecture:
Async Server Components with fetch: In the App Router, you can fetch data directly inside a Server Component by making the component async and using the web fetch() API (or any async call). For example, you can await fetch('https://...') at the top of a page.tsx component to retrieve data on the server. This removes the need for separate data-fetching methods (like getServerSideProps) – data is co-located with the component. Next.js extends the fetch API to improve performance: fetch calls are automatically deduplicated during the rendering process. If the same request URL is called multiple times in a single request (say in a layout and a page), Next.js will perform it once and reuse the result, avoiding duplicate work. This is called request memoization – React handles it under the hood for GET requests. Beyond memoization, Next.js can cache fetch responses across requests in its Data Cache (note: as of Next.js 15, fetch responses are no longer cached by default; you opt in per request). You can control this with fetch options:
fetch(url, { cache: 'force-cache' }): uses the Next.js Data Cache – serve cached data if available (and still fresh), or fetch and then cache it.
fetch(url, { cache: 'no-store' }): always fetch fresh data (no caching); this also opts the route into dynamic rendering.
fetch(url, { next: { revalidate: 10 } }): set time-based revalidation (stale-while-revalidate) in seconds. This controls how long a cached response is considered fresh. Setting revalidate: 0 is equivalent to no-store (no caching).
Using these options, Next.js supports both static caching (at build time) and dynamic data fetching where needed. (Good to know: in development, caching is usually disabled so that fetch always runs, to ease debugging.)
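A sketch of these options inside a Server Component (the URLs are placeholders for your own data sources):

```typescript
// app/prices/page.tsx (Server Component: async, no 'use client')
export default async function PricesPage() {
  // Opt in to the Data Cache; Next.js 15 no longer caches fetch by default.
  const catalog = await fetch('https://api.example.com/catalog', {
    cache: 'force-cache',
  }).then((r) => r.json());

  // Always fresh: makes this route render dynamically on every request.
  const ticker = await fetch('https://api.example.com/ticker', {
    cache: 'no-store',
  }).then((r) => r.json());

  // Cached, but revalidated at most every 60 seconds (stale-while-revalidate).
  const prices = await fetch('https://api.example.com/prices', {
    next: { revalidate: 60 },
  }).then((r) => r.json());

  return <pre>{JSON.stringify({ catalog, ticker, prices }, null, 2)}</pre>;
}
```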
React.cache() utility: Not all data fetching goes through fetch. If you are querying a database via an ORM, or using a third-party SDK (which might not use fetch under the hood), you can still benefit from deduplication. React provides a cache() function (in React 18+) to memoize any async function on a per-request basis. For instance, you can wrap your DB query function: const getProducts = cache(async () => db.findAll()). When called multiple times during the rendering of a single page, the cached version ensures the actual operation runs only once. Next.js recommends using fetch (which is auto-memoized) when possible, or cache() for custom data functions, to avoid duplicate data fetching in layouts and pages. This fits the App Router’s pattern of fetching in multiple components without lifting all data up to a single loader – you simply fetch where needed and trust the framework to avoid unnecessary network calls.
server-only and client-only: Next.js provides special packages to enforce separation of concerns. If you have a module that should only run on the server (e.g. it contains secret keys or Node-only code), you can import 'server-only' at the top of it. This will cause Next to throw a build error if that module is ever imported into a Client Component (preventing accidental leakage of server code to the client). Conversely, a 'client-only' package exists to mark modules that should only run in the browser (e.g. ones that rely on window). These safeguards help catch mistakes where you might import something like a database client into a Client Component. Next.js already automatically strips out most server-only code from client bundles (e.g. process.env values without NEXT_PUBLIC will be an empty string on the client), but using server-only gives an explicit guarantee and clearer error messages.
React’s use() Hook for Streaming: A new addition as of React 18/Next 15 is the use() hook, which allows a Client Component to consume an async resource (like a Promise) directly during rendering. Next.js uses this to enable streaming SSR with partial hydration. The typical pattern is: a Server Component kicks off a data fetch without awaiting it, and passes the Promise down to a Client Component via a prop. The Client Component, being wrapped in a <Suspense> boundary, calls use(promise) to read the result of that async call. React will suspend rendering of that Client Component until the promise resolves, allowing the server to stream the rest of the content and the client to show a fallback UI. Once the promise resolves (data is ready), the Client Component will hydrate with the real data. This mechanism essentially lets you split data fetching: fetch on the server, but defer consuming it on the client at render time, which is useful for cases where you need client-side interactivity with server-fetched data. It’s an advanced technique, but it’s built into the App Router (no need for state management libraries just to bridge server->client data).
Progressive Hydration and Streaming: The App Router’s architecture inherently supports streaming and selective hydration. Using <Suspense> boundaries and special files like loading.tsx, you can create skeleton UIs or spinners that show instantly while deeper parts of the page load data. Next.js will stream HTML in chunks for each Suspense boundary that resolves, and hydrate components as they arrive. Hydration is also selective: only Client Components need hydration, and they can hydrate independently. For example, a slow, large Client Component can be wrapped in Suspense so that other interactive components on the page hydrate sooner, without waiting for the slow one. This fine-grained control is a direct benefit of RSC and the App Router.
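A sketch of the promise-handoff pattern with use() (the file names, endpoint, and types are illustrative):

```typescript
// app/feed/page.tsx (Server Component: start the fetch, don't await it)
import { Suspense } from 'react';
import { Feed } from './Feed';

export default function FeedPage() {
  // Kick off the request; the unresolved Promise is passed down as a prop.
  const postsPromise = fetch('https://api.example.com/posts') // hypothetical endpoint
    .then((res) => res.json() as Promise<{ id: number; title: string }[]>);

  return (
    <Suspense fallback={<p>Loading feed…</p>}>
      {/* The Promise crosses the server->client boundary as a prop */}
      <Feed postsPromise={postsPromise} />
    </Suspense>
  );
}
```

```typescript
// app/feed/Feed.tsx (Client Component: read the promise with use())
'use client';
import { use } from 'react';

export function Feed({ postsPromise }: { postsPromise: Promise<{ id: number; title: string }[]> }) {
  // Suspends until the promise resolves; the Suspense fallback shows meanwhile.
  const posts = use(postsPromise);
  return (
    <ul>
      {posts.map((p) => (
        <li key={p.id}>{p.title}</li>
      ))}
    </ul>
  );
}
```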
Terminology Mapping
Next.js v15 App Router introduces a paradigm where the server is deeply involved in rendering React components, while the client takes on hydration and navigation responsibilities. The above architecture can be summarized by the flow of data and control:
Browser -> Server: Requests a route; Next.js server constructs the React tree (Layouts, Templates, Page) and renders to HTML + RSC payload.
Server -> Browser: Sends down HTML (UI markup) and RSC payload (serialized component tree data) in the response.
Browser (React client): Immediately displays HTML, then uses the RSC payload to load/hydrate Client Components and attach event handlers (hydration).
User Interaction / Subsequent Route Change: Next.js fetches new RSC payload (using built-in caching for speed) and updates the client-side React state, reusing persistent Layouts and rehydrating new parts. No full page reload occurs.
All of this is achieved with no third-party state libraries or client-side routers — just Next.js’s built-in capabilities and React’s advancements. By understanding the roles of Server Components vs Client Components, and how Layouts persist while Templates can reset state on navigation, you can architect a Next.js app that is both high-performance (minimal client JS, efficient data fetching) and highly dynamic (rich interactions via hydration). Next.js v15’s App Router provides a robust foundation out of the box to handle routing, data fetching, caching, and rendering in a unified, developer-friendly way.
How to Client-Side-Render a Component in Next.js
You can easily determine whether a component in Next.js is client-side or server-side rendered. We will work with this simple component:
<>
<p>Hello World!</p>
</>
Using Next.js’ default server-side rendering, the generated HTML looks like this:

You can see the component’s HTML is rendered on the server.
Meanwhile, using client-side rendering, the initial HTML response looks like this:

Of course, the “Hello World!” still appears in the DOM, but it takes some time because it is rendered through client-side JavaScript.
Here are 3 ways to achieve CSR in Next.js:
Method 1: Timing with useEffect
If you use Next.js’s new App Router, every component is a server component by default and can’t use React hooks. Therefore, we declare it as a client component by placing 'use client' at the top.
'use client'

import { useEffect, useState } from 'react'

export default function Index() {
  const [isMounted, setIsMounted] = useState(false)

  useEffect(() => {
    setIsMounted(true)
  }, [])

  if (!isMounted) {
    return <p>loading...</p>
  }

  return (
    <>
      <p>Hello World!</p>
    </>
  )
}
Method 2: Dynamic components
Using a dynamic import, a component can also be rendered only on the client side. The reason is simple: The component is imported once the wrapping component is rendered; therefore, all the work is happening on the client.
import dynamic from 'next/dynamic'

const HelloWorld = dynamic(() => import('../components/HelloWorld'), {
  ssr: false,
})

export default function Index() {
  return <HelloWorld />
}
This approach works in both the Pages Router and the App Router. Note that in the App Router, ssr: false is only allowed inside Client Components, so the component doing the dynamic import must itself be marked with 'use client'.
Method 3: Use the window object (Only Pages Router)
Hint: This approach doesn’t work in the new App Router
This trick is simple: when we render something on the server, the window object isn’t available in our code. Why? Because the window object is exclusive to the browser.
So by checking whether window is defined, we can tell whether we are on the server, and save the result in a variable like this:
const SSR = typeof window === 'undefined'
SSR is true if server-side rendering of our JSX is happening. To only client-side render something, use this variable:
const SSR = typeof window === 'undefined'

export default function Index() {
  return <>{!SSR ? <p>Hello World!</p> : null}</>
}
Next.js Middleware — The Secret Weapon Every Developer Should Use
When building modern web apps with Next.js, there are times when you need to run code before a request is completed — for authentication, logging, redirects, security, or personalization.
That’s where Next.js Middleware comes in.
It’s like a bouncer at a club 🕶️ — checking requests before they enter your pages.
What is Middleware in Next.js?

Middleware in Next.js is a function that runs before a request is completed.
Think of it like a gatekeeper — it runs before the page is rendered.
It runs on the Edge, meaning it’s super fast (close to the user geographically).
It lets you modify requests and responses, redirect, rewrite paths, and more — without a full server.
📁 Middleware is created in a special file:
/middleware.ts or /middleware.js
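As a minimal sketch of what such a file can do (the paths and header name are illustrative):

```typescript
// middleware.ts (placed at the project root, next to /app)
import { NextResponse } from 'next/server';
import type { NextRequest } from 'next/server';

export function middleware(request: NextRequest) {
  // Rewrite: serve /home content under the /start URL without redirecting
  if (request.nextUrl.pathname === '/start') {
    return NextResponse.rewrite(new URL('/home', request.url));
  }

  // Otherwise continue, attaching a custom header to the response
  const response = NextResponse.next();
  response.headers.set('x-request-id', crypto.randomUUID());
  return response;
}
```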
Use Case #1: Authentication Check (Protected Routes)
Let’s say you have pages like /dashboard, /profile, or /admin.
You don’t want unauthenticated users accessing them.
Here’s how you can restrict access using middleware:
// middleware.ts
import { NextResponse } from 'next/server';
import type { NextRequest } from 'next/server';

export function middleware(request: NextRequest) {
  const token = request.cookies.get('token')?.value;

  // Protect these routes (exact path matches; use startsWith to cover sub-paths)
  const protectedRoutes = ['/dashboard', '/profile', '/admin'];

  if (protectedRoutes.includes(request.nextUrl.pathname)) {
    if (!token) {
      return NextResponse.redirect(new URL('/login', request.url));
    }
  }

  return NextResponse.next();
}
✅ Why this works:
Middleware reads the token from cookies (set when the user logs in).
If it’s missing and the user tries to visit a protected route, they get redirected to
/login.
Use Case #2: Locale-Based Redirects (Geo Redirects)
Imagine your site supports English and Spanish users, and you want to auto-redirect based on their location or headers.
// middleware.ts
import { NextResponse } from 'next/server';
import type { NextRequest } from 'next/server';

export function middleware(request: NextRequest) {
  const { nextUrl, headers } = request;
  const country = headers.get('x-vercel-ip-country') || 'US';

  if (nextUrl.pathname === '/') {
    if (country === 'ES') {
      return NextResponse.redirect(new URL('/es', request.url));
    }
    return NextResponse.redirect(new URL('/en', request.url));
  }

  return NextResponse.next();
}
✅ Bonus Tip:
If you deploy on Vercel, the x-vercel-ip-country header comes built-in!
You don’t need to manually detect IPs or use third-party services.
Use Case #3: Logging or Analytics
Need to track page visits, log IPs, or count hits per route?
Middleware is perfect for this since it runs before anything renders.
// middleware.ts
import { NextResponse } from 'next/server';
import type { NextRequest } from 'next/server';

export function middleware(request: NextRequest) {
  // request.ip was removed in Next.js 15; read the forwarded header instead
  const ip = request.headers.get('x-forwarded-for') ?? 'unknown';
  console.log(`[Page Visit] ${request.nextUrl.pathname} from IP: ${ip}`);

  // Let the request continue; returning a bare 204 here would block the page
  return NextResponse.next();
}
✅ When to use this:
Custom logging
Tracking traffic patterns
Counting route visits without adding client-side JS
📌 Pro Tip: Instead of console.log, send logs to services like Logtail, Sentry, or even your own API.
Middleware Matchers (Optional Filtering)
Don’t want your middleware to run on every request?
Use matchers!
// middleware.ts
export const config = {
matcher: ['/dashboard/:path*', '/profile', '/admin'],
};
This makes your middleware run only for these routes.
Limitations
Runs before the cache layer (so it can’t directly use Server Actions).
No access to React state or hooks (it’s not in the React tree).
Designed for lightweight logic, not heavy data fetching.
When To Use React Query With Next.js Server Components
React Server Components have revolutionized how we think about data fetching in React applications. But what happens when you want to use React Query alongside server components? Should we always combine them? The answer might surprise you.
With React now running on both client and server, developers are grappling with how traditional client-side libraries like React Query fit into this new paradigm. The reality is more nuanced than simply “use both everywhere“.
Setting up React Query for Server Components
The foundation of using React Query with server components lies in proper setup. Here’s a key pattern:
The Query Client Factory
import { isServer, QueryClient } from "@tanstack/react-query";

function makeQueryClient() {
  return new QueryClient({
    defaultOptions: {
      queries: {
        staleTime: 60 * 1000, // 1 minute
      },
    },
  });
}

let browserQueryClient: QueryClient | undefined = undefined;

function getQueryClient() {
  if (isServer) {
    // Always create a new query client on the server
    return makeQueryClient();
  } else {
    // Create a singleton on the client
    if (!browserQueryClient) {
      browserQueryClient = makeQueryClient();
    }
    return browserQueryClient;
  }
}
Why This Pattern Matters
The server-client distinction is crucial:
Server: Always create a new query client instance for each request to avoid data leakage between users.
Client: Maintain a singleton to persist across component re-renders and Suspense boundaries.
This pattern is especially important in Next.js, where the layout component is wrapped in a Suspense boundary behind the scenes. Without the singleton pattern, you would lose your client instance every time a component suspends.
The Server-Client Data Flow
Here’s how data flows from server to client:
1. Server Component (Prefetching)
// posts/page.tsx - Server Component
import { dehydrate, HydrationBoundary } from '@tanstack/react-query';

export default async function PostsPage() {
  const queryClient = getQueryClient();

  // Prefetch data on the server
  await queryClient.prefetchQuery({
    queryKey: ['posts'],
    queryFn: getPosts,
  });

  return (
    <HydrationBoundary state={dehydrate(queryClient)}>
      <PostsClient />
    </HydrationBoundary>
  );
}
2. Client Component (Consumption)
// PostsClient.tsx - Client Component
'use client';
import { useQuery } from '@tanstack/react-query';

export default function PostsClient() {
  const { data: posts } = useQuery({
    queryKey: ['posts'],
    queryFn: getPosts,
  });

  return (
    <div>
      {posts?.map(post => (
        <div key={post.id}>{post.title}</div>
      ))}
    </div>
  );
}
3. The Hydration Bridge
The HydrationBoundary component bridges the server and client by:
Dehydrating the query client state on the server
Rehydrating it on the client
Making prefetched data immediately available
This Is Actually Good!
Data is prefetched on the server
Hydrated to the client without additional network requests
The user sees data immediately without loading states
This is exactly what makes React Query with Next.js so powerful — you get server-side rendering with client-side cache management, giving you the best of both worlds for performance and user experience.
What Not To Do
A common mistake is fetching data in server components and trying to use it directly:
// ❌ Don't do this
export default async function PostsPage() {
  const queryClient = getQueryClient();

  const posts = await queryClient.fetchQuery({
    queryKey: ['posts'],
    queryFn: getPosts,
  });

  return <PostsClient posts={posts} />;
}
Why this breaks: Server components don’t re-render. If the client-side query cache gets invalidated or updated, your server component will display stale data, creating UI inconsistencies.
Why doesn’t the client refresh data on the first load?
When using React Query with Next.js Server Components, something important happens behind the scenes during the initial page load.
First, your server component fetches the data ahead of time using React Query’s prefetchQuery. This means the server already has the data ready before sending the page to the browser.
Then, using the <HydrationBoundary>, this prefetched data is passed down to the client, so React Query on the client side starts with a fully populated cache.
Because the data is already available and considered fresh, React Query doesn’t make a new network request when the page loads in the browser. It simply reads from the cache. This improves performance and avoids unnecessary data fetching.
However, if you change a filter or the query becomes stale, React Query will then fetch new data as needed.
This setup allows you to:
Render data instantly on first load
Avoid duplicate fetching
Keep the client-side declarative (using useQuery)
Maintain a clean separation between server and client responsibilities
In short, the server does the heavy lifting up front, and the client reuses that work efficiently.
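For example, changing a filter simply produces a new query key, which React Query fetches on the client (the category filter and getPosts signature are illustrative):

```typescript
'use client';
import { useState } from 'react';
import { useQuery } from '@tanstack/react-query';

// Hypothetical fetcher; assumes getPosts accepts a category argument
declare function getPosts(category: string): Promise<{ id: number; title: string }[]>;

export default function FilteredPosts() {
  const [category, setCategory] = useState('all');

  // A new category means a new query key, so React Query fetches it on the
  // client; the initially hydrated key keeps serving from the cache.
  const { data: posts, isFetching } = useQuery({
    queryKey: ['posts', category],
    queryFn: () => getPosts(category),
  });

  return (
    <div>
      <button onClick={() => setCategory('react')}>React only</button>
      {isFetching && <span>Refreshing…</span>}
      {posts?.map((p) => (
        <div key={p.id}>{p.title}</div>
      ))}
    </div>
  );
}
```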
When to Use React Query with Server Components
React Query with server components makes sense when you need:
1. Client-Specific Features
Infinite Queries for pagination and infinite scroll:
// Server prefetch
await queryClient.prefetchInfiniteQuery({
  queryKey: ['posts'],
  queryFn: ({ pageParam = 1 }) => getPosts(pageParam),
});
// Client usage
const { data, fetchNextPage, hasNextPage } = useInfiniteQuery({
  queryKey: ['posts'],
  queryFn: ({ pageParam = 1 }) => getPosts(pageParam),
  getNextPageParam: (lastPage) => lastPage.nextPage,
});
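The snippet above assumes getPosts(pageParam) returns a page of items plus a nextPage cursor. A hypothetical backing implementation of that contract might look like this:

```javascript
// Hypothetical cursor-style pagination backing getPosts(pageParam)
function paginate(items, page, pageSize = 10) {
  const start = (page - 1) * pageSize;
  const pageItems = items.slice(start, start + pageSize);
  // nextPage is undefined on the last page, which makes hasNextPage false
  const nextPage = start + pageSize < items.length ? page + 1 : undefined;
  return { items: pageItems, nextPage };
}
```

Returning undefined from the last page is what getNextPageParam uses to stop fetching.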
2. Real-time Updates
When you need optimistic updates, cache invalidation, or real-time synchronization across components.
3. Complex State Management
For applications requiring sophisticated caching strategies, background refetching, or retry logic.
When to Skip React Query
Often, you're better off with pure server components:
// Simple and effective
export default async function PostsPage() {
  const posts = await getPosts();
  return (
    <div>
      {posts.map(post => (
        <PostCard key={post.id} post={post} />
      ))}
    </div>
  );
}
This approach offers:
Better performance: No JavaScript bundle for data fetching
Simpler architecture: Fewer moving parts
Better SEO: Content rendered on the server
Faster initial load: No client-side fetching delay
Making the Right Choice
Consider these questions:
Do you need client-side interactivity like infinite scroll, real-time updates, or optimistic mutations?
Is your data relatively static or does it change frequently?
Do you need complex caching strategies or is simple server-side fetching sufficient?
Are you building a highly interactive app or primarily displaying content?
Best Practices
1. Start Simple
Begin with pure server components. Add React Query only when you need client-specific features.
2. Use Appropriate Stale Times
Set longer stale times when prefetching to avoid immediate refetches:
const queryClient = new QueryClient({
  defaultOptions: {
    queries: {
      staleTime: 60 * 1000, // Prevent immediate refetch after prefetch
    },
  },
});
3. Separate Concerns
Keep your client components unaware of server prefetching. They should work independently.
4. Consider Bundle Size
React Query adds to your JavaScript bundle. Ensure the benefits outweigh the costs.
Real-world use cases
Example 1: Basic Query — Filter-based query caching
This example demonstrates React Query’s fundamental capabilities with Next.js App Router. It showcases:
Server-side prefetching that hydrates the client cache on initial load
Filter-based query keys that maintain separate cache entries per filter
Automatic cache invalidation after the configured stale time (60 seconds)
Clean separation between server and client components
The UI displays a collection of shoes that can be filtered by category. When users switch filters, you can observe React Query’s intelligent caching, only fetching new data when needed, while serving cached data instantly.
Example 2: Infinite Query — Infinite scroll with pagination
This example showcases React Query’s advanced infinite scrolling capabilities. Key features include:
Implementation of useInfiniteQuery for paginated data loading
Automatic loading of the next pages as the user scrolls to the bottom
Server-side prefetching of the initial data page with prefetchInfiniteQuery
Cursor-based pagination handling a dataset of 100 items in batches of 10
Intersection Observer integration for detecting when to load more data
The UI demonstrates a real-world infinite scroll implementation with loading indicators and smooth state transitions, all while maintaining the benefits of React Query’s caching system.
These examples together provide a comprehensive look at how React Query integrates with Next.js to solve common data fetching challenges.
5 Next.js Image Pitfalls That Hurt Performance
Last month, I watched my e-commerce product page's load time climb from decent to a painful 4.2 seconds after implementing Next.js Image across our dynamic product catalog. Our bounce rate spiked, customers abandoned carts, and I realized something brutal: the "Image Component" was destroying my site's performance. After weeks of debugging slow load times, failed builds, and frustrated users, I discovered that sometimes the optimization creates more problems than it solves.
While working on an e-commerce website using Next.js 14, I learned the hard way that the Next.js Image Component isn't always the performance hero it's meant to be. After switching our dynamic third-party images to targeted alternatives, I cut product page load times from 4.2 seconds to 1.8 seconds, a 57% improvement that saved our conversion rate.
Don’t get me wrong — the Next.js Image Component is powerful. It automatically handles lazy loading, WebP conversion, responsive sizing, and more. But if you are dealing with dynamic image sources, high-volume image pages, or mysterious performance bottlenecks, this section might save you weeks of headaches like it did for me.
Understanding Next.js Image Component (And Its Hidden Cost)
The Next.js Image Component promises seamless image optimizations through features like:
Automatic format conversion (WebP, AVIF)
Lazy loading with Intersection Observer
Responsive sizing with srcset generation
Blur placeholder support
Priority loading for above-the-fold images
Under the hood, the Next.js Image component relies on either Vercel's Image Optimization API (in production) or the Sharp library (for local development and self-hosted deployments). This means every image gets processed on demand, which sounds great in theory.
The reality? On-demand optimization can become a performance bottleneck, especially when dealing with unpredictable image sources or high-traffic scenarios. Here’s when that “optimization“ starts working against you.
5 Key Scenarios When You Should Not Use Next.js Image
1. Dynamic Image Sources from Third-Party APIs
The Problem: This is the big one that hit my e-commerce site hard. When your images come from external APIs with unpredictable URLs, domains, or formats, Next.js Image becomes a liability rather than an asset.
Real-World Example: Our product catalog pulls images from a supplier’s API. These images come from random CDN domains, vary in size and format, and change frequently. Here’s what happens with Next.js Image.
// This creates a bottleneck
<Image
  src={`https://random-supplier-cdn-${Math.random()}.com/product-${id}.jpg`}
  alt="Product image"
  width={500}
  height={500}
/>
What Goes Wrong:
Domain Configuration Nightmare: Next.js 14 requires pre-configuring image domains in next.config.js. With dynamic third-party sources, this becomes impossible to maintain.
Runtime Processing Delays: Each new image URL triggers real-time optimization, adding 2–3 seconds to initial load times.
404 Errors: Unconfigured domains default to broken images or fallback behavior.
The Better Solution:
// Direct img tag with external CDN optimization
<img
  src={`https://your-cdn.com/optimize?url=${encodeURIComponent(dynamicImageUrl)}&w=500&q=80`}
  alt="Product image"
  loading="lazy"
  style={{ width: '100%', height: 'auto' }}
/>
Performance Impact: After switching to direct <img> tags with Cloudinary URL transformations, our product page load times dropped from 4.2 seconds to 1.8 seconds.
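The optimization URL can be centralized in a small helper so every call site encodes the source URL consistently (the base URL and query parameters here are placeholders, not a real CDN API):

```javascript
// Hypothetical helper that builds an optimization URL for a dynamic source image
function buildOptimizedUrl(cdnBase, sourceUrl, { width = 500, quality = 80 } = {}) {
  // Encode the source URL so its own query string survives as a single parameter
  return `${cdnBase}/optimize?url=${encodeURIComponent(sourceUrl)}&w=${width}&q=${quality}`;
}
```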
2. High-Volume Image Pages (Product Grids, Galleries)
The Problem: Pages displaying dozens or hundreds of images simultaneously can overwhelm Next.js Image’s optimization pipeline.
Real-World Example: Our category pages show 48 products per page. With Next.js Image, the browser attempts to optimize multiple images simultaneously, creating a processing queue that delays the entire page render.
// This kills performance on high-volume pages
{products.map(product => (
  <Image
    key={product.id}
    src={product.imageUrl}
    alt={product.name}
    width={300}
    height={300}
  />
))}
What Goes Wrong:
Server Overload: Each image optimization request consumes server resources
Concurrent Processing Limits: Most hosting platforms limit simultaneous image processing
Waterfall Loading: Images load sequentially rather than in parallel
Cost Implications: Vercel’s Image Optimization API charges per optimization request
The Better Solution:
// Pre-optimized images with native lazy loading
{products.map(product => (
  <img
    key={product.id}
    src={`${product.optimizedImageUrl}?w=300&h=300&fit=crop`}
    alt={product.name}
    loading="lazy"
    style={{ aspectRatio: '1/1', objectFit: 'cover' }}
  />
))}
Performance Impact: Moving to pre-optimized CDN images reduced our category page Largest Contentful Paint (LCP) from 3.4s to 1.2s.
3. Unsupported or Unpredictable Image Domains
The Problem: Next.js Image’s security model requires explicit domain configuration, but modern applications often work with dynamic or user-generated content from unknown sources.
Real-World Example: Our platform allows users to upload images or import from social media. These images come from countless domains we can’t predict or pre-configure.
// next.config.js becomes unmaintainable
module.exports = {
  images: {
    domains: [
      'cdn1.example.com',
      'cdn2.example.com',
      'user-uploads.s3.amazonaws.com',
      'instagram.com',
      'facebook.com',
      // ... hundreds more?
    ],
  },
}
What Goes Wrong:
Maintenance Hell: Constantly updating domain configurations
Security Concerns: Wildcard domains create vulnerabilities
Build Failures: Invalid or inaccessible domains break deployments
User Experience: Broken images when domains aren’t configured
The Better Solution:
// Proxy through your own domain or use unrestricted img tags
const ProxiedImage = ({ src, alt, ...props }) => {
  const proxiedSrc = `/api/image-proxy?url=${encodeURIComponent(src)}`;
  return <img src={proxiedSrc} alt={alt} loading="lazy" {...props} />;
};
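If you proxy arbitrary URLs, validate them first; otherwise the proxy can be abused to reach internal hosts. A minimal sketch of such a check (the blocked-host list is illustrative, not exhaustive):

```javascript
// Hypothetical safety check an image proxy might run before fetching a URL
function isSafeImageUrl(raw, blockedHosts = ['localhost', '127.0.0.1']) {
  try {
    const url = new URL(raw);
    if (url.protocol !== 'https:') return false; // only proxy HTTPS sources
    if (blockedHosts.includes(url.hostname)) return false; // avoid internal hosts
    return true;
  } catch {
    return false; // not a valid absolute URL
  }
}
```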
4. Custom Optimization Requirements
The Problem: Next.js Image applies default optimization settings that might not suit specialized use cases like high-resolution product zoom, medical imaging, or artistic portfolios.
Real-World Example: Our drop-shipping e-commerce site needs pixel-perfect product zoom functionality. Customers expect to see every detail, but Next.js Image’s default 75% quality compression destroys the fine details.
// Default optimization destroys image quality
<Image
  src="/diamond-ring-4k.jpg"
  alt="Diamond ring detail"
  width={2000}
  height={2000}
  quality={75} // Not enough for detailed zoom
/>
What Goes Wrong:
Quality Loss: Default compression settings reduce image fidelity
Limited Control: Fewer customization options compared to dedicated image services
Format Restrictions: Automatic format conversion might not preserve quality
Size Limitations: Processing very large images can timeout or fail
The Better Solution:
// Custom optimization pipeline for high-quality zoom
const HighQualityImage = ({ src, alt }) => {
  return (
    <img
      src={`https://your-cdn.com/transform?url=${src}&q=95&f=auto&w=2000`}
      alt={alt}
      style={{ maxWidth: '100%', height: 'auto' }}
    />
  );
};
5. Build-Time and SSG Issues
The Problem: Static Site Generation (SSG) with Next.js Image can fail when image URLs are unavailable at build time or when dealing with large image sets.
Real-World Example: Our product catalog generates static pages for 10,000+ products. During build time, some third-party image URLs become temporarily unavailable, causing the entire build to fail.
// This can break SSG builds
export async function getStaticProps({ params }) {
  const product = await fetchProduct(params.id);
  return {
    props: {
      product: {
        ...product,
        image: product.dynamicImageUrl // Might be unavailable at build time
      }
    }
  };
}
What Goes Wrong:
Build Failures: Invalid URLs cause build processes to crash
Increased Build Times: Image optimization during build significantly slows deployment
Memory Issues: Processing many large images can exhaust build server memory
Inconsistent Results: Some images optimize successfully while others fail
The Better Solution:
// Defer image loading to client-side
import { useEffect, useState } from 'react';

const ProductImage = ({ src, alt }) => {
  const [imageSrc, setImageSrc] = useState('/placeholder.jpg');
  useEffect(() => {
    // Validate and set image source on client-side
    const img = new Image();
    img.onload = () => setImageSrc(src);
    img.onerror = () => setImageSrc('/fallback.jpg');
    img.src = src;
  }, [src]);
  return <img src={imageSrc} alt={alt} loading="lazy" />;
};
Best Practices for Next.js Image Alternatives
When you decide to skip Next.js Image, here are proven alternatives that maintain performance:
1. Native Lazy Loading
Modern browsers support native lazy loading:
<img src="/image.jpg" alt="Description" loading="lazy" />
2. CDN-Based Optimization
Use services like Cloudinary, Imgix, or ImageKit:
const optimizedUrl = `https://res.cloudinary.com/your-cloud/image/fetch/w_500,q_auto,f_auto/${originalUrl}`;
3. Custom Hook for Image Loading
const useOptimizedImage = (src, options = {}) => {
  const { width = 500, quality = 80 } = options;
  return `https://your-cdn.com/transform?url=${encodeURIComponent(src)}&w=${width}&q=${quality}`;
};
4. Intersection Observer for Custom Lazy Loading
import { useEffect, useRef, useState } from 'react';

const LazyImage = ({ src, alt, ...props }) => {
  const [isLoaded, setIsLoaded] = useState(false);
  const [isInView, setIsInView] = useState(false);
  const imgRef = useRef();
  useEffect(() => {
    const observer = new IntersectionObserver(
      ([entry]) => {
        if (entry.isIntersecting) {
          setIsInView(true);
          observer.disconnect();
        }
      },
      { threshold: 0.1 }
    );
    if (imgRef.current) {
      observer.observe(imgRef.current);
    }
    return () => observer.disconnect();
  }, []);
  return (
    <div ref={imgRef} {...props}>
      {isInView && (
        <img
          src={src}
          alt={alt}
          onLoad={() => setIsLoaded(true)}
          style={{
            opacity: isLoaded ? 1 : 0,
            transition: 'opacity 0.3s ease'
          }}
        />
      )}
    </div>
  );
};
When You SHOULD Still Use Next.js Image
To be fair, Next.js Image is excellent for:
Static images in your /public directory
Known, pre-configured domains with predictable image sources
Low-traffic sites where optimization delays aren’t noticeable
Simple blogs or portfolios with minimal image complexity
When you need automatic blur placeholders and built-in lazy loading
The Next.js Image component is a powerful tool, but it’s not a silver bullet. After months of real-world testing on a high-traffic e-commerce site, I’ve learned that premature optimization can be worse than no optimization.
Before reaching for Next.js Image, ask yourself:
Are my image sources predictable and configurable?
Do I need real-time optimization, or can I pre-optimize?
Will the optimization delay impact user experience?
Am I dealing with high-volume image scenarios?
Do I have custom quality or format requirements?
If you answered “NO” to the first question or “YES” to any of the others, consider alternatives. Sometimes a simple <img> tag with proper CDN optimization delivers better performance with less complexity.
Mastering Next.js Caching
Have you ever noticed that some websites load really quickly while others take a long time? Caching is a key part of how fast a website loads, and Next.js has powerful features to help you achieve it. You can make your app run faster and give users a better experience by optimizing the cache, even if you are not launching on Vercel, while potentially saving on hosting costs. Caching saves frequently used data so that it can be quickly retrieved. The server can serve a cached version of a page from your Next.js app when a user requests it instead of building it from scratch. This translates to:
Lightning-fast load times: Pages that have been cached load almost instantly, giving your visitors a smooth and responsive experience.
Less work for the server: When you serve cached pages, your server has more resources for other tasks (which makes your application more scalable).
To put it simply, optimizing the cache keeps your Next.js app going at its fastest, which makes users happier and lowers server costs.
Next.js gives you a number of ways to improve your cache in a setting that doesn’t depend on Vercel. It automatically caches statically generated pages, improving performance for frequently accessed parts of your application. You can also control data fetching cache behavior by specifying the duration the browser stores the data, ensuring it stays fresh while optimizing speed. For advanced scenarios, you can build custom caching logic by using libraries or techniques like localStorage. By utilizing Next.js’s built-in caching mechanisms and data fetching strategies, you can greatly enhance your application performance without depending on external platforms like Vercel.
Static Site Generation (SSG)
Static site generation is the practice of assembling and rendering web pages at build time, as opposed to on demand. This produces a set of static files (HTML, JavaScript, and CSS) that is ready for delivery to users. In Next.js, SSG is accomplished via functions like getStaticProps and getStaticPaths. These routines run at build time, retrieving data and producing the HTML required for every page. With getStaticProps, you can retrieve data and pass it as a prop to your page, so the content is ready even before the user requests it.
Benefits of Build Time Caching of Static HTML
Pre-rendering your content gives SSG a number of benefits:
Lightning-Quick Load Times: Your visitors will have a seamless experience as pages load nearly instantaneously because the HTML is already built.
Greater Scalability: Because SSG doesn’t have to make content for each request, it lightens the strain on your server. This makes traffic handling by your application more efficient.
Improved Search Engine Optimization: Pre-rendered content allows search engines to crawl and index your website faster.
Lower Server Costs: SSG may result in lower hosting bills because of the reduced server load.
Implementation Tips and Best Practices for SSG in Next.js
Now that you understand the core SSG process, here are some key implementation tips to optimize your workflow:
Choose the Right Content: Content that updates periodically, such as product pages or blog posts, is best served by SSG. It works less well with frequently changing content.
Use getStaticProps: This Next.js function is your workhorse during the build process. Use it to retrieve data and make it available to your pages.
export async function getStaticProps(context) {
  // Fetch data here
  const res = await fetch('https://api.example.com/data');
  const data = await res.json();

  // Return as props
  return {
    props: { data },
  };
}
Dynamic Routes: For dynamic pages, use getStaticPaths and indicate which routes need to be pre-rendered.
export async function getStaticPaths() {
  const response = await fetch('https://api.example.com/products');
  const products = await response.json();

  // Build the list of routes to pre-render from the product IDs
  const paths = products.map((product) => ({
    params: { id: product.id.toString() },
  }));

  // fallback: false indicates all pre-defined paths are generated at build time
  return {
    paths,
    fallback: false,
  };
}
Incremental Static Regeneration (ISR)
ISR builds on the foundation of SSG. During the build process, Next.js renders your application's HTML pages with the most recent data. These pre-built pages are subsequently served to users, providing extraordinary speed. While ISR uses pre-rendered pages in the same way that SSG does, it adds dynamism. When a request for a page is received, Next.js examines the cache. If a cached page is fresh (within a defined time frame), it is served immediately. If the cached material is stale, Next.js initiates a background regeneration process to retrieve new data and update the HTML content. The user continues to see the cached page while the update occurs in the background, ensuring a smooth experience.
It’s important to understand that ISR doesn’t achieve real-time updates on a static page at runtime. Why? Because while the page is being regenerated in the background, the user continues to see the cached version. This regeneration can take a few seconds, depending on how you retrieve data. The revalidation window you set determines how fresh the content is. By default, there is no revalidation, but you can choose a time period (for example, 60 seconds) for Next.js to check for updates. This means the content may be slightly out of date compared to a real-time system.
How to Implement ISR in Next.js Applications
Next.js makes implementing ISR straightforward. Here’s a basic example:
export async function getStaticProps() {
  const response = await fetch('https://api.example.com/data');
  const data = await response.json();

  // Revalidate the data every 60 seconds (can be adjusted)
  return {
    props: {
      data: data,
    },
    revalidate: 60, // In seconds
  };
}
In this example, getStaticProps retrieves and returns data as props. The revalidate parameter controls how frequently Next.js checks for updates in the background. When new data becomes available, the cached page is automatically refreshed. Incremental Static Regeneration allows you to build Next.js applications that are both fast and fresh. Understanding ISR’s strengths and limits allows you to use it to produce a seamless and up-to-date user experience.
Cache Control Headers
Cache-Control headers are essential for managing the caching behavior of web resources. They instruct browsers and intermediaries, such as CDNs, how to handle caching, which can have a substantial impact on a website’s performance and efficiency. They determine how, where, and for how long resources should be cached. They help reduce bandwidth utilization and server load, resulting in faster load times, and they ensure that users receive up-to-date content without unnecessary requests.
Setting up Cache Control Headers in Next.js
While Next.js automatically configures some default Cache-Control headers, you may want to change them for specific use cases. Here is a summary of the options:
- During SSG, Next.js derives the Cache-Control header from the revalidate value you return from getStaticProps; there is no separate cacheControl key in the return object. This approach is ideal for static content with a set expiration time.
export async function getStaticProps() {
  // ... fetch data
  return {
    props: {
      data: data,
    },
    // Next.js serves the page with Cache-Control: s-maxage=60, stale-while-revalidate
    revalidate: 60,
  };
}
- For API routes, you can set headers directly on the response object before sending the body.
export default async function handler(req, res) {
  const response = await fetch('https://api.example.com/data');
  const data = await response.json();
  res.setHeader('Cache-Control', 'public, max-age=86400'); // Cache for 1 day
  res.status(200).json(data);
}
Best Practices for Cache Control Headers
Differentiate between static content (e.g., blog articles) and dynamic content (e.g., user profiles) and use appropriate headers. Static material can have longer cache periods, whereas dynamic content may need shorter caching or none at all.
Set appropriate max-age values based on your content’s update frequency as it specifies how long a resource can be cached before being considered stale
Use the immutable directive for truly static assets that never change (for example, versioned images or JavaScript files). This tells caches never to revalidate the resource, optimizing performance. Since getStaticProps has no header option, set such headers via the headers() function in next.config.js:
// next.config.js
module.exports = {
  async headers() {
    return [
      {
        source: '/fonts/:path*',
        headers: [
          // Never revalidate these versioned assets
          { key: 'Cache-Control', value: 'public, max-age=31536000, immutable' },
        ],
      },
    ];
  },
};
- Use no-cache and no-store for dynamic content, together with an appropriate rendering strategy (for example, SSR), to ensure that users always see the most recent version. However, use caution, because these directives can have a considerable impact on performance if used excessively. In getServerSideProps, set the header on the response object:
export async function getServerSideProps({ res }) {
  // Don't store this response in any cache (use with caution)
  res.setHeader('Cache-Control', 'no-store');

  // ... fetch data (for dynamic content)
  return {
    props: {
      data: data,
    },
  };
}
- Experiment with different Cache-Control settings and monitor real-world performance in your browser's developer tools (Network tab) or with tools like PageSpeed Insights and Lighthouse to find the best balance between the benefits of caching and the freshness of your content.
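One way to keep these rules consistent across an app is a small policy helper that maps content categories to header values (the categories and values below are illustrative defaults, not a Next.js API):

```javascript
// Hypothetical mapping from content category to a Cache-Control value
function cacheControlFor(kind) {
  switch (kind) {
    case 'immutable-asset':
      return 'public, max-age=31536000, immutable'; // versioned files, never revalidate
    case 'static-page':
      return 'public, max-age=3600, stale-while-revalidate=60'; // e.g., blog articles
    case 'dynamic':
      return 'private, no-store'; // e.g., user profiles
    default:
      return 'no-cache'; // always revalidate when unsure
  }
}
```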
Client Side Caching
Client-side caching involves storing data obtained from the server in the user's browser. This cached data can be used for future requests, avoiding the need to fetch from the server again. Typical client-side caching methods include:
Browser Cache (HTTP Cache): The browser caches resources on its own, guided by Cache-Control headers from the server. Next.js already handles some of these headers by default.
Service Workers: Scripts that run in the background of a webpage, even when it isn't being used actively. They can intercept network requests and serve cached data when it's available.
Web Storage: There are two primary choices for web storage offered by the browser:
localStorage: Information is kept around even when the browser window is closed.
sessionStorage: When a browser window or tab is closed, data is removed.
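Since Web Storage has no built-in expiry, a common pattern is to wrap it with a small TTL layer. Here is a sketch (the API shape is hypothetical; in the browser you would pass window.localStorage or window.sessionStorage):

```javascript
// Hypothetical TTL wrapper over a Web Storage-like object
function createStorageCache(storage, now = () => Date.now()) {
  return {
    set(key, value, ttlMs) {
      // Store the value together with its expiry timestamp
      storage.setItem(key, JSON.stringify({ value, expiresAt: now() + ttlMs }));
    },
    get(key) {
      const raw = storage.getItem(key);
      if (!raw) return null;
      const { value, expiresAt } = JSON.parse(raw);
      if (expiresAt <= now()) {
        storage.removeItem(key); // expired: evict and report a miss
        return null;
      }
      return value;
    },
  };
}
```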
CDN Integration
Picture a network of servers positioned around the globe: that is a Content Delivery Network (CDN). By integrating a CDN with your Next.js application, you spread your static content (images, JavaScript, and CSS) across these servers. Bringing your content closer to users greatly accelerates loading times.
Benefits of CDN Integration:
Faster Load Times: By serving material from the closest CDN server, users’ experience is delivered with less latency.
Greater Scalability: High traffic spikes can be managed by CDNs without affecting your primary server.
Reduced Bandwidth Costs: Your server will be less taxed when static material is offloaded to the CDN, which may result in lower bandwidth bills.
Increased Availability: Because CDNs are dispersed geographically, your material is still available even in the event that one server goes down.
Next.js Cache Control: staleTimes, revalidatePath, revalidateTag
Enter three Next.js features that solve different pieces of this puzzle: staleTimes, revalidatePath, and revalidateTag. They all deal with caching, but they operate in completely different realms and serve distinct purposes.
What is staleTimes?
staleTimes is Next.js’s experimental solution for controlling client-side router cache duration. Think of it as a timer that determines how long your app keeps cached page data in the browser before fetching fresh content.
Here’s the basic setup:
// next.config.js
const nextConfig = {
  experimental: {
    staleTimes: {
      dynamic: 30, // 30 seconds for dynamic pages
      static: 180, // 3 minutes for static pages
    },
  },
}
The configuration works with two key properties:
dynamic: Controls cache duration for pages that aren’t statically generated (default: 0 seconds in Next.js 15)
static: Controls cache duration for statically generated pages and prefetched links (default: 5 minutes)
What is revalidatePath?
revalidatePath is your server-side cache invalidation tool. It’s like a precision strike that tells Next.js to throw away cached data for specific routes and fetch fresh content on the next visit.
import { revalidatePath } from 'next/cache'
// Revalidate a specific URL
revalidatePath('/blog/my-post')
// Revalidate all URLs matching a pattern
revalidatePath('/blog/[slug]', 'page')
// Nuclear option: revalidate everything
revalidatePath('/', 'layout')
Imagine you have a blog, and you add a new post:
// app/actions.ts
'use server'
import { revalidatePath } from 'next/cache'
import { db } from './db'
export async function addPost(formData) {
  // Add new post to your database
  await db.post.create({
    data: {
      title: formData.get('title'),
      content: formData.get('content'),
    },
  })

  // Immediately revalidate the blog listing page
  revalidatePath('/blog')
}
The function accepts two parameters:
path: The route to revalidate
type: Either ‘page’ (specific page only) or ‘layout’ (page and all nested routes)
What is revalidateTag?
Next.js has a cache tagging system for fine-grained data caching and revalidation.
When using fetch or unstable_cache, you have the option to tag cache entries with one or more tags. Then, you can call revalidateTag to purge the cache entries associated with that tag.
For example, you can set a tag when fetching data:
// Cache data with a tag
fetch(`https://...`, { next: { tags: ['a', 'b', 'c'] } })
Then, call revalidateTag with a tag to purge the cache entry:
// Revalidate entries with a specific tag
revalidateTag('a')
There are two places you can use revalidateTag, depending on what you're trying to achieve:
Route Handlers - to revalidate data in response to a third-party event (e.g., webhook). This will not invalidate the Router Cache immediately, as the Route Handler isn't tied to a specific route.
Server Actions - to revalidate data after a user action (e.g., form submission). This will invalidate the Router Cache for the associated route.
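The mechanics of tag-based purging can be illustrated with a toy in-memory cache (this mimics the idea, not Next.js's internal implementation):

```javascript
// Toy tagged cache mimicking the revalidateTag idea
function createTaggedCache() {
  const entries = new Map(); // key -> { value, tags }
  return {
    set(key, value, tags = []) {
      entries.set(key, { value, tags });
    },
    get(key) {
      const entry = entries.get(key);
      return entry ? entry.value : undefined;
    },
    // Purge every entry carrying the given tag
    revalidateTag(tag) {
      for (const [key, entry] of entries) {
        if (entry.tags.includes(tag)) entries.delete(key);
      }
    },
  };
}
```

Purging by tag leaves unrelated entries untouched, which is exactly the fine-grained behavior the feature is designed for.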
Cutting Redundant Data Fetches with React’s cache in React 19
React 19 introduced new features that improve data fetching in server-side React apps. One of them is the cache function, a built-in mechanism for memoizing the result of a function call on the server. In simpler terms, the cache function lets you avoid repeating expensive computations or data fetches when the same input occurs again within a single render cycle. It does this through memoization: storing the results of function calls and reusing them when the same inputs are passed again.
The cache API is only available in React Server Components (RSC). In client-side React, you would use other techniques (such as the useMemo and useEffect hooks) for caching, but on the server, the cache function is the right tool.
The problem of redundant data fetches
In a framework such as Next.js 15, it’s common to break pages into components or even use parallel routes (multiple segments rendered concurrently). These components might need the same data. For example, two different parts of the page both require a user’s profile or a list of products.
Without caching, this often means making duplicate database (DB) or API calls. For instance, if two sibling components (or parallel route segments) each query the DB for the current user’s info, you’d normally end up hitting the DB twice for the same query. This slows your app, increases server load, and potentially leads to inconsistent data if one request finishes before the other.
In the past, avoiding this redundancy required workarounds like lifting data fetching up to a common parent and passing props down or implementing a manual caching layer. However, lifting the state up makes components less modular. Custom caching logic also gets messy. What we want is a way for each component to request the data it needs independently while automatically deduplicating identical requests during the render cycle. This is exactly the use case React’s cache function is designed to handle.
How React’s cache works
React’s cache function creates a memoized version of any asynchronous function. When you wrap a function with cache, React will store its result in memory the first time it's called. Subsequent calls to the cached function with the same arguments will return the cached result instantly, skipping the actual function execution. In other words, the first call is a cache miss (triggers the real fetch or computation), and later calls (with identical parameters) are cache hits that reuse the result.
This caching is scoped to a single server render lifecycle. During a server-side render, if multiple components invoke the same cached function with the same arguments, React runs the underlying operation (e.g., the DB query) only once. It prevents redundant, expensive operations by reusing the cached result. That avoids unnecessary network requests. This means fewer DB hits or API calls for that page render, leading to better performance and consistency. All components get the same result object, so they stay in sync.
Equally important, the cache is automatically cleared after the render is complete. React clears the cache after every server request. It invalidates all cached results at the end of each server request/render, so the next time you render that page (or another user requests it), React fetches fresh data.
There’s no need to worry about manual cache invalidation or stale data in subsequent requests. By design, each new request gets a new, empty cache. This makes the cache function a per-request memoization. Each call to cache(fn) returns a new memoized function with its own storage, and errors are cached the same way as successful results.
It differs from persistent caches (like data cached to disk or long-lived in-memory caches). Those would need explicit invalidation strategies, but this built-in cache is transient and lives only for the duration of the render. As the Next.js docs note, the cache (for memoized fetches or functions) "lasts the lifetime of a server request until the React component tree has finished rendering" and is not shared across requests.
In practice, this means each page load or navigation gets fresh data, which is ideal for most cases where you want real-time updated info on a new request.
Using React’s cache in a Next.js App (Supabase Example)
Let’s walk through a concrete example to see the cache function in action. Imagine a Next.js app using the App Router with two parallel route segments on a dashboard page: say, a user profile section and a sidebar with user stats. Both segments need to retrieve the user's profile data from the DB. We'll use Supabase as the DB client for this example (Supabase provides a JS client to query your DB directly from server code).
First, we can set up a Supabase client (for example, using the project URL and anon key in an environment config). Then we write a data-fetching function to get the user profile. We’ll wrap this function with the cache function:
Note that Next.js 15 defaults fetch to cache: 'no-store'. Identical calls are still deduplicated within one render, but each new request fetches fresh data unless you opt in with cache: 'force-cache' or revalidation.
// utils/dataFetcher.ts
import { createClient } from '@supabase/supabase-js';
import { cache } from 'react';
const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
);
// Define a cached function to fetch a user profile by ID
export const getUserProfile = cache(async (userId: string) => {
const { data, error } = await supabase
.from('profiles')
.select('*')
.eq('id', userId)
.single();
if (error) throw error;
return data;
});
In the code above, getUserProfile is a memoized function. The first time you call getUserProfile('abc123') for a given request, it will actually run the query against Supabase. If you call getUserProfile('abc123') again (anywhere else in the React tree during the same render), it will instantly return the cached result instead of querying the DB a second time. This pattern can be seen in real projects. For example, a Supabase utility might export cached queries like this to prevent duplicate calls.
Now, suppose our Next.js page uses parallel routes or multiple components that need this data:
// app/dashboard/@main/page.tsx
import { getUserProfile } from '@/utils/dataFetcher';
export default async function MainSection({ userId }) {
const user = await getUserProfile(userId);
return <ProfileDetails user={user} />;
}
// app/dashboard/@sidebar/page.tsx
import { getUserProfile } from '@/utils/dataFetcher';
export default async function Sidebar({ userId }) {
const user = await getUserProfile(userId);
return <UserStats user={user} />;
}
In this simplified example, MainSection and Sidebar are two parallel segments (or could simply be two sibling server components) that both fetch the user’s profile. Thanks to React’s cache, the database is hit only once for getUserProfile(userId) during the server render. The first call (say, in MainSection) will query the DB (cache miss), and the second call (in Sidebar) will find the result already in cache (cache hit), avoiding another DB round-trip. Without the cache, we would have executed two identical queries to Supabase. With cache, React makes sure those components share the work and render the same snapshot of data.
It’s worth noting that this caching works for any kind of function, not just DB calls. You could use it to memoize expensive calculations or calls to other APIs or CMS systems. The key is that the function should be called within the React server render cycle (e.g., inside an async server component or during data fetching in a Next.js route).
If you try to call the cached function outside of rendering (for example, at the top level of your module when defining it or in a non-React context), it will still execute but won’t use React’s cache. React only provides cache access during the rendering of components. In practice, you usually call the cached function inside your server components as shown above, which is the correct usage.
Benefits of Using React’s cache
Fewer DB/API calls: By deduplicating identical requests, you drastically cut down on redundant calls. The same function called multiple times with the same arguments will hit the server or DB only once. This reduces server workload and network traffic.
Improved Performance: Reusing cached data results in faster response times and a smoother user experience. The page can render faster since subsequent components don’t have to wait for repeat fetches. In our example, the second component gets the data almost instantly from memory.
Consistent Data: When multiple parts of the UI request the same info, using a single cached result means they all render with the same data. There’s no risk of one component showing stale data while another fetches fresh data moments later. During that render, they share the exact result.
Simplified Code: React’s cache lets each component fetch what it needs without complex prop drilling or higher-level coordination. This keeps components more independent and readable, while the caching happens transparently under the hood. You don’t need ad hoc context providers or singletons to share data. Just call the cached function wherever needed.
No Manual Invalidation Needed: Because the cache resets on every new request, you get fresh data on the next render by default. This ephemeral caching means you don’t have to write extra logic to invalidate or update the cache for new requests. (If you do want to cache across requests, you’d use other Next.js opt-in caching features, but that’s a different mechanism beyond our scope here.)
How To Fix the Most Annoying Hydration Errors In Next.js
We have all been there.
We excitedly spin up a Next.js project, everything looks good in development, and then — boom — a wild “Hydration failed“ error appears in the console the moment you hit production.

You refresh.
You curse.
You Google.
You copy-paste a random useEffect fix from Stack Overflow.
And yet, the error still haunts you.
Don’t worry, I have been down this road too, and I’m here to help you actually understand why hydration errors happen in Next.js and how to fix them properly.
First, what even is a hydration error?
Next.js uses server-side rendering to send fully rendered HTML to the browser. Then React takes over on the client side, reconciling what was sent from the server with what it needs to render on the client.
A hydration error happens when React notices a mismatch between what was pre-rendered on the server and what’s rendered on the client.
Warning: Text content did not match. Server: "Hello, John" Client: "Hello, Jane"
Translation:
The server expected “Hello, John”, but by the time the client rendered, it became “Hello, Jane”.
React freaks out because it doesn’t know which version is correct.
Common Causes Of Hydration Errors (and How To Fix Them)
1. State That Changes on Mount (a.k.a. The Classic useEffect Trap)
You might be fetching data only on the client side, which means the content is different on the initial render vs. after hydration.
❌ The Problem

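A minimal sketch of this trap (the /api/user endpoint is a hypothetical stand-in):

```tsx
'use client';
import { useEffect, useState } from 'react';

export default function Greeting() {
  const [name, setName] = useState('Guest'); // server renders "Welcome, Guest"

  useEffect(() => {
    // hypothetical endpoint; updates the text right after the first render
    fetch('/api/user')
      .then((res) => res.json())
      .then((user) => setName(user.name)); // becomes "Welcome, John Doe"
  }, []);

  return <h1>Welcome, {name}</h1>;
}
```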
Why does this fail?
On the server, it renders “Welcome, Guest”.
On the client, useEffect runs and updates it to "Welcome, John Doe".
React panics because the DOM content doesn’t match.
✅ The Fix: Use useState Only If You Need It
Instead of hydrating with different content, make sure the state is consistent on both the server and client:

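A sketch of the server-first version, assuming a hypothetical getUser helper that queries your datastore:

```tsx
// app/greeting/page.tsx (Server Component)
import { getUser } from '@/lib/user'; // hypothetical server-side helper

export default async function GreetingPage() {
  const user = await getUser(); // fetched before any HTML is sent
  // server and client now render the exact same markup
  return <h1>Welcome, {user.name}</h1>;
}
```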
Key Takeaway: Fetch data on the server if possible, so the client and server match.
2. Using window, document, or localStorage During Render
Hydration errors often happen when you try to access browser-specific APIs before the component mounts.
❌ The Problem

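A minimal sketch of the crash-prone version:

```tsx
'use client';

export default function ThemeLabel() {
  // Runs during render on the server too, where localStorage is undefined,
  // so server-side rendering throws before hydration even starts
  const theme = localStorage.getItem('theme') ?? 'light';
  return <p>Theme: {theme}</p>;
}
```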
Why does this fail?
localStorage doesn’t exist on the server.
Your component crashes because it tries to access localStorage before hydration.
✅ The Fix: Use useEffect to Access the Browser API

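A sketch of the fixed component, reading localStorage only after mount:

```tsx
'use client';
import { useEffect, useState } from 'react';

export default function ThemeLabel() {
  const [theme, setTheme] = useState('light'); // same value on server and client

  useEffect(() => {
    // runs only in the browser, after hydration
    setTheme(localStorage.getItem('theme') ?? 'light');
  }, []);

  return <p>Theme: {theme}</p>;
}
```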
Key Takeaway: Only use browser APIs inside useEffect, since it runs after hydration.
3. Components That Render Differently on the Server and Client
Sometimes, your component’s initial render is different between SSR and CSR.
❌ The Problem

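A minimal sketch of a component that renders a different timestamp on every pass:

```tsx
export default function Clock() {
  // Evaluated once on the server and again on the client;
  // the two timestamps will practically never match
  return <p>Rendered at: {new Date().toLocaleTimeString()}</p>;
}
```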
Why does this fail?
The server renders one timestamp.
The client renders a different timestamp.
Boom! Hydration error.
✅ The Fix: Only Render on the Client
Use dynamic imports with ssr: false to disable SSR:

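A sketch of the dynamic-import approach (note that in the App Router, ssr: false is only allowed inside Client Components):

```tsx
'use client';
import dynamic from 'next/dynamic';

// With ssr: false, Clock renders only in the browser,
// so there is no server HTML to mismatch against
const Clock = dynamic(() => import('../components/Clock'), { ssr: false });

export default function ClockSection() {
  return <Clock />;
}
```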
Or, use useEffect to update the time after hydration:

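A sketch of the useEffect approach, rendering a stable placeholder until after hydration:

```tsx
'use client';
import { useEffect, useState } from 'react';

export default function Clock() {
  // null on the server AND on the first client render, so they match
  const [time, setTime] = useState<string | null>(null);

  useEffect(() => {
    setTime(new Date().toLocaleTimeString()); // set only after hydration
  }, []);

  return <p>Rendered at: {time ?? '...'}</p>;
}
```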
Key Takeaway: If your component depends on the current time, user data, or client-only state, either disable SSR or render it only after hydration.
The Nuclear Option: suppressHydrationWarning
If all else fails, Next.js lets you tell React to ignore the mismatch:

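A minimal sketch (the attribute only suppresses the warning for that element, one level deep):

```tsx
export default function Clock() {
  return (
    // React skips the mismatch warning for this element's text content
    <p suppressHydrationWarning>Rendered at: {new Date().toLocaleTimeString()}</p>
  );
}
```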
Use this only if you know what you’re doing. It hides the warning but doesn’t actually fix the issue.
How I Made a Next.js App Load 10x Faster
Set proper HTTP Headers for Caching
Caching is essential for improving performance and reducing server load. When a browser caches static assets (like JavaScript, CSS, and images), it can reuse those files instead of downloading them again on every visit. This significantly speeds up page loads for returning users.
For example, by setting a Cache-Control header like public, max-age=31536000, immutable, you tell the browser to cache the file for one year and not check for updates. This works well for assets that don’t change frequently, such as fonts, logos, or versioned build files.
You can configure caching headers in next.config.js using the headers() async function. This ensures consistent behavior across all your static files and can boost your app’s performance, especially on repeat visits.
module.exports = {
async headers() {
return [
{
source: '/:all*',
headers: [
{ key: 'Cache-Control', value: 'public, max-age=31536000, immutable' },
],
},
];
},
};
Embrace Server Components to minimize client-side JavaScript
One of the biggest features introduced in Next.js 13 (and fully embraced in Next.js 15) is React Server Components (RSCs). RSCs allow you to render a component on the server and send pure HTML to the client, without shipping any JavaScript bundles to the browser. The result? Drastically reduced JS payloads and faster load times. In other words, components that don’t need interactivity can be delivered already “cooked“ from the server — the browser just needs to display the HTML, without any hydration or JavaScript bundles.
Why it helps: Every bit of JavaScript that the browser doesn’t have to download and execute makes the page load faster and respond quicker. By keeping as much as possible on the server, you reduce the bundle size and improve startup time. As React expert Josh Comeau notes, “Server Components don’t get included in the bundle size, which reduces the amount of JavaScript that needs to run, leading to better performance.” In practice, we have seen a project where moving heavy data processing from the client to the server cut the client-side bundle to 30% of its original size, significantly improving load time.
How to use it: In the Next.js App Router, components are Server Components by default (unless you add the "use client" directive). Design your pages so that any component that doesn’t require browser interactivity (displaying data, static content, etc.) remains a Server Component. Fetch data directly in these components (e.g. using fetch() in an async component) and render the content on the server. For parts that do need interactivity (e.g., a button click handler or dynamic state), isolate them into small Client Components with "use client" at the top. This way, you send minimal JS to the client (only the code needed for interactive elements) while the bulk of the UI arrives as static HTML.
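A sketch of that split, with a hypothetical products API: the page stays a Server Component, and only the small button ships JavaScript:

```tsx
// app/products/page.tsx (Server Component by default: no "use client")
import AddToCartButton from './AddToCartButton';

export default async function ProductsPage() {
  // fetched and rendered on the server; this code never ships to the browser
  const res = await fetch('https://api.example.com/products'); // hypothetical API
  const products: { id: string; name: string }[] = await res.json();

  return (
    <ul>
      {products.map((p) => (
        <li key={p.id}>
          {p.name}
          <AddToCartButton id={p.id} /> {/* the only hydrated, interactive part */}
        </li>
      ))}
    </ul>
  );
}

// app/products/AddToCartButton.tsx
'use client';
export default function AddToCartButton({ id }: { id: string }) {
  // client-side interactivity lives in this small component only
  return <button onClick={() => console.log('add', id)}>Add to cart</button>;
}
```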
Before/After example: Imagine a dashboard page that originally loads all data through client-side JavaScript and renders a big React bundle in the browser. Users see a loading spinner for a couple of seconds while the browser fetches data. By refactoring to use Server Components, the HTML page now comes pre-filled with data (no big client fetch needed), and the JS bundle size drops by 50%. Time to First Byte (TTFB) improved because the server does data fetching in parallel, and First Input Delay (FID) improved since the browser has less JavaScript to execute before handling interactions. Users now see the content instantly and can interact with it within milliseconds of page load.
Stream and Selectively Hydrate for Instant Interaction
Delivering HTML fast is only half of the story — we also want users to interact with the page as soon as possible, without waiting for all JavaScript to finish loading. This is where React 18’s streaming SSR and selective hydration come into play. Next.js 15 (powered by React 18+) can stream HTML in chunks and hydrate portions of the UI incrementally, rather than blocking on the entire page. In practice, that means your app can show useful content sooner and become interactive faster, improving metrics like TTFB and FID.
With Streaming Server-Side Rendering, the server doesn’t wait for the entire page before sending HTML. Instead, it sends each piece of the page as soon as its data is ready (often gated by React <Suspense> boundaries). Meanwhile, Selective Hydration lets React hydrate those HTML chunks independently as they arrive. If the user interacts with a part of the page that hasn’t hydrated yet, React will prioritize hydrating that part first — no more waiting for the entire app to hydrate before any interactions work.
Why it helps: Traditional SSR would send the full HTML, but the page stayed non-interactive until all the JavaScript was downloaded and all components were hydrated in one go (an “all-or-nothing” hydration). In large apps, that could lead to a long delay, poor FID, and a frustrating user experience. With streaming and selective hydration, smaller components can hydrate and become interactive immediately without waiting for larger, slower parts. This greatly improves First Contentful Paint (FCP) and FID. React 18’s architecture is a game-changer: “HTML streaming sends partial HTML as it’s ready, and selective hydration means you can start interacting with the app before all components have hydrated.”
How to use it: Next.js handles a lot of this automatically under the hood, but you should structure your app to take advantage of it. Use React’s <Suspense> boundaries to wrap parts of the UI that can load asynchronously. For example, you might suspense-wrap a product reviews section or user-specific info. Provide a lightweight fallback (like a spinner or placeholder) that can be rendered immediately. Next.js will stream the page HTML with the fallback in place of the slow section, and hydrate the rest. Once the data for that section is ready, the server streams the HTML for it, and React swaps in the real content and hydrates it. The key is: split your UI into bite-sized chunks and use Suspense for anything that might delay rendering. This enables progressive hydration.
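A sketch of such a boundary (Reviews is a hypothetical async Server Component that fetches its own data):

```tsx
// app/product/page.tsx
import { Suspense } from 'react';
import Reviews from './Reviews'; // hypothetical async Server Component

export default function ProductPage() {
  return (
    <main>
      <h1>Product</h1> {/* streamed to the browser immediately */}
      <Suspense fallback={<p>Loading reviews...</p>}>
        <Reviews /> {/* its HTML streams in once the data resolves */}
      </Suspense>
    </main>
  );
}
```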
Real-world example: The Next.js team introduced Partial Prerendering in v15, which combines streaming and caching strategies. For instance, a dashboard page can prerender static metrics at build time (so they show up instantly) and stream in user-specific live data (like recent activity) when ready. The static parts are interactive immediately, and the dynamic part hydrates once loaded. This approach “significantly improves TTFB and LCP” by getting useful content on screen fast. In one experiment, enabling streaming SSR with Suspense cut the Largest Contentful Paint from ~3s to ~1.5s, because the largest element (a hero image and headline) was sent immediately and not held up by slower widgets. Users could also click navigation links almost right away, whereas before they had to wait for the entire app bundle to load. The page felt instant.
Optimize Images For Faster Loads (Use Next/Image)
High-quality images are often the largest part of a page, so optimizing images is one of the most impactful things we can do to improve load times and LCP. Next.js provides a built-in solution: the <Image> component (next/image), which handles a ton of image optimizations for you automatically. By using Next.js’s Image component for all your images, you get out-of-the-box performance benefits:
Responsive sizing: It will automatically serve the right size image for each device, generating multiple versions and using modern formats like WebP. This avoids sending a huge 2000px-wide image to a mobile device that only needs 400px, saving bandwidth and time.
Lazy loading: Images are by default lazy-loaded (only fetched when about to scroll into view), which dramatically reduces initial load time. Studies show that loading images on scroll instead of all upfront can decrease initial page load time by up to 60%.
Preventing layout shift: Next/Image fixes a common culprit of bad CLS by requiring width/height (or using intrinsic sizes) so the browser knows the image’s space in advance. No more content jumping around when images load — it’s handled.
Modern formats & compression: Next.js will automatically convert and serve images in newer formats like WebP/AVIF when supported, often 30% smaller than JPEG/PNG for the same quality. It also allows setting quality to balance size vs clarity. All of this means faster loads for users.
Blur-up placeholders: You can enable a low-res blurred placeholder that shows while the image loads, giving an immediate preview and improving perceived performance. This is a nice touch to avoid blank spaces.
In short, Next.js’s image optimization delivers smaller, smarter images and defers non-critical ones, boosting your LCP and overall performance. As the docs put it, the Next/Image component provides “visual stability (no layout shift) and faster page loads by only loading images when they enter the viewport, with optional blur placeholders”. And you hardly have to do a thing — just use <Image src={...} width={...} height={...} /> instead of a raw <img> tag.
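A sketch pulling those pieces together (hero.jpg is a hypothetical static asset; a static import lets Next infer its dimensions):

```tsx
import Image from 'next/image';
import hero from '../public/hero.jpg'; // static import: width/height inferred

export default function Hero() {
  return (
    <Image
      src={hero}
      alt="Sunrise over the mountains"
      priority // above the fold: load eagerly, helps LCP
      placeholder="blur" // blur-up preview while the full image loads
      sizes="(max-width: 768px) 100vw, 50vw" // pick the right variant per device
    />
  );
}
```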
Pro Tips:
Always specify dimensions (width and height) for your images (or use fill for responsive layouts). This ensures Next can reserve space to avoid CLS. If you import a static image, Next will auto-populate its intrinsic width/height for you.
Use priority for above-the-fold images (like hero banners or logos) to load them ASAP, and let less important images stay lazy. This improves the Largest Contentful Paint if that image is the LCP element.
Leverage the sizes attribute on <Image> for responsive layouts. This helps the browser choose the optimal image variant (e.g. serve a smaller image on mobile).
If you have many icons or small graphics, consider SVG or icon fonts, or use CSS sprites, to reduce HTTP requests. But for photos and complex imagery, Next/Image is your best friend.
Before/After example: A travel blog homepage was struggling with an LCP of 4.0 seconds due to multiple large image thumbnails loading at once. By switching those <img> tags to Next <Image> with proper sizes and enabling lazy loading, the initial payload dropped by several MB. The LCP (which was the hero image) improved to 1.8s (well under the 2.5s good threshold), and overall page weight went down ~70%. The CLS issues caused by images suddenly popping in were eliminated (CLS went from 0.25 to near 0). The site felt snappy, and images still looked great – they were just delivered in a smarter way.
Optimize Fonts and Prevent Layout Shifts (Use Next/Font)
Custom web fonts can subtly slow down your site and even cause layout jank (text shifting or appearing late). In Next.js 15, you have a powerful tool to optimize fonts: the next/font module. This utility automatically handles font-loading best practices — including self-hosting fonts, preloading them, and controlling how they swap — all to boost performance.
When you use next/font (either with Google Fonts or your custom fonts), Next.js will:
Inline critical font CSS and remove external network requests to font providers. This means no more waiting on Google Fonts servers; the font files are served from your own site, often faster and more reliably.
Preload the fonts and use efficient loading strategies. By default, Next fonts use font-display: swap (or similar) to avoid blocking text from rendering. Text shows in a fallback font immediately and then swaps to the custom font when ready, which prevents long invisible-text delays.
Subset fonts (only include the characters you need) or use variable fonts to reduce the number of font files. All of this reduces the download size for typography.
According to Vercel’s guidance, “next/font will automatically optimize your fonts (including custom fonts) and remove external network requests for improved privacy and performance.” This improves performance by cutting out an entire round-trip to fetch fonts and by ensuring text is visible ASAP, avoiding a flash of invisible text (FOIT) or sudden layout shifts when fonts load (a flash of unstyled text, or FOUT, can cause CLS when the font metrics differ).
Tips for fonts:
- Use next/font/google for Google Fonts: Instead of <link> tags in your <head>, use the Next font loader. For example:
import { Roboto } from 'next/font/google';
const roboto = Roboto({ subsets: ['latin'], weight: '400' });
// then in your layout/component:
<div className={roboto.className}>Hello</div>
This ensures the font CSS is included in the build and the font is preloaded.
Use variable fonts or limit weights: If possible, use a variable font that covers multiple weights/styles in one file. This reduces the number of font files to load. For example, the Inter font variable version can cover many styles in one file.
Include fallbacks: Choose a fallback system font that has similar metrics to your web font and include it in your CSS stack (e.g.
font-family: Roboto, sans-serif;). This way if the custom font is delayed, the fallback is used without much visual difference. Next/font helps here by simplifying the setup.Beware of huge font files: If your font file is very large (many glyphs or languages), consider using subsets (only the characters needed) or separate font files for different locales. Unused font glyphs are just dead weight.
By optimizing fonts, you’ll see improvements in CLS and FCP. No more pages where content jumps around once the font loads or, worse, text that isn’t visible for a second. It’s a small tweak that makes your site feel polished and fast. In our experience, moving from a standard Google Fonts embed to Next’s automatic font optimization shaved about 200ms off the First Contentful Paint on a news site (since the text could render immediately with fallback and then swap smoothly). Core Web Vitals improved: CLS went practically to zero once we eliminated the layout shift that occurred when the custom font kicked in. Next.js 15’s font handling is robust — by using it, you basically check off another big item in the performance list with minimal effort.
Analyze and Trim Your Bundles (Bundle Analysis and Tree Shaking)
Sometimes the biggest performance gains come from simply shipping less code. Large JavaScript bundles delay both download and execution, hurting metrics like TTI (Time To Interactive) and FID. That’s why a key step in optimization is to analyze your bundles and eliminate anything that isn’t absolutely needed on the client. The goal: keep your client-side JavaScript bundles as lean as possible.
Start with bundle analysis: Next.js provides an official plugin for bundle analysis (@next/bundle-analyzer). Enable it to get a visual treemap of your JS bundles, showing the size of each npm package and module in your app. Often, you will discover surprising things: maybe a polyfill or a big library included by accident, or duplicate copies of a dependency. As a best practice: “Regularly use tools like Bundle Analyzer to visualize your bundle composition and identify opportunities for optimization“.
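A sketch of the usual setup (assuming the @next/bundle-analyzer package is installed):

```js
// next.config.js
const withBundleAnalyzer = require('@next/bundle-analyzer')({
  enabled: process.env.ANALYZE === 'true', // only analyze when explicitly asked
});

module.exports = withBundleAnalyzer({
  // ...your existing Next.js config
});
```

Then run ANALYZE=true next build to generate the treemap report.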
What to look for:
Large libraries: Do you really need that heavy date library, or can you use a lighter alternative? For example, moment.js (huge) could be replaced with a smaller library or native Intl APIs. If a library is necessary, consider importing only specific parts (many libraries support modular imports)
Duplication: Ensure you are not importing two versions of a library. Bundle Analyzer will highlight if, say, lodash is included twice. This can happen if you have multiple versions in sub-dependencies. Resolve them or pin a consistent version to avoid bloat.
Dead code: Tree shaking usually removes code you don’t use, but only if that code is written in a tree-shakable way (as ES modules). Ensure libraries are up to date and use ESM, and avoid sneaky patterns that prevent tree shaking (like importing an entire library when you only need one function).
Modern build tools can eliminate a lot of dead code. In fact, projects leveraging aggressive tree-shaking often see 30%-50% reductions in bundle size. Next.js (with Turbopack and Webpack 5) will tree-shake as much as your dependencies allow. You can help by not importing things you don’t need. For example, if you only need an icon from an icon set, don’t import the whole set.
Actionable steps to trim bundles:
Run next build with analysis (using the ANALYZE=true environment variable or the plugin) and open the bundle report. Identify the top offenders.
Remove or split out heavy modules: If you find a module that’s huge and only used on one page, consider dynamically importing it (see next tip) so it’s not in the main bundle. Or find a lighter alternative.
Optimize dependencies: Import only what you use. For instance, import lodash brings in everything (if not tree-shaken); instead, do import debounce from 'lodash/debounce' to get that one function (or use lodash-es, which is tree-shakable). Similarly, for date-fns, import individual functions instead of the whole library.
Polyfills: Next.js by default polyfills only as needed. But double-check you’re not unintentionally pulling in heavy polyfill packages. Use core-js with targets if needed to avoid full polyfills.
Strip dev code: Ensure process.env.NODE_ENV is set to production in the build (Next does this by default), so that any dev-only code or logging is dropped.
By iteratively trimming the fat, you can often get your main bundle to well under 100 KB gzipped (depending on the app). For example, one of our projects had a ~300 KB bundle; after analysis, we removed an unused markdown parser and switched a charts library for a lighter one, dropping to ~90 KB. The payoff was visible: faster load and interactivity. As one guide noted, even going from ~150KB to under 100KB can make a difference in load time. And of course, smaller bundles mean less JavaScript blocking the main thread, improving FID.
Remember, “reduce code on the client to a minimum“ is the core principle for performance. Every byte and every millisecond counts. By shipping only what is necessary, you not only speed up your app but also reduce the memory usage and battery drain on devices. Bundle Analysis and Tree-Shaking are your allies in this quest for a slimmer, faster Next.js app.
Code-Split By Dynamic Imports (Load Only What You Need)
Even after trimming your bundle, don’t load all that code upfront if users don’t immediately need it. Code-splitting is a technique to split your JS bundle into smaller chunks that can be loaded on demand. In Next.js, you automatically get code-splitting by page; each page’s JS is split out, and you can and should go further with dynamic imports for parts of your pages that can be deferred. The ideal is to ship the minimal code for the initial load, and load the rest asynchronously when needed.
Using Next.js dynamic imports (or React.lazy), you can turn virtually any component into a separately loaded chunk. For example, suppose you have a very large chart component on the dashboard that isn’t visible until the user scrolls down or interacts — you can import it dynamically so it’s not in the initial JS. This improves initial load times dramatically.
Benefits: Studies indicate that a well-structured code-splitting strategy can improve initial loading performance by ~40%. We’ve seen cases where dynamic imports reduced the initial bundle by 50%, making the app load twice as fast on first visit. It also has a business impact: faster initial loads can lead to higher conversion; one report noted 20–25% increased conversions with faster pages (users stick around when the first page loads quickly!).
How to do it in Next.js:
- Dynamic Imports In Next.js
Dynamic Imports are a cornerstone of lazy loading in Next.js, enabling code-splitting at the component level. This feature allows you to load specific components or entire libraries only when they are required by the user, rather than including them in the initial JavaScript bundle. Next.js provides the next/dynamic utility, a combination of React's React.lazy() and Suspense, offering a seamless way to implement this optimization.
import dynamic from 'next/dynamic';

const Chart = dynamic(() => import('../components/Chart'), {
loading: () => <p>Loading chart...</p>, // Displays this while the Chart component loads
});
export default function Dashboard() {
return (
<div>
<h2>Sales Dashboard</h2>
<Chart />
</div>
);
}
Key Points:
Excluded from the initial bundle: The JavaScript for Chart is not downloaded when the Dashboard page initially loads.
Loaded only when rendered: The component’s code is fetched and executed only when React attempts to render Chart in the DOM.
Automatic code splitting: Next.js automatically creates separate JavaScript chunks for dynamically imported components. Server Components are code-split by default, and lazy loading specifically applies to Client Components.
- Conditional Rendering
This strategy involves dynamically importing and rendering components only when a specific user interaction or condition is met. This is particularly useful for features that are not immediately visible or essential on page load, such as modals, accordions, and video players that only activate on a click.
import dynamic from 'next/dynamic';
import { useState } from 'react';
const VideoPlayer = dynamic(() => import('../components/VideoPlayer'));
export default function VideoSection() {
const [showPlayer, setShowPlayer] = useState(false);
return (
<div>
<button onClick={() => setShowPlayer(true)}>Play Video</button>
{/* VideoPlayer component is only loaded and rendered when showPlayer is true */}
{showPlayer && <VideoPlayer />}
</div>
);
}
In this example, the VideoPlayer component's code is only downloaded and rendered when users click a “Play Video“ button. This prevents the browser from downloading a potentially large video player library until the user explicitly requests the functionality.
- Intersection Observer API
The Intersection Observer API provides a performant way to detect when an element enters or exits the viewport. This is ideal for lazy loading components that are “below the fold“ (not immediately visible on the screen) or for implementing infinite scrolling patterns. By loading components when they are about to become visible, you can significantly reduce the initial load time and resource consumption.
'use client'; // This component uses browser APIs and React Hooks
import { useEffect, useRef, useState } from 'react';
import dynamic from 'next/dynamic';
const Reviews = dynamic(() => import('../components/Reviews'));
export default function ReviewsSection() {
  const ref = useRef(null);
  const [isVisible, setIsVisible] = useState(false);

  useEffect(() => {
    const observer = new IntersectionObserver(([entry]) => {
      if (entry.isIntersecting) {
        setIsVisible(true);
        observer.disconnect(); // Stop observing once the component is visible and loaded
      }
    });
    if (ref.current) observer.observe(ref.current);
    return () => observer.disconnect(); // Clean up observer on unmount
  }, []); // Run once on mount

  return (
    <div ref={ref}>
      {/* Reviews component is only loaded and rendered when it becomes visible in the viewport */}
      {isVisible && <Reviews />}
    </div>
  );
}
Here, the Reviews component is dynamically loaded only when its containing div (referenced by ref) enters the user's viewport. This is a common pattern for sections like customer reviews, comments, or image galleries that appear further down a page.
- Route-Based Code Splitting
Next.js inherently optimizes performance through automatic route-based code splitting. This means that each page (or route) in your application is automatically split into its own independent JavaScript bundle. For example, the JavaScript for pages/about.tsx (or app/about/page.tsx in the App Router) is only loaded when the /about route is accessed.
This automatic behavior is a significant advantage, as it ensures that users only download the code necessary for the specific pages they are viewing, rather than a monolithic bundle containing the entire application’s JavaScript. This leads to faster initial page loads and improved overall performance.
Tips: To maximize the benefits of route-based code splitting, avoid importing large libraries or components globally (e.g., in _app.tsx or a root layout in the App Router) unless they are truly essential for every single page of your application. Global imports can negate the advantages of code splitting by forcing unnecessary code into every page’s initial bundle.
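As a sketch of this tip, a heavy charting component can be imported dynamically inside the one page that uses it instead of in a shared layout. The SalesChart component and its paths below are hypothetical names for illustration:

```typescript
// app/dashboard/page.tsx (hypothetical): only /dashboard downloads this chunk
import dynamic from 'next/dynamic';

// Imported here (not in the root layout), so the chart library stays out of
// every other page's bundle and is code-split into its own chunk.
const SalesChart = dynamic(() => import('../../components/SalesChart'), {
  loading: () => <p>Loading chart…</p>,
});

export default function DashboardPage() {
  return <SalesChart />;
}
```

Had SalesChart been imported in the root layout instead, its code would ship with every route's initial bundle, which is exactly what this tip warns against.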
- Lazy Loading Third-Party Scripts
Third-party scripts, such as analytics trackers, advertising scripts, or social media embeds, can often be significant performance bottlenecks. They can block the main thread, delay rendering, and negatively impact Core Web Vitals. Next.js provides the next/script component to give you fine-grained control over when and how these external scripts are loaded, preventing them from hindering your application’s performance.
import Script from 'next/script';
export default function Page() {
return (
<>
{/* This script will load during the browser's idle time, after the page is interactive */}
<Script
src="https://example.com/analytics.js"
strategy="lazyOnload"
/>
<p>Page content</p>
</>
);
}
Strategies Available with next/script:
strategy="beforeInteractive": This loads the script before the page becomes interactive (before hydration). Use this for scripts that are critical for the page's initial functionality or appearance, but still need to be loaded by Next.js.
strategy="afterInteractive": This loads the script after the page has become interactive (after React has hydrated the page). This is a good default for most non-critical scripts that don't need to block the initial render.
strategy="lazyOnload": This defers the loading of the script until the browser's idle time, after the page has fully loaded and become interactive. This is ideal for scripts that are not essential for the core user experience, such as analytics or chat widgets.
By choosing the appropriate loading strategy, you can prevent third-party scripts from negatively impacting your site’s initial load and interactivity.
Real-world example: On an e-commerce site, we had a hefty product comparison feature (with a big library) that was initially loading on every product page, though users rarely used it. We changed it to load dynamically only when the user clicks “Compare”. This removed ~100 KB from the product page bundle. The immediate effect was that the product pages loaded about 30% faster and FID improved because less JS was executed upfront. Users who needed the compare feature experienced a slight delay only when they invoked it, which is a fair trade-off. Overall engagement went up as more users stayed (fast content drew them in, and by the time they clicked compare, the chunk was loaded or in progress).
Another scenario: imagine a blog site with an interactive comments widget that loads below the article. By splitting the comments component (and maybe even using an <IntersectionObserver> to preload it when the user scrolls near), the initial article content loads faster (good LCP), and the comments JS only loads if the user is going to see it. Users who just read and leave aren’t penalized by that extra JS.
Bottom line: users should only download the modules necessary for what they are doing right now. This keeps the app fast and responsive. Next.js makes it easy with dynamic imports, so identify which parts of your app can be lazy-loaded and implement them. Check your webpack bundle analysis before and after; you should see certain chunks carved out. Also, test the user experience to ensure the loading states are acceptable (provide a nice spinner or skeleton if needed). When done right, users won’t even notice that parts of your app are loaded on demand; they will just feel that the app is fast.
Leverage Edge Functions and CDN For Low TTFB
If your Next.js app is global (as most are these days), one way to speed up responses is to run your code as close as possible to the user. Edge Functions allow you to deploy server-side logic to data centers around the world, reducing latency and improving Time To First Byte (TTFB). Next.js on Vercel supports edge runtimes (for middleware or API routes) that run on Vercel’s Edge Network, and you can similarly deploy on Cloudflare Workers. The idea is to avoid a slow transoceanic round trip for each request.
Think of Edge Functions as your “mini servers” distributed worldwide. Instead of a user in Asia having to hit a server in North America (losing hundreds of milliseconds to distance), the request can be handled in Asia itself. As one developer described: “When logic executes close to the user, the TTFB drops significantly. Imagine a user in Singapore hitting an API request to a server in Northern Virginia — that round-trip is brutal. With Edge Functions, the request is handled in Singapore itself: lower TTFB, better FCP, and happier users.”
Ways to use edge in Next.js:
Edge Runtime API Routes: You can create a file under pages/api (or use the App Router’s route handlers) and export config = { runtime: 'edge' }. This deploys that function to the edge. Use it for things like personalization, geolocation-based content, and authentication checks, where responding quickly is crucial.
Middleware: Next.js middleware (in middleware.js) runs at the edge by default. Use it for redirecting or rewriting on the fly with virtually no latency hit, since it runs close to the user.
Cache and static assets on CDN: Next.js automatically serves static files (including static pages) via a CDN. Ensure you take advantage of this by using getStaticProps/getStaticPaths or the App Router’s fetch caching and revalidation settings. Static assets should be CDN-served so that a user in London gets the file from a European server, a user in New York from US East, and so on. This happens by default on platforms like Vercel.
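A minimal sketch of an edge route handler in the App Router follows. The route path is hypothetical, and x-vercel-ip-country is one of the geolocation headers Vercel populates at the edge; other platforms use different headers:

```typescript
// app/api/geo/route.ts (hypothetical App Router route handler)
export const runtime = 'edge'; // Ask Next.js to run this handler on the edge runtime

export async function GET(request: Request): Promise<Response> {
  // Geo headers are populated by the edge platform; fall back when absent
  const country = request.headers.get('x-vercel-ip-country') ?? 'unknown';
  return Response.json({ country });
}
```

Because the edge runtime exposes Web-standard Request/Response objects rather than Node.js APIs, handlers like this stay portable across edge providers.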
By using edge functions and globally distributed caching, you’ll improve not just TTFB but often other metrics like FCP (first paint) since initial HTML arrives sooner. It also allows for dynamic content that’s still fast. For example, you can personalize content at the edge (e.g., show region-specific promos, or the user’s name if logged in) without a slow origin fetch. One can “inject personalization at the edge without slowing down client-side load”, achieving fast dynamic pages with no hydration jank.
Real-world story: A startup had its Next.js app deployed on a single region. Users far from that region experienced TTFB of 500ms to 1s. After migrating key APIs to edge functions and enabling full page caching on the edge for public pages, global users saw TTFB drop to ~100–200ms. For instance, a user in India got a response from a Mumbai edge node in 100ms, whereas before it was ~700ms from a US server. This shaved significant time off the First Contentful Paint as well — the content started arriving faster. Core Web Vitals improved; for example, faster TTFB contributed to an FCP improvement on slow 3G connections by nearly 30%. Edge functions essentially acted as a frontend performance multiplier, as one article put it, by bringing server-side rendering closer to users.
Note: Edge functions do have some limitations (no full Node.js environment, and cold starts, though typically very small). Use them for the performance-critical path, and test under load. The payoff is worth it when your audience is globally distributed. If using Vercel, also consider their Edge Middleware and Edge Config for quick data lookups at the edge (like feature flags or A/B tests) without back-and-forth to the origin.
In summary, run your app at the edge and cache wisely. It lowers latency, yielding snappier interactions. “If you’re serious about slashing TTFB and building experiences that feel instant, edge functions are a must in your stack.”.
Pre-Render and Cache as Much as Possible (SSR, ISR, and Partial Prerendering)
Fetching data and rendering on every request can be expensive and slow. Whenever feasible, let Next.js pre-render pages or parts of pages ahead of time and serve them from cache. Next.js 15 provides a spectrum of rendering strategies: Static Site Generation (SSG) for fully static pages, Incremental Static Regeneration (ISR) for updating static pages periodically, and now Partial Prerendering for a hybrid approach where some parts are static and others are dynamic. Using these wisely can give you the best of both worlds: fast, cached content plus freshness.
Static pre-rendering: If a page’s content can be generated at build time (or even on a schedule), do it! A static page served from a CDN is about as fast as it gets (nearly zero TTFB). Next.js supports SSG via getStaticProps. In the Next 15 App Router, you can achieve similar behavior with the fetch API by using cache: 'force-cache' (for truly static data) or revalidate options for ISR. For example, an e-commerce product page could prerender product details statically and revalidate every hour, so most users get a cached page, and once an hour it updates with any changes. This drastically reduces the load on your servers and speeds up the response.
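As a sketch of that hourly revalidation in the App Router, a product page might look like this. The page path and API URL are hypothetical; note that in Next.js 15 the params prop is a Promise and must be awaited:

```typescript
// app/products/[id]/page.tsx (hypothetical product page using ISR)
export const revalidate = 3600; // Regenerate this page at most once per hour

export default async function ProductPage({
  params,
}: {
  params: Promise<{ id: string }>; // async params, per Next.js 15
}) {
  const { id } = await params;
  // Cached on first render, then refreshed in the background once stale
  const res = await fetch(`https://api.example.com/products/${id}`);
  const product = await res.json();
  return <h1>{product.name}</h1>;
}
```

Most requests are served the cached HTML; at most one regeneration per hour hits the upstream API.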
Partial Prerendering: A new concept in Next.js 15 is the ability to selectively prerender parts of a page. As the Next.js team describes it, “Partial Pre-rendering improves performance by selectively pre-rendering only essential parts of your page during build, while dynamic content loads progressively when needed.” For instance, you might prerender a blog post's content (which changes rarely) but not prerender the comments section (which is dynamic); that dynamic part can load client-side or via SSR. The initial HTML contains the important stuff, giving a fast LCP, and the rest comes in after. This approach was shown to improve metrics like TTFB and LCP in Next.js Conf demos. Essentially, users see the main content quickly, and any live-updating portions stream in.
To use partial prerendering in the App Router, you can combine static and dynamic segments in your page. One method is using the Suspense pattern, as shown in a Next 14 example: render <StaticDashboardMetrics /> (no Suspense, so prerendered) and wrap <UserActivity /> in <Suspense> so it’s fetched at request time. The static parts are built once, and the dynamic part can be SSR or even client-side. This yields an instant static shell with dynamic data loading afterward, boosting perceived performance significantly.
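A minimal sketch of that Suspense pattern, with illustrative component names and paths:

```typescript
// app/dashboard/page.tsx (hypothetical: static shell plus a dynamic island)
import { Suspense } from 'react';
import StaticDashboardMetrics from '../components/StaticDashboardMetrics';
import UserActivity from '../components/UserActivity';

export default function DashboardPage() {
  return (
    <>
      {/* No Suspense boundary: part of the prerendered static shell */}
      <StaticDashboardMetrics />
      {/* Dynamic: resolved at request time and streamed in behind a fallback */}
      <Suspense fallback={<p>Loading activity…</p>}>
        <UserActivity />
      </Suspense>
    </>
  );
}
```

The fallback markup ships with the static shell, so the user sees a complete layout immediately while UserActivity streams in.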
Caching on the server: Next.js 15 also gives fine-grained control with the new caches. You can designate certain fetches to use force-cache (always cache) or no-store (always live) or timed revalidation. For example, fetch(url, { next: { revalidate: 60 } }) in a Server Component will cache that request for 60 seconds. Use this to cache API responses and avoid re-fetching on every request. Cached responses = faster responses.
Use a CDN for static assets and pages: Deployed on platforms like Vercel, your static pages (SSG) and public assets are automatically served via a global CDN. Ensure cache headers are set so that repeat visits are blazing fast (and consider service workers or next-pwa if you want aggressive client-side caching for repeat visits).
Example transformation: A content site with mostly article pages moved from SSR (rendering each request) to ISR with a 5-minute revalidate. Most users then got a static cached page from the nearest edge. The TTFB for those pages dropped from ~800ms (SSR from origin) to ~100ms (cached HTML), and LCP improved because the browser was getting HTML almost immediately. Even when data updates frequently, caching for even a short duration (like 60 seconds) can absorb a lot of traffic and speed up responses for the majority of users. Another example: a dashboard used partial prerendering — it prerendered the layout and static widgets, and SSR’d the user-specific data. The initial paint (with the static content) happened in ~1s, whereas before, the whole thing took ~2.5s to render fully. Users saw something useful very quickly and perceived it as faster, even though some data was still loading.
Bottom line: Don’t generate content on the fly if you don’t need to. Pre-generate it, cache it, and serve it statically whenever possible. Next.js gives you the tools to do this at a granular level (down to per-request or per-component caching). Use ISR for things that can be slightly stale. Use full SSG for truly static content. And with partial prerendering, carve out a static skeleton for dynamic pages. This reduces server load and massively improves scalability and performance. As one blog said about Next 15: partial prerendering delivers “instant static content while dynamically loading personalized elements, significantly improving TTFB and LCP.” Exactly what we want!
Monitor Performance Continuously with the Right Tools
You can’t improve what you don’t measure. To ensure your Next.js app remains lightning fast, you should continuously monitor performance metrics and catch regressions early. Thankfully, there are excellent tools for both lab and real-user measurements that you can integrate into your workflow.
Here are some recommended tools and how to use them:
Lighthouse CI: Google’s Lighthouse (as in Chrome DevTools Audits or PageSpeed Insights) provides lab performance tests. Using Lighthouse CI in your continuous integration pipeline can automatically run performance audits on every build or pull request. Set up a budget (e.g., FCP under 2s, LCP under 2.5s, bundle size under X KB), and Lighthouse CI can fail the build if a change introduces a significant slowdown. This ensures performance is monitored just like tests are. Start with basic Lighthouse checks in CI and gradually add more custom metrics as your team gets comfortable. Over time, you can enforce stricter budgets — e.g., if a developer adds a heavy library, you’ll know before it hits production.
Vercel Analytics: If you’re deploying on Vercel, their Analytics feature provides real user monitoring of Core Web Vitals. It inserts a tiny script to measure actual LCP, FID, etc. from your users and reports to a dashboard. The great thing is it gives you a Real Experience Score (RES), an aggregate of your site’s performance as experienced by real users. This lets you catch cases where users on certain devices or in certain regions are slow, or where a new release hurts performance in the field. Unlike lab tests, this is actual data from real sessions. You can use this in combination with Google’s CrUX data or your own analytics.
Chrome DevTools & Performance Tab: For local profiling and deep dives, nothing beats running your app in Chrome (or Edge) and recording with the Performance tab. This shows you every task on the main thread, paint timings, script evaluations, etc. It’s excellent for diagnosing why a certain interaction is slow or what’s blocking the main thread. For example, the Performance tab can reveal a long task that causes a 300ms input delay. Our team often uses it to pinpoint exactly which function or component is a hotspot. As one CTO said, it went from intimidating to an indispensable tool once you learn to read the flame charts. Use it during development to fine-tune.
WebPageTest: This is a powerful tool for synthetic testing, especially for simulating slow networks or devices. You can run a test from various locations and throttle the connection (e.g., 3G or Slow 4G) to see how your site performs under less-than-ideal conditions. WebPageTest gives incredibly detailed waterfall charts, filmstrip views of rendering, and core vital measurements. It’s great to test a production deployment and see, for example, how the LCP behaves, which resources are loading late, etc. Many teams use WebPageTest for spot checks or integration with performance budgets (there’s an API and even GitHub actions).
Google Analytics (GA4) or custom telemetry: GA4 can track Web Vitals as well (with some custom code or plugins). Alternatively, there are specialized services (SpeedCurve, Calibre, DebugBear, etc.) that continuously monitor your site’s performance from multiple regions. These can alert you if, say, LCP degrades beyond a threshold.
In the Next.js context, you can also use the built-in Web Vitals reporting. With the Pages Router, Next.js lets you export a reportWebVitals(metric) function from your app, which is called with each web vital; in the App Router, the useReportWebVitals hook serves the same purpose. Either way, you can use this to send the data to an analytics service of your choice.
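For example, an App Router setup might use the useReportWebVitals hook in a small client component mounted once in the root layout. The /api/vitals endpoint here is a hypothetical collector of your own:

```typescript
'use client';
// Hypothetical client component: render <WebVitals /> once in your root layout
import { useReportWebVitals } from 'next/web-vitals';

export function WebVitals() {
  useReportWebVitals((metric) => {
    // sendBeacon survives page unloads, so late metrics like LCP still arrive
    navigator.sendBeacon('/api/vitals', JSON.stringify(metric));
  });
  return null; // Renders nothing; it only reports
}
```

Because the hook fires for each metric individually (TTFB, FCP, LCP, CLS, INP), the collector endpoint can aggregate them per session or per route on the server side.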
Make performance monitoring a habit: Set up dashboards visible to the team, so everyone can see current perf scores. Perhaps have a weekly check-in on performance budgets. A culture of performance means issues get caught and fixed early. For example, if a code change accidentally adds 100KB to your bundle and slows down FID, a Lighthouse CI budget failure, or a jump in Vercel Analytics’ RES, will alert you, and you can address it before it impacts all users.
Case study: Our team introduced performance budgets in CI for a Next.js project. One day, a dependency update caused the bundle to grow unexpectedly, and Lighthouse CI flagged that the performance score dropped from 95 to 88. Investigating, we found the culprit (a misconfigured polyfill). We fixed it before merging to main. Without these tools, we might not have noticed until users complained or analytics showed a slowdown. That safety net is invaluable.
In summary, use lab tools (Lighthouse, WebPageTest, DevTools) to optimize in controlled environments, and field tools (Vercel Analytics, real-user metrics) to ensure real users are getting the experience you expect. Combine that with automation (CI checks, alerts) so you maintain your hard-earned optimizations over time. As the saying goes, “performance is a journey, not a destination.” Continuous monitoring will keep you on the right track.
Build a Performance-First Mindset (Apply to all types of apps)
Our final tip is a bit more holistic: cultivate a performance-first mindset throughout your development process. Next.js gives you many tools, but it’s up to the team to use them effectively and prioritize performance from the start. This tip ties everything together and ensures that, whether you’re building an e-commerce site, a SaaS application, or a content hub, you consistently apply these optimizations as second nature.
What does a performance-first mindset look like?
Plan for performance from day one: When designing features, consider the performance implications. For example, if you’re adding a new image-heavy section, plan to use Next/Image and perhaps lazy load it. If you’re integrating a third-party script (analytics, ads, etc.), consider its cost and how to mitigate it (maybe load after user interaction or on idle).
Establish performance budgets & goals: Set concrete goals (e.g., LCP < 2s on median mobile, FID < 100ms, etc.). Having targets makes it easier to make decisions (you might decide against a fancy but heavy library if it would break the budget). Many teams include these in requirements, just like functionality.
Everyone on the team owns performance: It’s not just for one “performance engineer” — developers, designers, and product managers all should value it. For instance, designers should know that huge background videos might hurt performance; developers should review each other’s code for potential bloat or inefficiencies. As one company put it, they created a performance-first culture where performance is a shared responsibility and part of the definition of done.
Regularly audit and learn: The web evolves, and so do best practices. Make time for periodic performance audits of your Next.js app. This could be as simple as scheduling a monthly deep dive where you profile the app and see if any regressions or new opportunities have arisen. Also, encourage team knowledge sharing — if someone learned a new trick (like a new Next.js feature or React optimization), share it with everyone.
Use the latest Next.js features: Next.js 15+ is introducing things like stable Turbopack, enhanced React 19 features, etc., which often come with perf benefits. Keep your Next.js version up to date (within reason) and read release notes for anything that could help performance. For example, if Next 15.2 announces an improved <Image> component or a better hydration technique, consider adopting it.
A quick example of applying this mindset: Let’s say you’re tasked with building a new pricing page for a SaaS app. With a performance-first approach, you would: optimize all images (maybe use SVG for logos, Next/Image for others), ensure the page is static (SSG) since pricing doesn’t change often, maybe use partial hydration if there’s a dynamic calculator widget (so the static content loads immediately), test it on slow network to ensure it’s under budget, and perhaps set up a Lighthouse CI threshold from the start for it. The result is a page that not only looks good but is technically optimized from day one, requiring no retroactive fixes.
Teams that do this find that performance isn’t an afterthought or a one-off project — it’s just part of building the app. When new team members join, they see that pull requests include discussions about bundle size or using the correct Next.js features, and they adopt the same approach.
Finally, remember that performance benefits everyone: it improves accessibility (fast sites work better on low-end devices), it pleases users (nobody ever said “I love how slow this site is!”), and it drives business metrics (better SEO, more engagement). So it’s absolutely worth the investment.
Conclusion
In this article, we have covered several advanced Next.js concepts. By mastering these concepts, you will be able to build powerful, performant web applications with Next.js. Whether you are building a small blog or a large-scale e-commerce platform, Next.js has the tools and features you need to deliver a seamless user experience.
References
https://www.freecodecamp.org/news/nextjs-vs-react-differences/
https://javascript.plainenglish.io/next-js-client-side-rendering-56a3cae65148
https://blog.devgenius.io/advanced-next-js-concepts-8439a8752597
https://blog.stackademic.com/you-must-use-middleware-like-this-in-next-js-64d59bb4cd59
https://yohanlb.hashnode.dev/when-should-i-use-server-action-nextjs-14?ref=dailydev
https://blog.devgenius.io/10-powerful-next-js-optimization-tips-f78288d284e1
https://javascript.plainenglish.io/next-js-hates-me-until-i-made-it-10x-faster-cae6d1b65876
https://medium.com/yopeso/a-year-with-next-js-server-actions-lessons-learned-93ef7b518c73
https://medium.com/gitconnected/when-to-use-react-query-with-next-js-server-components-f5d10193cd0a
https://levelup.gitconnected.com/nextjs-image-performance-issues-and-fixes-40db2061ffe1
https://blog.stackademic.com/what-use-client-really-does-in-react-and-next-js-1c0f9651c4e1
https://medium.com/@sureshdotariya/next-js-15-app-router-architecture-and-sequence-flow-3a6ffdd2445c
https://semaphoreci.medium.com/cache-optimization-on-nextjs-without-vercel-c5927177ea02