Skip to main content

Command Palette

Search for a command to run...

React & Javascript Optimization Techniques - Part I

Updated
50 min read
React & Javascript Optimization Techniques - Part I
T

I am a developer creating open-source projects and writing about web development, side projects, and productivity.

When we start our journey as programmers, our primary concern is often making code run with zero errors. Initially, we may not prioritize code optimization. However, optimizing code is a crucial aspect that fosters growth, leading one toward becoming a senior or a lead developer.

Moreover, research by Portent suggests that a website loading in one second boasts a conversion rate five times higher than a site taking 10 seconds to load.

In this article, we will introduce some of the most common techniques for optimizing code, applicable to any application. We will use sample code written in React and JavaScript. The upcoming sections will cover the following techniques:

Debouncing, Throttling

Memoization

Bundle Size optimization

Keeping component state local where necessary

Avoid memory leaks in React

DOM size optimization

Applying web workers in React

Asset optimization

Rendering patterns

Debouncing, Throttling

Why do we need Debouncing and Throttling?

Consider these scenarios:

  1. Processing the user's search query: Every time the user modifies the text, an event handler is invoked, initiating a request to the server that returns the results to the user. If the server takes an extended period to respond and numerous requests are made whenever the user alters the search text, it can significantly harm performance.

  2. Responding to specific user actions: Actions like resizing the browser, mouse movement, and page scrolling require adjustments to the site's content. Without techniques to control the number of calls, these events can trigger numerous calls to handler functions, adversely affecting performance.

What is Debouncing and Throttling?

Debouncing and throttling are a programming technique used to optimize the processing of functions that consume a lot of execution time. These techniques involve preventing those functions from executing repeatedly without control, which helps improve the performance of our applications.

Throttling: skips function calls with a certain frequency
Debounce: delays a function call until a certain amount of time passed since the last call.

That is quite hard to understand, isn't it? Let me explain it to you:
For instance, when you invoke a function handler on user scrolling, you can't precisely predict how many times the function will be called—perhaps 20, 30, or even 100 times.

With Throttling applied, the function is triggered only after a specific time interval (e.g., 200ms). The sequence unfolds as follows: Function call => Wait 200ms => Function call => Wait 200ms => ...

On the other hand, after applying Debouncing, the function is triggered only after a designated time has passed since the last call (e.g., 200ms). The sequence unfolds as follows: Wait 200ms (Step 1) => If another function is called before completing the 200ms wait, reset the timer and continue waiting for another 200ms (Step 2) => Loop through Step 2 until completing the 200ms wait => Function call => Wait 200ms => Repeat Step 2 => Function call => ..."

How do we implement Debouncing and Throttling?
Debouncing:

const debounce = (callback, time = 200) => {
    let timer;
    return () => {
        clearTimeOut(timer);
        timer = setTimeout(callback, time);
    };
};

Throttling:

const throttle = (callback, time = 200) {
    let pause = false;
    return () => {
        if(pause) return;
        pause = true;
        callback();
        setTimeout(() => {
            pause = false;
        }, time);
    }
}

If we don't wanna implement it manually, we could import it from lodash library:

import debounce from "lodash/debounce";
import throttle from "lodash/throttle";

How to use Debouncing and Throttling in real cases?

// Use lodash's debounce to delay the search function by 500 milliseconds
const debounceSearch = _.debounce((event) => {
  const query = event.target.value;
  searchUsers(query);
}, 500);

// Add an event listener to the search input
searchInput.addEventListener('input', debounceSearch);
// Use lodash's throttle to trigger the action when the user scrolls, limited to once every 500 milliseconds
const throttleScroll = _.throttle(() => {
  // Check if the user has reached the bottom of the content
  if (contentElement.scrollTop + contentElement.clientHeight >= contentElement.scrollHeight) {
    loadMoreContent();
  }
}, 500);

// Add an event listener to the content element for the scroll event
contentElement.addEventListener('scroll', throttleScroll);

Memoization

Why is memoization necessary for improving performance and speed?

In the context of a sizable React component with numerous states, certain functions can be resource-intensive. When any state updates, React re-renders the entire component, even if the updated state is unrelated to these resource-heavy functions. Consequently, we end up recalculating these expensive functions, negatively impacting performance. Memoization techniques come to our rescue, providing a solution to mitigate these performance issues. Yayyy!

What is memoization?

Memoization is a technique that involves storing the result of a function in a memory space that allows for later retrieval. This is done to avoid having to recalculate the result every time the function is called with the same parameters. This technique only should be applied for functions that consume a lot of resources. It isn't necessary and causes a bad effect on performance if you apply this technique everywhere in a project.

This technique can be used in React as well, we can make use of the following memorization features in React:

React.memo

useMemo

useCallback

Implementing React.memo()

React.memo() is a higher-order component used to wrap the pure component to prevent re-rendering if the props are not changed.

import React, { useState } from "react";

// ...

const ChildComponent = React.memo(function ChildComponent({ count }) {
  console.log("child component is rendering");
  return (
    <div>
      <h2>This is a child component.</h2>
      <h4>Count: {count}</h4>
    </div>
  );
});

If the count prop remains unchanged, React will skip rendering the ChildComponent and reuse the previous result, even if the parent component of ChildComponent re-renders.

React.memo() functions effectively when we pass down primitive values, such as numbers or strings. On the other hand, for non-primitive values like objects, including arrays and functions, we should use useCallback and useMemo hooks to return a memoized value between renders.

It's important to note that React.memo will not prevent component renders caused by updates in state or context React.Context, as it only considers props to avoid unnecessary re-renders.

Implementing React.useMemo hook

useMemo is a React Hook that lets you cache the result of a costly function between component renders and only re-execute the function if its dependencies change.

This hook should only be used within components or other hooks, not within loops to conditionals.

Here is an example of how to use useMemo in a component:

import React, { useState, useMemo } from "react";

const expensiveFunction = (count) => {
  // artificial delay (expensive computation)
  for (let i = 0; i < 1000000000; i++) {}
  return count * 3;
};

export default function Component({ count }) {
  // ...
  const myCount = React.useMemo(() => {
  return expensiveFunction(count);
}, [count]);
  return (
    <div>
      <h3>Count x 3: {myCount}</h3>
    </div>
  );
}

In this example, we use useMemo to avoid the costly execution of expensiveFunction on every component render, which can slow down the application. In this way, the hook will return the last cached value and only update that value if the dependencies change.

It is recommended that the function used withuseMemobe pure.

Please note that useMemo is not a general javascript function for memorization, it's only a built-in hook in React that is used for memorization.

Implementing useCallback hook

useCallback hook lets you cache a function, preventing unnecessary re-creations of the function definition

It's very important to note that useMemo executes functions and saves the value in the cache. In contrast, useCallback does not execute functions, it only saves and updates their definition to be executed later by us.

As I mentioned above this hook can be useful for cache functions that are passed to child components as props.

Here is an example of how to use useCallback in a component:

const Dashboard = ({ month, income, theme }) => {
  const [data, setData] = useState([]);

  useEffect(() => {
    // Fetch data
    // ...
  }, []);

  const onFilterChange = useCallback((filter) => {
    // Handle expensive filter change logic
    // ...
    setData(result);
  }, [month, income]);

  return (
    <div className={`dashboard ${theme}`}>
      <Chart data={data} onFilterChange={onFilterChange} />
      {/* Other components */}
    </div>
  );
};

Bundle Size Optimization

Most of the websites are now server-side and include javascript code more than HTML, and CSS. The bundle is sent to clients, and then the clients will parse compile, and execute. Imagine that if you send to the browser 100kb of the app bundle and 50kb, Which one will make the browser load faster?

Optimizing bundle size will make the browser load quickly. Here are some strategies to optimize the React app bundle effectively.

Code splitting

You might wonder why we need a code-splitting technique to optimize the bundle size. Let me explain to you. Firstly, we should know what is called Bundling. Bundling is the process of following imported files and merging them into a single file "a bundle".This bundle can then be included on a webpage to load the entire webpage at once. Most React apps will be using tools like Webpack, Rollup, and Browserify to bundle their applications. Bundling is great, but as your app grows, your bundle will grow too. Especially, if you are including large third-party libraries. You have to keep your eyes on the size of the bundle file and if your bundle file becomes too big then your apps will take a long time to load. To avoid winding up with a large bundle, it's good to get ahead of the problem and start "splitting" your bundle. Code splitting is a technique that allows you to split the bundle into small chunks.

You might worried that the Webpack splitting configuration is complicated and you don't know how to do it. But when Webpack comes across this syntax, it automatically starts code-splitting your app. If you are using the Create React app tool, This is already configured for you and you can start using it immediately. it's also supported out of the box in Next.js. If you wanna set up Webpack by yourself, read this guideline.

React provides an easy way to implement code splitting by using lazy imports and Suspense.

import { lazy, Suspense } from 'react';
import { BrowserRouter as Router, Routes, Route } from "react-router-dom";

const HomeComponent = lazy(() => import('./HomeComponent'));
const AboutComponent = lazy(() => import('./AboutComponent'));

const App = () => {
  return (
    <Router>
      <Suspense fallback={<div>Loading...</div>}>
        <Routes>
          <Route path="/" exact element={HomeComponent} />
          <Route path="/about" element={AboutComponent} />
        </Routes>
      </Suspense>
    </Router>
  );
};

export default App;

By doing this way, we won't let import the entire bundle at once, It can slow down application load time. The idea behind React.lazy this is to reduce the bundle file by importing only what we need at the moment. This can be especially helpful for large or complex applications with many components.

It is recommended that components be imported by React.lazy should encapsulated within another React component called Suspense. This component allows us to display fallback while waiting for the lazy components to load, we could use a message or loading animation to let the user know that something is being loaded.

It's important to note that React.lazy currently only supports default exports. if the module you want to import is named exports, you can create an intermediate module that reexports it as the default. This way ensures that tree sharking keeps working and doesn't pull unused code to bundle files.

// ManyComponents.js
export const MyComponent = /* ... */;
export const MyUnusedComponent = /* ... */;
// MyComponent.js
export { MyComponent as default } from "./ManyComponents.js";
// MyApp.js
import React, { lazy } from 'react';
const MyComponent = lazy(() => import("./MyComponent.js"));

Error Boundaries

Error Boundaries are React components that catch Javascript errors(for example, due to network errors) anywhere in their child component tree, log those errors, and display the fallback instead of the app crashing.

It's important to note that Error boundaries do not catch errors for event handlers, asynchronous code, or errors that occur in the boundary itself.

class MyErrorBoundary extends React.Component {
  constructor(props) {
    super(props);
    this.state = { hasError: false };
  }

  static getDerivedStateFromError(error) {
    // Update state so the next render will show the fallback UI.
    return { hasError: true };
  }

  componentDidCatch(error, errorInfo) {
    // You can also log the error to an error reporting service
    logErrorToMyService(error, errorInfo);
  }

  render() {
    if (this.state.hasError) {
      // You can render any custom fallback UI
      return <h1>Something went wrong.</h1>;
    }

    return this.props.children; 
  }
}
import React, { Suspense } from 'react';
import MyErrorBoundary from './MyErrorBoundary';

const OtherComponent = React.lazy(() => import('./OtherComponent'));
const AnotherComponent = React.lazy(() => import('./AnotherComponent'));

const MyComponent = () => (
  <div>
    <MyErrorBoundary>
      <Suspense fallback={<div>Loading...</div>}>
        <section>
          <OtherComponent />
          <AnotherComponent />
        </section>
      </Suspense>
    </MyErrorBoundary>
  </div>
);

Dependency audit and clean-up

Regularly auditing your project dependencies is essential. Use commands like npm audit or yarn audit to identify vulnerable packages or outdated dependencies. You can also use tools like depcheck to find unused or redundant packages, which you can remove to minimize your application's size.

# Check for vulnerabilities in your project's dependencies
npm audit

# Check for unused dependencies
npx depcheck

# Remove an unused package
npm uninstall package-name

Tree sharking

https://www.developerway.com/posts/bundle-size-investigation

Tree Sharking is a process used by bundlers like Webpack to eliminate unused code from your final bundle. It relies on the static structure of ES6 modules (using import and export statements), allowing bundlers to analyze the dependency graph of your modules and determine which pieces of code are actually unused.

Key Points:

  • Static Analysis: Tree Sharking works with ES6 modules, which have a static structure.

  • Dead Code Elimination: Only the code that is actually used is included in the final bundle.

  • Effectiveness: The effectiveness of tree sharking depends on the code’s structure and the bundler’s ability to analyze it.

// Avoid CommonJS syntax ❌
const React = require('react'); // Not tree-shakable
const Component = React.memo(() => { ... })

// Use ES6 import syntax ✅
import { memo } from 'react'; // Tree-shakable
const Component = memo(() => { ... })

All code is bundled, regardless of whether or not it's used

Only used code is bundled

Examples of Tree Sharking in React.js

  1. Basic Utility Functions

Consider a file with multiple utility functions:

// utils.js
export function add(a, b) {
  return a + b;
}

export function subtract(a, b) {
  return a - b;
}

export function multiply(a, b) {
  return a * b;
}

export function divide(a, b) {
  return a / b;
}

And in your main file, you only use the add function:

// main.js
import { add } from './utils';

console.log(add(5, 3));

The final bundle will only include the add function. The subtract, multiply, and divide functions will be excluded from the bundle because they are not used.

Tree Sharking works best when modules are designed to be modular and side-effect-free. By importing only the functions or components you need, you help the bundle identify what can be removed.

  1. React Components with Side Effects

Consider React components that have side effects:

// components.js
export function Header() {
  console.log('Header component loaded');
  return <h1>Header</h1>;
}

export function Footer() {
  console.log('Footer component loaded');
  return <footer>Footer</footer>;
}

export function Sidebar() {
  console.log('Sidebar component loaded');
  return <aside>Sidebar</aside>;
}

And use only the Header component:

// App.js
import React from 'react';
import { Header } from './components';

function App() {
  return (
    <div>
      <Header />
    </div>
  );
}

export default App;

The Footer and the Sidebar components may still be included in the bundle due to their effects. This happens because the bundle might not be able to determine that these components are not used if they have side effects like console.log.

To make tree sharking more effective, avoid including side effects in your modules. Keep components and utility functions focused on their primary purpose and avoid global effects.

  1. Dynamic Imports and Tree Sharking

Dynamic imports can complicate tree sharking:

// utils.js
export function add(a, b) {
  return a + b;
}

export function subtract(a, b) {
  return a - b;
}

// main.js
async function loadUtils() {
  const { add } = await import('./utils');
  console.log(add(5, 3));
}

loadUtils();

Because the import is dynamic, the bundle might include both add and subtract in the final bundle. Dynamic Imports are resolved at runtime, making it harder for the bunder to predict which code will be used.

While dynamic imports are useful for code-splitting and loading modules on demand, they can reduce the effectiveness of tree sharking. Use them judiciously and be aware that they may impact the final bundle size.

  1. Default Exports vs Named Exports

Default Exports can be less effective for tree sharking:

// mathUtils.js
export default function add(a, b) {
  return a + b;
}

export function subtract(a, b) {
  return a - b;
}

// App.js
import add from './mathUtils';

console.log(add(5, 3));

The subtract function may not be removed effectively because the default exports can make it harder for the bundler to identify which parts of the module are used.

Named Exports are generally more effective for tree sharking because they allow the bundler to more easily track which exports are used. Consider using named exports for better optimization.

  1. Complex React Application

In a more complex setup:

// components/Button.js
export const Button = () => <button>Click me</button>;

// components/Input.js
export const Input = () => <input type="text" />;

// components/Checkbox.js
export const Checkbox = () => <input type="checkbox" />;

// App.js
import { Button } from './components/Button';

function App() {
  return (
    <div>
      <Button />
    </div>
  );
}

export default App;

The Input and Checkbox components will be excluded from the final bundle because they are not used in App.js. Only the Button component will be included.

Organizing components and functions into separate modules helps the bundler more effectively identify and eliminate unused code. Keep components modular and only import what is necessary.

  1. Avoiding common Pitfalls

Consider a module with a large dataset:

// data.js
export const data = [1, 2, 3, 4, 5];

// main.js
import { data } from './data';

console.log(data);

Even if data is used minimally, the entire data.js file file might be included in the final bundle due to its import. This can increase the bundle size unnecessarily.

For large datasets or configurations, consider alternatives like code-splitting or fetching data from external sources. This can help keep the bunder size smaller and more manageable.

Remove dead code

Removing dead code is indeed an art form. At times, it can be challenging to distinguish between live and dead code. There are tools available, such as unimported, that can assist with this task, but I often find it challenging to fully utilize them. The best approach is to remain vigilant and identify dead code continuously while coding.

One method I frequently employ is commenting out potentially dead code and observing if any issues arise. If everything functions as expected, I proceed to delete the code. You'd be surprised by how much dead code you can eliminate by periodically employing this practice.

Dedupe Dependencies

One trick that I recently learned was to run npm dedupe in my project after making any changes to the dependencies. This is a built-in command from npm that looks through your project’s dependency tree as defined in the package-lock.json file and looks for opportunities to remove duplicate packages.

For example, if package A has a dependency of package B, but package B is also a dependency of your project (and thus at the top level of your node_modules), running npm dedupe will remove package B as a dependency of package A in your package-lock.json since package A will already have access to package B as it looks up the tree. This probably won’t have dramatic effects on your bundle size. But it definitely will have some effect. Plus, why not use a built-in tool from npm to simplify and clean up your package-lock.json?

Loading Javascript asynchronously

One of the most common issues faced by front-end dev is to optimize site performance. If your website fetches more than 1 Mb of data, then it's bad design for users with slow internet connections(even worse, if your target audience is from emerging markets). One simple thing that I found to be pretty useful is the deferred loading of JavaScript files.

As we know it is better to keep our JS files at the end of the page(as much as possible) so that it doesn't interfere while the DOM loads(HTML renders itself). Extending the same logic, CSS files, and page-specific styles should ideally be referenced at the beginning of the page. Coming back to the point, if your page fetches JavaScript files while the DOM is rendering, it's an indicator of bad code placement. See the image below for understanding.

Comparison

When to useasyncanddefer

  • Useasync for scripts that are independent of each other and do not rely on the DOM being fully parsed, such as analytics scripts or ads.

  • Usedefer for scripts that need to maintain order and depend on the DOM being fully parsed, such as scripts that manipulate the DOM or rely on other scripts.

Example

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Async and Defer Example</title>
    <!-- Analytics script that doesn't depend on other scripts -->
    <script src="analytics.js" async></script>
    <!-- DOM manipulation script that depends on the DOM being fully parsed -->
    <script src="dom-manipulation.js" defer></script>
</head>
<body>
    <h1>Hello, World!</h1>
    <p>This is an example of using async and defer attributes.</p>
</body>
</html>

Manual Chunking With Vite

While Vite provides automatic chunking out of the box for source code, there are scenarios where manual chunking gives you more control over your build output.

By default, Vite third-party libraries and source code are bundled together in what we call the index chunk. This chunk is the first chunk to load and is always loaded no matter the route.

You should aim to break down large chunks in order to reduce fetching delays:

  • Aim for chunks between 100-300kB

  • Too many small chunks can increase HTTP requests

  • Too few large chunks can delay the initial load

Before we optimize chunking, it would be great to visualize it first so that we can observe the difference. Using rollup-plugin-visualizer, we can view a neat graph of all the chunks and how much size we take up.

Third-party packages are not dependent on any application source code, so they can all be easily separated into separate chunks without any breakages. Let’s call this chunk the vendor chunk.

You can further divide the vendor chunk into multiple separate chunks. Be careful here, because third-party libraries are often interdependent on each other, so it’s important to keep interdependent packages in the same chunk else they might fail to load at the runtime.

A possible scenario that will lead to an error:

// Will FAIL

// Dependency graph
@lexical/core -> @lexical/react -> @lexical/core

// Manual chunking
if (id.includes("@lexical/react")) return "lexical-react";
if (id.includes("@lexical/core")) return "lexical-core";

// Output chunks
lexical-react-[hash].js
lexical-core-[hash].js

In the above example, we are keeping inter-dependent packages in separate chunks.

When lexical-react chunk is loaded and parsed, it will see that there is an import to the lexical-core chunk. Parsing will be paused for "lexical-react" and move to loading and parsing the “lexical-core“ chunk.

The lexical-core chunk will have an import statement defined for lexical-react. But the lexical-react chunk is still not loaded yet, so we are stuck in a deadlock and the parsing fails and an error is thrown.

For this reason, we need to keep related packages in the same chunk.

// Will SUCCEED

// Dependency graph
@lexical/core -> @lexical/react -> @lexical/core

// Manual chunking (will include both `@lexical/core` and `@lexical/react` in same chunk)
if (id.includes("lexical")) return "lexical";

// Output chunks
lexical-[hash].js

It’s not recommended to implement manual chunking in source code due to the high likelihood of creating circular dependencies that will break your app too often. The right way to create separate chunks in source code is to use dynamic imports where it makes sense. Each route should ideally be dynamically imported so that the module bundler knows to analyze the dependency graph and separate the chunks accordingly.

Here is the full manual configuration.

// vite.config.ts
const viteConfig = defineConfig(() => ({
  build: {
    rollupOptions: {
      output: {
        manualChunks
      }
    },
    ...
  },
  ...
}))

function manualChunks(id: string): string | void {
  const isVendor = id.includes("node_modules");
  if (isVendor) {
    // Creates manual chunks from node_module deps. Reducing the vendor chunk size.
    if (id.includes("core-js")) return "core-js";
    if (id.includes("react-flow-renderer")) return "react-flow-renderer";
    if (id.includes("recharts")) return "recharts";
    if (id.includes("lexical")) return "lexical";
    if (id.includes("d3-")) return "d3";
    if (id.includes("lodash-es")) return "lodash";
    if (id.includes("moment")) return "moment";
    if (id.includes("dexie")) return "dexie";
    if (id.includes("date-fns")) return "date-fns";
    if (id.includes("react-popper")) return "react-popper";
    if (id.includes("handlebars")) return "handlebars";
    if (id.includes("@sentry")) return "sentry";
    if (id.includes("@radix-ui")) return "radix-ui";
    if (id.includes("@dnd-kit")) return "dnd-kit";
    if (id.includes("jsoneditor-react") || id.includes("brace"))
      return "jsoneditor";

    // Default chunk name for third-party packages
    return "vendor";
  }
}

All chunks required for the initial page load are loaded in parallel. So the largest chunk can be the bottleneck, which is where manual chunking helps to break down chunks further.

The Secret to Cleaner DOM Code

So the other day, I was debugging a nasty little UI bug in a project, a to-do list app from 2018. I inspected the dev tool, and lo and behold: there were over 50 click listeners added to the DOM.

I sighed, cracked my knuckle, and whispered the words many junior devs fear: “It’s time… for event delegation“

What is Event Delegation?

Event Delegation is a pattern where instead of attaching event listeners to each individual child element, you attach one listener to a common parent, and use the event’s .target property to figure out which child triggered the event.

Why does this matter?

Because the DOM is dynamic, lists grow. Elements get removed. Binding an individual listener to every single element? That’s an inefficient and error-prone.

The Wrong Way: One Listener per Child

Let’s say we’re building a simple to-do list:

<ul id="todo-list">
  <li>Buy milk</li>
  <li>Walk the dog</li>
  <li>Write Medium article</li>
</ul>

A naïve (but common) approach is this:

Looks clean. But here’s the catch:

  • Add a new item later with JS? No listener attached.

  • 1,000 items? That’s 1,000 listeners in memory.

  • Remove one? Now you’re managing cleanup.

You’re building tech debt in real time.

The Better Way: Event Delegation

Instead, let’s bind one listener to the #todo-list parent:

Let’s walk through this:

  • We’re attaching the listener to ul, not li.

  • e.target tells us which element actually got clicked.

  • We check if it’s an LI (because the UL could have other stuff later).

Done. One listener. Infinite items.

CSS Best Practices for Faster Above-the-Fold Loading

When users land on the page, they judge you in the first 500-1000ms. If your header, nav, and hero show up late or snap into place. You are paying a tax in FCP/LCP and trust. This fix isn’t a mystery; it’s about how you deliver CSS (and fonts), and how fancy your CSS is.

  1. Inline Critical CSS (and only that)

Goal: Let the browser paint the first screen without waiting for external stylesheets.

What counts as “critical”? Header, logo, top nav, hero container, primary text styles, basic spacing. Aim for ≤ 5–14 KB gz.

HTML pattern (safe):

<head>
  <!-- 1. Inline tiny ATF CSS -->
  <style>
    :root { --brand:#4f46e5; }
    html,body { margin:0; height:100% }
    body { font:16px/1.5 system-ui,-apple-system,Segoe UI,Roboto,Arial,sans-serif; color:#111 }
    header { display:flex; align-items:center; justify-content:space-between; padding:1rem; border-bottom:1px solid #eee }
    .btn { background:var(--brand); color:#fff; padding:.6rem 1rem; border:0; border-radius:.5rem }
    .hero { max-width:70ch; margin:2.5rem auto; padding:0 1rem }
    .hero h1 { margin:.2rem 0 1rem; font-size:clamp(1.8rem,3vw+1rem,3rem) }
    .hero p { color:#444; margin:0 0 1.25rem }
  </style>

  <!-- 2. Preload + async apply full CSS -->
  <link rel="preload" href="/assets/app.css" as="style">
  <link rel="stylesheet" href="/assets/app.css" media="print" onload="this.media='all'">
  <noscript><link rel="stylesheet" href="/assets/app.css"></noscript>
</head>

Tips

  • Generate automatically with Penthouse, Critters, or html‑critical‑webpack‑plugin.

  • ATF differs per route; generate per page for complex apps.

  1. Preload what the first screen actually needs

Goal: Pull critical assets to the front of the waterfall.

Preload the main CSS (if you didn’t inline everything):

<link rel="preload" href="/styles/main.css" as="style">
<link rel="stylesheet" href="/styles/main.css">

Preload the first text font (exact file you use in the hero):

<link rel="preload" href="/fonts/inter-var.woff2" as="font" type="font/woff2" crossorigin>

Preconnect to third‑party origins you can’t avoid:

<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>

Rules of thumb

  • Preload sparingly (CSS + 1 font). Over‑preloading delays everything else.

  • Always include the real <link rel="stylesheet">; preload doesn’t apply styles.

  1. Tame Web Fonts (Stop FOIT/FOUT/CLS)

Fonts are invisible page‑speed killers. Your hero headline shouldn’t vanish or jump.

Self‑host and control display behavior:

@font-face {
  font-family: 'Inter';
  src: url('/fonts/inter-var.woff2') format('woff2');
  font-weight: 100 900;
  font-display: swap; /* or 'optional' for zero‑risk CLS */
}
body { font-family: Inter, ui-sans-serif, system-ui, -apple-system, Segoe UI, Roboto, Arial, sans-serif; }

Choose good fallbacks with similar metrics (reduces layout shift).
Preload only what you render above the fold. Don’t preload 6 weights you won’t use in the hero.

Pro move: For a “rock‑solid” first paint, set the hero to system font in critical CSS, let Inter swap in later with matched line‑height.

  1. Keep the CSS bundle lean (Minify, Purge, Split)

Goal: Shorten “time to first styled paint” by reducing parse time.

  • Minify: cssnano / Clean‑CSS / esbuild.

  • Purge: Tailwind content scanning or PurgeCSS to remove unused selectors.

  • Split: Keep a tiny base/ATF CSS, load the rest asynchronously.

Tailwind config (purge):

// tailwind.config.js
module.exports = {
  content: ["./src/**/*.{js,ts,jsx,tsx,html}"],
  theme: { extend: {} },
  plugins: [],
};

Don’t: Use Tailwind’s CDN generator in production (it builds CSS in the browser).

  1. Priority Load Order (And Don’t Import CSS at Runtime)

Goal: Make CSS discoverable early; avoid late injection.

Do

  • Put inline critical CSS first.

  • Then preloads (CSS + first font).

  • Then the blocking stylesheet (if used).

  • Defer scripts.

Don’t

  • Use @import inside CSS for other stylesheets—it’s serial and blocks.

  • Inject core styles via JS (CSS‑in‑JS at runtime) without SSR extraction.

Script loading that won’t block CSS discovery:

<script src="/app.js" defer></script>
  1. Eliminate CLS in the first screen

Goal: Your ATF content must not shift after paint.

Reserve space for anything that loads late:

<!-- Image with explicit dimensions or aspect-ratio -->
<img src="/hero.jpg" width="1600" height="900" alt="Hero">
<!-- or -->
.hero-img { aspect-ratio: 16 / 9; width:100%; display:block }

Avoid late font swaps, causing reflow

  • Use font-display: optional for the hero if brand tolerance allows.

  • Match fallback metrics (size/line‑height/letter‑spacing).

No lazy‑load above the fold

  • Lazy‑load below the fold; eager‑load ATF images.
  1. Use Cascade Layers to Control Overrides (No !important Wars)

Goal: Decide by design what wins — base → components → utilities — so you can ship tiny critical CSS without fighting later.

@layer reset, base, components, utilities;

@layer reset   { *,*::before,*::after { box-sizing: border-box } }
@layer base    { body { font:16px/1.5 system-ui } }
@layer components { .btn { background:#111; color:#fff; } }
@layer utilities  { .bg-brand { background:#4f46e5 } }

Use utilities to tweak ATF without shipping a second ruleset. Utilities layer intentionally overrides components — no !important.

  1. Use Media/Attribute Tricks to defer Non-critical CSS

Goal: Load the rest of the CSS without blocking the first paint.

Async apply pattern:

<link rel="preload" href="/css/noncritical.css" as="style">
<link rel="stylesheet" href="/css/noncritical.css" media="print" onload="this.media='all'">
<noscript><link rel="stylesheet" href="/css/noncritical.css"></noscript>

Or load after paint via JS:

<script>
  (function(){
    var l=document.createElement('link');
    l.rel='stylesheet'; l.href='/css/noncritical.css';
    l.media='only x'; l.onload=function(){ l.media='all' };
    document.head.appendChild(l);
  })();
</script>

Use responsibly: Don’t defer CSS that’s required for ATF.

  1. Treat Icons and Third-Party Widgets Carefully

Icons

  • Prefer SVG sprite or inline SVGs for ATF icons. Avoid blocking icon font CSS.

  • If using icon fonts, preload the font (woff2) and set font-display: swap.

Third‑party widgets

  • Don’t let them inject blocking CSS in your <head>.

  • Provide minimal fallback CSS inline so the ATF layout doesn’t break if the widget is late.

  • For chat widgets/trackers: load after the first paint.

  1. Measure, Regress, Automate

Lab

  • Lighthouse: FCP/LCP; watch “Eliminate render‑blocking resources.”

  • DevTools → Performance: record on Slow 3G + 4× CPU.

  • Coverage: see how much CSS is unused on first view.

Field

  • Capture Web Vitals (p75 LCP/CLS) with a tiny web‑vitals snippet.

  • Watch region/network splits; slow radios are where optimizations pay off.

CI

  • Generate critical CSS per route (Penthouse/Critters).

  • Budget CSS sizes; fail builds when bundles bloat.

  • Hash filenames and set:

Cache-Control: public, max-age=31536000, immutable

Avoid memory leaks in React

A memory leak is a common faced issue when developing React applications. It causes many problems, including:

  • This affects the project's performance by reducing the amount of available memory.

  • Slowing down the application

  • crashing the system

You might see this warning message while developing the React application, and you don't know where this thing comes from:

Can't perform a React state update on an unmounted component. This is a no-op, but it indicates a memory leak in your application. To fix, cancel all subscriptions and asynchronous tasks in auseEffectcleanup function.

Consider the scenario where you execute the asynchronous call to get data and display it to the user. But when the request is being called and still hasn't been done, you navigate to another page. Since the component was unmounted and the function is being called in a component that is no longer mounted, it causes a memory leak issue - and in the console, you will get a warning.

Example of unsafe code:

const [value, setValue] = useState('checking value...');
useEffect(() => {
    fetchValue().then(() => {
      setValue("done!"); // ⚠️ what if the component is no longer mounted ?
      // we got console warning of memory leak
    });
}, []);

There are a few ways to eliminate memory leaks. Some of them are as follows.

  • Using Boolean Flag

      const [value, setValue] = useState('checking value...');
      useEffect(() => {
      let isMounted = true;
      fetchValue().then(() => {
            if(isMounted ){
            setValue("done!"); // no more error
            } 
          });
         return () => {
          isMounted = false;
          };
      }, []);
    

    In the above code, I’ve created a Boolean variable isMounted, whose initial value is true. When isMounted is true, the state is updated, and the function is returned. Otherwise, if the action is unmounted before completion, then the function is returned isMounted as false. This ensures that when a new effect is to be executed, the previous effect will be first taken care of.

  • Using use-state-if-mounted Hook

      const [value, setValue] = useStateIfMounted('checking value...');
          useEffect(() => {
              fetchValue().then(() => {
                setValue("done!"); // no more error
              });
          }, []);
    

    In the above code, I’ve used a hook that works just like React’s useState, but it also checks that the component is mounted before updating the state!

  • Using AbortController

      useEffect(() => {  
          let abortController = new AbortController();  
          // your async action is here  
          return () => {  
          abortController.abort();  
          }  
          }, []);
    

    In the above code, I’ve used AbortController to unsubscribe the effect. When the async action is completed, I abort the controller and unsubscribe from the effect.

We've just covered five optimization techniques for enhancing application performance in React and JavaScript. Since this discussion has been quite extensive, let's continue our exploration in Part II. See you there!

Using Code-Splitting

Code-splitting is a technique that allows you to break up your application into smaller “chunks“ that are loaded on demand. This strategy results in faster initial load times and improved overall performance.

Using code-splitting with Lazy and Suspense

Using React.lazy() and <React.Suspense> allows you to seamlessly perform code-splitting in your React Router implementation. This results in faster initial page loads and on-demand loading for navigated pages.

Here’s an example of applying code-splitting with React.lazy and Suspense:

By using React.lazy(), the Home and About components are loaded on demand, ensuring a more performant browsing experience.

Implementing Preloading Techniques

Preloading allows you to load specific content or components in advance, reducing wait times for users. This technique can significantly enhance the user experience, especially when applied strategically.

Here’s an example of preloading components as users hover over navigation links:

In this example, hovering over the NavLink components preloads the Home and About components, making navigation faster and smoother.

React at Scale: What It Takes to Handle Millions of Records

Lessons Learned from Scaling Intensive Web Applications and the Performance Optimizations That Actually Matter.

Over the past few years, I have had opportunities to work on several large-scale React web applications that handle massive datasets — from e-commerce platforms with hundreds of thousands of products, to real-time dashboards visualizing live metrics, to internal enterprise tools managing deeply nested and complex data structures. Each project presents unique challenges and requires thoughtful performance strategies, architectural decisions, and data handling techniques to ensure a smooth user experience at scale.

Through trial, error, and countless performance profiling sessions, I have discovered techniques and patterns that make the difference between an app that crashes browsers and one that handles massive datasets gracefully. Here are the most impactful lessons I have learned from building data-intensive React applications that actually perform.

Handling Large Datasets

Here’s what happens when you are trying to load the whole of the large dataset at once:

  • Memory usage explodes: The browser consumes several gigabytes of RAM as it creates DOM elements and JavaScript objects for every single record, pushing most systems beyond their limits.

  • Everything becomes painfully slow: Scrolling takes several seconds to respond, clicks experience significant delays, the entire interface becomes unresponsive, and the main thread struggles to manage the massive DOM elements.

  • Users abandon the app: What starts as a functional application becomes completely unusable. Navigations break down, interactions feel broken, and the performance is so poor that users can’t accomplish their tasks.

The problem is clear: attempting to render everything simultaneously doesn’t scale when dealing with massive datasets. We need a fundamentally different approach that respects both browser limitations and human perception patterns.

🧮 Enter Virtualization

Instead of rendering a large dataset of DOM elements, I used Tanstack Virtualization to create a virtualized list that only renders items currently in the viewport. At any given moment, only 100-200 elements exist in the DOM, regardless of the total dataset size.

🔄 Smart Data Fetching

I implement chunked data fetching with cursor-based pagination. The frontend requests data in digestible 10,000-item chunks, loading new batches as users scroll. This approach eliminates the initial loading bottleneck and makes the application responsive from the first interaction.

The result? The large dataset now loads in under 2 seconds and scrolls as smoothly as a 100-item list.

⚙️ Next.js Optimization

If you are using Next.js, you can leverage server-side prefetching when loading large datasets. This improves the initial load, and once hydrated on the client, you can use client-side fetching for pagination or filtering. This can be accomplished using React Query. I have explained this approach in detail here.

🧩 AG Grid: The Enterprise Solution

AG Grid offers a robust alternative with both client-side, Infinity, and server-side row models, plus exceptional built-in UI components that provide an excellent user experience out of the box.

For medium datasets (up to 100k records): If your dataset contains around 100k records and you can fetch them in one go, AG Grid’s client-side row model handles all the heavy lifting. It provides filtering, sorting, virtualization, and pagination automatically without additional configuration.

If you do not want to shift all the data from your server to your client, as the amount of data is too large to shift over the network or to extract from the underlying datasource, then use either Infinity or Server-side. Each one takes data from the server in different ways.

For large datasets (100k+ records): When dealing with datasets larger than 100k+ records, AG Grid’s server-side or Infinity Row model becomes the better choice. It delegates filtering, sorting, and pagination to your backend, keeping the frontend performant while maintaining all the rich grid functionalities users expect.

The key advantage is that AG Grid abstracts away most of the complexity we have discussed, providing a production-ready solution with minimal setup effort.

Smart Loading Techniques

Performance Optimization is often about being smart - doing as little work as possible until you absolutely have to

⏳ Load on Demand Strategy

One of my applications has two main views: a list showing item summaries and a detailed panel for selected items. Initially, I fetched the completed data for every item just in case. This was wasteful and slow.

Instead, I implemented what I call the “load on demand“ strategy:

  1. Minimized Data Fetching: The list view only required basic attributes like name, ID, and category. I fetched just this data initially and deferred loading full item details until necessary.

  2. Lazy Loading with Prefetching: Using React Query, I implemented a hover-triggered prefetch. When a user hovered over an item, its details were fetched after a 300ms delay. By embedding the item ID in the DOM, a single event handler managed all items efficiently.

  3. Smart Caching: Used React Query to cache fetched details for instant subsequent access.

  4. Adaptive Fetching for Different List Sizes: For small lists, we can fetch full item data to enable real-time performance. For large lists, we only fetch the minimal data required for the list view, dynamically loading details when requested.

This approach reduced initial payload size by 90% while maintaining the perception of instant loading.

🧱 Flattening Data

In traditional database design, keeping data organized is important. But in frontend performance, flattening data can be a game-changer.

I restructured my API responses to match what the UI actually needs rather than following strict database patterns. This approach is known as the Backend for Frontend (BFF pattern) — creating API endpoints specifically tailored to frontend requirements.

Instead of making multiple requests to piece together item data, I flattened related information into single, focused responses.

Before (Separate Requests)

GET /items → [{id: 1, categoryId: 5, brandId: 12}]
GET /categories/5 → {name: "Electronics"}
GET /brands/12 → {name: "Apple"}

After (Combined Data)

GET /items → [{id: 1, categoryName: "Electronics", brandName: "Apple"}]

This reduced network requests by 70% and eliminated the complexity of managing multiple loading states.

❓When BFF Isn’t an Option: Frontend Data Transformation

If you can’t implement a Backend for Frontend pattern, we can still achieve similar results by handling data normalization on the frontend. Here are the main approaches:

React Query with Normalizr

import { normalize, denormalize, schema } from 'normalizr';

// Define schemas
const category = new schema.Entity('categories');
const brand = new schema.Entity('brands');
const item = new schema.Entity('items', {
  category: category,
  brand: brand
});

const useNormalizedItems = () => {
  const { data: rawData } = useQuery('itemsWithDetails', fetchItemsWithDetails);

  return useMemo(() => {
    if (!rawData) return { items: [], entities: {} };

    // Normalize the data
    const normalized = normalize(rawData, [item]);

    // Denormalize for component use
    const denormalizedItems = denormalize(
      normalized.result, 
      [item], 
      normalized.entities
    );

    return { items: denormalizedItems, entities: normalized.entities };
  }, [rawData]);
};

RTK Query with Entity Adapter

import { createEntityAdapter } from '@reduxjs/toolkit';
import { createApi, fetchBaseQuery } from '@reduxjs/toolkit/query/react';

// Create entity adapters
const itemsAdapter = createEntityAdapter();
const categoriesAdapter = createEntityAdapter();
const brandsAdapter = createEntityAdapter();

export const apiSlice = createApi({
  reducerPath: 'api',
  baseQuery: fetchBaseQuery({ baseUrl: '/api' }),
  endpoints: (builder) => ({
    getItems: builder.query({
      query: () => '/items',
      transformResponse: (response) => itemsAdapter.setAll(itemsAdapter.getInitialState(), response)
    }),
    getCategories: builder.query({
      query: () => '/categories',
      transformResponse: (response) => categoriesAdapter.setAll(categoriesAdapter.getInitialState(), response)
    }),
    getBrands: builder.query({
      query: () => '/brands',
      transformResponse: (response) => brandsAdapter.setAll(brandsAdapter.getInitialState(), response)
    })
  })
});

// Selector to get enriched items
const selectEnrichedItems = createSelector(
  [
    (state) => state.api.queries['getItems()']?.data,
    (state) => state.api.queries['getCategories()']?.data,
    (state) => state.api.queries['getBrands()']?.data
  ],
  (items, categories, brands) => {
    if (!items || !categories || !brands) return [];

    return itemsAdapter.getSelectors().selectAll(items).map(item => ({
      ...item,
      categoryName: categoriesAdapter.getSelectors().selectById(categories, item.categoryId)?.name,
      brandName: brandsAdapter.getSelectors().selectById(brands, item.brandId)?.name
    }));
  }
);

Trade-offs of Frontend Normalization:

  • More network requests but better caching

  • Follows REST principles

  • More complex state management

  • Potential for inconsistent loading states

While frontend normalization works, the BFF pattern remains the cleaner solution when possible. Check out this article for more details:

Optimizing React Rendering Beyond Memoization

Scaling React apps requires careful attention to rendering logic, especially when handling dynamic datasets. While memoization (e.g., React.memo and useMemo) is helpful, it’s not a silver bullet.

🎯 Normalized State

Normalize your data like a relational DB: one store for all items, one for visible IDs.

// Redux example
const state = {
  itemsById: {
    "1": { id: 1, name: "Item 1" },
    "2": { id: 2, name: "Item 2" },
    ...
  },
  visibleItemIds: [1, 2, 3], // Only IDs rendered now
  searchResultIds: [5, 8, 10] // Optional
}

Benefits:

  • Updates only what’s needed

  • Tools like reselect or memoized selectors ensure re-renders are scoped

  • RTK Query handles fetching/caching large datasets

  • RTK’s createEntityAdapter makes normalization trivial

With Redux Toolkit + RTK Query, you can define normalized slices using createEntityAdapter, then populate them using RTK Query’s transformResponse. This gives you full control over normalized client-side state with server-state integration built in.

Alternatively, with React Query + normalizr, you can normalize API responses in the select or transform functions of useQuery. The normalized entities can be cached in memory or used to render views scoped to visible/search result IDs. React Query manages the async state, caching, and revalidation, while normalizr structures the data like a relational graph — ideal for apps that don’t need a full Redux store but still want predictable, performant renders for large datasets.

🛑 Avoiding Large-Scale map() Calls:

Mapping over a million items led to memory spikes. Instead, we can use for loops and processed items in smaller chunks. This approach can avoid creating unnecessary arrays, reducing memory overhead.

📉 Efficient Array Operations

One of the biggest performance killers was innocent-looking code like this:

// This innocent line caused 2-second freezes
const filteredItems = items.filter(item => item.category === selectedCategory)

When items contains a million elements, JavaScript's array methods become performance bottlenecks. We can replace high-level array operations with more efficient alternatives:

Instead of map() for large datasets:

// Slow: Creates a new array
const processed = items.map(transform)

// Fast: Processes in place
for (let i = 0; i < items.length; i++) {
  processItem(items[i])
}

Streamlining Network Performance

Handling millions of items required fetching vast amounts of data, which led to slow requests, retries, and concurrency bottlenecks.

🧩 Chunking and Cursor-Based Pagination

Instead of fetching all results at once, I implemented chunking with cursor-based pagination. For example:

  • Data was divided into chunks of 10,000 elements, fetched incrementally.

  • If a request failed, only the failed chunk was retried, reducing overhead.

🧬 Columnar Response Format

Switching from a row-based to a column-based wire format reduced payload size. Instead of repeating column names (e.g., ID and category) for each row, I sent them once as metadata. This reduced redundancy and took advantage of backend languages with efficient memory models.

**Row-Based Format (Traditional JSON)
**Each row is a complete object, including all column names:

[
  {"id": 1, "name": "Item 1", "price": 100},
  {"id": 2, "name": "Item 2", "price": 200}
]

**Columnar Format (Optimized)
**The data is split by column (field name), and values are grouped into arrays:

 {
  "ids": [1, 2],
  "names": ["Item 1", "Item 2"],
  "prices": [100, 200]
}

This format is highly friendly for:

  • Low-latency APIs

  • Large result sets

  • Frontend rendering with column-based tables (e.g., grids)

🛠️ Understanding Bottlenecks with Chrome DevTools:

  • Initiator Tab: Helped trace the source of network requests, identifying unnecessary or redundant calls.

  • Timing Tab: Revealed queuing delays, server response times, and download speeds, highlighting areas for optimization.

♻️ Intelligent Retry Logic

When dealing with large datasets, network failures are inevitable. I implemented smart retry logic that only retries failed chunks, not entire requests. This meant a single failed 10,000-item chunk didn’t force re-downloading of successfully fetched data.

Building a Performance Culture

Technical solutions only work if your team adopts a performance-first mindset. Here’s what I learned about creating that culture:

🎯 Performance Budgets

I set strict performance budgets:

  • Initial bundle size: < 500KB

  • Time to interact: < 3 seconds

  • Memory usage: < 100MB for 10,000 visible items

🧪 Continuous Monitoring

I integrated performance monitoring into my CI/CD pipeline. Any PR that degrades loading time by more than 10% requires a performance review before merging.

📊 Real User Monitoring (RUM)

RUM (Real User Monitoring) tracks how actual users experience your application in production. Unlike synthetic monitoring, RUM gives visibility into:

  • Load times

  • Errors and retries

  • Network performance

  • Interactions (clicks, scrolls, etc.)

I integrated Datalog RUM to continuously monitor real-world user behavior and optimize the UI/UX. This allowed us to perform:

  • Funnel analysis — to understand drop-offs across the user journey

  • Heatmaps — to visualize engagement patterns

  • API latency tracking — segmented by user device and location

  • Device-based analytics — to optimize for various screen sizes and hardware

  • Geo analytics — to spot regional performance issues and tailor optimizations accordingly

  • Core Web Vitals (LCP, FID, CLS)

7 Advanced React Performance Patterns Everyone Should Master

React applications start simple, but as they grow, performance bottlenecks emerge. Component re-renders unnecessarily, state updates cascade unpredictably, and your once-snappy app begins to lag.

The difference between a good React developer and a great one isn’t knowing the API — it’s mastering the patterns that keep applications fast, scalable, and maintainable even as complexity grows.

These are 7 advanced performance patterns that will transform how you approach React development. Each pattern addresses real performance challenges you will face in production applications.

Why Performance Patterns Matter?

While basic React gets your app working, these advanced patterns provide:

  • Optimized Rendering: Eliminate unnecessary re-renders and improve user experience.

  • Memory Efficiency: Prevent memory leaks and reduce garbage collection overhead.

  • Scalable Architecture: Build applications that perform well as they grow

  • Predictable Behavior: Create components that behave consistently under load.

1. Smart Memoization with Dependency Tracking

Don’t just memo everything — use strategic memoization that actually improves performance.

import { memo, useMemo, useCallback, useState } from 'react';

// ❌ Over-memoization can hurt performance
const BadExample = memo(() => {
  const [count, setCount] = useState(0);

  // This creates a new function on every render anyway
  const handleClick = () => setCount(c => c + 1);

  return <button onClick={handleClick}>{count}</button>;
});

// ✅ Strategic memoization with proper dependencies
const ExpensiveUserList = memo(({ users, searchTerm, sortBy }) => {
  const filteredAndSortedUsers = useMemo(() => {
    console.log('Expensive computation running...');

    return users
      .filter(user => 
        user.name.toLowerCase().includes(searchTerm.toLowerCase()) ||
        user.email.toLowerCase().includes(searchTerm.toLowerCase())
      )
      .sort((a, b) => {
        switch (sortBy) {
          case 'name': return a.name.localeCompare(b.name);
          case 'email': return a.email.localeCompare(b.email);
          case 'date': return new Date(b.createdAt) - new Date(a.createdAt);
          default: return 0;
        }
      });
  }, [users, searchTerm, sortBy]); // Only recompute when these change

  return (
    <div>
      {filteredAndSortedUsers.map(user => (
        <UserCard key={user.id} user={user} />
      ))}
    </div>
  );
});

// Memoize child components that receive objects as props
const UserCard = memo(({ user }) => {
  return (
    <div className="user-card">
      <h3>{user.name}</h3>
      <p>{user.email}</p>
      <small>{new Date(user.createdAt).toLocaleDateString()}</small>
    </div>
  );
});

👉 Real-world impact: In a dashboard with 10,000+ items, proper memoization reduced render time from 800ms to 50ms.

2. Virtualization for Large Lists

Handle thousands of items without killing performance using virtual scrolling.

import { useState, useMemo, useCallback } from 'react';

const VirtualizedList = ({ items, itemHeight = 50, containerHeight = 400 }) => {
  const [scrollTop, setScrollTop] = useState(0);

  const visibleItems = useMemo(() => {
    const startIndex = Math.floor(scrollTop / itemHeight);
    const endIndex = Math.min(
      startIndex + Math.ceil(containerHeight / itemHeight) + 1,
      items.length - 1
    );

    return {
      startIndex,
      endIndex,
      items: items.slice(startIndex, endIndex + 1)
    };
  }, [items, itemHeight, containerHeight, scrollTop]);

  const handleScroll = useCallback((e) => {
    setScrollTop(e.target.scrollTop);
  }, []);

  const totalHeight = items.length * itemHeight;
  const offsetY = visibleItems.startIndex * itemHeight;

  return (
    <div
      style={{ height: containerHeight, overflow: 'auto' }}
      onScroll={handleScroll}
    >
      {/* Spacer for total height */}
      <div style={{ height: totalHeight, position: 'relative' }}>
        {/* Visible items container */}
        <div
          style={{
            transform: `translateY(${offsetY}px)`,
            position: 'absolute',
            top: 0,
            left: 0,
            right: 0,
          }}
        >
          {visibleItems.items.map((item, index) => (
            <div
              key={visibleItems.startIndex + index}
              style={{ height: itemHeight }}
              className="list-item"
            >
              <ListItem item={item} />
            </div>
          ))}
        </div>
      </div>
    </div>
  );
};

// Usage with large datasets
function ProductCatalog() {
  const [products] = useState(() => 
    // Simulate 50,000 products
    Array.from({ length: 50000 }, (_, i) => ({
      id: i,
      name: `Product ${i}`,
      price: Math.random() * 100,
      category: ['Electronics', 'Clothing', 'Books'][i % 3]
    }))
  );

  return (
    <VirtualizedList
      items={products}
      itemHeight={60}
      containerHeight={500}
    />
  );
}

👉 Real-world impact: Rendering 50,000 items went from crashing the browser to smooth 60fps scrolling.

3. State Colocation and Lifting

Keep state close to where it’s used, lift it only when necessary.

// ❌ Lifting state too early causes unnecessary re-renders
function BadApp() {
  const [userSearch, setUserSearch] = useState('');
  const [productSearch, setProductSearch] = useState('');
  const [currentTab, setCurrentTab] = useState('users');

  // Both searches cause the entire app to re-render
  return (
    <div>
      <TabNavigation currentTab={currentTab} onTabChange={setCurrentTab} />
      {currentTab === 'users' && (
        <UserPanel 
          searchTerm={userSearch} 
          onSearchChange={setUserSearch} 
        />
      )}
      {currentTab === 'products' && (
        <ProductPanel 
          searchTerm={productSearch} 
          onSearchChange={setProductSearch} 
        />
      )}
    </div>
  );
}

// ✅ State colocation - keep state close to where it's used
function GoodApp() {
  const [currentTab, setCurrentTab] = useState('users');

  return (
    <div>
      <TabNavigation currentTab={currentTab} onTabChange={setCurrentTab} />
      {currentTab === 'users' && <UserPanel />}
      {currentTab === 'products' && <ProductPanel />}
    </div>
  );
}

// Search state is colocated within each panel
function UserPanel() {
  const [searchTerm, setSearchTerm] = useState('');
  const [users, setUsers] = useState([]);

  // Only this component re-renders when search changes
  const filteredUsers = useMemo(() => 
    users.filter(user => user.name.includes(searchTerm)),
    [users, searchTerm]
  );

  return (
    <div>
      <SearchInput value={searchTerm} onChange={setSearchTerm} />
      <UserList users={filteredUsers} />
    </div>
  );
}

👉 Real-world impact: Reduced re-renders by 70% in a complex dashboard by moving state closer to where it’s used.

4. Compound Components with Context

Create flexible, performant component APIs that prevent prop drilling.

import { createContext, useContext, useState, useCallback } from 'react';

// Context for internal component communication
const AccordionContext = createContext();

// Main compound component
function Accordion({ children, allowMultiple = false }) {
  const [openItems, setOpenItems] = useState(new Set());

  const toggleItem = useCallback((itemId) => {
    setOpenItems(prev => {
      const newSet = new Set(prev);

      if (newSet.has(itemId)) {
        newSet.delete(itemId);
      } else {
        if (!allowMultiple) {
          newSet.clear();
        }
        newSet.add(itemId);
      }

      return newSet;
    });
  }, [allowMultiple]);

  const value = {
    openItems,
    toggleItem,
    allowMultiple
  };

  return (
    <AccordionContext.Provider value={value}>
      <div className="accordion">{children}</div>
    </AccordionContext.Provider>
  );
}

// Individual accordion item
function AccordionItem({ children, itemId }) {
  const { openItems, toggleItem } = useContext(AccordionContext);
  const isOpen = openItems.has(itemId);

  return (
    <div className={`accordion-item ${isOpen ? 'open' : ''}`}>
      {typeof children === 'function' 
        ? children({ isOpen, toggle: () => toggleItem(itemId) })
        : children
      }
    </div>
  );
}

// Accordion trigger component
function AccordionTrigger({ children, itemId }) {
  const { toggleItem } = useContext(AccordionContext);

  return (
    <button
      className="accordion-trigger"
      onClick={() => toggleItem(itemId)}
      type="button"
    >
      {children}
    </button>
  );
}

// Accordion content component  
function AccordionContent({ children, itemId }) {
  const { openItems } = useContext(AccordionContext);
  const isOpen = openItems.has(itemId);

  if (!isOpen) return null;

  return (
    <div className="accordion-content">
      {children}
    </div>
  );
}

// Attach sub-components to main component
Accordion.Item = AccordionItem;
Accordion.Trigger = AccordionTrigger;
Accordion.Content = AccordionContent;

// Usage - Clean, declarative API
function FAQSection() {
  return (
    <Accordion allowMultiple>
      <Accordion.Item itemId="item-1">
        <Accordion.Trigger itemId="item-1">
          What is React?
        </Accordion.Trigger>
        <Accordion.Content itemId="item-1">
          React is a JavaScript library for building user interfaces.
        </Accordion.Content>
      </Accordion.Item>

      <Accordion.Item itemId="item-2">
        <Accordion.Trigger itemId="item-2">
          How do hooks work?
        </Accordion.Trigger>
        <Accordion.Content itemId="item-2">
          Hooks let you use state and other React features in functional components.
        </Accordion.Content>
      </Accordion.Item>
    </Accordion>
  );
}

👉 Real-world impact: Eliminated prop drilling in a complex form builder, reducing component coupling by 60%.

5. Optimistic Updates with Rollback

Provide instant feedback while gracefully handling failures.

import { useState, useCallback, useRef } from 'react';

function useOptimisticUpdates(initialData, updateFn) {
  const [data, setData] = useState(initialData);
  const [isLoading, setIsLoading] = useState(false);
  const rollbackRef = useRef(null);

  const optimisticUpdate = useCallback(async (optimisticData, serverUpdate) => {
    // Store current state for potential rollback
    rollbackRef.current = data;

    // Apply optimistic update immediately
    setData(optimisticData);
    setIsLoading(true);

    try {
      // Attempt server update
      const result = await serverUpdate();

      // Success: update with server response
      setData(result);
      rollbackRef.current = null;
    } catch (error) {
      // Failure: rollback to previous state
      setData(rollbackRef.current);
      rollbackRef.current = null;

      // Re-throw for component to handle
      throw error;
    } finally {
      setIsLoading(false);
    }
  }, [data]);

  return { data, isLoading, optimisticUpdate };
}

// Usage in a like button component
function LikeButton({ postId, initialLikes, initialIsLiked }) {
  const [error, setError] = useState(null);

  const { data, isLoading, optimisticUpdate } = useOptimisticUpdates(
    { likes: initialLikes, isLiked: initialIsLiked },
    async (newState) => {
      const response = await fetch(`/api/posts/${postId}/like`, {
        method: newState.isLiked ? 'POST' : 'DELETE',
      });

      if (!response.ok) {
        throw new Error('Failed to update like status');
      }

      return response.json();
    }
  );

  const handleLike = useCallback(async () => {
    setError(null);

    const newState = {
      likes: data.isLiked ? data.likes - 1 : data.likes + 1,
      isLiked: !data.isLiked
    };

    try {
      await optimisticUpdate(
        newState,
        () => fetch(`/api/posts/${postId}/like`, {
          method: newState.isLiked ? 'POST' : 'DELETE',
        }).then(res => res.json())
      );
    } catch (err) {
      setError('Failed to update. Please try again.');
    }
  }, [data, optimisticUpdate, postId]);

  return (
    <div>
      <button
        onClick={handleLike}
        disabled={isLoading}
        className={`like-button ${data.isLiked ? 'liked' : ''}`}
      >
        ❤️ {data.likes}
      </button>
      {error && <div className="error">{error}</div>}
    </div>
  );
}

👉 Real-world impact: Improved perceived performance by 300% in social media feeds by providing instant feedback.

6. Intersection Observer for Smart Loading

Load content efficiently based on visibility, not just mount status.

import { useState, useEffect, useRef, useCallback } from 'react';

function useIntersectionObserver(options = {}) {
  const [isIntersecting, setIsIntersecting] = useState(false);
  const [hasIntersected, setHasIntersected] = useState(false);
  const targetRef = useRef(null);

  useEffect(() => {
    const target = targetRef.current;
    if (!target) return;

    const observer = new IntersectionObserver(
      ([entry]) => {
        setIsIntersecting(entry.isIntersecting);

        if (entry.isIntersecting && !hasIntersected) {
          setHasIntersected(true);
        }
      },
      {
        threshold: 0.1,
        rootMargin: '50px',
        ...options
      }
    );

    observer.observe(target);

    return () => {
      observer.unobserve(target);
    };
  }, [hasIntersected, options]);

  return { targetRef, isIntersecting, hasIntersected };
}

// Lazy loading image component
function LazyImage({ src, alt, placeholder, className }) {
  const [imageLoaded, setImageLoaded] = useState(false);
  const [imageSrc, setImageSrc] = useState(placeholder);
  const { targetRef, hasIntersected } = useIntersectionObserver({
    threshold: 0.1,
    rootMargin: '100px' // Start loading 100px before visible
  });

  useEffect(() => {
    if (hasIntersected && !imageLoaded) {
      const img = new Image();
      img.onload = () => {
        setImageSrc(src);
        setImageLoaded(true);
      };
      img.src = src;
    }
  }, [hasIntersected, src, imageLoaded]);

  return (
    <div ref={targetRef} className={className}>
      <img
        src={imageSrc}
        alt={alt}
        style={{
          opacity: imageLoaded ? 1 : 0.7,
          transition: 'opacity 0.3s ease'
        }}
      />
    </div>
  );
}

// Smart content loading
function ContentSection({ children, fallback }) {
  const { targetRef, hasIntersected, isIntersecting } = useIntersectionObserver({
    threshold: 0.1,
    rootMargin: '200px'
  });

  return (
    <div ref={targetRef} style={{ minHeight: '200px' }}>
      {hasIntersected ? children : fallback}
      {isIntersecting && <div>Content is visible!</div>}
    </div>
  );
}

// Usage in a feed
function InfiniteFeed() {
  const [posts, setPosts] = useState([]);
  const [page, setPage] = useState(1);
  const [loading, setLoading] = useState(false);

  const loadMorePosts = useCallback(async () => {
    if (loading) return;

    setLoading(true);
    try {
      const response = await fetch(`/api/posts?page=${page}`);
      const newPosts = await response.json();
      setPosts(prev => [...prev, ...newPosts]);
      setPage(prev => prev + 1);
    } finally {
      setLoading(false);
    }
  }, [page, loading]);

  return (
    <div>
      {posts.map((post, index) => (
        <ContentSection
          key={post.id}
          fallback={<div>Loading post...</div>}
        >
          <article>
            <h2>{post.title}</h2>
            <LazyImage
              src={post.imageUrl}
              alt={post.title}
              placeholder="/placeholder.jpg"
            />
            <p>{post.excerpt}</p>
          </article>
        </ContentSection>
      ))}

      <ContentSection fallback={<div>Load more trigger</div>}>
        <div onViewportEnter={loadMorePosts}>
          {loading ? 'Loading more...' : 'Load more posts'}
        </div>
      </ContentSection>
    </div>
  );
}

👉 Real-world impact: Reduced initial page load time by 60% and bandwidth usage by 40% in image-heavy applications.

7. Error Boundaries with Recovery

Handle errors gracefully while providing users with recovery options.

import { Component, createContext, useContext } from 'react';

// Error boundary context for recovery actions
const ErrorRecoveryContext = createContext();

class ErrorBoundary extends Component {
  constructor(props) {
    super(props);
    this.state = {
      hasError: false,
      error: null,
      errorInfo: null,
      errorId: null
    };
  }

  static getDerivedStateFromError(error) {
    return {
      hasError: true,
      error,
      errorId: Date.now().toString()
    };
  }

  componentDidCatch(error, errorInfo) {
    this.setState({ errorInfo });

    // Log to error reporting service
    console.error('Error caught by boundary:', error, errorInfo);

    // Report to analytics
    if (this.props.onError) {
      this.props.onError(error, errorInfo);
    }
  }

  handleRetry = () => {
    this.setState({
      hasError: false,
      error: null,
      errorInfo: null,
      errorId: null
    });
  };

  handleReportError = () => {
    // Send detailed error report
    fetch('/api/error-reports', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        error: this.state.error?.message,
        stack: this.state.error?.stack,
        errorInfo: this.state.errorInfo,
        timestamp: new Date().toISOString(),
        userAgent: navigator.userAgent,
        url: window.location.href
      })
    });
  };

  render() {
    if (this.state.hasError) {
      const contextValue = {
        retry: this.handleRetry,
        reportError: this.handleReportError,
        error: this.state.error
      };

      return (
        <ErrorRecoveryContext.Provider value={contextValue}>
          {this.props.fallback ? (
            this.props.fallback(this.state.error, this.state.errorInfo)
          ) : (
            <DefaultErrorFallback />
          )}
        </ErrorRecoveryContext.Provider>
      );
    }

    return this.props.children;
  }
}

// Hook for accessing error recovery actions
function useErrorRecovery() {
  const context = useContext(ErrorRecoveryContext);
  if (!context) {
    throw new Error('useErrorRecovery must be used within an ErrorBoundary');
  }
  return context;
}

// Default error UI with recovery options
function DefaultErrorFallback() {
  const { retry, reportError, error } = useErrorRecovery();
  const [isReporting, setIsReporting] = useState(false);

  const handleReport = async () => {
    setIsReporting(true);
    try {
      await reportError();
      alert('Error reported successfully');
    } catch (err) {
      alert('Failed to report error');
    } finally {
      setIsReporting(false);
    }
  };

  return (
    <div className="error-boundary-fallback">
      <h2>🚨 Something went wrong</h2>
      <details style={{ whiteSpace: 'pre-wrap' }}>
        <summary>Error details</summary>
        {error?.toString()}
      </details>

      <div className="error-actions">
        <button onClick={retry} className="retry-button">
          Try Again
        </button>
        <button 
          onClick={handleReport} 
          disabled={isReporting}
          className="report-button"
        >
          {isReporting ? 'Reporting...' : 'Report Issue'}
        </button>
        <button onClick={() => window.location.reload()}>
          Refresh Page
        </button>
      </div>
    </div>
  );
}

// Usage with different fallback strategies
function App() {
  return (
    <ErrorBoundary 
      onError={(error, errorInfo) => {
        // Send to monitoring service
        console.error('App error:', error, errorInfo);
      }}
    >
      <Header />

      {/* Critical section with custom fallback */}
      <ErrorBoundary
        fallback={(error) => (
          <div className="critical-error">
            <p>Unable to load main content</p>
            <button onClick={() => window.location.reload()}>
              Reload Application
            </button>
          </div>
        )}
      >
        <MainContent />
      </ErrorBoundary>

      {/* Non-critical section with minimal fallback */}
      <ErrorBoundary
        fallback={() => <div>Sidebar temporarily unavailable</div>}
      >
        <Sidebar />
      </ErrorBoundary>
    </ErrorBoundary>
  );
}

👉 Real-world impact: Reduced user-facing crashes by 90% and increased error reporting accuracy by providing contextual recovery options.

Key Takeaways

These performance patterns transform React applications from functional to exceptional:

  1. Smart Memoization — Optimize strategically, not universally

  2. Virtualization — Handle large datasets without performance degradation

  3. State Colocation — Minimize re-renders by keeping state local

  4. Compound Components — Build flexible APIs without prop drilling

  5. Optimistic Updates — Provide instant feedback with graceful rollbacks

  6. Intersection Observer — Load content intelligently based on visibility

  7. Error Boundaries — Handle failures gracefully with recovery options

Implementation Strategy

Start incorporating these patterns systematically:

  • Week 1: Audit your app for over-memoization and state lifting

  • Week 2: Implement virtualization for your largest lists

  • Week 3: Add optimistic updates to your most frequent actions

  • Week 4: Set up comprehensive error boundaries

Measuring Success

Track these metrics to validate your optimizations:

  • Time to Interactive (TTI) — Should decrease significantly

  • First Contentful Paint (FCP) — Faster initial renders

  • Memory Usage — More stable over time

  • Error Recovery Rate — Higher user retention after errors

Master these patterns, and you’ll build React applications that don’t just work — they excel under pressure, scale gracefully, and provide exceptional user experiences.

How Front-end Developers Can Handle Millions of API Requests Without Crashing Everything

At this point, we know … most front-end developers won’t wake up thinking, “How will I handle millions of API requests today?“.

We are usually busy fixing CSS bugs, debating dark-mode toggle designs, or wrestling with state management libraries.

But the comes scale.

How you design your front-end matters just as much as how scalable the backend is.

Because bad frontend API patterns = wasted requests = unnecessary server load = poor UX.

So let’s talk about how to handle millions of API requests efficiently… from the front-end developer’s lens.

  1. Cache Like Your Life Depends On It

Every unnecessary API call is a crime against your performance.

  • Browser cache: Use Cache-Control headers wisely. Let the browser do the heavy lifting.

  • Client-side cache: Store fetched data in context, Redux, or even React Query / TanStack Query. Don’t refetch the same thing again and again.

  • CDNs: If static, serve it once from the edge and stop bothering the origin.

Ask yourself: “Do I really need to hit the API again, or can I reuse what I already have?”

  1. Debounce and Throttle Requests

Ever built a search bar where every keystroke fires an API call? Congrats, you just DDosed your own backend.

  • Debounce: Wait until the user stops typing (300–500ms).

  • Throttle: For things like scroll-based fetching, limit calls to once every X ms.

It’s not just efficiency… It’s a basic respect for your backend team.

  1. Batch Requests Like a Pro

Instead of firing 50 requests at once:

  • Combines multiple queries into one bulk request if your API supports it.

  • Use GraphQL for fetching exactly what we need in a single payload

  • Or at least group similar calls together instead of spamming

The difference between 50 HTTP requests vs. 1 smartly structured one?

It’s the difference between your app feeling “buttery smooth” or “why is this so slow?”

  1. Use Background Refresh (Don’t Block the UI)

Not every API call needs to block the UI.

  • Render what you have first

  • Then, refresh data quietly in the background

  • Update only the diff, not the whole page

This is the same trick that Instagram, Twitter, and Medium itself use:

The feed loads instantly, then silently fetches related posts behind the scene.

  1. Fail Gracefully, Not Loudly

When millions of users hammer an API, something will fail.

Instead of blank screens or infinite spinners:

  • Show cached/last known data.

  • Display “something went wrong” with retry logic.

  • Implement exponential backoff for retries, don’t hammer the server harder when it’s already dying.

Great frontend = resilient frontend.

  1. Monitor From the Client Code

Don’t wait for backend dashboards to scream.

Use frontend monitoring tools (Sentry, LogRocket, Datadog RUM, etc.) to track API error rates, latency, and retries.

This helps you catch bottlenecks that only happen on real user devices, not just in your staging environment.

  1. Know When to Push Back

Sometimes, the backend needs to change.

If your app is firing 10 API requests just to render a homepage, that’s not efficient. Push for:

  • API endpoints designed for your use case.

  • Aggregated responses.

  • Better rate limits and caching strategies on the server.

Frontend devs often forget: you’re not just a consumer, you can influence the API design too.

T

Điều khiến Thần Tài Sunwin trở nên thu hút là bảng giải thưởng vô cùng giá trị. Cùng khám phá những giải thưởng hấp dẫn Chi tiết : https://gideondefoe.com/than-tai-sunwin/

1

More from this blog

T

Tuanhadev Blog

30 posts

👋 Hi there, I'm tuanhadev! I am a developer creating open-source projects and writing about web development, side projects, and productivity.