Senior Front-end Engineer Interview Handbook


Technical Questions

Programming Languages (JavaScript/TypeScript)

What is a closure in JavaScript, and how does it work?

As you know, JavaScript functions can be nested within each other, which creates a hierarchy of scopes: each function has its own local variables and can access the variables of its outer function. A closure is the mechanism that lets a function keep accessing its outer scope even after the outer function has finished executing. For example, suppose we define a createCounter() function that declares a local variable count and returns an inner function that increments and returns count. Even after createCounter() has finished executing, the returned function can still read and update count. That is how a JavaScript closure works.

```javascript
function createCounter() {
  let count = 0;
  return function () {
    count++;
    return count;
  };
}

const counter = createCounter();
console.log(counter()); // 1
console.log(counter()); // 2
```

What is hoisting in JavaScript, and how does it work with const and let?

Hoisting is JavaScript's behavior of moving declarations to the top of their enclosing scope before the code runs. For example, when you log a var variable above the line where it is defined, JavaScript logs undefined instead of throwing an error. Many developers assume hoisting only applies to var and not to let and const, because accessing those before their declaration throws an error. But the error message is revealing: accessing a let or const variable before initialization throws "Cannot access … before initialization", while accessing a variable that was never declared throws "… is not defined". That means JavaScript does know about the let/const variable — its declaration was hoisted — but it is not automatically initialized to undefined the way a var variable is; it sits in the temporal dead zone until its declaration line runs.
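A quick sketch of the difference. The exact error wording varies by engine; the messages shown are V8/Node-style:

```javascript
// var declarations are hoisted AND initialized to undefined.
console.log(typeof a); // "undefined" — no error
var a = 1;

// let declarations are hoisted but NOT initialized: reading one
// before its declaration line throws (the temporal dead zone).
function readBeforeInit() {
  try {
    console.log(b);
  } catch (err) {
    return err.message; // "Cannot access 'b' before initialization"
  }
  let b = 2;
}

// A name with no declaration at all fails differently.
function readUndeclared() {
  try {
    console.log(notDeclaredAnywhere);
  } catch (err) {
    return err.message; // "notDeclaredAnywhere is not defined"
  }
}
```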

Modern Front-end Frameworks (React.js/Next.js)

Can you explain how the React Rendering Process works under the hood?

I already published a detailed explanation of this topic on my blog. In short, when a React component renders, it goes through two main phases: the render phase and the commit phase. In the render phase, React calls your components — the JSX compiles down to React elements — and builds a new virtual DOM tree. React then compares the new virtual DOM with the previous one using its reconciliation mechanism to work out the list of changes that need to be applied to the actual DOM. Because React knows exactly which changes are needed, it doesn't have to rebuild the full DOM on every state update. In the commit phase, React applies that list of changes to the DOM through DOM APIs (createElement, setAttribute, appendChild, …), and useLayoutEffect callbacks run synchronously right after. Once the commit is done, the browser takes over — layout, paint, and composite. After the browser has painted, the useEffect callbacks run. That is, at a high level, how React works under the hood.

What is the difference between a controlled and an uncontrolled component?

A controlled component is one whose behavior React manages through state: the component's value lives in React state and is updated through handlers such as onChange.

An uncontrolled component, on the other hand, lets the DOM manage its own state; React reads the value when needed through a ref.

Controlled components are used in most projects because they allow more control and more predictable behavior.

Conversely, if your component is something like a simple file-upload input — no validation, no dependencies — an uncontrolled component is preferable.
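A minimal sketch of the two approaches (component names are illustrative):

```jsx
import { useRef, useState } from 'react';

// Controlled: React state is the single source of truth.
function ControlledInput() {
  const [value, setValue] = useState('');
  return (
    <input value={value} onChange={(e) => setValue(e.target.value)} />
  );
}

// Uncontrolled: the DOM holds the value; React reads it via a ref.
function UncontrolledUpload({ onSubmit }) {
  const fileRef = useRef(null);
  return (
    <form
      onSubmit={(e) => {
        e.preventDefault();
        onSubmit(fileRef.current.files);
      }}
    >
      <input type="file" ref={fileRef} />
      <button type="submit">Upload</button>
    </form>
  );
}
```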

Can you explain the difference between useEffect and useLayoutEffect?

The main difference between useEffect and useLayoutEffect is when they run and what they are for. Both run during the commit phase, but useLayoutEffect runs synchronously before the browser paints, while useEffect runs asynchronously after the paint. Because useLayoutEffect is synchronous, the code inside it blocks painting, which is why React recommends using it sparingly. Use useLayoutEffect for work that must happen before the user sees the frame — DOM measurements, or calculating an element's position to prevent flicker — and use useEffect for ordinary side-effect tasks.
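A sketch of the typical measurement use case; the tooltip example is hypothetical:

```jsx
import { useLayoutEffect, useRef, useState } from 'react';

function Tooltip({ children }) {
  const ref = useRef(null);
  const [height, setHeight] = useState(0);

  // Runs synchronously after the DOM mutation but BEFORE the browser
  // paints, so the user never sees the tooltip at the wrong position.
  useLayoutEffect(() => {
    setHeight(ref.current.getBoundingClientRect().height);
  }, []);

  return (
    <div ref={ref} style={{ transform: `translateY(-${height}px)` }}>
      {children}
    </div>
  );
}
```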

Can you list down all the factors that make the component re-render?

Well, that is an interesting question, and it takes quite some time to answer. First, we should understand that a React re-render happens when React needs to update the UI with new data. Necessary re-renders are not the problem; too many unnecessary re-renders are what hurts performance. There are five reasons why a component re-renders: state changes, parent re-renders, props changes, context changes, and hook changes.

  • When its state changes, the component re-renders itself.

  • When a parent component re-renders, all of its children re-render as well. Re-rendering always cascades down the tree; a child component's re-render does not trigger its parent's re-render.

  • When props change, it is almost always because the parent component re-rendered and passed new props. In practice the parent's re-render triggers all of its children's re-renders regardless of whether their props actually changed (unless the child is memoized).

  • When the value of a Context Provider changes, every component that consumes that context re-renders, even components that don't use the changed portion of the data directly.

  • Everything that happens inside a hook belongs to the component that uses it, so the same rules regarding state and context apply here. Note that hooks can be chained, and the same rules apply all the way down the chain.

How many approaches do you know that we can use to prevent unnecessary re-rendering?

Wow, that is a big question. There are many techniques we can use to prevent unnecessary re-rendering. First, we should understand that unnecessary re-renders are not, by themselves, a big problem: React is fast and usually handles them without users noticing anything. However, if they happen too often, or on a heavy component, they can cause real performance issues for end users.

  • We should avoid declaring a component inside another component. On every re-render, React re-mounts such a component (destroys it and recreates it from scratch), which is much slower than a normal re-render.

  • We should move state down, as close as possible to the component that consumes it. For example, instead of keeping a dialog's open/close state in the parent component, move it into the dialog component; then every state change re-renders only the dialog, without affecting the parent.

  • We should encapsulate state changes in a smaller component by passing slow components down as props or children. Because those props are not affected by the wrapper's state changes, the heavy components won't re-render.

  • We can prevent unnecessary re-renders with React.memo, useMemo, and useCallback.

  • We need to implement React keys correctly if we don't want list elements to be re-rendered (or re-mounted) on every update.

  • We can prevent re-renders caused by Context with techniques like memoizing the context value, splitting data into smaller providers, and context selectors.

Those are all the techniques that I can remember right now for preventing unnecessary re-renders in React.
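The "components as props/children" trick from the list above, sketched (SlowComponent is hypothetical):

```jsx
import { useState } from 'react';

// The scrolling state lives here, but the slow content arrives as
// `children`, created by the parent — so this wrapper's re-renders
// don't re-render it.
function ScrollTracker({ children }) {
  const [scrollY, setScrollY] = useState(0);
  return (
    <div onScroll={(e) => setScrollY(e.currentTarget.scrollTop)}>
      <p>Scrolled: {scrollY}px</p>
      {children}
    </div>
  );
}

// Usage: the <SlowComponent /> element is created once per App render,
// not once per ScrollTracker state change.
function App() {
  return (
    <ScrollTracker>
      <SlowComponent />
    </ScrollTracker>
  );
}
```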

Can you explain exactly the usage of useMemo and useCallback, and when we should avoid them?

Most front-end engineers know the basic usage of the memoization hooks: useMemo memoizes a value (useful when that value requires a heavy calculation), and useCallback memoizes a function. But many don't understand how they work under the hood or how to apply them correctly to stop heavy components from re-rendering — I used to apply them blindly myself. React compares props by value only for primitives; for objects and functions, it compares by reference. That's why object and function props need to be memoized. Note that memoizing a heavy value and passing it as a prop isn't enough on its own: the child still re-renders unless it is wrapped in React.memo. Similarly, if an object or a function goes into a useEffect dependency array, it must be memoized first, or the effect will re-run on every render. Finally, we shouldn't overuse these hooks: memoization is not free — it adds extra work (storing the previous value, comparing the dependency array, …) on every re-render. We should only use it when it is actually needed.
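The reference-vs-value comparison that makes this necessary can be shown in plain JavaScript — the same shallow comparison that React.memo and dependency arrays rely on:

```javascript
// Primitives are compared by value.
console.log(3 === 3);       // true
console.log('hi' === 'hi'); // true

// Objects and functions are compared by reference:
// two structurally identical values are NOT equal.
console.log({ a: 1 } === { a: 1 });   // false
console.log((() => 1) === (() => 1)); // false

// This is what happens on every re-render without useMemo/useCallback:
// the component function runs again and recreates these values.
function render() {
  return { style: { color: 'red' }, onClick: () => {} };
}
const first = render();
const second = render();
console.log(first.onClick === second.onClick); // false → a memoized child still re-renders
```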

What are React keys used for, and what happens if we don’t use them correctly?

React's key is the mechanism that lets React identify which element in a list has changed and therefore needs to be re-rendered. A key needs to be both unique and stable. The best practice is to use each item's id property as the key; using the array index is acceptable only if the list is never modified or reordered (and even then it is not recommended). Using keys incorrectly — for example, generating them with Math.random() — is a disaster for the website's performance: it defeats React's reconciliation mechanism, forces the entire list to re-mount on every render, and causes mysterious UI bugs that are really hard to trace.
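A quick illustration of the three options (the todos data and TodoItem component are hypothetical):

```jsx
// Good: stable, unique id from the data.
todos.map((todo) => <TodoItem key={todo.id} todo={todo} />);

// Risky: index only works if the list is never reordered or filtered.
todos.map((todo, index) => <TodoItem key={index} todo={todo} />);

// Disaster: a new key on every render forces React to unmount and
// remount every item, destroying its DOM and local state.
todos.map((todo) => <TodoItem key={Math.random()} todo={todo} />);
```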

How does React’s reconciliation process work?

Well, that is an interesting question that requires a deep understanding of how React works under the hood. Reconciliation is the mechanism that lets React compare the new virtual DOM against the previous one to figure out the specific changes that need to be made to the actual DOM. This is what makes React's updates efficient: only the changed parts are re-rendered instead of the whole DOM. React's reconciliation algorithm is based on two simple assumptions:

  • Two elements of different types produce two different trees. For example, if an element changes from a span to a div, React assumes everything inside has changed and rebuilds the entire subtree. If the element type is unchanged, React compares props and state instead; since that comparison is shallow, you need the memoization techniques above if you don't want the element to re-render when references change.

  • Keys provide stability hints. React asks us to provide a key for each item when rendering a list; with keys, React knows exactly which items in the list were changed, added, or removed, and which need no work at all.

Tell me what you know about CSR?

Client-side rendering is the strategy where the server sends only minimal HTML to the client, and the data fetching, templating, and routing required to display the page's content are handled by JavaScript executing in the browser. Almost all of the UI is generated in the browser, and the entire application is loaded on the first request. CSR enables single-page applications that support navigation without page refreshes and provide a great user experience: as the user navigates by clicking a link, no request is sent to the server to generate a new page — code runs on the client to change the view and data. While CSR provides a rich interactive experience after the initial load, it has the well-known drawbacks of hurting first-page load performance and SEO: users see a blank page until the large JavaScript bundle is fully downloaded and executed.

Tell me what you know about SSR?

SSR is one of the oldest methods of rendering web content: the HTML is rendered on the server and sent to the client browser as the response. With SSR, every request is treated individually and processed as a new request on the server. It helps reduce the JavaScript bundle size sent to the browser and improves the user experience by reducing the time until page content is visible. The trade-off is that we still have to wait for the data to be fetched and the page to be pre-rendered on the server before the response is sent, and then wait for the hydration process to finish before the user can interact with the UI.

Tell me what you know about SSG?

SSG addresses SSR's weakness by moving HTML rendering from request time to build time and serving the page as a static site. When a user requests a page, that page was already generated at build time and is served as a static file from a CDN or from cache. The file is sent to the browser immediately — no waiting for data fetching or HTML generation — so users see the page almost instantly. SSG is ideal for static content.

Tell me what you know about ISG?

As you know, SSG is only suitable for static content; it is limited when the content is dynamic or changes frequently. Incremental Static Generation (known in Next.js as Incremental Static Regeneration, ISR) was introduced as an upgrade to SSG. It re-fetches content in the background at a configured interval and applies it to the old version automatically, without the need for a full rebuild. Once regenerated, the new version of the static file becomes available and is served for subsequent requests.
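In Next.js, for example, this interval is expressed via revalidate — a sketch (the page path and API URL are hypothetical, and a real dynamic route would also need getStaticPaths):

```jsx
// pages/posts/[id].js — Next.js Pages Router sketch
export async function getStaticProps({ params }) {
  const res = await fetch(`https://api.example.com/posts/${params.id}`);
  const post = await res.json();
  return {
    props: { post },
    // Regenerate this page in the background at most once every 60s.
    revalidate: 60,
  };
}

export default function Post({ post }) {
  return <article>{post.title}</article>;
}
```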

What are Suspense and Streaming in React?

In the classic SSR implementation, the server waits until it has generated all the HTML for the page as one string and sends it to the client as one big chunk. That means you have to wait for all the data for the entire page to be ready before you can see, hydrate, or interact with anything.

With streaming SSR, the server first creates a Node.js stream, then uses renderToPipeableStream to render the React app chunk by chunk into that stream. It lets React "flush whatever parts of the HTML are ready and postpone the slow parts for later".

The chunk boundaries for this process are the components wrapped in `<Suspense>`. Wrapping a component in `<Suspense>` tells React not to wait for that component's data: React starts streaming HTML for the rest of the page and shows the fallback UI in that component's place.
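A minimal server-side sketch (Express-style handler; Header, Comments, and Spinner are hypothetical components):

```jsx
import { Suspense } from 'react';
import { renderToPipeableStream } from 'react-dom/server';

// Inside the app: tell React not to wait for <Comments />.
function Page() {
  return (
    <main>
      <Header />
      <Suspense fallback={<Spinner />}>
        <Comments />
      </Suspense>
    </main>
  );
}

function handleRequest(req, res) {
  const { pipe } = renderToPipeableStream(<Page />, {
    // Fires when everything OUTSIDE Suspense boundaries is ready.
    onShellReady() {
      res.setHeader('Content-Type', 'text/html');
      pipe(res); // start streaming; slow boundaries flush later
    },
  });
}
```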

What are Server Components?

The biggest issue with SSR is the interactivity gap: while SSR reduces the initial load time, users still have to wait — they can't interact with the page until the hydration process is finished. RSCs were introduced in React 18 to let you render part of the UI on the server ahead of time without sending any JavaScript for it to the browser, dramatically shrinking client bundles. RSCs are rendered on the server and sent to the browser as a special data format called the React Server Component Payload; the client-side React runtime then uses that payload to update its tree and hydrate the client components. The Container/Presentation pattern is a great fit for RSCs: the container component can be an RSC that fetches data and passes it as props to a presentational client component, so the fetching logic never ships to the browser.
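A sketch of that Container/Presentation split. File names and the db helper are hypothetical; the conventions follow the Next.js App Router, where components are Server Components by default:

```jsx
// ProductListContainer.jsx — a Server Component. Its code, and the
// db client it imports, never reach the browser.
import { db } from './db';
import ProductList from './ProductList';

export default async function ProductListContainer() {
  const products = await db.query('SELECT * FROM products');
  return <ProductList products={products} />;
}

// ProductList.jsx — a Client Component: interactive, ships to the browser.
'use client';
import { useState } from 'react';

export default function ProductList({ products }) {
  const [filter, setFilter] = useState('');
  return (
    <>
      <input value={filter} onChange={(e) => setFilter(e.target.value)} />
      <ul>
        {products
          .filter((p) => p.name.includes(filter))
          .map((p) => (
            <li key={p.id}>{p.name}</li>
          ))}
      </ul>
    </>
  );
}
```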

Will React Server Components replace the SSR?

No — RSCs do not replace SSR; they complement it. In practice, we can still do SSR for the initial render while many components become RSCs, which significantly reduces the JavaScript we need to ship to the browser.

There are a few differences between SSR and RSCs:

  • The code of an RSC is never sent to the browser, while in some SSR implementations the components' code is still included in the JavaScript bundle and sent to the browser.

  • RSCs enable access to the back end from anywhere in the tree. In SSR implementations such as Next.js's Pages Router, data has to be fetched via getServerSideProps, which only works at the page level.

  • RSCs may be refetched in the background while maintaining the client-side state of the tree. This is possible because the transport mechanism is richer than plain HTML, allowing React to refetch server-rendered parts without blowing away client state such as search input text or focus.

How does React insert real data into a streaming component?

For the first render, React immediately sends the fallback UI — a loading skeleton, for example — with an ID attached to that element, while the actual content is still loading on the server. Once it's ready, React includes it in the ongoing HTML stream, along with a lightweight inline script that handles the magic: it replaces the old placeholder with the new content, matched by the ID attached earlier. The transition feels smooth and effortless to the user — no full re-renders, no flicker, just progressive enhancement. This is part of React's internal streaming process, not something we handle manually, but understanding it connects the dots between the fallback we see on screen and the real content that seamlessly replaces it.

What are hydration, progressive hydration, and selective hydration? How does selective hydration solve the pain points of traditional hydration?

In traditional SSR before React 18, when the user first sees the page, they can't interact with it — clicking a button does nothing. The user has to wait for the JavaScript bundle for the whole page to be downloaded and executed before the page responds to interaction. Hydration is the process in which the browser attaches event handlers and other React internals to the server-rendered DOM, making the page interactive. This process takes a while, so users have to wait.

Progressive hydration is an approach to resolving that pain point. Instead of hydrating the entire page at once, we hydrate DOM nodes individually over time, which makes it possible to request only the minimum necessary JavaScript. It delays hydration of the less important parts of the page, such as parts outside the first viewport. On the downside, progressive hydration may not suit a highly dynamic app where every element on the screen needs to be interactive on load.

Selective hydration is a hydration mechanism introduced in React 18, alongside streaming. Rather than hydrating strictly in DOM-tree order, React prioritizes what matters most to the user — the parts they touch. This creates an illusion of instant hydration, because interactive components are hydrated first even if they sit deeper in the component tree. For example, a blog component and a header component can each stream from the server independently. By default, React starts with the first Suspense boundary it encounters in the tree — the header component in this case. But imagine the user clicks on the blog section before it has been hydrated: instead of waiting for the header to finish, React intercepts the event during the capture phase and synchronously hydrates the blog section first, so the interaction works immediately.

How do you structure a large-scale React application for scalability and maintainability?

That is a big question that takes a lot of time to answer fully. There are a few principles I consistently follow when setting up a large-scale React application:

  • Firstly, I organize files not just by technical role, but by domain responsibility. Organizing files by technical role might be fast at the start, but as the application grows, updating something like the cart feature means touching multiple folders — components/, stores/, utils/ — which slows you down and creates mental overhead. When components are grouped by domain instead, everything related to a feature lives in one folder: features become self-contained, refactoring becomes safer, and as the team grows, each squad can own a domain without tight coupling or conflicts. Clear boundaries make parallel development much easier — teams can refactor their own features without worrying about breaking others.

  • I always prefer to create a dedicated common module for all generic components and utilities used across different pages and modules. This helps avoid redundancy and promotes reusability across the application.

  • For each module, I apply the Locality of Behaviour principle by keeping things as close as possible to where they are used: child components, hooks, and utils live near the feature that uses them. This improves readability and maintainability, because when a developer works on a feature, all the related files are in close proximity.

  • I also apply Layered Architecture with the Separation of Concerns Principle (refer to the next questions for better explanation)

  • Furthermore, I also apply the Atomic Design Pattern for managing the growing complexity of component structures (refer to the next question)
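A domain-oriented layout like the one described in the first point might look like this (folder and file names are illustrative):

```
src/
  features/
    cart/
      components/CartItem.jsx
      hooks/useCart.js
      api/cartService.js
      index.js            // the feature's public interface
    checkout/
      ...
  common/                 // shared, generic building blocks
    components/Button.jsx
    utils/formatPrice.js
```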

Can you tell me more about the Layered Architecture with Separation of Concerns?

When we design a Front-end application, it is better to split the application into multiple layers for easier management and to avoid causing bugs in unrelated parts. Most scalable Front-end systems can be thought of as layers, each with a purpose. Each layer has its own responsibilities and knows only as much as it needs. This separation creates clarity, reduces bugs, and increases developer speed. Commonly, we could split an application into 5 layers: UI Layer, Behavior Layer, State Management Layer, Services Layer, and Utilities/Core Logic Layer.

  • The UI Layer should be predictable and reusable. No useEffect, no useState, just props in and UI out. It knows nothing about where data comes from, just receives props and renders markup.

  • The Behavior Layer is where the logic lives. It lives in a custom hook and is responsible for local states, effects, and interaction logics. We can plug the same behavior for different UIs without duplicating a line of logic.

  • The State Management Layer is where we centralize shared states. When our app grows, we need a place to manage our shared state to be shared across components. We can use React Context for small apps and other popular libraries, such as Redux and Zustand, for large apps

  • The Service Layer is where we are talking with the outside world. This layer is our interface with APIs, storage, and anything that lives outside of the app.

  • The Utilities/Core Logic Layer is where shared helpers, validation functions, formatters, and pure logic reside.
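A condensed sketch of how the UI and Behavior layers separate, with the Service layer behind them (all names are illustrative):

```jsx
import { useEffect, useState } from 'react';
import { fetchProducts } from './productService'; // Service layer

// Behavior layer: a custom hook owns state, effects, interaction logic.
export function useProducts() {
  const [products, setProducts] = useState([]);
  const [loading, setLoading] = useState(true);

  useEffect(() => {
    fetchProducts()
      .then(setProducts)
      .finally(() => setLoading(false));
  }, []);

  return { products, loading };
}

// UI layer: props in, markup out — no hooks, no data fetching.
export function ProductListView({ products, loading }) {
  if (loading) return <p>Loading…</p>;
  return (
    <ul>
      {products.map((p) => (
        <li key={p.id}>{p.name}</li>
      ))}
    </ul>
  );
}

// Wiring the layers together:
export function ProductList() {
  const { products, loading } = useProducts();
  return <ProductListView products={products} loading={loading} />;
}
```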

What is the Atomic Design Pattern?

When we build scalable applications in React, we often encounter challenges in managing the growing complexity of component structures. The Atomic Design Pattern has emerged as a powerful methodology for organizing and structuring applications. This pattern suggests breaking down the interface into multiple fundamental building blocks, promoting a more modular and scalable approach to application design.

The Atomic Design Methodology breaks down design into 5 distinct levels:

  • Atoms: These are basic building blocks of an application, such as inputs, buttons, etc.

  • Molecules: Molecules are groups of atoms that are combined together to form a functional unit, such as a search feature that includes a label, an input, and a submit button.

  • Organisms: Organisms are relatively complex UI components composed of groups of molecules and atoms, such as a header of a page.

  • Templates: Templates are page-level objects that place components into a layout. They usually consist of groups of organisms, representing a complete layout.

  • Pages: Pages are specific instances of templates that show what a UI looks like with real data.
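The first three levels, sketched as composition (component names are illustrative):

```jsx
import { useState } from 'react';

// Atoms: basic building blocks.
function Button({ children, ...props }) {
  return <button {...props}>{children}</button>;
}

function TextInput(props) {
  return <input type="text" {...props} />;
}

// Molecule: atoms combined into one functional unit.
function SearchBox({ onSearch }) {
  const [query, setQuery] = useState('');
  return (
    <form onSubmit={(e) => { e.preventDefault(); onSearch(query); }}>
      <TextInput value={query} onChange={(e) => setQuery(e.target.value)} />
      <Button type="submit">Search</Button>
    </form>
  );
}

// Organism: molecules + atoms forming a page section.
function Header({ onSearch }) {
  return (
    <header>
      <img src="/logo.svg" alt="Logo" />
      <SearchBox onSearch={onSearch} />
    </header>
  );
}
```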

The Atomic Design Pattern aligns perfectly with React’s component-based architecture and brings a lot of benefits, such as promoting reusability, ensuring consistency, etc. While it provides many benefits, we should be cautious not to over-engineer and introduce unnecessary complexity.

How many React patterns do you know?

Well, there are several popular React patterns we can apply to make our code more optimized, readable, and maintainable: the Higher-Order Component (HOC) Pattern, the Render Props Pattern, the Container/Presentation Pattern, the Hooks Pattern, and the Compound Component Pattern.

  • A HOC is a function that receives a component and returns a new component with certain shared logic applied to it. This pattern lets us apply the same logic — for example, a shared loading behavior — across multiple components without duplicating it.

  • Another way of making components more reusable is the Render Props Pattern. A render prop is a prop whose value is a function that returns JSX; the component itself does not render anything of its own — it calls the render prop.

  • One way to enforce separation of concerns in React is the Container/Presentation Pattern, which separates the view from the logic. For example, to display a list of products, a container component fetches and formats the data, then passes it as props to a presentational component that renders it.

  • Hooks were released in React 16.8. They are not strictly a design pattern, but they play a vital role in React applications. The traditional ways to share code across components were HOCs and render props; both patterns are still valid, but Hooks have mostly replaced them.

  • The Compound Component Pattern is useful when multiple components belong together through a shared state. It lets you build a set of components that work together to perform a single task — for example, a custom select component composed of select items and other sub-components.
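A minimal compound-component sketch that shares state through Context (names are illustrative):

```jsx
import { createContext, useContext, useState } from 'react';

const SelectContext = createContext(null);

function Select({ children, onChange }) {
  const [selected, setSelected] = useState(null);
  const select = (value) => {
    setSelected(value);
    onChange?.(value);
  };
  return (
    <SelectContext.Provider value={{ selected, select }}>
      <div role="listbox">{children}</div>
    </SelectContext.Provider>
  );
}

// Sub-component: reads the shared state via context — no prop drilling.
Select.Option = function Option({ value, children }) {
  const { selected, select } = useContext(SelectContext);
  return (
    <div
      role="option"
      aria-selected={selected === value}
      onClick={() => select(value)}
    >
      {children}
    </div>
  );
};

// Usage:
// <Select onChange={console.log}>
//   <Select.Option value="a">Option A</Select.Option>
//   <Select.Option value="b">Option B</Select.Option>
// </Select>
```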

How can you structure state management architecture for a large-scale application?

Not all state is the same, so instead of treating it as one thing, we should classify it into layers, each with its own natural place in the application.

  • Local state should stay local, kept as close as possible to where it is used. A simple open/close state for a modal, for example, should be local — it doesn't belong in Redux or Context. When the component dies, its local state dies with it.

  • Server state is anything that originates from external sources — API responses, data from the backend. It comes with its own complexities (staleness, retries, invalidation), so it should be treated as a dedicated layer. That is exactly why tools like React Query, Apollo, and SWR exist: they handle caching, refetching, and synchronization for us, letting client state remain cleanly separate from server data.

  • The URL is a state layer in its own right: we can encode queries and parameters that the UI reflects, so when the URL changes, the UI changes. These days, many routers provide built-in hooks like useSearchParams for managing URL state, or you can reach for a dedicated library such as nuqs.

  • Not all state can stay isolated; sometimes data has to move beyond a single component. That's where the shared-state layer comes in — but shared client state needs boundaries. A simple principle helps: "State should rise only as high as its consumers require." If state is shared just between siblings, prop drilling or a small Context is perfectly fine; if it cuts across multiple features or pages, it earns its place in a dedicated store like Redux or Zustand.

By drawing these lines, the decision of how to manage a state becomes clearer.

Can you explain what React Context is and how it works?

As a React app grows, the amount of state grows with it. React uses one-way data flow — data only goes down the component tree — which makes passing state between components at the same level genuinely hard. Managed badly, this leads to uncontrolled data flow, prop-drilling issues, and harder debugging. React Context is a built-in mechanism that creates a kind of "global store" for a specific part of the application: it lets us pass data through the component tree without manually threading props through intermediate components. However, while Context is a great choice for providing access to data, it has limitations: it does not provide a standardized way to modify state, and it becomes harder to track where state changes occur once multiple contexts wrap different parts of the project.

Can you list a few ways to optimize React Context?

React Context lets us pass data from a component to deeply nested components without threading props through every level, which avoids re-rendering the components in between. However, it can be dangerous if not managed properly: every component that consumes a Context Provider re-renders whenever any state in that Provider updates — even components that don't consume the changed part — and no standard memoization technique on the consumer side can prevent it. Below are some techniques we can use to optimize the usage of React Context:

  • To minimize the React Context re-renders, we should always memoize the value we pass to the Context Provider.

  • We can split from one provider to multiple providers to further minimize re-renders. Switching from useState to useReducer can help with this.

  • Even though we don’t have the proper selectors for Context like Redux, we can use the trick by combining the Higher-Order Function and React.memo.
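The first two techniques, sketched (the FormProvider example is illustrative):

```jsx
import { createContext, useMemo, useState } from 'react';

const FormContext = createContext(null);

function FormProvider({ children }) {
  const [values, setValues] = useState({});

  // Memoize the value object: without useMemo, a brand-new object is
  // created on every render, so every consumer re-renders every time.
  const value = useMemo(() => ({ values, setValues }), [values]);

  return (
    <FormContext.Provider value={value}>
      {children}
    </FormContext.Provider>
  );
}

// Splitting providers further: separate the state from the (stable)
// updater, so components that only dispatch changes never re-render
// when the state itself updates.
const FormStateContext = createContext(null);
const FormApiContext = createContext(null);
```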

What is Redux, and how does it work?

Redux is a state management library that implements ideas inspired by the Flux architecture. It ensures a one-way data flow that makes the application state flow more predictable and easier to track.

Redux introduces a more structured approach with clearly defined elements:

  • The Store holds the entire application state in a single object and manages dispatching actions. The logic for how state changes lives in reducers, not in the store itself.

  • Actions are plain JavaScript objects that describe what happened in the application.

  • Dispatch is the function that sends actions to the store and starts the update process.

  • Reducers are pure functions that take the previous state and an action and return the next state.

  • Selectors are functions that read or derive specific pieces of data from the Redux state.

And the standard Redux flow is:

The user interacts with the view, then creates an action by clicking on a button → The action is dispatched to send the action to the store → The store receives the action and passes it to the reducers → The reducer receives the action and previous state and returns a new state → The store replaces the old state with the new state returned by the reducers → Components use selectors to read the updated state and re-render.
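The flow above can be sketched in a few lines. This is an illustrative toy, not the real redux package's API, but the elements map one-to-one onto the definitions above:

```typescript
// Minimal sketch of the Redux flow: a store holds state, dispatch routes
// actions through a pure reducer, and a selector reads the resulting state.
type Action = { type: 'increment' } | { type: 'add'; payload: number };

function counterReducer(state: number = 0, action: Action): number {
  switch (action.type) {
    case 'increment':
      return state + 1;
    case 'add':
      return state + action.payload;
    default:
      return state;
  }
}

function createStore<S, A>(reducer: (state: S | undefined, action: A) => S) {
  // Initialize state by running the reducer with an unknown action
  let state = reducer(undefined, { type: '@@init' } as unknown as A);
  const listeners: Array<() => void> = [];
  return {
    getState: () => state,
    dispatch: (action: A) => {
      state = reducer(state, action); // reducer returns the next state
      listeners.forEach((l) => l());  // notify subscribed views
    },
    subscribe: (listener: () => void) => listeners.push(listener),
  };
}

const store = createStore(counterReducer);
store.dispatch({ type: 'increment' });
store.dispatch({ type: 'add', payload: 4 });
const selectCount = (state: number) => state; // trivial selector
// selectCount(store.getState()) === 5
```

Note that the store never contains update logic itself; all state transitions live in the pure reducer, which is what makes the flow predictable and easy to test.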

What are the differences between React Context and Redux?

Well, React Context is a built-in feature of React, while Redux is an external library (maintained by its own team) inspired by the Flux architecture. Both provide abilities for managing global state across the application. React Context is more suitable for a small app, or when we just need to share state across multiple components in the hierarchy. On the other hand, Redux is the preferred choice for managing state in a large application where we need to share state across multiple features or pages. Additionally, React Context is only good for providing access to data and does not provide a standardized way to modify state; it also becomes harder to track where state changes occur once multiple Contexts wrap different parts of the project. In contrast, Redux introduces a more structured approach with clearly defined elements (Store, Reducers, Actions, Dispatch, Selectors) and ensures one-way data flow that makes the state flow of the application more predictable and easier to track.

Can you explain the approach for managing complex global state?

For managing complex global state, where React Context is not enough, I would look into state management libraries such as Redux, Zustand, Jotai, and MobX. Honestly, I haven’t joined any business projects that applied those libraries; I used Redux and MobX in pet projects for learning a few years ago, and I have worked mostly with React Context because it was enough. I do know the Flux architecture and the core data flow of Redux. There is a principle I have read somewhere: “State should rise only as high as its consumers require.“ We should keep the boundaries clear and avoid both over-engineering and unnecessary sprawl.

What is useEffect? How do you use it in the project?

At its core, useEffect is a powerful hook that runs after the browser paints. It allows us to perform side effects that go beyond pure render logic, such as API calls and DOM mutations. useEffect is about keeping React components in sync with something outside of React, so if we are using it to synchronize one piece of React state with another, we are doing it wrong. We should treat useEffect as the last option, not the first.

What do you care about data fetching on the client?

Nowadays, instead of writing a lot of code for fetching data over the network and covering aspects such as error handling, caching, and canceling requests, we have several libraries that handle almost everything for us, like Axios, SWR, and TanStack Query. But I believe the choice of technology doesn’t matter much here, and no library can improve the performance of the app by itself. I focus on the fundamentals of data fetching and data orchestration patterns and techniques: I am aware of browser limits on parallel requests, of prefetching critical resources even before React is initialized, and of techniques for handling waterfalls and race conditions.
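One of those techniques, guarding against race conditions when overlapping requests resolve out of order, can be sketched as a "latest request wins" wrapper. The helper name is hypothetical, and a fake fetcher with artificial delays stands in for a real network call:

```typescript
// "Latest request wins": tag each call with an increasing id and drop any
// result whose id is no longer the latest, so a slow, stale response can
// never overwrite the result of a newer request.
function createLatestWins<T, A>(fetcher: (arg: A) => Promise<T>) {
  let latest = 0;
  return async (arg: A, onResult: (result: T) => void): Promise<void> => {
    const id = ++latest;                 // tag this call
    const result = await fetcher(arg);
    if (id === latest) onResult(result); // ignore superseded calls
  };
}

// Example: request 1 resolves after request 2, but only request 2's result
// is ever delivered to the UI callback.
const delays: Record<number, number> = { 1: 30, 2: 5 };
const fakeFetch = (id: number) =>
  new Promise<number>((resolve) => setTimeout(() => resolve(id), delays[id]));

const search = createLatestWins(fakeFetch);
const shown: number[] = [];
void Promise.all([
  search(1, (r) => shown.push(r)),
  search(2, (r) => shown.push(r)),
]).then(() => console.log(shown)); // logs [ 2 ]: the stale result for request 1 was dropped
```

Libraries like SWR and TanStack Query implement variations of this internally; the point is that the fix is a small ordering guard, not a framework feature.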

What is the React Server Action?

A React Server Action is a way to write server-side code (the kind usually handled by API Routes) right inside a React component file. In Next.js, we just need to add the “use server“ directive at the top of the file to tell Next.js that these functions should only run on the server. Server Actions are not meant to replace API Routes entirely; they are a game-changer for tasks deeply tied to the UI.

How does Next.js Server Actions work under the hood?

Firstly, to mark functions as Next.js Server Actions, we only need to put a “use server“ directive at the top of the file. By doing so, we tell Next.js that every exported function in that file is a Server Action. When we call a Server Action from a client-side component, it is not just a regular function call; Next.js does some magic behind the scenes, like an RPC (Remote Procedure Call). The flow is: the client code calls the Server Action function → Next.js serializes the arguments, converting them into a format that can be sent over the network → a POST request is fired off to a special Next.js endpoint → the server receives the request, deserializes the arguments, and executes the code → when the processing on the server is finished, the server serializes the return value and sends it back to the client → the client receives the response, deserializes it, and automatically re-renders the relevant parts of the UI. The best part is no more manually refetching data or updating state after a mutation; Next.js automatically re-renders the parts that need to change.
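A toy model makes that serialize → POST → execute → deserialize loop concrete. All names here are hypothetical, and plain JSON stands in for React's own serialization format and hidden endpoint:

```typescript
// Toy model of the Server Action RPC round trip (not Next.js internals):
// arguments cross the wire serialized, the server executes the action,
// and the result comes back serialized the same way.
function serialize(args: unknown[]): string {
  return JSON.stringify(args);
}
function deserialize(payload: string): unknown[] {
  return JSON.parse(payload);
}

// "Server side": the action implementation
function addToCart(productId: string, qty: number) {
  return { ok: true, productId, qty };
}

// Simulated endpoint: receive serialized args, run the action, serialize the result
function handleActionRequest(body: string): string {
  const [productId, qty] = deserialize(body) as [string, number];
  return JSON.stringify(addToCart(productId, qty));
}

// "Client side": the call looks local, but everything crosses the wire format
const response = handleActionRequest(serialize(['sku-42', 2]));
// response === '{"ok":true,"productId":"sku-42","qty":2}'
```

This is also why Server Action arguments must be serializable: anything that can't survive the round trip (functions, class instances, DOM nodes) can't be passed.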

Can you list all the benefits of the Server Action?

As we know, managing the communication between the client and the server is a real pain. Server Actions resolve that pain by letting us put server-side code right into the components. That makes things simpler: no more API routes, no more re-fetching after a mutation. Server Actions can also boost performance by reducing back-and-forth trips between the client and the server. Security is another win, since sensitive information like database queries and keys stays on the server.

Can you list all the downsides of using the Server Action?

While Server Actions bring a lot of benefits, they also have some downsides, like any other technology. Below are some of them:

  • One of the biggest issues is the potential for tight coupling. By allowing server logic to live right inside the component, it is easy to end up with a less modular, harder-to-maintain codebase. Changing the server logic might force the front-end to be updated.

  • The second one is the learning curve. While understanding basic Server Actions is simple, learning advanced topics like serialization, caching, and error handling takes time.

  • The third issue is debugging. When something goes wrong with a Server Action, we cannot just rely on the Network tab of Chrome DevTools; we need to get comfortable with server-side debugging techniques.

  • The last issue is performance if we overuse Server Actions. Each Server Action is a network request; too many network requests can make things worse.

Server Actions are a powerful tool, but like any other tool, they can be misused. They are best for data mutations and operations where the server logic is tightly coupled with the UI and needs to live on the server.

When should we use Server Actions?

Server Actions align perfectly with tasks like performing data mutations. They allow running server operations and database mutations without exposing credentials and sensitive information to the client. They also dramatically reduce and simplify the code by removing the need to write API routes for these operations.

What are the potential pitfalls with Server Actions?

While Server Actions bring us a lot of benefits, using them everywhere without understanding how they work can hurt our application’s performance. Here are some of the mistakes we make when we use Server Actions in the wrong way:

  • For client-side data fetching, Server Actions might not be the best option. They always use POST requests, so the responses can’t be cached the way GET requests can. Furthermore, Server Actions shouldn’t be used for fetching data in Server Components, because doing so delays data availability by causing extra network requests.

  • Even though Server Actions handle server-side logic, under the hood they are just another API route, and Next.js handles the POST requests automatically. Because of that, Server Actions do not hide the requests, and anyone can replicate them using a REST client. That is why we should apply proper authentication and authorization checks when working with sensitive data.

  • Using Server Actions might be convenient, but every action comes at a cost. We should avoid using them everywhere and think before reaching for them. For tasks that can easily be handled on the client side, keep them on the client side.

  • When we need our API to be accessible to multiple clients, classic API routes might be a more appropriate solution. Imagine we need the same logic for both a web app and a mobile app: duplicating it as a Server Action duplicates the work and complicates maintenance.

Can you list some common mistakes when applying Server Actions?

Server Actions help simplify our code and reduce boilerplate, and they look like the future of data mutations in Next.js. But if we don’t use them in the right way, they can cause serious problems for our application’s performance.

  • The most common mistake is not using the useTransition hook for the pending state. The user clicks the submit button again and again, the Server Action is running, but the UI provides zero feedback. We need to wrap our action inside useTransition to keep the interface responsive and stop users from spamming the Submit button.

  • The second mistake is skipping validation on the client side. It is such a waste of resources to trigger a Server Action and make a full network round trip just to tell the user they are missing a required field. It’s better to validate the input before passing it to the Server Action.

  • The third mistake is forgetting to call revalidateTag or revalidatePath after the Server Action mutates data. This mistake doesn’t slow performance, but it leaves the user looking at stale data after the mutation. Calling those functions tells Next.js to refetch the data that changed so the user sees the new data.

  • The fourth mistake is importing a large client-side library into a Server Actions file. Server Actions are server-side code; they run in a Node.js environment, and importing a browser-only library there can throw errors. The solution is to choose a server-side-compatible library.

What is a Route Group in Next.js, and what is it used for?

A Route Group is a folder wrapped in parentheses (e.g., (marketing)) in the App Router. The parentheses signal to Next.js to exclude that segment from the URL path, so the folder exists purely in the file system. This lets you organize routes and share layouts without polluting the URL structure — for example, (marketing)/about/page.tsx still resolves to /about, not /marketing/about.

What are the main use cases?

  1. Shared layouts without URL pollution — wrap related routes in a group with a common layout.tsx. For example, (auth) routes share a minimal layout with no nav, while (app) routes share a full shell with sidebar and header — all without those group names appearing in the URL.

  2. Code organization — group routes by feature, team, or domain (e.g., (shop), (blog), (admin)) purely for developer clarity. This is especially valuable in large codebases where a flat app directory becomes hard to navigate.

  3. Multiple root layouts — each group can define its own root layout.tsx, giving entirely different page shells to different sections of the app. This avoids messy conditional rendering inside a single root layout.

What is unstable_cache in Next.js?

unstable_cache is a Next.js utility that lets you cache the result of any async function — not just fetch calls. It wraps a function and stores its return value in the Next.js Data Cache, with support for cache tags, revalidation intervals, and on-demand invalidation via revalidateTag. The unstable_ prefix signals it's a stable-enough API but still subject to change before a finalized name is given.

import { unstable_cache } from 'next/cache';

const getCachedUser = unstable_cache(
  async (id: string) => db.user.findUnique({ where: { id } }),
  ['user'],          // cache key parts
  { revalidate: 60, tags: ['users'] }
);

Why is it necessary for non-fetch data sources?

Next.js automatically caches and deduplicates fetch requests in Server Components. However, database queries, ORM calls, third-party SDKs, filesystem reads, or any non-HTTP data access bypass that cache entirely — they re-execute on every request by default. unstable_cache fills that gap by bringing those data sources under the same caching model as fetch, giving you equivalent control over revalidation without changing your data-fetching layer.

Where should context providers be placed in the App Router, and why can't they go directly in a Server Component?

Context providers rely on createContext and React's runtime context API, which are client-only features. A Server Component cannot render a provider directly because Server Components never run on the client, where context values are actually read at render time; placing a provider (or calling createContext) in a Server Component throws an error. Instead, you must extract the provider into a dedicated 'use client' wrapper component, then use that wrapper inside your layout.

// app/providers.tsx
'use client';
import { ThemeProvider } from './ThemeContext';

export function Providers({ children }: { children: React.ReactNode }) {
  return <ThemeProvider>{children}</ThemeProvider>;
}

Where exactly in the App Router tree should the Providers wrapper be placed?

It belongs in the root layout.tsx (or the closest layout that covers all routes needing that context), wrapping {children}. This keeps the provider as high as needed but no higher — following the principle of placing context at the lowest common ancestor.

// app/layout.tsx  (Server Component)
import { Providers } from './providers';

export default function RootLayout({ children }: { children: React.ReactNode }) {
  return (
    <html lang="en">
      <body>
        <Providers>{children}</Providers>
      </body>
    </html>
  );
}

What is revalidatePath and how does it differ from revalidateTag?

revalidatePath purges the cache for a specific URL path, causing that page to be re-rendered on the next request. revalidateTag purges all cached data entries carrying a specific tag, regardless of which pages consume them. The core difference is: revalidatePath thinks in terms of pages, revalidateTag thinks in terms of data.

revalidatePath('/blog/my-post');   // purges the /blog/my-post page cache
revalidateTag('contentful-post-123'); // purges the data tagged across any page that uses it

When should you use revalidateTag over revalidatePath?

Prefer revalidateTag when the same data appears on multiple pages — a product price shown on a listing page, a detail page, and a related items widget all need to update together. Using revalidatePath would require knowing and calling every affected URL manually, which is brittle and doesn't scale. revalidateTag lets you invalidate the data once and every page consuming it gets fresh content on next request.

Use revalidatePath when the invalidation is inherently page-scoped — e.g., a user submits a form that only affects one specific page's output, or you want to force a layout re-render that isn't tied to a tagged fetch. It's simpler and more explicit when the data-to-page relationship is 1:1.

What is revalidateTag and what problem does it solve?

revalidateTag is a Next.js server-side function that purges all cached data associated with a specific tag on demand. Without it, cached data only refreshes on a time-based interval (revalidate: 60). That model is too blunt for real-world apps — if a user updates their profile, you want that data fresh immediately, not after 60 seconds. revalidateTag lets you invalidate precisely the right cache entries the moment the underlying data changes.

// app/actions/updateUser.ts
'use server';
import { revalidateTag } from 'next/cache';

export async function updateUser(id: string, data: UserData) {
  await db.user.update({ where: { id }, data });
  revalidateTag('users'); // purges all cache entries tagged 'users'
}

How does it work end-to-end with fetch and unstable_cache?

Tags are assigned at the data-fetching layer and consumed by revalidateTag at the mutation layer:

  • With fetch: pass { next: { tags: ['users'] } } in the options.

  • With unstable_cache: pass { tags: ['users'] } in the config object.

When revalidateTag('users') is called — inside a Server Action, Route Handler, or middleware — Next.js marks every cached entry carrying that tag as stale. The next request for that data triggers a fresh fetch and re-populates the cache.

Why did the Page Router cause 30-minute rebuilds for small Contentful changes?

In the Page Router with static rendering (getStaticProps), pages are pre-built at deploy time into static HTML. Any content change in Contentful required a full next build to regenerate every static page — even if only one blog post title changed. While Page Router did offer ISR (revalidate: N), it was time-based and page-scoped, meaning you still had to wait for the interval to expire or trigger a manual redeploy. There was no efficient way to say "only rebuild the pages affected by this specific content change."

How does the App Router with revalidateTag solve this?

With the App Router, pages are Server Components that fetch and cache data at runtime — not at build time. You tag your Contentful fetches with meaningful identifiers, then configure Contentful webhooks to call a Route Handler that fires revalidateTag the moment a content entry is published:

// app/api/revalidate/route.ts
import { revalidateTag } from 'next/cache';
import { NextRequest } from 'next/server';

export async function POST(req: NextRequest) {
  const { contentType, entryId } = await req.json();
  revalidateTag(`contentful-${contentType}-${entryId}`);
  return Response.json({ revalidated: true });
}
// Contentful fetch tagged at the data layer
const post = await fetch(`https://cdn.contentful.com/...`, {
  next: { tags: [`contentful-blogPost-${entryId}`] },
});

Now when a Contentful editor publishes a change, the webhook fires instantly, revalidateTag purges only that entry's cache, and the next visitor gets fresh content within milliseconds — no rebuild, no waiting, no CI pipeline involved.

What happens internally the moment revalidateTag is called?

When revalidateTag('users') is called, Next.js does not immediately delete or refetch the cached data. Instead, it writes a staleness marker against all cache entries carrying that tag in the Next.js Data Cache (an in-memory + persistent cache layer managed by the Next.js server runtime). The cached data is still physically there — it's just flagged as stale. No network requests are made, no pages are re-rendered at this point. It's a near-instant, low-cost operation.

What happens on the next incoming request after the tag is invalidated?

When a request comes in for a page that depends on the invalidated tag, Next.js detects the stale marker during rendering and re-executes the original data-fetching function (the fetch or unstable_cache wrapped function) to get fresh data. The result is stored back into the Data Cache with a new freshness timestamp, replacing the stale entry. The page is then re-rendered with the fresh data and the new HTML is served — and optionally written to the Full Route Cache if the route is statically cacheable. Subsequent requests hit the warm cache again until the next invalidation.
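The two-phase behaviour described above, cheap synchronous invalidation followed by a lazy refetch on the next read, can be sketched as a toy cache. This is illustrative only, not Next.js internals:

```typescript
// Toy model of tag-based invalidation: invalidateTag only flags matching
// entries as stale; the actual refetch happens lazily on the next read.
type Entry<T> = { value: T; tags: string[]; stale: boolean };

class DataCache<T> {
  private entries = new Map<string, Entry<T>>();

  set(key: string, value: T, tags: string[]): void {
    this.entries.set(key, { value, tags, stale: false });
  }

  invalidateTag(tag: string): void {
    // Cheap and synchronous: write a staleness marker, no refetching here
    for (const entry of this.entries.values()) {
      if (entry.tags.includes(tag)) entry.stale = true;
    }
  }

  async get(key: string, fetcher: () => Promise<T>): Promise<T> {
    const entry = this.entries.get(key);
    if (entry && !entry.stale) return entry.value; // warm cache hit
    const fresh = await fetcher();                 // stale or missing: re-run the fetch
    this.set(key, fresh, entry?.tags ?? []);       // re-populate with a fresh entry
    return fresh;
  }
}
```

A warm read returns the cached value instantly; after invalidateTag, the next read re-executes the fetcher and re-populates the cache, mirroring the stale-marker-then-lazy-refetch flow above.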

How did static pages work in the Page Router, and what changes in the App Router?

In the Page Router, static pages were explicitly opted into with getStaticProps — Next.js pre-rendered them to HTML at build time and served them as flat files. In the App Router, every Server Component is statically rendered by default — there is no getStaticProps. If a route has no dynamic behavior (no cookies, headers, search params, or uncached data fetches), Next.js automatically pre-renders it to static HTML at build time without any configuration. The mental model shifts from "opt in to static" to "static unless you introduce something dynamic."

What determines whether an App Router page stays static or becomes dynamic after migration?

Next.js inspects the route at build time and checks for dynamic signals:

  • cookies() or headers() → forces dynamic rendering

  • the searchParams prop → forces dynamic rendering

  • fetch with { cache: 'no-store' } → forces dynamic rendering

  • export const dynamic = 'force-dynamic' → forces dynamic rendering

If none of these are present and all fetches are cached, the route is statically rendered and written to the Full Route Cache as HTML. After migrating from Page Router, pages that previously used getStaticProps with no user-specific data will automatically become static in the App Router — often with zero extra configuration needed.

What is the key behavioral difference to watch for post-migration?

In the Page Router, getStaticProps pages were rebuilt only on redeploy or ISR interval. In the App Router, static pages cached in the Full Route Cache can be invalidated at runtime via revalidateTag or revalidatePath — without a redeploy. This means post-migration your static pages are no longer tied to the build pipeline, which is a significant operational improvement but also means you need to be intentional about cache invalidation strategy. A page that was "always fresh on deploy" now needs explicit revalidation logic if its data changes between deploys.

Does the App Router with Server Components replace SSG?

Functionally, yes — but the underlying mechanism is different. In the Page Router, SSG was an explicit pattern you opted into with getStaticProps + getStaticPaths. In the App Router, static generation is the default behavior: if a Server Component route has no dynamic signals, Next.js automatically pre-renders it to static HTML at build time and caches it in the Full Route Cache. You get the same end result — pre-built HTML served at the edge — without writing any SSG-specific APIs. The concept of SSG didn't disappear; it was absorbed into the default rendering model.

What does the App Router give you that SSG couldn't?

SSG was an all-or-nothing commitment at the page level — the entire page was either static or dynamic. The App Router introduces per-component granularity. A single page can have a statically rendered Server Component shell (cached) wrapping a dynamically rendered section using <Suspense>, with client-interactive islands via 'use client'. This means you can statically cache the expensive parts (navigation, product listings) while keeping user-specific parts (cart count, personalized recommendations) dynamic — something SSG fundamentally couldn't express without client-side hydration workarounds.

When would you still think in "SSG terms" in the App Router?

When dealing with dynamic route segments at scale — e.g., /blog/[slug] with thousands of posts. App Router still supports generateStaticParams (the successor to getStaticPaths) to pre-render known paths at build time. Without it, those pages render on first request and get cached afterward, meaning the very first visitor to a new URL pays the render cost. For high-traffic or SEO-critical pages, pre-generating them at build time via generateStaticParams is still the right call — so SSG as a strategy remains relevant even if SSG as an API is gone.

Do Client Components appear in the initial HTML on first load in the App Router?

Yes — and this is a common misconception. Despite being called "Client Components," they are still server-rendered to HTML on the first request. Next.js renders the entire component tree (both Server and Client Components) to HTML on the server for the initial page load, so the user sees meaningful content immediately without waiting for JavaScript. The 'use client' directive doesn't mean "skip server rendering" — it means "this component needs to be hydrated and made interactive on the client after the HTML arrives."

What is the difference then between Server and Client Components in terms of the initial HTML?

Both appear in the initial HTML, but what happens after that HTML lands in the browser is different:

  • Server Components — rendered to HTML on the server, never hydrated. No JavaScript is sent to the client for them. They are inert HTML.

  • Client Components — rendered to HTML on the server (for the initial load), then their JavaScript bundle is sent to the browser and React hydrates them — attaching event listeners and restoring interactive state.

The practical implication is that both contribute to First Contentful Paint, but only Client Components add to the JavaScript bundle size and hydration cost.

What is the senior-level insight about hydration and the "use client" boundary?

The 'use client' boundary defines where the React component tree splits — everything below that boundary in the import tree is bundled and sent to the client for hydration. This is why keeping the boundary as deep and narrow as possible matters: a 'use client' on a large layout component pulls its entire subtree into the client bundle, increasing JS payload and hydration time. The optimal pattern is to push interactivity to small leaf components (a button, a dropdown) while keeping data-heavy parent components as Server Components — maximizing static HTML and minimizing what React needs to hydrate.

Web Performance

What is “Critical Rendering Path“?

As we know, the browser needs to know the minimum set of resources it must download before rendering, to avoid presenting an obviously broken experience. On the other hand, the browser can’t make the user wait while it downloads resources that aren’t necessary for presenting some of the content. The sequence of steps the browser takes before the first render is called the “critical rendering path“. Since the browser can’t complete the initial render without the critical resources, they are known as “render-blocking resources“. The browser absolutely needs at least three types of resources:

  • The initial HTML that is received from the server to construct the actual DOM

  • The important CSS files used to style the initial HTML. If the browser proceeded without waiting for them, the user would see a weird “flash“ of unstyled elements.

  • The critical JavaScript files that load synchronously and can modify the DOM or layout before the first render.

Can you tell me the overall process of rendering the “Critical Rendering Path“?

The Critical Rendering Path includes the Document Object Model (DOM), the CSS Object Model (CSSOM), the render tree, and layout.

A request for a web page or application starts with an HTTP request, and the server sends back an HTML response. The browser then begins parsing the HTML and converts the received bytes into the DOM tree. While parsing the HTML, the browser discovers external resources such as stylesheets, scripts, and images, and starts downloading them in parallel.

The browser continues parsing the HTML and building the DOM until it reaches the end of the document. At the same time, CSS files are parsed to build the CSSOM. Once both the DOM and CSSOM are ready, the browser combines them to build the render tree, which contains only the visible elements and their computed styles.

After the render tree is created, the browser performs layout to calculate the size and position of each element. Finally, the browser paints the pixels on the screen, rendering the page that the user sees.

What are “Google Web Vitals“ and “Core Web Vitals“?

Google Web Vitals is a set of standardized performance metrics defined by Google to measure the real user experience of the website. They focus on how fast it loads, how responsive it feels, and how stable it is visually while loading.

Core Web Vitals are a subset of Web Vitals that Google considers the most critical for user experience and uses as ranking signals in search results. The three Core Web Vitals are Largest Contentful Paint, Interaction to Next Paint, and Cumulative Layout Shift.

  • Largest Contentful Paint (LCP): It is used to measure how long it takes for the largest visible content element to appear

  • Interaction to Next Paint (INP): It is a relatively new metric that was introduced in 2023, which measures how fast the app responds to user interaction with it. Basically, how “snappy“ the interaction feels

  • Cumulative Layout Shift (CLS): It is used to measure unexpected layout shifts while the page is loading

Can you explain the “Time To First Byte (TTFB)“ Metric?

When we open the browser and navigate to a website, the browser sends a GET request to the server and waits for the response. The time between sending the request and receiving the first byte of the response is known as the “Time To First Byte“ (TTFB) metric.

Can you explain the "Time To Interactive (TTI)" Metric?

Time to Interactive is a lab metric for measuring load responsiveness. It helps identify cases where a page looks interactive but actually isn't. A fast TTI helps ensure the page is usable, with fully interactive elements. This metric was removed from Lighthouse and was never part of Core Web Vitals, but it is still useful for measuring the performance of an SSR app. For example, SSR can lead to a scenario where a page looks interactive (links and buttons are visible on the screen) but isn't, because the main thread is blocked or the JavaScript controlling those elements hasn't loaded. To measure it, look at the end of the last long task before the quiet window.

When do the DOMContentLoaded and Load Event occur?

DOMContentLoaded and Load are two key browser events that mark the different stages of page loading.

DOMContentLoaded fires when the HTML document has been completely parsed and the DOM tree is built. That indicates the page structure is ready, but external resources such as images and stylesheets may still be loading.

The Load Event is fired when the entire page and all its dependent resources have finished loading. That indicates the page is fully loaded and visually complete.

What are the “First Contentful Paint (FCP)“ and “Largest Contentful Paint (LCP)“?

First Contentful Paint is one of the most important performance metrics since it measures the perceived initial load: it marks the moment the first piece of content renders, and until then the user is staring at a blank screen. Basically, it is the user’s first impression of how fast your website is. According to Google, FCP should ideally be below 1.8 seconds, before users lose interest in the website.

Largest Contentful Paint (LCP) happens at or after FCP. Instead of the very first element, it represents the main content of the page: the largest text, image, or video visible in the viewport. According to Google, this number should ideally be below 2.5 seconds; any longer and users will think our website is slow.

Tell me about the “Lighthouse“ tool?

Lighthouse is Google’s performance tool, integrated into Chrome DevTools; it can also be run as a shell script, from a web interface, or as a Node module. We can use the Node module to run it inside our build and detect regressions before they hit production. The Google metrics above can all be measured with this tool.

Lighthouse runs its audits in a fixed, simulated environment and gives fairly surface-level information. It’s a great entry point and an awesome tool for tracking performance changes over time, but to dig deeper into what is actually happening, we need the “Performance“ panel.

How important is CDN in improving the website’s load time?

The primary purpose of any CDN (Content Delivery Network) is to reduce the latency and deliver content to users as soon as possible. They implement multiple strategies for this, but the two most important ones are “caching“ and “distributed servers“.

A CDN will have many servers in various geographical locations. These servers are closer to the end user; they store copies of your website’s static files and send them to users on request.

How would you know the resource is getting from the browser’s cache instead of getting a new version?

When a server receives a request for a file, it can check when the file was last modified. It knows what the browser already has because the browser sends cache validation headers like If-Modified-Since and If-None-Match, allowing the server to compare versions. If the server’s version matches the cached file on the browser’s side, it responds with a 304 status code and an empty body (which is why the transfer size is so small). This tells the browser it’s safe to reuse the cached copy; there’s no need to re-download it.
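The exchange looks roughly like this (the date and ETag values are illustrative):

```http
GET /styles.css HTTP/1.1
If-Modified-Since: Tue, 05 Mar 2024 10:00:00 GMT
If-None-Match: "33a64df9"

HTTP/1.1 304 Not Modified
ETag: "33a64df9"
```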

Can you explain the behaviour of the Cache-Control header that servers set in the response?

The Cache-Control header can contain multiple directives in different combinations, but the most important ones are:

  • max-age with a number controls how long the particular response is going to be stored

  • must-revalidate directs the browser to always send a request for a refreshed version once the response is stale. A response becomes stale when it has lived in the cache longer than the max-age setting.

If we set max-age=0 together with must-revalidate, the browser will always check with the server and never use the cache right away. If we set max-age=31536000 (one year), the browser will serve the content straight from its cache for up to a year.
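As response-header fragments, the two setups described above look like this; the first forces revalidation on every use, the second allows a year of cache reuse:

```http
Cache-Control: max-age=0, must-revalidate

Cache-Control: max-age=31536000
```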

Do modern bundlers like Vite, Rollup, and Webpack always create “immutable“ JS and CSS files?

Well, they are not truly "immutable", of course. But those tools generate file names with a hash string that depends on the file's content. If the file's content changes, then the hash changes, and the name of the file changes. As a result, when the website is deployed, the browser will re-fetch a completely fresh copy of the file regardless of the cache settings. The cache is "busted", exactly like in the exercise before when we manually renamed the CSS file.

How does the Link component of frameworks such as Next.js, Remix, and TanStack Router work under the hood?

All of those modern frameworks give us a component that prevents the default link behaviour, plus some way to render different pages for different routes without reloading the page itself, whether through regex pattern matching or folder-based routing. There is no more “traditional” page navigation, which would include asking the server for new HTML, parsing it, and so on. Instead, the already initialized JavaScript destroys the entire page, or a part of it, then generates the new page and injects it in place of the old one. This part is usually invisible to us and is handled entirely by React.

For example:

Somewhere in the code, there is a Link component, inside of which there is this code:

<a
  href={href}
  onClick={(e) => {
    e.preventDefault();
    navigate(href);
  }}
>
  {children}
</a>

A normal <a> tag, where the default behavior on click is prevented, i.e., the "normal" link redirect won't happen. Instead, the navigation to the next page is triggered by JavaScript via navigate(href), which is not really a "navigation to the next page" per se. Inside this navigate function, there will be something like this:

window.history.pushState({}, '', newPath); 
dispatchEvent(new PopStateEvent('popstate', { state: {} }));

Where window.history.pushState simply updates the URL in the browser's address bar, and that's it, no redirects. The dispatchEvent part then dispatches a JavaScript event that can be listened to via addEventListener.

Somewhere else entirely, there will be a part that listens to that event and sets the state with the pathname value:

window.addEventListener('popstate', () => { 
  setPath(window.location.pathname);
});

And then somewhere in the third place, there will be code that renders different pages based on that state value:

switch (path) {
  case "/login":
    return <LoginPage />;
  default:
    return <DashboardPage />;
}

Why are no-JavaScript environments so important?

Because real people are not the only ones who access the website. There are two other major visitors:

  • Search engine bots (crawlers), especially Google crawlers

  • Various social media and messenger preview functionality

All of them work in a similar manner. First, they somehow get the URL of the website. This usually happens when users share the website through social media or when search bots mindlessly crawl through the millions of websites out there.

Second, the bots send a request to the server and receive the response, just like a browser does.

Third, from the received HTML, they extract useful information like text, links, and meta tags. Based on that, they build the search index, and the page becomes “Googleable“. Social media previews grab the meta tags and build the nice preview we have all seen, with a large picture, a title, and sometimes a short description.

Many older web crawlers execute no JavaScript at all, and while some popular search engines do wait for JavaScript, that rendering step is slow and budget-intensive, so the indexing of a website that relies heavily on JavaScript may be delayed or incomplete. That’s why it’s really important for the server to return the “proper“ HTML with all critical information on the very first request.
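This is why the very first HTML response should already carry the tags crawlers and preview bots read. A hedged fragment (titles, descriptions, and the image URL are illustrative):

```html
<title>Acme Store: Running Shoes</title>
<meta name="description" content="Lightweight running shoes with free shipping." />
<meta property="og:title" content="Acme Store: Running Shoes" />
<meta property="og:description" content="Lightweight running shoes with free shipping." />
<meta property="og:image" content="https://example.com/preview.png" />
```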

Can you tell me "the cost of server pre-rendering"?

When the server receives a request, it reads the index.html file that the build step generated in advance, converts it to a string, and sends it back to whoever requested it. That is basically what all hosting platforms that support SPAs do for us. To fix the "no JavaScript" problem, where the browser only receives an empty div when JavaScript is unavailable, we now need to modify the HTML on the server, since nothing stops us from modifying that string before sending it back.

Adding even a simple pre-rendering script introduces two new kinds of complexity:

  • The first problem is where to deploy the app. From now on, we need a server and can no longer keep the hosting cost at zero by serving only static resources.

  • The second problem is the performance impact of having a server: it introduces latency, including an unavoidable round-trip to the server on every initial load request.

Can you tell me what the "Server React DOM API" is and list all its methods?

The react-dom/server APIs let you server-side render React components to HTML. These APIs are only used at the top level of the React app to generate the initial HTML. Frameworks call them for us, and we mostly don't need to import them manually. There are three groups of React server APIs:

  • Server APIs for Web Streams (renderToReadableStream, resume): These methods are only available in environments with Web Streams, which include the browser, Deno, and some modern edge runtimes.

  • Server APIs for Node.js Streams (renderToPipeableStream, resumeToPipeableStream, prerenderToNodeStream): These methods are only available in environments with Node.js Streams.

  • Legacy Server APIs for non-streaming environments (renderToString, renderToStaticMarkup): These methods can be used in environments that don't support streams.

Explain to me the renderToString API method?

We call the renderToString method to render our app to an HTML string, which can be sent in our server response. This produces the initial, non-interactive HTML output of our React components. On the client, we then call the hydrateRoot method to hydrate the server-generated markup and make it interactive. renderToString returns the string immediately; it does not support streaming content as it loads. The modern streaming methods are better alternatives: for server-side rendering, they can stream content in chunks as it resolves on the server, so users see the page being progressively filled in before the client code loads; for static generation, they can wait for all content to resolve before generating the static HTML.
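A sketch of the server/client pair; the response-handler shape is an illustrative assumption, since in practice a framework wires this up for us:

```jsx
// server.jsx: render the app to a string and embed it in the HTML shell
import { renderToString } from 'react-dom/server';

const appHtml = renderToString(<App />);
response.send(`<!doctype html><div id="root">${appHtml}</div>`);

// client.jsx: attach event handlers to the server-generated markup
import { hydrateRoot } from 'react-dom/client';

hydrateRoot(document.getElementById('root'), <App />);
```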

What are the differences between createRoot and hydrateRoot?

createRoot lets us create a root to display React components inside the browser DOM node.

hydrateRoot lets us display React components inside the browser DOM node whose HTML content was previously generated by react-dom/server.

The main difference is that createRoot clears the existing HTML, generates new HTML with JavaScript, and injects it into the browser DOM node. Hydration, on the other hand, reuses the HTML generated on the server and only attaches the event handlers. If our app is an SPA, we use createRoot. In contrast, if our app is server-rendered, createRoot would throw away the server-generated HTML and redo all that work, so we need the hydrateRoot method.

Can SSR make LCP worse?

Unfortunately, yes; there are no silver bullets in performance. If someone tells us that SSR is a guaranteed 100% improvement in initial load over our SPA, they are mistaken. In a scenario where the CPU is really fast, the network is really slow, and the browser's cache is warm, the FCP/LCP metrics of an SPA can look better than SSR's. That is because with SSR the browser takes a long time to download the larger initial HTML over the slow network, while the SPA's initial HTML is tiny and the fast CPU makes JavaScript execution cheap. So if our app primarily targets that specific niche and is already an SPA, trying to introduce SSR might make things worse.

Why is implementing conditional SSR rendering a big mistake?

We all know that some browser APIs, such as window and document, can't be used in the server environment. A typical attempt to fix this is a conditional check on whether window is undefined, deciding which parts of the code should execute in the server or browser environment. Another option is to put the client-only code inside useEffect or useLayoutEffect, because those hooks are called after hydration and never run on the server side.

const Component = () => {
  // don't render anything while in SSR mode
  if (typeof window === "undefined") return null;

  // render stuff when the client mode kicks in
  return ...;
};

This natural temptation, conditional SSR rendering, doesn't work correctly. React expects the HTML produced by the client code to be exactly the same as the HTML produced by the server code. Implemented this way, the content on the server differs from the content on the client, which confuses React: it throws away the entire content of the "root" div and replaces it with freshly generated elements. The hydration mechanism is completely destroyed, with all the downsides that come with that. Sometimes it just introduces really weird layout bugs caused by the rehydration issue.

The correct way to do this is to rely on React's life cycle to "hide" the non-SSR compatible blocks.

const Component = () => {
  // initially, it's not mounted
  const [isMounted, setIsMounted] = useState(false);

  useEffect(() => {
    setIsMounted(true);
  }, []);

  // the server render and the very first client render both return null
  if (!isMounted) return null;

  return ...;
};

On the first render, both the client and the server produce the same null content, so React can trust the server-generated DOM and the hydration mechanism isn't broken. The non-SSR-compatible content only appears after the component mounts on the client.

Why does compressing JavaScript help to improve web performance?

The larger the JavaScript bundle, the longer users wait for the browser to finish downloading it. Having a few megabytes of JavaScript on the page just feels wrong, so bundle size and how to reduce it dominate performance-related discussions. In real life, when sending those files to users, we usually first compress them with something like gzip or Brotli to reduce their size. Most hosting providers have compression enabled by default, especially when a CDN is involved.

What is JavaScript evaluation?

If we profile a web application that ships a lot of JavaScript code, we might see a long task labeled Evaluate Script. Script evaluation is a necessary part of executing JavaScript in the browser, as JavaScript is compiled just-in-time before execution. When JavaScript is evaluated, it is first parsed for errors; if the parser finds none, the script is compiled into bytecode and execution continues from there. The network cost of downloading JavaScript is paid only once, on the first visit. After that, resources can come from the cache, but the cost of the JavaScript evaluation process is paid on every visit.

Which tools do you use to improve the performance of the website?

Well, there are a few tools that I use to measure the performance of a website. In the development phase, I use the React Profiler in the React DevTools to check whether any components render unnecessarily; it helps me identify which components are slow and why. I use the Performance tab in Chrome DevTools to analyze the main thread and identify what is blocking the UI during interactions. For example, when users report janky scrolling, you might find long-running JS tasks causing frame drops. I use the Network tab when investigating slow load times; the waterfall chart reveals API delays and caching problems. I also use webpack-bundle-analyzer to identify dependencies that are inflating my bundle size. And Lighthouse is my go-to for auditing overall performance, accessibility, and SEO before pushing to production.

Can you explain those terms, like minification and tree-shaking in the bundler?

Minification is the simplest optimization step, but it’s surprisingly powerful. It removes everything redundant from your final code without affecting execution: whitespace, comments, long variable names, and formatting are not needed in the final bundle. Your code works exactly the same but becomes smaller, faster to download, and faster to parse.

Tree-shaking is how the bundler removes dead, unused code from your final bundle. For example, if you define 5 functions but only use one, the other 4 are never used anywhere in your codebase, and tree-shaking removes them, making the final bundle smaller. Tree-shaking relies heavily on ES modules (import/export): because ES modules are static, the bundler can analyze them at build time and see which exports are actually used.

How do we use the Code Splitting technique to reduce bundle size?

The idea behind the Code Splitting technique is this: if we have to send users a large 5MB file and those 5MB are packed into a single file, downloading it can take a long time. So why not split that large file into multiple chunks and let the browser download them in parallel? That reduces the overall download time. The hard part is that we can't just cut the file into 10 random pieces; that is not how JavaScript works. We need to split it into isolated, independent modules while ensuring those modules can still call functions in other modules. Doing this manually would be a huge headache, but luckily, most modern bundlers can do it for us.
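The primitive bundlers build on here is the dynamic import(). A hedged browser sketch, where `button`, `renderChart`, and `./chart.js` are hypothetical names:

```javascript
// The chunk produced for chart.js is only fetched when the user asks for it.
button.addEventListener('click', async () => {
  const { renderChart } = await import('./chart.js');
  renderChart();
});
```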

https://vite.dev/guide/build.html#chunking-strategy

https://webpack.js.org/guides/code-splitting/

How do you decide when to split a large JavaScript bundle into multiple chunks?

Answer: I focus on Cache Stability and Resource Criticality. First, I separate vendor code from app code. Since node_modules change rarely, splitting them allows for long-term browser caching. For the dependency tree, if a library or feature is over 100KB, I'll extract it into its own chunk. I aim for chunks between 50KB–100KB to balance parallel download speed with compression efficiency.
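With Rollup or Vite, that vendor split can be expressed through the manualChunks option. A sketch, where the returned chunk name is our own choice:

```javascript
// vite.config.js (sketch)
export default {
  build: {
    rollupOptions: {
      output: {
        manualChunks(id) {
          // everything from node_modules goes into a long-cached vendor chunk
          if (id.includes('node_modules')) return 'vendor';
        },
      },
    },
  },
};
```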

Can you explain the risk of having TOO MANY small chunks in a web application?

Answer: There are two main risks. First is the Protocol Bottleneck: On HTTP/1.1, browsers are limited to ~6 concurrent connections. Too many chunks can create a "waterfall" that delays critical CSS. Second is Compression Overhead: Algorithms like Brotli work better on larger text blocks. Breaking code into tiny chunks reduces the compression ratio, potentially increasing the total transfer size.

What is the difference between how local development and production environments handle chunking?

Answer: The primary difference is the protocol. Local dev servers often use HTTP/1.1, while production CDNs use HTTP/2 or HTTP/3. In local testing, you might see performance regressions from many chunks due to connection limits. In production, those same chunks are requested in parallel. It is vital to verify performance in a production-like environment before final decisions.

If you move a specific UI component into a 'components' chunk, why might it still appear in the 'index' chunk?

Answer: This happens due to how the bundler traces the dependency tree. If the index entry point imports that component directly (or via a shared utility not included in the chunk rule), the bundler may prioritize including it in the main bundle to avoid extra round-trips. You must ensure the manualChunks configuration (like in Rollup/Vite) explicitly targets the file path and that imports are structured to allow isolation.

How does code splitting improve performance beyond just the initial download speed?

Answer: It significantly reduces JavaScript Execution and Compilation time. By splitting code, the browser can parse and compile individual chunks as they arrive, rather than waiting for a single massive file to finish downloading. This frees up the Main Thread much sooner, which improves responsiveness metrics like First Input Delay (FID) or Interaction to Next Paint (INP).


How would you approach a task where the production JavaScript bundle has suddenly grown by 2MB?

Answer: I would start by running a Bundle Analyzer (like webpack-bundle-analyzer or rollup-plugin-visualizer) to generate a treemap of the production build. I’d look for large "blobs" in the node_modules section. Once a culprit is identified, I’d trace its usage in the codebase. Often, it’s a case of someone importing an entire library (like lodash or MUI) instead of specific named exports, or a new dependency being pulled in by a sub-dependency.

What are the performance implications of using import * as UI from 'library-name' ?

Answer: This pattern generally prevents the bundler from performing effective Tree-Shaking. By using the namespace import (*), you are explicitly telling the bundler that you might need any part of that library at runtime. Even if you only use UI.Button, many bundlers will include the entire library in the final chunk, leading to massive "Bundle Bloat" and increased parse/compile times for the user.

Why might a "Unified Icon Library" file (one file that exports all project icons) be a bad architectural choice?

Answer: While it provides a clean developer experience (DX) and better autocomplete, it creates a bottleneck. If that central file imports 2,000 icons from a library to re-export them, every page that needs even a single "Close" icon will end up loading all 2,000 icons. This is because the bundler sees the central file as a single dependency that requires all those icons to be present.

If a bundle analyzer shows that a library is taking up 40% of your bundle, but you only use one function from it, what are your options?

Answer: First, I would check if the library supports Named Exports (e.g., import { functionName } from 'lib') which allows for tree-shaking. If it doesn't, I’d check for "sub-path imports" (e.g., import functionName from 'lib/functionName'). If the library is simply not tree-shakeable, I’d evaluate if there is a lighter alternative (like date-fns instead of moment.js) or if I can implement that specific logic manually to save those several hundred KB.

What is the "Comment-Out" strategy in bundle optimization?

Answer: It’s a quick sanity check used during the investigation process. Before committing to a complex refactor, you comment out the suspected heavy imports and rebuild the project. Even though the app won't run, the build will complete, allowing you to see exactly how much the bundle size drops. This confirms that your "investigation" is targeting the right package before you spend hours on the actual fix.

What is Tree-Shaking, and how does it differ from traditional Dead Code Elimination?

Answer: Tree-shaking is a form of dead code elimination that relies on the static structure of ES Modules (import and export). Unlike traditional DCE, which looks at segments of code that can't be reached, tree-shaking tracks the "live" exports across the entire module graph. If a module exports a function but nothing in the dependency tree imports it, the bundler "shakes" it off the tree to keep the final bundle lean.

Why does import * as Material from '@mui/material' often cause performance issues in production?

Answer: In theory, modern bundlers can tree-shake namespace imports. However, in practice, if that Material object is ever used as a dynamic reference (like Material[componentName]) or wrapped in another exported object (like export const Theme = { Material }), the bundler loses the ability to statically analyze which specific parts of the library are needed. To be safe, it includes the entire library, which for MUI includes hundreds of components and thousands of icons.

Can you explain a scenario where a component is imported but still excluded from the final bundle?

Answer: Yes. If I import MyDialog in App.tsx but don't actually reference it in the JSX/return statement, or if it's inside a conditional block that the bundler determines is "dead" (like if (false) { ... }), the bundler will see that the component is never actually "reached." Since the branch is dead, the component and all of its unique dependencies will be omitted from the production chunk.

How would you refactor a "Global UI Wrapper" that is causing a 5MB bundle size due to Material UI imports?

Answer: I would replace the import * as Material with Direct/Named Imports. Instead of mapping the entire library to a key, I would import only the specific components we use—like Button or Snackbar—and explicitly add them to the wrapper object. This restores the bundler's ability to perform static analysis, ensuring that only those two components (and their specific dependencies) are included in the vendor chunk.

Is it possible to use the "Namespace Pattern" (e.g., <UI.Button />) without breaking tree-shaking?

Answer: It depends on the bundler, but generally, the safest way to maintain that DX is to build the UI object manually using named imports. For example: import { Button } from './Button'; export const UI = { Button };. As long as you aren't using import * to grab the whole directory, the bundler can see that UI.Button only requires the Button file, and it will continue to tree-shake other unused components in that directory.

Why can a bundler tree-shake your own code easily, but often fails to tree-shake older libraries like Lodash?

Answer: Tree-shaking requires the ESM (ES Modules) format. Our own code uses import and export, which allows the bundler to map the dependency tree statically. Older libraries often distribute code in CommonJS or UMD formats. Because these formats allow dynamic requires and exports, the bundler cannot be 100% sure that a piece of code is unused, so it includes the entire library to avoid breaking the app.

What is "Cherry-picking" in the context of bundle optimization?

Answer: Cherry-picking is the practice of importing a specific function directly from its file path rather than the library's main entry point—for example, import trim from 'lodash/trim' instead of import { trim } from 'lodash'. This bypasses the potentially non-tree-shakeable main index file and ensures only the code for that specific utility is included in the bundle.

If npx is-esm returns "No" for a package, what does that tell you about its impact on your bundle?

Answer: It tells me that the package does not provide a standard ES Module entry point. Consequently, standard named imports (import { x } from 'pkg') will likely fail to tree-shake, and the entire library will be bundled. In this case, I would either look for an ESM-native alternative, search the documentation for sub-path "cherry-picking" imports, or evaluate if I can replace the functionality with native JavaScript.

How do you decide between using a utility library (like Lodash) and writing native JavaScript code?

Answer: I look at Browser Support and Bundle Cost. If I only need simple functions like trim() or toLowerCase(), I use native JavaScript because it costs 0 bytes and is highly optimized by the browser engine. I only reach for a library if I need complex, battle-tested logic (like deepClone or debounce) that would be error-prone to write manually, and even then, I ensure I'm importing only the specific functions needed.

What is the difference between import { Star } from '@mui/icons-material' and import Star from '@mui/icons-material/Star'?

Answer: The first is a Named Import from the main index. If the library is properly configured for ESM, this should tree-shake. The second is a Default Import from a specific file path. The second approach is "safer"—it guarantees that only the Star icon is pulled in, regardless of how complex the library's internal tree-shaking configuration is. Many developers prefer the second one for icons because it prevents "accidental" bloat from configuration errors.

What would you do if you found three different date-manipulation libraries in a single project’s bundle?

Answer: I would perform a Unification Audit. First, I’d use a search tool to see how many times each library is used. If a heavy library like moment is only used in a few places, I’d refactor those instances to use a more modern, tree-shakeable alternative like date-fns or native Intl APIs. The goal is to standardize on the single most efficient library and then uninstall the others to shrink the vendor chunk significantly.

How do you identify why a specific package is being included in your bundle if you haven't explicitly imported it?

Answer: I would use a dependency tracing tool like npm-why or yarn why. These tools show the dependency chain. For example, if I see @emotion in my bundle but I only use Tailwind, npm-why might reveal that it’s being pulled in as a sub-dependency of @mui/material. To get rid of it, I’d have to either replace the parent library (MUI) or find a version that doesn't rely on that transitive dependency.

Why did replacing one @emotion component with a Tailwind class not immediately remove Emotion from the bundle?

Answer: This is a classic case of Transitive Dependencies. Even if you stop using a library directly in your code, it will remain in the bundle if another library you are still using (like Material UI) depends on it. Removing a library from a bundle often requires a "recursive" cleanup of all packages that reference it.

What is the trade-off between using a UI library like Material UI vs. a headless primitive library like Radix UI?

Answer: Bundle size vs. Development speed. Material UI comes with pre-defined styles and heavy internal dependencies (like Emotion), which can bloat the bundle to several megabytes. Radix UI provides "headless" primitives (accessible logic without styles), which are much smaller and allow you to use your own lightweight CSS-in-JS or Tailwind. In performance-critical apps, switching from MUI to Radix can often save hundreds of kilobytes.

In a "Bundle Size Initiative," how do you prioritize which libraries to refactor first?

Answer: I use the "Impact vs. Effort" matrix. I start by looking at the Bundle Analyzer. Large, non-tree-shakeable blocks that are only used in a few places are "Low Effort, High Impact" wins. Redundant libraries (like having two different icon sets) are also top priorities. I save large-scale migrations—like moving an entire themed app away from MUI—for last, as they require significant regression testing.

Others

How do you debug issues?

There are a few effective strategies I use to resolve the most complex issues while saving time and energy:

  • Firstly, I try to reproduce the error consistently. We can’t fix an error we can’t see. Once I know how to reproduce the bug, I try to make the reproduction fast so I can jump straight into the problem area. After that, before digging into the problem, I collect as much context as possible (confirming with QC, checking records and logs) to make sure it’s really a bug and to avoid missing information.

  • The next part is isolating the problem. Finding bugs in a large codebase is really hard. I apply a binary-search approach that divides the code into working and non-working parts by commenting out suspicious code, repeating the process until I find exactly the problematic area. Additionally, I use console.log to find issues in most cases, and I also use debugging tools like the VS Code debugger, Chrome DevTools, and React Developer Tools when needed.

  • After I find the part of the code that causes the issue, I adopt a scientific, methodical approach instead of making random changes. I start by forming a hypothesis about the root cause, add logging and breakpoints to verify it, observe the results, fix the code based on what I learn, verify the fix, and repeat the cycle with a new hypothesis if needed. Finding and fixing issues depends on how well you understand the system, the flow, and how things work under the hood. Once you clearly know how things work, you can find issues quickly.

  • When I am stuck because a bug is particularly hard, I write down what I have tried. This prevents me from doing the same thing twice and expecting different results, and it also makes it easier for others to help me. When my brain feels fried after hours of trying to fix a bug without success, I step away from my laptop and take a 15-minute walk to refresh my mind. Sometimes that is the fastest way to solve a bug.

  • AI coding tools are really powerful these days, especially if we give them the right information; the right context is the key. I mostly let AI fix bugs after I have found the issue and can describe the solution to it. For some simple issues, I just let the AI do everything and only review the final code.

List of the Behaviour Questions

Can you tell me about yourself?

Hi, I'm Tuan. I am a Senior Front-End Engineer with over five years of experience. I have a degree in Computer Science, which gave me a very strong foundation in programming and problem-solving.

Throughout my career, I have specialized in React, Next.js, and TypeScript. I care a lot about the 'fundamentals'—like making sure my code is clean, SEO-friendly, and accessible for all users.

In my most recent role, I worked for a large international digital consultancy. For more than four years, I was the Senior Front-End Lead for a major e-commerce platform in the Australia and New Zealand region (ANZ). I was responsible for building and designing key features that helped the business grow.

Something that defines me is my habit of learning. For the last five years, I have read technical articles every single day. I believe a senior engineer doesn't need to know everything, but they must know how to find the right solution quickly.

I also have a blog where I share what I learn. I think my blog is the best way to see how I think and solve technical problems.

Can you tell me about your work in your recent project?

In my last project, I spent four years contributing to the front-end development for a large e-commerce website in the ANZ region.

As a senior engineer, I was responsible for the most complex parts of the project. This included things like a new login flow, color discovery features, and product detail pages. I always make sure I fully understand the requirements before I start writing any code.

My process is simple but effective:

  • First, I analyze the task. If it's complex, I discuss it with my team and the lead to make sure we have a good plan.

  • Second, I focus on 'Clean Code' principles. I want my code to be easy to read and easy to maintain.

  • Third, I always write unit tests. This helps me find bugs early and deliver high-quality work on time.

Because of my impact on the project, I was promoted twice during my four years there.

What do you do to enhance your technical knowledge apart from your project work?

I have a very disciplined approach to learning. My first priority is always to deliver my project tasks on time and with high quality. However, I believe that staying updated is part of a senior engineer's job.

For over five years, I have maintained a daily habit of reading technical literature. I use tools like daily.dev, and I follow industry experts on Medium and Substack. This helps me stay ahead of the latest trends in the JavaScript and React ecosystem.

When I find a truly deep or useful article, I don't just read it—I save it into my personal knowledge base. Over time, this has built a massive library of resources that help me solve complex problems quickly.

To solidify my knowledge, I also run a technical blog. I believe that 'to teach is to learn twice.' Writing about advanced topics and architectural patterns helps me understand them deeply. This habit ensures that I never fall behind and that I am always ready to adopt new technologies when the project needs them.

Tell me about a time you had a disagreement with your manager.

In my experience as a Senior Engineer, I believe that a disagreement is actually an opportunity for collaboration. I generally have a very good relationship with my leads, but when we have different opinions on a technical solution, I follow a professional process.

For example, if my manager proposes a solution that I think could be more optimized, I don't just say 'no.' Instead, I prepare a clear technical proposal. I show the pros and cons of my idea compared to the original one.

My goal is never to 'win' the argument, but to find the best solution for the project. I believe in 'Strong Opinions, Weakly Held.' This means I will advocate for the best technical path, but once a final decision is made by the lead or the team, I fully support it and work hard to make it successful.

This approach has helped me maintain high technical standards while keeping a very positive and professional environment in the team.

Tell me about a situation when you had a conflict with a teammate.

I focus on maintaining a positive team environment, so I rarely have personal conflicts. However, I once had a technical disagreement during a code review.

A teammate was trying to improve performance by using Math.random() to generate React keys everywhere on the site. I knew this would cause major performance problems because React keys must be both 'unique and stable' for the reconciliation process to work correctly.

At first, he didn't realize why this was a problem. Instead of pointing out the mistake in the public group chat, I reached out to him privately. I explained how React works 'under the hood' and why stable keys are necessary. I also suggested better solutions, like using unique IDs from our data.

He understood the explanation, updated the Pull Request, and the task was completed successfully. By handling it privately and technically, I helped my teammate grow while keeping our professional relationship very strong.
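The reconciliation point above can be sketched outside React with a hypothetical comparison of keys across two simulated renders (the `products` data and helper names are illustrative, not from the project): random keys never match between renders, so React would treat every item as new and re-mount it, while keys derived from stable data IDs let it reuse the existing elements.

```javascript
// Hypothetical sketch: why React keys must be stable across renders.
const products = [
  { id: 'p1', name: 'Shirt' },
  { id: 'p2', name: 'Jeans' },
];

// Anti-pattern: a fresh set of keys on every render.
const randomKeys = () => products.map(() => String(Math.random()));

// Correct: keys derived from stable, unique IDs in the data.
const stableKeys = () => products.map((p) => p.id);

// Simulate two consecutive renders and count how many items
// could be matched up (and therefore reused) by their keys.
function reusableCount(keysFn) {
  const firstRender = keysFn();
  const secondRender = keysFn();
  return secondRender.filter((key) => firstRender.includes(key)).length;
}
```

With stable keys every item matches between renders; with random keys none do, which is why React would re-mount the entire list on every render.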

Tell me about a time you failed. How did you deal with this situation?

Early in my career, I made a mistake by focusing too much on speed and not enough on quality. I wanted to prove I was fast, so I jumped straight into coding before fully understanding the requirements. I skipped unit tests and didn't confirm unclear details with the Business Analyst.

As a result, the feature was full of bugs and didn't meet the client's needs. This delayed the release and was a very tough lesson for me. I learned that 'being fast' is useless if the work is incorrect.

Since that day, I have followed a disciplined process that ensures high quality:

  • First, I analyze requirements deeply and ask questions early.

  • Second, I break big tasks into smaller, manageable parts.

  • Third, I always write unit tests and perform thorough 'smoke tests' before delivery.

This failure helped me become the reliable Senior Engineer I am today. I now complete my tasks on time, with a very low bug rate and high-quality code.

Describe a time when you led a team. What was the outcome?

While I haven't held a formal 'Team Lead' title yet, I have naturally taken on informal leadership responsibilities in my senior roles.

For example, I am the primary person for code reviews on my team, and I mentor junior developers to help them follow best practices. I also take the lead on technical documentation and architectural proposals for new features.

I am very interested in moving into a formal leadership role. I have been studying management principles from industry experts to understand how to motivate a team and manage project risks. I believe my strong technical foundation and my ability to communicate clearly make me ready for this next step in my career.

Tell me about a time that you worked well under pressure

I remember a critical issue that appeared right on a major release day. A navigation feature that was working perfectly in testing suddenly started 'flickering' in the production environment. This was a high-pressure situation because the client needed a fix immediately to avoid delaying the launch.

Even though the pressure was high, I stayed calm and followed a logical debugging process:

  • First, I reproduced the bug in a local environment to understand the root cause.

  • Second, I used a 'divide and conquer' approach by isolation—commenting out sections of code to find the exact source of the flicker.

  • Third, once I found the issue, I analyzed the logic and implemented a stable fix.

I was able to resolve the bug within two hours. The client was very pleased with the prompt and professional response. This experience taught me that staying calm and following a methodical process is the best way to handle high-pressure situations.

How do you handle a situation when you don’t know the answer to a question?

Answer:
"In my experience, especially during client demos, it's very important to handle unknown questions professionally to maintain trust. I remember a time when a client asked me about a specific technology I hadn't used before. Instead of pretending to know the answer, I stayed honest and professional. I told the client: 'That's a great question. I want to give you the most accurate information, so I need to do some quick research before I confirm the details. I will get back to you by the next meeting.' I then researched the topic and consulted with my team. At the next meeting, I provided a confident and correct answer. I believe being honest is the best way to build long-term trust with stakeholders."

Describe a time when you received tough or critical feedback.

Answer:
"Early in my career, my Technical Lead told me that while my delivery speed was excellent, my code quality needed more focus and I needed to be more open to teammates' opinions. At first, it was hard to hear, but I realized his feedback was a gift to help me grow. I decided to change my approach immediately by taking more time for self-reviews, performing strict 'smoke tests' before delivery, and being more open-minded during discussions. My lead was very satisfied with my improvement, and this feedback actually helped me transition into a Senior role. It taught me that a great engineer must be humble and focus on quality above all else."

Describe a time when you had to give someone critical feedback. How did you handle it?

Answer:
"In my senior role, I often mentor junior developers. I once worked with a talented developer who was misusing React memoization hooks like useMemo and useCallback everywhere. I arranged a private one-on-one meeting to provide constructive feedback. Instead of just highlighting the mistake, I explained how React works under the hood and why over-memoization can actually hurt performance. I also shared deep-dive resources to help him learn. He responded well, updated his code correctly, and thanked me for the guidance. For me, giving feedback is about helping the team grow together."
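A rough, simplified stand-in for the bookkeeping `useMemo` performs (not React's actual implementation) makes the over-memoization point concrete: even on a cache hit, the hook still pays for a dependency comparison on every render, so wrapping trivial computations trades a cheap calculation for that overhead.

```javascript
// Hypothetical sketch of a useMemo-like cache and its per-render cost.
function makeUseMemo() {
  let prevDeps = null;
  let cached;
  return function useMemoLike(factory, deps) {
    // This comparison runs on EVERY render, cache hit or not.
    const depsChanged =
      prevDeps === null ||
      deps.length !== prevDeps.length ||
      deps.some((dep, i) => !Object.is(dep, prevDeps[i]));
    if (depsChanged) {
      cached = factory(); // only recompute when a dependency changed
      prevDeps = deps;
    }
    return cached;
  };
}
```

When the factory is as cheap as adding two numbers, the `Object.is` comparisons and cache storage cost more than just recomputing, which is why memoizing everything can hurt rather than help.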

Describe a time when you anticipated potential problems and developed preventive measures

Answer:
"During a major performance refactoring phase, I noticed a significant risk in our codebase. Our team had agreed to refactor React keys to improve rendering, but a teammate incorrectly implemented a utility using Math.random() to generate keys globally. I recognized immediately that this would cause React to re-mount every component on every render, leading to a performance disaster. I rejected the Pull Request and organized a meeting to explain the reconciliation process and why keys must be both unique and stable. By catching this early and mentoring my teammate on React internals, I prevented a critical production issue and helped the team adopt a more robust implementation strategy."

Describe a situation when you had to deal with a difficult customer

Answer:
"In my role as a Senior Engineer, I primarily interface with project stakeholders and onsite partners rather than direct retail customers. I’ve found that the best way to prevent 'difficult' situations with any stakeholder is through consistent transparency and high-quality delivery. For example, by providing clear technical demos during bi-weekly standups and maintaining open communication on JIRA, I’ve built a high level of trust with our international partners. Because our team consistently hits milestones and maintains a low bug rate, my interactions with stakeholders have remained very positive and collaborative."

Tell me a time when you missed a deadline. What happened, and how did you handle it?

Answer:
"I am very disciplined with planning, so I rarely miss deadlines. However, I once encountered a situation where a critical requirement was missing from a ticket late in the sprint. Instead of rushing to code a partial solution, I immediately raised the risk to my Project Manager and Business Analyst. I worked closely with them to clarify the missing logic and provided a new, realistic estimate for the update. Even though the original internal target shifted, I committed to a new delivery time and worked diligently—including extra hours—to ensure the final feature was shipped with zero bugs for the main release. This taught me that early communication is the most important tool when facing a deadline risk."

Describe when your workload was heavy and how you handled it

Answer:
"During the implementation of a new authentication flow using Azure B2C, my workload suddenly doubled when my Lead was pulled into another urgent release. I was assigned the entire authentication UI suite—including login, sign-up, and password recovery—with a very tight deadline. These tasks required a deep dive into native HTML, CSS, and JavaScript within a complex specialized platform. I handled this by first performing a risk assessment and communicating the new effort requirements to my PM. Once the stakeholders were aligned, I focused on a methodical execution. I successfully shipped all forms on time, receiving high praise from the client for delivering a complex feature smoothly under pressure."

Describe a time you had to deal with a significant change at work. How did you adapt?

Answer:
"In my experience, significant changes are best handled through a structured technical investigation. While our project requirements are usually well-defined, I am always prepared for shifts in scope. If a major change occurs, my first step is to perform a deep-dive analysis into the existing codebase to identify similar patterns. I then research the official documentation and best practices for the new requirements. Before starting any implementation, I break the solution down into technical tasks with realistic estimations. I then proactively communicate the impact and the timeline to my Project Manager and Business Analyst. This structured approach ensures that the change is integrated smoothly without risking the stability of the overall release."

Describe a situation where you saw a problem and took an initiative to correct it

Answer:
"While working on a new feature, I noticed a critical bug in a common utility function that was being used across the entire site. A teammate had implemented an asynchronous operation inside a forEach loop, which was causing intermittent race conditions and mysterious bugs. Even though this wasn't part of my assigned ticket, I took the initiative to address it. I reached out to my teammate privately and explained why forEach doesn't wait for promises and how to correctly use for...of or Promise.all. I helped him refactor the utility and verify the fix. By taking this initiative, I not only fixed a major hidden bug but also shared important technical knowledge with the team."
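The bug pattern described above can be sketched with hypothetical `loadAll` helpers (the names and `fetchItem` parameter are illustrative): `forEach` fires its async callbacks and returns immediately, while `for...of` actually awaits each iteration, and `Promise.all` runs the fetches concurrently but still waits for all of them.

```javascript
// Buggy version: forEach does not await its async callbacks,
// so the function returns before any fetch has finished.
async function loadAllBroken(ids, fetchItem) {
  const results = [];
  ids.forEach(async (id) => {
    results.push(await fetchItem(id)); // fire-and-forget: nothing awaits this
  });
  return results; // typically still empty at this point
}

// Correct sequential version: for...of really waits for each iteration.
async function loadAllSequential(ids, fetchItem) {
  const results = [];
  for (const id of ids) {
    results.push(await fetchItem(id));
  }
  return results;
}

// Correct concurrent version: start every fetch, then wait for all of them.
function loadAllConcurrent(ids, fetchItem) {
  return Promise.all(ids.map((id) => fetchItem(id)));
}
```

The broken version is exactly the kind of intermittent failure mentioned above: it sometimes appears to work when the promises happen to resolve quickly, which makes the race condition hard to spot in review.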

Describe a time when there was a conflict with your team. How did you help resolve it?

Answer:
"Early in my career, I was sometimes too protective of my own technical choices, which occasionally led to friction during code reviews. I realized that being a great engineer isn't just about writing code; it's about collaboration. I made a conscious decision to change my mindset. Now, I view every comment as an opportunity to improve the project. When a conflict arises, I stay calm, take a breath, and evaluate the feedback objectively. If I disagree with a bug report or a review comment, I don't argue; instead, I arrange a quick call to explain my logic clearly. If we still don't agree, I involve a third party like a Tech Lead or BA to provide a neutral perspective. This transition from being 'stubborn' to being 'collaborative' has significantly boosted my productivity and strengthened my relationship with my team."

Describe a time when you went out of your comfort zone. What lessons did you learn?

Answer:
"The most significant time I stepped out of my comfort zone was when I moved from a local project to an international one with English-speaking clients in the ANZ region. Initially, I was very nervous about demoing my features in English during bi-weekly standups. I even tried to avoid them at first. However, I soon realized that to grow as a Senior Engineer, I needed to master technical communication. I started preparing for my demos much more thoroughly—writing scripts and practicing my speaking. I stopped hiding and began actively presenting my weekly achievements. This experience taught me that growth happens when you face your fears. My confidence improved, my PM noticed a huge difference, and I am now very comfortable presenting complex technical flows to international stakeholders."

Describe a time when you took a big risk, and it failed

Answer:
"I am generally a risk-averse engineer who prefers careful planning, but there was a situation where I had to step up during an emergency. A teammate went on leave right before a production release, leaving a complex GraphQL change unfinished. Even though I am primarily a Front-End expert, I volunteered to handle the backend integration. I worked extra hours to learn the new GraphQL patterns and coordinated across teams to meet the deadline. In the end, I only completed about 80% of the task on my own and needed a lead's help for the final integration. While I didn't finish everything 100% independently, it was a valuable experience. I learned how to manage high-pressure cross-stack tasks, and the client was very impressed with my willingness to take on a difficult challenge to ensure the release was a success."

Describe a time you had to explain a complex technical concept to someone non-technical

Answer:
"Early in my career, I struggled to explain technical details to non-developers like BAs or QCs. I eventually realized that effective communication is a core senior skill. My approach changed from focusing on how a feature works to why it matters for the business. Now, when I explain a technical concept, I use analogies and focus on the user impact rather than the implementation detail. For example, instead of discussing 'asynchronous state synchronization,' I talk about how the system ensures the user's data remains consistent across the whole site. This shift in perspective has made my collaborations with the QA and Product teams much smoother and more productive."

Tell me a time you disagreed with your colleague. How did you handle this situation?

Answer:
"I believe that healthy technical debate is essential for a high-quality project. When I have a disagreement with a colleague, I focus on staying objective and professional. I avoid personal tension and instead move the discussion to a dedicated technical meeting where we can look at the facts. During these sessions, I practice active listening to understand their reasoning before sharing my own perspective. If we reach a stalemate, I proactively involve our Technical Lead to provide a neutral 'third party' decision. My priority is always the best interest of the project, not 'winning' the argument. This approach ensures that we remain effective teammates even after a tough technical decision."

Tell me about a complex task you have worked on recently

Answer:
"Recently, I led the front-end implementation for a major authentication migration to Azure B2C. This was a high-stakes project delivered at the end of the year. The challenge was that I had to build a modern, high-fidelity UI using native HTML, CSS, and JavaScript, as the platform did not support modern frameworks like React. The documentation was also quite limited. I handled this by performing deep-dive research into the Azure B2C Custom Policy framework and breaking the large project into modular vanilla JavaScript classes. Despite the technical constraints and tight deadlines, I delivered the full suite of Login, Sign-up, and Password Recovery flows on time with zero critical bugs. It was a great example of using core web fundamentals to solve a complex architectural problem."

How do you stay up-to-date with the latest technological advancements?

Answer:
"I view continuous learning as a daily discipline rather than an occasional task. For over five years, I have dedicated at least two hours a day to staying updated with the JavaScript and React ecosystem. I rely on a curated library of resources, including daily.dev, Medium, and various expert newsletters. To ensure my knowledge is practical and stays with me, I maintain a technical blog where I document advanced concepts and 'lessons learned' from my projects. I believe a senior engineer's value comes from their ability to bring these new, optimized solutions into their team's workflow to prevent the project from falling behind technologically."

Give me an example of a time you had to debug a challenging technical issue.

Answer:
"I recently handled a challenging cross-device issue where a production bug only appeared on mobile Safari. Because our environment was behind a strict VPN, I couldn't use standard remote debugging tools easily. To solve this, I followed a methodical 'isolation' process. I first verified the requirement to ensure it wasn't a misunderstanding, then I used logging and targeted code removal to 'scope down' the bug locally. Once I identified the root cause—a specific CSS legacy behavior in Safari—I coordinated with the team to implement a stable fix that wouldn't impact other browsers. I find that a calm, step-by-step reproduction process is the most effective way to resolve even the most difficult technical issues."

References

https://github.com/ashishps1/awesome-behavioral-interviews

https://javascript.plainenglish.io/93-of-frontend-developers-cant-explain-react-profiler-vs-lighthouse-here-s-what-actually-2bdf07b0f253

https://javascript.plainenglish.io/the-one-frontend-interview-question-that-humbled-me-after-100-interviews-42935d92496c

https://medium.com/@jaganjvvn/real-world-frontend-interview-answers-javascript-react-typescript-b531f8298a8f

https://javascript.plainenglish.io/15-react-interview-questions-every-mid-level-developer-should-be-ready-for-in-2025-38ee70bc114c

https://javascript.plainenglish.io/mastering-the-senior-react-developer-interview-real-world-questions-you-must-prepare-for-46cda3df1c82

https://www.developerway.com/posts/server-actions-for-data-fetching

https://thetshaped.dev/p/conscious-debugging-10-effective-debugging-strategies-debug-like-pro

https://www.developerway.com/posts/debugging-with-ai

https://medium.com/@kanishks772/every-bug-i-ever-fixed-made-sense-only-after-i-understood-these-7-layers-ebcae423399b