Introduction
Welcome to the foundational chapter of your Node.js backend engineering interview preparation. This chapter is meticulously crafted to equip you with a robust understanding of Node.js fundamentals and the essential JavaScript core concepts that underpin all Node.js applications. From interns taking their first steps to seasoned technical leads optimizing high-performance systems, a solid grasp of these principles is non-negotiable for success.
We will delve into Node.js’s unique architecture, its asynchronous, event-driven nature, the critical role of the Event Loop, and how JavaScript’s runtime behavior directly influences application performance and reliability. You’ll explore module systems, package management, and core Node.js APIs, building a strong base for more advanced topics. This chapter also includes practical coding questions and touches upon basic Data Structures and Algorithms commonly encountered in backend roles, ensuring you can articulate and apply your knowledge effectively in an interview setting.
Core Interview Questions
1. What is Node.js and why is it popular for backend development?
Q: Explain what Node.js is, its core characteristics, and why it has become a popular choice for building backend services, especially as of 2026.
A: Node.js is an open-source, cross-platform JavaScript runtime environment that executes JavaScript code outside a web browser. It’s built on Chrome’s V8 JavaScript engine, which is highly performant. Its popularity for backend development stems from several key characteristics:
- Asynchronous, Event-Driven, Non-blocking I/O: This is Node.js’s most distinguishing feature. It uses a single-threaded event loop model for most operations, which allows it to handle many concurrent connections efficiently without creating a new thread for each request. This makes it ideal for I/O-bound tasks like APIs, real-time applications (chat, streaming), and microservices.
- JavaScript Everywhere: Developers can use a single language (JavaScript) for both frontend and backend development, simplifying context switching, enabling code reuse, and often leading to faster development cycles.
- High Performance (V8 Engine): Leveraging Google’s V8 engine, Node.js compiles JavaScript directly into machine code, offering excellent runtime performance.
- Scalability: Its non-blocking nature makes it highly scalable for concurrent requests. Combined with clustering and worker threads (introduced experimentally in Node.js 10, stable since Node.js 12), it can effectively utilize multi-core processors.
- Rich Ecosystem (NPM): The Node Package Manager (npm) is the world’s largest software registry, offering millions of open-source libraries and tools, accelerating development.
- Microservices Architecture: Node.js’s lightweight and efficient nature makes it a strong candidate for building small, focused microservices.
Key Points:
- JavaScript runtime, V8 engine.
- Asynchronous, non-blocking I/O, event-driven architecture.
- Single language for full-stack.
- Scalability for I/O-bound applications.
- Rich NPM ecosystem.
Common Mistakes:
- Calling Node.js a “framework” or “language.” It’s a runtime.
- Assuming it’s good for CPU-bound tasks without discussing worker threads or external services.
- Ignoring the role of the V8 engine.
Follow-up:
- “How does Node.js handle concurrency with its single-threaded model?”
- “Can you name some real-world companies or applications that heavily rely on Node.js?”
2. Explain the Node.js Event Loop. How does it enable non-blocking I/O?
Q: Describe the Node.js Event Loop. How does it allow Node.js to achieve non-blocking I/O and handle concurrency effectively, despite being single-threaded?
A: The Node.js Event Loop is a core architectural pattern that allows Node.js to perform non-blocking I/O despite JavaScript being single-threaded. It continuously checks the Call Stack for tasks to execute and, when the stack is empty, checks the message queue for pending callbacks. When an asynchronous operation (reading a file, making a network request, a timer) is initiated, Node.js offloads it to the system kernel via `libuv`, a C library. The JavaScript main thread does not wait for it; it continues executing other code. Once the asynchronous operation completes, the kernel notifies Node.js, and the callback for that operation is placed in the Event Queue (or Message Queue). The Event Loop's job is to pick up these callbacks from the queue and push them onto the Call Stack for execution whenever the Call Stack is otherwise empty. This mechanism ensures the main JavaScript thread is never blocked by long-running I/O operations, making Node.js highly efficient for concurrent I/O.
Key Points:
- Single-threaded JavaScript execution.
- Offloads I/O to `libuv` (C library, exposed via Node's C++ bindings).
- Uses a Call Stack and an Event Queue (or Message Queue).
- Continuously cycles, pushing callbacks from queue to stack.
- Enables non-blocking I/O and efficient concurrency.
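A tiny sketch of this ordering — synchronous code always runs to completion before any queued callback, and microtasks (promise callbacks) run before timer callbacks:

```javascript
const order = [];

setTimeout(() => order.push('timer callback'), 0); // queued for the timers phase

order.push('sync 1');
Promise.resolve().then(() => order.push('microtask')); // runs after sync code, before timers
order.push('sync 2');

setTimeout(() => {
  // By now the first timer and the microtask have both run.
  console.log(order); // ['sync 1', 'sync 2', 'microtask', 'timer callback']
}, 10);
```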
Common Mistakes:
- Believing Node.js is fully single-threaded; `libuv` uses a thread pool for some operations (e.g., file I/O, DNS lookups).
- Confusing the Event Loop with simply “asynchronous code.”
- Not mentioning `libuv` or the Call Stack/Event Queue interaction.
Follow-up:
- “Can you describe the different phases of the Event Loop?” (Advanced)
- “What happens if a synchronous, CPU-intensive task runs on the main thread?”
3. Differentiate between process.nextTick() and setImmediate(). When would you use each?
Q: Explain the difference between `process.nextTick()` and `setImmediate()` in Node.js. Provide scenarios where you would prefer one over the other.
A: Both `process.nextTick()` and `setImmediate()` schedule functions to be executed asynchronously, but they run at different points within the Event Loop's phases.
- `process.nextTick(callback)`: Schedules a callback to run on the next tick of the event loop, before any I/O events or `setImmediate` calls. It runs immediately after the current operation completes, before the Event Loop advances to the next phase — in effect, between the completion of the current synchronous block and the start of the next phase.
  - Use Case: Deferring execution until just after the current function finishes, but before any I/O. Useful for guaranteeing a callback runs before any other asynchronous code, e.g., error handling, normalizing APIs to be consistently async, or preventing stack overflow in recursive async calls.
- `setImmediate(callback)`: Schedules a callback to run in the `check` phase of the Event Loop. The `check` phase runs after the `poll` phase (which handles I/O events) and before the `close callbacks` phase.
  - Use Case: Executing code after I/O callbacks have completed, but before the next event loop iteration. Ideal for breaking up large synchronous tasks, or for letting the Event Loop process pending I/O before the callback runs.
Key Points:
- `nextTick` runs before the next Event Loop phase, including `setImmediate` and I/O.
- `setImmediate` runs in the `check` phase, after I/O polling.
- `nextTick` is for “as soon as possible after the current operation.”
- `setImmediate` is for “as soon as possible after current I/O.”
Common Mistakes:
- Confusing their execution order, especially in relation to `setTimeout(fn, 0)`. (`setTimeout(fn, 0)` is handled in the `timers` phase; when both are scheduled from inside an I/O callback, it always executes after `setImmediate`.)
- Using `nextTick` for heavy computation, which can starve the Event Loop.
Follow-up:
- “How does `setTimeout(fn, 0)` compare to `process.nextTick()` and `setImmediate()`?” (Mid/Senior)
- “Can `process.nextTick` lead to Event Loop starvation?”
4. Explain the JavaScript this keyword and its behavior in Node.js, particularly with arrow functions and regular functions.
Q: How does the `this` keyword behave in JavaScript within a Node.js environment? Discuss its behavior in regular functions versus arrow functions, and provide examples.
A: The `this` keyword in JavaScript is notorious for its flexible, context-dependent behavior. Its value is determined by how the function is called.
- Global Context: In Node.js, `this` at the top level of a CommonJS module (outside any function) refers to `module.exports` (i.e., `exports`), not the global object (`global`). In browser JavaScript, it would be `window`.
- Regular Functions (`function` keyword):
  - Method Call: If a function is called as a method of an object (`obj.method()`), `this` refers to `obj`.
  - Simple Call: If called as a standalone function (`func()`), `this` defaults to the global object in non-strict mode and is `undefined` in strict mode (ES modules are strict by default).
  - Constructor Call: With `new Func()`, `this` refers to the newly created instance.
  - Explicit Binding: `call()`, `apply()`, and `bind()` explicitly set the value of `this`.
- Arrow Functions (`=>`): Arrow functions do not have their own `this` context. Instead, they lexically inherit `this` from their enclosing scope at the time they are defined, so `this` inside an arrow function is the same as `this` in the scope where the arrow function was created.
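The rules above can be demonstrated in a few lines (object and property names are illustrative):

```javascript
const service = {
  name: 'orders',
  regular() {
    return this.name; // 'this' is whatever the function is called on
  },
};

console.log(service.regular()); // 'orders'

// A detached call loses the receiver; bind() fixes 'this' explicitly.
const detached = service.regular;
const bound = detached.bind({ name: 'payments' });
console.log(bound()); // 'payments'

// Arrow functions capture 'this' lexically: inside a method defined with
// 'function', an arrow callback still sees the method's 'this'.
const lister = {
  prefix: 'item',
  tag(items) {
    return items.map((i) => `${this.prefix}-${i}`); // lexical 'this' -> lister
  },
};
console.log(lister.tag([1, 2])); // ['item-1', 'item-2']
```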
Key Points:
- `this` context is determined by how a function is called.
- Global `this` in a Node.js CommonJS module refers to `module.exports`.
- Regular functions have dynamic `this`.
- Arrow functions have lexical `this` (inherited from the parent scope).
Common Mistakes:
- Assuming `this` always points to the instance in a class method (it does only when the method is called on the instance, bound, or defined as an arrow function).
- Not understanding the global-context difference between Node.js modules and browser JS.
Follow-up:
- “How would you typically handle `this` binding in class methods within Node.js?”
- “When would an arrow function be particularly advantageous regarding `this`?”
5. Explain Closures in JavaScript and their practical use cases in Node.js backend development.
Q: What is a closure in JavaScript? Provide an example of how closures can be beneficial in Node.js backend engineering.
A: A closure is the combination of a function bundled together (enclosed) with references to its surrounding state (the lexical environment). In simpler terms, a closure gives you access to an outer function’s scope from an inner function. Even after the outer function has finished executing, the inner function “remembers” its environment and the variables that were in scope when it was created.
Practical Use Cases in Node.js Backend:
- Private Variables/State Management: Closures can encapsulate data, mimicking private variables in languages that don’t natively support them.

  ```javascript
  function createCounter() {
    let count = 0; // 'count' is private to this closure
    return {
      increment: () => { count++; return count; },
      decrement: () => { count--; return count; },
      getCount: () => count
    };
  }

  const counter = createCounter();
  console.log(counter.increment()); // 1
  console.log(counter.getCount()); // 1
  // console.log(counter.count); // undefined — 'count' is inaccessible directly
  ```

- Middleware Functions (e.g., Express.js): Closures are often used to create configurable middleware.

  ```javascript
  function authMiddleware(requiredRole) {
    return (req, res, next) => {
      if (req.user && req.user.role === requiredRole) {
        next();
      } else {
        res.status(403).send('Forbidden');
      }
    };
  }

  // Usage: app.get('/admin', authMiddleware('admin'), (req, res) => {...});
  ```

  Here, `requiredRole` is “closed over” by the inner middleware function.

- Memoization/Caching: Storing results of expensive function calls to avoid recalculation.
- Event Handlers and Callbacks: Ensuring callbacks have access to variables from their creation scope.
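The memoization use case can be sketched with a closed-over cache (function names here are illustrative):

```javascript
function memoize(fn) {
  const cache = new Map(); // private to the returned function via closure
  return (arg) => {
    if (!cache.has(arg)) {
      cache.set(arg, fn(arg));
    }
    return cache.get(arg);
  };
}

let calls = 0;
const square = (n) => { calls++; return n * n; };
const memoSquare = memoize(square);

console.log(memoSquare(4)); // 16 — computed
console.log(memoSquare(4)); // 16 — served from the closed-over cache
console.log(calls); // 1
```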
Key Points:
- Function “remembers” its lexical environment (variables from its parent scope).
- Allows for data encapsulation (private variables).
- Common in middleware, factory functions, and stateful components.
Common Mistakes:
- Creating memory leaks by accidentally holding onto large objects within closures if not managed properly.
- Not understanding that each call to the outer function creates new closures and new environments.
Follow-up:
- “Are there any downsides to using closures, particularly in terms of performance or memory?”
- “How do closures relate to the concept of higher-order functions?”
6. Compare and contrast CommonJS (CJS) and ES Modules (ESM) in Node.js. What is the modern standard (as of 2026)?
Q: Discuss the differences between CommonJS and ES Modules in Node.js. Which module system is considered the modern standard, and what are the implications for developers as of 2026?
A: Node.js initially adopted CommonJS (CJS) for module management, but with the standardization of ECMAScript Modules (ESM) in JavaScript, Node.js has progressively integrated ESM support. As of 2026 (Node.js v20/v21+), ES Modules are the modern standard and the recommended approach for new projects and for maximizing compatibility with the broader JavaScript ecosystem (browsers, bundlers).
CommonJS (CJS):
- Syntax: Uses `require()` for importing and `module.exports` or `exports` for exporting.

  ```javascript
  // CJS module
  const myUtility = require('./myUtility');
  module.exports = { greet: (name) => `Hello, ${name}!` };
  ```

- Loading: Synchronous. When `require()` is called, the file is loaded and executed, and its exports are returned. This is blocking.
- Resolution: Primarily file-based.
- Behavior: Exported values are captured at `require` time; reassigning a binding inside the module later won’t affect already-imported values.
ES Modules (ESM):
- Syntax: Uses `import` for importing and `export` for exporting.

  ```javascript
  // ESM module
  import myUtility from './myUtility.js';
  export const greet = (name) => `Hello, ${name}!`;
  ```

- Loading: Asynchronous by design (suited to the web); the module graph is resolved and parsed before execution, which enables static analysis and top-level `await`.
- Resolution: Uses URL-based resolution.
- Behavior: Exports are live bindings (references). If an exported value changes in the original module, consumers see the updated value.
- Interoperability: Requires the `.mjs` file extension or `"type": "module"` in `package.json` for Node.js to interpret files as ESM. CJS modules can be imported into ESM modules (`import pkg from 'pkg';`), but ESM modules cannot be directly `require()`d by CJS modules (use dynamic `import()` within CJS).
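The dynamic-`import()` escape hatch can be sketched in a self-contained way; the `data:` URL below is just a stand-in for a real ESM file or package so the snippet runs anywhere:

```javascript
// A CJS file cannot require() an ESM module, but it can load one with
// dynamic import(), which returns a Promise of the module namespace.
async function loadEsm() {
  const mod = await import('data:text/javascript,export const answer = 42;');
  console.log(mod.answer); // 42
  return mod.answer;
}

loadEsm();
```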
Key Points:
- ESM is the modern standard (`import`/`export`); CJS is older (`require`/`module.exports`).
- ESM is asynchronous by design, CJS is synchronous.
- ESM has live bindings, CJS exports copies.
- Requires `.mjs` or `"type": "module"` for ESM in Node.js.
- CJS can `import()` ESM dynamically, but ESM cannot be `require()`d.
Common Mistakes:
- Confusing `import` and `require` syntax.
- Not understanding the implications of mixing CJS and ESM in a project (potential for complexity).
- Not being aware of the `package.json` `"type": "module"` field or the `.mjs` extension.
Follow-up:
- “How would you set up a new Node.js project to use ES Modules by default?”
- “What are the challenges of migrating a large CJS codebase to ESM?”
7. How do you handle errors in asynchronous Node.js code using Promises and Async/Await?
Q: Describe effective strategies for error handling in asynchronous Node.js applications, specifically using Promises and `async/await`.
A: Robust error handling is crucial for asynchronous Node.js applications.
1. Promises:
- `.catch()` method: The most common way to handle rejections from Promises. A `.catch()` block handles errors that occur anywhere in the Promise chain before it.

  ```javascript
  doSomethingAsync()
    .then(result => processResult(result))
    .then(finalData => console.log(finalData))
    .catch(error => console.error('An error occurred:', error));
  ```

- Second argument to `.then()`: Less common, but `promise.then(onFulfilled, onRejected)` also works. However, `.catch()` is generally preferred because it catches errors from any previous `then` in the chain.
2. Async/Await:
- `try...catch` block: The most straightforward and recommended approach for handling errors with `async/await`.

  ```javascript
  async function fetchData() {
    try {
      const response = await fetch('https://api.example.com/data');
      if (!response.ok) {
        throw new Error(`HTTP error! status: ${response.status}`);
      }
      const data = await response.json();
      console.log(data);
    } catch (error) {
      console.error('Failed to fetch data:', error);
      // Propagate the error, or return a default value
      throw error;
    }
  }
  ```

- Implicit Promise Rejection: If an error occurs within an `async` function and is not caught by `try...catch`, the `async` function implicitly returns a rejected Promise, which can then be caught with `.catch()` at the call site.

  ```javascript
  async function mightFail() {
    // This will cause an unhandled promise rejection if not caught
    throw new Error('Something went wrong!');
  }

  mightFail().catch(error => console.error('Caught outside:', error));
  ```

- Global Unhandled Rejection Handler: For catching unhandled promise rejections that escape individual `try/catch` or `.catch()` blocks, Node.js provides `process.on('unhandledRejection', (reason, promise) => {})`. This is important for logging and graceful shutdown, but should not be relied upon for primary error handling.
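A minimal, runnable sketch of the global fallback (the error message is illustrative):

```javascript
const seen = [];

// Last-resort logging hook; primary handling should still be
// try/catch or .catch() close to the failing call.
process.on('unhandledRejection', (reason) => {
  seen.push(reason.message);
  console.error('Unhandled rejection:', reason.message);
});

// A rejected promise with no .catch() attached triggers the hook:
Promise.reject(new Error('forgot to catch me'));
```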
Key Points:
- Promises: Use `.catch()` for handling rejections.
- Async/Await: Use `try...catch` blocks.
- `process.on('unhandledRejection')` for global fallback logging/handling.
- Always propagate or handle errors appropriately; don’t just swallow them.
Common Mistakes:
- Forgetting `await` in `async` functions, leading to unhandled promises.
- Not having a `.catch()` at the end of a Promise chain, resulting in unhandled rejections.
- Using the global `unhandledRejection` handler as the only error-handling mechanism.
Follow-up:
- “What is an ‘unhandled promise rejection’ and how can you prevent it?”
- “When would you use custom error classes in Node.js?”
8. What are Node.js Streams? Explain their benefits and provide an example of their use.
Q: Describe Node.js Streams. What are the advantages of using streams, particularly for backend applications, and illustrate with a simple example.
A: Node.js Streams are abstract interfaces for working with streaming data. They are a way to handle reading/writing files, network communications, or any kind of end-to-end information exchange in a continuous, memory-efficient manner. Instead of loading an entire file into memory before processing, streams allow you to process data in chunks.
There are four primary types of streams:
- Readable Streams: For reading data (e.g., `fs.createReadStream()`, `http.IncomingMessage`).
- Writable Streams: For writing data (e.g., `fs.createWriteStream()`, `http.ServerResponse`).
- Duplex Streams: Both Readable and Writable (e.g., `net.Socket`).
- Transform Streams: Duplex streams that can modify data as it’s written and read (e.g., `zlib.createGzip()`).
Benefits:
- Memory Efficiency: Process large amounts of data chunk by chunk without loading the entire dataset into memory, preventing memory overruns.
- Time Efficiency: Data can be processed as soon as it arrives, rather than waiting for the entire resource to load.
- Composability: Streams can be “piped” together (`source.pipe(destination)`), creating a chain of operations.
- Backpressure Handling: Built-in mechanisms to manage the flow of data between readable and writable streams, preventing a fast producer from overwhelming a slow consumer.
Example (Piping a large file to an HTTP response):
```javascript
const http = require('http');
const fs = require('fs');
const path = require('path');

const server = http.createServer((req, res) => {
  if (req.url === '/large-file') {
    const filePath = path.join(__dirname, 'large_file.txt'); // Assume this file exists
    const readStream = fs.createReadStream(filePath);
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    readStream.pipe(res); // Pipes the readable stream directly to the writable response stream
    readStream.on('error', (err) => {
      console.error('File stream error:', err);
      res.end('Error serving file.');
    });
  } else {
    res.end('Hello Node.js!');
  }
});

server.listen(3000, () => {
  console.log('Server listening on http://localhost:3000');
});
```

This example serves a large file without buffering the entire file in memory first.
Key Points:
- Abstract interfaces for streaming data.
- Types: Readable, Writable, Duplex, Transform.
- Benefits: Memory efficiency, time efficiency, composability (`pipe`), backpressure.
- Ideal for large files, network I/O, and data processing pipelines.
Common Mistakes:
- Trying to use streams for small data where simple buffering might be easier/clearer.
- Not handling stream errors, which can leave connections open or lead to unexpected behavior.
- Ignoring backpressure when manually consuming streams (though `pipe()` handles it automatically).
Follow-up:
- “How does backpressure work in Node.js streams?” (Senior)
- “When would you not use streams?”
9. Implement a simple Node.js script to read a file, count word frequencies, and write the results to another file. (Coding Question - Junior/Mid)
Q: Write a Node.js script that performs the following:
- Reads the content of an input file (`input.txt`).
- Counts the frequency of each word in the file (case-insensitive).
- Writes the word frequencies (`word: count`) to an output file (`output.txt`), one word per line, sorted alphabetically by word.
A:
```javascript
const fs = require('fs/promises'); // Using the promises API for the fs module
const path = require('path');

async function countWordFrequencies(inputFile, outputFile) {
  try {
    // 1. Read the content of the input file
    const inputPath = path.join(__dirname, inputFile);
    const content = await fs.readFile(inputPath, { encoding: 'utf8' });

    // 2. Count the frequency of each word
    const wordFrequencies = {};
    const words = content.toLowerCase().match(/\b\w+\b/g) || []; // Extract words, case-insensitive
    for (const word of words) {
      wordFrequencies[word] = (wordFrequencies[word] || 0) + 1;
    }

    // 3. Prepare the output string, sorted alphabetically
    const sortedWords = Object.keys(wordFrequencies).sort();
    const outputLines = sortedWords.map(word => `${word}: ${wordFrequencies[word]}`);
    const outputContent = outputLines.join('\n');

    // 4. Write the results to the output file
    const outputPath = path.join(__dirname, outputFile);
    await fs.writeFile(outputPath, outputContent, { encoding: 'utf8' });
    console.log(`Word frequencies written to ${outputFile}`);
  } catch (error) {
    console.error('An error occurred:', error.message);
  }
}

// Example usage:
// Create a dummy input.txt for testing:
//   This is a test file.
//   Test node js features.
//   Node is great.
countWordFrequencies('input.txt', 'output.txt');
```

To run this, create an `input.txt` file in the same directory with some text.
Key Points:
- Uses `fs/promises` for modern async file operations.
- Handles file reading, string manipulation, object iteration, and file writing.
- Basic error handling with `try...catch`.
- Case-insensitive word counting.
- Alphabetical sorting of output.
Common Mistakes:
- Using synchronous `fs` methods (e.g., `readFileSync`), which block the Event Loop.
- Not handling potential file read/write errors.
- Incorrect regex for word extraction.
- Not sorting the output as requested.
Follow-up:
- “How would you modify this to handle extremely large input files efficiently, without loading the entire file into memory?” (Hint: Streams - Senior)
- “How would you handle punctuation or special characters within words?”
10. What is Buffer in Node.js and when would you use it?
Q: Explain the Node.js `Buffer` class. Why is it necessary, and in what scenarios would you use it in backend development?
A: The `Buffer` class in Node.js is a global object that represents a raw sequence of binary data. It is essentially temporary storage for binary data, similar to an array of integers, but backed by a raw memory allocation outside the V8 JavaScript heap (the `Buffer` object itself is still garbage-collected, but its contents do not live on the V8 heap).
Why it’s necessary: JavaScript traditionally handled strings, not raw binary data. However, backend operations — network protocols (TCP/UDP), file system operations, image manipulation, cryptography — often require working with raw byte streams. `Buffer` bridges this gap, allowing Node.js to interact with binary data efficiently.
Scenarios for use:
- File I/O: Reading or writing raw binary data to and from files (though `fs` often abstracts this, `Buffer` is used internally).
- Network I/O: Sending and receiving raw data over network protocols, e.g., with the `net` or `dgram` modules.
- Image/Audio/Video Processing: Handling raw image pixels, audio samples, or video frames.
- Cryptography: Performing hashing, encryption, or decryption where data needs to be in a binary format.
- Data Encoding/Decoding: Converting between different character encodings (e.g., UTF-8, base64) to binary data and vice versa.
- Streams: Buffers are the fundamental data unit used by Node.js Streams to transfer data.
Example (Converting string to Buffer and back):
```javascript
const stringData = 'Hello, Node.js!';
const buffer = Buffer.from(stringData, 'utf8'); // Creates a Buffer from a string using UTF-8 encoding

console.log(buffer); // <Buffer 48 65 6c 6c 6f 2c 20 4e 6f 64 65 2e 6a 73 21>
console.log(buffer.toString('base64')); // SGVsbG8sIE5vZGUuanMh
console.log(buffer.toString('utf8')); // Hello, Node.js!

const emptyBuffer = Buffer.alloc(10); // Creates a Buffer of 10 bytes, initialized with zeros
console.log(emptyBuffer); // <Buffer 00 00 00 00 00 00 00 00 00 00>
```
Key Points:
- Represents raw binary data, similar to byte arrays.
- Allocated outside V8 heap, not subject to GC.
- Essential for low-level I/O, network communication, cryptography, and binary data manipulation.
- Used internally by streams and many Node.js core modules.
Common Mistakes:
- Confusing `Buffer` with JavaScript’s regular arrays or strings.
- Using `Buffer.allocUnsafe()` without knowing the risks (may contain sensitive old data).
- Not specifying an encoding when converting between strings and buffers, leading to data corruption.
Follow-up:
- “What is the difference between `Buffer.alloc()` and `Buffer.from()`?”
- “How does `Buffer` relate to `TypedArray` in modern JavaScript?”
11. Implement a function to deep merge two JavaScript objects without mutating the originals. (Coding Question - Mid/Senior)
Q: Write a JavaScript function `deepMerge(target, source)` that recursively merges properties from `source` into `target`. The merge should be deep, meaning nested objects and arrays should also be merged/cloned, and neither the `target` nor `source` objects should be mutated directly. Assume `source` properties overwrite `target` properties in case of conflicts, unless both are objects/arrays, in which case they are merged.
A:
```javascript
/**
 * Deeply merges two JavaScript objects or arrays, returning a new merged object/array.
 * Neither the target nor the source objects/arrays are mutated.
 * Source properties overwrite target properties for primitives.
 * Nested objects and arrays are recursively merged.
 *
 * @param {Object|Array} target The target object or array.
 * @param {Object|Array} source The source object or array.
 * @returns {Object|Array} A new object or array with merged properties.
 */
function deepMerge(target, source) {
  // Copy target's top level to avoid mutating it; merged keys get
  // new objects via the recursion below.
  const output = Array.isArray(target) ? [...target] : { ...target };

  if (isObject(target) && isObject(source)) {
    for (const key in source) {
      if (source.hasOwnProperty(key)) {
        if (isObject(source[key]) && isObject(output[key])) {
          // Both are objects: deep merge
          output[key] = deepMerge(output[key], source[key]);
        } else if (Array.isArray(source[key]) && Array.isArray(output[key])) {
          // Both are arrays: concatenate (or implement more complex array merge logic if needed)
          output[key] = [...output[key], ...source[key]];
        } else {
          // Overwrite primitives or mismatched types
          output[key] = source[key];
        }
      }
    }
  } else if (Array.isArray(target) && Array.isArray(source)) {
    // Both are arrays at the top level: concatenate.
    // This is a simplified array merge; objects inside arrays are not merged by key.
    const mergedArray = [...target];
    source.forEach(item => {
      // For this problem, array elements are appended as-is
      mergedArray.push(item);
    });
    return mergedArray;
  } else {
    // If types differ or one is not an object/array, source overwrites target
    return source;
  }

  return output;
}

function isObject(item) {
  return (item && typeof item === 'object' && !Array.isArray(item));
}

// Example Usage:
const obj1 = { a: 1, b: { c: 2, d: [10, 20] }, e: [1, { f: 3 }], g: 'hello' };
const obj2 = { b: { c: 3, h: 4 }, e: [{ f: 4 }, 5], g: 'world', z: 99 };

const merged = deepMerge(obj1, obj2);
console.log(JSON.stringify(merged, null, 2));
/* Expected Output (simplified array merge for `e`):
{
  "a": 1,
  "b": { "c": 3, "d": [10, 20], "h": 4 },
  "e": [1, { "f": 3 }, { "f": 4 }, 5],
  "g": "world",
  "z": 99
}
*/
console.log(obj1); // Should be unchanged
console.log(obj2); // Should be unchanged
```

Key Points:
- Recursion is fundamental for deep merging.
- Handles both objects and arrays.
- Crucially, copies at each level it merges so the originals are never mutated.
- Type checking (`isObject`, `Array.isArray`) is essential to determine the merge strategy.
- Source properties overwrite target properties at primitive levels.
Common Mistakes:
- Mutating the original `target` or `source` objects.
- Shallow copying instead of deep copying for nested structures.
- Incorrectly handling arrays (e.g., treating them as objects, or not concatenating them).
- Failing to handle different types gracefully (e.g., source is an object, target is a string).
Follow-up:
- “How would you handle circular references in the objects being merged?” (Senior/Staff)
- “What if you wanted a different merge strategy for arrays, e.g., merging objects within arrays by a unique ID?”
12. Discuss Promise.all(), Promise.race(), Promise.allSettled(), and Promise.any(). Provide use cases for each.
Q: Explain the purpose and behavior of `Promise.all()`, `Promise.race()`, `Promise.allSettled()`, and `Promise.any()`. For each, provide a practical Node.js backend use case.
A: These static methods of the `Promise` object are crucial for orchestrating multiple asynchronous operations.
`Promise.all(iterable)`:
- Purpose: Waits for all promises in the iterable to be fulfilled, or for any one of them to be rejected.
- Behavior: If all promises fulfill, `Promise.all` fulfills with an array of their fulfillment values (in the same order as the input iterable). If any promise rejects, `Promise.all` immediately rejects with the reason of the first promise that rejected.
- Use Case: Fetching data from multiple independent microservices or databases simultaneously where all pieces of data are required before proceeding (e.g., retrieving user profile data, order history, and shopping cart contents for a single page render).

```javascript
// Example: Fetching user details and order history concurrently
const fetchUserDetails = getUser(userId);
const fetchOrderHistory = getOrders(userId);

Promise.all([fetchUserDetails, fetchOrderHistory])
  .then(([user, orders]) => {
    console.log('All data loaded:', user, orders);
  })
  .catch(error => {
    console.error('Failed to load all data:', error);
  });
```

`Promise.race(iterable)`:
- Purpose: Returns a promise that fulfills or rejects as soon as one of the promises in the iterable fulfills or rejects, with the value or reason from that promise.
- Behavior: The “winner” is the first promise to settle (either fulfill or reject).
- Use Case: Implementing a timeout for an asynchronous operation. If the main operation doesn’t complete within a certain time, the timeout promise wins and rejects, preventing indefinite waiting. Or, fetching data from multiple redundant sources and taking the fastest response.
```javascript
// Example: Fetching data with a timeout
const fetchDataWithTimeout = (url, timeoutMs) => {
  const fetchPromise = fetch(url);
  const timeoutPromise = new Promise((resolve, reject) =>
    setTimeout(() => reject(new Error('Request timed out')), timeoutMs)
  );
  return Promise.race([fetchPromise, timeoutPromise]);
};

fetchDataWithTimeout('https://api.example.com/data', 3000)
  .then(response => response.json())
  .then(data => console.log('Data received:', data))
  .catch(error => console.error(error.message));
```

`Promise.allSettled(iterable)` (ES2020+):
- Purpose: Returns a promise that fulfills after all of the given promises have either fulfilled or rejected, with an array of objects describing the outcome of each promise.
- Behavior: It never rejects. The returned promise always fulfills with an array containing objects of the form `{ status: 'fulfilled', value: result }` or `{ status: 'rejected', reason: error }` for each input promise.
- Use Case: Executing a batch of independent tasks where you need to know the outcome of all of them, regardless of whether they succeeded or failed (e.g., sending out multiple transactional emails, generating multiple reports, or publishing messages to various third-party services).
```javascript
// Example: Sending multiple independent notifications
const sendEmail = sendGrid.send();
const sendSMS = twilio.send();
const logActivity = activityLogger.log();

Promise.allSettled([sendEmail, sendSMS, logActivity])
  .then(results => {
    results.forEach(result => {
      if (result.status === 'fulfilled') {
        console.log('Operation succeeded:', result.value);
      } else {
        console.error('Operation failed:', result.reason);
      }
    });
  });
```

`Promise.any(iterable)` (ES2021+):
- Purpose: Returns a promise that fulfills as soon as any of the promises in the iterable fulfills, with the value of that fulfilled promise. If all of the promises in the iterable reject, then the returned promise rejects with an `AggregateError` containing an array of all rejection reasons.
- Behavior: The “winner” is the first promise to fulfill. Rejections are ignored until all promises have rejected.
- Use Case: Fetching data from multiple redundant CDN endpoints or fallback services. You need any successful response, and you only care if all sources fail (e.g., trying to fetch a static asset from CDN1, then CDN2, then local server).
```javascript
// Example: Fetching a resource from the fastest available source
const fetchFromCDN1 = fetch('https://cdn1.example.com/asset.js').then(res => res.text());
const fetchFromCDN2 = fetch('https://cdn2.example.com/asset.js').then(res => res.text());
const fetchFromLocal = fetch('https://localhost:8080/asset.js').then(res => res.text());

Promise.any([fetchFromCDN1, fetchFromCDN2, fetchFromLocal])
  .then(assetContent => {
    console.log('Asset loaded from fastest source:', assetContent.substring(0, 50) + '...');
  })
  .catch(error => {
    // error.errors will be an array of all rejection reasons
    console.error('All asset sources failed:', error.errors);
  });
```
Key Points:
- `Promise.all()`: All must succeed, or first failure rejects all.
- `Promise.race()`: First to settle (fulfill or reject) determines outcome.
- `Promise.allSettled()`: Waits for all to settle, never rejects, provides all outcomes.
- `Promise.any()`: First to fulfill determines outcome; only rejects if all reject.
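These four behaviors are easy to contrast side by side on the same pair of inputs — one fulfilled, one rejected. A quick sketch you can paste into the Node.js REPL:

```javascript
// Contrast of the four Promise combinators on one fulfilled and one rejected input.
const ok = Promise.resolve('ok');
const fail = Promise.reject(new Error('boom'));

Promise.all([ok, fail])
  .catch(err => console.log('all rejects fast:', err.message)); // 'boom'

Promise.race([ok, fail])
  .then(value => console.log('race takes first settled:', value)); // 'ok'

Promise.allSettled([ok, fail])
  .then(results =>
    console.log('allSettled statuses:', results.map(r => r.status))); // ['fulfilled', 'rejected']

Promise.any([ok, fail])
  .then(value => console.log('any takes first fulfilled:', value)); // 'ok'
```

Because every combinator attaches handlers to `fail`, no unhandled-rejection warning is emitted even though one input rejects.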
Common Mistakes:
- Using `Promise.all()` when you need all outcomes (including failures) – use `allSettled()` instead.
- Using `Promise.race()` without a clear “winner” strategy or when partial results are acceptable.
- Not being aware of `AggregateError` with `Promise.any()`.
- Using older Node.js versions that might not support `Promise.allSettled()` or `Promise.any()` without polyfills (though for 2026, modern Node.js versions will support them).
Follow-up:
- “When might `Promise.all` be a performance bottleneck?”
- “How would you implement a retry mechanism for a failed promise within a `Promise.all` context?”
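As a sketch of the retry follow-up: wrap each individual task in a retry helper before handing it to `Promise.all`, so a transient failure is retried instead of immediately failing the whole batch. The `withRetry` name, attempt count, and delay here are illustrative assumptions, not a standard API:

```javascript
// Hypothetical helper: calls a promise-returning function up to `retries`
// times, waiting `delayMs` between attempts, and rethrows the last error
// only after all attempts fail.
async function withRetry(fn, retries = 3, delayMs = 100) {
  let lastError;
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < retries) {
        await new Promise(resolve => setTimeout(resolve, delayMs));
      }
    }
  }
  throw lastError;
}

// Demo: a task that fails twice before succeeding still lets Promise.all fulfill.
let attempts = 0;
const flakyTask = () =>
  ++attempts < 3 ? Promise.reject(new Error('transient')) : Promise.resolve('ok');

Promise.all([withRetry(flakyTask, 5, 10), Promise.resolve('stable')])
  .then(([flaky, stable]) => console.log(flaky, stable)); // ok stable
```

Note that the retry wrapper must receive a *function* that creates a fresh promise per attempt — a promise itself can only settle once, so it cannot be retried.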
MCQ Section
1. Which core Node.js module is primarily responsible for its asynchronous I/O operations and event-driven architecture?
A) `http`
B) `fs`
C) `libuv`
D) `V8`
**Correct Answer: C**
* **Explanation:**
* A) `http` is for creating HTTP servers/clients, using `libuv` internally.
* B) `fs` is for file system operations, using `libuv` internally.
* C) `libuv` is the C++ library that provides Node.js with its asynchronous I/O capabilities and implements the Event Loop.
* D) `V8` is Google's JavaScript engine, responsible for executing JavaScript code, but not directly for I/O.
2. What will be the output of the following Node.js code snippet?
```javascript
console.log('Start');
process.nextTick(() => console.log('Next Tick 1'));
Promise.resolve().then(() => console.log('Promise 1'));
setTimeout(() => console.log('Timeout 1'), 0);
setImmediate(() => console.log('Immediate 1'));
console.log('End');
```
A) Start, End, Next Tick 1, Promise 1, Timeout 1, Immediate 1
B) Start, End, Promise 1, Next Tick 1, Timeout 1, Immediate 1
C) Start, End, Next Tick 1, Promise 1, Immediate 1, Timeout 1
D) Start, Next Tick 1, Promise 1, End, Timeout 1, Immediate 1
**Correct Answer: A**
* **Explanation:**
* `console.log('Start')` and `console.log('End')` run synchronously first.
* `process.nextTick` callbacks run immediately after the current operation and before the Event Loop proceeds to microtasks or next phases.
* `Promise.resolve().then()` callbacks are microtasks, which run after `nextTick` callbacks but also before the Event Loop moves to the next phase (e.g., timers, check). However, `process.nextTick` has higher precedence than microtasks in Node.js.
* `setTimeout(fn, 0)` callbacks are handled in the `timers` phase.
* `setImmediate` callbacks are handled in the `check` phase.
    * The typical order in Node.js is: Sync code -> `process.nextTick` -> Promise Microtasks -> `timers` (e.g., `setTimeout`) -> `poll` (I/O) -> `check` (`setImmediate`) -> `close callbacks`. One caveat: when `setTimeout(fn, 0)` and `setImmediate` are scheduled from the main module (rather than inside an I/O callback), their relative order is technically non-deterministic, though the timer usually fires first in practice.
3. In an ES Module (file ending in .mjs or package.json has "type": "module"), which of the following is the correct way to import a default export named myFunc from utils.js?
A) `const myFunc = require('./utils.js');`
B) `import { myFunc } from './utils.js';`
C) `import myFunc from './utils.js';`
D) `module.exports = { myFunc };`
**Correct Answer: C**
* **Explanation:**
* A) `require()` is for CommonJS, not ES Modules.
* B) `import { myFunc }` is for named exports, not default exports.
* C) `import myFunc from './utils.js';` is the correct syntax for importing a default export in ES Modules.
* D) `module.exports` is for CommonJS exporting.
4. Which of the following statements about Buffer in Node.js is true?
A) `Buffer` instances are automatically garbage collected by V8 like regular JavaScript objects.
B) `Buffer` is primarily used for string manipulation and parsing JSON data.
C) `Buffer` provides a way to handle raw binary data outside the V8 heap.
D) `Buffer` is a type of JavaScript `Array` that can store any data type.
**Correct Answer: C**
* **Explanation:**
* A) `Buffer` instances are allocated outside the V8 heap and are not subject to V8's garbage collection directly, although the JS object holding the buffer reference is.
* B) While buffers can be converted to strings and can contain JSON data, their primary purpose is raw binary data, not high-level string/JSON manipulation.
* C) This accurately describes `Buffer`'s purpose and memory allocation.
* D) `Buffer` is *not* a regular JavaScript `Array`; it's a `Uint8Array` instance, specifically designed for bytes.
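A quick sketch of the correct answer in action — a `Buffer` is a `Uint8Array` subclass over raw bytes, so its length is a byte count, not a character count:

```javascript
// 'é' encodes to two bytes in UTF-8, so the buffer's byte length
// exceeds the string's character count.
const buf = Buffer.from('héllo', 'utf8');

console.log(buf.length);                // 6 (bytes), while 'héllo'.length is 5
console.log(buf instanceof Uint8Array); // true — Buffer subclasses Uint8Array
console.log(buf.toString('utf8'));      // 'héllo' — decoding requires an encoding
```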
5. When using Promise.all([promise1, promise2, promise3]), what happens if promise2 rejects, but promise1 and promise3 are still pending?
A) `Promise.all` will wait for `promise1` and `promise3` to settle, then fulfill with an `AggregateError`.
B) `Promise.all` will immediately reject with the reason from `promise2`.
C) `Promise.all` will fulfill with the values of `promise1` and `promise3`, ignoring `promise2`.
D) `Promise.all` will resolve with an array of outcomes similar to `Promise.allSettled`.
**Correct Answer: B**
* **Explanation:** `Promise.all` is "fail-fast." If any of the promises in the iterable rejects, `Promise.all` immediately rejects with the reason of the first promise that rejected, without waiting for the others to settle.
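The fail-fast behavior is easy to observe with timing: pair a slow promise with one that rejects immediately, and the `catch` fires long before the slow promise settles.

```javascript
// Fail-fast demo: Promise.all rejects as soon as any input rejects,
// without waiting for still-pending inputs.
const slow = new Promise(resolve => setTimeout(() => resolve('slow'), 200));
const failing = Promise.reject(new Error('boom'));

const start = Date.now();
Promise.all([slow, failing]).catch(err => {
  console.log(err.message);              // 'boom'
  console.log(Date.now() - start < 100); // true — rejected almost immediately
});
```

Note that `slow` keeps running to completion in the background; `Promise.all` rejects early but does not cancel the remaining work.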
Mock Interview Scenario: Debugging Asynchronous Flow
Scenario Setup:
You’re a backend engineer tasked with fixing a bug in an existing Node.js API endpoint (/users/:id). Users are reporting that sometimes, even for valid user IDs, the API returns a “User not found” error, or takes an unexpectedly long time to respond. The current code aims to fetch user data from a database and then retrieve a list of recent activities for that user from a separate microservice.
Code Snippet (simplified):
```javascript
// userController.js
const db = require('../db'); // Simulated DB module, returns Promises
const activityService = require('../activityService'); // Simulated Microservice, returns Promises

async function getUserWithActivities(req, res) {
  const userId = req.params.id;
  try {
    const user = await db.getUserById(userId);
    if (!user) {
      return res.status(404).send('User not found');
    }
    const activities = activityService.getRecentActivities(userId); // THIS LINE IS SUSPECT
    // Assume there's more logic here, potentially using 'activities'
    // For simplicity, let's just send back user data and activities
    res.json({ user, activities });
  } catch (error) {
    console.error('Error in getUserWithActivities:', error.message);
    res.status(500).send('Internal server error');
  }
}

module.exports = { getUserWithActivities };
```
Interviewer: “Hello, thanks for coming in. We have a critical bug in our user profile API. It sometimes says ‘User not found’ even for existing users, or it’s very slow. I’ve provided a simplified userController.js snippet. Can you identify potential issues with this code, especially concerning its asynchronous nature and error handling, and propose fixes?”
Expected Flow of Conversation & Candidate Responses:
Initial Impression (Candidate): “Looking at the `getUserWithActivities` function, the most immediate potential issue I see is on the line `const activities = activityService.getRecentActivities(userId);`. Since `activityService.getRecentActivities` is expected to return a Promise (as per typical microservice interactions and the context of Node.js async code), this line is likely missing an `await` keyword.”

Interviewer: “Interesting. What exactly happens if we omit `await` there?”

Candidate: “Without `await`, `activities` will not hold the resolved data from the promise; instead, it will hold the Promise object itself. So, `res.json({ user, activities })` will send the user data along with a pending Promise as the `activities` value, which is not the intended behavior. The client would receive something like `{ user: {...}, activities: {} }`, because `JSON.stringify` serializes a Promise (which has no enumerable own properties) as an empty object. This also helps explain the perceived ‘slowness’: a client expecting `activities` to be populated never receives it in this response and must wait for, or issue, another request to get that data.”

Interviewer: “You mentioned ‘User not found’ as another symptom. How could a missing `await` lead to that, or is there another potential issue?”

Candidate: “A missing `await` for `activityService.getRecentActivities` wouldn’t directly cause ‘User not found’ for a valid user ID, as `db.getUserById` is correctly awaited. However, if `activityService.getRecentActivities` itself could potentially reject (e.g., microservice is down, network error), and that rejection isn’t handled, it would cause an unhandled promise rejection outside of the `try...catch` block here, as `activities` is just a Promise object. If that unhandled rejection propagates up to a global handler and causes a server restart or similar, it could indirectly affect other requests. But the ‘User not found’ error specifically comes from `if (!user) { return res.status(404).send('User not found'); }`. A missing `await` in the `activities` line won’t affect the `user` variable.”

“A more likely cause for ‘User not found’ could be:
- Database connection issues: `db.getUserById(userId)` might be throwing an error if the database is unreachable, causing the `catch` block to trigger.
- Incorrect `userId` parsing: If `req.params.id` is not correctly parsed or validated, it might lead to `db.getUserById` returning `null` or `undefined` for an existing user.
- Race condition/data inconsistency: A rare case where a user is deleted between the time `db.getUserById` is called and some other implicit check.
- Network issues: If `db.getUserById` throws a network-related error.

In the context of the bug description, the ‘slow’ part is definitely related to the missing `await` for `activities`, while ‘User not found’ for valid IDs points to an issue with `db.getUserById` or the `userId` input itself.”
Interviewer: “Good analysis on the ‘User not found’ part. Let’s focus on the `activities` issue. How would you fix the missing `await`?”

Candidate: “The fix is straightforward:

```javascript
// ...
const activities = await activityService.getRecentActivities(userId); // ADDED AWAIT
// ...
```

With `await`, the execution of the `async` function will pause until the `getRecentActivities` Promise resolves, and `activities` will then correctly hold the resolved data. This ensures the client gets the full, expected response.”

Interviewer: “Now consider error handling. What if `activityService.getRecentActivities` rejects after your `await` fix? How would the code behave, and is that sufficient?”

Candidate: “If `activityService.getRecentActivities` rejects, the `await` keyword will effectively turn that Promise rejection into a thrown error. This error would then be caught by the `try...catch` block surrounding the `await` call. The `console.error` would log the issue, and the API would respond with a `500 Internal server error`. This is generally acceptable as a default fallback. However, for a more refined approach, we might want different error responses depending on which service failed. For example, if user data is paramount but activities are secondary, we could use `Promise.allSettled` or a separate `try...catch` for `activities`.”

Interviewer: “Can you show me how you might fetch activities concurrently with the user data, and handle potential failures gracefully, perhaps still returning user data even if activities fail?”
Candidate: “Certainly. To fetch user data and activities concurrently, we can use `Promise.allSettled` to ensure we get results for both, even if one fails. This also allows us to send a partial response if one fails.

```javascript
async function getUserWithActivities(req, res) {
  const userId = req.params.id;
  try {
    const userPromise = db.getUserById(userId);
    const activitiesPromise = activityService.getRecentActivities(userId);

    const [userResult, activitiesResult] = await Promise.allSettled([userPromise, activitiesPromise]);

    if (userResult.status === 'rejected') {
      // If the user promise rejected (e.g., DB error), log and send 500
      console.error('Error fetching user:', userResult.reason);
      return res.status(500).send('Internal server error');
    }

    const user = userResult.value;
    if (!user) {
      // User not found in DB
      return res.status(404).send('User not found');
    }

    let activities = [];
    if (activitiesResult.status === 'fulfilled') {
      activities = activitiesResult.value;
    } else {
      // Activities service failed, log and perhaps send an empty array or specific message
      console.warn('Could not fetch activities for user', userId, 'Reason:', activitiesResult.reason);
      // We proceed without activities, or with an empty array.
    }

    res.json({ user, activities });
  } catch (error) {
    // This catch would now primarily handle errors outside of the promises,
    // or errors in the sync logic within the try block.
    console.error('Unexpected error in getUserWithActivities:', error.message);
    res.status(500).send('Internal server error');
  }
}
```

This version uses `Promise.allSettled` to await both operations concurrently. It then explicitly checks the status of each promise result. If fetching the user fails, it’s a critical error. If fetching activities fails, it’s logged, but the API can still return the user data with an empty (or partial) activities list, providing a more resilient user experience.”
Red Flags to Avoid:
- Not identifying the missing `await`: This is the primary bug.
- Suggesting synchronous solutions: Blocks the Event Loop.
- Poor error handling: Not using `try...catch` or neglecting Promise rejections.
- Assuming all errors are 500: Not distinguishing between client-side (404, 400) and server-side (500) errors.
- Over-engineering for a simple fix: Starting with `Promise.allSettled` before addressing the basic `await` issue.
Practical Tips
- Master the Event Loop: Truly understanding how the Node.js Event Loop works is the single most important concept. Diagram it, trace code, and experiment with `process.nextTick`, `setImmediate`, `setTimeout`, and Promises.
- Embrace Asynchronous JavaScript: Be comfortable with Callbacks, Promises (`.then().catch().finally()`), and especially `async/await`. Know when to use each and how to handle errors effectively within them.
- Differentiate CJS and ESM: For modern Node.js development (2026), ESM is preferred. Understand its syntax, how to configure your project for it, and the interoperability challenges with older CJS modules.
- Hands-on Practice: The best way to learn is by doing.
  - Build small Node.js scripts that mimic common backend tasks (file I/O, simple HTTP server, interacting with dummy APIs).
  - Solve coding challenges focusing on async patterns, closures, and object manipulation.
  - Implement basic data structures and algorithms in JavaScript to reinforce core language skills.
- Read Node.js Documentation: The official Node.js documentation (nodejs.org) is an authoritative and up-to-date resource. Pay special attention to the `fs`, `http`, `events`, `stream`, and `util` modules.
- Understand `this` and Closures: These are fundamental JavaScript concepts that frequently appear in interview questions and are critical for writing correct and maintainable Node.js code, especially when dealing with classes, modules, and event handlers.
- Version Awareness: Always be aware of the current Node.js LTS (Long Term Support) versions and the latest current release. Interviewers expect you to be current with modern features and best practices.
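To make the `this`-and-closures tip concrete, here is a minimal sketch of the classic pitfall: a method loses its receiver when detached, while an arrow function closes over the lexical `this` of the enclosing method call.

```javascript
'use strict';

const counter = {
  count: 0,
  increment() {
    this.count++;
  },
  makeIncrementer() {
    // Arrow function: no own `this`, so it closes over `counter`
    // (the receiver of the makeIncrementer() call).
    return () => { this.count++; };
  },
};

counter.increment();
console.log(counter.count); // 1

const detached = counter.increment;
try {
  detached(); // strict mode: `this` is undefined here, so this throws
} catch (e) {
  console.log(e instanceof TypeError); // true
}

const inc = counter.makeIncrementer();
inc(); // still increments counter, even though called standalone
console.log(counter.count); // 2
```

This is exactly why event-handler registration like `emitter.on('event', obj.method)` often needs `obj.method.bind(obj)` or an arrow wrapper.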
Summary
This chapter laid the groundwork for your Node.js backend engineering interview preparation by focusing on the core fundamentals. We explored Node.js’s unique event-driven, non-blocking architecture, delving into the intricacies of the Event Loop, `process.nextTick()`, and `setImmediate()`. We also covered essential JavaScript concepts such as the `this` context, closures, and modern module systems (ESM vs. CJS), along with robust asynchronous error handling using Promises and `async/await`. Practical coding questions reinforced these theoretical concepts, and a mock debugging scenario challenged your ability to apply this knowledge to real-world problems.
A strong command of these fundamentals is critical for all Node.js roles. As you progress, remember that higher-level roles will expect not just knowledge, but also the ability to reason about the implications of these concepts on performance, scalability, and maintainability. Continue practicing, experimenting, and building on this foundational knowledge for the subsequent chapters.
References
- Node.js Official Documentation: https://nodejs.org/docs/latest/api/ (Always refer to the documentation for the latest stable/LTS version for up-to-date information.)
- MDN Web Docs - JavaScript: https://developer.mozilla.org/en-US/docs/Web/JavaScript (Authoritative source for core JavaScript concepts like Promises, `async/await`, `this`, and Closures.)
- InterviewBit - Node.js Interview Questions: https://www.interviewbit.com/node-js-interview-questions/ (Provides a broad range of questions, useful for additional practice.)
- GeeksforGeeks - Node.js Exercises: https://www.geeksforgeeks.org/node-js/node-exercises (Offers interactive quizzes and coding challenges for hands-on practice.)
- Medium - I Failed 17 Senior Backend Interviews. Here’s What They Actually Test: https://medium.com/lets-code-future/i-failed-17-senior-backend-interviews-heres-what-they-actually-test-with-real-questions-639832763034 (Offers insights into real-world interview expectations for senior roles, including debugging and architectural thinking, as of Feb 2026.)
This interview preparation guide is AI-assisted and reviewed. It references official documentation and recognized interview preparation resources.