Introduction

Welcome to Chapter 3 of our comprehensive Node.js interview preparation guide! This chapter delves into the foundational elements of Node.js: its Core APIs and the intricate Module System. A deep understanding of these topics is paramount for any Node.js developer, regardless of experience level, as they form the bedrock of building efficient, scalable, and maintainable backend applications.

This section is designed to progressively build your knowledge, covering everything from basic module syntax and core utility modules (like fs, http, path, events, process) to advanced concepts like Streams, Buffers, and the nuances of CommonJS versus ES Modules. Whether you are an intern looking to grasp the basics, a junior developer aiming to solidify your understanding, or a senior/staff engineer needing to articulate advanced design patterns and troubleshooting strategies, these questions and scenarios will equip you with the insights and confidence required for your next interview. We’ll also cover npm and other package managers, which are integral to modern Node.js development as of early 2026.

Core Interview Questions

Intern/Junior Level

Q1: What is Node.js, and what are its key advantages for backend development?

A: Node.js is an open-source, cross-platform JavaScript runtime environment that executes JavaScript code outside a web browser. It uses Google’s V8 JavaScript engine (latest versions, e.g., Node.js v22/v24, incorporate updated V8 versions), a non-blocking, event-driven architecture, and a single-threaded event loop for handling concurrent requests efficiently.

Its key advantages for backend development include:

  • Performance: The non-blocking I/O model makes it highly efficient for I/O-bound tasks, like building APIs, microservices, and real-time applications.
  • Scalability: The event-driven architecture allows it to handle many concurrent connections with minimal overhead, making it suitable for scalable systems.
  • Unified Language: Developers can use JavaScript for both frontend and backend development, leading to code reuse and easier context switching.
  • Rich Ecosystem: npm (Node Package Manager) provides access to millions of open-source libraries, accelerating development.
  • Low Latency: Excellent for applications requiring fast response times, such as chat applications or streaming services.

Key Points:

  • JavaScript runtime built on Chrome’s V8 engine.
  • Non-blocking, event-driven I/O.
  • Single-threaded event loop (for user code).
  • Great for I/O-bound tasks and real-time applications.

Common Mistakes:

  • Stating Node.js is a programming language or a framework.
  • Incorrectly claiming it’s multi-threaded for user code execution.

Follow-up:

  • Can Node.js execute CPU-bound tasks efficiently? If not, how would you handle them?
  • Explain the “event-driven” architecture in more detail.

Q2: Explain the purpose of package.json and its essential fields.

A: package.json is a manifest file for Node.js projects. It’s crucial for managing project metadata, dependencies, and scripts. It allows Node.js to understand how a project is configured and helps package managers like npm, yarn, or pnpm manage dependencies and execute tasks.

Essential fields include:

  • name: The name of the package. Must be lowercase and URL-safe, with no spaces (hyphens are the convention).
  • version: The current version of the package (follows Semantic Versioning: MAJOR.MINOR.PATCH).
  • description: A brief description of the package.
  • main: The entry point of the application (e.g., index.js).
  • scripts: A dictionary of runnable scripts (e.g., start, test, build).
  • dependencies: Runtime dependencies required by the project.
  • devDependencies: Development-only dependencies (e.g., testing frameworks, build tools).
  • engines: Specifies the Node.js versions your package runs on (e.g., "node": ">=20.0.0").
  • license: The license under which your package is released.
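
A minimal package.json pulling these fields together (all names and versions are placeholders):

```json
{
  "name": "invoice-service",
  "version": "1.2.0",
  "description": "Generates PDF invoices",
  "main": "index.js",
  "scripts": {
    "start": "node index.js",
    "test": "node --test"
  },
  "dependencies": { "express": "^4.19.0" },
  "devDependencies": { "eslint": "^9.0.0" },
  "engines": { "node": ">=20.0.0" },
  "license": "MIT"
}
```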

Key Points:

  • Project metadata and configuration.
  • Dependency management (dependencies, devDependencies).
  • Script execution (scripts).
  • Adheres to Semantic Versioning.

Common Mistakes:

  • Forgetting name and version are mandatory for publishing to npm.
  • Confusing dependencies with devDependencies.

Follow-up:

  • What is Semantic Versioning (SemVer) and why is it important for package.json?
  • How do npm install and npm ci differ, and when would you use each?

Q3: How do you import and export modules using CommonJS in Node.js? Illustrate with an example.

A: CommonJS is the original and still-default module system for .js files in Node.js (unless "type": "module" is set in package.json); ES Modules are now fully supported alongside it. CommonJS uses require() to import modules and module.exports or exports to export them.

Example:

mathOperations.js (module to export):

// mathOperations.js
const add = (a, b) => a + b;
const subtract = (a, b) => a - b;

// Option 1: Using module.exports (preferred for exporting a single thing or multiple named exports)
module.exports = {
  add,
  subtract
};

// Option 2: Using exports (a reference to module.exports, can only add properties)
// exports.add = (a, b) => a + b;
// exports.subtract = (a, b) => a - b;

app.js (module to import):

// app.js
const math = require('./mathOperations'); // Relative path for local modules

console.log('Addition:', math.add(5, 3));      // Output: Addition: 8
console.log('Subtraction:', math.subtract(10, 4)); // Output: Subtraction: 6

// You can also destructure:
// const { add, subtract } = require('./mathOperations');
// console.log('Addition:', add(5, 3));

Key Points:

  • require() for importing.
  • module.exports for setting the value returned by require().
  • exports is a shorthand reference to module.exports, but direct assignment to exports will break the reference.
  • Modules are cached after the first require().

Common Mistakes:

  • Directly assigning exports = ... instead of module.exports = ... when intending to export a single object or function. exports is a reference, not the actual export object.

Follow-up:

  • What happens if you require() the same module multiple times?
  • Explain the difference between module.exports and exports more deeply.

Mid-Level Professional

Q4: Describe the Node.js module resolution algorithm for CommonJS. How does it find modules?

A: When require() is called, Node.js follows a specific algorithm to resolve the module path:

  1. Core Modules: Checks if the module name (e.g., fs, http, path) corresponds to a built-in Node.js core module. If so, it loads it directly.

  2. Relative/Absolute Paths (starts with ./, ../, /):

    • Treats the path as a file path.
    • Tries to load the file exactly as specified.
    • If not found, it appends extensions (.js, .json, .node) in order.
    • If still not found, it treats the path as a directory.
    • Looks for a package.json file in that directory. If found, it reads the main field to find the entry file.
    • If no package.json or main field, it looks for index.js, index.json, index.node within the directory.
  3. Bare Module Identifiers (e.g., lodash, express):

    • Treats the module name as a package name.
    • Starts searching for a node_modules directory in the current directory.
    • If not found, it moves up to the parent directory, checking for node_modules there, and continues this process recursively until the root directory or a node_modules is found.
    • Inside the node_modules directory, it looks for a directory matching the module name.
    • Inside that module’s directory, it follows the same logic as step 2 (check package.json’s main field, then index.js/index.json/index.node).

Key Points:

  • Prioritizes core modules.
  • Handles relative/absolute paths for local files/directories.
  • Searches node_modules for bare module identifiers, traversing up the directory tree.
  • Looks for package.json’s main field or index.js within directories.

Common Mistakes:

  • Incorrectly stating exports or module.exports plays a role in resolution (they determine what is exported, not where).
  • Missing the crucial node_modules lookup strategy.

Follow-up:

  • How does the NODE_PATH environment variable affect module resolution? (Less common now, but good to know).
  • What about symlinked modules? How are they resolved?

Q5: Discuss ES Modules (import/export) in Node.js. What’s their current status (as of March 2026), and how do they differ from CommonJS?

A: As of March 2026, ES Modules (ESM) are fully stable and the recommended modern approach for module management in Node.js. Node.js v20+ (LTS) and current versions like v22/v24 fully support ESM syntax and features.

Differences from CommonJS:

| Feature | CommonJS (CJS) | ES Modules (ESM) |
| --- | --- | --- |
| Syntax | require(), module.exports, exports | import, export |
| Loading | Synchronous, dynamic (can be called conditionally) | Asynchronous, static (parsed at compile time) |
| Binding | Value copy (exports are copies) | Live binding (exports are references to original values) |
| this (top level) | Refers to module.exports | undefined |
| Module-scope globals | __dirname, __filename, require, exports, module | Not available by default; must be imported or constructed |
| Interoperability | import() can load ESM; recent Node.js (v20.19+ / v22.12+) can also require() fully synchronous ESM graphs | import and import() can load CJS |
| Strict Mode | Optional | Always enforced (implicitly in module scope) |
| Tree Shaking | Difficult due to dynamic nature | Possible due to static analysis |

How to use ESM in Node.js:

  1. "type": "module" in package.json: This makes all .js files in the package (and subdirectories, unless overridden) treated as ESM.
  2. .mjs file extension: Any file ending in .mjs is treated as an ESM, regardless of "type" in package.json.
  3. .cjs file extension: Any file ending in .cjs is treated as a CommonJS module, regardless of "type".
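
For instance, a package.json that opts an entire package into ESM might look like this (names are placeholders):

```json
{
  "name": "esm-app",
  "version": "1.0.0",
  "type": "module",
  "main": "src/index.js"
}
```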

Example (ESM):

mathOperations.mjs:

// mathOperations.mjs
export const add = (a, b) => a + b;
export const subtract = (a, b) => a - b;

// Default export (optional)
export default {
  multiply: (a, b) => a * b
};

app.mjs:

// app.mjs
import { add, subtract } from './mathOperations.mjs'; // Must include file extension
import multiplier from './mathOperations.mjs';

console.log('Addition:', add(5, 3)); // Output: Addition: 8
console.log('Subtraction:', subtract(10, 4)); // Output: Subtraction: 6
console.log('Multiplication:', multiplier.multiply(2, 3)); // Output: Multiplication: 6
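
Dynamic import() is the interop escape hatch: it works even inside CommonJS files and resolves to the module's namespace object. A minimal sketch, with a core module standing in for an ESM dependency:

```javascript
// interop.js — a CommonJS file reaching an ESM-capable module via import().
(async () => {
  const os = await import('node:os'); // resolves to the namespace object
  console.log('platform:', os.platform());
})();
```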

Key Points:

  • ESM is the standard for modern JS and fully supported in Node.js (v20+).
  • Static loading allows for optimizations like tree-shaking.
  • Requires type: "module" in package.json or .mjs extension.
  • Interoperability between CJS and ESM can be tricky; import() is key.

Common Mistakes:

  • Forgetting file extensions in import statements (required for relative/absolute paths in ESM).
  • Trying to use require in an ESM module without specific interoperability patterns.
  • Not understanding the impact of type: "module".

Follow-up:

  • How would you handle a mixed CJS/ESM codebase, specifically if an ESM module needs to require a CJS module, or vice-versa?
  • What are the benefits of static module analysis enabled by ESM?

Q6: Explain Node.js Streams and their different types. Provide a use case where Streams are beneficial.

A: Node.js Streams are abstract interfaces for working with streaming data. They are instances of EventEmitter and offer an efficient way to handle continuous or large data by processing it in chunks, rather than loading the entire data into memory at once. This is crucial for performance and memory management, especially when dealing with large files, network requests, or real-time data processing.

Types of Streams:

  1. Readable Streams: Abstract source from which data can be read (e.g., fs.createReadStream(), an HTTP response on the client, an HTTP request on the server).
    • Methods: read(), pipe(), unpipe().
    • Events: data, end, error, close, readable.
  2. Writable Streams: Abstract destination to which data can be written (e.g., fs.createWriteStream(), an HTTP request on the client, an HTTP response on the server).
    • Methods: write(), end().
    • Events: drain, finish, error, close.
  3. Duplex Streams: Both Readable and Writable (e.g., net.Socket).
  4. Transform Streams: Duplex streams that can modify or transform data as it is written and then read (e.g., zlib.createGzip() for compression).

Use Case: Processing a Large Log File

Imagine you have a several-gigabyte log file (access.log) and you need to filter lines containing “ERROR” and write them to a new file (errors.log), without loading the entire access.log into memory.

// 'node:stream/promises' (available since Node.js v15) provides a promise-based pipeline()
import { createReadStream, createWriteStream } from 'node:fs';
import { pipeline } from 'node:stream/promises'; // Or require('node:stream/promises') for CJS
import { Transform } from 'node:stream';

const filterErrors = new Transform({
  construct(callback) {
    this.remainder = ''; // carries a partial line across chunk boundaries
    callback();
  },
  transform(chunk, encoding, callback) {
    // Chunk boundaries rarely align with line breaks, so prepend the leftover
    // from the previous chunk and hold the (possibly partial) last line back.
    const lines = (this.remainder + chunk.toString()).split('\n');
    this.remainder = lines.pop();
    const errors = lines.filter(line => line.includes('ERROR'));
    if (errors.length > 0) this.push(errors.join('\n') + '\n');
    callback();
  },
  flush(callback) {
    if (this.remainder.includes('ERROR')) this.push(this.remainder + '\n');
    callback();
  }
});

async function processLogs() {
  const readableStream = createReadStream('access.log', { encoding: 'utf8' });
  const writableStream = createWriteStream('errors.log', { encoding: 'utf8' });

  try {
    await pipeline(
      readableStream,
      filterErrors,
      writableStream
    );
    console.log('Error logs processed successfully!');
  } catch (error) {
    console.error('An error occurred during log processing:', error);
  }
}

processLogs();

This example efficiently reads the large file in chunks, processes each chunk with a Transform stream to filter error lines, and writes the filtered output to a new file, all without consuming excessive memory.

Key Points:

  • Handle large data in chunks, not all at once.
  • Prevent memory exhaustion.
  • Readable, Writable, Duplex, Transform are main types.
  • pipe() method simplifies data flow.
  • stream/promises or pipeline for robust error handling.

Common Mistakes:

  • Forgetting to handle backpressure on writable streams (e.g., using write() repeatedly without checking its return value or waiting for the drain event).
  • Not cleaning up resources (e.g., close event handling) or managing errors in a stream pipeline.

Follow-up:

  • What is backpressure in the context of Streams, and how do you handle it?
  • When would you use stream.pipeline() over stream.pipe()?

Senior/Staff/Lead Engineer

Q7: Deep dive into the Buffer class. When would you use it, and how does it relate to Streams and typed arrays?

A: The Buffer class in Node.js is a global object (doesn’t need require()) that represents a fixed-size raw binary data sequence. It’s essentially a temporary storage area for binary data, similar to an array of integers but optimized for direct manipulation of raw memory outside the V8 JavaScript heap. This makes it highly efficient for operations involving binary protocols, cryptographic functions, and file/network I/O.

When to use Buffer:

  • File I/O and Network I/O: Reading/writing binary files, handling network packets. fs and net modules heavily use Buffers.
  • Binary Data Manipulation: Encoding/decoding data (e.g., Base64, Hex), cryptographic operations (hashing, encryption).
  • Interacting with C/C++ Addons: Native Node.js modules can directly exchange data with Buffers.
  • Image/Video Processing: Handling raw pixel data or video frames.
  • Streaming Data: Streams often operate on Buffers internally. For example, a Readable stream might emit Buffer chunks.

Relation to Streams: Streams are high-level abstractions for reading/writing data. Internally, Node.js streams often deal with Buffer objects. When you read a file via fs.createReadStream(), the stream emits Buffer chunks (unless an encoding is specified, in which case it converts them to strings). When you write to a Writable stream, you can provide either strings or Buffers. Streams use Buffers to manage their internal high-water marks and backpressure mechanisms.

Relation to Typed Arrays (e.g., Uint8Array): The Buffer class is an instance of Uint8Array and extends it. This means Buffer instances behave very similarly to Uint8Arrays, allowing you to use many of the same methods (e.g., slice, indexOf, iteration).

  • Buffer is a Uint8Array: Buffer.from(data) returns an object that is an instance of Uint8Array.
  • Buffer extends Uint8Array: It adds Node.js-specific methods like toString(encoding), toJSON(), writeUInt8(), readIntBE(), etc., which are very useful for common backend operations with binary data.
  • Memory Management: Both Buffer and Uint8Array directly access raw memory. However, Buffer objects are allocated outside the V8 heap in C++ memory, whereas Uint8Arrays in browsers (and sometimes in Node.js when not explicitly using Buffer) might be managed within the JavaScript engine’s heap, albeit still representing raw data. Node.js’s Buffer allocation strategy is highly optimized for performance and lower-level control.
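
A quick demonstration of the relationship:

```javascript
// A Buffer is a Uint8Array with extra Node.js methods layered on top.
const buf = Buffer.from('abc');
console.log(buf instanceof Uint8Array); // true
console.log(buf.toString('hex'));       // '616263' (Buffer-specific method)
console.log(buf.indexOf(0x62));         // 1 (inherited Uint8Array behavior)
```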

Example: Converting image data to Base64

import { readFileSync } from 'node:fs'; // Using ESM syntax for 2026

try {
  const imageBuffer = readFileSync('my_image.png'); // Reads as a Buffer by default
  const base64Image = imageBuffer.toString('base64');
  console.log('Base64 Image Data:', base64Image.substring(0, 50) + '...'); // Show first 50 chars
  
  // Convert back to buffer
  const decodedBuffer = Buffer.from(base64Image, 'base64');
  console.log('Decoded buffer equals original:', decodedBuffer.equals(imageBuffer)); // Output: true
} catch (err) {
  console.error('Error handling image:', err);
}

Key Points:

  • Represents raw binary data, fixed size.
  • Optimized for I/O, crypto, binary protocols.
  • Is an instance of Uint8Array with Node.js-specific enhancements.
  • Memory is allocated outside the V8 heap for efficiency.
  • Crucial for low-level data handling and performance.

Common Mistakes:

  • Treating Buffers as plain JavaScript strings or arrays without considering encoding.
  • Mismanaging Buffer slices (they share underlying memory, modifications in one slice affect others).
  • Ignoring memory implications when creating many large Buffers.
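
A small demonstration of the shared-memory pitfall:

```javascript
// subarray() (and the legacy slice()) return views over the same memory,
// so writing through the view mutates the parent Buffer too.
const parent = Buffer.from('hello');
const view = parent.subarray(0, 1);
view[0] = 0x48; // ASCII 'H'
console.log(parent.toString()); // 'Hello'

// Buffer.from(existingBuffer) makes an independent copy when isolation is needed.
const copy = Buffer.from(parent);
```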

Follow-up:

  • How do you efficiently concatenate multiple Buffers?
  • Explain Buffer.from(), Buffer.alloc(), and Buffer.allocUnsafe(), and their implications.
  • What is a common security vulnerability related to Buffer handling and how can it be mitigated?

Q8: Compare and contrast child_process.spawn(), exec(), and fork(). When would you choose one over the others for a Node.js backend system?

A: The child_process module allows Node.js to spawn new processes that can run any command, providing robust ways to interact with the operating system. The three primary functions are spawn(), exec(), and fork(), each with distinct use cases.

  1. child_process.spawn(command, [args], [options])

    • Description: Spawns a new process directly without a shell. It’s asynchronous and returns a ChildProcess object. The stdout and stderr are available as streams, allowing you to work with large amounts of data without buffering it all in memory.
    • Use Cases:
      • Streaming I/O: When the child process might produce or consume a lot of data (e.g., git clone, ffmpeg for video processing, log tailing).
      • Long-running processes.
      • High performance: Avoids the overhead of spawning a shell.
      • Security: More secure as it doesn’t interpret commands through a shell (prevents shell injection vulnerabilities unless shell: true option is explicitly used).
    • Return: A ChildProcess instance with stdin, stdout, stderr streams.
  2. child_process.exec(command, [options], [callback])

    • Description: Spawns a new shell and executes the command within that shell. It’s asynchronous and buffers the child process’s stdout and stderr until the process terminates.
    • Use Cases:
      • Simple commands: When you need to execute a short command and capture its entire output (e.g., ls -lh, grep 'error' file.log).
      • Shell features: When you need shell features like piping, wildcards, or environment variables (e.g., command1 | command2).
    • Return: Buffers stdout and stderr and passes them to a callback.
    • Caveats: Buffering limits the amount of data it can handle (maxBuffer option, defaults to 1MB). Shell execution can introduce security risks if command inputs are not properly sanitized.
  3. child_process.fork(modulePath, [args], [options])

    • Description: A special case of spawn() specifically for spawning new Node.js processes. It establishes an IPC (Inter-Process Communication) channel between the parent and child, allowing messages to be passed back and forth.
    • Use Cases:
      • Creating worker processes: To run CPU-bound tasks in a separate Node.js process without blocking the main event loop.
      • Load balancing: Used in Node.js clustering (cluster module is built on fork()).
      • Long-running tasks that need to communicate with the parent.
    • Return: A ChildProcess instance, enhanced with a send() method and a message event for IPC.

Choosing the right one:

  • spawn(): For streaming data, long-running processes, or when you need explicit control over arguments and security (no shell parsing). This is generally the most performant and secure for complex tasks.
  • exec(): For simple, short-lived commands where buffering the entire output is acceptable and shell features are useful. Be cautious about input sanitization.
  • fork(): Exclusively for spawning other Node.js processes, especially when IPC is required for offloading CPU-bound work or building multi-process Node.js applications.

Key Points:

  • spawn: Direct process, streams I/O, no shell, performant, secure by default.
  • exec: Shell execution, buffers I/O, simple commands, potential security risks.
  • fork: Spawns Node.js processes, IPC channel, ideal for CPU-bound tasks and clustering.

Common Mistakes:

  • Using exec for large outputs, leading to maxBuffer errors.
  • Not sanitizing user input before passing it to exec or spawn with shell: true.
  • Using fork when a simple spawn (e.g., for a non-Node.js script) would suffice.

Follow-up:

  • How would you ensure robust error handling and exit code management when working with child processes?
  • Discuss the security implications of using exec with user-provided input. How can shell injection be prevented?
  • When would you prefer using Node.js worker_threads over child_process.fork() for CPU-bound tasks?

Q9: Design a module loading and caching strategy for a microservices architecture using Node.js, considering both CommonJS and ES Modules, dynamic loading, and managing shared dependencies efficiently across services within a monorepo.

A: In a microservices architecture within a monorepo, effective module loading and caching are critical for performance, maintainability, and resource utilization. We need to handle internal shared modules, third-party dependencies, and potentially dynamic loading.

Strategy Components:

  1. Monorepo Structure & Package Managers (e.g., pnpm, Yarn Berry workspaces):

    • pnpm/Yarn Berry Workspaces (2026 standard): These are essential. Yarn workspaces hoist shared dependencies to the monorepo root, while pnpm hard-links each package from a global content-addressable store and builds a symlinked node_modules per package. Either way, services resolve shared dependencies without duplication, which saves disk space and speeds up installs.
    • Internal Shared Modules: Create explicit packages/shared-utils, packages/shared-models, etc., within the monorepo. Each shared package will have its own package.json and be linked into dependent microservices using workspaces.
  2. Module System Choice & Interoperability:

    • Modern Preference (ESM): For new microservices and shared libraries, prioritize ES Modules ("type": "module" in package.json). ESM offers static analysis benefits (tree-shaking, better tooling) and is the future of JavaScript.
    • CJS for Legacy/Interoperability: Existing services or specific third-party libraries might still be CJS.
      • CJS require() ESM: Historically not supported; a CJS module instead loads ESM via dynamic import('esm-module'), which returns a promise (typically awaited inside an async function). Recent Node.js releases (v20.19+ / v22.12+) can also require() an ESM file whose module graph is fully synchronous. Either way, this boundary needs careful handling in synchronous CJS codebases.
      • ESM import() CJS: Fully supported. An ESM module can import CjsModule from './path/to/cjs-module.cjs';.
    • Explicit Extensions: Always use .mjs for ESM and .cjs for CJS files in a mixed environment, or ensure package.json exports map correctly.
  3. Module Caching (Node.js Default):

    • CommonJS: Node.js caches modules after their first require(). Subsequent require() calls return the cached module.exports object. This is highly efficient.
    • ESM: ESM also has a caching mechanism, ensuring modules are executed only once. Due to live bindings, subsequent imports get the same reference.
    • Implication: For long-running microservices, this default caching is beneficial. Any shared utility or configuration loaded via require/import will be instantiated once and reused.
  4. Dynamic Module Loading (Runtime Efficiency):

    • import() for ESM: This allows for dynamic, asynchronous loading of ES Modules (and CJS modules).
    • Use Case:
      • Feature Flags/Conditional Logic: Only load a specific module (e.g., a complex reporting engine or a payment gateway integration) when a feature flag is enabled or a specific condition is met, reducing initial startup time and memory footprint for unused features.
      • Plugin Systems: Allow microservices to load specific plugins or extensions at runtime based on configuration, without bundling all possibilities.
      • Reducing Cold Start for Serverless (Edge Runtimes): Only load necessary modules for a given request path.
  5. Managing Shared Dependencies:

    • Internal Monorepo Packages: Convert shared code into dedicated internal packages (@my-org/utils, @my-org/models) within the monorepo. These are referenced by services in their package.json (e.g., "@my-org/utils": "workspace:^1.0.0").
    • Version Control: Pin major/minor versions for internal packages to ensure compatibility across services. Use SemVer strictly.
    • Dependency Audits: Regularly audit common dependencies for security vulnerabilities (e.g., npm audit, snyk).
    • Module Federation (Webpack 5+, etc.): While primarily frontend, similar concepts of sharing live modules across different builds could be considered for very complex polyglot microfrontend/microservice scenarios if Node.js runtime bundlers evolve to support it effectively without over-complication. (As of 2026, still more frontend-centric but worth mentioning for advanced thought.)

Example Scenario (Monorepo with shared utility):

/my-monorepo
├── packages/
│   ├── shared-utils/
│   │   ├── package.json ({ "name": "@my-org/shared-utils", "type": "module", "exports": { ".": "./src/index.mjs" } })
│   │   └── src/index.mjs (export const formatName = ...;)
│   └── service-a/
│       ├── package.json ({ "name": "service-a", "dependencies": { "@my-org/shared-utils": "workspace:^1.0.0" }, "type": "module" })
│       └── src/server.mjs (import { formatName } from '@my-org/shared-utils';)
├── package.json (monorepo root, configures workspaces)
└── pnpm-workspace.yaml (or yarn.lock for Yarn Berry)

In this setup, @my-org/shared-utils is installed once (or symlinked by pnpm/yarn), and service-a references it directly. If service-a conditionally needs a specific feature, it could use await import():

// service-a/src/server.mjs
import { formatName } from '@my-org/shared-utils'; // Always loaded

async function handleRequest(request) {
  const userName = formatName(request.user);
  let responseData = { userName };

  if (request.query.includeAnalytics === 'true') {
    // Dynamically load a heavy analytics module only when needed
    const { getAnalyticsData } = await import('../analytics/analyticsModule.mjs');
    responseData.analytics = getAnalyticsData(request.user.id);
  }

  return responseData;
}

Key Points:

  • Monorepo tools (pnpm/Yarn Workspaces): Crucial for dependency de-duplication and management.
  • ESM-first for new code: Leverage static analysis benefits.
  • Interoperability: Understand import() for CJS-ESM boundaries.
  • Node.js Caching: Rely on built-in caching for loaded modules.
  • Dynamic import(): Use for conditional loading to optimize startup and memory.
  • Internal Packages: Structure shared code as internal monorepo packages.

Common Mistakes:

  • Copy-pasting shared code instead of creating internal packages, leading to maintenance nightmares.
  • Ignoring package.json type field or file extensions, causing CJS/ESM conflicts.
  • Overusing dynamic import() when a module is always needed, adding unnecessary overhead.
  • Not explicitly defining exports maps in package.json for internal packages, leading to unexpected resolution issues.

Follow-up:

  • How would you handle hot-reloading or clearing the module cache in a development environment without restarting the entire microservice?
  • Discuss the challenges of circular dependencies in a large module graph and strategies to mitigate them.
  • When would you consider vendoring dependencies directly into a microservice instead of using node_modules and why?

MCQ Section

Choose the best answer for each question.

1. Which of the following is NOT a core module in Node.js?
   A) fs
   B) http
   C) react
   D) path
   Correct Answer: C) react
   Explanation: fs, http, and path are built-in Node.js core modules. react is a third-party library that needs to be installed via npm.

2. What is the primary advantage of Node.js’s non-blocking I/O model?
   A) It allows Node.js to use multiple CPU cores by default.
   B) It prevents JavaScript code from being executed.
   C) It allows the main thread to continue processing other requests while waiting for I/O operations to complete.
   D) It eliminates the need for error handling in asynchronous operations.
   Correct Answer: C) It allows the main thread to continue processing other requests while waiting for I/O operations to complete.
   Explanation: Non-blocking I/O is central to Node.js’s performance. It ensures that I/O operations (like reading from disk or network requests) don’t halt the single-threaded event loop, enabling high concurrency.

3. In CommonJS, what is the correct way to export a single function myFunction from myModule.js?
   A) exports = myFunction;
   B) module.exports = myFunction;
   C) export default myFunction;
   D) require('myFunction');
   Correct Answer: B) module.exports = myFunction;
   Explanation: module.exports is the actual object that require() returns. Directly assigning to exports breaks its reference to module.exports. export default is for ES Modules. require() is for importing, not exporting.

4. Which package.json field specifies the main entry point of a Node.js application?
   A) start
   B) index
   C) main
   D) entry
   Correct Answer: C) main
   Explanation: The main field indicates the primary entry point of your package. When you require('your-package'), this is the file that will be loaded.

5. Which Node.js Stream type is capable of both reading and writing data, while also modifying it in transit?
   A) Writable Stream
   B) Readable Stream
   C) Duplex Stream
   D) Transform Stream
   Correct Answer: D) Transform Stream
   Explanation: A Transform stream is a type of Duplex stream that can modify data as it passes through. Duplex streams can both read and write, but don’t necessarily transform the data.

6. To enable ES Modules (import/export) for all .js files in a Node.js project as of 2026, which of the following is the most common and recommended approach?
   A) Rename all .js files to .mjs.
   B) Add "type": "module" to package.json.
   C) Use a transpiler like Babel.
   D) Include import 'esm'; at the top of each file.
   Correct Answer: B) Add "type": "module" to package.json.
   Explanation: While renaming to .mjs also works, adding "type": "module" to package.json is the standard way to declare that all .js files within that package should be treated as ES Modules by default.

7. When is child_process.fork() the most appropriate choice over child_process.spawn() or child_process.exec()? A) When executing a simple shell command and capturing its entire output. B) When streaming large amounts of data to/from a non-Node.js executable. C) When spawning another Node.js process and needing an IPC channel for communication. D) When running synchronous tasks that block the event loop. Correct Answer: C) When spawning another Node.js process and needing an IPC channel for communication. Explanation: fork() is specifically designed for spawning Node.js processes and setting up an IPC channel, making it ideal for worker processes or clustering. exec() is for simple shell commands, and spawn() for general streaming I/O with external executables.

Mock Interview Scenario: Refactoring a File Processing Microservice

Scenario Setup: You’re interviewing for a mid-level Node.js Backend Engineer role. The interviewer presents you with a hypothetical situation: A legacy microservice written in Node.js (CommonJS) is responsible for receiving large CSV file uploads, parsing them, and saving the processed data to a database. The current implementation loads the entire CSV file into memory before parsing, which causes out-of-memory crashes (Node.js’s “JavaScript heap out of memory” fatal error) when large files are uploaded. Your task is to explain how you would refactor this service to use Node.js Core APIs efficiently, improve performance, and handle potential errors.

Interviewer: “Hello, welcome. Let’s imagine you’ve joined our team. We have a service called csv-processor. It currently takes uploaded CSVs, reads them into memory using fs.readFileSync, parses them with a synchronous CSV parser, and then bulk-inserts into a database. We’re seeing crashes with large files. How would you approach fixing this, focusing on Node.js core principles and APIs?”

Candidate’s Expected Flow of Conversation:

  1. Identify the Core Problem: The first thing to recognize is that the out-of-memory crashes with large files are directly linked to fs.readFileSync loading the entire file into memory. This immediately points to the need for asynchronous, chunk-based processing.

    • Candidate: “The root cause of the out-of-memory crashes is loading the entire CSV file into memory using fs.readFileSync, which is a synchronous, blocking operation. For large files, this is unsustainable. The solution lies in processing the file in smaller, manageable chunks.”
  2. Propose Streams: Introduce Node.js Streams as the primary solution for handling large data.

    • Candidate: “My immediate thought is to leverage Node.js Streams. Specifically, a Readable stream to read the incoming CSV file, and potentially a Transform stream to parse each chunk of data. This allows us to process the file incrementally without holding it all in RAM.”
  3. Outline the Refactoring Steps (High-Level):

    • Candidate: “The refactoring would involve several steps:
      1. Reading: Replace fs.readFileSync with fs.createReadStream() to get a readable stream of the uploaded CSV.
      2. Parsing: Instead of a synchronous parser, I’d look for an asynchronous, stream-compatible CSV parser library (like csv-parser or fast-csv) that can work with the Readable stream. Alternatively, if we had to build one, it would be a custom Transform stream.
      3. Processing/Database Insertion: The parsed chunks (rows) would then be fed into a mechanism for database insertion. Instead of one bulk insert at the end, we’d batch inserts, perhaps collecting 100-1000 rows at a time and inserting them. This would involve a Writable stream or a pipeline that handles batching and database calls.
      4. Error Handling: Implement robust error handling throughout the stream pipeline.
      5. Backpressure: Ensure proper backpressure handling between the reading, parsing, and writing stages to prevent overwhelming the downstream components (especially the database connection).”
  4. Elaborate on Specific API Choices and Best Practices:

    • Candidate: “For reading the file, createReadStream() will emit Buffer chunks. If the CSV is UTF-8 encoded, we can specify encoding: 'utf8' directly in createReadStream to get string chunks.
    • For the parsing logic, a Transform stream is ideal. Each chunk from the Readable stream would enter the _transform method of our custom parser. This parser would accumulate data until it has a complete line or a complete record, then this.push() the parsed object.
    • To connect these, I would use the pipeline() utility from node:stream/promises (available via require('node:stream/promises') in CommonJS too, alongside the older callback-based stream.pipeline) because it handles error propagation and stream cleanup automatically, which is more robust than manual pipe() calls and event listeners.
    • For database inserts, we could have a Writable stream or an async iterator that consumes parsed rows, batches them, and performs async database writes. We’d need to consider the database’s write capacity and implement backpressure by pausing the upstream Readable stream if the database cannot keep up.”
  5. Discuss Error Handling and Backpressure:

    • Candidate: “Error handling is critical. Using stream.pipeline() simplifies this as it propagates errors through the pipeline and ensures all streams are properly destroyed. For the database insertion part, any database-related errors (e.g., connection issues, constraint violations) would need to be caught and potentially lead to pausing the pipeline, logging the error, or retrying.
    • Regarding backpressure, if the database write stream is slower than the file read/parse stream, the Writable stream’s write() method will return false. In a manual pipeline, we’d pause the source and listen for the 'drain' event to resume writing. stream.pipeline() and async-iteration patterns (for await...of over a Readable) handle this implicitly, but the core concept remains: don’t overwhelm downstream.”

Red Flags to Avoid:

  • Proposing synchronous solutions: Suggesting fs.readFileSync again or a synchronous parser.
  • Ignoring memory issues: Not addressing the core problem.
  • Lack of error handling: Overlooking error propagation in streams.
  • Ignoring backpressure: Not considering what happens if processing is slower than reading.
  • Only high-level talk: Not being able to mention specific Node.js APIs (createReadStream, pipeline, Transform).
  • Forgetting module system context: If the interviewer mentions “legacy CommonJS,” ensure your example syntax reflects that or explain how you’d migrate to ESM.

Practical Tips

  1. Master the Node.js Documentation: The official Node.js documentation (nodejs.org/docs) is your best friend. For the current LTS lines (Node.js v22.x/v24.x as of early 2026), familiarize yourself with the fs, http, events, stream, buffer, path, process, and child_process modules. Pay attention to methods, events, and examples.

  2. Hands-on Practice with Core Modules:

    • fs: Read/write files (async, sync, streams), watch directories.
    • http: Build a basic HTTP server and client. Understand request/response objects.
    • events: Implement custom event emitters.
    • stream: Create a simple Readable, Writable, and Transform stream. Practice pipe() and pipeline(). Experiment with backpressure.
    • buffer: Convert between strings and Buffers, perform binary operations.
    • child_process: Experiment with spawn, exec, and fork for different scenarios.
  3. Understand CommonJS vs. ES Modules Deeply:

    • Practice writing modules in both systems.
    • Experiment with interoperability (import() in CJS, require() within ESM using createRequire).
    • Understand the type: "module" setting and .mjs/.cjs extensions.
    • Be able to articulate the advantages and disadvantages of each.
  4. package.json and Package Managers:

    • Create projects, initialize package.json, and manage dependencies using npm (or yarn, pnpm).
    • Understand dependency types (dependencies, devDependencies, peerDependencies).
    • Be familiar with npm audit for security.
    • For senior roles, understand monorepo tooling such as npm workspaces, Yarn (Berry) workspaces, and pnpm workspaces.
  5. Review Interview Resources:

    • Platforms like InterviewBit, LeetCode (for general problem-solving relevant to backend), and often company-specific blogs (e.g., Medium articles about engineering practices) contain relevant questions. Filter for recent questions (2025-2026).
    • Focus on how core APIs are used to solve common backend problems (e.g., handling file uploads, serving large responses, running background tasks).
  6. Articulate Design Choices: Don’t just know what an API does, understand why you would choose it over alternatives. Be ready to discuss trade-offs (e.g., spawn vs. exec, streams vs. buffering).

Summary

This chapter has equipped you with a robust understanding of Node.js Core APIs and the Module System, crucial knowledge for any Node.js backend engineer. We’ve covered:

  • Node.js Fundamentals: What Node.js is, its advantages, and how package.json structures projects.
  • Module Systems: The intricacies of CommonJS (require, module.exports, exports) and the modern ES Modules (import, export), including their differences, interoperability, and current status in Node.js v22/v24.
  • Core APIs: Deep dives into Buffer (binary data), Streams (efficient I/O for large data, backpressure), and child_process (spawning external processes and Node.js workers).
  • Practical Application: Through an MCQ section and a mock interview scenario, we’ve simulated real-world problem-solving, emphasizing the application of these core concepts to build performant and resilient systems.

By thoroughly grasping these concepts, practicing with actual code, and being able to articulate your design decisions, you’ll significantly enhance your ability to confidently tackle Node.js backend interviews across all experience levels. The next step in your preparation is to continue exploring asynchronous patterns and the Node.js Event Loop, which builds directly on these foundational concepts.


References:

  1. Node.js Official Documentation (v22.x/v24.x LTS): The definitive source for all Node.js APIs. Regularly check for the latest stable LTS versions.
  2. MDN Web Docs - JavaScript Modules: Comprehensive guide on ES Modules.
  3. InterviewBit - Node.js Interview Questions: A collection of Node.js questions that often align with current trends.
  4. GeeksforGeeks - Node.js: Offers exercises and explanations on various Node.js topics.
  5. Indeed - Node.js Backend Engineer Jobs (for understanding job requirements): Provides insights into sought-after skills in job descriptions.
  6. Medium - “I Failed 17 Senior Backend Interviews. Here’s What They Actually Test (With Real Questions)”: Offers real-world insights into interview expectations for senior roles, including system design and debugging.

This interview preparation guide is AI-assisted and reviewed. It references official documentation and recognized interview preparation resources.