For Frontend Developers with 1–2 Years of Experience
Curated by a frontend guide with 5 years of experience · Real interview-style answers with examples
Great question - this is one of the most basic but important things to get right. var is the old way of declaring variables. It's function-scoped, not block-scoped, and its declaration is hoisted to the top of its function and initialized with undefined. This can cause really unexpected bugs.
let and const were introduced in ES6. They're both block-scoped, meaning they only exist within the nearest pair of curly braces. The difference between them is that let allows reassignment, while const doesn't - though for objects and arrays declared with const, you can still mutate the contents; you just can't reassign the variable itself.
var x = 10;
if (true) {
var x = 20; // same variable! overwrites outer x
console.log(x); // 20
}
console.log(x); // 20 - var leaked out of the block
let y = 10;
if (true) {
let y = 20; // different variable, block-scoped
}
console.log(y); // 10 β unaffected
const obj = { name: "John" };
obj.name = "Jane"; // ✅ allowed - mutating content
obj = {}; // ❌ TypeError - reassignment not allowed
Rule of thumb: use const by default, use let when you know the value will change, and never use var in modern code.

On hoisting: var declarations are hoisted and initialized with undefined. let and const are hoisted too, but they stay in a "Temporal Dead Zone" - accessing them before the declaration throws a ReferenceError.

A closure is when an inner function has access to the variables of its outer function even after the outer function has finished executing. The inner function "closes over" the outer function's scope.
A classic real-world example is a counter:
function makeCounter() {
let count = 0; // this is "closed over"
return function() {
count++;
return count;
};
}
const counter = makeCounter();
console.log(counter()); // 1
console.log(counter()); // 2
console.log(counter()); // 3
Even though makeCounter has returned, count is still alive because the returned function holds a reference to it. Each call to counter() increments the same count.
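A follow-up worth knowing: each call to makeCounter creates a fresh scope, so two counters never share state. A quick sketch (re-declaring makeCounter so the example is self-contained):

```javascript
function makeCounter() {
  let count = 0; // each call to makeCounter gets its own 'count'
  return function () {
    count++;
    return count;
  };
}

const a = makeCounter();
const b = makeCounter();

a(); // 1
a(); // 2
console.log(b()); // 1 - 'b' closes over its own count, unaffected by 'a'
```

This independence is exactly why closures work for private state: each returned function carries its own enclosed variables.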
== is the loose equality operator - it compares values after performing type coercion. === is the strict equality operator - it compares both value AND type, with no coercion.
console.log(0 == false);         // true - false coerces to 0
console.log(0 === false);        // false - different types (number vs boolean)
console.log("5" == 5);           // true - string "5" coerces to number 5
console.log("5" === 5);          // false - different types
console.log(null == undefined);  // true - special case
console.log(null === undefined); // false
Rule of thumb: use === unless you have a very specific reason to use ==. Coercion rules are confusing and lead to bugs.

JavaScript has 8 data types in total: 7 primitives and 1 non-primitive:
Primitives: String, Number, BigInt, Boolean, undefined, null, Symbol
Non-primitive: Object (which includes arrays, functions, dates, etc.)
typeof "hello"           // "string"
typeof 42                // "number"
typeof true              // "boolean"
typeof undefined         // "undefined"
typeof null              // "object" - famous JS bug!
typeof Symbol()          // "symbol"
typeof 9007199254740993n // "bigint"
typeof {}                // "object"
typeof []                // "object" - arrays are objects!
typeof function(){}      // "function"
To detect arrays, use Array.isArray(value) - typeof [] returns "object", which isn't helpful.

undefined means a variable has been declared but not yet assigned a value - it's the default state. null is an intentional absence of value - something you explicitly assign to say "this has no value."
let a;
console.log(a); // undefined - declared but not assigned

let b = null;
console.log(b); // null - explicitly set to "no value"

console.log(typeof undefined);   // "undefined"
console.log(typeof null);        // "object" - historic JS bug
console.log(null == undefined);  // true
console.log(null === undefined); // false
Think of it like this: undefined is "the box doesn't have anything in it yet," and null is "I've deliberately put nothing in this box."
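In practice, undefined shows up in a few common places beyond unassigned variables - a quick sketch (all names here are illustrative):

```javascript
function noReturn() {} // a function with no return statement

const user = { name: "Alice" };

function greet(name) {
  return name;
}

console.log(noReturn());  // undefined - function returns nothing
console.log(user.age);    // undefined - property doesn't exist
console.log(greet());     // undefined - parameter was never passed
```

null, by contrast, only ever appears because some code (yours or a library's) deliberately put it there.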
Hoisting is JavaScript's default behavior of moving declarations to the top of their scope before code execution. It's important to understand that only the declaration is hoisted, not the initialization.
console.log(name); // undefined - hoisted, but not initialized
var name = "John";

// Same as the engine seeing:
var name;          // declaration hoisted
console.log(name); // undefined
name = "John";     // initialization stays here

// Function declarations are fully hoisted:
greet(); // works! logs "Hello"
function greet() {
  console.log("Hello");
}

// Function expressions are NOT fully hoisted:
sayBye(); // ❌ TypeError - sayBye is not a function
var sayBye = function() {
  console.log("Bye");
};
The Temporal Dead Zone is the period between when a let or const variable enters scope (the block starts) and when it's actually declared. If you try to access the variable during this period, you get a ReferenceError.
{
// TDZ starts here for 'x'
console.log(x); // ❌ ReferenceError: Cannot access 'x' before initialization
let x = 5; // TDZ ends here
console.log(x); // ✅ 5
}
This is actually a good thing β it helps catch bugs where you accidentally use a variable before defining it. var would just silently give you undefined instead.
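One subtle point worth mentioning in an interview: even typeof, which is normally safe on completely undeclared names, throws inside the TDZ. A small sketch:

```javascript
// typeof is safe for names that were never declared at all:
console.log(typeof notDeclaredAnywhere); // "undefined" - no error

{
  try {
    console.log(typeof x); // x exists in this block but is in its TDZ
  } catch (e) {
    console.log(e.name); // "ReferenceError" - typeof doesn't help here
  }
  let x = 5;
}
```

So the TDZ makes let/const declarations strictly stricter than undeclared globals, which is part of why they catch bugs earlier.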
A function declaration uses the function keyword as a statement. It's fully hoisted - you can call it before it appears in the code. A function expression assigns a function to a variable. It's NOT fully hoisted - you can only call it after the assignment.
// Function declaration
greet("Alice"); // ✅ works - hoisted
function greet(name) {
  return `Hello, ${name}!`;
}

// Function expression
sayHi("Bob"); // ❌ ReferenceError - 'sayHi' is in the TDZ (a var would give TypeError instead)
const sayHi = function(name) {
  return `Hi, ${name}!`;
};

// Arrow function expression (also not hoisted)
const add = (a, b) => a + b;
Arrow functions don't have their own this or arguments object, and can't be used as constructors. They inherit this from the surrounding lexical scope.

this refers to the object that is currently executing the function - but what "currently" means depends on HOW the function is called, not where it's defined.
// 1. Method call - 'this' is the object before the dot
const person = {
  name: "Alice",
  greet() { console.log(this.name); }
};
person.greet(); // "Alice"

// 2. Regular function call - 'this' is undefined (strict mode) or the global object
function show() { console.log(this); }
show(); // Window (browser) or global (Node)

// 3. Arrow function - 'this' is inherited from the outer scope
const obj = {
  name: "Bob",
  greet: () => { console.log(this.name); }
};
obj.greet(); // undefined - arrow doesn't bind its own 'this'

// 4. Explicit binding with call/apply/bind
function introduce() { console.log(this.name); }
introduce.call({ name: "Carol" }); // "Carol"
call, apply, and bind let you control this explicitly: call invokes immediately with arguments listed out, apply invokes immediately with arguments as an array, and bind returns a new function with this permanently bound.

When JavaScript looks up a variable, it starts in the current scope. If it can't find it there, it goes up to the parent scope, then the parent's parent, all the way up to the global scope. This chain of scopes is called the scope chain.
let globalVar = "I'm global";
function outer() {
let outerVar = "I'm outer";
function inner() {
let innerVar = "I'm inner";
console.log(innerVar); // found in own scope
console.log(outerVar); // found in parent scope (closure)
console.log(globalVar); // found in global scope
}
inner();
}
outer();
If a variable isn't found anywhere in the chain, JavaScript throws a ReferenceError.
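The flip side of the chain is shadowing: lookup stops at the first match going outward, so an inner variable with the same name hides the outer one. A small sketch (variable names here are illustrative):

```javascript
const label = "global";

function outer() {
  const label = "outer"; // shadows the global 'label'
  function inner() {
    return label; // nearest match wins; the global one is never consulted
  }
  return inner();
}

console.log(outer()); // "outer"

try {
  missingVar; // not found in any scope on the chain
} catch (e) {
  console.log(e.name); // "ReferenceError"
}
```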
Type coercion is when JavaScript automatically converts one data type to another during an operation. It can be implicit (done by JS automatically) or explicit (done by you).
// Implicit coercion
console.log("5" + 3);    // "53" - number coerced to string
console.log("5" - 3);    // 2 - string coerced to number
console.log(true + 1);   // 2 - true coerces to 1
console.log(false + ""); // "false" - boolean to string
console.log([] + []);    // "" - both become empty strings
console.log([] + {});    // "[object Object]"

// Explicit coercion
Number("42");     // 42
String(42);       // "42"
Boolean(0);       // false
parseInt("10px"); // 10
The + operator is the tricky one - it prefers string concatenation if either operand is a string; -, *, and / always convert to numbers.

In a boolean context (like an if statement), every value in JavaScript is either "truthy" or "falsy." There are only 6 falsy values in JS - everything else is truthy.
// The 6 falsy values:
false
0         // also -0 and 0n (BigInt zero)
""        // empty string
null
undefined
NaN

// Common truthy "surprises":
Boolean([]);  // true - empty array is truthy!
Boolean({});  // true - empty object is truthy!
Boolean("0"); // true - non-empty string, even "false"
Boolean(-1);  // true - any non-zero number
A very common pattern is if (user && user.name) - this checks that user exists (is truthy) AND has a name before accessing it, preventing a TypeError.

The call stack and event loop are how JavaScript handles asynchronous operations despite being single-threaded. The call stack is where synchronous code executes - it's a LIFO stack of function calls. When you call a function, it goes on the stack; when it returns, it comes off.
The event loop constantly checks: is the call stack empty? If yes, it first drains the microtask queue (Promise callbacks), then moves the next callback from the macrotask queue (timers, events) onto the stack to execute.
console.log("1 - Start");
setTimeout(() => {
console.log("3 - setTimeout callback");
}, 0);
Promise.resolve().then(() => {
console.log("2 - Promise microtask");
});
console.log("4 - End");
// Output order:
// 1 - Start
// 4 - End
// 2 - Promise microtask - microtask queue runs first
// 3 - setTimeout callback - macrotask runs after
A Promise is an object representing the eventual completion (or failure) of an asynchronous operation. It has three states: pending, fulfilled, and rejected.
// Creating a promise
const fetchData = new Promise((resolve, reject) => {
  setTimeout(() => {
    const success = true;
    if (success) {
      resolve({ data: "User data" });
    } else {
      reject(new Error("Something went wrong"));
    }
  }, 1000);
});

// Consuming a promise
fetchData
  .then(result => console.log(result.data)) // "User data"
  .catch(err => console.error(err.message))
  .finally(() => console.log("Done!"));

// Promise combinators
Promise.all([p1, p2, p3]);    // waits for ALL, fails if any fails
Promise.allSettled([p1, p2]); // waits for ALL, reports each result
Promise.race([p1, p2]);       // settles with the first promise to settle
Promise.any([p1, p2]);        // resolves with the first promise to fulfill
async/await is syntactic sugar over Promises that lets asynchronous code read like synchronous code, which makes it much easier to write and reason about. An async function always returns a Promise. Inside it, await pauses the function (without blocking the main thread) until the Promise settles.
// Without async/await (Promise chaining)
function loadUser() {
  return fetch('/api/user')
    .then(res => res.json())
    .then(user => fetch(`/api/posts/${user.id}`))
    .then(res => res.json())
    .catch(err => console.error(err));
}

// With async/await - much cleaner!
async function loadUser() {
  try {
    const res = await fetch('/api/user');
    const user = await res.json();
    const postsRes = await fetch(`/api/posts/${user.id}`);
    const posts = await postsRes.json();
    return posts;
  } catch (err) {
    console.error(err);
  }
}
Wrap await calls in try/catch to handle errors, or use the pattern const [data, err] = await somePromise.then(d => [d, null]).catch(e => [null, e]) for cleaner error handling.

Synchronous code executes line by line, and each line waits for the previous one to finish. Asynchronous code allows certain operations (like network requests or timers) to run in the background without blocking the rest of the code.
// Synchronous - blocks execution
console.log("A");
console.log("B");
console.log("C");
// Output: A, B, C (always in order)

// Asynchronous - non-blocking
console.log("A");
setTimeout(() => console.log("B"), 0);
console.log("C");
// Output: A, C, B
// "B" goes to the callback queue; "C" runs first
JavaScript is single-threaded but handles async operations through the browser's Web APIs (like timers, fetch), the callback queue, and the event loop.
A callback is a function passed as an argument to another function, to be called later. It's the oldest way of handling async in JavaScript. Callback hell (also called "pyramid of doom") is when you have many nested callbacks, making the code hard to read and maintain.
// Callback hell example:
getUser(userId, function(user) {
  getPosts(user.id, function(posts) {
    getComments(posts[0].id, function(comments) {
      getLikes(comments[0].id, function(likes) {
        // getting harder to read and maintain...
        console.log(likes);
      }, handleError);
    }, handleError);
  }, handleError);
}, handleError);

// Modern solution - async/await
async function getData() {
  const user = await getUser(userId);
  const posts = await getPosts(user.id);
  const comments = await getComments(posts[0].id);
  const likes = await getLikes(comments[0].id);
  console.log(likes);
}
They both use ... syntax but do opposite things. Spread expands an iterable (like an array) into individual elements. Rest collects multiple elements into an array. Context determines which is which.
// SPREAD - expands values
const arr1 = [1, 2, 3];
const arr2 = [...arr1, 4, 5];   // [1, 2, 3, 4, 5]
const obj1 = { a: 1, b: 2 };
const obj2 = { ...obj1, c: 3 }; // { a: 1, b: 2, c: 3 }
Math.max(...arr1);              // 3 - spread into function args

// REST - collects values into an array
function sum(...numbers) { // collects all args into an array
  return numbers.reduce((total, n) => total + n, 0);
}
sum(1, 2, 3, 4); // 10

function first(a, b, ...rest) {
  console.log(a);    // 1
  console.log(b);    // 2
  console.log(rest); // [3, 4, 5]
}
first(1, 2, 3, 4, 5);
Destructuring is a syntax that lets you unpack values from arrays or properties from objects into separate variables. It makes code much cleaner and more readable.
// Array destructuring
const [first, second, ...rest] = [1, 2, 3, 4, 5];
console.log(first);  // 1
console.log(second); // 2
console.log(rest);   // [3, 4, 5]

// Object destructuring
const user = { name: "Alice", age: 25, role: "admin" };
const { name, age, role = "user" } = user; // default value for role

// Rename while destructuring
const { name: userName } = user;
console.log(userName); // "Alice"

// Nested destructuring
const { address: { city } } = { address: { city: "Delhi" } };

// In function parameters - very common in React!
function greet({ name, age }) {
  return `${name} is ${age}`;
}
Template literals (introduced in ES6) are strings enclosed in backticks (` `) instead of quotes. They support multi-line strings and embedded expressions using ${expression}.
const name = "Alice";
const age = 25;

// Old way
console.log("Hello, " + name + "! You are " + age + " years old.");

// Template literal
console.log(`Hello, ${name}! You are ${age} years old.`);

// Can embed any expression
console.log(`2 + 2 = ${2 + 2}`);
console.log(`Is adult: ${age >= 18 ? 'Yes' : 'No'}`);

// Multi-line strings (no \n needed!)
const html = `
  <div>
    <h1>${name}</h1>
    <p>Age: ${age}</p>
  </div>
`;
These are the three most important array methods for frontend devs. map() transforms each element and returns a new array of the same length. filter() returns a new array with only the elements that pass a test. reduce() reduces an array to a single value by applying a function cumulatively.
const numbers = [1, 2, 3, 4, 5];

// map - transforms each element
const doubled = numbers.map(n => n * 2); // [2, 4, 6, 8, 10]

// filter - keeps elements that pass the test
const evens = numbers.filter(n => n % 2 === 0); // [2, 4]

// reduce - accumulates to a single value
const sum = numbers.reduce((acc, n) => acc + n, 0); // 15

// Real world: chaining
const result = numbers
  .filter(n => n > 2)          // [3, 4, 5]
  .map(n => n * 10)            // [30, 40, 50]
  .reduce((a, b) => a + b, 0); // 120
Use forEach when you just need side effects (like logging or updating the DOM) and don't need a new array back; use map when you need the transformed array.

A shallow copy creates a new object but doesn't recursively copy nested objects - they still share the same references. A deep copy creates a completely independent clone of the entire structure, including nested objects.
const original = { name: "Alice", address: { city: "Delhi" } };
// Shallow copy β nested objects still shared
const shallow = { ...original };
shallow.name = "Bob"; // ✅ original.name unchanged
shallow.address.city = "Mumbai"; // ❌ original.address.city also changes!
// Deep copy methods:
// 1. JSON method (simple, but loses functions, Date, undefined)
const deep1 = JSON.parse(JSON.stringify(original));
// 2. structuredClone (modern, recommended)
const deep2 = structuredClone(original);
// 3. Lodash _.cloneDeep (for complex objects)
const deep3 = _.cloneDeep(original);
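The JSON method's limitations mentioned in point 1 are easy to demonstrate - a quick sketch (field names are illustrative):

```javascript
const source = {
  created: new Date("2024-01-01"), // Date object
  greet() { return "hi"; },        // method
  pending: undefined,
  score: NaN,
};

const copy = JSON.parse(JSON.stringify(source));

console.log(typeof copy.created); // "string" - Date collapsed to an ISO string
console.log(copy.greet);          // undefined - functions are dropped
console.log("pending" in copy);   // false - undefined-valued properties are dropped
console.log(copy.score);          // null - NaN becomes null
```

structuredClone handles Dates, Maps, Sets, and cyclic references correctly, though it still can't clone functions.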
structuredClone() is now available natively in browsers and Node.js and is the best modern approach to deep cloning.

Object.freeze() makes an object immutable - you can't add, delete, or modify its properties. However, it's a shallow freeze: nested objects can still be mutated.
const config = Object.freeze({
apiUrl: "https://api.example.com",
timeout: 5000,
settings: { theme: "dark" }
});
config.apiUrl = "changed"; // ❌ silently ignored (or TypeError in strict mode)
config.newProp = "test"; // ❌ silently ignored
delete config.timeout; // ❌ silently ignored
config.settings.theme = "light"; // ✅ works! shallow freeze - nested object not frozen
console.log(Object.isFrozen(config)); // true
All four iterate over an array and test elements with a callback, but they return different things:
const users = [
{ id: 1, name: "Alice", active: true },
{ id: 2, name: "Bob", active: false },
{ id: 3, name: "Carol", active: true }
];
// find() - returns the first matching ELEMENT (or undefined)
const found = users.find(u => u.id === 2);
console.log(found); // { id: 2, name: "Bob", active: false }

// findIndex() - returns the INDEX of the first match (or -1)
const idx = users.findIndex(u => u.name === "Carol"); // 2

// some() - returns true if AT LEAST ONE element passes
users.some(u => u.active); // true

// every() - returns true if ALL elements pass
users.every(u => u.active); // false (Bob is inactive)
These two are lifesavers when dealing with potentially undefined/null data. Optional chaining (?.) lets you safely access nested properties without throwing a TypeError. Nullish coalescing (??) provides a default value only when the left side is null or undefined (unlike || which also triggers on falsy values like 0 and "").
const user = { profile: { name: "Alice" } };
// Without optional chaining β crashes if no address
user.address.city; // ❌ TypeError: Cannot read properties of undefined
// With optional chaining
user?.address?.city; // ✅ undefined (no crash)
user?.greet?.(); // ✅ undefined (safe method call)
// Nullish coalescing vs OR
const score = 0;
console.log(score || "No score"); // "No score" - WRONG! 0 is falsy
console.log(score ?? "No score"); // 0 - CORRECT! 0 is not null/undefined
// Combined β very common pattern
const city = user?.address?.city ?? "City not provided";
All three are static methods that extract data from an object's own enumerable properties.
const person = { name: "Alice", age: 25, city: "Delhi" };
Object.keys(person); // ["name", "age", "city"]
Object.values(person); // ["Alice", 25, "Delhi"]
Object.entries(person); // [["name","Alice"], ["age",25], ["city","Delhi"]]
// Practical use β convert object to different format
Object.entries(person).forEach(([key, value]) => {
console.log(`${key}: ${value}`);
});
// Object.fromEntries() β reverse of entries
const doubled = Object.fromEntries(
Object.entries({ a: 1, b: 2 }).map(([k, v]) => [k, v * 2])
); // { a: 2, b: 4 }
A Map is a key-value data structure like an object, but with key differences: Map allows any type as a key (not just strings/symbols), maintains insertion order, has a built-in size property, is iterable out of the box, and doesn't have prototype pollution issues.
const map = new Map();
map.set("name", "Alice");
map.set(42, "the number forty-two");
map.set({ id: 1 }, "an object as key!");
map.set(true, "boolean key");
map.get("name"); // "Alice"
map.has(42); // true
map.size; // 4
map.delete(42);
// Iteration
for (const [key, value] of map) {
console.log(key, value);
}
// When to prefer Map over Object:
// ✅ Need non-string keys
// ✅ Need to frequently add/remove entries
// ✅ Need to know the size quickly
// ✅ Iterating in insertion order is important
A Set is a collection of unique values β duplicates are automatically ignored. Very useful when you need to deduplicate an array or check for membership quickly.
const set = new Set([1, 2, 3, 2, 1]);
console.log(set); // Set { 1, 2, 3 } - duplicates removed
set.add(4);
set.has(3);    // true
set.delete(2);
set.size;      // 3

// Most common use: deduplicate an array
const arr = [1, 2, 2, 3, 3, 4];
const unique = [...new Set(arr)]; // [1, 2, 3, 4]

// Set operations
const a = new Set([1, 2, 3]);
const b = new Set([2, 3, 4]);
const union = new Set([...a, ...b]);                        // {1, 2, 3, 4}
const intersection = new Set([...a].filter(x => b.has(x))); // {2, 3}
A generator is a special type of function that can pause its execution and resume later, yielding values one at a time. They're defined with function* and use the yield keyword.
function* numberGenerator() {
yield 1;
yield 2;
yield 3;
}
const gen = numberGenerator();
gen.next(); // { value: 1, done: false }
gen.next(); // { value: 2, done: false }
gen.next(); // { value: 3, done: false }
gen.next(); // { value: undefined, done: true }
// Infinite sequence generator
function* infiniteCounter(start = 0) {
while (true) {
yield start++;
}
}
const counter = infiniteCounter(5);
counter.next().value; // 5
counter.next().value; // 6
Generators are great for lazy evaluation, infinite sequences, and implementing custom iterators. They're also used internally by async/await implementations.
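The "custom iterators" point deserves a concrete example: a generator method named Symbol.iterator makes any object work with for...of and spread. A sketch with a hypothetical Range class:

```javascript
// Hypothetical example: make a Range object iterable with a generator
class Range {
  constructor(start, end) {
    this.start = start;
    this.end = end;
  }
  // The generator IS the iterator - no manual next()/done bookkeeping
  *[Symbol.iterator]() {
    for (let i = this.start; i <= this.end; i++) {
      yield i;
    }
  }
}

const r = new Range(1, 4);
console.log([...r]); // [1, 2, 3, 4] - spread works

for (const n of r) {
  console.log(n); // 1, 2, 3, 4 - for...of works too
}
```

Without generators, the same class would need a hand-written object with next() returning { value, done } pairs - the generator version is far shorter.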
Symbol is a primitive type introduced in ES6. Every Symbol is guaranteed to be unique β even if you create two Symbols with the same description, they are never equal. They're mainly used as unique object property keys to avoid naming conflicts.
const id1 = Symbol("id");
const id2 = Symbol("id");
console.log(id1 === id2); // false - every Symbol is unique
const user = {
name: "Alice",
  [id1]: 12345 // Symbol as object key - won't clash
};
user[id1]; // 12345
user.name; // "Alice"
// Symbol keys don't appear in normal iteration
Object.keys(user); // ["name"] - no Symbol
JSON.stringify(user); // {"name":"Alice"} - Symbol ignored
// Well-known Symbols β customize JS behavior
class MyArray {
[Symbol.iterator]() { /* custom iteration logic */ }
}
Memoization is an optimization technique where you cache the results of expensive function calls and return the cached result when the same inputs occur again. It's a form of caching applied to functions.
// Without memoization - recalculates every time
function fibonacci(n) {
  if (n <= 1) return n;
  return fibonacci(n - 1) + fibonacci(n - 2);
}
fibonacci(40); // very slow - exponential complexity

// With memoization
function memoize(fn) {
  const cache = new Map();
  return function(...args) {
    const key = JSON.stringify(args);
    if (cache.has(key)) return cache.get(key); // cache hit!
    const result = fn.apply(this, args);
    cache.set(key, result);
    return result;
  };
}

const memoFib = memoize(function fib(n) {
  if (n <= 1) return n;
  return memoFib(n - 1) + memoFib(n - 2);
});
memoFib(40); // fast - each subproblem is computed only once
Currying is a functional programming technique where a function that takes multiple arguments is transformed into a sequence of functions, each taking a single argument.
// Regular function
function add(a, b, c) {
  return a + b + c;
}
add(1, 2, 3); // 6

// Curried version
function curriedAdd(a) {
  return function(b) {
    return function(c) {
      return a + b + c;
    };
  };
}
curriedAdd(1)(2)(3); // 6

// Arrow-function currying - cleaner syntax
const curriedAddArrow = a => b => c => a + b + c;

// Real use case - partially applied functions
const add5 = curriedAdd(5);
const add5and3 = add5(3);
add5and3(2);  // 10
add5and3(10); // 18
Currying is especially useful when you want to create reusable functions that have some arguments pre-filled. Think of it like "configuring" a function ahead of time.
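A common follow-up is writing a generic curry helper that works for any fixed-arity function. A minimal sketch, assuming the target function declares all its parameters (so fn.length is reliable):

```javascript
// Minimal auto-curry sketch - relies on fn.length, so it assumes
// no rest parameters or defaults in the target function
function curry(fn) {
  return function curried(...args) {
    if (args.length >= fn.length) {
      return fn.apply(this, args); // enough arguments - call through
    }
    // not enough yet - return a collector that keeps accumulating
    return (...rest) => curried.apply(this, [...args, ...rest]);
  };
}

const sum3 = (a, b, c) => a + b + c; // illustrative target function
const curriedSum = curry(sum3);

console.log(curriedSum(1)(2)(3)); // 6
console.log(curriedSum(1, 2)(3)); // 6 - partial groups work too
console.log(curriedSum(1)(2, 3)); // 6
```

Unlike the hand-written nested version, this helper also accepts arguments in groups, which is how libraries like Lodash implement _.curry.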
The key difference is the return value. forEach always returns undefined β it's used purely for side effects. map returns a new array with the transformed values. You can chain methods after map but not after forEach.
const nums = [1, 2, 3];

// forEach - side effects only
const result1 = nums.forEach(n => n * 2);
console.log(result1); // undefined

// Can't chain:
nums.forEach(n => n * 2).filter(n => n > 2); // ❌ TypeError

// map - returns a new array
const result2 = nums.map(n => n * 2);
console.log(result2); // [2, 4, 6]

// Can chain:
nums.map(n => n * 2).filter(n => n > 2); // ✅ [4, 6]

// When to use which:
// forEach - logging, DOM manipulation, updating external state
// map - transforming data, creating new arrays
WeakMap and WeakSet hold weak references to their keys/values. This means if there are no other references to the object, it can be garbage collected. They're great for associating private data with objects without preventing garbage collection.
const weakMap = new WeakMap();
let obj = { name: "Alice" };
weakMap.set(obj, { visits: 5 });
weakMap.get(obj); // { visits: 5 }
obj = null; // obj is now eligible for garbage collection
// The WeakMap entry is also cleaned up automatically
// Differences from Map:
// ❌ Keys must be objects (no primitives)
// ❌ Not iterable (no .keys(), .values(), .entries())
// ❌ No .size property
// ✅ Entries are garbage collected automatically when the key object is collected
// Use case: storing private/metadata for DOM nodes
const cache = new WeakMap();
function processElement(el) {
if (cache.has(el)) return cache.get(el);
const result = expensiveComputation(el);
cache.set(el, result);
return result;
}
slice() returns a new array with a portion of the original - it doesn't modify the original. splice() modifies the original array by removing, replacing, or adding elements.
const arr = [1, 2, 3, 4, 5];

// slice(start, end) - non-destructive
arr.slice(1, 3);  // [2, 3] - original unchanged
arr.slice(-2);    // [4, 5] - last 2 elements
arr.slice();      // [1, 2, 3, 4, 5] - shallow copy of the whole array
console.log(arr); // [1, 2, 3, 4, 5] - untouched

// splice(start, deleteCount, ...items) - destructive!
const removed = arr.splice(1, 2); // removes 2 elements starting at index 1
console.log(removed); // [2, 3]
console.log(arr);     // [1, 4, 5] - MODIFIED!

arr.splice(1, 0, 10, 11); // insert without removing
console.log(arr); // [1, 10, 11, 4, 5]
JavaScript uses try/catch/finally for synchronous error handling. For async code, you use .catch() on Promises or try/catch inside async functions. You can also create custom error classes.
// Basic try/catch/finally
try {
  const data = JSON.parse("invalid json");
} catch (error) {
  console.error(error.name);    // "SyntaxError"
  console.error(error.message); // "Unexpected token i..."
} finally {
  console.log("This always runs");
}

// Custom error class
class ValidationError extends Error {
  constructor(message, field) {
    super(message);
    this.name = "ValidationError";
    this.field = field;
  }
}

// Async error handling
async function fetchUser(id) {
  try {
    const res = await fetch(`/api/users/${id}`);
    if (!res.ok) throw new Error(`HTTP error! status: ${res.status}`);
    return await res.json();
  } catch (err) {
    if (err instanceof ValidationError) {
      // handle validation error
    } else {
      // handle other errors
    }
  }
}
A Proxy lets you intercept and customize operations on objects - like getting/setting properties, calling functions, and checking for properties. It's the foundation of Vue 3's reactivity system.
const handler = {
get(target, prop) {
console.log(`Getting ${prop}`);
return prop in target ? target[prop] : `Property '${prop}' not found`;
},
set(target, prop, value) {
if (typeof value !== "string") throw new TypeError("Only strings allowed!");
target[prop] = value;
return true; // must return true on success
}
};
const user = new Proxy({}, handler);
user.name = "Alice"; // ✅ allowed
user.age = 25; // ❌ TypeError: Only strings allowed!
console.log(user.name); // logs "Getting name", then "Alice"
console.log(user.missing); // logs "Getting missing", then "Property 'missing' not found"
for...in iterates over the enumerable property keys (string keys) of an object, including inherited ones. for...of iterates over the values of any iterable (arrays, strings, Maps, Sets, generators).
const arr = [10, 20, 30];

// for...in - iterates keys (indices as strings for arrays)
for (const key in arr) {
  console.log(key); // "0", "1", "2" - string indices
}

// for...of - iterates values
for (const value of arr) {
  console.log(value); // 10, 20, 30
}

// for...in problem - also picks up inherited/prototype properties
const obj = { a: 1, b: 2 };
for (const key in obj) {
  if (obj.hasOwnProperty(key)) { // guard needed
    console.log(key);
  }
}

// for...of on a string
for (const char of "hello") {
  console.log(char); // h, e, l, l, o
}
Use for...of for arrays and other iterables; use for...in only for plain objects when you need the keys.

Both techniques limit how often a function gets called. Debouncing delays execution until a specified time has passed since the last invocation. Throttling ensures a function is called at most once per specified time interval.
// DEBOUNCE - wait for the user to stop typing
function debounce(fn, delay) {
  let timer;
  return function(...args) {
    clearTimeout(timer);
    timer = setTimeout(() => fn.apply(this, args), delay);
  };
}

const handleSearch = debounce((query) => {
  fetch(`/api/search?q=${query}`);
}, 300);
// Only calls the API 300ms after the user stops typing

// THROTTLE - limit scroll/resize calls
function throttle(fn, limit) {
  let lastCall = 0;
  return function(...args) {
    const now = Date.now();
    if (now - lastCall >= limit) {
      lastCall = now;
      return fn.apply(this, args);
    }
  };
}

const handleScroll = throttle(() => {
  console.log("scroll position:", window.scrollY);
}, 200);
// Fires at most once every 200ms no matter how fast the user scrolls
CommonJS (CJS) is the older module system used in Node.js. It uses require() and module.exports and loads modules synchronously. ES Modules (ESM) is the modern standard from ES6, used in both browsers and modern Node.js. It uses import/export, supports asynchronous loading, and enables static analysis for tree-shaking.
// CommonJS (Node.js)
const express = require('express');
module.exports = { greet: function() {} };
exports.helper = function() {};

// ES Modules (modern - preferred)
import React from 'react';
import { useState, useEffect } from 'react';
import * as utils from './utils.js';
export const greet = (name) => `Hello, ${name}`;
export default function App() {}

// Dynamic import (lazy loading)
const mod = await import('./heavyModule.js');
The DOM (Document Object Model) is a tree-like representation of the HTML document. JavaScript can access and manipulate this tree to dynamically change the page content, structure, and styles.
// Selecting elements
document.getElementById('myId');
document.querySelector('.myClass');    // first match
document.querySelectorAll('div.card'); // all matches (NodeList)

// Modifying content
const el = document.querySelector('h1');
el.textContent = "New Heading"; // safe - no HTML parsing
el.innerHTML = "<em>New</em>";  // parses HTML - XSS risk with user input!

// Changing styles and classes
el.style.color = "red";
el.classList.add("active");
el.classList.remove("hidden");
el.classList.toggle("selected");
el.classList.contains("active"); // true

// Creating and inserting elements
const div = document.createElement('div');
div.textContent = "Hello!";
document.body.appendChild(div);

// Modern insertion methods
el.insertAdjacentHTML('beforeend', '<p>Added</p>');
parent.append(child1, child2); // can take strings too
When an event happens on a DOM element, it goes through three phases:
1. Capturing phase: the event travels from the window down to the target element.
2. Target phase: the event reaches the target element.
3. Bubbling phase: the event bubbles back up from the target to the window.

By default, event listeners fire during the bubbling phase.
<div id="parent">
<button id="child">Click me</button>
</div>
parent.addEventListener('click', () => console.log('Parent'));
child.addEventListener('click', () => console.log('Child'));
// Clicking the button logs: "Child" then "Parent" (bubbling)
// Stop bubbling
child.addEventListener('click', (e) => {
e.stopPropagation(); // prevents reaching parent
console.log('Child only');
});
// Capture phase - pass true as the third argument
parent.addEventListener('click', () => console.log('Parent first!'), true);
// Now "Parent first!" logs before "Child" - capture runs before target
Event delegation is a pattern where you attach a single event listener to a parent element instead of many listeners on individual children. Because events bubble, the parent listener catches events from all children.
// BAD - attaching a listener to every item (100 listeners for 100 items)
document.querySelectorAll('.item').forEach(item => {
  item.addEventListener('click', handleClick);
});

// GOOD - event delegation (1 listener total)
document.querySelector('.list').addEventListener('click', function(e) {
  const item = e.target.closest('.item');
  if (item) {
    handleClick(item);
  }
});

// Works even for dynamically added elements!
function addItem(text) {
  const li = document.createElement('li');
  li.className = 'item';
  li.textContent = text;
  document.querySelector('.list').appendChild(li); // automatically works
}
This is especially powerful for dynamic lists where items are added/removed β no need to re-attach listeners.
All three are ways to store data in the browser, but they differ in persistence, size, and accessibility.
// localStorage - persists until explicitly cleared
localStorage.setItem('user', JSON.stringify({ name: 'Alice' }));
const user = JSON.parse(localStorage.getItem('user'));
localStorage.removeItem('user');
localStorage.clear();
// Size: ~5MB, not sent to server, same origin only

// sessionStorage - cleared when the tab/browser closes
sessionStorage.setItem('token', 'abc123');
// Size: ~5MB, not sent to server, per tab

// Cookies - sent with every HTTP request
document.cookie = "session=abc; expires=Fri, 31 Dec 2025 23:59:59 GMT; path=/";
// Size: ~4KB, sent to server, can be HttpOnly/Secure
Key differences: localStorage and sessionStorage are purely client-side (not sent to server). Cookies are sent with every HTTP request, making them suitable for auth tokens. localStorage persists across sessions; sessionStorage doesn't. Cookies can be made HttpOnly (not accessible to JS) for security.
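Since document.cookie exposes all readable cookies as one semicolon-separated string, reading a single value takes a little parsing. A minimal sketch of such a parser (the helper name parseCookies is ours, not a standard API):

```javascript
// Parse a document.cookie-style string into a plain object.
// parseCookies is a hypothetical helper, not a built-in API.
function parseCookies(cookieString) {
  const jar = {};
  for (const pair of cookieString.split(';')) {
    const trimmed = pair.trim();
    if (!trimmed) continue;
    const eq = trimmed.indexOf('=');
    const name = decodeURIComponent(trimmed.slice(0, eq));
    const value = decodeURIComponent(trimmed.slice(eq + 1));
    jar[name] = value;
  }
  return jar;
}

// In the browser you would call parseCookies(document.cookie)
const cookies = parseCookies('session=abc123; theme=dark');
// cookies.session === 'abc123', cookies.theme === 'dark'
```

Note that HttpOnly cookies never appear in document.cookie at all, which is exactly why they are safer for session tokens.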
CORS (Cross-Origin Resource Sharing) is a browser security mechanism built on top of the same-origin policy, which by default blocks web pages from reading responses from a different origin than the one that served the page. CORS is enforced by the browser, not the server; the server has to explicitly allow cross-origin requests via response headers.
// The browser automatically adds an Origin header to cross-origin requests
fetch('https://api.other-site.com/data');
// Browser sends:  Origin: https://your-site.com
// Server must respond with:
//   Access-Control-Allow-Origin: https://your-site.com (or *)

// A preflight request (OPTIONS) is sent for:
// - Non-simple HTTP methods (PUT, DELETE, PATCH)
// - Custom headers
// - Content-Type other than the "simple" types
//   (text/plain, application/x-www-form-urlencoded, multipart/form-data)

// The error you see in the console:
// "Access to fetch at 'https://...' from origin 'https://...'
//  has been blocked by CORS policy"
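On the server side, "explicitly allowing" an origin usually boils down to echoing it back in Access-Control-Allow-Origin when it is on an allowlist. A framework-agnostic sketch of that decision (allowedOrigins and corsHeadersFor are our own illustrative names, not part of any library):

```javascript
// Hypothetical allowlist - the server decides which origins may read responses.
const allowedOrigins = new Set([
  'https://your-site.com',
  'https://staging.your-site.com'
]);

// Build the CORS response headers for a given request Origin.
// corsHeadersFor is a sketch, not a framework API.
function corsHeadersFor(origin) {
  if (!allowedOrigins.has(origin)) return {}; // no headers - browser blocks the response
  return {
    'Access-Control-Allow-Origin': origin,
    'Access-Control-Allow-Methods': 'GET, POST, PUT, DELETE',
    'Access-Control-Allow-Headers': 'Content-Type, Authorization',
    'Vary': 'Origin' // caches must not reuse a response for a different origin
  };
}
```

Note that returning an empty header set doesn't block the request itself; the server may still process it. The browser simply refuses to hand the response to the page.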
The Fetch API is the modern way to make HTTP requests in JavaScript. It returns Promises, is cleaner to use, and supports the Request/Response model. XMLHttpRequest (XHR) is the older way: it uses event callbacks and is more verbose.
// Fetch API - clean and modern
async function getUser(id) {
  const response = await fetch(`/api/users/${id}`, {
    method: 'GET',
    headers: { 'Content-Type': 'application/json' }
  });
  // Note: fetch doesn't reject on HTTP errors (404, 500)!
  // You must check response.ok manually
  if (!response.ok) {
    throw new Error(`HTTP error: ${response.status}`);
  }
  return response.json();
}

// POST request
await fetch('/api/users', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ name: 'Alice' })
});
fetch() only rejects on network failure, NOT on HTTP error status codes like 404 or 500. Always check response.ok!

innerHTML gets/sets HTML content including tags. textContent gets/sets raw text content (ignores HTML). innerText is similar to textContent but is "human-readable": it respects CSS styling like display:none.
<div id="box"><b>Hello</b> <span style="display:none">hidden</span></div>
const box = document.getElementById('box');
box.innerHTML; // "<b>Hello</b> <span style=...>hidden</span>"
box.textContent; // "Hello hidden" - all text, ignores CSS
box.innerText;   // "Hello" - respects CSS, hidden text excluded
// Setting:
box.innerHTML = "<strong>Bold</strong>";   // parses HTML - XSS risk!
box.textContent = "<strong>Bold</strong>"; // renders as literal text - safe
// NEVER do this with user-provided content:
box.innerHTML = userInput;   // ❌ XSS vulnerability!
box.textContent = userInput; // ✅ safe
preventDefault() prevents the browser's default behavior for an event (like following a link or submitting a form). stopPropagation() prevents the event from bubbling up to parent elements. They do completely different things and can be used together.
// preventDefault - prevent the default browser action
const link = document.querySelector('a');
link.addEventListener('click', (e) => {
  e.preventDefault(); // page won't navigate to href
  console.log('Link clicked, but not followed');
});

const form = document.querySelector('form');
form.addEventListener('submit', (e) => {
  e.preventDefault(); // form won't reload the page
  handleFormData(new FormData(e.target));
});

// stopPropagation - prevent bubbling to the parent
const button = document.querySelector('#inner-btn');
button.addEventListener('click', (e) => {
  e.stopPropagation(); // parent's click handler won't fire
  doSomething();
});

// stopImmediatePropagation - also prevents other listeners
// on the SAME element from firing
The Virtual DOM is a JavaScript representation of the actual DOM. Instead of directly manipulating the real DOM (which is slow), libraries like React maintain a lightweight virtual copy. When state changes, React creates a new virtual DOM, compares it to the previous one (called "diffing"), and only updates the actual DOM where things have actually changed (called "reconciliation"). This minimizes expensive DOM operations.
// React's Virtual DOM concept, simplified:
// 1. State changes in your component
// 2. React creates a new Virtual DOM tree
// 3. React diffs the new tree vs the old tree (diffing)
// 4. React calculates the minimal DOM updates needed
// 5. React applies only those changes to the real DOM (patching)

// A Virtual DOM element is just a plain object:
const vNode = {
  type: 'div',
  props: { className: 'card', id: 'main' },
  children: [
    { type: 'h1', props: {}, children: ['Hello'] }
  ]
};
requestAnimationFrame(callback) tells the browser you want to perform an animation, and it schedules the callback to run before the next repaint (typically 60fps = every ~16ms). It's more efficient than setInterval for animations because it pauses when the tab is hidden and synchronizes with the display refresh rate.
// BAD - setInterval for animation
setInterval(() => {
  element.style.left = parseInt(element.style.left) + 1 + 'px';
}, 16); // not synchronized with the screen refresh

// GOOD - requestAnimationFrame
let position = 0;
function animate() {
  position += 2;
  element.style.left = position + 'px';
  if (position < 500) {
    requestAnimationFrame(animate); // schedule the next frame
  }
}
requestAnimationFrame(animate); // start the animation

// Cancel it if needed
const rafId = requestAnimationFrame(animate);
cancelAnimationFrame(rafId);
A Service Worker is a JavaScript file that runs in the background, separate from the web page. It can intercept network requests, cache resources, enable offline functionality, and handle push notifications. It's the foundation of Progressive Web Apps (PWAs).
// Register the service worker
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js')
    .then(reg => console.log('SW registered'))
    .catch(err => console.error('SW failed:', err));
}

// sw.js - intercept and cache requests
self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then(cached => {
      return cached || fetch(event.request).then(response => {
        const clone = response.clone();
        caches.open('v1').then(cache => cache.put(event.request, clone));
        return response;
      });
    })
  );
});
When you compare objects with == or ===, JavaScript compares references, not content. Two distinct objects with identical content are NOT equal unless they're the same reference.
const a = { name: "Alice" };
const b = { name: "Alice" };
const c = a;
console.log(a == b);  // false - different references
console.log(a === b); // false - different references
console.log(a === c); // true - same reference!
// To compare object contents:
JSON.stringify(a) === JSON.stringify(b); // true - but fragile (key order matters)
// Better: deep equality libraries
_.isEqual(a, b); // true (Lodash)
// Or write your own deep equal:
function deepEqual(a, b) {
  if (a === b) return true; // same reference or same primitive
  if (typeof a !== typeof b) return false;
  // null has typeof 'object', so guard against it before Object.keys
  if (typeof a !== 'object' || a === null || b === null) return a === b;
  const keysA = Object.keys(a), keysB = Object.keys(b);
  if (keysA.length !== keysB.length) return false;
  return keysA.every(k => deepEqual(a[k], b[k]));
}
MutationObserver watches for changes to the DOM tree. It's the modern alternative to the deprecated mutation events. Useful for reacting to DOM changes you don't control (like from third-party libraries).
const observer = new MutationObserver((mutations) => {
mutations.forEach((mutation) => {
if (mutation.type === 'childList') {
console.log('Children changed:', mutation.addedNodes);
}
if (mutation.type === 'attributes') {
console.log(`Attribute "${mutation.attributeName}" changed`);
}
});
});
observer.observe(document.querySelector('#app'), {
childList: true, // watch for child additions/removals
attributes: true, // watch attribute changes
subtree: true, // include all descendants
characterData: true // watch text content changes
});
observer.disconnect(); // stop observing
IntersectionObserver asynchronously observes changes in the intersection of a target element with the viewport or a parent element. It's used for lazy loading images, infinite scroll, scroll-triggered animations, and "sticky" elements.
// Lazy loading images
const imageObserver = new IntersectionObserver((entries) => {
  entries.forEach(entry => {
    if (entry.isIntersecting) {
      const img = entry.target;
      img.src = img.dataset.src;    // load the actual image
      imageObserver.unobserve(img); // stop watching after load
    }
  });
}, {
  threshold: 0.1 // trigger when 10% visible
});
document.querySelectorAll('img[data-src]').forEach(img => {
  imageObserver.observe(img);
});

// Infinite scroll
const bottomObserver = new IntersectionObserver(([entry]) => {
  if (entry.isIntersecting) loadMoreContent();
});
bottomObserver.observe(document.querySelector('#sentinel'));
DOMContentLoaded fires when the HTML is fully parsed and the DOM is ready β but before images, stylesheets, and subframes have finished loading. window.onload (load event) fires when the entire page including all assets (images, CSS, fonts) has loaded.
// DOMContentLoaded - DOM is ready, assets still loading
document.addEventListener('DOMContentLoaded', () => {
  console.log('DOM ready - manipulate it here');
  document.querySelector('h1').textContent = 'Updated!';
});

// load - everything including images loaded
window.addEventListener('load', () => {
  console.log('Everything loaded, including images');
  const img = document.querySelector('img');
  console.log(img.naturalWidth); // image dimensions available
});

// Order: DOMContentLoaded fires BEFORE load
// Use DOMContentLoaded unless you specifically need assets to be ready
JavaScript uses prototypal inheritance β every object has a hidden internal link ([[Prototype]], accessible via __proto__) to another object called its prototype. When you access a property on an object and it's not found, JS walks up the prototype chain looking for it.
const animal = {
breathe() { return "breathing"; }
};
const dog = Object.create(animal); // dog's prototype is animal
dog.bark = function() { return "woof!"; };
dog.bark();    // "woof!" - own method
dog.breathe(); // "breathing" - inherited from the prototype
console.log(dog.hasOwnProperty('bark'));    // true
console.log(dog.hasOwnProperty('breathe')); // false - inherited
// Prototype chain:
// dog -> animal -> Object.prototype -> null
console.log(Object.getPrototypeOf(dog) === animal); // true
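Since property lookup is just a walk up this chain, you can make the walk explicit with Object.getPrototypeOf. A small sketch (prototypeChain is our own helper name) that collects every link into an array:

```javascript
// Walk an object's prototype chain until null, collecting each link.
// prototypeChain is an illustrative helper, not a built-in.
function prototypeChain(obj) {
  const chain = [];
  let current = Object.getPrototypeOf(obj);
  while (current !== null) {
    chain.push(current);
    current = Object.getPrototypeOf(current);
  }
  return chain;
}

const animal = { breathe() { return "breathing"; } };
const dog = Object.create(animal);
prototypeChain(dog); // [animal, Object.prototype]
```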
In classical inheritance (Java, C++), classes are blueprints: you create instances from them, and they inherit via class hierarchies. In prototypal inheritance (JavaScript), objects inherit directly from other objects; there are no real "classes" (ES6 class syntax is just syntactic sugar over prototypes).
// Prototypal - an object inheriting from another object
const vehicleProto = {
  describe() { return `I'm a ${this.type}`; }
};
const car = Object.create(vehicleProto);
car.type = "car";
car.describe(); // "I'm a car"

// ES6 class syntax (syntactic sugar over prototypes)
class Vehicle {
  constructor(type) { this.type = type; }
  describe() { return `I'm a ${this.type}`; }
}
class Car extends Vehicle {
  constructor() { super("car"); }
  honk() { return "beep!"; }
}
const myCar = new Car();
myCar.describe(); // "I'm a car" - inherited
myCar.honk();     // "beep!"
// Under the hood, Car.prototype.__proto__ === Vehicle.prototype
When you use the new keyword, JavaScript does four things automatically:
function Person(name, age) {
this.name = name;
this.age = age;
}
Person.prototype.greet = function() {
return `Hi, I'm ${this.name}`;
};
const alice = new Person("Alice", 25);
// What 'new' does under the hood:
function myNew(Constructor, ...args) {
// 1. Create a new empty object
const obj = {};
// 2. Set its prototype to Constructor.prototype
Object.setPrototypeOf(obj, Constructor.prototype);
// 3. Call Constructor with 'this' = the new object
const result = Constructor.apply(obj, args);
// 4. Return the new object (or result if it's an object)
return result instanceof Object ? result : obj;
}
const bob = myNew(Person, "Bob", 30);
bob.greet(); // "Hi, I'm Bob"
All three are methods on Function.prototype that let you explicitly set what this refers to inside the function. The difference is how they handle arguments and when the function executes.
function introduce(greeting, punctuation) {
return `${greeting}, I'm ${this.name}${punctuation}`;
}
const user = { name: "Alice" };
// call() - invoke immediately, args listed separately
introduce.call(user, "Hello", "!"); // "Hello, I'm Alice!"
// apply() - invoke immediately, args in an array
introduce.apply(user, ["Hi", "."]); // "Hi, I'm Alice."
// bind() - returns a new bound function (doesn't call immediately)
const boundIntroduce = introduce.bind(user, "Hey");
boundIntroduce("?"); // "Hey, I'm Alice?"
// Common use: preserving 'this' in callbacks
class Timer {
  constructor() { this.ticks = 0; }
  tick() { this.ticks++; }
  start() {
    setInterval(function() {
      this.ticks++; // ❌ 'this' is not the Timer instance here
    }, 1000);
    setInterval(this.tick.bind(this), 1000); // ✅ bound method
    setInterval(() => this.ticks++, 1000);   // ✅ arrow function keeps 'this'
  }
}
Getters and setters are special methods that allow you to define how a property is accessed and set. They look like properties but are actually functions running behind the scenes.
class Temperature {
#celsius; // private field
constructor(celsius) {
this.#celsius = celsius;
}
// Getter - accessed like a property: temp.fahrenheit
get fahrenheit() {
return this.#celsius * 9/5 + 32;
}
// Setter - set like a property: temp.fahrenheit = 98.6
set fahrenheit(value) {
this.#celsius = (value - 32) * 5/9;
}
get celsius() { return this.#celsius; }
set celsius(value) {
if (value < -273.15) throw new Error("Below absolute zero!");
this.#celsius = value;
}
}
const temp = new Temperature(100);
temp.fahrenheit;        // 212 - getter called
temp.fahrenheit = 98.6; // setter called - updates celsius
temp.celsius;           // 37
Private class fields (prefixed with #) are truly private: they can only be accessed from within the class body. They're not accessible from outside the class or from subclasses, unlike convention-based "private" fields like _name.
class BankAccount {
#balance = 0; // truly private field
#owner;
constructor(owner, initialBalance) {
this.#owner = owner;
this.#balance = initialBalance;
}
deposit(amount) {
if (amount > 0) this.#balance += amount;
}
get balance() { return this.#balance; }
// Private method
#validateAmount(amount) {
return amount > 0 && amount <= this.#balance;
}
withdraw(amount) {
if (this.#validateAmount(amount)) {
this.#balance -= amount;
return true;
}
return false;
}
}
const account = new BankAccount("Alice", 1000);
account.balance;  // 1000 - via the getter
account.#balance; // ❌ SyntaxError - truly private!
Instance methods are called on instances of a class and can access this. Static methods are called on the class itself (not instances) and don't have access to instance data.
class MathHelper {
// Static method - called on the CLASS
static add(a, b) { return a + b; }
static multiply(a, b) { return a * b; }
static PI = 3.14159; // static property
}
MathHelper.add(2, 3);       // ✅ 5
new MathHelper().add(2, 3); // ❌ TypeError - static methods aren't on instances
class User {
constructor(name) { this.name = name; }
// Instance method - called on instances
greet() { return `Hello, I'm ${this.name}`; }
// Static factory method
static createAdmin(name) {
const user = new User(name);
user.role = 'admin';
return user;
}
}
const alice = new User("Alice");
alice.greet(); // ✅ "Hello, I'm Alice"
const admin = User.createAdmin("Bob"); // ✅ called on the class
instanceof checks if an object is an instance of a particular class (or constructor function) by looking at the prototype chain. It returns true if the constructor's prototype appears anywhere in the object's prototype chain.
class Animal {}
class Dog extends Animal {}
const dog = new Dog();
dog instanceof Dog; // true
dog instanceof Animal; // true - because of inheritance
dog instanceof Object; // true - everything inherits from Object
dog instanceof Array; // false
// With arrays and objects:
[] instanceof Array;    // true
[] instanceof Object;   // true
({}) instanceof Object; // true (parens needed so {} isn't parsed as a block)
// instanceof can be fooled across iframes (different realms)
// Use Array.isArray() for arrays - more reliable
Array.isArray([]); // true - works across realms
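instanceof itself is customizable: a class can define a static Symbol.hasInstance method, and the operator calls it instead of walking the prototype chain. A small sketch (the class name Stringish is our own example):

```javascript
// A class can override what `instanceof` means via Symbol.hasInstance.
class Stringish {
  static [Symbol.hasInstance](value) {
    return typeof value === 'string' || value instanceof String;
  }
}

'hello' instanceof Stringish; // true - our check ran, not the prototype walk
42 instanceof Stringish;      // false
```

This is how libraries can make instanceof checks work for duck-typed values that were never constructed by the class.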
Object.create(proto) creates a new object with proto as its prototype; no constructor function is called and no new keyword is needed. It's a direct way to set up prototypal inheritance without classes.
// Object.create() - direct prototypal inheritance
const animalProto = {
  breathe() { return "breathing"; },
  describe() { return `I am a ${this.type}`; }
};
const cat = Object.create(animalProto);
cat.type = "cat";
cat.describe(); // "I am a cat" - inherited method

// vs a class / constructor function
function Animal(type) { this.type = type; }
Animal.prototype.describe = function() { return `I am a ${this.type}`; };
const lion = new Animal("lion");
lion.describe(); // "I am a lion"

// Object.create(null) - no prototype at all
const pureDict = Object.create(null); // no toString, hasOwnProperty, etc.
pureDict.key = "value";
// A safer dictionary that can't be confused with inherited properties
A mixin is a way to add methods from multiple sources to a class without using inheritance. Since JavaScript doesn't support multiple inheritance, mixins let you compose functionality from different objects.
// Mixin objects
const Serializable = {
  serialize() { return JSON.stringify(this); },
  deserialize(json) { return JSON.parse(json); }
};
const Validatable = {
  validate() {
    return Object.keys(this).every(key => this[key] !== null);
  }
};

// Apply mixins to a class
class User {
  constructor(name, email) {
    this.name = name;
    this.email = email;
  }
}
Object.assign(User.prototype, Serializable, Validatable);

const user = new User("Alice", "alice@example.com");
user.serialize(); // '{"name":"Alice","email":"alice@example.com"}'
user.validate();  // true
Inheritance (is-a relationship) creates a hierarchy: a Dog IS-A Animal. Composition (has-a relationship) builds objects by combining smaller pieces: a User HAS-A address, HAS-A subscription. The principle "favor composition over inheritance" exists because deep inheritance trees become rigid and hard to change.
// Inheritance - tight coupling, hard to change
class Animal { breathe() {} }
class Dog extends Animal { bark() {} }
class LoudDog extends Dog { loudBark() {} }
// What if you need a loud cat? Inheritance doesn't help cleanly.

// Composition - flexible, mix and match
const canBreathe = () => ({ breathe: () => "breathing" });
const canBark = () => ({ bark: () => "woof" });
const canMeow = () => ({ meow: () => "meow" });
const canLoudlyBark = () => ({ loudBark: () => "WOOF!" });

// Create any combination you need
const dog = { ...canBreathe(), ...canBark(), ...canLoudlyBark() };
const cat = { ...canBreathe(), ...canMeow() };
const weirdAnimal = { ...canBreathe(), ...canBark(), ...canMeow() };
Object.assign(target, ...sources) copies all enumerable own properties from source objects into the target object. It modifies and returns the target. It does a shallow copy.
// Merging objects
const defaults = { theme: "light", lang: "en", fontSize: 14 };
const userPrefs = { theme: "dark", fontSize: 18 };
const settings = Object.assign({}, defaults, userPrefs);
// { theme: "dark", lang: "en", fontSize: 18 }
// Note: {} as the target avoids mutating defaults

// Clone an object (shallow)
const clone = Object.assign({}, original);

// Equivalent modern way with spread:
const settings2 = { ...defaults, ...userPrefs };

// Gotcha - it MUTATES the target!
Object.assign(defaults, userPrefs); // ❌ mutates defaults
An iterator is an object that follows the iterator protocol: it has a next() method that returns { value, done }. An iterable is an object with a [Symbol.iterator]() method that returns an iterator. Arrays, strings, Maps, and Sets are all built-in iterables.
// Custom iterable
const range = {
  from: 1,
  to: 5,
  [Symbol.iterator]() {
    let current = this.from;
    const last = this.to;
    return {
      next() {
        if (current <= last) {
          return { value: current++, done: false };
        }
        return { value: undefined, done: true };
      }
    };
  }
};

// Now 'range' works with for...of, spread, and destructuring
for (const num of range) {
  console.log(num); // 1, 2, 3, 4, 5
}
[...range];           // [1, 2, 3, 4, 5]
const [a, b] = range; // a=1, b=2
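The same iterable is much shorter written with a generator, because generator functions implement the iterator protocol (next() returning { value, done }) for you:

```javascript
// Same 1..5 range, with a generator method as the Symbol.iterator.
// The function* body yields values; the engine supplies next()/done.
const range = {
  from: 1,
  to: 5,
  *[Symbol.iterator]() {
    for (let i = this.from; i <= this.to; i++) {
      yield i;
    }
  }
};

[...range]; // [1, 2, 3, 4, 5]
```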
Functional programming (FP) is a programming paradigm that treats computation as the evaluation of mathematical functions. Key principles include: pure functions, immutability, avoiding side effects, and composing functions. JavaScript supports both FP and OOP.
// Pure function - same inputs always give the same output, no side effects
const add = (a, b) => a + b; // ✅ pure
let total = 0;
const addToTotal = (n) => { total += n; }; // ❌ impure - side effect

// Immutability - don't mutate, create new
const arr = [1, 2, 3];
const newArr = [...arr, 4]; // ✅ new array
arr.push(4);                // ❌ mutates the original

// Function composition
const compose = (...fns) => x => fns.reduceRight((v, f) => f(v), x);
const double = x => x * 2;
const addOne = x => x + 1;
const doubleAndAddOne = compose(addOne, double);
doubleAndAddOne(5); // 11

// Higher-order functions - functions that take/return functions
const multiply = factor => number => number * factor;
const triple = multiply(3);
[1, 2, 3].map(triple); // [3, 6, 9]
Shallow equality checks if two objects are the same reference (or if their top-level properties are equal). Deep equality recursively checks that all nested properties and values are identical. JavaScript's === does reference equality for objects, not value equality.
const obj1 = { a: 1, b: { c: 2 } };
const obj2 = { a: 1, b: { c: 2 } };
const obj3 = obj1;
// Reference equality
obj1 === obj2; // false (different references)
obj1 === obj3; // true (same reference)
// Shallow equality check
function shallowEqual(a, b) {
const keysA = Object.keys(a);
const keysB = Object.keys(b);
if (keysA.length !== keysB.length) return false;
return keysA.every(key => a[key] === b[key]);
}
shallowEqual(obj1, obj2); // false - obj1.b and obj2.b are different object references
// Deep equality
JSON.stringify(obj1) === JSON.stringify(obj2); // true (but fragile - key order matters)
_.isEqual(obj1, obj2); // true - Lodash deep equal
Lazy loading is a technique to defer loading of non-critical resources until they are needed, improving initial page load time. In JavaScript, you can lazy load modules, images, and components.
// Dynamic import - lazy load a JS module
button.addEventListener('click', async () => {
  const { openModal } = await import('./modal.js');
  openModal(); // module only loaded on click
});

// React lazy loading
const LazyComponent = React.lazy(() => import('./HeavyComponent'));
function App() {
  return (
    <React.Suspense fallback={<Loading />}>
      <LazyComponent />
    </React.Suspense>
  );
}

// Lazy load images with IntersectionObserver
const observer = new IntersectionObserver(entries => {
  entries.forEach(entry => {
    if (entry.isIntersecting) {
      entry.target.src = entry.target.dataset.src;
      observer.unobserve(entry.target);
    }
  });
});
document.querySelectorAll('img[data-src]').forEach(img => observer.observe(img));

// HTML native lazy loading
// <img src="..." loading="lazy" />
Tree shaking is a form of dead code elimination. When you bundle JavaScript with tools like Webpack or Rollup, unused exports from ES modules are automatically removed from the final bundle. It's called "tree shaking" because you shake the dependency tree to make dead leaves (unused code) fall off.
// utils.js - exports multiple functions
export const add = (a, b) => a + b;
export const subtract = (a, b) => a - b;
export const multiply = (a, b) => a * b; // not used anywhere

// main.js - only imports what it needs
import { add, subtract } from './utils.js';
// multiply is never imported - the tree shaker removes it from the bundle!

// Requirements for tree shaking to work:
// - Must use ES Modules (import/export), NOT CommonJS (require)
// - Module should be side-effect free, declared in package.json: "sideEffects": false
// - Build tool must support it (Webpack 4+, Rollup, Vite)

// Bad - can defeat tree shaking
import * as utils from './utils'; // may pull in everything
import _ from 'lodash';           // imports all of lodash!

// Good
import { debounce } from 'lodash-es';
// or
import debounce from 'lodash/debounce';
Code splitting breaks your JavaScript bundle into smaller chunks that can be loaded on demand. Instead of loading the entire application upfront, you load what's needed for the current page/route. This dramatically improves initial load time.
// Webpack/Vite automatically split on dynamic import
// Route-based code splitting with React Router:
import { lazy, Suspense } from 'react';
import { Routes, Route } from 'react-router-dom';

const Home = lazy(() => import('./pages/Home'));
const Dashboard = lazy(() => import('./pages/Dashboard'));
const Settings = lazy(() => import('./pages/Settings'));

function App() {
  return (
    <Suspense fallback={<PageLoader />}>
      <Routes>
        <Route path="/" element={<Home />} />
        <Route path="/dashboard" element={<Dashboard />} />
        <Route path="/settings" element={<Settings />} />
      </Routes>
    </Suspense>
  );
}
// Each page is its own bundle, loaded only when visited
The Observer pattern defines a one-to-many dependency between objects so that when one object (the subject/publisher) changes state, all its dependents (observers/subscribers) are notified automatically. It's also called the Publish-Subscribe (PubSub) pattern.
class EventEmitter {
constructor() {
this.events = {};
}
on(event, listener) {
if (!this.events[event]) this.events[event] = [];
this.events[event].push(listener);
return this; // chainable
}
off(event, listener) {
this.events[event] = this.events[event]?.filter(l => l !== listener);
}
emit(event, ...args) {
this.events[event]?.forEach(listener => listener(...args));
}
}
const emitter = new EventEmitter();
emitter.on('data', (data) => console.log('Received:', data));
emitter.on('data', (data) => saveToDb(data));
emitter.emit('data', { user: 'Alice' }); // both listeners fire
// This is how Node.js EventEmitter works!
The Singleton pattern ensures a class has only one instance and provides a global access point to it. Useful for things like configuration, database connections, or shared state.
class Config {
static #instance = null;
#settings = {};
constructor() {
if (Config.#instance) return Config.#instance; // return existing
Config.#instance = this;
}
set(key, value) { this.#settings[key] = value; return this; }
get(key) { return this.#settings[key]; }
}
const config1 = new Config();
const config2 = new Config();
console.log(config1 === config2); // true β same instance!
config1.set('apiUrl', 'https://api.example.com');
config2.get('apiUrl'); // "https://api.example.com" β shared!
// Module pattern also creates singletons naturally in ESM
// because modules are cached after first import
The Factory pattern provides a way to create objects without specifying the exact class. A factory function returns different types of objects based on input, hiding the creation logic from the caller.
function createUser(type, name) {
const baseUser = {
name,
createdAt: new Date(),
login() { console.log(`${this.name} logged in`); }
};
switch(type) {
case 'admin':
return { ...baseUser, role: 'admin', deleteUser() {} };
case 'moderator':
return { ...baseUser, role: 'moderator', banUser() {} };
default:
return { ...baseUser, role: 'user' };
}
}
const admin = createUser('admin', 'Alice');
const user = createUser('user', 'Bob');
// Caller doesn't need to know how each type is created
// Real-world: React component factories, API client factories
function createApiClient(baseURL) {
return {
get: (path) => fetch(baseURL + path),
post: (path, data) => fetch(baseURL + path, { method: 'POST', body: JSON.stringify(data) })
};
}
const api = createApiClient('https://api.example.com');
Imperative programming describes HOW to do something β you write step-by-step instructions. Declarative programming describes WHAT you want β you express the outcome without specifying all the steps. Modern JavaScript (and especially React) favors declarative style.
const numbers = [1, 2, 3, 4, 5];

// Imperative - HOW to do it (step by step)
const doubled = [];
for (let i = 0; i < numbers.length; i++) {
  doubled.push(numbers[i] * 2);
}

// Declarative - WHAT you want (renamed to avoid redeclaring doubled)
const doubledDeclarative = numbers.map(n => n * 2);

// Imperative DOM update
const el = document.createElement('p');
el.className = 'message';
el.textContent = isLoggedIn ? 'Welcome!' : 'Please login';
container.appendChild(el);

// Declarative (React)
<p className="message">
  {isLoggedIn ? 'Welcome!' : 'Please login'}
</p>
A pure function has two properties: (1) Given the same inputs, it always returns the same output. (2) It has no side effects β it doesn't modify anything outside its scope (no mutating arguments, no API calls, no DOM manipulation, no logging).
// Pure functions
const add = (a, b) => a + b;               // ✅ always the same result
const double = arr => arr.map(n => n * 2); // ✅ creates a new array
const greet = name => `Hello, ${name}`;    // ✅ no side effects

// Impure functions
let count = 0;
const increment = () => count++;  // ❌ modifies external state
const getTime = () => Date.now(); // ❌ different result each call
const addUser = (user) => {
  db.save(user); // ❌ side effect (I/O)
  return user;
};

// Benefits of pure functions:
// - Easy to test (no mocking needed)
// - Easy to reason about
// - Safe to memoize
// - Safe to run in parallel
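"Safe to memoize" follows directly from purity: if the output depends only on the inputs, results can be cached per argument. A minimal single-argument sketch using a Map (the helper name memoize is ours):

```javascript
// Cache the results of a pure single-argument function.
// This is only safe because a pure function always returns
// the same output for the same input.
function memoize(fn) {
  const cache = new Map();
  return function(arg) {
    if (cache.has(arg)) return cache.get(arg);
    const result = fn(arg);
    cache.set(arg, result);
    return result;
  };
}

let calls = 0;
const slowSquare = (n) => { calls++; return n * n; };
const fastSquare = memoize(slowSquare);
fastSquare(4); // 16 - computed
fastSquare(4); // 16 - served from cache; slowSquare not called again
```

Note this sketch keys the cache on a single argument by identity; multi-argument memoization needs a composite key.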
A higher-order function is a function that either (1) takes one or more functions as arguments, or (2) returns a function as its result. They're a fundamental concept in functional programming and are used all over JS.
// Takes a function as an argument
[1, 2, 3].map(n => n * 2);    // map is a higher-order function
[1, 2, 3].filter(n => n > 1); // filter is a higher-order function
setTimeout(callback, 1000);   // setTimeout is a higher-order function

// Returns a function
function multiplier(factor) {
  return (number) => number * factor; // returns a function
}
const triple = multiplier(3);
triple(5); // 15

// Both - takes AND returns a function
function withLogging(fn) {
  return function(...args) {
    console.log(`Calling ${fn.name} with`, args);
    const result = fn(...args);
    console.log(`Result:`, result);
    return result;
  };
}
const loggedAdd = withLogging(add);
loggedAdd(2, 3); // logs input and output, returns 5
A regular (synchronous) iterator's next() returns a { value, done } result immediately. An async iterator's next() returns a Promise that resolves to { value, done }. Async iterators are consumed with for await...of and are a natural fit for async data streams like paginated APIs, file streams, or WebSocket messages.
// Async iterator - fetching paginated data
async function* fetchAllPages(url) {
  let page = 1;
  let hasMore = true;
  while (hasMore) {
    const res = await fetch(`${url}?page=${page}`);
    const data = await res.json();
    yield data.items;
    hasMore = data.hasNextPage;
    page++;
  }
}

// for await...of consumes async iterables
for await (const items of fetchAllPages('/api/products')) {
  items.forEach(item => console.log(item.name));
}

// Stream reading (Node.js)
const stream = fs.createReadStream('large-file.txt');
for await (const chunk of stream) {
  processChunk(chunk);
}
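Since the paginated example needs a live API, here's a self-contained async generator you can run anywhere to see the same mechanics (the countdown name and delay are just illustrative):

```javascript
// Each next() resolves asynchronously, so for await...of pauses between values
async function* countdown(from) {
  for (let n = from; n > 0; n--) {
    await new Promise((r) => setTimeout(r, 10)); // simulate async work
    yield n;
  }
}

(async () => {
  const seen = [];
  for await (const n of countdown(3)) seen.push(n);
  console.log(seen); // [3, 2, 1]
})();
```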
The module pattern uses an immediately invoked function expression (IIFE) to create private scope and expose only what you want publicly. It's the pre-ES6 way of creating encapsulated modules.
// IIFE Module Pattern
const counter = (function() {
  let _count = 0; // private variable

  function _validate(n) { // private function
    return typeof n === 'number';
  }

  return { // public API
    increment(n = 1) { if (_validate(n)) _count += n; },
    decrement(n = 1) { if (_validate(n)) _count -= n; },
    getCount() { return _count; },
    reset() { _count = 0; }
  };
})();

counter.increment(5);
counter.getCount(); // 5
counter._count;     // undefined → truly private!
Today we use ES Modules instead, but understanding IIFE patterns is still important for reading older codebases.
Tail call optimization (TCO) is when the compiler/engine optimizes a recursive function where the recursive call is the last operation (a "tail call"), reusing the stack frame instead of creating a new one. This prevents stack overflow for deep recursion.
// Regular recursion - NOT tail-call optimizable
function factorial(n) {
  if (n <= 1) return 1;
  return n * factorial(n - 1); // the multiply happens AFTER the recursive call
}
// factorial(100000) → RangeError: Maximum call stack size exceeded

// Tail-recursive version - the recursive call is the LAST operation
function factorialTCO(n, acc = 1) {
  if (n <= 1) return acc;
  return factorialTCO(n - 1, n * acc); // recursive call is last
}

// Note: TCO is not widely supported (only Safari, in strict mode).
// In practice, use iteration for performance-critical code:
function factorialIterative(n) {
  let result = 1;
  for (let i = 2; i <= n; i++) result *= i;
  return result;
}
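Since engines mostly don't implement TCO, a trampoline is a portable way to get the same stack safety: the tail-recursive function returns a thunk instead of calling itself, and a plain loop unwinds the thunks. A minimal sketch (trampoline and sumBelow are illustrative names, not standard APIs):

```javascript
// A trampoline converts "return a thunk" recursion into a loop,
// so deep recursion can't overflow the stack even without TCO.
function trampoline(fn) {
  return (...args) => {
    let result = fn(...args);
    while (typeof result === "function") {
      result = result(); // keep unwrapping thunks iteratively
    }
    return result;
  };
}

// Tail-recursive sum, but each "recursive call" returns a thunk
function sumBelow(n, acc = 0) {
  return n <= 0 ? acc : () => sumBelow(n - 1, acc + n);
}

const safeSum = trampoline(sumBelow);
safeSum(100000); // 5000050000 - no stack overflow
```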
JavaScript's event loop processes tasks in a specific priority order. Macrotasks (or just "tasks") include: setTimeout, setInterval, I/O, UI rendering, script execution. Microtasks have higher priority and include: Promise callbacks (.then/.catch), queueMicrotask, MutationObserver. The microtask queue is completely drained after every macrotask, before the next macrotask runs.
console.log("1 - Script start");                         // synchronous
setTimeout(() => console.log("6 - setTimeout"), 0);      // macrotask
Promise.resolve()
  .then(() => console.log("3 - Promise 1"))              // microtask, queued first
  .then(() => console.log("5 - Promise 2"));             // microtask, queued only after Promise 1 runs
queueMicrotask(() => console.log("4 - queueMicrotask")); // microtask, queued second
console.log("2 - Script end");                           // synchronous
// Output: 1, 2, 3, 4, 5, 6
// Synchronous code first → all microtasks → macrotasks
JavaScript automatically manages memory. When you create variables, functions, or objects, memory is allocated. When those things are no longer reachable (no references pointing to them), the garbage collector reclaims that memory. The main algorithm is mark-and-sweep.
// Memory is automatically freed when references are gone
function createUser() {
  const user = { name: "Alice", data: new Array(1000000) };
  return user.name; // the user object becomes unreachable after return
}
// the user object is garbage collected after the function returns

// Memory leaks - when you accidentally keep references

// 1. Accidental global variables
function leak() {
  leaked = "I'm global now!"; // no var/let/const → becomes global!
}

// 2. Detached DOM nodes
let btn = document.querySelector('button');
btn.addEventListener('click', handler);
btn.remove(); // removed from DOM but still referenced!
btn = null;   // ✅ release the reference

// 3. Forgotten intervals
const timer = setInterval(doSomething, 1000);
clearInterval(timer); // always clear when no longer needed

// 4. Closures holding large data
function createClosure() {
  const bigData = new Array(1000000);
  return () => bigData.length; // bigData stays alive as long as the closure does
}
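One tool worth mentioning alongside these leaks: WeakMap holds its keys weakly, so attaching metadata to an object doesn't keep that object alive. A small sketch (tagElement is an illustrative helper, and a plain object stands in for a DOM node):

```javascript
// WeakMap keys are held weakly: once the key object is unreachable,
// its entry becomes eligible for garbage collection - no manual cleanup.
const metadata = new WeakMap();

function tagElement(el, info) {
  metadata.set(el, info); // does NOT keep `el` alive by itself
}

let node = { tag: "button" }; // stand-in for a DOM node
tagElement(node, { clicks: 0 });
console.log(metadata.get(node)); // { clicks: 0 }

node = null; // when unreachable, the WeakMap entry can be collected
// (with a plain Map, the entry would leak until you call map.delete)
```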
A Web Worker lets you run JavaScript in a background thread, separate from the main UI thread. This prevents heavy computations from blocking the UI and making the page unresponsive. Workers can't access the DOM β they communicate with the main thread via message passing.
// main.js - create and communicate with the worker
const worker = new Worker('worker.js');

worker.postMessage({ data: largeArray }); // send to worker

worker.onmessage = (event) => {
  console.log('Result from worker:', event.data);
  displayResult(event.data.result);
};
worker.onerror = (err) => console.error(err);

// Terminate when done
worker.terminate();

// worker.js - runs in a background thread
self.onmessage = (event) => {
  const { data } = event.data;
  // Heavy computation (won't block the main thread!)
  const result = data.reduce((sum, n) => sum + heavyCalculation(n), 0);
  self.postMessage({ result }); // send back to main
};
This is essentially the same as Q83 (microtasks vs macrotasks) but let me explain the queue mechanics more deeply. The event loop has two queues. The task queue (macrotask queue) holds macrotasks. The microtask queue holds microtasks. After each macrotask completes, the event loop empties the entire microtask queue before pulling the next macrotask.
// Demonstration of queue ordering
const log = [];

setTimeout(() => log.push('macro 1'), 0);
setTimeout(() => log.push('macro 2'), 0);

Promise.resolve()
  .then(() => { log.push('micro 1'); })
  .then(() => { log.push('micro 2'); });

Promise.resolve().then(() => { log.push('micro 3'); });

// After the current synchronous code:
// Microtask queue: [micro 1, micro 3]  (micro 2 is only queued once micro 1 runs)
// Task queue:      [macro 1, macro 2]
// Execution order: micro 1 → micro 3 → micro 2 → macro 1 → macro 2
// All microtasks drain before ANY macrotask runs
AbortController lets you cancel ongoing fetch requests or any other operation that supports the AbortSignal interface. This is essential for preventing memory leaks in React when a component unmounts while a fetch is in progress.
// Basic usage
const controller = new AbortController();
const { signal } = controller;

fetch('/api/data', { signal })
  .then(res => res.json())
  .then(data => console.log(data))
  .catch(err => {
    if (err.name === 'AbortError') {
      console.log('Fetch was cancelled');
    } else {
      console.error(err);
    }
  });

// Cancel it
controller.abort();

// React use case - cancel on unmount
useEffect(() => {
  const controller = new AbortController();

  async function fetchData() {
    try {
      const res = await fetch('/api/user', { signal: controller.signal });
      const data = await res.json();
      setUser(data);
    } catch (err) {
      if (err.name !== 'AbortError') setError(err);
    }
  }

  fetchData();
  return () => controller.abort(); // cleanup on unmount
}, []);
```
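AbortSignal isn't limited to fetch: you can make any promise cancellable by listening to the signal yourself. A minimal sketch (abortable is an assumed helper name, not a built-in):

```javascript
// Wrap any promise so it rejects when the given AbortSignal fires
function abortable(promise, signal) {
  return new Promise((resolve, reject) => {
    if (signal.aborted) {
      return reject(new DOMException("Aborted", "AbortError"));
    }
    signal.addEventListener("abort", () =>
      reject(new DOMException("Aborted", "AbortError"))
    );
    promise.then(resolve, reject);
  });
}

const controller2 = new AbortController();
const slow = new Promise((resolve) => setTimeout(() => resolve("done"), 1000));

abortable(slow, controller2.signal).catch((err) => {
  console.log(err.name); // "AbortError"
});
controller2.abort(); // cancel immediately
```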
JSON (JavaScript Object Notation) is a lightweight data-interchange format. It's a string that represents data. In JavaScript, you use JSON.stringify() to convert an object to a JSON string, and JSON.parse() to convert a JSON string back into an object.
const user = { name: "Alice", age: 25, active: true };
// Object β JSON string
const jsonStr = JSON.stringify(user);
// '{"name":"Alice","age":25,"active":true}'
// Formatted with indentation
JSON.stringify(user, null, 2);
// JSON string β Object
const parsed = JSON.parse(jsonStr);
// Gotchas:
JSON.stringify(undefined);            // undefined (not serializable)
JSON.stringify({ fn: () => {} });     // '{}' → functions are dropped
JSON.stringify({ date: new Date() }); // the Date becomes an ISO string
// Replacer array - whitelist of keys to include
JSON.stringify(user, ['name', 'age']); // only these keys
// Reviver function
JSON.parse(jsonStr, (key, value) => {
if (key === 'age') return value + 1; // transform during parse
return value;
});
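A common real-world reviver use is undoing the Date gotcha above: JSON.stringify flattens Dates to ISO strings, and a reviver can turn them back. A sketch (parseWithDates is an illustrative helper, and the ISO-string regex is a heuristic):

```javascript
// Revive ISO date strings back into Date objects during parse
const isoRe = /^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}/;

function parseWithDates(json) {
  return JSON.parse(json, (key, value) =>
    typeof value === "string" && isoRe.test(value) ? new Date(value) : value
  );
}

const str = JSON.stringify({ name: "Alice", joined: new Date("2024-01-15") });
const obj = parseWithDates(str);
console.log(obj.joined instanceof Date); // true - round-tripped back to a Date
```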
The Reflect object provides methods for interceptable JavaScript operations β the same operations that Proxy traps intercept. It's a companion to Proxy and provides default behavior for proxy traps, making it easier to write handlers.
// Reflect methods mirror the Proxy trap names
const target = { name: "Alice" };
Reflect.get(target, prop);            // like target[prop]
Reflect.set(target, prop, value);     // like target[prop] = value
Reflect.has(target, prop);            // like prop in target
Reflect.deleteProperty(target, prop); // like delete target[prop]
Reflect.ownKeys(target);              // all own keys, including Symbols

// Best practice: use Reflect inside Proxy handlers for the default behavior
const proxy = new Proxy(target, {
  get(target, prop, receiver) {
    console.log(`Getting ${prop}`);
    return Reflect.get(target, prop, receiver); // delegate to default
  },
  set(target, prop, value, receiver) {
    console.log(`Setting ${prop} = ${value}`);
    return Reflect.set(target, prop, value, receiver); // delegate
  }
});
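One concrete, runnable difference worth knowing: Reflect.ownKeys is the one call that returns string and Symbol keys together, where Object.keys skips Symbols.

```javascript
// Object.keys vs Reflect.ownKeys with a Symbol-keyed property
const id = Symbol("id");
const user = { name: "Alice", [id]: 42 };

console.log(Object.keys(user));     // ["name"]            - Symbols skipped
console.log(Reflect.ownKeys(user)); // ["name", Symbol(id)] - everything
console.log(Reflect.has(user, "name")); // true, same as 'name' in user
```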
Tagged templates let you parse template literals with a function. The tag function receives the raw strings and all the interpolated values, letting you process the template however you want. This is how libraries like styled-components and GraphQL's gql work.
function highlight(strings, ...values) {
return strings.reduce((result, str, i) => {
const value = values[i - 1];
return result + `<mark>${value}</mark>` + str;
});
}
const name = "Alice";
const score = 95;
const msg = highlight`${name} scored ${score} on the test`;
// "<mark>Alice</mark> scored <mark>95</mark> on the test"
// SQL tagged template (safe parameterized queries)
function sql(strings, ...params) {
  return {
    // interleave: strings[0] + $1 + strings[1] + $2 + ...
    text: strings.reduce((acc, s, i) => acc + '$' + i + s),
    values: params
  };
}
const query = sql`SELECT * FROM users WHERE id = ${userId} AND role = ${role}`;
// Real-world: styled-components
const Button = styled.button`
background: ${props => props.primary ? 'blue' : 'white'};
padding: 8px 16px;
`;
There are multiple ways to copy objects in JavaScript, each with different trade-offs regarding depth, performance, and what types they handle.
const original = {
name: "Alice",
age: 25,
address: { city: "Delhi" },
scores: [90, 85, 92],
greet: function() { return "hello"; }
};
// 1. Spread - shallow copy
const spread = { ...original };

// 2. Object.assign - shallow copy
const assigned = Object.assign({}, original);

// 3. JSON round-trip - deep copy (loses functions, Date, undefined, Symbol)
const json = JSON.parse(JSON.stringify(original));
// json.greet is gone!

// 4. structuredClone - deep copy (modern, recommended)
const structured = structuredClone(original);
// handles Date, RegExp, ArrayBuffer - but NOT functions

// 5. Lodash cloneDeep - handles everything
const lodash = _.cloneDeep(original);
// 6. Custom recursive clone
function deepClone(obj) {
if (obj === null || typeof obj !== 'object') return obj;
if (Array.isArray(obj)) return obj.map(deepClone);
return Object.fromEntries(
Object.entries(obj).map(([k, v]) => [k, deepClone(v)])
);
}
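One more trade-off worth knowing for interviews: circular references. The JSON round-trip throws on them, while structuredClone preserves the cycle:

```javascript
// Circular reference: JSON round-trip throws, structuredClone copes
const root = { name: "root" };
root.self = root; // circular reference

let jsonFailed = false;
try {
  JSON.stringify(root); // TypeError: Converting circular structure to JSON
} catch (e) {
  jsonFailed = true;
}

const copy = structuredClone(root); // fine - the cycle is preserved
console.log(copy.self === copy); // true, and copy !== root
```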
Function composition is combining two or more functions to produce a new function. The output of one function becomes the input of the next. It's a core concept in functional programming.
// Manual composition
const double = x => x * 2;
const addTen = x => x + 10;
const square = x => x * x;

const result = square(addTen(double(5)));
// double(5) = 10, addTen(10) = 20, square(20) = 400

// compose - right to left (mathematical order)
const compose = (...fns) => x => fns.reduceRight((acc, fn) => fn(acc), x);
const transform = compose(square, addTen, double);
transform(5); // 400 - same as above

// pipe - left to right (easier to read)
const pipe = (...fns) => x => fns.reduce((acc, fn) => fn(acc), x);
const transform2 = pipe(double, addTen, square);
transform2(5); // 400

// Real-world: a data transformation pipeline
const processUsers = pipe(
  users => users.filter(u => u.active),
  users => users.map(u => ({ ...u, name: u.name.toUpperCase() })),
  users => users.sort((a, b) => a.name.localeCompare(b.name))
);
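The same idea extends to async steps: chain each function through .then instead of calling it directly. A sketch (pipeAsync, fetchUser, and upperName are illustrative names, with fetchUser stubbed so it runs without a network):

```javascript
// pipeAsync: like pipe, but each step may return a promise
const pipeAsync = (...fns) => (x) =>
  fns.reduce((acc, fn) => acc.then(fn), Promise.resolve(x));

const fetchUser = async (id) => ({ id, name: "alice" }); // stubbed async step
const upperName = (u) => ({ ...u, name: u.name.toUpperCase() }); // sync step

pipeAsync(fetchUser, upperName)(7).then((u) => console.log(u.name)); // "ALICE"
```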
An IIFE is a function that is defined and called immediately. It creates its own scope, preventing variables from polluting the global scope. It was very commonly used before ES modules existed.
// Basic IIFE
(function() {
  const localVar = "I'm private";
  console.log("IIFE runs immediately!");
})();
// localVar doesn't exist outside

// Arrow function IIFE
(() => {
  const setup = "Setup code here";
  initializeApp();
})();

// IIFE with parameters
(function(global, factory) {
  factory(global);
})(window, function(window) {
  // Library code here
});

// IIFE with a return value
const config = (() => {
  const env = process.env.NODE_ENV;
  return {
    apiUrl: env === 'production' ? 'https://api.prod.com' : 'http://localhost:3000',
    debug: env !== 'production'
  };
})();
Both can create arrays from iterables, but Array.from() is more powerful β it accepts a second argument (a mapping function) and can handle array-like objects (things with a length property but not iterable).
// Array.from() - takes array-like objects AND iterables
Array.from("hello");                    // ['h','e','l','l','o']
Array.from(new Set([1, 2, 3]));         // [1, 2, 3]
Array.from({ length: 5 }, (_, i) => i); // [0, 1, 2, 3, 4]
Array.from({ length: 3 }, () => "π");   // ["π", "π", "π"]

// DOM NodeList (array-like; not iterable in old browsers)
Array.from(document.querySelectorAll('div')); // ✅ always works
[...document.querySelectorAll('div')];        // ✅ modern browsers only

// Spread - only works on true iterables
[..."hello"];            // ['h','e','l','l','o']
[...new Set([1, 2, 3])]; // [1, 2, 3]
[...{ length: 5 }];      // ❌ TypeError - not iterable

// Create an array of N items
Array.from({ length: 10 }, (_, i) => i * 2);
// [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
window is the global browser object β it represents the browser window/tab and is the global scope in browsers. document is a property of window that represents the HTML document currently loaded. All global variables and functions are properties of window.
// window - the global browser environment
window.innerWidth;  // browser viewport width
window.location;    // URL, href, search, hash
window.history;     // browser history
window.navigator;   // browser/device info
window.alert("Hi"); // same as just alert("Hi")
window.setTimeout;  // same as setTimeout

// All var globals are on window
var x = 5;
window.x; // 5 (but not let/const)

// document - the HTML document
document.querySelector('.btn');
document.getElementById('app');
document.createElement('div');
document.title = "New Title";
document.body;
document.head;
document.cookie;

// document is part of window
window.document === document; // true
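A related follow-up interviewers sometimes ask: what names the global object outside the browser? Since ES2020, globalThis does, everywhere. A small sketch of the old environment-sniffing it replaces:

```javascript
// globalThis names the global object in any environment:
// window in browsers, self in workers, global in Node
console.log(typeof globalThis); // "object" everywhere

// The old, brittle way - environment sniffing:
const globalObj =
  typeof window !== "undefined" ? window :
  typeof self !== "undefined" ? self :
  typeof global !== "undefined" ? global : undefined;

console.log(globalObj === globalThis); // true in all of those environments
```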
In a Single Page Application (SPA), routing is client-side navigation β JavaScript handles changing the content based on the URL without page reloads. Deep linking refers to the ability to link directly to a specific route/page within the SPA, even on first load. SPAs need server-side configuration to support deep linking β the server must return the same HTML for all routes.
// Client-side routing (React Router)
import { BrowserRouter, Routes, Route } from 'react-router-dom';

function App() {
  return (
    <BrowserRouter>
      <Routes>
        <Route path="/" element={<Home />} />
        <Route path="/products" element={<Products />} />
        <Route path="/products/:id" element={<ProductDetail />} />
      </Routes>
    </BrowserRouter>
  );
}

// Two history modes:
// 1. HTML5 History API (BrowserRouter) - /products/123
//    The server must redirect all requests to index.html
// 2. Hash-based (HashRouter) - /#/products/123
//    Works without server config (the hash never reaches the server)
When you access a property on an object, JavaScript first looks at the object's own properties. If not found, it looks at the prototype (the object's [[Prototype]]). If still not found, it keeps going up the chain until it reaches Object.prototype, whose prototype is null; only if the property is found nowhere does the lookup return undefined.
function Dog(name) { this.name = name; }
Dog.prototype.speak = function() { return `${this.name} says woof`; };
Dog.prototype.breathe = function() { return "breathing"; };
const rex = new Dog("Rex");
// Property lookup chain for rex.speak():
// 1. Check rex (own properties): { name: "Rex" } - no 'speak'
// 2. Check Dog.prototype: { speak: fn, breathe: fn } - FOUND!

// Lookup chain for rex.toString():
// 1. rex - no toString
// 2. Dog.prototype - no toString
// 3. Object.prototype - FOUND! toString lives here

// Lookup chain for rex.nonExistent:
// 1. rex → 2. Dog.prototype → 3. Object.prototype → 4. null → undefined
rex.hasOwnProperty('name');  // true - own property
rex.hasOwnProperty('speak'); // false - lives on the prototype
Great interview question to show maturity. Beyond the classics, in modern frontend work I commonly use:
// 1. Module Pattern (ES6 modules) - encapsulation
export const userService = {
  getUser: (id) => fetch(`/api/users/${id}`),
  createUser: (data) => fetch('/api/users', { method: 'POST' })
};

// 2. Observer / Pub-Sub - event systems
const store = createStore(reducer); // Redux is the observer pattern

// 3. Strategy Pattern - swap algorithms
const sorters = {
  bubble: (arr) => { /* ... */ },
  quick: (arr) => { /* ... */ },
  merge: (arr) => { /* ... */ }
};
function sortData(data, strategy = 'quick') {
  return sorters[strategy](data);
}

// 4. Decorator Pattern - add behavior
function withAuth(fn) {
  return async function(...args) {
    if (!isAuthenticated()) throw new Error('Not authenticated');
    return fn(...args);
  };
}
const secureDeleteUser = withAuth(deleteUser);

// 5. Proxy Pattern - intercept operations
// 6. Factory Pattern - create objects
// 7. Singleton - shared instances
Compile-time errors (or parse-time errors) are caught before code executes β syntax errors that prevent the JS engine from even parsing the code. Runtime errors occur while the code is executing β the code was syntactically valid, but something went wrong during execution.
// Compile-time / SyntaxError - caught before anything runs
const x = ;     // SyntaxError: Unexpected token ';'
function() {}   // SyntaxError: Function statements require a name
if (true { }    // SyntaxError: missing ) after condition
// Code CANNOT run at all if there's a syntax error

// Runtime errors - code runs, then fails
null.property;  // TypeError: Cannot read properties of null
undeclaredVar;  // ReferenceError: undeclaredVar is not defined
decodeURI('%'); // URIError: URI malformed

// TypeScript helps catch type errors at "compile" time:
function add(a: number, b: number) { return a + b; }
add("hello", 5); // TS Error - caught before runtime

// Handle runtime errors gracefully:
try {
  riskyOperation();
} catch (e) {
  if (e instanceof TypeError) { /* handle */ }
  else if (e instanceof RangeError) { /* handle */ }
}
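Branching on error types works even better with your own Error subclasses, which you can throw for domain failures and catch by instanceof. A sketch (ValidationError and setAge are illustrative names, not built-ins):

```javascript
// Custom error subclass: runtime failures become self-describing
class ValidationError extends Error {
  constructor(field, message) {
    super(message);
    this.name = "ValidationError";
    this.field = field; // extra context a plain Error lacks
  }
}

function setAge(age) {
  if (typeof age !== "number") {
    throw new ValidationError("age", "age must be a number");
  }
  return age;
}

let caught;
try {
  setAge("twelve"); // runtime error, thrown deliberately
} catch (e) {
  caught = e;
}
console.log(caught instanceof ValidationError); // true - catch can branch on type
```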
This is a broad question and I love it β it shows thinking in systems. I'd approach it in layers:
// 1. MEASURE FIRST - don't optimize blindly
// Use the Chrome DevTools Performance tab; use Lighthouse for an overall audit
console.time('operation');
doHeavyThing();
console.timeEnd('operation');

// 2. REDUCE BUNDLE SIZE
// Tree shaking, code splitting, lazy loading
const HeavyComponent = React.lazy(() => import('./Heavy'));

// 3. AVOID UNNECESSARY RE-RENDERS (React)
const MemoizedComp = React.memo(MyComponent);
const memoValue = useMemo(() => expensiveCalc(data), [data]);
const memoCallback = useCallback(() => handler(), [dep]);

// 4. EFFICIENT DOM MANIPULATION
// Batch DOM reads and writes (avoid layout thrashing)
const heights = elements.map(el => el.offsetHeight);              // batch reads...
elements.forEach((el, i) => el.style.height = heights[i] + 'px'); // ...then writes

// 5. DEBOUNCE/THROTTLE expensive event handlers
window.addEventListener('scroll', throttle(handleScroll, 100));
searchInput.addEventListener('input', debounce(search, 300));

// 6. VIRTUALIZE LONG LISTS
// Only render visible items - react-window, react-virtualized

// 7. USE WEB WORKERS for heavy computation
const worker = new Worker('./heavy-task.js');

// 8. CACHE with memoization and service workers

// 9. ASSET OPTIMIZATION
// Compress images, use WebP, serve correct sizes, use a CDN for static assets
These 100 questions cover the most frequently asked JavaScript interview topics for the 1-2 year experience level. Focus on understanding the concepts, not memorizing - interviewers appreciate reasoning over rote answers. Good luck!