Complexity Analysis

Big O Notation

Time and space complexity explained visually — with memory hooks, code patterns, and real algorithm examples.

Growth Rate Comparison (n = input size)
🟢 Excellent: O(1), O(log n)
🟡 Good: O(n), O(n log n)
🟠 Bad: O(n²), O(n³)
🔴 Horrible: O(2ⁿ), O(n!)
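One way to make these tiers concrete is to evaluate each growth function at a few input sizes and compare the numbers side by side. A minimal sketch (the function names and chosen sizes are illustrative, not a benchmark):

```python
import math

def growth_table(sizes=(10, 100, 1000)):
    """Evaluate common growth functions at several input sizes."""
    funcs = {
        "O(1)":       lambda n: 1,
        "O(log n)":   lambda n: math.log2(n),
        "O(n)":       lambda n: n,
        "O(n log n)": lambda n: n * math.log2(n),
        "O(n^2)":     lambda n: n ** 2,
        "O(2^n)":     lambda n: 2 ** n,
    }
    return {name: [f(n) for n in sizes] for name, f in funcs.items()}

for name, vals in growth_table().items():
    print(name, ["%.0f" % v for v in vals])
```

At n = 1000 the gap is already dramatic: O(log n) is about 10, O(n²) is a million, and O(2ⁿ) is astronomically large.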
🔍 How to Identify Big O from Code
O(1)
No loops, no recursion. Just math or a direct lookup. return a + b or arr[5]
O(log n)
The input is halved each step. while n > 1: n = n // 2 or binary search.
O(n)
One loop through the input once. for i in range(n) doing constant work.
O(n log n)
A loop AND halving — like sorting. Usually a divide-and-conquer algorithm.
O(n²)
Nested loops, both going to n. for i in range(n): for j in range(n):
O(n³)
Triple nested loops. for i ... for j ... for k ...
O(2ⁿ)
Recursive function that calls itself twice. f(n) = f(n-1) + f(n-2)
O(n!)
Generating all permutations of n items. Brute-force search through all arrangements.
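A hands-on way to confirm a guess from the table above is to count operations directly and watch how the count scales when n doubles. A rough sketch (the counter functions here are made up for illustration):

```python
def count_halving_steps(n):
    """O(log n) pattern: the input is halved each iteration."""
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

def count_pair_checks(n):
    """O(n^2) pattern: nested loops both running to n."""
    checks = 0
    for i in range(n):
        for j in range(n):
            checks += 1
    return checks

# Doubling n adds only ONE halving step but QUADRUPLES the pair checks.
print(count_halving_steps(1024), count_halving_steps(2048))  # 10 11
print(count_pair_checks(10), count_pair_checks(20))          # 100 400
```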
O(1)

Constant Time

Always takes the same time regardless of input size

🟢 PERFECT
Memory Hook
🎯 O(1) = one move, done. Think of looking up your friend's number when it's already in your hand. Doesn't matter if your phone has 10 or 10,000 contacts — you're looking at the one in your hand. The input size is completely irrelevant.
Code Pattern — looks like this
```python
def get_first(arr):
    return arr[0]        # O(1) — no loop, direct access

def add(a, b):
    return a + b         # O(1) — just math, no looping

def is_even(n):
    return n % 2 == 0    # O(1) — single operation
```
Time vs Space
⏱ Time O(1)
No loops. Runs in the same number of steps whether n=1 or n=1,000,000.
💾 Space O(1)
Uses a fixed number of variables. Doesn't create new arrays that grow with input.
Real examples
Array index lookup Hash table get/set Stack push/pop Math formula Variable assignment
O(log n)

Logarithmic Time

Input is halved at each step

🟢 EXCELLENT
Memory Hook
📖 O(log n) = phone book search. Finding "Smith" in a 1000-page phone book: open to page 500. Too early? Jump to 750. Too late? Jump to 625. You're halving the search space every time. A million-item input only needs ~20 steps (log₂(1,000,000) ≈ 20).
Code Pattern — look for HALVING
```python
def binary_search(arr, target):
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1   # cut the LEFT half
        else:
            hi = mid - 1   # cut the RIGHT half
    return -1              # not found
# each step: the search space HALVES → log n steps total
```
Time vs Space
⏱ Time O(log n)
Doubling the input only adds ONE more step. n=1M → 20 steps. n=2M → 21 steps.
💾 Space O(1) or O(log n)
Iterative binary search: O(1). Recursive: O(log n) because the call stack grows log n deep.
Real examples
Binary search BST lookup AVL tree operations Heap insert/delete Git bisect
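The O(log n)-space recursive variant mentioned above can be sketched like this; each recursive call adds one stack frame, and the frames go at most log n deep (a minimal sketch, not the only way to write it):

```python
def binary_search_recursive(arr, target, lo=0, hi=None):
    """Recursive binary search: O(log n) time, O(log n) call-stack space."""
    if hi is None:
        hi = len(arr) - 1
    if lo > hi:
        return -1                    # range exhausted: target absent
    mid = (lo + hi) // 2
    if arr[mid] == target:
        return mid
    if arr[mid] < target:            # recurse into the RIGHT half
        return binary_search_recursive(arr, target, mid + 1, hi)
    return binary_search_recursive(arr, target, lo, mid - 1)
```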
O(n)

Linear Time

One pass through all the data

🟡 GOOD
Memory Hook
🚶 O(n) = walking the whole line. Checking every student's ID to find one person. If there are 100 students, you check up to 100 IDs. 1000 students → up to 1000 checks. Time grows directly with input. One loop = linear.
Code Pattern — ONE loop through n
```python
def find_max(arr):
    max_val = arr[0]
    for x in arr:            # ONE loop — O(n)
        if x > max_val:
            max_val = x
    return max_val

def count_vowels(s):
    return sum(1 for c in s if c in 'aeiou')   # O(n) — one pass over s
```
Time vs Space
⏱ Time O(n)
One step per input item. n=100 → 100 steps. n=1M → 1M steps. Proportional.
💾 Space O(n)
If you create a copy or new array of size n (like a result list), that's O(n) space.
Real examples
Linear search Sum of array Print all items Fibonacci iterative Counting sort
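"Fibonacci iterative" from the list above is worth spelling out, because it contrasts nicely with the O(2ⁿ) recursive version later in this section: one loop, two variables. A minimal sketch:

```python
def fib_iterative(n):
    """n-th Fibonacci number: O(n) time, O(1) space (just two variables)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b   # slide the window forward one step
    return a
```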
O(n log n)

Linearithmic Time

Loop + divide-and-conquer at each step

🟡 ACCEPTABLE
Memory Hook
🃏 O(n log n) = sorting a deck of cards by splitting it. You divide the deck in half (log n splits), then merge each half back together touching each card once (n touches per level). The best comparison-based sorts live here — it's the theoretical minimum for sorting by comparing.
Pattern — divide input, then process each piece
```python
def merge_sort(arr):
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    L = merge_sort(arr[:mid])    # split in half (log n levels)
    R = merge_sort(arr[mid:])
    return merge(L, R)           # merging touches n items per level
# Total: n items × log n levels = O(n log n)

def merge(L, R):
    out, i, j = [], 0, 0
    while i < len(L) and j < len(R):
        if L[i] <= R[j]:
            out.append(L[i]); i += 1
        else:
            out.append(R[j]); j += 1
    return out + L[i:] + R[j:]   # append whichever half has leftovers
```
Time vs Space
⏱ Time O(n log n)
Slightly worse than O(n) but WAY better than O(n²). The gold standard for sorting.
💾 Space varies
Merge sort: O(n) extra space. Heap sort: O(1). Quick sort average: O(log n) call stack.
Real examples
Merge sort Heap sort Quick sort (avg) Python's sort() FFT algorithm
O(n²)

Quadratic Time

Nested loops — every item checked against every item

🟠 BAD
Memory Hook
🪆 O(n²) = everyone greets everyone else. Each of 10 people greets the other 9 → 90 greetings. 100 people → 9,900. 1000 people → ~1,000,000. The number of operations grows as the SQUARE of the input. Nested loops are the telltale sign.
Code Pattern — NESTED loops both going to n
```python
def bubble_sort(arr):
    n = len(arr)
    for i in range(n):              # outer loop: n passes
        for j in range(n - 1):      # inner loop: ~n comparisons
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
# n × n = n² operations total → O(n²)
```
Time vs Space
⏱ Time O(n²)
n=100 → 10,000 ops. n=1,000 → 1,000,000 ops. Gets slow fast. Fine for small data.
💾 Space O(1)
Usually — bubble and insertion sort work in-place. Nested loops don't necessarily use extra memory.
Real examples
Bubble sort Selection sort Insertion sort Checking all pairs
O(n³)

Cubic Time

Three nested loops

🟠 REALLY BAD
Memory Hook
📦 O(n³) = a 3D cube of operations. n² was a square grid of work — n³ is a full 3D cube. n=10 → 1,000 ops. n=100 → 1,000,000. Only practical for small n. Three nested loops = triple the pain.
Code Pattern — TRIPLE nested loops
```python
def matrix_multiply(A, B):
    n = len(A)
    result = [[0] * n for _ in range(n)]
    for i in range(n):              # loop 1
        for j in range(n):          # loop 2
            for k in range(n):      # loop 3
                result[i][j] += A[i][k] * B[k][j]
    return result
# n × n × n = n³ operations
```
Real examples
Naive matrix multiply Floyd-Warshall (all-pairs shortest path) Certain DP problems
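Floyd-Warshall from the list above is the classic non-matrix-multiply example of the triple-loop shape: three loops over the n vertices of a graph. A minimal sketch on an adjacency matrix, using float('inf') for missing edges (the example graph is made up):

```python
def floyd_warshall(dist):
    """All-pairs shortest paths on an n×n distance matrix: O(n³) time."""
    n = len(dist)
    d = [row[:] for row in dist]         # don't mutate the caller's matrix
    for k in range(n):                   # intermediate vertex
        for i in range(n):               # source
            for j in range(n):           # destination
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

INF = float('inf')
graph = [[0,   3,   INF],
         [INF, 0,   1],
         [7,   INF, 0]]
```

Note it is O(n²) in space (the n×n matrix) but O(n³) in time, a good example of time and space complexity differing.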
O(2ⁿ)

Exponential Time

Doubles with every new input — recursive branching

🔴 TERRIBLE
Memory Hook
🌿 O(2ⁿ) = a tree that doubles every level. This is the naive Fibonacci we talked about — every call spawns TWO more calls. n=10 → 1,024 ops. n=30 → over a billion. n=50 → over a quadrillion. Completely unusable at scale.
Code Pattern — function calls ITSELF TWICE
```python
def fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)   # CALLS ITSELF TWICE!
# Each call → 2 more calls → 4 → 8 → ... → ~2ⁿ calls total

def subsets(arr):
    """All subsets of arr — the power set has 2ⁿ members."""
    if not arr:
        return [[]]
    rest = subsets(arr[1:])
    return rest + [[arr[0]] + s for s in rest]   # O(2ⁿ)
```
Time vs Space
⏱ Time O(2ⁿ)
Each new input item DOUBLES the work. This is why recursive Fibonacci is so terrible.
💾 Space O(n)
The call stack only goes n levels deep at any one time, so space is "just" O(n).
Real examples
Naive Fibonacci All subsets (power set) Tower of Hanoi Brute-force knapsack
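The standard escape hatch from O(2ⁿ) here is memoization: cache each fib(k) so every subproblem is computed only once, dropping the time to O(n). A sketch using functools.lru_cache from the standard library:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n):
    """Memoized Fibonacci: O(n) time, O(n) space (cache + call stack)."""
    if n <= 1:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# fib_memo(50) returns instantly; the naive O(2ⁿ) version would need
# on the order of a quadrillion calls for the same answer.
```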
O(n!)

Factorial Time

All possible orderings — the absolute worst

🔴 APOCALYPTIC
Memory Hook
🗺️ O(n!) = trying every possible route. The Travelling Salesman brute-force: 3 cities → 6 routes. 10 cities → 3,628,800 routes. 20 cities → 2.4 quintillion routes. n! grows faster than exponential. Even 2ⁿ looks tame by comparison.
Code Pattern — generating ALL permutations
```python
from itertools import permutations

def route_length(route):
    """Total tour length; assumes cities are (x, y) points
    and uses Manhattan distance for simplicity."""
    return sum(abs(a[0] - b[0]) + abs(a[1] - b[1])
               for a, b in zip(route, route[1:]))

def brute_force_tsp(cities):
    best = float('inf')
    for route in permutations(cities):   # n! routes!
        best = min(best, route_length(route))
    return best
# 10 cities: 3.6M routes. 15 cities: 1.3 TRILLION.
```
Real examples
Brute-force TSP All permutations Bogosort (joke sort) Brute-force scheduling
💾

Space Complexity — The Other Dimension

How much memory does your algorithm use as n grows?

IMPORTANT — often overlooked
Space Complexity Quick Reference
O(1) space
Only uses a fixed set of variables. No arrays that grow. Examples: in-place sorting, iterative Fibonacci, swapping two values.
O(log n) space
Recursive algorithms where the call stack is log n deep. Binary search (recursive version), balanced tree traversal.
O(n) space
Creating an output array of size n. Storing all inputs in a hash map. Merge sort's temporary arrays. Recursive Fibonacci call stack.
O(n²) space
Storing an n×n grid or adjacency matrix. DP table for two sequences of length n. All pairs of items from a set of n.
Key insight: time ≠ space
An algorithm can have different time and space complexities. Merge sort is O(n log n) time but O(n) space. Recursive Fibonacci is O(2ⁿ) time but O(n) space (the call stack). Always consider both.
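The same task can often be written with either space profile. A minimal sketch (both functions are made up for illustration): each computes the maximum in O(n) time, but only one allocates memory that grows with n.

```python
def running_max_stream(values):
    """O(1) extra space: keep only the current maximum."""
    best = values[0]
    for v in values[1:]:
        if v > best:
            best = v
    return best

def running_max_history(values):
    """Same O(n) time, but O(n) extra space: stores every prefix maximum."""
    history = [values[0]]
    for v in values[1:]:
        history.append(max(history[-1], v))
    return history[-1]
```

Prefer the O(1)-space version unless you actually need the intermediate results.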