Every concept, from scratch, explained like a friend — not a textbook. ADHD-friendly, exam-ready.
An algorithm is just a set of instructions for solving a problem. Like a recipe — it takes inputs and produces an output.
Time complexity is how we measure how much longer an algorithm takes as the input gets bigger. We use Big O notation for this.
Always takes the same time, no matter the input. Fast. Like looking up your name in a phonebook if you know the exact page.
Time grows proportionally to input size. Scan through a list of 100 items → 100 steps. List of 1,000 → 1,000 steps.
Time grows by the square. A list of 10 → 100 steps. A list of 100 → 10,000 steps. Gets slow fast.
Doubles with every new input. n=10 → 1,024 steps. n=50 → over 1 quadrillion steps. Basically unusable at large n.
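A quick way to feel the difference: count the steps yourself. Here's a tiny sketch (plain Python, nothing taken from the slides) that tallies the work a single loop does versus a nested loop:

```python
# Rough step counts for common complexities (illustrative, not timings).

def linear_steps(n):
    steps = 0
    for _ in range(n):       # one pass over the input: O(n)
        steps += 1
    return steps

def quadratic_steps(n):
    steps = 0
    for _ in range(n):       # a pass inside a pass: O(n^2)
        for _ in range(n):
            steps += 1
    return steps

print(linear_steps(100))     # 100
print(quadratic_steps(100))  # 10000
```

At n = 100 the nested loop already does 100 times the work; by n = 10,000 that gap is a factor of 10,000.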
The function calls itself to solve smaller versions of the problem.
The problem? To calculate fib(5), it calls fib(4) AND fib(3). Then fib(4) calls fib(3) and fib(2)... it recalculates the same values over and over. The tree of calls explodes.
Uses a loop. Calculates each number once and moves on. No repeated work.
The iterative approach is far more efficient. Iterative runs in O(n) time — it visits each step exactly once. Recursive runs in O(2ⁿ) — it branches into two calls at every step, causing massive repeated computation. As n grows, recursive becomes exponentially slower and quickly becomes impractical.
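To see that gap concretely, here are minimal sketches of both versions (standard textbook implementations, not copied from any slide):

```python
def fib_recursive(n):
    # O(2^n): branches into two calls and recomputes the same subproblems
    if n < 2:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_iterative(n):
    # O(n): each Fibonacci number is computed exactly once
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib_iterative(10))   # 55
print(fib_recursive(10))   # 55, same answer, exponentially more calls
```

Try timing both at n = 35: same answer, wildly different wait.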
Elements sit in a continuous block of memory, side by side. Fast to access by index (just jump to position). But inserting/deleting in the middle requires shifting everything.
Elements are scattered anywhere in memory. Each element (node) stores its data AND a pointer (address) to the next node. Easy to insert/delete anywhere — just update pointers.
The diagram shows nodes connected with arrows pointing in both directions, AND the last node's arrow wraps back to the first. That's the key pattern.
| Type | Arrows | Last connects to first? | Real-world example |
|---|---|---|---|
| Singly Linked | → one way | No | Basic list |
| Circular Singly | → one way | ✓ Yes (loops) | Chess game (whose turn is it?) |
| Doubly Linked | ↔ both ways | No | Music player (fwd & back) |
| Circular Doubly ← THIS ONE | ↔ both ways | ✓ Yes (loops) | Mac ⌘+Tab app switcher |
Circular Doubly Linked List. Each node has a NEXT pointer (forward) and a PREVIOUS pointer (backward), and the last node's NEXT pointer points back to the head (first) node.
Unlike arrays, a linked list doesn't need contiguous memory. Each node is an object stored at any available memory address, containing:
Each node holds: [DATA | PREV_POINTER | NEXT_POINTER]
The NEXT_POINTER is the memory address of the next node. The PREV_POINTER is the address of the previous one. They can be anywhere in RAM — the pointers link them together logically.
Nodes are allocated in non-contiguous (scattered) memory locations. Each node stores its value plus two pointers — one to the next node's address, one to the previous node's address. In a circular doubly linked list, the tail's NEXT pointer stores the head's address, and the head's PREV pointer stores the tail's address, forming a complete loop.
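Here's a minimal sketch of that structure in Python (class and method names are my own; memory-address pointers become object references):

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.prev = None   # PREV_POINTER
        self.next = None   # NEXT_POINTER

class CircularDoublyLinkedList:
    def __init__(self):
        self.head = None

    def append(self, data):
        node = Node(data)
        if self.head is None:
            node.next = node.prev = node   # a single node loops to itself
            self.head = node
        else:
            tail = self.head.prev          # the tail is always head.prev
            tail.next = node
            node.prev = tail
            node.next = self.head          # close the loop forward...
            self.head.prev = node          # ...and backward

# Illustration in the spirit of the app-switcher example:
lst = CircularDoublyLinkedList()
for app in ["Mail", "Safari", "Music"]:
    lst.append(app)

print(lst.head.prev.data)            # Music: head's PREV is the tail
print(lst.head.prev.next is lst.head)  # True: tail's NEXT wraps to head
```

Note how there's no `None` anywhere in the links once the list is non-empty: every direction eventually wraps around.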
Every parent is smaller than or equal to its children. The root = minimum value in the whole tree.
Every parent is larger than or equal to its children. The root = maximum value in the whole tree.
The process is called "heapify". Here's how to build a Max Heap:
48 is at position n/2 (the last non-leaf node). Start there.
Min Heap: A complete binary tree where every parent ≤ its children. Root holds the minimum.
Max Heap: A complete binary tree where every parent ≥ its children. Root holds the maximum.
Insert [1, 4, 8, 48, 12, 15, 22] into a complete binary tree. For a Max Heap: start from the middle element and "bubble down" — compare each parent to its children and swap if a child is larger. Repeat until the entire tree satisfies the Max Heap property (48 at root). For a Min Heap, swap when a child is smaller, resulting in 1 at root.
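A sketch of that bubble-down (sift-down) heapify, using 0-based array indexing where node i's children sit at 2i+1 and 2i+2:

```python
def sift_down(a, i, n):
    # Push a[i] down until both children are <= it (max-heap property)
    while True:
        left, right, largest = 2 * i + 1, 2 * i + 2, i
        if left < n and a[left] > a[largest]:
            largest = left
        if right < n and a[right] > a[largest]:
            largest = right
        if largest == i:
            return
        a[i], a[largest] = a[largest], a[i]   # swap parent with bigger child
        i = largest

def build_max_heap(a):
    n = len(a)
    # Start at the last non-leaf node and work back to the root
    for i in range(n // 2 - 1, -1, -1):
        sift_down(a, i, n)
    return a

print(build_max_heap([1, 4, 8, 48, 12, 15, 22])[0])   # 48 ends up at the root
```

For a min heap, flip both `>` comparisons to `<` and 1 ends up at the root instead.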
A hash collision occurs when two different pieces of data produce the same hash value, meaning they're mapped to the same slot in the hash table. Example: if h(PHL)=4 and h(HKG)=4, both try to go in slot 4.
Instead of one value per slot, each slot holds a linked list. Colliding values just get added to the chain.
If the target slot is occupied, search for the next available slot by decrementing the index by 1 (this variant probes downward). If you go below slot 0, wrap around to the last slot and keep going.
Example: PHL hashes to slot 4. HKG also hashes to slot 4. Slot 4 is full, so try slot 3 — it's empty! Put HKG in slot 3 instead.
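Both strategies fit in a few lines. The hash function below is a stand-in (the notes don't define h), and the probing steps downward to match the convention above:

```python
TABLE_SIZE = 8

def h(key):
    # Stand-in hash function; the exam's h() isn't specified in the notes
    return sum(ord(c) for c in key) % TABLE_SIZE

# Chaining: each slot holds a list, collisions just extend the chain
chained = [[] for _ in range(TABLE_SIZE)]
def insert_chained(key):
    chained[h(key)].append(key)

# Linear probing (downward variant): step back one slot until one is free
probed = [None] * TABLE_SIZE
def insert_probed(key):
    i = h(key)
    while probed[i] is not None:
        i = (i - 1) % TABLE_SIZE   # wraps to the last slot below 0
    probed[i] = key

# "AB" and "BA" collide (same character sum), like PHL and HKG above
insert_chained("AB"); insert_chained("BA")
insert_probed("AB"); insert_probed("BA")
print(chained[3])           # ['AB', 'BA']: both live in one chain
print(probed[3], probed[2]) # AB BA: the second one slid down a slot
```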
Computers are made of transistors — tiny switches that are either ON (1) or OFF (0). That's it. So everything a computer does is built from combinations of 1s and 0s. This is binary (base 2).
We use decimal (base 10) in everyday life — ten digits (0-9). Binary only has 2 digits (0 and 1). Octal uses base 8 (digits 0–7). Hexadecimal uses base 16 (digits 0–9 then A–F).
Each position represents a power of 2, from right to left: 1, 2, 4, 8, 16, 32...
16 + 8 + 0 + 2 + 1 = 27 | .5 + 0 + .125 = .625 → 27.625 in decimal
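You can sanity-check conversions like that with a short helper (my own function, not from the slides):

```python
def bin_to_dec(s):
    # Convert a binary string with an optional fraction part to decimal
    if "." in s:
        int_part, frac_part = s.split(".")
    else:
        int_part, frac_part = s, ""
    value = int(int_part, 2)
    for i, bit in enumerate(frac_part, start=1):
        value += int(bit) * 2 ** -i   # .1 = 1/2, .01 = 1/4, .001 = 1/8 ...
    return value

print(bin_to_dec("11011.101"))   # 27.625
```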
Align the decimal points. Add column by column from right to left.
| Inputs | Sum bit | Carry |
|---|---|---|
| 0 + 0 | 0 | 0 |
| 0 + 1 or 1 + 0 | 1 | 0 |
| 1 + 1 | 0 | carry 1 to left |
| 1 + 1 + 1 (with carry) | 1 | carry 1 to left |
Addition result: 00100.010
Show your work: write B and C aligned at the decimal, add bit by bit right-to-left, carrying when 1+1.
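One way to check your answer: shift the point away, add as plain integers, then shift back. A sketch (the operands below are made up for illustration since B and C aren't reproduced here, and both are assumed to share the same number of fraction bits):

```python
def add_binary_fractions(b1, b2, frac_bits=3, width=5):
    # Drop the point (i.e. multiply by 2^frac_bits), add as integers,
    # then re-insert the point frac_bits from the right.
    to_int = lambda s: int(s.replace(".", ""), 2)
    total = to_int(b1) + to_int(b2)
    bits = format(total, "b").zfill(width + frac_bits)
    return bits[:-frac_bits] + "." + bits[-frac_bits:]

# Hypothetical operands, chosen only to illustrate carries:
print(add_binary_fractions("00011.110", "00000.100"))   # 00100.010
```

That example is 3.75 + 0.5 = 4.25, and sure enough 00100.010 is 4 + .25.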
Starting number:
Rule: Group binary digits into sets of 4, starting from the decimal point going both left and right. Add leading/trailing zeros as needed.
Integer part: 0001 1011 → 0001 = 1, 1011 = 11 = B → 1B
Fraction part: .1010 → 1010 = 10 = A → .A
Hex reference: 0-9 are the same, then 10=A, 11=B, 12=C, 13=D, 14=E, 15=F
Rule: Group binary digits into sets of 3, starting from the decimal point going both ways.
Integer: 011 011 → 3, 3 → 33
Fraction: .101 → 4+0+1 = 5 → .5
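Both grouping rules fit in one helper (my own sketch, it assumes the input has a point): pad each side of the point to a multiple of the group size, then convert each group.

```python
def bin_to_base(s, group):
    # group=4 -> hex, group=3 -> octal; grouping starts AT the point
    digits = "0123456789ABCDEF"
    int_part, frac_part = s.split(".")
    int_part = int_part.zfill(-(-len(int_part) // group) * group)        # pad left
    frac_part = frac_part.ljust(-(-len(frac_part) // group) * group, "0")  # pad right
    conv = lambda part: "".join(digits[int(part[i:i + group], 2)]
                                for i in range(0, len(part), group))
    return conv(int_part) + "." + conv(frac_part)

print(bin_to_base("11011.101", 4))   # 1B.A
print(bin_to_base("11011.101", 3))   # 33.5
```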
This is how computers store decimal numbers in 32 bits. Three parts: Sign | Exponent | Mantissa
011011.101 becomes 1.1011101 × 2⁴ (the point moved 4 places left).
Sign: 0 (positive). Exponent: 4 + 127 = 131 = 10000011. Mantissa: 1011101, padded with trailing zeros to 23 bits: 10111010000000000000000.
Result: 0 | 10000011 | 10111010000000000000000
1's complement is the absolute easiest conversion: flip every single bit. 0 becomes 1, 1 becomes 0. That's it.
flip all bits ↓
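Back to the IEEE 754 part: you can verify the sign | exponent | mantissa split for 27.625 (= 11011.101 in binary) with Python's built-in struct module:

```python
import struct

# Pack 27.625 as a 32-bit IEEE 754 float and pull out the three fields
bits = int.from_bytes(struct.pack(">f", 27.625), "big")
sign = bits >> 31
exponent = (bits >> 23) & 0xFF
mantissa = bits & 0x7FFFFF

print(sign)                       # 0
print(format(exponent, "08b"))    # 10000011  (4 + 127 = 131)
print(format(mantissa, "023b"))   # 10111010000000000000000
```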
| Gate | Symbol | Rule in plain English | Example |
|---|---|---|---|
| AND | · or ∧ | Output 1 only if ALL inputs are 1 | 1·1=1, 1·0=0 |
| OR | + or ∨ | Output 1 if ANY input is 1 | 1+0=1, 0+0=0 |
| NOT | A' or Ā | Flip the input | NOT 1 = 0 |
| NAND | (A·B)' | AND, then flip | 1·1 → flip → 0 |
| NOR | (A+B)' | OR, then flip | 0+0 → flip → 1 |
| XOR | A⊕B | Output 1 if inputs are DIFFERENT | 1⊕0=1, 1⊕1=0 |
In the exam diagram, tracing the paths:
• Top path: A and B feed into an AND gate → produces A·B
• Middle path: B and C feed into an AND gate → produces B·C
• Final gate: Both results feed into an OR gate → final output Q
Q = AB + BC
Read the circuit from left to right. Each AND gate multiplies its inputs. Each OR gate adds them.
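Translating the wiring into code makes the whole truth table easy to generate (a sketch that follows the Q = AB + BC reading above):

```python
def Q(a, b, c):
    # Q = A·B + B·C, built from the basic gates
    AND = lambda x, y: x & y
    OR = lambda x, y: x | y
    return OR(AND(a, b), AND(b, c))

# Full truth table: Q is 1 exactly when B is 1 and at least one of A, C is 1
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            print(a, b, c, "->", Q(a, b, c))
```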
Convert each 8-bit group to a decimal number, then look up the character in the ASCII table.
To convert binary to decimal: write out the bit positions (128, 64, 32, 16, 8, 4, 2, 1) and add up positions that have a 1.
The binary decodes to: "No!"
Use the ASCII table provided on the exam. Look up each byte's decimal value to find its character.
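Outside the exam room, the decoding is a one-liner once you see the pattern (byte → decimal → character):

```python
bitstream = "01001110 01101111 00100001"
# Each 8-bit group becomes a decimal code, then an ASCII character
decoded = "".join(chr(int(byte, 2)) for byte in bitstream.split())
print(decoded)   # No!
```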
BoW represents a piece of text as a vector of word counts. You build a vocabulary of all unique words, then count how often each one appears in the text.
Example:
Vocabulary: [the, cat, in, hat, sat, on, mat]
Sentence 1: "The cat in the hat" → [2, 1, 1, 1, 0, 0, 0]
Sentence 2: "The cat sat on the mat" → [2, 1, 0, 0, 1, 1, 1]
One-hot encoding represents each item as a vector with exactly one 1 and the rest 0s. Each item "lights up" only its own position.
| Word | human | animal | world | computer |
|---|---|---|---|---|
| human | 1 | 0 | 0 | 0 |
| animal | 0 | 1 | 0 | 0 |
| world | 0 | 0 | 1 | 0 |
| computer | 0 | 0 | 0 | 1 |
BoW: Represents text as a vector of word frequencies from a fixed vocabulary, ignoring grammar and word order. Example: "The cat sat" with vocab [the, cat, sat, on] → [1, 1, 1, 0].
One-Hot: Each word in [human, animal, world, computer] is represented as a 4-element vector with a 1 in its position and 0s everywhere else (see table above).
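Both encodings in a few lines (my own helper functions, matching the examples above):

```python
def bag_of_words(sentence, vocab):
    # Count how many times each vocabulary word appears; order is ignored
    words = sentence.lower().split()
    return [words.count(w) for w in vocab]

def one_hot(word, vocab):
    # A single 1 in the word's own position, 0s everywhere else
    return [1 if w == word else 0 for w in vocab]

vocab = ["the", "cat", "in", "hat", "sat", "on", "mat"]
print(bag_of_words("The cat in the hat", vocab))      # [2, 1, 1, 1, 0, 0, 0]
print(bag_of_words("The cat sat on the mat", vocab))  # [2, 1, 0, 0, 1, 1, 1]
print(one_hot("animal", ["human", "animal", "world", "computer"]))  # [0, 1, 0, 0]
```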
A set is just a collection of things. A = {all Americans}. M = {all mathematicians}. Operations: A ∩ B = intersection (in both), A ∪ B = union (in either), A' = complement (everything NOT in A).
A relation is a set of ordered pairs showing a connection between two sets. "Knows" is a relation: (Alice, Bob) ∈ Knows means Alice knows Bob. Notation: Knows ∋ (x, y) means the pair (x,y) is IN the Knows relation.
Given: A = Americans, M = Mathematicians, P = Philosophers, U = all people. Knows(x,y) = x knows y.
We want: find all x such that there exists a y where (x knows y) AND (y is American) AND (y is a mathematician).
American mathematicians = A ∩ M
People who know someone in A ∩ M = the "left side" of the Knows relation, filtered to pairs where the right side is in A ∩ M
π₁(Knows ⋈ (A ∩ M))
Or in plain English: Join the Knows relation with the set of American mathematicians (A ∩ M), then project (select) just the first component (the "knower").
This is trickier. We want philosophers where every single person they know is a mathematician.
Start with all philosophers: P
Remove any philosopher who knows at least one NON-mathematician
Non-mathematicians = M' (complement of M)
P − π₁(Knows ⋈ M')
Take all philosophers, subtract those who have any "knows" relationship with someone who is NOT a mathematician. What remains are philosophers who only know mathematicians.
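Here are both queries acted out with Python sets (the people are made up purely for illustration; only the set names come from the question):

```python
# Toy data, invented for this sketch
A = {"Alice", "Bob"}               # Americans
M = {"Bob", "Carol"}               # Mathematicians
P = {"Dave", "Erin"}               # Philosophers
Knows = {("Dave", "Bob"), ("Erin", "Carol"), ("Erin", "Frank")}

# Query 1: who knows an American mathematician?  pi_1(Knows join (A ∩ M))
american_mathematicians = A & M    # {"Bob"}
knows_am = {x for (x, y) in Knows if y in american_mathematicians}
print(knows_am)                    # {'Dave'}

# Query 2: philosophers who know ONLY mathematicians:  P − pi_1(Knows join M')
knows_non_mathematician = {x for (x, y) in Knows if y not in M}
print(P - knows_non_mathematician) # {'Dave'}: Erin knows Frank, a non-mathematician
```

The set comprehensions are doing the join-then-project step: filter pairs by the right side, keep the left side.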
| Law name | Formula | Plain English |
|---|---|---|
| Implication | A → B = A' ∪ B | "if A then B" = "not A, or B" |
| Complement (∩) | A ∩ A' = ∅ | A AND not-A = nothing |
| Complement (∪) | A ∪ A' = U | A OR not-A = everything |
| Identity (∪) | A ∪ ∅ = A | A or nothing = A |
| Domination (∪) | A ∪ U = U | A or everything = everything |
| Distribution | A∩(B∪C) = (A∩B)∪(A∩C) | Like algebra distribution |
| De Morgan (∩) | (A∩B)' = A'∪B' | NOT(A AND B) = NOT-A OR NOT-B |
Result: U (the Universal Set) — this expression is always true (a tautology). It simplifies completely.
The slides confirm: Complement law, Associative law, Implication law, and De Morgan's Law are all used.
Imagine drawing an arrow from the origin (0,0) to the point (2, 3) on a 2D graph: the real part 2 on the x-axis, the imaginary part 3i on the y-axis. The magnitude is the arrow's length. The angle is how far it's rotated from the positive x-axis (counterclockwise).
Use the Pythagorean theorem — it's just the hypotenuse of a right triangle:
Use the arctangent function — it gives the angle from the x-axis:
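Both formulas take two lines of Python, using z = 2 + 3i from the example:

```python
import math

z = complex(2, 3)                  # the point 2 + 3i
magnitude = abs(z)                 # hypotenuse: sqrt(2^2 + 3^2) = sqrt(13)
angle = math.degrees(math.atan2(z.imag, z.real))   # arctan(3/2), in degrees

print(round(magnitude, 3))   # 3.606
print(round(angle, 2))       # 56.31
```

`atan2` is the safer arctangent: it takes y and x separately, so it gets the quadrant right even when the real part is negative or zero.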
Joseph Fourier (1807) proved that any signal can be represented as a sum of sine waves. The Fourier Transform finds those sine waves.
Each sine wave has three properties:
Frequency: how fast it oscillates (Hz = cycles per second)
Amplitude: how tall/strong the wave is
Phase: where in the cycle it starts
Key insight: This is essentially comparing your signal against each possible frequency using a dot product. High dot product = that frequency is strongly present in your signal.
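That "compare against each frequency with a dot product" idea is only a few lines of code. A sketch of one DFT bin (real and imaginary parts via cosine and sine dot products):

```python
import math

def dft_magnitude(signal, freq):
    # Dot the signal against a cosine and a sine at this frequency;
    # those are the real and imaginary parts of one DFT bin.
    n = len(signal)
    real = sum(x * math.cos(2 * math.pi * freq * i / n)
               for i, x in enumerate(signal))
    imag = sum(-x * math.sin(2 * math.pi * freq * i / n)
               for i, x in enumerate(signal))
    return math.hypot(real, imag)

# A pure wave with 3 cycles per window: bin 3 lights up, bin 2 stays dark
signal = [math.sin(2 * math.pi * 3 * i / 64) for i in range(64)]
print(round(dft_magnitude(signal, 3)))   # 32 (= n/2: strongly present)
print(round(dft_magnitude(signal, 2)))   # 0  (absent)
```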
Iterative: O(n) ✓
Recursive: O(2ⁿ) 💀
Circular Doubly Linked List
Scattered memory, 2 pointers per node
Min: parent ≤ children, root = smallest
Max: parent ≥ children, root = largest
Chaining = linked list at slot
Probing = find next empty slot
Addition result: 00100.010
Multiply: partial sums, then place decimal
Hex: 1B.A (groups of 4 bits)
Octal: 33.5 (groups of 3 bits)
1s Comp: 00100.010 (flip all)
IEEE: sign|exp+127|mantissa 23-bit
Q = AB + BC
Q8 — ASCII: 01001110 01101111 00100001 = "No!"
Q9 — Text encoding: BoW = word frequency vector
One-hot: single 1 per word vector
A→B = A'∪B (implication law)
Q11 simplifies to U (tautology)
|z|=√13≈3.606, θ≈56.31°
DFT: loop → sine wave → dot product