What is an Algorithm?
Simple Definition
An algorithm is a step-by-step set of instructions to solve a problem or complete a task.
Think of it like a recipe: it tells you what to do, in what order, to get the result you want.
Key Components
- Input: What you start with
- Process: The steps you follow
- Output: The result you get
Why Learn Algorithms?
Foundation of Software
Used in GPS, search engines, AI systems, games, databases, and web applications.
Career Opportunities
Many technical interviews evaluate algorithmic reasoning and data structures.
Problem-Solving Skills
Builds structured thinking, improves debugging, and helps you write faster code.
Simple Example: Finding the Largest Number
Algorithm Steps:
- 1) Look at the first number and remember it as the biggest so far
- 2) Look at the next number; if it's bigger, remember that instead
- 3) Repeat until you've checked all the numbers
- ✓ The remembered number is the largest
When to use this pattern
Any time you need a single “best” item: max/min, top score, earliest date, cheapest price.
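A minimal Python sketch of these steps (the function name find_largest is just for illustration):

def find_largest(numbers):
    biggest = numbers[0]          # Step 1: remember the first number (assumes a non-empty list)
    for n in numbers[1:]:         # Steps 2-3: look at each remaining number
        if n > biggest:
            biggest = n           # A bigger one appeared; remember it instead
    return biggest                # The remembered number is the largest

print(find_largest([3, 7, 2, 9, 4]))  # 9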
Big O & Efficiency
What is Big O?
Big O notation describes how an algorithm’s runtime (or memory usage) grows as input size grows. It’s mainly used to compare approaches and avoid slow solutions at scale.
Key Points:
- Upper bound (commonly discussed as “worst-case”)
- Ignores constants and lower-order terms
- Helps choose the right algorithm for a given constraint
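As a rough illustration, here are two ways to answer the same question ("does this list contain a duplicate?") with different growth rates; the function names are just for this sketch:

def has_duplicate_quadratic(items):
    # Compare every pair: roughly n²/2 comparisons -> O(n²) time, O(1) extra space
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicate_linear(items):
    # One pass with a set: O(n) average time, but O(n) extra space
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False

For large inputs the second version wins on time at the cost of extra memory, which leads into space complexity below.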
Space Complexity
Measures memory usage relative to input size.
- Auxiliary space: extra memory created by the algorithm
- Recursion stack: call stack space used during recursion
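A small sketch of the difference, using list reversal (reverse_copy and reverse_in_place are illustrative names):

def reverse_copy(arr):
    return arr[::-1]              # Builds a new list -> O(n) auxiliary space

def reverse_in_place(arr):
    left, right = 0, len(arr) - 1
    while left < right:           # Swap from both ends -> O(1) auxiliary space
        arr[left], arr[right] = arr[right], arr[left]
        left += 1
        right -= 1
    return arr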
Rule of Thumb
For interviews and real systems, aim to recognize what will break first:
- Time-bound: choose a lower Big O (e.g., O(n log n) over O(n²))
- Memory-bound: avoid large auxiliary arrays; consider in-place options
Big O Growth
An interactive chart here compares operation counts for common growth rates as n changes. Values are scaled raw operation counts for intuition, not measured runtime on hardware.
Essential Data Structures
Quick Visual Intuition
- Array: fast random access; slower inserts in the middle.
- Linked List: easy inserts/deletes if you have the node; slower search.
- Binary Tree: hierarchy is great for searching and ordering (when balanced).
- Array: linear structure with contiguous memory.
- Linked List: nodes connected by pointers.
- Stack: LIFO (Last In, First Out).
- Queue: FIFO (First In, First Out).
- Balanced BST: a Binary Search Tree that stays balanced.
- Hash Table: key-value structure using a hash function; performance depends on collisions and load factor.
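A short sketch of how three of these structures commonly appear in Python (list as a stack, collections.deque as a queue, dict as a hash table):

from collections import deque

stack = []                 # Stack: LIFO
stack.append("a")          # push
stack.append("b")
top = stack.pop()          # pop -> "b" (last in, first out)

queue = deque()            # Queue: FIFO (popleft is O(1), unlike list.pop(0))
queue.append("a")          # enqueue
queue.append("b")
first = queue.popleft()    # dequeue -> "a" (first in, first out)

scores = {"alice": 10}     # Hash table: dict maps keys to values
scores["bob"] = 7          # insert
print(scores.get("alice")) # lookup -> 10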
Graph Theory
Traversals: BFS vs DFS
BFS (Breadth-First Search)
Explores level-by-level using a Queue. Ideal for shortest path in unweighted graphs.
DFS (Depth-First Search)
Explores as deep as possible using a Stack or recursion. Great for connectivity, cycles, and topological reasoning.
Chooser Heuristic
- BFS: shortest path (unweighted), “layers”, nearest neighbors
- DFS: detect cycles, explore components, backtracking-style problems
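A compact sketch of both traversals on a small adjacency-list graph (the sample graph and function names are illustrative):

from collections import deque

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}

def bfs(start):
    visited, order = {start}, []
    queue = deque([start])            # Queue -> explores level by level
    while queue:
        node = queue.popleft()
        order.append(node)
        for nxt in graph[node]:
            if nxt not in visited:
                visited.add(nxt)
                queue.append(nxt)
    return order

def dfs(node, visited=None):
    if visited is None:
        visited = set()
    visited.add(node)                 # Recursion -> the call stack goes deep first
    order = [node]
    for nxt in graph[node]:
        if nxt not in visited:
            order.extend(dfs(nxt, visited))
    return order

print(bfs("A"))  # ['A', 'B', 'C', 'D']
print(dfs("A"))  # ['A', 'B', 'D', 'C']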
Representations
- Adjacency Matrix: matrix[i][j] = 1 indicates an edge. Best for dense graphs.
- Adjacency List: array of lists. Memory-efficient for sparse graphs.
- Edge List: list of (u, v) pairs. Simple, good for algorithms like Kruskal.
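The same small directed graph in each representation, for illustration:

edges = [(0, 1), (0, 2), (1, 2)]        # Edge list: (u, v) pairs

adj_list = {0: [1, 2], 1: [2], 2: []}   # Adjacency list: node -> neighbors

adj_matrix = [                          # Adjacency matrix: matrix[i][j] = 1 means an edge i -> j
    [0, 1, 1],
    [0, 0, 1],
    [0, 0, 0],
]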
Essential Algorithms
Recursion & the Call Stack
Recursion solves a problem by calling the same function on a smaller version of the problem. The call stack holds each active call — that stack usage is part of space complexity.
Common pattern
- 1) Base case (stop)
- 2) Recursive case (reduce problem size)
- 3) Combine / return
Example: Factorial
def factorial(n):
    if n == 0:                       # Base case: 0! = 1
        return 1
    return n * factorial(n - 1)      # Recursive case: n! = n * (n - 1)!
Searching Algorithms
Linear Search
Checks each element one by one until the target is found.
def linear_search(arr, target):
    for i in range(len(arr)):        # Check each element one by one
        if arr[i] == target:
            return i                 # Found: return its index
    return -1                        # Not found
When to use
Small arrays, unsorted data, or one-off scans.
Binary Search
Repeatedly halves a sorted array to find the target.
def binary_search(arr, target):
    left, right = 0, len(arr) - 1
    while left <= right:
        mid = (left + right) // 2
        if arr[mid] == target:
            return mid               # Found the target
        elif arr[mid] < target:
            left = mid + 1           # Target is in the right half
        else:
            right = mid - 1          # Target is in the left half
    return -1                        # Not found
When to use
Sorted data with random access. If data is unsorted, sorting might be needed first.
Sorting Algorithms
Sorting Algorithms Comparison
| Algorithm | Best | Average | Worst | Space | Stable |
|---|---|---|---|---|---|
| Bubble Sort | O(n) | O(n²) | O(n²) | O(1) | ✅ |
| Selection Sort | O(n²) | O(n²) | O(n²) | O(1) | ❌ |
| Insertion Sort | O(n) | O(n²) | O(n²) | O(1) | ✅ |
| Merge Sort | O(n log n) | O(n log n) | O(n log n) | O(n) | ✅ |
| Quick Sort | O(n log n) | O(n log n) | O(n²) | O(log n) | ❌ |
Interview note
In practice, Merge Sort and Quick Sort patterns matter more than memorizing slow sorts.
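A sketch of the Merge Sort pattern, written as a plain divide-and-conquer function for illustration:

def merge_sort(arr):
    if len(arr) <= 1:                 # Base case: 0 or 1 elements are already sorted
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])      # Divide
    right = merge_sort(arr[mid:])
    merged, i, j = [], 0, 0           # Combine: merge two sorted halves (O(n) extra space)
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:       # <= keeps the sort stable
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5]))  # [1, 2, 5, 5, 9]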
Algorithm Design Patterns
Divide & Conquer
Break the problem into subproblems, solve them, then combine results.
Greedy
Choose the best local option at each step.
Dynamic Programming
Use overlapping subproblems + caching (memoization / tabulation).
Two Pointers
Use two indices/pointers to traverse efficiently (often on arrays/strings); see the sketch after this list.
Sliding Window
Maintain a window range to compute max/min/sums on contiguous elements.
Backtracking
Try options, undo, and try another path (permutations, combinations, Sudoku).
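As one concrete instance, a small Two Pointers sketch: checking whether any pair in a sorted array sums to a target (the function name is illustrative):

def has_pair_with_sum(sorted_arr, target):
    left, right = 0, len(sorted_arr) - 1   # Two pointers moving inward: O(n) time, O(1) space
    while left < right:
        total = sorted_arr[left] + sorted_arr[right]
        if total == target:
            return True
        elif total < target:
            left += 1                       # Need a larger sum
        else:
            right -= 1                      # Need a smaller sum
    return False

print(has_pair_with_sum([1, 3, 5, 8, 11], 13))  # True (5 + 8)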
Bitwise Operations
Why Bitwise Matters
Bitwise ops manipulate bits directly (AND, OR, XOR, shifts). They’re common in performance work, encoding, set operations, and certain interview classics.
Quick Use Cases
- XOR: find the single number (pairs cancel out)
- AND + masks: check/set/unset flags (see the sketch below)
- Shifts: multiply/divide by powers of 2 (careful with negatives)
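A small sketch of flag checks with AND and a mask (the flag values are made up for illustration):

READ, WRITE, EXEC = 0b001, 0b010, 0b100

perms = READ | EXEC          # Set flags with OR -> 0b101
can_write = perms & WRITE    # Check a flag with AND -> 0 (falsy: WRITE not set)
perms |= WRITE               # Set the WRITE flag
perms &= ~EXEC               # Unset the EXEC flag
print(bin(perms))            # 0b11 (READ and WRITE set)

print(8 >> 1, 3 << 2)        # Shifts: 8 // 2 = 4, 3 * 4 = 12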
Mini Example: Single Number (XOR)
If every number appears twice except one, XOR cancels duplicates.
def single_number(nums):
    x = 0
    for n in nums:
        x ^= n        # Duplicates cancel; only the unique value remains
    return x
Algorithm Frontiers 2025
Quantum Complexity
Traditional Big O assumes classical hardware. Some quantum algorithms offer asymptotic speedups for specific problem classes (e.g., Grover’s algorithm gives quadratic speedup for unstructured search under standard assumptions).
Note: This is conceptual; practical quantum advantage depends on hardware + error correction.
Learned Index Structures
Some database research explores models that approximate index structures (often discussed as “learned indexes”), which may reduce memory or improve lookup behavior in certain distributions.
Note: Results vary by workload and data distribution. Treat as an active research area.