Dynamic Programming
Cheat Sheet
Prime Use Case
When a problem exhibits 'Optimal Substructure' (the global optimum can be constructed from local optima) and 'Overlapping Subproblems' (the same sub-calculations are performed multiple times).
Critical Tradeoffs
- Time vs. Space: Trading memory (memoization table) for significant speedups (polynomial vs. exponential time).
- Implementation Complexity: DP is often harder to design and debug than iterative or greedy approaches.
- Top-down (Recursion) vs. Bottom-up (Iteration): Recursive is more intuitive but risks stack overflow; iterative avoids call-stack overhead and enables space optimizations such as keeping only the previous row.
Killer Senior Insight
DP is essentially 'Recursion + Memoization' or 'Smart Brute Force'—it is the art of identifying the minimal set of parameters that uniquely define a subproblem state.
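A minimal sketch of 'Recursion + Memoization' in Python, using Fibonacci as a stand-in problem (the function name and example are illustrative, not tied to any specific question above):

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # memoization: each distinct n is computed only once
def fib(n):
    # The single parameter n is the minimal state that uniquely defines a subproblem.
    if n < 2:              # base cases: fib(0) = 0, fib(1) = 1
        return n
    return fib(n - 1) + fib(n - 2)  # transition reuses cached subresults
```

With the cache, `fib(50)` finishes instantly; the naive version would need billions of calls.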
Recognition
Common Interview Phrases
- 'Find the maximum/minimum value of...' (optimization)
- 'Count the number of ways to...' (combinatorial counting)
- 'What is the longest/shortest...' (subsequence/substring problems)
- 'Is it possible to partition/reach...' (feasibility, subset sum)
Common Scenarios
- 0/1 Knapsack and its variations (Subset Sum, Partition Problem).
- String manipulation (Longest Common Subsequence, Edit Distance, Palindromic Substrings).
- Pathfinding on grids (Unique Paths, Minimum Path Sum).
- Interval-based problems (Matrix Chain Multiplication, Burst Balloons).
- State-machine DP (Best Time to Buy and Sell Stock with constraints).
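As one concrete scenario from the list above, a bottom-up sketch of Longest Common Subsequence (a hypothetical helper, assuming the standard dp[i][j] formulation):

```python
def lcs_length(a, b):
    # dp[i][j] = length of the LCS of the prefixes a[:i] and b[:j]
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1  # extend the common subsequence
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])  # drop one character
    return dp[m][n]
```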
Anti-patterns to Avoid
- Using DP when a Greedy approach is sufficient (e.g., Fractional Knapsack or Dijkstra).
- Applying DP to problems without overlapping subproblems (e.g., Merge Sort, where subproblems are independent).
- Attempting DP when the state space is too large to fit in memory (e.g., Traveling Salesperson Problem with N > 25).
The Problem
The Fundamental Issue
The 'Combinatorial Explosion' caused by re-calculating the same sub-results in a naive recursive branching tree, leading to exponential time complexity.
What breaks without it
Time Limit Exceeded (TLE) errors due to O(2^N) or O(N!) complexity.
Stack Overflow errors from deep recursion without pruning or memoization.
Why alternatives fail
Brute force explores the entire search space without caching any results, making it unusable for N > 30.
Greedy algorithms make locally optimal choices that may lead to a globally sub-optimal result in problems with complex dependencies.
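The explosion is easy to demonstrate: a naive recursive Fibonacci (illustrative only) recomputes the same subproblems thousands of times:

```python
def naive_fib(n, counter):
    counter[0] += 1  # count every recursive call
    if n < 2:
        return n
    return naive_fib(n - 1, counter) + naive_fib(n - 2, counter)

counter = [0]
result = naive_fib(20, counter)
# counter[0] exceeds 20,000 calls for n = 20 — roughly O(2^n) growth
```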
Mental Model
The Intuition
Imagine you are counting the number of people in a room. Instead of recounting everyone every time someone enters, you remember the previous count and just add one. You've stored the result of a sub-calculation to save time later.
Key Mechanics
1. Define the DP state: what do the indices of your table (e.g., dp[i][j]) represent in the context of the problem?
2. Identify base cases: what are the simplest possible subproblems (e.g., dp[0] or the empty string)?
3. Formulate the state transition equation: how do you build the solution for state 'i' using previously computed states 'i-1', 'i-2', etc.?
4. Determine the computation order: should you fill the table iteratively (bottom-up) or use recursion with a cache (top-down)?
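The four steps above, applied to a hypothetical minimum-coin-change problem (assumed statement: fewest coins summing to `amount`, or -1 if impossible):

```python
def min_coins(coins, amount):
    INF = float("inf")
    # 1. State: dp[x] = minimum number of coins that sum to x.
    dp = [INF] * (amount + 1)
    # 2. Base case: amount 0 needs zero coins.
    dp[0] = 0
    # 3. Transition: dp[x] = min(dp[x - c] + 1) over every coin c <= x.
    # 4. Order: bottom-up, from amount 1 upward.
    for x in range(1, amount + 1):
        for c in coins:
            if c <= x and dp[x - c] + 1 < dp[x]:
                dp[x] = dp[x - c] + 1
    return dp[amount] if dp[amount] != INF else -1
```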
Framework
When it's the best choice
- When the problem can be broken into stages with a finite number of states.
- When the constraints suggest a polynomial time complexity like O(N^2) or O(N*M).
- When you need to find the 'best' or 'total count' rather than just 'any' solution.
When to avoid
- When the problem has no overlapping subproblems (use Divide and Conquer).
- When the state depends on the entire history of choices rather than a summary (state explosion).
- When the problem can be solved with a simple Greedy choice or a BFS/DFS.
Fast Heuristics
- N <= 20 with subsets or orderings: consider bitmask DP (O(2^N * N)).
- N <= 5,000: an O(N^2) table is usually acceptable; N >= 10^5 demands O(N log N) or O(N).
- Answers like 'max', 'min', or 'count the ways' point to DP; 'any valid answer' points to greedy or BFS/DFS first.
Tradeoffs
Strengths
- Guarantees an optimal solution for problems with optimal substructure.
- Drastically reduces time complexity from exponential to polynomial.
- Provides a highly structured and mathematical approach to optimization.
Weaknesses
- High memory consumption for large state tables (O(N^2) or O(N^3) space).
- Difficult to conceptualize and debug compared to iterative or greedy approaches.
- Requires identifying the correct state, which is often the hardest part of the interview.
Alternatives
Greedy Algorithms
When it wins
When a local optimum always leads to a global optimum (e.g., Huffman Coding, Prim's Algorithm).
Key Difference
Greedy never reconsiders choices; DP evaluates all possibilities but does so efficiently by reusing results.
Memoization (Top-down Recursion)
When it wins
When the state space is sparse (not all states in the DP table are reachable).
Key Difference
Top-down approach that only computes necessary states, whereas Bottom-up DP computes all states in the table.
Breadth-First Search (BFS)
When it wins
Finding the shortest path in an unweighted graph.
Key Difference
BFS explores layer by layer; DP is used when edges have weights or complex state transitions (like 'min cost').
Execution
Must-hit talking points
- Clearly define the DP state (e.g., 'dp[i][j] represents the max profit using first i items with capacity j').
- Explain the recurrence relation verbally before writing a single line of code.
- Discuss space optimization (e.g., reducing a 2D table to 1D if only the previous row is needed).
- Mention the time and space complexity explicitly using Big O notation.
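The space-optimization talking point in practice: a 0/1 knapsack sketch (hypothetical signature) collapsing the classic dp[i][w] table to one dimension:

```python
def knapsack(weights, values, capacity):
    # dp[w] = best value achievable with capacity w using the items seen so far.
    # Only the previous row of the 2D table is ever read, so one row suffices;
    # iterating capacity in REVERSE prevents reusing the same item twice.
    dp = [0] * (capacity + 1)
    for wt, val in zip(weights, values):
        for w in range(capacity, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + val)
    return dp[capacity]
```

This cuts space from O(N*W) to O(W) at the cost of losing the information needed to reconstruct which items were chosen.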
Anticipate follow-ups
- Q: Can you optimize the space complexity from O(N^2) to O(N)?
- Q: How would you reconstruct the actual solution path (the items chosen), not just the final value?
- Q: What if the constraints change (e.g., the capacity is very large)?
- Q: Can this be solved using a different DP state definition that is more efficient?
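For the reconstruction follow-up, one common approach (sketched here for a hypothetical knapsack, keeping the full 2D table so choices can be traced backwards):

```python
def knapsack_with_items(weights, values, capacity):
    n = len(weights)
    # dp[i][w] = best value using the first i items with capacity w.
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(capacity + 1):
            dp[i][w] = dp[i - 1][w]  # option 1: skip item i-1
            if weights[i - 1] <= w:  # option 2: take item i-1
                dp[i][w] = max(dp[i][w],
                               dp[i - 1][w - weights[i - 1]] + values[i - 1])
    # Walk backwards: if the value changed between rows, item i-1 was taken.
    chosen, w = [], capacity
    for i in range(n, 0, -1):
        if dp[i][w] != dp[i - 1][w]:
            chosen.append(i - 1)
            w -= weights[i - 1]
    return dp[n][capacity], chosen[::-1]
```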
Red Flags
Incorrectly initializing the DP table (e.g., using 0 instead of -1 or infinity).
Why it fails: Leads to incorrect results when 0 is a valid computed value or when looking for a minimum value.
Off-by-one errors in table dimensions or loop ranges.
Why it fails: Causes index-out-of-bounds errors or misses the final state/base case calculation.
Defining a state that doesn't capture enough information.
Why it fails: The transition becomes impossible because the subproblem doesn't have the context needed to make the next decision (e.g., forgetting to include 'current direction' in a grid DP).
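The initialization red flag, illustrated: a minimum-path-sum grid DP (hypothetical example) must seed with infinity, because a default of 0 would be mistaken for a genuinely zero-cost path:

```python
import math

def min_path_sum(grid):
    # dp[i][j] = minimum cost to reach cell (i, j) moving only right/down.
    rows, cols = len(grid), len(grid[0])
    # Seed with infinity, NOT 0: in a min-DP, 0 looks like a real (best) cost.
    dp = [[math.inf] * cols for _ in range(rows)]
    dp[0][0] = grid[0][0]
    for i in range(rows):
        for j in range(cols):
            if i > 0:
                dp[i][j] = min(dp[i][j], dp[i - 1][j] + grid[i][j])
            if j > 0:
                dp[i][j] = min(dp[i][j], dp[i][j - 1] + grid[i][j])
    return dp[rows - 1][cols - 1]
```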