Warm-up Coding Interview: Dynamic Programming

The most challenging algorithmic problems involve optimization, where we seek to find a solution that maximizes or minimizes some function.

Algorithms for optimization problems require proof that they always return the best possible solution. Greedy algorithms that make the best local decision at each step are typically efficient, but usually do not guarantee global optimality. Exhaustive search algorithms that try all possibilities and select the best always produce the optimum result, but usually at a prohibitive cost in terms of time complexity.

Dynamic programming combines the best of both worlds. It gives us a way to design custom algorithms that systematically search all possibilities (thus guaranteeing correctness) while storing results to avoid recomputing (thus providing efficiency). By storing the consequences of all possible decisions and using this information in a systematic way, the total amount of work is minimized.

Once you understand it, dynamic programming is probably the easiest algorithm design technique to apply in practice. In fact, I find that dynamic programming algorithms are often easier to reinvent than to try to look up in a book.

Dynamic programming is a technique for efficiently implementing a recursive algorithm by storing partial results. The trick is seeing whether the naive recursive algorithm computes the same subproblems over and over again. If so, storing the answer for each subproblem in a table, where it can be looked up instead of recomputed, can lead to an efficient algorithm. Start with a recursive algorithm or definition. Only once we have a correct recursive algorithm do we worry about speeding it up by using a results matrix.

Dynamic programming is generally the right method for optimization problems on combinatorial objects that have an inherent left-to-right order among components. Left-to-right objects include: character strings, rooted trees, polygons, and integer sequences.

Caching via Computation

Fibonacci Numbers by Caching

In fact, we can do much better. We can explicitly store (or cache) the results of each Fibonacci computation F(k) in a table data structure indexed by the parameter k. The key to avoiding recomputation is to explicitly check for the value before trying to compute it:

#define MAXN 45      /* largest interesting n */
#define UNKNOWN -1   /* contents denote an empty cell */
long f[MAXN+1];      /* array for caching computed fib values */

long fib_c(int n) {
    if (f[n] == UNKNOWN)
        f[n] = fib_c(n-1) + fib_c(n-2);

    return(f[n]);
}

long fib_c_driver(int n) {
    int i; /* counter */

    f[0] = 0;
    f[1] = 1;
    for (i=2; i<=n; i++) f[i] = UNKNOWN;

    return(fib_c(n));
}

The general method of explicitly caching results from recursive calls to avoid recomputation provides a simple way to get most of the benefits of full dynamic programming, so it is worth a more careful look. In principle, such caching can be employed on any recursive algorithm. However, storing partial results would have done absolutely no good for such recursive algorithms as quicksort, backtracking, and depth-first search, because all the recursive calls made in these algorithms have distinct parameter values.

Fibonacci Numbers by Dynamic Programming

We can calculate F(n) in linear time more easily by explicitly specifying the order of evaluation of the recurrence relation:

long fib_dp(int n) {
    int i; /* counter */
    long f[MAXN+1]; /* array to cache computed fib values */

    f[0] = 0;
    f[1] = 1;
    for (i=2; i<=n; i++) f[i] = f[i-1] + f[i-2];

    return (f[n]);
}

More careful study shows that we do not need to store all the intermediate values for the entire period of execution. Because the recurrence depends on two arguments, we only need to retain the last two values we have seen:

long fib_ultimate(int n) {
    int i;                      /* counter */
    long back2=0, back1=1;      /* last two values of f[n] */
    long next;                  /* placeholder for sum */

    if (n == 0) return (0);

    for (i=2; i<n; i++) {
        next = back1+back2;
        back2 = back1;
        back1 = next;
    }
    return (back1+back2);
}

Binomial Coefficients

How do you compute the binomial coefficients? First, {n \choose k} = n!/((n-k)!k!), so in principle you can compute them straight from factorials. However, this method has a serious drawback. Intermediate calculations can easily cause arithmetic overflow, even when the final coefficient fits comfortably within an integer.

A more stable way to compute binomial coefficients is using the recurrence relation implicit in the construction of Pascal’s triangle:

Each number is the sum of the two numbers directly above it. The recurrence relation implicit in this is that

{n \choose k} = {n-1 \choose k-1} + {n-1 \choose k}

Why does this work? Consider whether the nth element appears in one of the {n \choose k} subsets of k elements. If it does, we can complete the subset by picking k-1 of the other n-1 elements. If it does not, all k elements must come from the remaining n-1.

The best way to evaluate such a recurrence is to build a table of possible values up to the size that you are interested in:

long binomial_coefficient(int n, int k) {
    int i, j;                 /* counters */
    long bc[MAXN+1][MAXN+1];  /* table of binomial coefficients */

    for (i=0; i<=n; i++) bc[i][0] = 1;  /* basis case: {i choose 0} = 1 */
    for (j=0; j<=n; j++) bc[j][j] = 1;  /* basis case: {j choose j} = 1 */

    for (i=1; i<=n; i++)
        for (j=1; j<i; j++)
            bc[i][j] = bc[i-1][j-1] + bc[i-1][j];

    return(bc[n][k]);
}
