Dynamic programming typically applies to optimization problems in which we make a set of choices in order to arrive at an optimal solution. As we make each choice, subproblems of the same form often arise. Dynamic programming is effective when a given subproblem may arise from more than one partial set of choices.

- The key technique is to store the solution to each such subproblem in case it should reappear.
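
A minimal sketch of this storing technique in Python, using Fibonacci numbers as a stand-in problem (an illustration, not an example from the text): each subproblem's answer goes into a dictionary the first time it is solved and is simply looked up if it reappears.

```python
def fib_memo(n, cache=None):
    """Fibonacci with stored subproblem solutions (memoization)."""
    if cache is None:
        cache = {}
    if n < 2:
        return n
    if n not in cache:
        # Solve the subproblem once and store the answer...
        cache[n] = fib_memo(n - 1, cache) + fib_memo(n - 2, cache)
    # ...so it is reused whenever the same subproblem reappears.
    return cache[n]
```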

Divide-and-conquer algorithms partition the problem into disjoint subproblems, solve the subproblems recursively, and then combine their solutions to solve the original problem. Dynamic programming, in contrast, applies when the subproblems overlap, that is, when subproblems share subsubproblems.

- In this context, a divide-and-conquer algorithm does more work than necessary, repeatedly solving the common subsubproblems.
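
To see the repeated work concretely, here is a sketch of the plain divide-and-conquer recursion for the same Fibonacci stand-in, with a counter tracking how many calls it makes:

```python
def fib_naive(n, counter):
    """Plain divide-and-conquer recursion; counter[0] tallies the calls."""
    counter[0] += 1
    if n < 2:
        return n
    # The two recursive calls share almost all of their subsubproblems,
    # so the total number of calls grows exponentially in n.
    return fib_naive(n - 1, counter) + fib_naive(n - 2, counter)

counter = [0]
fib_naive(30, counter)
print(counter[0])  # 2692537 calls, although only 31 distinct subproblems exist
```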

A dynamic-programming algorithm solves each subsubproblem just once and then saves its answer in a table, thereby avoiding the work of recomputing the answer every time it solves each subsubproblem.
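
A bottom-up sketch of the same computation, filling the table from the smallest subproblems upward so that each answer is computed exactly once and thereafter only looked up:

```python
def fib_bottom_up(n):
    """Fibonacci via a table filled bottom-up; each entry computed once."""
    if n < 2:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        # Both answers needed here are already in the table.
        table[i] = table[i - 1] + table[i - 2]
    return table[n]
```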

We typically apply dynamic programming to *optimization problems*. Such problems can have many possible solutions. Each solution has a value, and we wish to find a solution with the optimal (minimum or maximum) value. We call such a solution __an__ optimal solution to the problem, as opposed to __the__ optimal solution, since there may be several solutions that achieve the optimal value.

When developing a dynamic-programming algorithm, we follow a sequence of four steps:

- Characterize the structure of an optimal solution.
- Recursively define the value of an optimal solution.
- Compute the value of an optimal solution, typically in a bottom-up fashion.
- Construct an optimal solution from computed information.

Steps 1-3 form the basis of a dynamic-programming solution to a problem.

- If we need only the value of an optimal solution, and not the solution itself, then we can omit step 4.
- When we do perform step 4, we sometimes maintain additional information during step 3 so that we can easily construct an optimal solution.
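
To illustrate all four steps together, here is a sketch using the classic rod-cutting problem; the problem choice and the price table are assumptions for illustration, not taken from the text. Given `prices[i]`, the price of a rod piece of length `i`, we want to cut a rod of length `n` to maximize total revenue. The `first_cut` table recorded during step 3 is the kind of additional information that makes step 4 easy.

```python
def cut_rod(prices, n):
    """Maximum revenue from cutting a rod of length n, plus the cuts."""
    # Step 1 (structure): an optimal solution makes some first cut of
    # length i, then cuts the remaining rod of length n - i optimally.
    # Step 2 (recurrence): r[j] = max over 1 <= i <= j of prices[i] + r[j - i],
    # with r[0] = 0.
    # Step 3: compute the values bottom-up, saving each length's first cut.
    r = [0] * (n + 1)
    first_cut = [0] * (n + 1)
    for j in range(1, n + 1):
        best = float("-inf")
        for i in range(1, j + 1):
            if prices[i] + r[j - i] > best:
                best = prices[i] + r[j - i]
                first_cut[j] = i
        r[j] = best
    # Step 4: construct an optimal solution from the saved choices.
    cuts = []
    while n > 0:
        cuts.append(first_cut[n])
        n -= first_cut[n]
    return r[-1], cuts

prices = [0, 1, 5, 8, 9, 10, 17, 17, 20]  # prices[i]: price of a length-i piece
print(cut_rod(prices, 8))  # (22, [2, 6]): cut into lengths 2 and 6
```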