Limitations of Dynamic Programming: TSP
Dynamic programming doesn’t always work. It is important to see why it can fail, to avoid traps that lead to incorrect or inefficient algorithms.
Problem: Longest Simple Path
Input: A weighted graph G, with specified start and end vertices s and t.
Output: The most expensive path from s to t that does not visit any vertex more than once.
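To see why this problem resists dynamic programming, consider what a partial solution must record. The sketch below (names and graph representation are illustrative, not from the text) is a brute-force search: because no vertex may repeat, each recursive call must carry the full set of visited vertices, so the state cannot be summarized by the current endpoint alone, and no small table of subproblems suffices.

```python
def longest_simple_path(graph, s, t, visited=None):
    """Brute-force longest simple path from s to t.

    graph: dict mapping each vertex to a list of (neighbor, weight) pairs.
    Returns -inf if no simple path from s to t exists.
    """
    if visited is None:
        visited = {s}
    if s == t:
        return 0
    best = float("-inf")
    for v, w in graph.get(s, []):
        if v not in visited:
            # visited | {v} builds a fresh set: the whole history travels
            # with the partial solution, which is exactly the problem.
            cand = longest_simple_path(graph, v, t, visited | {v})
            if cand != float("-inf"):
                best = max(best, w + cand)
    return best
```

In the worst case this explores every permutation of vertices, and no polynomial-time algorithm is known: longest simple path is NP-hard.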
When are Dynamic Programming Algorithms Correct?
Dynamic programming algorithms are only as correct as the recurrence relations they are based on.
Dynamic programming can be applied to any problem that observes the principle of optimality. Roughly stated, this means that a partial solution can be optimally extended based on the state after the partial solution, instead of the specifics of the partial solution itself. For example, in deciding whether to extend an approximate string matching by a substitution, insertion, or deletion, we did not need to know which sequence of operations had been performed to date. In fact, there may be several different edit sequences that achieve a cost of C on the first p characters of pattern P and t characters of string T. Future decisions are made based on the consequences of previous decisions, not the actual decisions themselves.
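The standard edit-distance recurrence, sketched below in Python, makes this concrete: each cell D[i][j] stores only the cheapest cost of matching the first i characters of p against the first j characters of t, never which sequence of operations achieved that cost.

```python
def edit_distance(p, t):
    """Minimum number of substitutions, insertions, and deletions
    needed to turn pattern p into string t."""
    m, n = len(p), len(t)
    D = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        D[i][0] = i          # delete all i characters of the pattern prefix
    for j in range(n + 1):
        D[0][j] = j          # insert all j characters of the text prefix
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            match = D[i - 1][j - 1] + (p[i - 1] != t[j - 1])  # match/substitute
            insert = D[i][j - 1] + 1
            delete = D[i - 1][j] + 1
            # Only the cost survives; the operation history is discarded.
            D[i][j] = min(match, insert, delete)
    return D[m][n]
```

Because only costs are compared, any two partial solutions with the same (i, j) and the same cost are interchangeable, which is precisely what the principle of optimality requires.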
Problems do not satisfy the principle of optimality when the specifics of the operations matter, as opposed to just the cost of the operations. Such would be the case with a form of edit distance where we are not allowed to use combinations of operations in certain particular orders. Properly formulated, however, many combinatorial problems respect the principle of optimality.
When are Dynamic Programming Algorithms Efficient?
The running time of any dynamic programming algorithm is a function of two things: (1) the number of partial solutions we must keep track of, and (2) how long it takes to evaluate each partial solution. The first issue – namely the size of the state space – is usually the more pressing concern.
In all of the examples we have seen, the partial solutions are completely described by specifying the stopping places in the input. This is because the combinatorial objects being worked on have an implicit order defined upon their elements. This order cannot be scrambled without completely changing the problem. Once the order is fixed, there are relatively few possible stopping places or states, so we get efficient algorithms.
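As a small illustration (longest increasing subsequence, a standard example not drawn from the text above), each partial solution here is fully described by a single stopping place: the index at which the subsequence ends. That gives only n states, each evaluated in O(n) time.

```python
def lis_length(seq):
    """Length of the longest strictly increasing subsequence of seq."""
    n = len(seq)
    if n == 0:
        return 0
    # best[i] = length of the longest increasing subsequence ending at index i;
    # the index i alone is the complete state of a partial solution.
    best = [1] * n
    for i in range(1, n):
        for j in range(i):
            if seq[j] < seq[i]:
                best[i] = max(best[i], best[j] + 1)
    return max(best)
```

Fixing the left-to-right order of the input is what keeps the state space down to n stopping places.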
When the objects are not firmly ordered, however, we get an exponential number of possible partial solutions, as in the traveling salesman problem, where a partial tour can only be described by the set of vertices it has visited so far.
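A sketch of the Held–Karp dynamic program for TSP shows what this costs. Because the vertices may be visited in any order, a partial solution must record the entire subset of vertices seen so far, giving on the order of 2^n states – exponential, though still far better than trying all n! tours.

```python
from itertools import combinations

def held_karp(dist):
    """Cost of the cheapest tour visiting every city once, starting and
    ending at city 0.  dist is an n x n matrix of edge costs."""
    n = len(dist)
    # cost[(S, j)] = cheapest path that starts at 0, visits exactly the
    # vertex set S, and ends at j.  The set S is unavoidable state.
    cost = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for S in combinations(range(1, n), size):
            fs = frozenset(S)
            for j in S:
                rest = fs - {j}
                cost[(fs, j)] = min(cost[(rest, k)] + dist[k][j]
                                    for k in rest)
    full = frozenset(range(1, n))
    # Close the tour by returning to vertex 0.
    return min(cost[(full, j)] + dist[j][0] for j in range(1, n))
```

The table has O(2^n · n) entries, each evaluated in O(n) time, for O(2^n · n^2) total work: a dynamic program that is correct but exponential, because the unordered objects force an exponential state space.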