Warm-up Coding Interview: Dynamic Programming

Approximate String Matching

Searching for patterns in text strings is a problem of unquestionable importance.

To deal with inexact string matching, we must first define a cost function telling us how far apart two strings are – i.e., a distance measure between pairs of strings. A reasonable distance measure reflects the number of changes that must be made to convert one string to another. There are three natural types of changes:

  • Substitution – Replace a single character from pattern P with a different character in text T, such as changing “shot” to “spot”.
  • Insertion – Insert a single character into pattern P to help it match text T, such as changing “ago” to “agog”.
  • Deletion – Delete a single character from pattern P to help it match text T, such as changing “hour” to “our”.

Properly posing the question of string similarity requires us to set the cost of each of these string transform operations. Assigning each operation an equal cost of 1 defines the edit distance between two strings.

Approximate string matching seems like a difficult problem, because we must decide exactly where to delete and insert (potentially) many characters in pattern and text. But let us think about the problem in reverse. What information would we like to have to make the final decision? What can happen to the last character in the matching for each string?

Edit Distance by Recursion

We can define a recursive algorithm using the observation that the last character in the string must either be matched, substituted, inserted, or deleted. Chopping off the characters involved in the last edit operation leaves a pair of smaller strings. Let i and j denote the positions of the last characters in the relevant prefixes of P and T, respectively. There are three pairs of shorter strings after the last operation, corresponding to the strings remaining after a match/substitution, insertion, or deletion. If we knew the cost of editing these three pairs of smaller strings, we could decide which option leads to the best solution and choose that option accordingly. We can learn this cost through the magic of recursion.

More precisely, let D[i,j] be the minimum number of differences between [katex]P_1, P_2, \cdots, P_i[/katex] and the segment of T ending at j. D[i,j] is the minimum of the three possible ways to extend smaller strings:

  • If [katex](P_i = T_j)[/katex] then D[i-1, j-1], else D[i-1, j-1] + 1.
  • D[i,j-1] + 1. This means that there is an extra character in the pattern to account for.
  • D[i-1, j] + 1. This means that there is an extra character in the text to remove.
[cce_c]
#define MATCH  0    /* enumerated type symbol for match */
#define INSERT 1    /* enumerated type symbol for insert */
#define DELETE 2    /* enumerated type symbol for delete */

int string_compare(char *s, char *t, int i, int j)
{
    /* note: strings are stored starting from position 1, so s[i] and t[j]
       are the last characters of the prefixes being compared */
    int k;              /* counter */
    int opt[3];         /* cost of the three options */
    int lowest_cost;    /* cheapest of the three options */

    if (i == 0) return(j * indel(' '));
    if (j == 0) return(i * indel(' '));

    opt[MATCH] = string_compare(s, t, i-1, j-1) + match(s[i], t[j]);
    opt[INSERT] = string_compare(s, t, i, j-1) + indel(t[j]);
    opt[DELETE] = string_compare(s, t, i-1, j) + indel(s[i]);

    lowest_cost = opt[MATCH];
    for (k = INSERT; k <= DELETE; k++)
        if (opt[k] < lowest_cost) lowest_cost = opt[k];

    return(lowest_cost);
}
[/cce_c]

This program is absolutely correct – convince yourself. It also turns out to be impossibly slow.

Why is the algorithm so slow? It takes exponential time because it recomputes values again and again and again.

Edit Distance by Dynamic Programming

So how can we make this algorithm practical? The important observation is that most of these recursive calls are computing things that have been previously computed. There can only be [katex]|P| \cdot |T|[/katex] possible unique recursive calls, since there are only that many distinct (i,j) pairs to serve as the argument parameters of recursive calls.

A table-based dynamic programming implementation of this algorithm is given below. The table is a two-dimensional matrix m where each of the [katex]|P| \cdot |T|[/katex] cells contains the cost of the optimal solution to a subproblem, as well as a parent pointer explaining how we got to this location:

[cce_c]
typedef struct {
    int cost;       /* cost of reaching this cell */
    int parent;     /* parent cell */
} cell;

cell m[MAXLEN+1][MAXLEN+1];    /* dynamic programming table */
[/cce_c]

Our dynamic programming implementation has three differences from the recursive version. First, it gets its intermediate values using table lookup instead of recursive calls. Second, it updates the [cci]parent[/cci] field of each cell, which will enable us to reconstruct the edit sequence later. Third, it is implemented using a more general [cci]goal_cell()[/cci] function instead of just returning [cci]m[|P|][|T|].cost[/cci]. This will enable us to apply this routine to a wider class of problems.

[cce_c]
int string_compare(char *s, char *t)
{
    int i, j, k;    /* counters */
    int opt[3];     /* cost of the three options */

    for (i = 0; i < MAXLEN; i++) {
        row_init(i);
        column_init(i);
    }

    for (i = 1; i < strlen(s); i++)
        for (j = 1; j < strlen(t); j++) {
            opt[MATCH] = m[i-1][j-1].cost + match(s[i], t[j]);
            opt[INSERT] = m[i][j-1].cost + indel(t[j]);
            opt[DELETE] = m[i-1][j].cost + indel(s[i]);

            m[i][j].cost = opt[MATCH];
            m[i][j].parent = MATCH;
            for (k = INSERT; k <= DELETE; k++)
                if (opt[k] < m[i][j].cost) {
                    m[i][j].cost = opt[k];
                    m[i][j].parent = k;
                }
        }

    goal_cell(s, t, &i, &j);
    return(m[i][j].cost);
}
[/cce_c]

Reconstructing the Path

The string comparison function returns the cost of the optimal alignment, but not the alignment itself. Knowing you can convert “thou shalt not” to “you should not” in only five moves is dandy, but what is the sequence of editing operations that does it?

The possible solutions to a given dynamic programming problem are described by paths through the dynamic programming matrix, starting from the initial configuration (the pair of empty strings, (0,0)) down to the final goal state (the pair of full strings, (|P|, |T|)). The key to building the solution is to reconstruct the decisions made at every step along the optimal path that leads to the goal state. These decisions have been recorded in the [cci]parent[/cci] field of each array cell.

Reconstructing these decisions is done by walking backward from the goal state, following the [cci]parent[/cci] pointer back to an earlier cell. We repeat this process until we arrive back at the initial cell. The [cci]parent[/cci] field of [cci]m[i,j][/cci] tells us whether the operation at (i,j) was a [cci]MATCH[/cci], [cci]INSERT[/cci], or [cci]DELETE[/cci]. Tracing back through the parent matrix yields the edit sequence.

Walking backward reconstructs the solution in reverse order. However, clever use of recursion can do the reversing for us.

[cce_c]
void reconstruct_path(char *s, char *t, int i, int j)
{
    if (m[i][j].parent == -1) return;

    if (m[i][j].parent == MATCH) {
        reconstruct_path(s, t, i-1, j-1);
        match_out(s, t, i, j);
        return;
    }
    if (m[i][j].parent == INSERT) {
        reconstruct_path(s, t, i, j-1);
        insert_out(t, j);
        return;
    }
    if (m[i][j].parent == DELETE) {
        reconstruct_path(s, t, i-1, j);
        delete_out(s, i);
        return;
    }
}
[/cce_c]

For many problems, including edit distance, the path can be reconstructed from the cost matrix without explicitly retaining the last-move pointer array. The trick is working backward from the costs of the three possible ancestor cells and the corresponding string characters to reconstruct the move that took you to the current cell at the given cost.
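As a sketch of that idea, a pointer-free traceback for plain edit distance might look like the following. The name [cci]reconstruct_path_from_costs[/cci] is invented here; it reuses the cost helpers and traceback actions referenced above, and assumes the standard initialization where row 0 and column 0 hold pure insertions and deletions:

[cce_c]
/* Sketch: rebuild the edit sequence from costs alone.  Assumes the table m,
   match(), indel(), and the *_out() traceback actions referenced above, plus
   the plain edit distance initialization (m[i][0].cost = i, m[0][j].cost = j). */

void reconstruct_path_from_costs(char *s, char *t, int i, int j)
{
    if ((i == 0) && (j == 0)) return;    /* back at the initial cell */

    /* did a deletion from the cell above produce this cost? */
    if ((i > 0) && (m[i][j].cost == m[i-1][j].cost + indel(s[i]))) {
        reconstruct_path_from_costs(s, t, i-1, j);
        delete_out(s, i);
        return;
    }

    /* did an insertion from the cell to the left produce this cost? */
    if ((j > 0) && (m[i][j].cost == m[i][j-1].cost + indel(t[j]))) {
        reconstruct_path_from_costs(s, t, i, j-1);
        insert_out(t, j);
        return;
    }

    /* otherwise it must have been a match or substitution */
    reconstruct_path_from_costs(s, t, i-1, j-1);
    match_out(s, t, i, j);
}
[/cce_c]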

Varieties of Edit Distance

The [cci]string_compare[/cci] and path reconstruction routines reference several functions that we have not yet defined. These fall into four categories:

  • Table Initialization
  • Penalty Costs
  • Goal Cell Identification
  • Traceback Actions

Although dynamic programming algorithms are easy to design once you understand the technique, getting the details right requires careful thinking and thorough testing.
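For the plain edit distance computation above, one reasonable set of defaults for these stubs is sketched below. It assumes the table [cci]m[/cci], [cci]MAXLEN[/cci], and the [cci]MATCH[/cci]/[cci]INSERT[/cci]/[cci]DELETE[/cci] symbols declared earlier, plus [cci]<stdio.h>[/cci] and [cci]<string.h>[/cci]; your own versions may differ in detail:

[cce_c]
void row_init(int i)            /* table initialization: the 0th row costs i insertions */
{
    m[0][i].cost = i;
    if (i > 0) m[0][i].parent = INSERT;
    else m[0][i].parent = -1;
}

void column_init(int i)         /* table initialization: the 0th column costs i deletions */
{
    m[i][0].cost = i;
    if (i > 0) m[i][0].parent = DELETE;
    else m[i][0].parent = -1;
}

int match(char c, char d)       /* penalty cost: matches are free, substitutions cost 1 */
{
    if (c == d) return(0);
    return(1);
}

int indel(char c)               /* penalty cost: every insertion or deletion costs 1 */
{
    return(1);
}

void goal_cell(char *s, char *t, int *i, int *j)    /* goal: both strings fully consumed */
{
    *i = strlen(s) - 1;
    *j = strlen(t) - 1;
}

void match_out(char *s, char *t, int i, int j)      /* traceback action for match/substitute */
{
    if (s[i] == t[j]) printf("M");
    else printf("S");
}

void insert_out(char *t, int j)                     /* traceback action for insert */
{
    printf("I");
}

void delete_out(char *s, int i)                     /* traceback action for delete */
{
    printf("D");
}
[/cce_c]

With these defaults, and strings stored from position 1 with a dummy character in position 0, [cci]string_compare(s, t)[/cci] returns the edit distance and [cci]reconstruct_path(s, t, strlen(s)-1, strlen(t)-1)[/cci] prints the operation sequence.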

This may seem to be a lot of infrastructure to develop for such a simple algorithm. However, several important problems can now be solved as special cases of edit distance using only minor changes to some of these stub functions:

  • Substring Matching – Suppose that we want to find where a short pattern P best occurs within a long text T – say, searching for “Skiena” in all its misspellings within a long file. Plugging this search into our original edit distance function will achieve little sensitivity, since the vast majority of any edit cost will consist of deleting that which is not “Skiena” from the body of the text.

We want an edit distance search where the cost of starting the match is independent of the position in the text, so that a match in the middle is not prejudiced against. Now the goal state is not necessarily at the end of both strings, but rather the cheapest place to match the entire pattern somewhere in the text. Modifying two of these stub functions (row initialization and goal cell identification) gives us the correct solution:
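A sketch of how those two stubs might change, keeping the table and index-from-1 conventions used above:

[cce_c]
void row_init(int i)            /* a match can now begin at any position in the text */
{
    m[0][i].cost = 0;           /* note the change from the default */
    m[0][i].parent = -1;        /* note the change from the default */
}

void goal_cell(char *s, char *t, int *i, int *j)
{
    int k;                      /* counter */

    *i = strlen(s) - 1;         /* the entire pattern must still be matched... */
    *j = 0;
    for (k = 1; k < strlen(t); k++)      /* ...but it may end at the cheapest text position */
        if (m[*i][k].cost < m[*i][*j].cost) *j = k;
}
[/cce_c]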

  • Longest Common Subsequence – Perhaps we are interested in finding the longest scattered string of characters included within both strings.

A common subsequence is defined by all the identical-character matches in an edit trace. To maximize the number of such matches, we must prevent substitution of nonidentical characters. With substitution forbidden, the only way to get rid of the noncommon subsequence elements is through insertion and deletion. The minimum cost alignment has the fewest such “in-dels”, so it must preserve the longest common subsequence. We get the alignment we want by changing the match-cost function to make substitutions expensive:

[cce_c]
int match(char c, char d)
{
    if (c == d) return(0);
    else return(MAXLEN);
}
[/cce_c]
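Since substitutions are now priced out of contention, the returned cost counts only insertions and deletions, so the length of the longest common subsequence can be recovered as (|P| + |T| - D[|P|, |T|]) / 2, where D[|P|, |T|] is the returned cost.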
  • Maximum Monotone Subsequence – A numerical sequence is monotonically increasing if the ith element is at least as big as the (i-1)st element. The maximum monotone subsequence problem seeks to delete the fewest elements from an input S to leave a monotonically increasing subsequence.

In fact, this is just a longest common subsequence problem, where the second string is the elements of S sorted in increasing order. Any common subsequence of these two must (a) represent characters in proper order in S, and (b) use only characters with increasing position in the collating sequence – so the longest one does the job. Of course, this approach can be modified to give the longest decreasing sequence by simply reversing the sorted order.
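As a sketch of this reduction, assuming a sequence short enough to encode as a string of digits, the index-from-1 convention above, and the expensive-substitution [cci]match()[/cci] just shown (the names [cci]max_monotone_subsequence[/cci] and [cci]char_cmp[/cci] are invented here):

[cce_c]
#include <stdlib.h>
#include <string.h>

/* comparison callback for qsort: order characters by increasing value */
static int char_cmp(const void *a, const void *b)
{
    return (*(const char *) a - *(const char *) b);
}

/* length of the longest monotonically increasing subsequence of the digit
   string s[1..n] (position 0 holds the usual unused dummy character) */
int max_monotone_subsequence(char *s)
{
    char sorted[MAXLEN+1];      /* second string: the digits of s in increasing order */
    int n = strlen(s) - 1;      /* number of real elements in s */
    int cost;                   /* in-del cost of the optimal alignment */

    strcpy(sorted, s);
    qsort(sorted + 1, n, sizeof(char), char_cmp);

    cost = string_compare(s, sorted);   /* acts as LCS, since substitution is forbidden */
    return((2 * n - cost) / 2);         /* cost = 2 * (n - LCS length) */
}
[/cce_c]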

As you can see, our edit distance routine can be made to do many amazing things easily. The trick is observing that your problem is just a special case of approximate string matching.
