Warm-up Coding Interview: Combinatorial Search and Heuristic Methods

In this section, we introduce backtracking as a technique for listing all possible solutions for a combinatorial algorithm problem. We illustrate the power of clever pruning techniques to speed up real search applications. For problems that are too large to contemplate using brute-force combinatorial search, we introduce heuristic methods such as simulated annealing. Such heuristic methods are important weapons in any practical algorist’s arsenal.

Backtracking

Backtracking is a systematic way to iterate through all the possible configurations of a search space. These configurations may represent all possible arrangements of objects (permutations) or all possible ways of building a collection of them (subsets). Other situations may demand enumerating all spanning trees of a graph, all paths between two vertices, or all possible ways to partition vertices into color classes.

What these problems have in common is that we must generate each possible configuration exactly once. Avoiding both repetitions and missing configurations means that we must define a systematic generation order. We will model our combinatorial search solution as a vector [katex]a = (a_1, a_2, \cdots, a_n)[/katex], where each element [katex]a_i[/katex] is selected from a finite ordered set [katex]S_i[/katex]. The vector can even represent a sequence of moves in a game or a path in a graph, where [katex]a_i[/katex] contains the ith event in the sequence.

At each step in the backtracking algorithm, we try to extend a given partial solution [katex]a = (a_1, a_2, \dots, a_k)[/katex] by adding another element at the end. After extending it, we must test whether what we now have is a solution: if so, we should print it or count it. If not, we must check whether the partial solution is still potentially extendible to some complete solution.

Backtracking constructs a tree of partial solutions, where each vertex represents a partial solution. There is an edge from x to y if node y was created by advancing from x. This tree of partial solutions provides an alternative way to think about backtracking, for the process of constructing the solutions corresponds exactly to doing a depth-first traversal of the backtrack tree. Viewing backtracking as a depth-first search on an implicit graph yields a natural recursive implementation of the basic algorithm.

Although a breadth-first search could also be used to enumerate solutions, a depth-first search is greatly preferred because it uses less space. The current state of a search is completely represented by the path from the root of the tree down to the current node, which requires space proportional to the height of the tree. In breadth-first search, the queue stores all the nodes at the current level, which is proportional to the width of the search tree. For most interesting problems, the width of the tree grows exponentially with its height.

Implementation

The honest working [cci]backtrack[/cci] code is given below:

[cce_c]
bool finished = FALSE;                  /* found all solutions yet? */

backtrack(int a[], int k, data input)
{
    int c[MAXCANDIDATES];               /* candidates for next position */
    int ncandidates;                    /* next position candidate count */
    int i;                              /* counter */

    if (is_a_solution(a, k, input))
        process_solution(a, k, input);
    else {
        k = k + 1;
        construct_candidates(a, k, input, c, &ncandidates);
        for (i = 0; i < ncandidates; i++) {
            a[k] = c[i];
            make_move(a, k, input);
            backtrack(a, k, input);
            unmake_move(a, k, input);
            if (finished) return;       /* terminate early */
        }
    }
}
[/cce_c]

Backtracking ensures correctness by enumerating all possibilities. It ensures efficiency by never visiting a state more than once.

Study how recursion yields an elegant and easy implementation of the backtracking algorithm. Because a new candidates array [cci]c[/cci] is allocated with each recursive procedure call, the subsets of not-yet-considered extension candidates at each position will not interfere with each other.

The application-specific parts of this algorithm consist of five subroutines:

  • [cci]is_a_solution(a, k, input)[/cci] – This Boolean function tests whether the first [cci]k[/cci] elements of vector [cci]a[/cci] form a complete solution for the given problem.
  • [cci]construct_candidates(a, k, input, c, ncandidates)[/cci] – This routine fills an array [cci]c[/cci] with the complete set of possible candidates for the [cci]k[/cci]th position of a, given the contents of the first [cci]k-1[/cci] positions. The number of candidates returned in this array is denoted by [cci]ncandidates[/cci].
  • [cci]process_solution(a, k, input)[/cci] – This routine prints, counts, or otherwise processes a complete solution once it is constructed.
  • [cci]make_move(a, k, input)[/cci] and [cci]unmake_move(a, k, input)[/cci] – These routines enable us to modify a data structure in response to the latest move, as well as clean up this data structure if we decide to take back the move.

These calls function as null stubs in all of this section’s examples, but will be employed in the Sudoku program.

We include a global [cci]finished[/cci] flag to allow for premature termination, which could be set in any application-specific routine.

To really understand how backtracking works, you must see how such objects as permutations and subsets can be constructed by defining the right state spaces.

Constructing All Subsets

To construct all [katex]2^n[/katex] subsets, we set up an array/vector of n cells, where the value of [katex]a_i[/katex] (true or false) signifies whether the ith item is in the given subset. In the scheme of our general backtrack algorithm, [katex]S_k = (true, false)[/katex] and a is a solution whenever k = n. We can now construct all subsets with simple implementations of [cci]is_a_solution()[/cci], [cci]construct_candidates()[/cci], and [cci]process_solution()[/cci].

[cce_c]
is_a_solution(int a[], int k, int n)
{
    return (k == n);
}

construct_candidates(int a[], int k, int n, int c[], int *ncandidates)
{
    c[0] = TRUE;
    c[1] = FALSE;
    *ncandidates = 2;
}

process_solution(int a[], int k)
{
    int i;                              /* counter */

    printf("{");
    for (i = 1; i <= k; i++)
        if (a[i] == TRUE) printf(" %d", i);
    printf(" }\n");
}
[/cce_c]

Printing out each subset after constructing it proves to be the most complicated of the three routines!

Finally, we must instantiate the call to [cci]backtrack[/cci] with the right arguments. Specifically, this means giving a pointer to the empty solution vector, setting [cci]k = 0[/cci] to denote that it is empty, and specifying the number of elements in the universal set:

[cce_c]
generate_subsets(int n)
{
    int a[NMAX];

    backtrack(a, 0, n);
}
[/cce_c]

Constructing All Paths in a Graph

Enumerating all the simple s to t paths through a given graph is a more complicated problem than listing permutations or subsets. There is no explicit formula that counts the number of solutions as a function of the number of edges or vertices, because the number of paths depends upon the structure of the graph.

The starting point of any path from s to t is always s. Thus, s is the only candidate for the first position and [katex]S_1 = \{s\}[/katex]. The possible candidates for the second position are the vertices v such that (s,v) is an edge of the graph, for the path wanders from vertex to vertex using edges to define the legal steps. In general, [katex]S_{k+1}[/katex] consists of the set of vertices adjacent to [katex]a_k[/katex] that have not been used elsewhere in the partial solution A.

[cce_c]
construct_candidates(int a[], int k, int n, int c[], int *ncandidates)
{
    int i;                              /* counter */
    bool in_sol[NMAX];                  /* what is already in the solution? */
    edgenode *p;                        /* temporary pointer */
    int last;                           /* last vertex on current path */

    for (i = 1; i < NMAX; i++) in_sol[i] = FALSE;
    for (i = 1; i < k; i++) in_sol[a[i]] = TRUE;

    if (k == 1) {                       /* always start the path from s (vertex 1) */
        c[0] = 1;
        *ncandidates = 1;
    }
    else {
        *ncandidates = 0;
        last = a[k-1];
        p = g.edges[last];              /* g is the graph being searched */
        while (p != NULL) {
            if (!in_sol[p->y]) {
                c[*ncandidates] = p->y;
                *ncandidates = *ncandidates + 1;
            }
            p = p->next;
        }
    }
}
[/cce_c]

We report a successful path whenever [cci]a_k = t[/cci].

[cce_c]
is_a_solution(int a[], int k, int t)
{
    return (a[k] == t);
}

process_solution(int a[], int k)
{
    solution_count ++;                  /* count all s to t paths */
}
[/cce_c]

The solution vector A must have room for all n vertices, although most paths are likely shorter than this.

Search Pruning

Backtracking ensures correctness by enumerating all possibilities. Enumerating all n! permutations of the n vertices of the graph and selecting the best one yields a correct algorithm to find the optimal traveling salesman tour. For each permutation, we could check whether all edges implied by the tour really exist in the graph G, and if so, add the weights of these edges together.

Pruning is the technique of cutting off the search the instant we have established that a partial solution cannot be extended into a full solution.
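To make the idea concrete, here is a minimal sketch of a pruned exhaustive TSP search. The global [cci]dist[/cci] matrix, [cci]best_cost[/cci] variable, and helper names are illustrative assumptions rather than part of the general [cci]backtrack[/cci] framework above; the point is the one-line bound that abandons any partial tour already costing at least as much as the best complete tour found so far.

[cce_c]
#include <math.h>                       /* for HUGE_VAL */

#define MAXN 20

int n;                                  /* number of vertices */
double dist[MAXN+1][MAXN+1];            /* edge weights (complete graph assumed) */
double best_cost;                       /* cheapest complete tour seen so far */

void tsp_search(int tour[], int k, double partial_cost)
{
    int v, i, used;

    if (partial_cost >= best_cost) return;      /* prune: cannot beat the best tour */

    if (k == n) {                                /* complete tour: close the cycle */
        double total = partial_cost + dist[tour[n]][tour[1]];
        if (total < best_cost) best_cost = total;
        return;
    }

    for (v = 2; v <= n; v++) {                   /* vertex 1 is fixed as the start */
        used = 0;
        for (i = 1; i <= k; i++)
            if (tour[i] == v) used = 1;
        if (used) continue;

        tour[k+1] = v;
        tsp_search(tour, k+1, partial_cost + dist[tour[k]][v]);
    }
}

void solve_tsp(void)
{
    int tour[MAXN+1];

    tour[1] = 1;                                 /* the tour starts at vertex 1 */
    best_cost = HUGE_VAL;
    tsp_search(tour, 1, 0.0);
}
[/cce_c]

Even this simple bound can cut the effective search space dramatically, since an expensive prefix is abandoned before it spawns its exponentially many descendants.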

Exploiting symmetry is another avenue for reducing combinatorial searches. Pruning away partial solutions identical to those previously considered requires recognizing underlying symmetries in the search space.

Sudoku

Backtracking lends itself nicely to the problem of solving Sudoku puzzles. We will use the puzzle here to better illustrate the algorithmic technique. Our state space will be the sequence of open squares, each of which must ultimately be filled in with a number. The candidates for open squares (i,j) are exactly the integers from 1 to 9 that have not yet appeared in row i, column j, or the 3 * 3 sector containing (i,j). We backtrack as soon as we are out of candidates for a square.

The solution vector a supported by backtrack only accepts a single integer per position. This is enough to store the contents of a board square, but not the coordinates of that square. Thus, we keep a separate array of [cci]move[/cci] positions as part of our [cci]board[/cci] data type provided below. The basic data structures we need to support our solution are:

[cce_c]
#define DIMENSION 9
#define NCELLS DIMENSION*DIMENSION

typedef struct {
    int x, y;
} point;

typedef struct {
    int m[DIMENSION+1][DIMENSION+1];    /* matrix of board contents */
    int freecount;                      /* how many open squares remain? */
    point move[NCELLS+1];               /* how did we fill the squares? */
} boardtype;
[/cce_c]

Constructing the candidates for the next solution position involves first picking the open square we want to fill next ([cci]next_square[/cci]), and then identifying which numbers are candidates to fill that square ([cci]possible_values[/cci]). These routines are basically bookkeeping, although the subtle details of how they work can have an enormous impact on performance.

[cce_c]
construct_candidates(int a[], int k, boardtype *board, int c[], int *ncandidates)
{
    int x, y;
    int i;
    bool possible[DIMENSION+1];

    next_square(&x, &y, board);
    board->move[k].x = x;
    board->move[k].y = y;

    *ncandidates = 0;
    if ((x<0) && (y<0)) return;         /* error condition, no moves possible */

    possible_values(x, y, board, possible);
    for (i=0; i<=DIMENSION; i++)
        if (possible[i] == TRUE) {
            c[*ncandidates] = i;
            *ncandidates = *ncandidates + 1;
        }
}
[/cce_c]

We must update our [cci]board[/cci] data structure to reflect the effect of filling a candidate value into a square, as well as remove these changes should we backtrack away from this position. These updates are handled by [cci]make_move[/cci] and [cci]unmake_move[/cci], both of which are called directly from [cci]backtrack[/cci]:

[cce_c]
make_move(int a[], int k, boardtype *board)
{
    fill_square(board->move[k].x, board->move[k].y, a[k], board);
}

unmake_move(int a[], int k, boardtype *board)
{
    free_square(board->move[k].x, board->move[k].y, board);
}
[/cce_c]

One important job for these board update routines is maintaining how many free squares remain on the board. A solution is found when there are no more free squares remaining to be filled:

[cce_c]
is_a_solution(int a[], int k, boardtype *board)
{
    if (board->freecount == 0)
        return (TRUE);
    else
        return (FALSE);
}
[/cce_c]

We print the configuration and turn off the backtrack search by setting the global [cci]finished[/cci] flag on finding a solution.

[cce_c]
process_solution(int a[], int k, boardtype *board)
{
    print_board(board);
    finished = TRUE;
}
[/cce_c]

Two reasonable ways to select the next square are:

  • Arbitrary Square Selection – Pick the first open square we encounter, possibly picking the first, the last, or a random open square.
  • Most Constrained Square Selection – Here, we check each of the open squares (i,j) to see how many number candidates remain for each – i.e., have not already been used in either row i, column j, or the sector containing (i,j). We pick the square with the fewest number of candidates.

Although both possibilities work correctly, the second option is much, much better. Often there will be open squares with only one remaining candidate.
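Here is a minimal sketch of the most constrained selection rule. It assumes a helper [cci]count_candidates(i, j, board)[/cci] that returns how many values remain legal for square (i, j), and that a value of 0 in [cci]m[i][j][/cci] marks an open square; both conventions are assumptions, not details fixed by the data structure above.

[cce_c]
next_square(int *x, int *y, boardtype *board)
{
    int i, j;                           /* counters */
    int best = DIMENSION + 2;           /* more candidates than any square can have */

    *x = -1;                            /* (-1,-1) signals that no open square remains */
    *y = -1;

    for (i = 1; i <= DIMENSION; i++)
        for (j = 1; j <= DIMENSION; j++)
            if (board->m[i][j] == 0) {  /* open square */
                int count = count_candidates(i, j, board);
                if (count < best) {
                    best = count;
                    *x = i;
                    *y = j;
                }
            }
}
[/cce_c]

When the most constrained square has a single candidate, the move is forced; when it has zero candidates, [cci]construct_candidates[/cci] returns nothing and we backtrack immediately.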

Our final decision concerns the [cci]possible_values[/cci] we allow for each square. We have two possibilities:

  • Local Count – Allow any value from 1 to 9 that has not already appeared in the given row, column, or sector. This is the least amount of work needed to keep the search correct.
  • Look ahead – But what if our current partial solution has some other open square where there are no candidates remaining under the local count criteria? There is no possible way to complete this partial solution into a full Sudoku grid, so we should prune the search immediately.

Successful pruning requires looking ahead to see when a solution is doomed to go nowhere, and backing off as soon as possible.
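Here is a minimal local-count sketch of [cci]possible_values[/cci], again assuming that 0 marks an open square; the sector arithmetic is the standard 3 * 3 indexing and not a detail given above.

[cce_c]
possible_values(int x, int y, boardtype *board, bool possible[])
{
    int i, j;                           /* counters */
    int xlow, ylow;                     /* top-left corner of the sector of (x,y) */
    int v;                              /* candidate value */

    possible[0] = FALSE;
    for (v = 1; v <= DIMENSION; v++) possible[v] = TRUE;

    for (i = 1; i <= DIMENSION; i++) {
        possible[board->m[x][i]] = FALSE;   /* values already used in row x */
        possible[board->m[i][y]] = FALSE;   /* values already used in column y */
    }

    xlow = 3 * ((x - 1) / 3) + 1;
    ylow = 3 * ((y - 1) / 3) + 1;
    for (i = xlow; i < xlow + 3; i++)       /* values already used in the sector */
        for (j = ylow; j < ylow + 3; j++)
            possible[board->m[i][j]] = FALSE;
}
[/cce_c]

A look-ahead variant would additionally scan every other open square and report zero candidates for the current one whenever some square has no legal value left, forcing an immediate backtrack instead of filling doomed squares.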

Heuristic Search Methods

Heuristic methods provide an alternative way to approach difficult combinatorial optimization problems, for any algorithm that searches all configurations is doomed to be hopeless on large instances.

In particular, we will look at three different heuristic search methods: random sampling, gradient-descent search, and simulated annealing. The traveling salesman problem will be our ongoing example for comparing heuristics. All three methods have two common components:

  • Solution space representation – This is a complete yet concise description of the set of possible solutions for the problem. For traveling salesman, the solution space consists of (n-1)! elements — namely all possible circular permutations of the vertices. We need a data structure to represent each element of the solution space. For TSP, the candidate solutions can naturally be represented using an array S of n-1 vertices, where [katex]S_i[/katex] defines the (i+1)st vertex on the tour starting from [katex]v_1[/katex].
  • Cost function – Search methods need a cost or evaluation function to assess the quality of each element of the solution space. Our search heuristic identifies the element with the best possible score – either highest or lowest depending upon the nature of the problem. For TSP, the cost function for evaluating a given candidate solution S should just sum up the costs involved, namely the weight of all edges [katex](S_i, S_{i+1})[/katex], where [katex]S_{n+1}[/katex] denotes [katex]v_1[/katex]. A sketch of one such cost function follows this list.
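Here is a minimal sketch of such a cost function. It assumes [cci]tsp_solution[/cci] stores the tour as an array [cci]p[1..n][/cci] of vertex indices and [cci]tsp_instance[/cci] stores the vertex count [cci]n[/cci] and a symmetric distance matrix [cci]dist[/cci]; these field names are assumptions, not necessarily the structures behind the routines shown below.

[cce_c]
double solution_cost(tsp_solution *s, tsp_instance *t)
{
    int i;                              /* counter */
    double cost;                        /* running tour cost */

    cost = t->dist[s->p[t->n]][s->p[1]];        /* closing edge back to the start */
    for (i = 1; i < t->n; i++)
        cost = cost + t->dist[s->p[i]][s->p[i+1]];

    return (cost);
}
[/cce_c]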

Random Sampling

The simplest method to search in a solution space uses random sampling. It is also called the Monte Carlo method. We repeatedly construct random solutions and evaluate them, stopping as soon as we get a good enough solution, or (more likely) when we are tired of waiting. We report the best solution found over the course of our sampling.

True random sampling requires that we are able to select elements from the solution space uniformly at random. This means that each of the elements of the solution space must have an equal probability of being the next candidate selected. Such uniform sampling can be a surprisingly subtle problem.

[cce_c]
random_sampling(tsp_instance *t, int nsamples, tsp_solution *bestsol)
{
    tsp_solution s;                     /* current tsp solution */
    double best_cost;                   /* best cost so far */
    double cost_now;                    /* current cost */
    int i;                              /* counter */

    initialize_solution(t->n, &s);
    best_cost = solution_cost(&s, t);
    copy_solution(&s, bestsol);

    for (i=1; i<=nsamples; i++) {
        random_solution(&s);
        cost_now = solution_cost(&s, t);
        if (cost_now < best_cost) {
            best_cost = cost_now;
            copy_solution(&s, bestsol);
        }
    }
}
[/cce_c]

When might random sampling do well?

  • When there is a high proportion of acceptable solutions – Finding prime numbers is a domain where a random search proves successful. Generating large random prime numbers for keys is an important aspect of cryptographic systems such as RSA. Roughly one out of every [katex]\ln n[/katex] integers is prime, so only a modest number of samples need to be taken to discover primes that are several hundred digits long.
  • When there is no coherence in the solution space – Random sampling is the right thing to do when there is no sense of when we are getting closer to a solution.

Consider again the problem of hunting for a large prime number. Primes are scattered quite arbitrarily among the integers. Random sampling is as good as anything else.

How does random sampling do on TSP? Pretty lousy.

Most problems we encounter, like TSP, have relatively few good solutions but a highly coherent solution space. More powerful heuristic search algorithms are required to deal effectively with such problems.

Stop and Think: Picking the Pair

Problem: We need an efficient and unbiased way to generate random pairs of vertices to perform random vertex swaps. Propose an efficient algorithm to generate elements from the [katex]\binom{n}{2}[/katex] unordered pairs on [katex]\{1, \dots, n\}[/katex] uniformly at random.
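One reasonable approach, offered only as a sketch: draw the two endpoints independently and reject ties. Every ordered pair (i, j) with i != j is then equally likely, so each of the [katex]\binom{n}{2}[/katex] unordered pairs is generated with equal probability.

[cce_c]
#include <stdlib.h>                     /* for rand() */

random_pair(int n, int *i, int *j)
{
    do {
        *i = 1 + rand() % n;            /* uniform in 1..n (ignoring modulo bias) */
        *j = 1 + rand() % n;
    } while (*i == *j);                 /* reject ties so every pair stays equally likely */

    if (*i > *j) {                      /* report the pair in canonical order */
        int tmp = *i;
        *i = *j;
        *j = tmp;
    }
}
[/cce_c]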

Local Search

A local search employs the local neighborhood around every element in the solution space. Think of each element x in the solution space as a vertex, with a directed edge (x, y) to every candidate solution y that is a neighbor of x. Our search proceeds from x to the most promising candidate in x’s neighborhood.

We certainly do not want to explicitly construct this neighborhood graph for any sizable solution space. We are conducting a heuristic search precisely because we cannot hope to do these many operations in a reasonable amount of time.

Instead, we want a general transition mechanism that takes us to the next solution by slightly modifying the current one. Typical transition mechanisms include swapping a random pair of items, or changing (inserting or deleting) a single item in the solution.

The most obvious transition mechanism for TSP would be to swap the current tour positions of a random pair of vertices S_i and S_j.

A local search heuristic starts from an arbitrary element of the solution space, and then scans the neighborhood looking for a favorable transition to take. For TSP, this would be any transition that lowers the cost of the tour. In a hill-climbing procedure, we try to find the top of a mountain (or alternately, the lowest point in a ditch) by starting at some arbitrary point and taking any step that leads in the direction we want to travel. We repeat until we have reached a point where all our neighbors lead us in the wrong direction.

Hill-climbing and closely related heuristics such as greedy search or gradient descent search are great at finding local optima quickly, but often fail to find the globally best solution.

[cce_c]
hill_climbing(tsp_instance *t, tsp_solution *s)
{
    double cost;
    double delta;
    int i, j;
    bool stuck;
    double transition();

    initialize_solution(t->n, s);
    random_solution(s);
    cost = solution_cost(s, t);

    do {
        stuck = TRUE;
        for (i=1; i<t->n; i++)
            for (j=i+1; j<=t->n; j++) {
                delta = transition(s, t, i, j);
                if (delta < 0) {
                    stuck = FALSE;
                    cost = cost + delta;
                } else
                    transition(s, t, j, i);     /* undo the swap */
            }
    } while (!stuck);
}
[/cce_c]

When does local search do well?

  • When there is great coherence in the solution space – Hill climbing is at its best when the solution space is convex. In other words, it consists of exactly one hill.
  • Whenever the cost of incremental evaluation is much cheaper than global evaluation – It costs [katex]\Theta(n)[/katex] to evaluate the cost of an arbitrary n-vertex candidate TSP solution, because we must total the cost of each edge in the circular permutation describing the tour. Once that is found, however, the cost of the tour after swapping a given pair of vertices can be determined in constant time, as in the sketch below.

If we are given a very large value of n and a very small budget of how much time we can spend searching, we are better off using it to do several incremental evaluations than a few random samples, even if we are looking for a needle in a haystack.
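Here is a minimal sketch of such an incremental evaluation, matching the way [cci]transition()[/cci] is used by the hill-climbing code above: it applies the swap and returns the resulting change in tour cost. The [cci]s->p[/cci] and [cci]t->dist[/cci] fields are the same assumed representation as in the cost-function sketch earlier.

[cce_c]
/* total weight of the two tour edges incident to position i */
static double incident_cost(tsp_solution *s, tsp_instance *t, int i)
{
    int prev = (i == 1) ? t->n : i - 1;         /* the tour wraps around */
    int next = (i == t->n) ? 1 : i + 1;

    return (t->dist[s->p[prev]][s->p[i]] + t->dist[s->p[i]][s->p[next]]);
}

/* swap the vertices at tour positions i and j; return the change in cost */
double transition(tsp_solution *s, tsp_instance *t, int i, int j)
{
    double before, after;
    int tmp;

    before = incident_cost(s, t, i) + incident_cost(s, t, j);

    tmp = s->p[i];                              /* apply the swap */
    s->p[i] = s->p[j];
    s->p[j] = tmp;

    after = incident_cost(s, t, i) + incident_cost(s, t, j);
    return (after - before);                    /* O(1) versus Theta(n) for a full evaluation */
}
[/cce_c]

Only the handful of edges touching the two swapped positions change, and (for symmetric distances) the edge shared by adjacent positions cancels out of the difference, so the delta really is constant time regardless of n.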

The primary drawback of local search is that soon there isn’t anything left for us to do: once we reach a local optimum, we are stuck there.

How does local search do on TSP? Much better than random sampling for a similar amount of time.

Simulated Annealing

Simulated annealing is a heuristic search procedure that allows occasional transitions leading to more expensive (and hence inferior) solutions. This may not sound like progress, but it helps keep our search from getting stuck in local optima.

The inspiration for simulated annealing comes from the physical process of cooling molten materials down to the solid state. In the thermodynamic theory, the energy state of a system is described by the energy state of each particle constituting it. A particle’s energy state jumps about randomly, with such transitions governed by the temperature of the system. In particular, the transition probability [katex]P(e_i, e_j, T)[/katex] from energy [katex]e_i[/katex] to [katex]e_j[/katex] at temperature [katex]T[/katex] is given by

[katex]P(e_i, e_j, T) = e^{(e_i - e_j)/(k_B T)}[/katex]

where [katex]k_B[/katex] is a constant – called Boltzmann’s constant.

What does this formula mean? Consider the value of the exponent under different conditions. The probability of moving from a high-energy state to a lower-energy state is very high. But, there is still a nonzero probability of accepting a transition into a high-energy state, with such small jumps much more likely than big ones. The higher the temperature, the more likely energy jumps will occur.

Through random transitions generated according to the given probability distribution, we can mimic the physics to solve arbitrary combinatorial optimization problems.
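The following bare-bones annealing loop for TSP is offered only as a sketch: it reuses [cci]random_solution[/cci], [cci]solution_cost[/cci], and the swap [cci]transition()[/cci] from the local-search discussion, and the cooling constants are illustrative placeholders rather than tuned values.

[cce_c]
#include <math.h>                       /* for exp() */
#include <stdlib.h>                     /* for rand() */

#define INITIAL_TEMPERATURE 1.0
#define COOLING_FRACTION    0.97
#define COOLING_STEPS       400
#define STEPS_PER_TEMP      1000

anneal(tsp_instance *t, tsp_solution *s)
{
    double temperature = INITIAL_TEMPERATURE;
    double current_cost;                /* cost of the current tour */
    double delta;                       /* cost change from a candidate swap */
    int i, j;                           /* tour positions to swap */
    int step, iter;                     /* counters */

    initialize_solution(t->n, s);
    random_solution(s);
    current_cost = solution_cost(s, t);

    for (step = 0; step < COOLING_STEPS; step++) {
        temperature = temperature * COOLING_FRACTION;
        for (iter = 0; iter < STEPS_PER_TEMP; iter++) {
            i = 1 + rand() % t->n;
            j = 1 + rand() % t->n;
            if (i == j) continue;

            delta = transition(s, t, i, j);             /* apply the swap, get the cost change */
            if (delta < 0)
                current_cost = current_cost + delta;    /* downhill: always accept */
            else if (exp(-delta / temperature) > (double) rand() / RAND_MAX)
                current_cost = current_cost + delta;    /* occasionally accept uphill moves */
            else
                transition(s, t, j, i);                 /* reject: undo the swap */
        }
    }
    /* a production version would also remember the best tour seen, as random_sampling does */
}
[/cce_c]

At high temperature the acceptance test lets many uphill swaps through, so the search wanders broadly; as the temperature falls, the procedure behaves more and more like pure hill climbing.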

Applications of Simulated Annealing

We provide several examples to demonstrate how these components can lead to elegant simulated annealing solutions for real combinatorial search problems.

Maximum Cut

Independent Set

An “independent set” of a graph G is a subset of vertices S such that there is no edge with both endpoints in S. Finding large independent sets arises in dispersion problems associated with facility location and coding theory.

The natural state space for a simulated annealing solution would be all 2^n subsets of the vertices, represented as a bit vector. As with maximum cut, a simple transition mechanism would add or delete one vertex from S.

One natural cost function for subset S might be 0 if S contains an edge, and |S| if it is indeed an independent set. This function ensures that we work towards an independent set at all times.
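A minimal sketch of this cost function, assuming the graph is stored as an adjacency matrix [cci]adj[/cci] and the current subset as a bit vector [cci]in_s[/cci] (both names are illustrative):

[cce_c]
int independent_set_cost(int n, bool adj[][NMAX], bool in_s[])
{
    int i, j;                           /* counters */
    int size = 0;                       /* vertices currently in S */

    for (i = 1; i <= n; i++)
        if (in_s[i]) {
            size = size + 1;
            for (j = i + 1; j <= n; j++)
                if (in_s[j] && adj[i][j])
                    return (0);         /* an internal edge: S is not independent */
        }

    return (size);                      /* |S| when S is an independent set */
}
[/cce_c]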

Circuit Board Placement

Parallel Algorithms

Other Heuristic Search Methods

Popular methods include genetic algorithms, neural networks, and ant colony optimization.

