The authors introduce the concept of dynamic programming in this chapter, where "programming" refers to a tabular method. Unlike the divide-and-conquer technique, where a problem is divided into independent sub-problems, dynamic programming is applicable when the sub-problems are not independent and share common sub-sub-problems. A divide-and-conquer approach does more work than necessary, solving the common sub-sub-problems repeatedly. A dynamic programming algorithm solves every sub-sub-problem just once and stores the result in a table, thereby avoiding the work of recomputing the answer every time the sub-sub-problem is encountered. This simple technique sometimes transforms exponential-time algorithms into polynomial-time algorithms. Dynamic programming is typically applicable to optimization problems, where there are many possible solutions, each with a value attached, and we need to find a solution with an optimal value.
The development of a dynamic programming algorithm is divided into four steps:
- Characterize the structure of an optimal solution.
- Recursively define the value of an optimal solution.
- Compute the value of an optimal solution in a bottom-up approach.
- Construct an optimal solution from computed information.
The authors first apply the above steps to the assembly-line scheduling problem, in which a chassis must pass through stations 1 through n on one of two assembly lines, given the assembly time at each station, the transfer times between the lines, and the entry and exit times of each line:
- The structure of an optimal solution: The authors prove that an optimal solution to the problem contains an optimal solution to a sub-problem, i.e. the fastest way through station S1,j contains the fastest way through either station S1,j-1 or station S2,j-1.
- The recursive solution: The authors define f1[j] (and symmetrically f2[j]) to be the fastest time to get a chassis all the way from the starting point through station S1,j and derive the following recursive relationship: f1[j] = e1 + a1,1 if j = 1, and f1[j] = min(f1[j-1] + a1,j, f2[j-1] + t2,j-1 + a1,j) if j >= 2, with the overall fastest time f* = min(f1[n] + x1, f2[n] + x2).
- Computing the fastest times: The authors provide an algorithm which finds the fastest way through the factory in Θ(n) time.
- Constructing the fastest way: The authors then show how to print the stations used on the fastest way, illustrated on a sample problem.
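The recurrence and the reconstruction can be sketched in Python (a minimal sketch of the recurrence above, not the book's pseudocode; lines and stations are 0-indexed here, so a[i][j] is the assembly time at station j+1 on line i+1):

```python
def fastest_way(a, t, e, x):
    # a[i][j] - assembly time at station j on line i (0-indexed)
    # t[i][j] - transfer time after leaving station j on line i
    # e[i], x[i] - entry and exit times for line i
    n = len(a[0])
    # f[i][j]: fastest time through station j on line i
    f = [[0] * n for _ in range(2)]
    # l[i][j]: line used at station j-1 on the fastest way (for reconstruction)
    l = [[0] * n for _ in range(2)]
    f[0][0] = e[0] + a[0][0]
    f[1][0] = e[1] + a[1][0]
    for j in range(1, n):
        for i in range(2):
            other = 1 - i
            stay = f[i][j - 1] + a[i][j]
            switch = f[other][j - 1] + t[other][j - 1] + a[i][j]
            if stay <= switch:
                f[i][j], l[i][j] = stay, i
            else:
                f[i][j], l[i][j] = switch, other
    # add exit times and pick the better line
    if f[0][n - 1] + x[0] <= f[1][n - 1] + x[1]:
        best, line = f[0][n - 1] + x[0], 0
    else:
        best, line = f[1][n - 1] + x[1], 1
    # walk the l table backwards to list the stations used
    path = [line]
    for j in range(n - 1, 0, -1):
        line = l[line][j]
        path.append(line)
    path.reverse()
    return best, path
```

On the sample instance worked in the chapter (entry times 2 and 4, exit times 3 and 2), this returns a fastest total time of 38.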
The authors then apply the dynamic programming steps to the matrix-chain multiplication problem, in which a chain A1, A2, ..., An of n matrices is to be fully parenthesized so that the product can be computed with the minimum number of scalar multiplications:
- The structure of an optimal parenthesization: The authors prove that an optimal parenthesization of the matrix chain Ai Ai+1 ... Aj contains optimal parenthesizations of the sub-chains Ai ... Ak and Ak+1 ... Aj, where i <= k < j.
- A recursive solution: Let m[i, j] be the minimum number of scalar multiplications needed to compute the matrix Ai Ai+1 ... Aj, where matrix Ai has dimensions p(i-1) x p(i). It can be recursively defined as follows: m[i, j] = 0 if i = j, and m[i, j] = min over i <= k < j of (m[i, k] + m[k+1, j] + p(i-1) p(k) p(j)) if i < j.
- Computing the optimal costs: The authors point out that deciding the split point k such that i <= k < j requires j - i choices in all and provide an algorithm to compute m[i, j] with a running time of O(n^3).
- Constructing an optimal solution: The authors provide a method that prints the parenthesization achieving the minimum number of scalar multiplications for the matrix chain.
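The two computation steps can be sketched as follows (a minimal Python sketch of the recurrence above, using 1-indexed tables so the indices match the text; not the book's pseudocode verbatim):

```python
import sys

def matrix_chain_order(p):
    # p: dimension array of length n + 1; matrix A_i is p[i-1] x p[i]
    n = len(p) - 1
    # m[i][j]: minimum scalar multiplications for A_i ... A_j (1-indexed)
    m = [[0] * (n + 1) for _ in range(n + 1)]
    # s[i][j]: split point k achieving the optimum, for reconstruction
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):          # length of the sub-chain
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = sys.maxsize
            for k in range(i, j):           # j - i choices of split point
                cost = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if cost < m[i][j]:
                    m[i][j], s[i][j] = cost, k
    return m, s

def print_parens(s, i, j):
    # return the optimal parenthesization of A_i ... A_j as a string
    if i == j:
        return f"A{i}"
    k = s[i][j]
    return "(" + print_parens(s, i, k) + print_parens(s, k + 1, j) + ")"
```

For the classic six-matrix instance with dimension array (30, 35, 15, 5, 10, 20, 25), this yields a minimum of 15125 scalar multiplications and the parenthesization ((A1(A2A3))((A4A5)A6)).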
The authors then describe the two key ingredients that an optimization problem must have for dynamic programming to be applicable:
- Optimal substructure: If an optimal solution to the problem contains optimal solutions to sub-problems, the problem is a probable candidate for dynamic programming (and for the greedy method too).
- Overlapping sub-problems: The space of sub-problems must be small, and a recursive algorithm must solve the same sub-problems over and over rather than always generating new ones. Typically, the total number of distinct sub-problems is polynomial in the input size.
The authors then apply the dynamic programming steps to the longest-common-subsequence (LCS) problem, in which a longest sub-sequence common to two given sequences is to be found:
- Characterizing the longest common sub-sequence: If X = <x1, x2, ..., xm> and Y = <y1, y2, ..., yn> are sequences, and Z = <z1, z2, ..., zk> is any longest common sub-sequence of X and Y, then one of the following three holds (Xi denotes the prefix <x1, ..., xi>):
- If xm = yn, then zk = xm = yn and Z(k-1) is a longest common sub-sequence of X(m-1) and Y(n-1).
- If xm ≠ yn, then zk ≠ xm implies that Z is a longest common sub-sequence of X(m-1) and Y.
- If xm ≠ yn, then zk ≠ yn implies that Z is a longest common sub-sequence of X and Y(n-1).
- A recursive solution: If the length of a longest common sub-sequence of the prefixes Xi and Yj is defined as c[i, j], then the following recursive formula can be derived from the characterization in step 1 above: c[i, j] = 0 if i = 0 or j = 0; c[i, j] = c[i-1, j-1] + 1 if i, j > 0 and xi = yj; and c[i, j] = max(c[i, j-1], c[i-1, j]) if i, j > 0 and xi ≠ yj.
- Computing the length of a longest common sub-sequence: The authors then provide an O(mn)-time algorithm to compute the length of a longest common sub-sequence.
- Constructing a longest common sub-sequence: The authors then provide an algorithm to print a longest common sub-sequence from the tables generated in step 3.
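The recurrence and the reconstruction can be sketched as follows (a minimal Python sketch; instead of keeping a separate direction table, it re-derives each backward step from the c table, which changes nothing about the result):

```python
def lcs(X, Y):
    m, n = len(X), len(Y)
    # c[i][j]: length of a longest common sub-sequence of X[:i] and Y[:j]
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    # reconstruct one LCS by walking back through the table
    out = []
    i, j = m, n
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:
            out.append(X[i - 1])
            i, j = i - 1, j - 1
        elif c[i - 1][j] >= c[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))
```

For example, a longest common sub-sequence of "ABCBDAB" and "BDCABA" has length 4; this sketch returns "BCBA".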
A sequence K = <k1, k2, ..., kn> of n distinct keys is given in sorted order, with each key ki having a probability pi that a search will be for ki. There are n + 1 dummy keys d0, d1, ..., dn representing values not in K, such that d0 represents all values less than k1, dn represents all values greater than kn, and di (for i = 1, 2, ..., n - 1) represents all values between ki and ki+1, with a probability qi that a search will be for di. The problem is to construct a binary search tree for which the expected number of nodes visited in a search is minimized. The authors then apply the dynamic programming steps to the problem as below:
- The structure of an optimal binary search tree: The authors show that an optimal binary search tree also contains optimal binary search trees as the sub-trees rooted at the left and right children of its root.
- A recursive solution: If e[i, j] is the expected cost of searching an optimal binary search tree containing the keys ki, ..., kj, and w(i, j) = p(i) + ... + p(j) + q(i-1) + ... + q(j) is the sum of all the probabilities in the sub-tree with keys ki, ..., kj, then e[i, j] = q(i-1) if j = i - 1, and e[i, j] = min over i <= r <= j of (e[i, r-1] + e[r+1, j] + w(i, j)) if i <= j.
- Computing the expected search cost of an optimal binary search tree: The authors give an O(n^3)-time algorithm to calculate the expected search cost of an optimal binary search tree.
- Constructing an optimal binary search tree: The authors leave it as an exercise to write an algorithm that outputs an optimal binary search tree from the root table generated in step 3.
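The recurrence can be sketched in Python as follows (a minimal sketch with 1-indexed tables so the indices match the text; p[0] is an unused placeholder, and the root table records the r achieving each minimum for the reconstruction left as an exercise):

```python
def optimal_bst(p, q):
    # p[1..n]: probabilities of successful searches for keys k_1 .. k_n
    # q[0..n]: probabilities of searches for the dummy keys d_0 .. d_n
    n = len(p) - 1
    # e[i][j]: expected cost of an optimal BST on keys k_i .. k_j
    # (j = i - 1 denotes the empty sub-tree containing only dummy key d_{i-1})
    e = [[0.0] * (n + 1) for _ in range(n + 2)]
    w = [[0.0] * (n + 1) for _ in range(n + 2)]
    root = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 2):
        e[i][i - 1] = q[i - 1]
        w[i][i - 1] = q[i - 1]
    for length in range(1, n + 1):
        for i in range(1, n - length + 2):
            j = i + length - 1
            e[i][j] = float("inf")
            w[i][j] = w[i][j - 1] + p[j] + q[j]
            for r in range(i, j + 1):       # try each key k_r as the root
                cost = e[i][r - 1] + e[r + 1][j] + w[i][j]
                if cost < e[i][j]:
                    e[i][j] = cost
                    root[i][j] = r
    return e, root
```

On the five-key instance worked in the chapter (p = 0.15, 0.10, 0.05, 0.10, 0.20 and q = 0.05, 0.10, 0.05, 0.05, 0.05, 0.10), this gives an expected search cost of 2.75 with k2 at the root.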