Saturday, July 4, 2015

Matrix-chain-order problem.
The matrix-chain-order problem asks for the parenthesization that minimizes the number of scalar multiplications needed to compute a matrix chain product. Even though matrix multiplication is associative, the way the product is parenthesized affects its cost. This is because multiplying a p x q matrix by a q x r matrix takes p*q*r scalar multiplications (the two matrices are compatible only when the number of columns of the first equals the number of rows of the second), so the parenthesization determines how many scalar multiplications are performed in total. Therefore, by fully parenthesizing the product of the matrices A1, A2, ..., An we can compute the total cost of each ordering, and the optimal solution is the order of multiplication with the lowest cost.
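For example, take three matrices with dimensions chosen purely for illustration: A1 is 10 x 100, A2 is 100 x 5, and A3 is 5 x 50. The two possible parenthesizations differ by a factor of ten in cost, as this small Python check shows:

# Multiplying a p x q matrix by a q x r matrix costs p*q*r scalar multiplications.
cost_left = 10 * 100 * 5 + 10 * 5 * 50      # ((A1 A2) A3) = 5000 + 2500   = 7500
cost_right = 100 * 5 * 50 + 10 * 100 * 50   # (A1 (A2 A3)) = 25000 + 50000 = 75000
print(cost_left, cost_right)                # prints: 7500 75000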
To solve this problem, we apply dynamic programming by following the four-step sequence:
1. Characterize the structure of an optimal solution
2. Recursively define the value of an optimal solution
3. Compute the value of an optimal solution
4. Construct an optimal solution from computed information
The first step is to argue that the problem has optimal substructure: an optimal parenthesization of A1 A2 ... An splits the product at some index k into the two subchains A1 ... Ak and Ak+1 ... An, and the parenthesizations of these subchains within the optimal solution must themselves be optimal. An optimal solution can therefore be built by combining optimal solutions to the two subproblems.
Next, the recurrence is defined. The termination (base) case is a chain consisting of a single matrix, which costs zero scalar multiplications. For a longer chain, the minimum cost of computing Ai ... Aj is the minimum, taken over every split point k with i <= k < j, of the cost of the two subproblems plus the cost of multiplying their results together.
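Written out, with m[i, j] denoting the minimum cost of computing the product Ai ... Aj and matrix Ai having dimensions p[i-1] x p[i], the recurrence from the paragraph above is:

m[i, j] = 0                                                                  if i == j
m[i, j] = min over i <= k < j of ( m[i, k] + m[k+1, j] + p[i-1]*p[k]*p[j] )  if i < j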
Step 3 uses a bottom-up approach. Given that each matrix Ai has dimensions p[i-1] x p[i], the bottom-up approach solves chains of increasing length, so that the solutions for the shorter chains it depends on can simply be looked up. Two tables are maintained: one that stores the minimum cost of computing each matrix chain product, and an auxiliary table that records which index k achieved that minimum cost.
Finally, we construct the optimal solution using the auxiliary index table s. The entry s[1, n] gives the split point k of the last matrix multiplication in an optimal parenthesization of A1 ... An. The earlier multiplications are determined recursively: s[1, s[1, n]] gives the last multiplication when computing the preceding component, and s[s[1, n]+1, n] gives the last multiplication when computing the succeeding component of the product of the two subproblems. Again the termination condition is the single-matrix chain.
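A minimal Python sketch of this reconstruction step (assuming, for the sketch only, that the split table s is a dictionary keyed by (i, j) pairs rather than a 2-D array):

def optimal_parens(s, i, j):
    # Build a fully parenthesized expression for A_i ... A_j from the split table s,
    # where s[(i, j)] is the index k chosen for the subchain A_i ... A_j.
    if i == j:
        return "A" + str(i)
    k = s[(i, j)]
    return "(" + optimal_parens(s, i, k) + optimal_parens(s, k + 1, j) + ")"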
Moreover, as with other dynamic programming problems solved bottom-up, there is also a corresponding recursive solution that uses a top-down strategy, optionally with memoization to improve efficiency. That top-down strategy is based directly on the recurrence above. We still maintain a table of costs, but the table is filled in the order in which the recursion visits the subproblems (a memoized sketch follows the recursive pseudocode below).

The bottom-up strategy:
Matrix-Chain-Order(p)
    n = p.length - 1
    let m[1..n, 1..n] and s[1..n-1, 2..n] be new tables for the cost and the optimal split index
    for i = 1 to n
        m[i, i] = 0
    for l = 2 to n  // chain length
        for i = 1 to n - l + 1
            j = i + l - 1
            m[i, j] = infinity
            for k = i to j - 1
                q = m[i, k] + m[k+1, j] + p[i-1] * p[k] * p[j]
                if q < m[i, j]
                    m[i, j] = q
                    s[i, j] = k
    return m and s
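For readers who want something runnable, here is an illustrative Python sketch of the same bottom-up procedure (not part of the original pseudocode): the tables m and s are stored as dictionaries keyed by (i, j) so the result composes with the optimal_parens helper sketched earlier.

def matrix_chain_order(p):
    # p[0..n] holds the dimensions: matrix A_i is p[i-1] x p[i].
    n = len(p) - 1
    m = {}  # m[(i, j)]: minimum scalar-multiplication cost for A_i ... A_j
    s = {}  # s[(i, j)]: split index k that achieves that minimum
    for i in range(1, n + 1):
        m[(i, i)] = 0
    for l in range(2, n + 1):            # chain length
        for i in range(1, n - l + 2):
            j = i + l - 1
            m[(i, j)] = float("inf")
            for k in range(i, j):
                q = m[(i, k)] + m[(k + 1, j)] + p[i - 1] * p[k] * p[j]
                if q < m[(i, j)]:
                    m[(i, j)] = q
                    s[(i, j)] = k
    return m, s

With the illustrative dimensions from earlier, matrix_chain_order([10, 100, 5, 50]) gives m[(1, 3)] == 7500, and optimal_parens(s, 1, 3) returns "((A1A2)A3)".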

The top-down strategy:
Recursive-Matrix-Chain(p, i, j)
    if i == j
        return 0
    m[i, j] = infinity
    for k = i to j - 1
        q = Recursive-Matrix-Chain(p, i, k) + Recursive-Matrix-Chain(p, k+1, j) + p[i-1] * p[k] * p[j]
        if q < m[i, j]
            m[i, j] = q
    return m[i, j]
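The recursive procedure above recomputes the same subchains many times. A memoized variant, sketched below in Python under the same dimension-array convention (an illustrative sketch, not a procedure from the original post), caches each cost the first time it is computed:

def memoized_matrix_chain(p, i, j, m=None):
    # Top-down form of the recurrence with memoization: m caches the cost of
    # each subchain A_i ... A_j so it is computed only once.
    if m is None:
        m = {}
    if i == j:
        return 0
    if (i, j) in m:
        return m[(i, j)]
    best = float("inf")
    for k in range(i, j):
        q = (memoized_matrix_chain(p, i, k, m)
             + memoized_matrix_chain(p, k + 1, j, m)
             + p[i - 1] * p[k] * p[j])
        best = min(best, q)
    m[(i, j)] = best
    return best

For the same example, memoized_matrix_chain([10, 100, 5, 50], 1, 3) returns 7500, matching the bottom-up result.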



