Sunday, November 2, 2014

We resume our discussion on matrix operations. In this post, we pick up Jacobi iteration. The Jacobi method solves the same linear equation Ax = b as before. The matrix A is split into two parts: D, whose diagonal elements are identical to those of A and whose off-diagonal elements are zero, and E, whose diagonal elements are zero and whose off-diagonal elements are identical to those of A. A is the sum of D and E. The usefulness of D is that it is a diagonal matrix, so it is trivially inverted as long as none of its diagonal entries is zero.
Unlike the steepest descent method, there is no search direction or step length to compute at each iteration. Using the linear equation and the split of A into D and E, we can rewrite Ax = b as an iteration in which the next step is a constant matrix times the previous step plus a constant vector: x(i+1) = B x(i) + z, where B = -D^-1 E and z = D^-1 b. Each iteration moves closer to the final solution, which is called a stationary point because applying the iteration at the solution returns the solution itself. Let us take a closer look at what each iteration does. If we express each iterate (the current step) as the sum of the exact solution and an error term, the update reduces to a simple recurrence on the error: e(i+1) = B e(i). In other words, each iteration does not affect the correct part of the iterate but only the error term, which shrinks as the iterations converge to the stationary point. Hence the initial vector x0 has no effect on the inevitable outcome, although it does affect how many iterations are needed to converge within a given tolerance.
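To make the update concrete, here is a minimal sketch of one Jacobi sweep in the same style as the exercise code below; the method name JacobiStep and its parameters are my own illustration, not part of the original derivation. It computes x(i+1) = D^-1 (b - E x(i)), subtracting the off-diagonal contributions and dividing by the diagonal of A:

double[] JacobiStep(double[,] A, double[] b, double[] x)
{
 // assumes every diagonal entry A[i, i] is non-zero
 int n = b.Length;
 var xNext = new double[n];
 for (int i = 0; i < n; i++)
 {
  double sum = 0.0;
  for (int j = 0; j < n; j++)
   if (j != i)
    sum += A[i, j] * x[j]; // the E (off-diagonal) part applied to the previous iterate
  xNext[i] = (b[i] - sum) / A[i, i]; // divide by the D (diagonal) part
 }
 return xNext;
}

Repeatedly calling this on the previous result until x stops changing (or a maximum iteration count is reached) is the whole method.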
We will next discuss the spectral radius, which determines the speed of convergence of this method. Suppose that v is the eigenvector of B with the largest eigenvalue (in absolute value). The spectral radius of B is the magnitude of this largest eigenvalue. Now the error can be expressed as a linear combination of the eigenvectors of B, one component of which lies along v, and this component will be the slowest to converge because each iteration multiplies it by that largest eigenvalue. (In the earlier picture of the quadratic form, the eigenvectors lie along the axes of the paraboloid and the eigenvalues measure the steepness of the corresponding slopes.) The rate of convergence depends on this spectral radius, which in turn depends on A; the iteration converges only when the spectral radius is less than 1, so unfortunately this technique does not apply to all A. However, the method illustrates the usefulness of eigenvectors: the eigenvector components of each successive error term determine the path of convergence, and each component decays geometrically at the rate defined by its eigenvalue.
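As a concrete illustration (this worked example is mine, not from the original discussion), take A with rows (2, 1) and (1, 2). Then D has rows (2, 0) and (0, 2), E has rows (0, 1) and (1, 0), and B = -D^-1 E has rows (0, -1/2) and (-1/2, 0). The eigenvalues of B are +1/2 and -1/2, so the spectral radius is 1/2. Since this is less than 1, Jacobi iteration converges for this A, and asymptotically each iteration roughly halves the error.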
#coding exercise
Given the matrix in the coding exercise described earlier, where each row is sorted in ascending order, print the elements in a single-dimensional sorted order.
List<int> GetSortedElements(int[,] matrix, int row, int col)
{
 // assuming parameter validation; requires using System.Collections.Generic and using System.Linq
 // assumes each row of the matrix is sorted in ascending order
  var ret = new List<int>();
  var front = new List<int>(); // column corresponding to each row of the front
  for (int k = 0; k < row; k++) { front.Add(0); } // initialize to first column index for each row
  while (ret.Count < row * col)
 {
  // extract min of all fronts; exhausted rows contribute int.MaxValue
  var candidate = new List<int>();
  for (int k = 0; k < row; k++)
        candidate.Add(front[k] < col ? matrix[k, front[k]] : int.MaxValue);
  int min = candidate.Min(); // smallest remaining element across all rows
  int i = candidate.IndexOf(min); // row that owns it
  front[i] = front[i] + 1; // advance that row's front past the consumed element
  ret.Add(min);
 }
 return ret;
}
23 34 45 46 68 89
27 35 47 69 97 110
32 46 48 65 98 112
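As a rough usage sketch (the driver below is mine, not part of the exercise), calling GetSortedElements on this sample matrix should print its eighteen elements as one sorted sequence:

int[,] matrix = new int[,]
{
 { 23, 34, 45, 46, 68, 89 },
 { 27, 35, 47, 69, 97, 110 },
 { 32, 46, 48, 65, 98, 112 }
};
var sorted = GetSortedElements(matrix, 3, 6);
Console.WriteLine(string.Join(" ", sorted));
// 23 27 32 34 35 45 46 46 47 48 65 68 69 89 97 98 110 112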
