Monday, November 24, 2014

In today's post we continue our discussion of the Conjugate Gradients (CG) method. Previously we looked at the effectiveness of a preconditioner in solving a linear system. CG can also be used to solve systems Ax = b where the matrix A is not symmetric, not positive-definite, and even not square. In that case there may be no exact solution, so instead we minimize the sum of squares of the errors. To find this minimum, we set the derivative of the error expression to zero, which amounts to applying A-transpose to both sides of the linear equation, giving the normal equations: A-transpose A x = A-transpose b.

When A is not square, there can be more linearly independent equations than variables, in which case the system may have no solution at all. This is called an over-constrained system. But it is always possible to find a value of x that minimizes the sum of squared errors. A-transpose A is symmetric and positive-definite (provided the columns of A are linearly independent), so Steepest Descent and CG can be used on the normal equations. The only nuisance is that the condition number of A-transpose A is the square of the condition number of A, so convergence is much slower. The system may instead be under-constrained, with fewer equations than variables; then A-transpose A is singular, the transpose trick doesn't help, and CG cannot be applied directly. One last point: although we work with A-transpose A, we never form it explicitly, because it is usually less sparse than A. Instead, each matrix-vector product is computed by first forming the product Ad and then applying A-transpose to that product.
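To make the idea concrete, here is a minimal sketch in plain Python of CG applied to the normal equations (often called CGNR). The function names (matvec, rmatvec, cgnr) and the small example system are my own illustrations, not taken from any library. Note how A-transpose A is never formed: each iteration computes Ad first and then applies A-transpose to that product.

```python
def matvec(A, x):
    # Compute A x for a rectangular matrix stored as a list of rows.
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def rmatvec(A, y):
    # Compute A-transpose y without materializing the transpose.
    n = len(A[0])
    return [sum(A[i][j] * y[i] for i in range(len(A))) for j in range(n)]

def cgnr(A, b, iters=100, tol=1e-12):
    # CG on the normal equations A^T A x = A^T b; A^T A is never formed.
    n = len(A[0])
    x = [0.0] * n
    r = rmatvec(A, b)              # residual of the normal equations at x = 0
    d = r[:]
    rs = sum(v * v for v in r)
    for _ in range(iters):
        Ad = matvec(A, d)          # first form the product A d ...
        AtAd = rmatvec(A, Ad)      # ... then apply A-transpose to it
        alpha = rs / sum(di * v for di, v in zip(d, AtAd))
        x = [xi + alpha * di for xi, di in zip(x, d)]
        r = [ri - alpha * v for ri, v in zip(r, AtAd)]
        rs_new = sum(v * v for v in r)
        if rs_new < tol:
            break
        beta = rs_new / rs
        d = [ri + beta * di for ri, di in zip(r, d)]
        rs = rs_new
    return x

# An over-constrained 3x2 example: three equations, two unknowns.
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, 2.0, 2.0]
x = cgnr(A, b)  # least-squares solution, approximately [2/3, 5/3]
```

Since the normal equations here are 2x2 and symmetric positive-definite, CG converges in two iterations in exact arithmetic.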
#codingexercise
// requires: using System.Linq;
int GetDistinct(int[] A)
{
    if (A == null) return 0;
    // Distinct() returns an IEnumerable<int>; Count() gives the number of distinct values
    return A.Distinct().Count();
}