One more coding exercise, just like the previous one:
#codingexercise
int GetCountDuplicatesMax(int[] A)
{
    if (A == null) return 0;   // guard against a null array
    return A.CountDuplicatesMax();
}
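CountDuplicatesMax is not defined in the post. A minimal sketch of one plausible reading, assuming the exercise wants the occurrence count of the most frequently repeated element (the extension class and its name are hypothetical):

using System.Linq;

static class ArrayExtensions
{
    // Hypothetical helper: returns the highest occurrence count among the
    // elements of the array, or 0 for an empty array.
    public static int CountDuplicatesMax(this int[] a)
    {
        return a.GroupBy(x => x)           // bucket equal values together
                .Select(g => g.Count())    // size of each bucket
                .DefaultIfEmpty(0)         // an empty array yields 0
                .Max();                    // the largest bucket wins
    }
}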
In today's post we continue our discussion of the convergence of the conjugate gradient method. We have seen that at each step of CG, the current error term is chosen from the initial error term plus the current search subspace. The current subspace contains the span of all the previous subspaces, and each subspace is found by applying the matrix A to a vector, starting from the initial residual. These subspaces, called Krylov subspaces, are built iteratively. Recall that the point of a search subspace is that when we don't know what the search vector should be, we choose a subspace to find it in. Fortunately this subspace grows incrementally, and we require the next residual to be A-orthogonal to the current search subspace, so the next residual is A-orthogonal to all the previous search vectors except the current one. This makes it easier to find the next residual. The new search vector is then a linear combination of the next residual and the current search vector.
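In symbols (standard CG notation; the subscripts and symbol names are my shorthand, not fixed by the post), with r for residuals, d for search vectors, and x for iterates:

\mathcal{D}_i = \operatorname{span}\{\, r_{(0)},\; A r_{(0)},\; \ldots,\; A^{i-1} r_{(0)} \,\}, \qquad
x_{(i)} \in x_{(0)} + \mathcal{D}_i, \qquad
d_{(i+1)} = r_{(i+1)} + \beta_{(i+1)}\, d_{(i)} .

The first expression is the i-th Krylov subspace, and the last is the linear combination that produces the new search vector from the new residual and the current search vector.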
For a given iteration, the error term can be found by applying the sum of the identity matrix and the powers of A accumulated over the previous steps to the initial error term. This sum can be expressed as a polynomial whose degree equals the number of iterations so far. It doesn't matter whether the polynomial is written in terms of a scalar or in terms of a matrix; what we want is to evaluate it. This means we can evaluate it either at an eigenvalue, as a scalar, or at the matrix A itself. This flexible notation of expressing the sum as a polynomial P lets us apply it to a vector in the same way whether P was written in terms of A or in terms of a scalar eigenvalue. Note that, by definition, an eigenvalue is the scalar constant that makes the application of the matrix to its eigenvector the same as the application of that scalar to the same vector. If we keep applying the matrix and the eigenvalue to both sides of this equation, the equation still holds, so a polynomial in A acts on an eigenvector exactly as the same polynomial in the eigenvalue does. This lets us write the error term as a polynomial applied to the initial error term. If we constrain that polynomial to evaluate to one at zero, and express the initial error term as a linear combination of the orthonormal eigenvectors, we find that the convergence of CG depends on how close a polynomial of degree i can be to zero at each eigenvalue, given the constraint that the polynomial equals one at zero.
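Written out, with \psi_j for the step coefficients, v_k for the orthonormal eigenvectors, and \xi_k for the coefficients of the initial error in that basis (again, notation of my choosing):

e_{(i)} = \Big( I + \sum_{j=1}^{i} \psi_j A^j \Big)\, e_{(0)} = P_i(A)\, e_{(0)}, \qquad P_i(0) = 1 ,

A v_k = \lambda_k v_k \;\Rightarrow\; A^j v_k = \lambda_k^j v_k \;\Rightarrow\; P_i(A)\, v_k = P_i(\lambda_k)\, v_k ,

e_{(0)} = \sum_k \xi_k v_k \;\Rightarrow\; e_{(i)} = \sum_k \xi_k\, P_i(\lambda_k)\, v_k, \qquad
\|e_{(i)}\|_A^2 = \sum_k \xi_k^2\, \lambda_k\, P_i(\lambda_k)^2 .

In effect, at step i CG picks the degree-i polynomial with P_i(0) = 1 that minimizes this A-norm, which is why convergence hinges on how small such a polynomial can be at every eigenvalue at once.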