// Requires System.Linq. Null or empty input returns 0 as a sentinel.
int getMin(int[] A)
{
    if (A == null || A.Length == 0) return 0;
    return A.Min();
}
int getMax(int[] A)
{
    if (A == null || A.Length == 0) return 0;
    return A.Max();
}
int getAvg(int[] A)
{
    if (A == null || A.Length == 0) return 0;
    return (int)A.Average();
}
int getMedian(int[] A)
{
    if (A == null || A.Length == 0) return 0;
    var sorted = A.OrderBy(x => x).ToArray();
    return sorted[sorted.Length / 2];
}
Today we continue to discuss conjugate gradient methods. We discussed conjugate directions earlier and saw their limitations. In this post, we cover why the conjugate gradient method has become more popular. All of the methods we have discussed so far have worked with the linear equation or its quadratic form. In fact, the method of conjugate gradients is simply the method of conjugate directions where the search directions are constructed by conjugation of the residuals. We saw that replacing the position vectors with the residuals worked for the steepest descent method, and the residuals have the nice property that each one is orthogonal to the previous search directions. This worked well for us because we were guaranteed a new search direction, and if the residual was zero, we had arrived at the final answer. There is an even better reason to choose the residual, as we will see in this method.
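To make this concrete, here is the Gram-Schmidt conjugation of the residuals written out, using the usual notation r(i) for the i-th residual, d(i) for the i-th search direction, and A for the system matrix (standard symbols, not something fixed above):

d_{(i)} = r_{(i)} + \sum_{k=0}^{i-1} \beta_{ik}\, d_{(k)}, \qquad \beta_{ik} = -\frac{r_{(i)}^{T} A\, d_{(k)}}{d_{(k)}^{T} A\, d_{(k)}}

The next paragraph explains why all but the last of these coefficients vanish, which is what collapses the sum to a single term.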
First, because the search vectors are built from the residuals, the subspace they span is the same as Di, the span of the search directions. As each residual is orthogonal to the previous search directions, it is also orthogonal to the previous residuals. From the recurrence used to compute the residual in the steepest descent method, we already know that each residual is just a linear combination of the previous residual and A applied to the previous search direction. Since the search directions lie in Di, each new subspace is formed from the union of the previous subspace Di and A Di. In other words, the subspace of search directions is the same as the subspace spanned by the residuals. This subspace is called a Krylov subspace: it is formed by repeatedly applying a matrix to a vector. Because it is built incrementally, it has the nice property that the next residual is orthogonal to the current subspace, which also implies that the next residual is A-orthogonal to the previous subspace. With this, the Gram-Schmidt conjugation becomes much simpler, because r(i+1) is already A-orthogonal to all of the previous search directions except d(i). What this means is that we no longer need to store the old search vectors to ensure the A-orthogonality of new search vectors. This major advance is what makes CG as important an algorithm as it is, because both the space complexity and the time complexity per iteration are reduced from O(n^2) to linear.
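As a rough sketch of the resulting iteration, in the same C# style as the snippets above (the dense double[,] matrix, the helper names Dot and MatVec, and the tolerance handling are my own choices for illustration, not part of the discussion above): only the current x, r, and d are kept, and a single beta coefficient replaces the full Gram-Schmidt sum.

// Conjugate gradient sketch for a symmetric positive definite A (dense here only for clarity).
// Only the current residual r and direction d are stored -- no history of search vectors.
double[] ConjugateGradient(double[,] A, double[] b, int maxIter, double tol)
{
    int n = b.Length;
    var x = new double[n];
    var r = (double[])b.Clone();           // r0 = b - A*x0 with x0 = 0
    var d = (double[])r.Clone();           // d0 = r0
    double rsOld = Dot(r, r);
    for (int i = 0; i < maxIter && Math.Sqrt(rsOld) > tol; i++)
    {
        var Ad = MatVec(A, d);
        double alpha = rsOld / Dot(d, Ad);  // step length along d
        for (int j = 0; j < n; j++) { x[j] += alpha * d[j]; r[j] -= alpha * Ad[j]; }
        double rsNew = Dot(r, r);
        double beta = rsNew / rsOld;        // the one Gram-Schmidt coefficient that survives
        for (int j = 0; j < n; j++) d[j] = r[j] + beta * d[j];
        rsOld = rsNew;
    }
    return x;
}

double Dot(double[] u, double[] v)
{
    double s = 0;
    for (int j = 0; j < u.Length; j++) s += u[j] * v[j];
    return s;
}

double[] MatVec(double[,] A, double[] v)
{
    int n = v.Length;
    var w = new double[n];
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            w[i] += A[i, j] * v[j];
    return w;
}

In practice A is sparse, so the matrix-vector product is the only expensive step; everything else in the loop is a handful of length-n vectors, which is the linear space and per-iteration cost the paragraph refers to.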