Sunday, November 16, 2014

Today we look at the complexity of the convergence of the CG method. The dominant operation in each iteration is the matrix-vector multiplication, which requires O(m) operations, where m is the number of non-zero entries in the matrix. In many cases, however, A is sparse and m is in O(n). Suppose we want to run just enough iterations to reduce the norm of the error to within a factor epsilon of the initial error. For the Steepest Descent method, the maximum number of iterations required is bounded by a multiple of k ln(1/epsilon), where k is the condition number of A (the ratio of its largest to smallest eigenvalue); for the Conjugate Gradient method, it is bounded by a multiple of sqrt(k) ln(1/epsilon). Steepest Descent therefore has a time complexity of O(mk), whereas Conjugate Gradient has a time complexity of O(m sqrt(k)). Both algorithms have a space complexity of O(m).
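To make the per-iteration cost concrete, here is a minimal sketch of the CG iteration in Python, assuming NumPy and SciPy (the function and variable names are illustrative choices of mine, not from any particular library). The single sparse matrix-vector product A @ d is the O(m) operation that dominates each pass; beyond the matrix itself, only a handful of length-n vectors are stored, so the matrix accounts for the O(m) space.

    import numpy as np
    import scipy.sparse as sp

    def conjugate_gradient(A, b, eps=1e-8, max_iter=None):
        # Minimal CG sketch for a symmetric positive-definite matrix A.
        # Each iteration is dominated by one matrix-vector product,
        # an O(m) operation when A has m non-zero entries.
        n = b.shape[0]
        if max_iter is None:
            max_iter = n
        x = np.zeros(n)                    # start from the zero guess
        r = b - A @ x                      # initial residual
        d = r.copy()                       # first search direction
        delta = delta0 = r @ r
        iterations = 0
        for _ in range(max_iter):
            if delta <= eps**2 * delta0:   # i.e. ||r|| <= eps * ||r_0||
                break
            q = A @ d                      # the O(m) matrix-vector product
            alpha = delta / (d @ q)        # step length along d
            x += alpha * d
            r -= alpha * q                 # recursive residual update
            delta, delta_prev = r @ r, delta
            d = r + (delta / delta_prev) * d   # next A-conjugate direction
            iterations += 1
        return x, iterations

    # Usage: solve a small sparse SPD system (the 1D Laplacian).
    A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(100, 100)).tocsr()
    b = np.ones(100)
    x, its = conjugate_gradient(A, b)
    print(its, np.linalg.norm(b - A @ x))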
By going to polynomials of higher degree in the convergence analysis, we can generalize these bounds and apply them to second-order elliptic boundary value problems. Both finite difference and finite element approximation methods are used for solving these problems on d-dimensional domains. Recall that the finite difference method applies a local Taylor expansion to approximate the differential equation: the PDE is solved numerically by constructing its discretization on a topologically square network of grid lines, which makes the Taylor expansions convenient to apply. Partial differential equations are classified as elliptic, parabolic, or hyperbolic. When a multi-dimensional domain has an irregular shape, however, this grid topology does not approximate it well, and the finite element technique is used instead. The finite element method discretizes the region of interest into N-1 subdomains and represents the approximation by an expansion basis of piecewise continuous polynomials, each piece described by one of a few possible shape functions.

As an aside, the finite volume method is another such approach, in which the region of integration is taken to be a control volume associated with a point of coordinate xi and its immediate vicinity. The integral over this small volume, stretched over the unit length around xi, can then be translated into a semi-discrete form. Peiro and Sherwin mention that this approach produces a conservative scheme if the flux on the boundary of one cell equals the flux on the boundary of the adjacent cell.

In our case of elliptic boundary value problems, both finite difference and finite element methods produce matrices whose condition number k is of order O(n^(2/d)). Thus for two-dimensional problems, Steepest Descent has a time complexity of O(n^2) while Conjugate Gradient has a time complexity of O(n^(3/2)); for three-dimensional problems, Steepest Descent has a time complexity of O(n^(5/3)) while Conjugate Gradient has a time complexity of O(n^(4/3)).
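To illustrate the growth k = O(n^(2/d)), here is a small sketch assuming SciPy (the helper name laplacian_2d is a hypothetical choice of mine) that assembles the standard 5-point finite difference Laplacian on a square grid and estimates its condition number. For d = 2 the bound predicts k growing roughly linearly in n, which the printed values should reflect as n quadruples.

    import scipy.sparse as sp
    from scipy.sparse.linalg import eigsh

    def laplacian_2d(grid):
        # 5-point finite difference Laplacian on a grid x grid mesh:
        # n = grid*grid unknowns and O(n) non-zero entries.
        T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(grid, grid))
        I = sp.identity(grid)
        return (sp.kron(I, T) + sp.kron(T, I)).tocsc()

    for grid in (8, 16, 32):
        A = laplacian_2d(grid)
        n = A.shape[0]
        # Condition number of an SPD matrix: largest / smallest eigenvalue.
        lmax = eigsh(A, k=1, which='LA', return_eigenvectors=False)[0]
        lmin = eigsh(A, k=1, sigma=0, which='LM', return_eigenvectors=False)[0]
        print(f"n = {n:5d}   k ~ {lmax / lmin:8.1f}")   # grows like O(n) when d = 2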
When discussing iterations of these methods, it is helpful to discuss how they start and stop. The starting point is usually a good guess: if we know an approximate value of the solution, we can use it as the starting value; otherwise, we begin at zero. Either way, both the Steepest Descent method and the Conjugate Gradient method will find the solution of the linear system. The same cannot be said for non-linear systems, which may or may not converge and may have several local minima. For stopping, both Steepest Descent and Conjugate Gradient stop when they reach the minimum point, which we took to be the point where the residual becomes zero; in practice, this means stopping once the norm of the residual falls below some small fraction epsilon of its initial value. However, accumulated round-off errors in the recursive formulation of the residual (which is updated using step lengths and the matrix-vector product) may lead to a false zero. This can be corrected by periodically recomputing the exact residual, r = b - Ax, and restarting the recursion from it.
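A sketch of this correction in Python, assuming NumPy (the parameter name refresh and its default period of 50 are arbitrary illustrative choices): the residual is normally updated recursively, but periodically it is recomputed exactly as b - A @ x so that round-off cannot accumulate into a false zero. The extra cost is one O(m) matrix-vector product every refresh iterations, which is negligible.

    import numpy as np

    def cg_refreshed(A, b, x0=None, eps=1e-8, refresh=50):
        # CG with a warm start and periodic exact-residual recomputation.
        n = b.shape[0]
        x = np.zeros(n) if x0 is None else x0.astype(float).copy()  # guess or zero
        r = b - A @ x
        d = r.copy()
        delta = delta0 = r @ r
        for i in range(n):
            if delta <= eps**2 * delta0:       # stop: ||r|| <= eps * ||r_0||
                break
            q = A @ d
            alpha = delta / (d @ q)
            x += alpha * d
            if (i + 1) % refresh == 0:
                r = b - A @ x                  # exact residual guards round-off
            else:
                r -= alpha * q                 # cheap recursive update
            delta, delta_prev = r @ r, delta
            d = r + (delta / delta_prev) * d
        return x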
 
