In our previous post on using optimization for flight searches, based on the Programming Collective Intelligence book, we discussed optimizing first for cost and then for preferences. We now explore other optimization techniques, specifically the sparse matrix methods from a paper by Gill, Murray, Saunders, and Wright. Here the optimization steps can be written in the form of matrices.
In particular, we will look at sparse matrix methods in optimization. As with most optimization techniques, the core computation at each iteration k is solving a system of linear equations B_k y_k = b_k. Both direct and iterative equation solvers are used.
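As a concrete illustration, here is a minimal sketch in Python using scipy.sparse. The tridiagonal matrix is a made-up, well-conditioned stand-in for B_k, not anything from the paper; the point is just to show one direct and one iterative solve of the same system.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# A sparse, symmetric positive-definite tridiagonal system B y = b,
# standing in for the B_k y_k = b_k arising in an optimization step.
n = 1000
B = sp.diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Direct solver: sparse LU factorization plus triangular solves.
y_direct = spla.spsolve(B, b)

# Iterative solver: conjugate gradient, which only needs
# matrix-vector products with B.
y_iter, info = spla.cg(B, b)

print(info)                               # 0 indicates CG converged
print(np.linalg.norm(y_direct - y_iter))  # the two answers should agree
```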
While early work focused on solving a single system of equations, it soon became necessary to solve several related systems together: the single-system solution is applied repeatedly to a sequence of modified systems. The speed of factorizing one matrix then matters less than the cost of solving the many subsequent systems. Furthermore, the matrix B_k is related to B_{k-1}, which the solver can exploit.
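A rough sketch of that amortization, again with scipy.sparse (the factor-updating techniques in the paper go further than this): factor B once with splu, then reuse the factorization across many right-hand sides, so each subsequent solve costs only triangular back-substitutions.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000
B = sp.diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

# Pay the factorization cost once ...
lu = spla.splu(B)

# ... then amortize it over many cheap solves, one per right-hand
# side b_k produced by successive iterations of the outer method.
rng = np.random.default_rng(0)
for k in range(100):
    b_k = rng.standard_normal(n)
    y_k = lu.solve(b_k)   # triangular solves only; no refactorization
```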
The simplest example of the use of matrices for such problems is linear programming, which optimizes a linear objective over a feasible region bounded by a set of linear constraints, a convex polytope. It has applications in routing, scheduling, assignment, and design.
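For instance, here is a toy production-planning LP solved with scipy.optimize.linprog; the products, resources, and numbers are invented for illustration only.

```python
import numpy as np
from scipy.optimize import linprog

# Minimize c @ x subject to A_ub @ x <= b_ub and x >= 0.
# Two products share two resources; maximize profit by
# negating it, since linprog minimizes.
c = np.array([-20.0, -30.0])          # negated per-unit profits
A_ub = np.array([[1.0, 2.0],          # machine hours used per unit
                 [3.0, 1.0]])         # labor hours used per unit
b_ub = np.array([40.0, 60.0])         # hours available

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)                # optimal plan (16, 12), profit 680
```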
Linear programming has been applied to problems of all sizes, but the relative costs of the steps of an optimization method change as problems become large. For example, in the unconstrained case the cost is often measured by the number of function evaluations, yet the subsequent recomputations become more expensive because more data structures and computation are involved. Instead we express the method iteratively, where the (k+1)-th iterate is defined as x_{k+1} = x_k + alpha_k p_k, with p_k a search direction and alpha_k a step length, so each iterate depends on the previous one.
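A minimal sketch of that iteration in Python, assuming a steepest-descent direction for p_k and a simple backtracking (Armijo) heuristic for alpha_k; real implementations use more careful line searches, and the names here are illustrative.

```python
import numpy as np

def descend(f, grad, x0, alpha0=1.0, beta=0.5, c=1e-4, iters=100):
    """Iterate x_{k+1} = x_k + alpha_k * p_k with p_k = -grad(x_k)
    and backtracking step lengths alpha_k."""
    x = x0
    for _ in range(iters):
        g = grad(x)
        p = -g                        # steepest-descent direction p_k
        alpha = alpha0
        # Shrink alpha until the Armijo sufficient-decrease test holds.
        while f(x + alpha * p) > f(x) + c * alpha * np.dot(g, p):
            alpha *= beta
        x = x + alpha * p             # the (k+1)-th iterate
    return x

# Example: minimize a convex quadratic with minimum at (1, -2).
f = lambda x: (x[0] - 1) ** 2 + 2 * (x[1] + 2) ** 2
grad = lambda x: np.array([2 * (x[0] - 1), 4 * (x[1] + 2)])
print(descend(f, grad, np.array([5.0, 5.0])))   # approaches [1, -2]
```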
We will discuss the Newton and conjugate gradient methods in this regard.
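As a preview, here is a small sketch tying the two together; it is not the paper's algorithm. Each Newton system H_k p_k = -g_k (an instance of B_k y_k = b_k above) is solved with scipy's conjugate gradient routine, using unit steps and no globalization, on the Rosenbrock function.

```python
import numpy as np
from scipy.sparse.linalg import cg

def newton_cg(grad, hess, x0, iters=20):
    """Newton iteration whose linear system H_k p_k = -g_k is
    solved (inexactly) by conjugate gradient."""
    x = x0
    for _ in range(iters):
        g = grad(x)
        p, _ = cg(hess(x), -g)   # inexact Newton direction
        x = x + p                # unit step; real codes add a line search
    return x

# Rosenbrock function f = (1 - x)^2 + 100 (y - x^2)^2, minimum at (1, 1).
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
                           200 * (x[1] - x[0] ** 2)])
hess = lambda x: np.array([[2 + 1200 * x[0] ** 2 - 400 * x[1], -400 * x[0]],
                           [-400 * x[0], 200.0]])
print(newton_cg(grad, hess, np.array([-1.2, 1.0])))   # approaches [1, 1]
```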