Abstract

We begin by developing a line search method for unconstrained optimization that can be regarded as a combination of the quasi-Newton and steepest descent methods. This method possesses guaranteed global convergence on general nonconvex objective functions and retains the superlinear convergence properties of the usual quasi-Newton methods. Our numerical results on standard test problems show that the new method can outperform the corresponding quasi-Newton method, especially when the starting point is far from the optimal point. The new method significantly improves the performance of the DFP method.
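The abstract does not give the method's formulas, so the following is only a minimal sketch of one plausible combined scheme, not the dissertation's actual algorithm: the search direction is a convex combination of the DFP quasi-Newton direction and the steepest-descent direction (the mixing weight `theta` and the test function are assumptions), paired with a backtracking Armijo line search.

```python
import numpy as np

A = np.diag([1.0, 10.0])          # assumed convex quadratic test problem

def f(x):
    return 0.5 * x @ A @ x

def grad(x):
    return A @ x

def combined_direction(g, H, theta=0.5):
    # Convex combination of the quasi-Newton direction (-H g) and the
    # steepest-descent direction (-g); descent since H is kept SPD.
    return -(theta * H @ g + (1.0 - theta) * g)

def armijo_step(x, d, g, c=1e-4):
    # Backtracking line search satisfying the Armijo sufficient-decrease condition.
    alpha = 1.0
    while f(x + alpha * d) > f(x) + c * alpha * (g @ d):
        alpha *= 0.5
    return alpha

def dfp_update(H, s, y):
    # Standard DFP update of the inverse-Hessian approximation.
    Hy = H @ y
    return H + np.outer(s, s) / (s @ y) - np.outer(Hy, Hy) / (y @ Hy)

x = np.array([5.0, 1.0])
H = np.eye(2)
for _ in range(50):
    g = grad(x)
    d = combined_direction(g, H)
    x_new = x + armijo_step(x, d, g) * d
    s, y = x_new - x, grad(x_new) - g
    if s @ y > 1e-12:                 # curvature guard keeps H positive definite
        H = dfp_update(H, s, y)
    x = x_new

print(np.round(x, 6))                 # close to the minimizer [0, 0]
```

The steepest-descent component is what would supply the global-convergence safeguard here; as the iterates approach the solution, the quasi-Newton component dominates the local behavior.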

We continue by analyzing the widely used Nelder-Mead simplex method for unconstrained optimization. We present two examples in which the Nelder-Mead simplex method does not converge to a single point, and we investigate the effect of dimensionality on the method. We show that the Nelder-Mead simplex method becomes less efficient as the problem dimension increases.

Details

Title
Algorithms for unconstrained optimization
Author
Han, Lixing
Year
2000
Publisher
ProQuest Dissertations Publishing
ISBN
978-0-599-90413-2
Source type
Dissertation or Thesis
Language of publication
English
ProQuest document ID
304602107
Copyright
Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.