Abstract
We begin by developing a line search method for unconstrained optimization that can be regarded as a combination of the quasi-Newton and steepest descent methods. This method possesses guaranteed global convergence on general nonconvex objective functions and has superlinear convergence properties similar to those of the usual quasi-Newton methods. Our numerical results on standard test problems show that the new method can outperform the corresponding quasi-Newton method, especially when the starting point is far from the optimal point. In particular, the new method significantly improves the performance of the DFP method.
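The combination described above can be sketched as follows. This is an illustrative sketch only, not the method developed in the thesis: it takes the DFP quasi-Newton direction when that direction gives sufficient descent, falls back to the steepest descent direction otherwise, and uses a backtracking Armijo line search. The function name `hybrid_qn_sd` and all parameter values are hypothetical.

```python
import numpy as np

def hybrid_qn_sd(f, grad, x0, max_iter=200, tol=1e-8):
    """Hypothetical sketch of a combined quasi-Newton / steepest descent
    line search method based on the DFP inverse-Hessian update."""
    n = len(x0)
    x = np.asarray(x0, dtype=float)
    H = np.eye(n)                    # inverse-Hessian approximation (DFP)
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = -H @ g
        # Fall back to steepest descent if d is not a sufficient descent direction.
        if g @ d > -1e-10 * np.linalg.norm(g) * np.linalg.norm(d):
            d = -g
        # Backtracking line search enforcing the Armijo condition.
        t, fx = 1.0, f(x)
        while f(x + t * d) > fx + 1e-4 * t * (g @ d):
            t *= 0.5
            if t < 1e-12:
                break
        s = t * d
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        if s @ y > 1e-12:            # curvature condition keeps H positive definite
            Hy = H @ y
            # DFP update of the inverse-Hessian approximation:
            # H+ = H - (H y y^T H)/(y^T H y) + (s s^T)/(s^T y)
            H = H - np.outer(Hy, Hy) / (y @ Hy) + np.outer(s, s) / (s @ y)
        x, g = x_new, g_new
    return x
```

On a convex quadratic such as f(x) = x₁² / 2 + 5 x₂², this sketch converges to the origin from a distant starting point, with the steepest descent fallback active mainly in early iterations.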
We continue by analyzing the widely used Nelder-Mead simplex method for unconstrained optimization. We present two examples in which the Nelder-Mead simplex method does not converge to a single point and investigate the effect of dimensionality on the Nelder-Mead method. It is shown that the Nelder-Mead simplex method becomes less efficient as the dimension increases.
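The dimensionality effect can be observed even with a bare-bones implementation of the Nelder-Mead simplex method. The sketch below is a minimal, illustrative version (not the analysis in this work) using the standard reflection, expansion, contraction, and shrink coefficients (1, 2, 1/2, 1/2); the function name `nelder_mead` and the termination test on the spread of function values are assumptions of this sketch.

```python
import numpy as np

def nelder_mead(f, x0, max_iter=2000, tol=1e-10):
    """Minimal sketch of the Nelder-Mead simplex method.
    Returns the best vertex and the number of function evaluations."""
    n = len(x0)
    # Initial simplex: x0 plus a small perturbation along each coordinate axis.
    simplex = [np.array(x0, dtype=float)]
    for i in range(n):
        v = np.array(x0, dtype=float)
        v[i] += 0.1 if v[i] == 0 else 0.05 * v[i]
        simplex.append(v)
    evals = [0]
    def fe(x):
        evals[0] += 1
        return f(x)
    fvals = [fe(v) for v in simplex]
    for _ in range(max_iter):
        order = np.argsort(fvals)
        simplex = [simplex[i] for i in order]
        fvals = [fvals[i] for i in order]
        if fvals[-1] - fvals[0] < tol:          # vertices' values have collapsed
            break
        centroid = np.mean(simplex[:-1], axis=0)  # centroid of the best n vertices
        xr = centroid + (centroid - simplex[-1])  # reflection of the worst vertex
        fr = fe(xr)
        if fvals[0] <= fr < fvals[-2]:
            simplex[-1], fvals[-1] = xr, fr
        elif fr < fvals[0]:                       # expansion
            xe = centroid + 2.0 * (centroid - simplex[-1])
            fex = fe(xe)
            if fex < fr:
                simplex[-1], fvals[-1] = xe, fex
            else:
                simplex[-1], fvals[-1] = xr, fr
        else:                                     # contraction
            xc = centroid + 0.5 * (simplex[-1] - centroid)
            fc = fe(xc)
            if fc < fvals[-1]:
                simplex[-1], fvals[-1] = xc, fc
            else:                                 # shrink toward the best vertex
                for i in range(1, n + 1):
                    simplex[i] = simplex[0] + 0.5 * (simplex[i] - simplex[0])
                    fvals[i] = fe(simplex[i])
    return simplex[0], evals[0]
```

Running this sketch on the sphere function in dimensions 2 and 8 from the same starting point typically shows the function-evaluation count growing substantially with dimension, consistent with the loss of efficiency noted above.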