In the early years of Optimization, the first classical schemes were derived from an abstract concept of approximation (e.g., the Gradient method, Newton's method, etc.). However, since the development of Complexity Theory for Convex Optimization (Nemirovsky and Yudin, 1970s), the most powerful approaches for constructing efficient (optimal) methods have been based on a model of the objective function. This model incorporates the characteristic properties of the corresponding problem class and provides us with comprehensive information on the behavior of the objective. At the same time, it helps in deriving theoretically unimprovable complexity bounds for the target class.
However, this framework completely neglects the fact that every objective function belongs, at the same time, to many different problem classes. Hence, it should be treated by a method developed for the most appropriate class of problems. For real-life problems, such a choice is seldom feasible, at least in advance.
In this talk, we discuss several ideas for constructing universal methods, which automatically ensure the best possible convergence rate among the appropriate problem classes. The simplest methods of this type adjust to the best power in the Hölder condition for the target derivative. Our most promising super-universal Regularized Newton's Method works properly for a wide range of problems, from functions with bounded variation of the Hessian up to functions with a Lipschitz continuous third derivative. Thus, being a second-order scheme, it covers the full diversity of problems, from those traditionally treated by first-order methods up to those usually attributed to third-order schemes. No preliminary information on the objective function is needed for its proper work.
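For reference, a minimal sketch of the kind of condition such universal methods adapt to (standard notation, not taken from this abstract): for some Hölder degree $\nu \in [0,1]$ and constant $H_\nu < \infty$, the target derivative (here, the Hessian) satisfies
$$\|\nabla^2 f(x) - \nabla^2 f(y)\| \;\le\; H_\nu \,\|x - y\|^{\nu} \qquad \forall x, y,$$
where $\nu = 0$ corresponds to bounded variation of the Hessian and $\nu = 1$ to a Lipschitz continuous Hessian; a universal method is expected to attain the rate associated with the best admissible $\nu$ without being given $\nu$ or $H_\nu$ in advance.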
(Some of the results were obtained jointly with N. Doikov, G. Grapiglia, and K. Mishchenko.)