Zero-order, or derivative-free, optimization methods have become popular due to recent developments in machine learning and AI training. In this talk, we describe a zero-order optimization solver, SOLNP+, which can solve general nonlinear constrained or unconstrained problems using only zero-order function information. There are two kinds of methods in this field: direct search methods and model-based methods, whose representative solvers are NOMAD and COBYLA, respectively. SOLNP+ belongs to the latter category, which uses various finite-difference computations to approximate gradients. SOLNP+ is coded in C and adopts various gradient and/or Hessian estimation techniques, including implicit filtering to adaptively choose the step size, randomized and block-coordinate direction search, the BFGS Hessian update, a dimension-reduced trust region, the augmented Lagrangian method, and the interior-point method. Computational results on classical benchmark problems and real application problems will be presented.
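To illustrate the finite-difference gradient approximation that model-based zero-order methods rely on, here is a minimal sketch of a central-difference estimator. The function names and step-size choice are illustrative assumptions, not SOLNP+'s actual API:

```python
import numpy as np

def fd_gradient(f, x, h=1e-6):
    """Central-difference gradient estimate of f at x, using only
    zero-order (function-value) information: 2n evaluations for n
    coordinates. A generic textbook scheme, not SOLNP+'s internals."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        # symmetric difference quotient along coordinate i
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

# Example: f(x) = x0^2 + 3*x1^2 has gradient (2*x0, 6*x1),
# so at (1, 2) the estimate should be close to (2, 12).
f = lambda x: x[0] ** 2 + 3 * x[1] ** 2
print(fd_gradient(f, [1.0, 2.0]))
```

In practice, solvers such as SOLNP+ refine this basic scheme, e.g. by adapting the step `h` (implicit filtering) or differencing only along randomized or block-coordinate directions to reduce the evaluation count.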
Speaker bio: Yinyu Ye is a distinguished tenured professor in the Department of Management Science and Engineering and the Institute for Computational and Mathematical Engineering at Stanford University. His main research interests include optimization, complexity theory, algorithm design and analysis, applications of mathematical programming, operations research, and systems engineering. He is a Fellow of INFORMS (the Institute for Operations Research and the Management Sciences) and has received numerous research awards, including the inaugural Farkas Prize in optimization in 2006 and an IBM Faculty Award in 2009. In 2009 he was awarded the John von Neumann Theory Prize in recognition of his sustained contributions to theory in operations research and the management sciences. In 2012 he became the first recipient of the Tseng Lectureship at the International Symposium on Mathematical Programming (ISMP), and in 2014 he won the triennial Optimization Prize of the Society for Industrial and Applied Mathematics (SIAM).