
Activities
A class of variance reduced proximal stochastic gradient method for convex optimization
Reporter:
Tengteng Yu, Lecturer, Beijing University of Agriculture
Inviter:
Li Zhang, Ph.D.
Subject:
A class of variance reduced proximal stochastic gradient method for convex optimization
Time and place:
10:30-11:30, Friday, October 27, Room Z311
Abstract:

Proximal stochastic variance-reduced methods have recently surged into prominence for solving large-scale machine learning problems. For the nonsmooth convex regularized empirical risk minimization problem, a new stochastic gradient is designed to reduce the variance. Combined with a sub-sampling strategy, a class of variance-reduced proximal stochastic gradient methods is proposed. Instead of computing the full gradient, the method computes only the mean of the gradients of a subset of the component functions, which reduces its computational cost. It is proved that the method enjoys a linear convergence rate for both strongly convex and non-strongly convex problems, and achieves a sublinear convergence rate for general convex problems. The complexity of the proposed method is also established, and experimental results verify its effectiveness.
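To illustrate the kind of method the abstract describes, the following is a minimal sketch of a variance-reduced proximal stochastic gradient iteration for L1-regularized least squares, in which the snapshot gradient is the mean over a random subsample of component functions rather than the full gradient. This is an illustrative reconstruction of the general technique, not the speaker's exact algorithm; all parameter choices (step size, batch size, epoch counts) and the function names are assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def subsampled_prox_vr(A, b, lam, eta=0.01, n_epochs=30, batch=50, inner=100, seed=0):
    """Sketch of a variance-reduced proximal stochastic gradient method for
        min_x  (1/2n) ||Ax - b||^2 + lam * ||x||_1.
    The snapshot gradient is averaged over a random subsample of the
    component functions instead of all n of them (the sub-sampling
    strategy mentioned in the abstract); all details here are assumptions.
    """
    n, d = A.shape
    rng = np.random.default_rng(seed)
    x = np.zeros(d)
    for _ in range(n_epochs):
        x_snap = x.copy()
        # Subsampled snapshot gradient: mean over a random batch, not the full sum.
        S = rng.choice(n, size=batch, replace=False)
        g_snap = A[S].T @ (A[S] @ x_snap - b[S]) / batch
        for _ in range(inner):
            i = rng.integers(n)
            gi = A[i] * (A[i] @ x - b[i])            # component gradient at x
            gi_snap = A[i] * (A[i] @ x_snap - b[i])  # same component at the snapshot
            v = gi - gi_snap + g_snap                # variance-reduced gradient estimate
            x = soft_threshold(x - eta * v, eta * lam)  # proximal step
    return x
```

The control variate `gi - gi_snap + g_snap` keeps the estimate unbiased with respect to the subsampled snapshot while shrinking its variance as the iterates approach the snapshot, which is what permits a constant step size.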

About the speaker: Tengteng Yu received a master's degree from Xidian University in 2016 and a Ph.D. from Hebei University of Technology in 2021. From October 2021 to August 2023, he was a postdoctoral researcher at the Academy of Mathematics and Systems Science, Chinese Academy of Sciences. Since August 2023, he has been with Beijing University of Agriculture. His main research interest is stochastic gradient algorithms for large-scale machine learning, with results published in journals including IEEE Transactions on Neural Networks and Learning Systems, Journal of Scientific Computing, and Journal of the Operations Research Society of China.