
Academic Activities
A class of variance reduced proximal stochastic gradient method for convex optimization
Speaker:
Tengteng Yu, Lecturer, Beijing University of Agriculture
Host:
Dr. Li Zhang
Title:
A class of variance reduced proximal stochastic gradient method for convex optimization
Time & Venue:
10:30-11:30, Friday, October 27, Room Z311
Abstract:

Proximal stochastic variance-reduced methods have recently come to prominence for solving large-scale machine learning problems. For the nonsmooth convex regularized empirical risk minimization problem, a new stochastic gradient is designed to reduce the variance. Combined with a sub-sampling strategy, a class of variance-reduced proximal stochastic gradient methods is proposed. Instead of computing the full gradient, the method only calculates the mean of the gradients of a subset of the component functions, which reduces the computational cost of the method. It is proved that the method enjoys a linear convergence rate for both strongly convex and non-strongly convex problems, and achieves a sublinear convergence rate for general convex problems. The complexity of the proposed method is established, and experimental results verify the effectiveness of the proposed methods.
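To make the idea concrete, the following is a minimal sketch (not the speaker's exact algorithm) of a proximal variance-reduced stochastic gradient method with a subsampled anchor gradient, applied to l1-regularized least squares. All names and parameters (prox_svrg_subsampled, step size eta, epoch length, snapshot_size, batch) are illustrative assumptions, not taken from the talk.

# Sketch of a subsampled proximal SVRG-style method for
#   min_w (1/n) * sum_i 0.5*(a_i^T w - b_i)^2 + lam*||w||_1.
# The anchor gradient is averaged over a random subset of component
# functions rather than the full data set, as described in the abstract.
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t*||.||_1 (soft-thresholding).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def component_grad(A, b, w, idx):
    # Mean gradient of the smooth part over the components in idx.
    Ai, bi = A[idx], b[idx]
    return Ai.T @ (Ai @ w - bi) / len(idx)

def prox_svrg_subsampled(A, b, lam=0.01, eta=0.1, epochs=20,
                         inner=100, snapshot_size=200, batch=10, seed=0):
    rng = np.random.default_rng(seed)
    n, d = A.shape
    w_tilde = np.zeros(d)                      # snapshot (anchor) point
    for _ in range(epochs):
        # Subsampled anchor gradient: mean over a random subset of
        # components, avoiding a full-gradient pass over all n terms.
        S = rng.choice(n, size=min(snapshot_size, n), replace=False)
        g_tilde = component_grad(A, b, w_tilde, S)
        w = w_tilde.copy()
        for _ in range(inner):
            I = rng.choice(n, size=batch, replace=False)
            # Variance-reduced stochastic gradient estimate.
            v = (component_grad(A, b, w, I)
                 - component_grad(A, b, w_tilde, I) + g_tilde)
            # Proximal step handles the nonsmooth l1 regularizer.
            w = soft_threshold(w - eta * v, eta * lam)
        w_tilde = w
    return w_tilde

# Example usage on synthetic sparse-regression data.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((1000, 50))
    w_true = np.zeros(50); w_true[:5] = 1.0
    b = A @ w_true + 0.01 * rng.standard_normal(1000)
    w_hat = prox_svrg_subsampled(A, b)
    print("recovered support:", np.nonzero(np.abs(w_hat) > 0.1)[0])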

Speaker bio: Tengteng Yu received a master's degree from Xidian University in 2016 and a Ph.D. from Hebei University of Technology in 2021. From October 2021 to August 2023, Yu was a postdoctoral researcher at the Academy of Mathematics and Systems Science, Chinese Academy of Sciences, and has been working at Beijing University of Agriculture since August 2023. Yu's main research interest is stochastic gradient algorithms for large-scale machine learning, with related results published in journals such as IEEE Transactions on Neural Networks and Learning Systems, Journal of Scientific Computing, and Journal of the Operations Research Society of China.