Diffusion models, which generate new data instances by learning to reverse a Markov diffusion process from noise, have become a cornerstone of contemporary generative modeling. While their practical power is now widely recognized, the theoretical underpinnings of mainstream samplers remain underdeveloped. Moreover, despite the recent surge of interest in accelerating diffusion-based samplers, convergence theory for these acceleration techniques remains limited. In this talk, I will introduce a new suite of non-asymptotic results aimed at better understanding popular samplers like DDPM and DDIM in discrete time, offering significantly improved convergence guarantees over previous work. Our theory accommodates L2-accurate score estimates and requires neither log-concavity nor smoothness of the target distribution. Building on these insights, we propose training-free algorithms that provably accelerate diffusion-based samplers, leveraging higher-order approximation ideas similar to those used in high-order ODE solvers such as DPM-Solver.
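To fix ideas about the kind of sampler the talk analyzes, below is a minimal sketch (a toy illustration, not the speaker's implementation) of the deterministic DDIM update under the standard variance-preserving forward process. A closed-form Gaussian score stands in for the learned score network, and the noise schedule, target parameters, and function names are all assumptions made for this example.

```python
# A minimal sketch of a deterministic DDIM sampler (eta = 0), assuming the
# variance-preserving forward process
#     x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise.
# The exact score of a 1-D Gaussian target N(mu, sigma^2) stands in for a
# learned score network; schedule and parameters are illustrative choices.
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)      # linear noise schedule (assumed)
alpha_bar = np.cumprod(1.0 - betas)     # cumulative products \bar{alpha}_t

mu, sigma = 2.0, 0.5                    # toy target distribution N(mu, sigma^2)

def score(x, t):
    # Exact score of the marginal q_t = N(sqrt(ab)*mu, ab*sigma^2 + 1 - ab);
    # in practice this is replaced by an L2-accurate score estimate.
    ab = alpha_bar[t]
    var = ab * sigma**2 + (1.0 - ab)
    return -(x - np.sqrt(ab) * mu) / var

def ddim_sample(n, rng):
    x = rng.standard_normal(n)          # initialize from pure Gaussian noise
    for t in range(T - 1, 0, -1):
        ab_t, ab_prev = alpha_bar[t], alpha_bar[t - 1]
        # Convert the score into an epsilon-prediction, then take one
        # deterministic DDIM step toward time t - 1.
        eps = -np.sqrt(1.0 - ab_t) * score(x, t)
        x0_hat = (x - np.sqrt(1.0 - ab_t) * eps) / np.sqrt(ab_t)
        x = np.sqrt(ab_prev) * x0_hat + np.sqrt(1.0 - ab_prev) * eps
    return x

samples = ddim_sample(100_000, np.random.default_rng(0))
print(samples.mean(), samples.std())    # should approach mu = 2.0, sigma = 0.5
```

The acceleration results mentioned in the abstract can be thought of as replacing this first-order update with higher-order ones, in the spirit of DPM-Solver's high-order ODE steps; the sketch here is only meant to fix notation.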
Speaker bio: Gen Li is currently an assistant professor in the Department of Statistics at the Chinese University of Hong Kong. He received his Ph.D. from the Department of Electronic Engineering at Tsinghua University in 2021, and his bachelor's degrees from the Department of Electronic Engineering and the Department of Mathematics at Tsinghua University in 2016. His research interests include diffusion-based generative models, reinforcement learning, high-dimensional statistics, and machine learning.