
Activities
Overcoming the curse of dimensionality: from nonlinear Monte Carlo to the training of neural networks
Reporter:
Arnulf Jentzen, Professor, School of Data Science and Shenzhen Research Institute of Big Data, The Chinese University of Hong Kong
Inviter:
Jialin Hong, Professor
Subject:
Overcoming the curse of dimensionality: from nonlinear Monte Carlo to the training of neural networks
Time and place:
15:30-16:30, April 25 (Tuesday), Room N109
Abstract:

Partial differential equations (PDEs) are among the most universal tools for modelling problems in nature and in man-made complex systems. Nearly all traditional approximation algorithms for PDEs in the literature suffer from the so-called "curse of dimensionality", in the sense that the number of computational operations the algorithm requires to achieve a given approximation accuracy grows exponentially in the dimension of the PDE under consideration. With such algorithms it is impossible to approximately compute solutions of high-dimensional PDEs even on the fastest currently available computers. In the case of linear parabolic PDEs and approximations at a fixed space-time point, the curse of dimensionality can be overcome by means of Monte Carlo approximation algorithms and the Feynman-Kac formula. In this talk we demonstrate that deep neural network approximations overcome the curse of dimensionality for a general class of semilinear parabolic PDEs. Moreover, we specify concrete examples of smooth functions which cannot be approximated by shallow neural networks without the curse of dimensionality but which can be approximated by deep neural networks without the curse of dimensionality.
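
As a minimal illustration of the Monte Carlo approach mentioned in the abstract (a sketch by way of example, not material from the talk): for the d-dimensional heat equation u_t = Laplacian(u) with initial condition u(0, .) = g, the Feynman-Kac formula gives u(t, x) = E[g(x + sqrt(2t) Z)] with Z ~ N(0, I_d), so averaging g over Gaussian samples approximates the solution at the single space-time point (t, x). In the Python sketch below, the function name heat_equation_mc, the test function g(x) = cos(c . x) with |c| = 1 (for which u(t, x) = exp(-t) cos(c . x) in closed form), and all parameter values are illustrative assumptions. The cost to reach a given absolute accuracy is governed by 1/sqrt(num_samples) and does not grow with the dimension d, which is the sense in which the curse of dimensionality is overcome here.

import numpy as np

def heat_equation_mc(g, t, x, num_samples=10**5, seed=0):
    # Feynman-Kac Monte Carlo for u_t = Laplacian(u), u(0, .) = g:
    #     u(t, x) = E[ g(x + sqrt(2 t) Z) ],  Z ~ N(0, I_d).
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((num_samples, x.shape[0]))  # Z ~ N(0, I_d)
    endpoints = x + np.sqrt(2.0 * t) * z                # diffusion samples at time t
    return g(endpoints).mean()                          # Monte Carlo average

# Illustrative check in dimension d = 100 (hypothetical test problem):
# g(x) = cos(c . x) with |c| = 1 has the exact solution exp(-t) cos(c . x).
d, t = 100, 1.0
c = np.ones(d) / np.sqrt(d)
g = lambda pts: np.cos(pts @ c)
x = np.zeros(d)
print("Monte Carlo estimate:", heat_equation_mc(g, t, x))  # approx. exp(-1) = 0.3679...
print("exact solution:     ", np.exp(-t) * np.cos(c @ x))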