Partial differential equations (PDEs) are among the most universal tools used in modelling problems in nature and in man-made complex systems. Nearly all traditional approximation algorithms for PDEs in the literature suffer from the so-called "curse of dimensionality" in the sense that the number of computational operations required to achieve a given approximation accuracy grows exponentially in the dimension of the PDE under consideration. With such algorithms it is impossible to approximately compute solutions of high-dimensional PDEs even on the fastest currently available computers. In the case of linear parabolic PDEs and approximations at a fixed space-time point, the curse of dimensionality can be overcome by means of Monte Carlo approximation algorithms and the Feynman-Kac formula. In this talk we demonstrate that deep neural network approximations overcome the curse of dimensionality in the case of a general class of semilinear parabolic PDEs. Moreover, we also specify concrete examples of smooth functions which cannot be approximated by shallow neural networks without the curse of dimensionality but which can be approximated by deep neural networks without the curse of dimensionality.
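
To make the Monte Carlo claim concrete, here is a minimal sketch, assuming the simplest instance of a linear parabolic PDE: the d-dimensional heat equation du/dt = (1/2) * Laplacian(u) with initial condition u(0, .) = phi. The Feynman-Kac formula then gives the representation u(t, x) = E[phi(x + W_t)], where W is a standard d-dimensional Brownian motion, and this expectation can be estimated by averaging over independent Gaussian samples. The choice of PDE, the test function phi, and all numerical parameters below are illustrative assumptions, not taken from the talk itself.

    import numpy as np

    def heat_mc(phi, x, t, n=100_000, rng=None):
        # Monte Carlo estimate of u(t, x) for the d-dimensional heat
        # equation du/dt = (1/2) * Laplacian(u) with u(0, .) = phi,
        # via the Feynman-Kac representation u(t, x) = E[phi(x + W_t)].
        rng = np.random.default_rng(0) if rng is None else rng
        d = x.shape[0]
        # Sample W_t ~ N(0, t * I_d) for each of the n paths.
        w = rng.normal(scale=np.sqrt(t), size=(n, d))
        return float(np.mean(phi(x + w)))

    # Illustrative test in d = 100 dimensions with phi(y) = |y|^2,
    # for which the exact solution u(t, 0) = d * t = 100 is known.
    d = 100
    phi = lambda y: np.sum(y**2, axis=-1)
    print(heat_mc(phi, np.zeros(d), t=1.0))

Each sample costs O(d) operations and the Monte Carlo error decays like n^(-1/2) independently of d, so the cost of reaching a fixed accuracy grows only polynomially in the dimension; this is the sense in which Monte Carlo methods overcome the curse of dimensionality for such linear problems. The quadratic test function is chosen here only because its exact solution provides a reference value for checking the estimate.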