General multivariate distributions are notoriously expensive to sample from, particularly the high-dimensional posterior distributions arising in PDE-constrained inverse problems. In this talk, I will present a measure-transport approach to Bayesian inverse problems built on low-rank surrogates in the tensor train (TT) format, a methodology that has been exploited for many years for scalable, high-dimensional density function approximation in quantum physics and chemistry. We build upon recent developments in cross approximation algorithms from numerical linear algebra to construct a TT approximation to the target probability density function using a small number of function evaluations. For sufficiently smooth distributions, the storage required for accurate TT approximations is moderate, scaling linearly with dimension. In turn, the structure of the TT surrogate allows sampling by an efficient conditional distribution method, since marginal distributions are computable with linear complexity in the dimension. This generic tool enables conditional sampling, construction of optimal biasing densities, and even tractable Bayesian optimal experimental design. To keep the ranks arising in the TT approximations manageable, we furthermore propose a ‘deep’ version of the approach in which the transport from reference to target distribution is approximated incrementally via intermediate bridging densities. The method is demonstrated on complex PDE-constrained Bayesian inverse problems, computing expectations of functionals of the PDE solution with respect to high-dimensional posterior distributions, including rare event probabilities and optimal experimental designs.
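The conditional distribution sampler sketched above can be illustrated on a toy problem. The snippet below is a minimal, hypothetical sketch, not the authors' actual scheme: instead of building the TT cores by cross approximation of a real posterior, it fabricates small nonnegative cores (so the represented function is a valid unnormalised density on a grid), then shows the two ingredients the abstract mentions: marginals obtained by right-to-left contractions with linear cost in the dimension, and exact sampling dimension-by-dimension from the resulting 1-D conditionals. All sizes (`d`, `n`, `r`) and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes: d dimensions, n grid points per dimension, TT rank r.
# (The PDE-constrained problems in the talk are far larger.)
d, n, r = 3, 50, 2
grid = np.linspace(-3, 3, n)
w = (grid[1] - grid[0]) * np.ones(n)  # simple quadrature weights

# Fabricated TT cores G_k of shape (r_{k-1}, n, r_k) with nonnegative
# entries and a Gaussian-like profile per mode -- a stand-in for cores
# that a cross approximation algorithm would produce.
ranks = [1, r, r, 1]
cores = [np.abs(rng.standard_normal((ranks[k], n, ranks[k + 1])))
         * np.exp(-grid**2)[None, :, None]
         for k in range(d)]

# Right-to-left contractions R[k]: integral of cores k..d-1 over their
# variables.  Each R[k] is a length-r_{k-1} vector; marginals need only
# these, which is the linear-in-dimension complexity the abstract cites.
R = [None] * (d + 1)
R[d] = np.ones(1)
for k in range(d - 1, -1, -1):
    R[k] = np.einsum('iaj,a,j->i', cores[k], w, R[k + 1])

def sample_tt(num):
    """Conditional distribution method: draw x_1, then x_2 | x_1, ..."""
    samples = np.empty((num, d), dtype=int)
    for s in range(num):
        left = np.ones(1)  # accumulated left interface vector
        for k in range(d):
            # Unnormalised 1-D conditional p(x_k | x_1, ..., x_{k-1})
            pk = np.einsum('i,iaj,j->a', left, cores[k], R[k + 1])
            idx = rng.choice(n, p=pk / pk.sum())
            samples[s, k] = idx
            left = left @ cores[k][:, idx, :]
    return grid[samples]

xs = sample_tt(500)
```

Each sample costs O(d n r^2) operations, so the overall cost of the sampler grows linearly with the dimension, as claimed for the TT surrogate.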