Matrix denoising is central to signal processing and machine learning. Its statistical analysis when the matrix to be inferred has a factorised structure, with a rank growing proportionally to its dimension, remains a challenge except when the matrix is rotationally invariant. The reason is that the model is not a usual spin system, because the rank grows with the dimension, nor a matrix model, because rotational symmetry is absent; it is a hybrid between the two. I will discuss recent findings on Bayesian matrix denoising when the hidden signal XX^t is not rotationally invariant. In particular, I will discuss the existence of a "universality breaking" phase transition separating a regime akin to random matrix theory, with strong universality properties, from one of mean-field type as in spin models, treatable by spin glass techniques.
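For concreteness, a standard formulation of such a denoising task (the notation and the Gaussian-noise assumption below are illustrative and not taken from the abstract) observes a noisy version of the factored signal:

Y = \sqrt{\lambda/N}\, X X^\intercal + Z, \qquad X \in \mathbb{R}^{N \times M}, \quad M = \alpha N,

where the entries of X are drawn i.i.d. from a generic (possibly non-Gaussian) prior, which is what breaks rotational invariance of XX^t, Z is a symmetric Gaussian noise matrix, \lambda is a signal-to-noise ratio, and the rank M grows proportionally to the dimension N with fixed aspect ratio \alpha > 0.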
In the second part, I will connect this model and phenomenology to learning in neural networks. We will see how these findings make it possible to analyse neural networks with an extensively large hidden layer trained near their interpolation threshold, a model that has long resisted analysis. I will show that the phase transition in matrix denoising translates, in this context, into a sharp learning transition. The related papers are: https://arxiv.org/pdf/2411.01974 ; https://arxiv.org/pdf/2501.18530
Bio: Jean Barbier is a mathematical physicist specialised in information processing systems, working as an Associate Professor at the International Centre for Theoretical Physics (ICTP) in Italy. He is interested in the physics of high-dimensional inference and learning problems. Thanks to a grant from the European Research Council for his project on the "Computational hardness of representation learning", Barbier's group is currently developing novel statistical tools to better quantify the performance of neural networks trained on structured data, through a combination of random matrix theory, statistical mechanics and information theory.