
What Does a Neural Network Look Like If It Is Trained for a Long Time?
Speaker:
Weijie Su, Professor, University of Pennsylvania
Invited by:
Pingbing Ming, Professor
Subject:
What Does a Neural Network Look Like If It Is Trained for a Long Time?
Time and place:
9:00-10:00, April 26 (Tuesday)
Abstract:

The remarkable development of deep learning over the past decade has relied heavily on sophisticated heuristics and tricks. To better exploit its potential in the coming decade, a rigorous framework for reasoning about deep learning may be needed, but such a framework is not easy to build given the intricate details of modern neural networks. For near-term purposes, a practical alternative is to develop a mathematically tractable surrogate model that nevertheless retains many characteristics of deep learning models. This talk introduces a model of this kind, which we term the Layer-Peeled Model, as a tool toward understanding deep learning. Its effectiveness is demonstrated through two use cases. First, we use this model to explain neural collapse, an empirical pattern of deep learning recently discovered by David Donoho and his students. Second, this model predicts a hitherto unknown phenomenon in deep learning training that we term Minority Collapse. This talk is based on arXiv:2101.12699 and arXiv:2110.02796.
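For readers who wish to preview the model before the talk, the display below is a rough sketch of the kind of optimization program the Layer-Peeled Model refers to, reconstructed from arXiv:2101.12699 cited above rather than from this announcement; the symbols (K classes, n_k training examples in class k, N total examples, last-layer features h_{k,i}, classifier rows w_k, loss L, and norm budgets E_W, E_H) follow that paper's notation and should be checked against it.

\[
\begin{aligned}
\min_{\mathbf{W},\,\mathbf{H}}\quad & \frac{1}{N}\sum_{k=1}^{K}\sum_{i=1}^{n_k}\mathcal{L}\bigl(\mathbf{W}\mathbf{h}_{k,i},\,\mathbf{y}_k\bigr) && \text{(loss over the last layer only)}\\
\text{s.t.}\quad & \frac{1}{K}\sum_{k=1}^{K}\lVert\mathbf{w}_k\rVert^{2}\le E_W,\qquad
\frac{1}{K}\sum_{k=1}^{K}\frac{1}{n_k}\sum_{i=1}^{n_k}\lVert\mathbf{h}_{k,i}\rVert^{2}\le E_H && \text{(norm budgets on weights and features)}
\end{aligned}
\]

The defining simplification is that the features h_{k,i} entering the last layer are treated as free optimization variables, so all layers below the top one are "peeled off" and summarized by the feature-norm constraint; this is what makes the surrogate mathematically tractable while preserving the classification structure of the original network.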