Conference abstracts

Session A1 - Approximation Theory

July 11, 18:20–18:55 - Room B3

Optimal Approximation with Sparsely Connected Deep Neural Networks

Gitta Kutyniok

Technische Universität Berlin, Germany   -   kutyniok@math.tu-berlin.de

Despite the outstanding success of deep neural networks in real-world applications, most of the related research is empirically driven, and a mathematical foundation is almost completely missing. One central task of a neural network is to approximate a function, which may, for instance, encode a classification task. In this talk, we will be concerned with the question of how well a function can be approximated by a deep neural network with sparse connectivity. We will derive fundamental lower bounds on the connectivity and the memory requirements of deep neural networks guaranteeing uniform approximation rates for arbitrary function classes, including functions on low-dimensional immersed manifolds. Additionally, we prove that our lower bounds are achievable for a broad family of function classes, thereby deriving an optimality result. Finally, we present numerical experiments demonstrating that the standard stochastic gradient descent algorithm generates deep neural networks providing close-to-optimal approximation rates at minimal connectivity. Surprisingly, these results also show that stochastic gradient descent learns approximations that are sparse in the representation systems optimally sparsifying the function class the network is trained on.
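The experiments described above pair standard SGD training with a measurement of how many weights the trained network actually uses. The following is a minimal illustrative sketch of that setup, not the speakers' code: it assumes PyTorch, an arbitrary 1-D target function, a small ReLU architecture, and an arbitrary magnitude threshold for counting "active" connections, none of which come from the talk.

```python
# Illustrative sketch (hypothetical setup, not the authors' experiments):
# train a small ReLU network with plain SGD on a 1-D regression task,
# then gauge "connectivity" by counting weights above a magnitude threshold.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical target function to approximate.
def f(x):
    return torch.sin(4 * x) + 0.5 * torch.abs(x)

# Small fully connected ReLU network (architecture chosen arbitrarily).
net = nn.Sequential(
    nn.Linear(1, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 1),
)

opt = torch.optim.SGD(net.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

x = torch.linspace(-1, 1, 512).unsqueeze(1)
y = f(x)

for step in range(5000):
    opt.zero_grad()
    loss = loss_fn(net(x), y)
    loss.backward()
    opt.step()

# Effective connectivity: weights whose magnitude exceeds a threshold.
threshold = 1e-3
total = sum(p.numel() for p in net.parameters())
active = sum((p.abs() > threshold).sum().item() for p in net.parameters())
print(f"uniform error ~ {(net(x) - y).abs().max().item():.3e}")
print(f"active weights: {active}/{total}")
```

In this reading, the uniform error plays the role of the approximation accuracy and the active-weight count stands in for the connectivity whose trade-off against accuracy the talk's lower bounds quantify.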

Joint work with Helmut Bölcskei (ETH Zürich, Switzerland), Philipp Grohs (Universität Wien, Austria) and Philipp Petersen (Technische Universität Berlin, Germany).
