Paper
The structure of spaces of neural network functions
9 September 2019
Philipp Petersen, Mones Raslan, and Felix Voigtlaender
Abstract
We analyse spaces of deep neural networks with a fixed architecture. We demonstrate that, when considered as sets of functions, these spaces exhibit several unfavourable properties: they are highly non-convex and are not closed with respect to the L^p-norms, for 0 < p < ∞ and all commonly-used activation functions. They are also not closed with respect to the L^∞-norm for almost all practically-used activation functions; here, the (parametric) ReLU is the only exception. Finally, we show that the map that sends a family of neural network weights to the associated functional representation of the network is not inverse stable for any practically-used activation function.
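To make the non-convexity claim concrete, the following is a minimal sketch in standard notation; the one-neuron architecture and the symbols \( \mathcal{N} \), \( \varrho \) below are illustrative choices and are not taken from the paper itself. Consider

\[
  \mathcal{N} = \{\, x \mapsto a\,\varrho(wx + b) : a, w, b \in \mathbb{R} \,\},
  \qquad \varrho(x) = \max\{0, x\}.
\]

Both \( f_1(x) = \varrho(x) \) and \( f_2(x) = \varrho(-x) \) belong to \( \mathcal{N} \), yet

\[
  \tfrac{1}{2} f_1(x) + \tfrac{1}{2} f_2(x) = \tfrac{1}{2}\,\lvert x \rvert .
\]

Every element of \( \mathcal{N} \) is either constant (if \( w = 0 \)) or vanishes on a half-line (if \( w \neq 0 \)), whereas \( \tfrac{1}{2}\lvert x \rvert \) is neither; the midpoint therefore lies outside \( \mathcal{N} \), so the set is not convex. A similarly small computation suggests the failure of L^p-closedness: with the sigmoid \( \sigma(x) = 1/(1 + e^{-x}) \), the realizations \( x \mapsto \sigma(nx) \) converge in \( L^p([-1,1]) \) to the Heaviside step function as \( n \to \infty \), yet this limit is discontinuous and hence is the realization of no network with a continuous activation function.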
Philipp Petersen, Mones Raslan, and Felix Voigtlaender "The structure of spaces of neural network functions", Proc. SPIE 11138, Wavelets and Sparsity XVIII, 111380F (9 September 2019); https://doi.org/10.1117/12.2528313
KEYWORDS
Neural networks
Machine learning