Learning Machines Seminars - Linear Regions of Deep Neural Networks

Online via Zoom

Learning Machines Seminars gathers experts in AI for an open weekly seminar! Each seminar features a presentation on a current topic in machine learning.

Arranged by: 
RISE Research Institutes of Sweden

Abstract: Many of the most widely used neural network architectures make use of rectified linear activations (ReLU, i.e. f(x) = max(x, 0)) and are therefore piecewise linear functions. The maximal subsets of the input space on which the network function is linear are called linear regions. If we want to better understand ReLU networks, it may be beneficial to understand these regions. There is a common intuition that the number of linear regions of a neural network measures its expressivity, so a lot of focus has been placed on obtaining estimates of this number. However, this number is staggeringly high: even very small networks have many more linear regions than there are atoms in the universe (about 10^80), and far more than there are points in the dataset.
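
The piecewise-linear structure is easy to see in code. Below is a minimal, purely illustrative sketch (a hypothetical random 2-16-1 network, not any model from the talk) that identifies the linear region a point lies in by its ReLU activation pattern and counts how many distinct regions a coarse input grid touches.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-16-1 ReLU network with random weights (illustrative only).
W1, b1 = rng.standard_normal((16, 2)), rng.standard_normal(16)
W2, b2 = rng.standard_normal((1, 16)), rng.standard_normal(1)

def activation_pattern(x):
    """Which hidden ReLUs are active at x; this pattern indexes the
    linear region that x lies in."""
    return (W1 @ x + b1) > 0

def forward(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

# Count how many distinct linear regions a coarse grid of inputs touches.
grid = [np.array([a, b]) for a in np.linspace(-2, 2, 50)
                         for b in np.linspace(-2, 2, 50)]
patterns = {tuple(activation_pattern(x)) for x in grid}
print(f"{len(patterns)} distinct linear regions hit by {len(grid)} grid points")
```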

This raises the question of how representative the number of linear regions is for network performance, and how information extracted from training samples passes on to the many data-free linear regions for successful generalisation to test data. Our approach differs from previous, counting-focused ones in that it investigates the linear coefficients associated with the linear regions. We propose TropEx, a tropical-algebra-based algorithm that extracts linear terms of the network function. This allows us to compare the network function with an extracted tropical function that agrees with the original network around all training data points, but which has far fewer linear regions. We also use our algorithm to compare different network types from the perspective of linear regions and their coefficients.
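
For intuition on what the linear coefficients associated with a linear region are, here is a hedged sketch (again using a toy random network; this is not the TropEx algorithm itself) that reads off the affine map A x + c the network computes on the region containing a given point x0.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((16, 2)), rng.standard_normal(16)  # toy 2-16-1 net
W2, b2 = rng.standard_normal((1, 16)), rng.standard_normal(1)

def forward(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def local_affine_term(x0):
    """Affine map A @ x + c that equals the network on the linear region
    containing x0 (ReLUs frozen to their 0/1 pattern at x0)."""
    m = ((W1 @ x0 + b1) > 0).astype(float)   # activation pattern at x0
    A = W2 @ (m[:, None] * W1)               # region slope:  W2 diag(m) W1
    c = W2 @ (m * b1) + b2                   # region offset: W2 diag(m) b1 + b2
    return A, c

x0 = np.array([0.3, -1.2])
A, c = local_affine_term(x0)
assert np.allclose(A @ x0 + c, forward(x0))  # the affine term reproduces f(x0)
```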

Speaker: Martin Trimmel, Lund University.
