Probability Seminar
Title: Random Neural Networks
Abstract: Fully connected neural networks are described by two structural parameters: a depth L and a width N. In this talk, I will present results and open questions about the asymptotic analysis of such networks with random weights and biases in the regime where N (and potentially L) is large. The first set of results concerns deep linear networks, which are simply products of L random matrices of size N x N. I’ll explain how the setting where the ratio L / N is fixed, with both N and L large, reveals a number of phenomena not present when only one of them is large. I will then state several results about non-linear networks in which this depth-to-width ratio L / N again plays a crucial role and gives an effective notion of depth for a random neural network.
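
As a minimal numerical illustration of the linear setting (a sketch added here for concreteness, not material from the talk): a deep linear network is a product of L i.i.d. Gaussian N x N matrices, here scaled by 1/sqrt(N) so each factor has singular values of order one, simulated at a fixed depth-to-width ratio L / N. The names and the choice of observable (the mean log-singular-value of the product) are illustrative assumptions.

    # Sketch: product of L random N x N Gaussian matrices at fixed L / N.
    import numpy as np

    def deep_linear_product(L, N, rng):
        """Product W_L * ... * W_1 of L i.i.d. Gaussian N x N matrices,
        each scaled by 1/sqrt(N)."""
        W = np.eye(N)
        for _ in range(L):
            W = (rng.standard_normal((N, N)) / np.sqrt(N)) @ W
        return W

    rng = np.random.default_rng(0)
    for N in (50, 100, 200):
        L = N // 10  # hold the ratio L / N = 0.1 fixed as both grow
        W = deep_linear_product(L, N, rng)
        svals = np.linalg.svd(W, compute_uv=False)
        print(N, L, np.log(svals).mean())  # mean log-singular-value

How statistics like this behave as the fixed ratio L / N varies, compared with taking only N or only L large, is the kind of question the talk addresses.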