On stochastic processes
Document Type
Lecture
Publication Date
11-21-2024
Abstract
In a comprehensive study of feed-forward ReLU neural networks, Grigsby et al. (2022) explore the functional dimension of such networks, which measures a network's expressiveness. One factor contributing to a functional dimension below the maximal level is the presence of stably inactivated neurons. In this work, we analyze a feed-forward neural network with input dimension n. We show that the probability of a neuron in the second hidden layer being stably inactivated is (2^n + 1)/4^(n+1) when the first hidden layer has n+1 neurons, and is 1/2^(n1+1) when the first hidden layer has n1 neurons with n1 ≤ n. Moreover, a conjecture for the more general case n1 ≥ n+1 is proposed, along with supporting experimental evidence presented at the end.
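One way to read the 1/2^(n1+1) count for n1 ≤ n: generically the first layer's image covers the whole nonnegative orthant of R^(n1), so a second-layer neuron is stably inactivated exactly when all n1 incoming weights and its bias are nonpositive, an event of probability 1/2^(n1+1) under any sign-symmetric weight distribution. The sketch below (our illustration, not the paper's code; Gaussian initialization is an assumption) checks this by Monte Carlo:

```python
import numpy as np

def estimate_stably_inactivated(n1, trials=200_000, seed=0):
    """Estimate the probability that a second-hidden-layer neuron is
    stably inactivated, for first-hidden-layer width n1 <= n.

    Assumed criterion (generic case, n1 <= n): the first layer's image
    covers the nonnegative orthant of R^n1, so the neuron's preactivation
    w.z + b is <= 0 for every reachable z iff w_i <= 0 for all i and b <= 0.
    """
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((trials, n1))  # incoming weights of the neuron
    b = rng.standard_normal(trials)        # its bias
    hits = np.all(w <= 0, axis=1) & (b <= 0)
    return hits.mean()

for n1 in (1, 2, 3):
    est = estimate_stably_inactivated(n1)
    print(f"n1={n1}: estimate={est:.4f}, predicted={1 / 2 ** (n1 + 1):.4f}")
```

With 200,000 trials the estimates land within a few thousandths of the predicted 1/2^(n1+1) (0.25, 0.125, 0.0625 for n1 = 1, 2, 3). The n1 = n+1 case is harder precisely because the first layer's image is then a proper subset of the nonnegative orthant, so some sign patterns with a positive weight can still be stably inactivated.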
Relational Format
presentation
Recommended Citation
Zhang, Na, "On stochastic processes" (2024). Probability & Statistics Seminar. 1.
https://egrove.olemiss.edu/math_statistics/1