"On stochastic processes" by Na Zhang
 


Document Type

Lecture

Publication Date

11-21-2024

Abstract

In a comprehensive study of feed-forward ReLU neural networks, Grigsby et al. (2022) explore the functional dimension of such networks, which measures a network's expressiveness. One factor contributing to a functional dimension below the maximal level is the presence of stably inactivated neurons. In this work, we analyze a feed-forward ReLU neural network with input dimension n. We show that the probability of a neuron in the second hidden layer being stably inactivated is (2^n + 1)/4^(n+1) when the first hidden layer has n + 1 neurons, and is 1/2^(n₁+1) when the first hidden layer has n₁ neurons with n₁ ≤ n. Moreover, a conjecture for the more general case n₁ ≥ n + 1 is proposed, along with supporting experimental evidence presented at the end.
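The n₁ ≤ n case of the stated result can be illustrated with a small simulation. The sketch below is not from the lecture; it assumes that when n₁ ≤ n the first layer's ReLU image generically covers the whole nonnegative orthant, so a second-layer neuron with weights w and bias b is stably inactivated exactly when every weight is ≤ 0 and the bias is < 0. Under sign-symmetric (e.g. Gaussian) random parameters, that event has probability 1/2^(n₁+1), which the Monte Carlo estimate below approximates (the function name and sample sizes are illustrative choices).

```python
# Hypothetical sketch, not the lecture's experiment: estimate the probability
# that a second-hidden-layer neuron is stably inactivated, assuming the first
# layer's ReLU image fills the nonnegative orthant (the n1 <= n regime), so
# stable inactivation <=> all weights <= 0 and bias < 0.
import numpy as np

def estimated_prob(n1, trials=200_000, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((trials, n1))  # second-layer weight vectors
    b = rng.standard_normal(trials)        # second-layer biases
    # Event: w · z + b < 0 for every z >= 0, i.e. w <= 0 componentwise, b < 0.
    stably_inactive = (w <= 0).all(axis=1) & (b < 0)
    return stably_inactive.mean()

for n1 in (1, 2, 3):
    print(n1, estimated_prob(n1), 1 / 2 ** (n1 + 1))
```

Each estimate should land close to the closed-form value 1/2^(n₁+1) (for example, near 0.125 when n₁ = 2), up to Monte Carlo error.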

Relational Format

presentation

This document is currently not available for download.
