For a neural network with a fixed number of neurons and layers, is there a way to approximate the maximum number of computational steps it can model, regardless of the number of sub-functions?

It’s known that a neural network with a fixed number of neurons and layers can compute a large variety of individual sub-functions. A biological comparison could be made: highly intelligent and/or highly skilled people show less activation of their neural networks when exercising a relevant skill, indicating that those networks are highly optimized.

Both scenarios imply that there exists some mathematical limit on the number of computational steps a single network can model. For instance, perhaps there exists some neural network that, in its most highly optimized state, could model 10,000 different 3-step functions, or just 3 different 10,000-step functions. In a highly unoptimized state it might only be calculating 2 different 5-step functions, etc., meaning that in its highly unoptimized state it is nowhere near its full potential of utilization.

Does all this relate to the dimensionality of a neural network, i.e. its weights and biases? In biology a certain degree of neuron redundancy is necessary, but overall I wonder how the maximum number of functions/computational steps can be calculated for a given set of neurons.
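One crude, well-studied proxy for this kind of capacity question is the number of linear regions a ReLU network can carve its input space into (tighter bounds exist, e.g. Montúfar et al. 2014; the sketch below uses only the loose per-layer hyperplane-arrangement bound and assumes ReLU activations, which the question does not specify):

```python
from math import comb

def max_linear_regions(input_dim, hidden_widths):
    """Loose upper bound on the number of linear regions a ReLU
    network can produce: the product of per-layer hyperplane-
    arrangement counts. Not tight, but it shows how capacity
    scales with depth vs. width for a fixed neuron budget."""
    bound = 1
    for n in hidden_widths:
        # n hyperplanes in a d-dimensional space create at most
        # sum_{j=0}^{d} C(n, j) regions
        bound *= sum(comb(n, j) for j in range(min(n, input_dim) + 1))
    return bound

# Same budget of 20 neurons, different shapes:
print(max_linear_regions(2, [20]))      # one wide layer  -> 211
print(max_linear_regions(2, [10, 10]))  # two deep layers -> 3136
```

The gap between the two numbers illustrates the point in the question: the same set of neurons can realize very different amounts of "computation" depending on how it is arranged, so any answer likely depends on architecture, not just neuron count.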

submitted by /u/LunarStone

Published by Nevin Manimala
