Representation of Variables and Their Values in Neural Networks
Abstract
Neural nets (NNs) such as multi-layer feedforward and recurrent nets have had considerable success in creating representations in the hidden layers. In a combinatorial domain, such as a visual scene, a parsimonious representation might be in terms of component features (or variables) such as colour, shape and size (each of which can take on multiple values, such as red or green, or square or circle). Simulations are described demonstrating that a multi-variable encoder network can learn to represent an input pattern in terms of its component variables, wherein each variable is encoded by a pair of hidden units. The interesting aspect of this representation is that the number of hidden units required to represent arbitrary numbers of variables and values is linear in the number of variables, but constant with respect to the number of values for each variable. This result provides a new perspective for assessing the representational capacity of hidden units in combinatorial domains.
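The abstract does not include the simulation code, so the following is only a minimal sketch of the kind of multi-variable encoder network it describes: inputs built from one one-hot code per variable, a sigmoid bottleneck with two hidden units per variable, and a per-variable softmax reconstruction trained with cross-entropy. The specific variable and value counts, the optimiser, and the loss are assumptions for illustration, not details taken from the paper.

```python
# Hypothetical sketch (not the paper's code): an encoder-decoder whose
# bottleneck allocates two hidden units to each variable, so the hidden
# layer grows linearly with the number of variables and is unaffected
# by how many values each variable can take.
import torch
import torch.nn as nn

NUM_VARIABLES = 3        # e.g. colour, shape, size (assumed)
VALUES_PER_VARIABLE = 5  # number of values each variable can take (assumed)
HIDDEN_PER_VARIABLE = 2  # the pair of hidden units per variable described above

input_size = NUM_VARIABLES * VALUES_PER_VARIABLE   # one one-hot code per variable
hidden_size = NUM_VARIABLES * HIDDEN_PER_VARIABLE  # linear in variables, constant in values

encoder = nn.Sequential(nn.Linear(input_size, hidden_size), nn.Sigmoid())
decoder = nn.Linear(hidden_size, input_size)


def sample_batch(batch_size):
    """Each input pattern concatenates one one-hot code per variable."""
    values = torch.randint(0, VALUES_PER_VARIABLE, (batch_size, NUM_VARIABLES))
    one_hots = torch.nn.functional.one_hot(values, VALUES_PER_VARIABLE).float()
    return one_hots.reshape(batch_size, -1), values


optimiser = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-2
)
loss_fn = nn.CrossEntropyLoss()

for step in range(2000):
    x, values = sample_batch(64)
    # Reconstruct each variable's value from the bottleneck representation.
    logits = decoder(encoder(x)).reshape(-1, NUM_VARIABLES, VALUES_PER_VARIABLE)
    loss = loss_fn(logits.reshape(-1, VALUES_PER_VARIABLE), values.reshape(-1))
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()

print("final reconstruction loss:", loss.item())
```

Note how `hidden_size` depends only on `NUM_VARIABLES`: increasing `VALUES_PER_VARIABLE` widens the input and output layers but leaves the bottleneck unchanged, which is the scaling property the abstract highlights.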