The Partial Information Decomposition of Generative Neural Network Models
by: Tycho M.S. Tax, Pedro A.M. Mediano, Murray Shanahan
| Format: | Article |
|---|---|
| Published: | MDPI AG 2017-09-01 |
Description
In this work we study the distributed representations learnt by generative neural network models. In particular, we investigate the properties of redundant and synergistic information that groups of hidden neurons contain about the target variable. To this end, we use an emerging branch of information theory called partial information decomposition (PID) and track the informational properties of the neurons throughout training. We find two distinct phases during the training process: a first, short phase in which the neurons learn redundant information about the target, and a second phase in which the neurons start to specialise and each learns unique information about the target. We also find that in smaller networks individual neurons learn more specific information about certain features of the input, suggesting that learning pressure can encourage disentangled representations.
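The decomposition described above splits the mutual information two sources carry about a target into redundant, unique, and synergistic parts. As an illustrative sketch (not the authors' code), the snippet below computes the Williams-Beer redundancy measure I_min for a toy XOR distribution, where all the information is synergistic; the distribution `p` and all function names are assumptions chosen for this example.

```python
import math

# Joint distribution p(s1, s2, t) for T = S1 XOR S2 with uniform binary inputs.
# For XOR, neither source alone tells us anything about T, but together
# they determine it fully: pure synergy.
p = {(s1, s2, s1 ^ s2): 0.25 for s1 in (0, 1) for s2 in (0, 1)}

def marginal(dist, axes):
    """Marginalise the joint distribution onto the given axes."""
    out = {}
    for k, v in dist.items():
        key = tuple(k[a] for a in axes)
        out[key] = out.get(key, 0.0) + v
    return out

def mi(dist, ax_x, ax_t):
    """Mutual information I(X; T) from the joint over (s1, s2, t)."""
    px, pt = marginal(dist, ax_x), marginal(dist, ax_t)
    pxt = marginal(dist, ax_x + ax_t)
    total = 0.0
    for k, v in pxt.items():
        x, t = k[:len(ax_x)], k[len(ax_x):]
        if v > 0:
            total += v * math.log2(v / (px[x] * pt[t]))
    return total

def specific_info(dist, ax_s, t):
    """Specific information I_spec(T=t; S) = sum_s p(s|t) log2(p(t|s)/p(t))."""
    pt = marginal(dist, (2,))[(t,)]
    ps = marginal(dist, ax_s)
    pst = marginal(dist, ax_s + (2,))
    total = 0.0
    for k, v in pst.items():
        if k[-1] == t and v > 0:
            s = k[:-1]
            total += (v / pt) * math.log2((v / ps[s]) / pt)
    return total

# Williams-Beer redundancy: I_min = sum_t p(t) min_i I_spec(T=t; S_i)
pt = marginal(p, (2,))
R = sum(pt[(t,)] * min(specific_info(p, (0,), t), specific_info(p, (1,), t))
        for (t,) in pt)

I1 = mi(p, (0,), (2,))       # I(T; S1)
I2 = mi(p, (1,), (2,))       # I(T; S2)
I12 = mi(p, (0, 1), (2,))    # I(T; S1, S2)

# PID atoms implied by the redundancy measure.
unique1 = I1 - R
unique2 = I2 - R
synergy = I12 - I1 - I2 + R

print(R, unique1, unique2, synergy)  # XOR: redundancy 0, synergy 1 bit
```

Tracking these four quantities per group of hidden neurons over training epochs is the kind of measurement the abstract describes; the paper itself may use a different redundancy measure than I_min.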