New Phys.: Sae Mulli 2019; 69: 338-347
Published online April 30, 2019 https://doi.org/10.3938/NPSM.69.338
Copyright © New Physics: Sae Mulli.
Myoung Won CHO*
Department of Global Medical Science, Sungshin Women's University, Seoul 01133, Korea
This is an open-access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
Both the firing and the learning processes in a neural network can be described through the methodology of statistical mechanics. If the learning rule is defined as gradient descent on the free energy, the derivative of the internal energy should be the product of the activities of the pre- and postsynaptic neurons of a synapse. This corresponds to the basic learning principle, the so-called Hebb's rule. On the other hand, the derivative of the entropy is expected to bring about a competitive relationship between synapses, a mechanism required for a neural network to develop diverse and proper functions. However, the entropy can be derived in a variety of forms depending on the model, and its maximization exerts different effects on the learning process. In this paper, we explore how the free energy or the entropy can be defined in several models and classify how the entropy affects the learning process when the learning rule is derived from gradient descent on the free energy. We also discuss what characteristics a neural network model should have in order for a proper competitive learning rule to be derived from the entropy-maximization process.
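The link between gradient descent on the energy and Hebb's rule can be sketched numerically. The following is a minimal illustration, not the paper's exact formulation: we assume a hypothetical internal energy of the form U(w) = -Σ_ij w_ij ⟨x_i y_j⟩, where x_i and y_j are pre- and postsynaptic activities, so that descending the gradient of U yields an update proportional to the pre-post coincidence, i.e., a Hebbian rule.

```python
import numpy as np

# Hypothetical setup (for illustration only): binary pre- and postsynaptic
# activity samples, and an internal energy U(w) = -sum_ij w_ij * <x_i y_j>.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=(100, 4)).astype(float)  # presynaptic activities
y = rng.integers(0, 2, size=(100, 3)).astype(float)  # postsynaptic activities

corr = x.T @ y / len(x)  # <x_i y_j>: average pre-post coincidence per synapse

def internal_energy(w):
    """U(w) = -sum_ij w_ij * <x_i y_j> (assumed model energy)."""
    return -np.sum(w * corr)

eta = 0.1
w = np.zeros((4, 3))
grad_U = -corr            # dU/dw_ij = -<x_i y_j>
w_new = w - eta * grad_U  # gradient descent on U

# The resulting update is dw_ij = eta * <x_i y_j>: Hebb's rule.
assert np.allclose(w_new - w, eta * corr)
```

In a full free-energy formulation F = U - TS, the update would carry an additional term from the entropy gradient; as discussed above, the form of that term, and whether it produces competition between synapses, depends on how the entropy is defined in each model.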
Keywords: Neural network learning, Entropy principle