Significance of the Entropy Maximization Principle in a Neural System
New Phys.: Sae Mulli 2019; 69: 338~347
Published online April 30, 2019; https://doi.org/10.3938/NPSM.69.338
© 2019 New Physics: Sae Mulli.

Myoung Won CHO*

Department of Global Medical Science, Sungshin Women's University, Seoul 01133, Korea
Correspondence to: mwcho@sungshin.ac.kr
Received December 18, 2018; Revised January 29, 2019; Accepted February 18, 2019.
This is an open-access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/), which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
Both the firing dynamics and the learning process in a neural network can be described with the methodology of statistical mechanics. If the learning rule is defined as gradient descent on the free energy, the derivative of the internal energy becomes the product of the activities of the pre- and postsynaptic neurons at a synapse, which corresponds to the basic learning principle known as Hebb's rule. The derivative of the entropy, on the other hand, is expected to produce a competitive relationship between synapses, a requisite mechanism for a neural network to develop diverse and proper functions. However, the entropy can take a variety of forms depending on the model, and its maximization exerts correspondingly different effects on the learning process. In this paper, we explore how the free energy or the entropy can be defined in several models and classify how the entropy affects learning when the learning rule is derived from gradient descent on the free energy. We also discuss what characteristics a neural network model should have in order for a proper competitive learning rule to emerge from the entropy maximization process.
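As a concrete reading of the decomposition described above, the following sketch assumes a pairwise, Boltzmann-machine-like energy with symmetric synaptic weights W_ij and neuron activities s_i; this specific energy form is an illustrative assumption, not necessarily the model analyzed in the paper.

% Minimal sketch of a learning rule derived as gradient descent on the
% free energy F = U - TS, assuming the pairwise energy
% E = -\sum_{i<j} W_{ij} s_i s_j (an illustrative choice).
\begin{equation}
  \Delta W_{ij} \propto -\frac{\partial F}{\partial W_{ij}}
    = -\frac{\partial U}{\partial W_{ij}}
      + T\,\frac{\partial S}{\partial W_{ij}},
  \qquad U = \langle E \rangle .
\end{equation}
% The internal-energy term, evaluated at fixed activity statistics, is
% the pre-/postsynaptic activity product, i.e. Hebb's rule:
\begin{equation}
  -\Bigl\langle \frac{\partial E}{\partial W_{ij}} \Bigr\rangle
    = \langle s_i s_j \rangle ,
\end{equation}
% while the entropy gradient T \partial S / \partial W_{ij} supplies the
% competitive interaction between synapses discussed in the paper.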
PACS numbers: 84.35.+i, 87.19.lv
Keywords: Neural network learning, Entropy principle

