New Physics: Sae Mulli (새물리)

pISSN 0374-4914 eISSN 2289-0041


Research Paper

New Phys.: Sae Mulli 2019; 69: 338-347

Published online April 30, 2019

Copyright © New Physics: Sae Mulli.

Significance of the Entropy Maximization Principle in a Neural System

Myoung Won CHO*

Department of Global Medical Science, Sungshin Women's University, Seoul 01133, Korea


Received: December 18, 2018; Revised: January 29, 2019; Accepted: February 18, 2019

This is an open-access article distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.


Both the firing and the learning processes in a neural network can be described using the methodology of statistical mechanics. If the learning rule is defined as gradient descent on the free energy, the derivative of the internal energy should be the product of the activities of the pre- and postsynaptic neurons at a synapse. This corresponds to the basic learning principle, the so-called Hebb's rule. On the other hand, the derivative of the entropy is expected to bring about a competitive relationship between synapses, which is a requisite mechanism for a neural network to acquire diverse and proper functions. However, the entropy can be derived in a variety of forms depending on the model, and its maximization exerts different effects on the learning process. In this paper, we explore how the free energy or the entropy can be defined in several models and classify how the entropy affects the learning process when the learning rule is derived from gradient descent on the free energy. We also discuss what characteristics a neural network model should have for a proper competitive learning rule to be derived from the entropy maximization process.
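The relationship sketched in the abstract can be illustrated with a minimal numerical example. The following is an illustrative sketch, not the paper's model: it assumes an internal energy of the form U = -Σ_ij w_ij x_i y_j, so that gradient descent on a free energy F = U - T·S yields a Hebbian term (the product of pre- and postsynaptic activities), while a hypothetical entropy-gradient term supplies competition between synapses.

```python
import numpy as np

# Illustrative sketch (not the paper's model): learning as gradient descent
# on a free energy F = U - T*S.  With internal energy
#   U = -sum_ij w_ij * x_i * y_j,
# the derivative dU/dw_ij = -x_i * y_j, so the descent step
#   w_ij <- w_ij - eta * dF/dw_ij
# contains the Hebbian term +eta * x_i * y_j.

rng = np.random.default_rng(0)
n_pre, n_post = 4, 3
w = rng.normal(scale=0.1, size=(n_pre, n_post))  # synaptic weights

eta = 0.05   # learning rate
T = 0.1      # "temperature" weighting the entropy term

x = rng.random(n_pre)    # presynaptic activities
y = rng.random(n_post)   # postsynaptic activities

# Hebbian term from the internal energy: -dU/dw = outer(x, y)
hebb = np.outer(x, y)

# Hypothetical competitive term standing in for the entropy gradient:
# a weight-normalizing pressure among synapses converging on the same
# postsynaptic neuron (the paper derives model-specific forms).
competition = w * (w ** 2).sum(axis=0, keepdims=True)

w += eta * (hebb - T * competition)
```

Here the `competition` term is purely a placeholder: which form the entropy gradient actually takes, and whether it produces useful synaptic competition, depends on the model, which is the question the paper addresses.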

Keywords: Neural network learning, Entropy principle
