New Physics: Sae Mulli

pISSN 0374-4914 eISSN 2289-0041

Research Paper

New Phys.: Sae Mulli 2020; 70: 885-895

Published online October 30, 2020 https://doi.org/10.3938/NPSM.70.885

Copyright © New Physics: Sae Mulli.

Graphical Illustration of the Learning Process in Simple Neural Networks

Xue-Mei CUI1, Donghyeon GIM2, Seung Kee HAN3*

1Normal College, Yanbian University, Jilin 133002, China

2GSI, Anyang 14056, Korea
3Department of Physics, Chungbuk National University, Cheongju 28644, Korea

Correspondence to: skhan@chungbuk.ac.kr

Received: July 12, 2020; Revised: August 1, 2020; Accepted: August 1, 2020

Abstract

Recently, artificial neural networks (ANNs) have been applied to a variety of tasks, including image classification, speech recognition, machine translation, and medical diagnosis. Practical applications use networks composed of very large numbers of neurons, yet little guidance exists on how to choose the number of hidden layers or the number of neurons per layer. In this paper, a graphical illustration of neural learning in simple multilayer perceptron (MLP) networks is presented. For XOR-like problems, the learning process corresponds to finding a line or surface that separates several states into two groups in a higher-dimensional space. We illustrate graphically how this bipartition problem depends on the number of neurons in a layer, and we also address the meaning of adding a layer to the network. We expect that this intuitive graphical understanding of increasing the number of neurons or layers in simple neural networks will be useful in constructing neural networks for practical problems.
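As a minimal sketch of the setting described in the abstract (not the authors' code), the following hand-constructed 2-2-1 perceptron realizes XOR: the hidden layer maps the four input states into hidden-unit space, where a single line separates them into two groups. The weights and biases are illustrative assumptions chosen so that the hidden units compute OR and AND.

```python
import numpy as np

def step(x):
    # Heaviside step activation
    return (x > 0).astype(int)

# Hidden layer: two units computing OR and AND of the two inputs
W_h = np.array([[1.0, 1.0],    # unit 1: OR  (fires if at least one input is 1)
                [1.0, 1.0]])   # unit 2: AND (fires only if both inputs are 1)
b_h = np.array([-0.5, -1.5])

# Output unit: fires when OR is on but AND is off, i.e. XOR.
# In hidden space this is a single separating line.
w_o = np.array([1.0, -1.0])
b_o = -0.5

def xor_mlp(x):
    h = step(W_h @ x + b_h)          # map input into hidden-unit space
    return int(step(w_o @ h + b_o))  # one line bipartitions the hidden states

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
outputs = [xor_mlp(np.array(x)) for x in inputs]
print(outputs)  # → [0, 1, 1, 0], the XOR truth table
```

A trained MLP arrives at an equivalent separation by gradient descent; the fixed weights here simply make the bipartition explicit.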

Keywords: Neural networks, Multilayer perceptron, Neural learning, Graphical illustration
