New Phys.: Sae Mulli 2021; 71: 599-604
Published online July 30, 2021 https://doi.org/10.3938/NPSM.71.599
Copyright © New Physics: Sae Mulli.
Minsu KWON1, Yongseok OH1*, Young-Ho SONG2
1Department of Physics, Kyungpook National University, Daegu 41566, Korea
2Rare Isotope Science Project, Institute for Basic Science, Daejeon, Korea
Correspondence to: yohphy@knu.ac.kr
The regression process of machine learning is applied to investigate the pattern of alpha decay half-lives of heavy nuclei. By making use of the available experimental data for 164 nuclides, we scrutinize the predictive power of machine learning in the study of nuclear alpha decays within two approaches. In Model (I), neural networks are trained directly on the experimental half-lives of nuclear alpha decays, while in Model (II) they are trained on the gap between the experimental data and the predictions of the Viola-Seaborg formula as a theoretical model. The purpose of Model (I) is to verify the applicability of machine learning to nuclear alpha decays, and the motivation of Model (II) is to apply the technique to estimate the uncertainties in the predictions of theoretical models. Our results show that room exists for improving the predictions of empirical models by using machine learning techniques. We also present predictions on unmeasured nuclear alpha decays.
Keywords: Nuclear alpha decays, Machine learning
The research on nuclear α decay has a long history in nuclear physics.
In the present work, we adopt a different approach to investigate the nuclear α-decay half-lives, namely, machine learning (ML) with artificial neural networks.
Many existing applications of ML to nuclear physics have been reported in the literature.
This paper is organized as follows. In the next section, we briefly introduce the concepts of machine learning and construct our models. In Sec. III, the results from machine learning are compared with available data, and predictions for unobserved nuclear α decays are presented. Section IV contains a summary.
Though artificial intelligence (AI) has been developed in various ways since the 1950s, when modern computers were developed, the recent popularity of ML stems mostly from the success of artificial neural networks (ANN). An ANN can be understood as a mapping function from inputs to outputs built from layers of neurons,

x^(l+1) = σ( W^(l) x^(l) + b^(l) ),

where x^(l) is the vector of neuron values in the l-th layer, W^(l) and b^(l) are the weights and biases adjusted during training, and σ is a nonlinear activation function.
In this work, we adopt the Rectified Linear Unit (ReLU) function [21] as the activation function for hidden layers, which is defined as

ReLU(x) = max(0, x).

ReLU is widely used to avoid the vanishing gradient problem in training. We adopt the Adam optimizer [22], an algorithm for first-order gradient-based optimization of stochastic objective functions. More explanations of the structure of the ANN and the loss function are given in the next section. All numerical calculations in this work are done by using the TensorFlow library [20]. The details of the TensorFlow algorithm can be found elsewhere and will not be repeated here.
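As a minimal illustration of the ANN structure described above, the following sketch implements ReLU and a forward pass through a small fully connected network in plain Python. The layer sizes and weights here are toy values, not those of Models (I) or (II).

```python
def relu(x):
    """ReLU(x) = max(0, x), applied elementwise."""
    return [max(0.0, v) for v in x]

def dense(x, weights, biases):
    """One fully connected layer: y_j = sum_i W[j][i] * x[i] + b[j]."""
    return [sum(w * v for w, v in zip(row, x)) + b
            for row, b in zip(weights, biases)]

def forward(x, layers):
    """Apply ReLU after each hidden layer; the output layer is linear."""
    for k, (W, b) in enumerate(layers):
        x = dense(x, W, b)
        if k < len(layers) - 1:
            x = relu(x)
    return x

# Toy network: one hidden layer of two neurons, one linear output.
layers = [
    ([[1.0, -1.0], [-1.0, 1.0]], [0.0, 0.0]),  # hidden layer
    ([[1.0, 1.0]], [0.0]),                     # linear output
]
print(forward([2.0, 1.0], layers))  # -> [1.0]: ReLU zeroes the negative pre-activation
```

In a real training run the weights and biases would be adjusted by the Adam optimizer to minimize the loss; here they are fixed by hand purely to show the data flow.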
It is well known that the half-lives of nuclear α decays are strongly correlated with the decay Q values, as summarized by the Geiger-Nuttall law,

log10 T1/2 = a Z / √Q + b,

where T1/2 is the half-life, Q is the α-decay energy, and Z is the proton number of the parent nucleus. A widely used refinement is the Viola-Seaborg (VS) formula,

log10 T1/2 = (a Z + b) / √Q + c Z + d,

which has also led to several derived versions [3,18]. Normally, to improve the predictive power of the VS formula, the parameters are determined separately for even-even, even-odd, odd-even, and odd-odd nuclei, which implies the mass-number dependence of the parameters. When the half-lives are given in units of seconds and the Q values in MeV, the fitted coefficients take the values listed in Table 1.
Table 1 Fitted coefficients of the VS formula. The values are from Ref. [3] and are given for even-even, even-odd, odd-even, and odd-odd nuclei.
Type | a | b | c | d |
---|---|---|---|---|
e-e | 1.48503 | 5.26806 | -0.18879 | -33.89407 |
e-o | 1.55427 | 1.23165 | -0.18749 | -34.29805 |
o-e | 1.64654 | -3.14939 | -0.22053 | -32.74153 |
o-o | 1.34355 | 13.92103 | -0.12867 | -37.19944 |
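As a short sketch, the VS estimate with the Table 1 coefficients can be evaluated as follows. The functional form log10 T1/2 = (aZ + b)/√Q + cZ + d and the pairing of the four table columns with (a, b, c, d) are our reading of the standard parametrization; Z is the proton number of the parent nucleus, Q is in MeV, and T1/2 in seconds.

```python
import math

# Viola-Seaborg estimate, assuming the standard form
# log10 T1/2 = (a*Z + b)/sqrt(Q) + c*Z + d  (T1/2 in s, Q in MeV).
# The coefficients are the four rows of Table 1; mapping the table
# columns to (a, b, c, d) in this order is an assumption.
VS_COEFF = {
    "e-e": (1.48503, 5.26806, -0.18879, -33.89407),
    "e-o": (1.55427, 1.23165, -0.18749, -34.29805),
    "o-e": (1.64654, -3.14939, -0.22053, -32.74153),
    "o-o": (1.34355, 13.92103, -0.12867, -37.19944),
}

def log10_half_life_vs(Z, Q, parity="e-e"):
    """Return log10 of the alpha-decay half-life in seconds."""
    a, b, c, d = VS_COEFF[parity]
    return (a * Z + b) / math.sqrt(Q) + c * Z + d

# Even-even example with Z = 118 and Q = 11.81 MeV gives about -3.65.
print(log10_half_life_vs(118, 11.81))
```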
In the present work, we make use of the ANN in two different ways. In the first approach, which is an unbiased approach without theoretical guidance, the ANN is trained directly on the measured half-lives of nuclear α decays; we refer to this as Model (I).
On the other hand, it would be interesting to see whether a machine learning algorithm can fill the gap between experimental data and theoretical model predictions. Therefore, in this approach the ANN is trained on the difference between the experimental data and the predictions of the VS formula, which defines our Model (II).
Though it is desirable to survey the model space of the ANN with various numbers of hidden layers or neurons, to simplify the analysis we chose to use a fixed number of hidden layers and neurons. The ANN for Model (I) has 3 hidden layers with 8, 9, and 8 neurons, respectively, and the ANN for Model (II) has 3 hidden layers with 7, 5, and 8 neurons, respectively. We use the experimental data for the α-decay half-lives of 164 nuclides.
With 164 data points in hand, we randomly separate the data into a training set and a test set with the ratio of 80:20. The training set is used to train the ANN, and the test set is used to check the predictive power or credibility of the process. Furthermore, part of the training set is randomly selected for validation tests. To avoid overfitting, we adopt early stopping, in which the training process stops when the validation error starts to increase. In order to estimate the accuracy of the calculation, we use the mean square error (MSE) defined as

MSE = (1/N) Σ_i [ log10 T1/2^exp(i) − log10 T1/2^pred(i) ]²,

where N is the number of data points and T1/2^exp(i) and T1/2^pred(i) are the measured and predicted half-lives of the i-th nuclide, respectively.
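The data handling described here, an 80:20 train/test split and the MSE over log10 half-lives, can be sketched in a few lines of plain Python. The data array and the random seed below are placeholders, not the actual 164-nuclide set.

```python
import random

def train_test_split(data, test_fraction=0.2, seed=0):
    """Randomly shuffle and split the data into (train, test) sets."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]

def mse(predicted, observed):
    """Mean square error over log10 T1/2 values."""
    return sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(observed)

data = list(range(10))              # placeholder for the 164 nuclide entries
train, test = train_test_split(data)
print(len(train), len(test))        # -> 8 2
print(mse([1.0, 2.0], [1.5, 1.5]))  # -> 0.25
```

In the actual calculation the early-stopping criterion additionally monitors the validation error after each training epoch and halts once it starts to rise.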
Table 2 shows the MSEs of our models. The small MSEs and the similarity between training set and test set indicate that the ANNs are well trained. Slightly larger MSEs for the test sets are understandable as they are not used for training. The unbiased ANN of Model (I) achieves accuracy comparable to the phenomenological VS formula. Our results also show that the MSE of Model (II) is smaller than that of Model (I), so that the accuracy is improved by about 15%. In other words, training the ANN with a theoretical guide is better than the naive approach. However, the improvement is not so impressive compared to the VS formula. This observation is in agreement with that of Ref. [16]. There may be several explanations for this observation. Probably the most significant factor would be the limited amount of available experimental data in this mass region.
Table 2 Obtained mean square errors.
 | Model (I) | VS formula | Model (II) |
---|---|---|---|
Training Set | 0.337 | 0.265 | 0.258 |
Test Set | 0.368 | 0.370 | 0.355 |
In Table 3 we compare our calculations with several observed data among the 164 nuclides used in the present work. For this calculation we use the central values of the measured Q values.
Table 3 Observed α-decay half-lives compared with model predictions for nuclides (Z, A). The Q values are in MeV, and the half-lives are given as log10 T1/2 with T1/2 in seconds.
(Z, A) | Q (MeV) | Exp. | Model (I) | VS | Model (II) |
---|---|---|---|---|---|
(118, 294)* | 11.81 | -2.8539 | -2.9245 | -3.6475 | -3.6192 |
(116, 293)* | 10.68 | -1.0969 | -0.5067 | -0.8421 | -0.8617 |
(116, 292)* | 10.77 | -1.6198 | -0.7931 | -1.7074 | -1.7048 |
(116, 291) | 10.89 | -1.5528 | -1.1299 | -1.3990 | -1.3728 |
(116, 290) | 10.00 | -2.0969 | -1.4300 | -2.2416 | -2.2095 |
(115, 288)* | 10.63 | -0.7212 | -0.7972 | -0.3369 | -0.3114 |
(115, 287)* | 10.74 | -0.9208 | -1.1202 | -1.0450 | -1.0104 |
(114, 289)* | 9.97 | 0.3802 | 0.7361 | 0.5676 | 0.5021 |
(114, 288)* | 10.07 | -0.1249 | 0.4314 | -0.4126 | -0.4553 |
(114, 287)* | 10.16 | -0.2840 | 0.1587 | 0.0185 | -0.0025 |
(114, 286)* | 10.37 | -0.4559 | -0.3930 | -1.2087 | -1.1998 |
(113, 284)* | 10.11 | -0.02548 | 0.0065 | 0.3821 | 0.3743 |
(113, 283)* | 10.26 | -0.9914 | -0.4149 | -0.3823 | -0.3640 |
(113, 282) | 10.78 | -1.1549 | -1.6643 | -1.2586 | -1.2196 |
(112, 285)* | 9.32 | 1.5051 | 2.0813 | 1.9338 | 1.8264 |
(112, 283)* | 9.67 | 0.5798 | 0.9233 | 0.8494 | 0.7970 |
(111, 280)* | 9.89 | 0.5478 | 0.1308 | 0.3641 | 0.3425 |
(111, 279)* | 10.52 | -0.7696 | -1.3793 | -1.6371 | -1.6015 |
(111, 278)* | 10.85 | -2.3768 | -2.2056 | -1.9802 | -1.9374 |
(110, 279)* | 9.84 | 0.3010 | 0.1283 | -0.2651 | -0.3057 |
(109, 276)* | 9.814 | -0.14267 | -0.0650 | -0.0229 | -0.0492 |
(109, 275) | 10.48 | -2.0132 | -1.6690 | -2.1184 | -2.0848 |
(109, 274)* | 10.04 | -0.3526 | -0.7340 | -0.6128 | -0.5919 |
(108, 275)* | 9.44 | -0.5376 | 0.6992 | 0.2937 | 0.2275 |
(107, 272) | 9.14 | 0.9128 | 1.2396 | 1.1891 | 1.1196 |
(107, 270)* | 9.06 | 1.7782 | 1.2923 | 1.4188 | 1.3762 |
(106, 271)* | 8.66 | 2.2122 | 2.4594 | 2.1209 | 2.0043 |
In this work, we have applied machine learning techniques to investigate the α decays of heavy nuclei. In Table 4 we present our predictions for the α-decay half-lives of unobserved superheavy nuclei.
Table 4 Predictions on the decay lifetimes for unobserved superheavy elements in units of seconds. We refer to Ref. [4] for details on the adopted Q values.
(Z, A) | Q (MeV) | Model (I) | Model (II) | | | |
---|---|---|---|---|---|---|
(122, 307) | 12.289 | 4.340 × 10-4 | 4.514 × 10-4 | 3.194 × 10-4 | 1.257 × 10-3 | 5.964 × 10-4 |
(122, 306) | 12.420 | 2.517 × 10-4 | 2.688 × 10-4 | 1.891 × 10-4 | 5.348 × 10-4 | 9.887 × 10-5 |
(122, 305) | 12.550 | 1.402 × 10-4 | 1.539 × 10-4 | 1.073 × 10-4 | 2.288 × 10-4 | 1.539 × 10-4 |
(122, 304) | 12.679 | 7.919 × 10-5 | 8.911 × 10-5 | 6.193 × 10-5 | 9.840 × 10-5 | 2.839 × 10-5 |
(122, 303) | 12.807 | 4.646 × 10-5 | 5.237 × 10-5 | 3.593 × 10-5 | 4.254 × 10-5 | 4.223 × 10-5 |
(122, 302) | 12.935 | 2.646 × 10-5 | 3.000 × 10-5 | 2.099 × 10-5 | 1.839 × 10-5 | 8.585 × 10-6 |
(121, 306) | 11.853 | 2.104 × 10-3 | 2.175 × 10-3 | 1.509 × 10-3 | 9.493 × 10-3 | 3.180 × 10-2 |
(121, 305) | 11.985 | 1.143 × 10-3 | 1.212 × 10-3 | 8.467 × 10-4 | 4.018 × 10-3 | 4.049 × 10-3 |
(121, 304) | 12.117 | 6.082 × 10-4 | 6.787 × 10-4 | 4.700 × 10-4 | 1.701 × 10-3 | 8.985 × 10-3 |
(121, 303) | 12.248 | 3.317 × 10-4 | 3.794 × 10-4 | 2.593 × 10-4 | 7.239 × 10-4 | 1.043 × 10-3 |
(121, 302) | 12.378 | 1.834 × 10-4 | 2.093 × 10-4 | 1.439 × 10-4 | 3.097 × 10-4 | 2.614 × 10-3 |
(121, 301) | 12.508 | 1.027 × 10-4 | 1.169 × 10-4 | 8.201 × 10-5 | 1.325 × 10-4 | 2.848 × 10-4 |
(120, 304) | 11.546 | 5.792 × 10-3 | 6.146 × 10-3 | 4.349 × 10-3 | 3.083 × 10-2 | 2.715 × 10-3 |
(120, 303) | 11.679 | 2.987 × 10-3 | 3.331 × 10-3 | 2.289 × 10-3 | 1.298 × 10-2 | 5.037 × 10-3 |
(120, 302) | 11.812 | 1.561 × 10-3 | 1.761 × 10-3 | 1.217 × 10-3 | 5.467 × 10-3 | 7.189 × 10-4 |
(120, 301) | 11.944 | 8.288 × 10-4 | 9.395 × 10-4 | 6.575 × 10-4 | 2.314 × 10-3 | 1.196 × 10-3 |
(120, 300) | 12.076 | 4.465 × 10-4 | 5.053 × 10-4 | 3.520 × 10-4 | 9.797 × 10-4 | 1.867 × 10-4 |
(120, 299) | 12.207 | 2.436 × 10-4 | 2.817 × 10-4 | 1.957 × 10-4 | 4.169 × 10-4 | 2.948 × 10-4 |
(119, 298) | 11.772 | 1.131 × 10-3 | 1.322 × 10-3 | 8.986 × 10-4 | 3.132 × 10-3 | 1.480 × 10-2 |
(119, 297) | 11.904 | 5.932 × 10-4 | 1.610 × 10-3 | 4.795 × 10-4 | 1.326 × 10-3 | 1.884 × 10-3 |
(119, 296) | 12.036 | 3.147 × 10-4 | 3.587 × 10-4 | 2.593 × 10-4 | 5.613 × 10-4 | 4.102 × 10-3 |
(119, 295) | 12.167 | 1.643 × 10-4 | 1.913 × 10-4 | 1.405 × 10-4 | 2.388 × 10-4 | 4.898 × 10-4 |
(119, 294) | 12.297 | 8.668 × 10-5 | 1.044 × 10-4 | 7.549 × 10-5 | 1.022 × 10-4 | 1.201 × 10-3 |
(119, 293) | 12.427 | 4.775 × 10-5 | 5.767 × 10-5 | 4.168 × 10-5 | 4.371 × 10-5 | 1.349 × 10-4 |
(118, 298) | 11.197 | 1.206 × 10-2 | 1.373 × 10-2 | 9.535 × 10-3 | 5.798 × 10-2 | 5.871 × 10-3 |
(118, 297) | 11.332 | 5.977 × 10-3 | 7.008 × 10-3 | 4.774 × 10-3 | 2.416 × 10-2 | 1.095 × 10-2 |
(118, 296) | 11.466 | 3.013 × 10-3 | 3.481 × 10-3 | 2.423 × 10-3 | 1.012 × 10-2 | 1.451 × 10-3 |
(118, 295) | 11.600 | 1.500 × 10-3 | 1.762 × 10-3 | 1.244 × 10-3 | 4.239 × 10-3 | 2.426 × 10-3 |
(118, 294) | 11.733 | 7.515 × 10-4 | 9.050 × 10-4 | 6.387 × 10-4 | 1.785 × 10-3 | 3.572 × 10-4 |
(118, 293) | 11.865 | 3.832 × 10-4 | 4.644 × 10-4 | 3.289 × 10-4 | 7.556 × 10-4 | 5.701 × 10-4 |
(117, 298) | 10.920 | 1.678 × 10-1 | 1.916 × 10-1 | 1.311 × 10-1 | 2.234 × 10-1 | 2.993 × 10-1 |
(117, 297) | 10.749 | 7.769 × 10-2 | 9.001 × 10-2 | 6.129 × 10-2 | 4.665 × 10-1 | 2.857 × 10-1 |
(117, 296) | 10.886 | 3.620 × 10-2 | 4.330 × 10-2 | 2.903 × 10-2 | 1.924 × 10-1 | 3.861 × 10-1 |
(117, 295) | 11.023 | 1.735 × 10-2 | 2.035 × 10-2 | 1.396 × 10-2 | 7.931 × 10-2 | 6.419 × 10-2 |
(117, 294) | 11.158 | 8.146 × 10-3 | 9.736 × 10-3 | 6.779 × 10-3 | 3.304 × 10-2 | 1.001 × 10-1 |
(117, 293) | 11.293 | 3.885 × 10-3 | 4.752 × 10-3 | 3.244 × 10-3 | 1.377 × 10-2 | 1.484 × 10-2 |
In summary, we confirm that machine learning can give a global description of the nuclear α-decay half-lives of heavy nuclei.
The work of M.K. and Y.O. was supported by the National Research Foundation of Korea (NRF) under Grants No. NRF-2020R1A2C1007597 and No. NRF-2018R1A6A1A06024970 (Basic Science Research Program). The work of Y.-H.S. was supported by the Rare Isotope Science Project of the Institute for Basic Science, funded by the Ministry of Science and ICT (MSICT) and by the NRF of Korea (2013M7A1A1075764), and by the National Supercomputing Center with supercomputing resources including technical support (KSC-2020-CRE-0027).
2 These inputs are scaled to be in the range of (0,1) in our Model (I) calculations.