pISSN 0374-4914 eISSN 2289-0041

## Research Paper

New Phys.: Sae Mulli 2021; 71: 599-604

Published online July 30, 2021 https://doi.org/10.3938/NPSM.71.599

## Machine Learning Study on Nuclear $\alpha$ Decays

Minsu KWON1, Yongseok OH1*, Young-Ho SONG2

1Department of Physics, Kyungpook National University, Daegu 41566, Korea

2Rare Isotope Science Project, Institute for Basic Science, Daejeon 34047, Korea

Correspondence to: yohphy@knu.ac.kr

Received: January 7, 2021; Revised: April 23, 2021; Accepted: May 19, 2021

The regression process of machine learning is applied to investigate the pattern of alpha decay half-lives of heavy nuclei. Making use of the available experimental data for 164 nuclides, we scrutinize the predictive power of machine learning in the study of nuclear alpha decays within two approaches. In Model (I), neural networks are trained directly on the experimental half-lives of nuclear alpha decays, while in Model (II) they are trained on the gap between the experimental data and the predictions of the Viola-Seaborg formula as a theoretical model. The purpose of Model (I) is to verify the applicability of machine learning to nuclear alpha decays, and the motivation of Model (II) is to apply the technique to estimating the uncertainties in the predictions of theoretical models. Our results show that there is room for improving the predictions of empirical models by using machine learning techniques. We also present predictions for unmeasured nuclear alpha decays.

Keywords: Nuclear alpha decays, Machine learning

### I. INTRODUCTION

The research on nuclear α decay has a long history, and α decay has been one of the most important tools for studying nuclear forces and nuclear structure [1]. Even today, its role cannot be overemphasized in the investigation of nuclear properties, in particular in identifying new heavy nuclides. The most widely used theoretical models are based on the effective potential felt by the preformed α particle inside the nucleus. Once the potential form is determined, the half-life is calculated using the WKB approximation [2]. (See also, for example, Refs. [3,4].)

In the present work, we adopt a different approach to investigating nuclear α decay. Namely, we make use of recently developed machine learning (ML) methods to predict the half-lives of nuclear α decays. Nowadays, machine learning techniques are widely applied to various fields of physics [5]. In nuclear physics, to our knowledge, the first application of ML techniques was to nuclear mass systematics [6-8]. This idea has been further developed for various nuclear physics problems, for example, nuclear masses [9-12], deuteron properties [13], extrapolation of ab initio calculations such as the no-core shell model [12,14,15], nuclear α decays [16-18], and nuclear β decays [19].

Many existing applications of ML to nuclear α decay use the artificial neural network (ANN) method. In Refs. [17,18], an ANN is trained to predict the Q value (released energy) of the α-decay channel of nuclei. The decay rates are then obtained by using the WKB approximation with a semi-classical effective potential [17] or modified empirical α-decay formulas [18]. On the other hand, an ANN is trained directly on experimental α-decay half-lives in Ref. [16]. In the present exploratory study, taking a strategy similar to that of Ref. [16], we apply the ANN method to α-decay half-lives in two different approaches. One is an unbiased approach, where we directly use experimental data to train the machine learning process and make predictions for the test data. This approach is Model (I) in the present work. The other is a theoretically biased approach, where we rely on phenomenological formulas for a global understanding of nuclear α decays and the gap with the experimental data is tamed by machine learning. This approach, named Model (II), can test the impact of machine learning when combined with theoretical models.

This paper is organized as follows. In the next section, we briefly introduce the concepts of machine learning and construct our models. In Sec. III, the results from machine learning are compared with available data, and predictions for unobserved nuclear α decays are presented. Section IV contains a summary and conclusions.

### 1. Machine Learning

Though artificial intelligence (AI) has developed in various ways since the 1950s, when modern computers were developed, the recent popularity of ML stems mostly from the success of ANNs. An ANN can be understood as a mapping function $y = f(x; \theta)$ with parameters $\theta$. An ANN consists of an input layer, an output layer, and hidden layers, where each layer contains a number of "neurons." The output of the $j$-th neuron in a layer, $h_j$, can be expressed as

$$ h_j = \sigma\left( b_j + \sum_i \omega_{ji} x_i \right), $$

where $x_i$ is the output of the $i$-th neuron of the previous layer, $\theta_j = (\omega_{ji}, b_j)$ are the parameters of the neuron, and $\sigma$ is a non-linear activation function. With a large number of hidden layers of neurons, an ANN can cover a wide range of function space. The advantage of an ANN over conventional fitting methods lies in its flexible model space and its efficient training algorithms, which adjust the parameters to minimize the "loss" function. The loss function quantifies the difference between the training data and the corresponding outputs of the ANN.
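The layer output above can be sketched in a few lines of NumPy (a hypothetical illustration with arbitrary weights and activation, not the code used in this work):

```python
import numpy as np

def dense_layer(x, W, b, sigma):
    """Output of one layer: h_j = sigma(b_j + sum_i W[j, i] * x[i])."""
    return sigma(b + W @ x)

# Hypothetical numbers: 2 inputs feeding a layer of 3 neurons.
x = np.array([0.5, -1.0])             # outputs x_i of the previous layer
W = np.array([[0.2, 0.4],
              [-0.3, 0.1],
              [0.5, 0.5]])            # weights omega_ji
b = np.array([0.1, 0.0, -0.2])        # biases b_j
h = dense_layer(x, W, b, np.tanh)     # any non-linear activation sigma
```

Stacking such layers, with the output of one serving as the input of the next, gives the full network $f(x;\theta)$.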

In this work, we adopt the Rectified Linear Unit (ReLU) function [21] as the activation function, which is defined as

$$ \mathrm{ReLU}(x) = \begin{cases} x & \text{if } x > 0, \\ 0 & \text{if } x \le 0, \end{cases} $$

for the hidden layers. ReLU is widely used to avoid the vanishing-gradient problem in training. We adopt the Adam optimizer [22], an algorithm for first-order gradient-based optimization of stochastic objective functions. More explanations of the structure of the ANN and the loss function are given in the next section. All numerical calculations in this work are done using the TensorFlow library [20]. The details of the TensorFlow algorithm can be found elsewhere and will not be repeated here.
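The ReLU activation and the gradient property that motivates it can be written out directly (a NumPy sketch, not the TensorFlow implementation used in this work):

```python
import numpy as np

def relu(x):
    """ReLU(x) = x for x > 0 and 0 otherwise."""
    return np.maximum(x, 0.0)

def relu_grad(x):
    """Derivative of ReLU: 1 for x > 0, 0 for x < 0.  The slope stays
    equal to 1 for positive inputs, which is why ReLU mitigates the
    vanishing-gradient problem of saturating activations."""
    return (x > 0).astype(float)

x = np.array([-2.0, -0.5, 0.5, 2.0])
y = relu(x)        # [0., 0., 0.5, 2.]
g = relu_grad(x)   # [0., 0., 1., 1.]
```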

### 2. Models for nuclear α decays

It is well known that the half-lives of nuclear α decays depend heavily on the Q value of the decay. Therefore, a successful description of nuclear α decays requires accurate information on the nuclear α potential and a reasonable reproduction of nuclear masses. The half-lives of nuclear α decays were found to obey a simple relation known as the Geiger-Nuttall law [23], which describes the half-life $T_{1/2}$ of a nuclear α decay as

$$ \log_{10} T_{1/2} = \frac{a Z}{\sqrt{Q_\alpha}} + b, $$

where $Z$ is the atomic number and $Q_\alpha$ is the Q value of the α decay. The constants $a$ and $b$ are fitted to experimental data. This phenomenological formula was improved by the Viola-Seaborg (VS) empirical formula [24], which is widely used to estimate α-decay lifetimes. This formula is written as

$$ \log_{10} T_{1/2} = \frac{a Z + b}{\sqrt{Q_\alpha}} + c Z + d, $$

which has also spawned several derived versions [3,18]. Normally, to improve the predictive power of the VS formula, the parameters are determined separately for even-even, even-odd, odd-even, and odd-odd nuclei, which implies a mass-number dependence of the parameters. When the half-lives are given in units of seconds and $Q_\alpha$ in units of MeV, the parameters obtained by this procedure are as given in Table 1. We refer to Ref. [3] for details.

Table 1. Fitted coefficients of the VS formula. The values are from Ref. [3] and are given for even-even (e-e), even-odd (e-o), odd-even (o-e), and odd-odd (o-o) nuclei.

| Type | $a$ | $b$ | $c$ | $d$ |
|------|---------|----------|----------|-----------|
| e-e | 1.48503 | 5.26806 | -0.18879 | -33.89407 |
| e-o | 1.55427 | 1.23165 | -0.18749 | -34.29805 |
| o-e | 1.64654 | -3.14939 | -0.22053 | -32.74153 |
| o-o | 1.34355 | 13.92103 | -0.12867 | -37.19944 |
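As a quick consistency check of the formula and its coefficients (a hypothetical sketch, not the authors' code), the even-even parameters of Table 1 reproduce the VS value quoted in Table 3 for the nucleus (Z, A) = (118, 294) with Qα = 11.81 MeV:

```python
import math

# Viola-Seaborg coefficients for even-even nuclei (Table 1).
a, b, c, d = 1.48503, 5.26806, -0.18879, -33.89407

def log10_half_life_vs(Z, Q_alpha):
    """log10 T_1/2 = (a Z + b)/sqrt(Q_alpha) + c Z + d, T_1/2 in s, Q in MeV."""
    return (a * Z + b) / math.sqrt(Q_alpha) + c * Z + d

# Even-even nucleus (Z, A) = (118, 294) with Q_alpha = 11.81 MeV:
log_t = log10_half_life_vs(118, 11.81)   # about -3.65, cf. Table 3
```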

In the present work, we make use of the ANN in two different ways. The first is an unbiased approach, where we directly apply machine learning to nuclear α decays. The inputs are the atomic number $Z$, the mass number $A$, and the $Q_\alpha$ value of a nucleus.² The experimental data for $\log_{10} T_{1/2}^{\rm Expt}$ are then used to train the ANN. Therefore, this approach does not include any physical intuition or prejudice and, as a result, it is unbiased with respect to any theoretical model of nuclear α decay. This is our Model (I). In other words, we use the mean-squared-error (MSE) loss function defined as $L(\theta) = \frac{1}{N_T} \sum_{i=1}^{N_T} \left[ y_i - f(x_i; \theta) \right]^2$ with $y_i = \log_{10} T_{1/2}^{(i)}$ for the $i$-th training datum, where $N_T$ is the total number of training data.

On the other hand, it would be interesting to see whether a machine learning algorithm can fill the gap between experimental data and theoretical model predictions. This approach is therefore biased toward a particular theoretical or phenomenological model. In the present work, we adopt the VS formula as the reference theoretical model, and the ANN is trained on the difference between the experimental data and the theoretical model predictions, while the inputs are $Z$, $A$, and $Q_\alpha$ as before. This constitutes our Model (II).

Though it would be desirable to survey the model space of ANNs with various numbers of hidden layers and neurons, to simplify the analysis we chose to use fixed numbers of hidden layers and neurons. The ANN for Model (I) has 3 hidden layers with 8, 9, and 8 neurons, respectively, and the ANN for Model (II) has 3 hidden layers with 7, 5, and 8 neurons, respectively. We use the data for the α decay half-lives of 164 nuclei compiled in Refs. [25,26].
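The shape of the Model (I) network can be sketched as a plain NumPy forward pass (randomly initialized weights, for illustrating shapes and parameter counts only; the actual training is done with TensorFlow, and the input scaling below is a crude assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer widths of Model (I): 3 inputs (Z, A, Q_alpha), hidden layers of
# 8, 9, and 8 neurons, and one output, log10(T_1/2).
sizes = [3, 8, 9, 8, 1]

# Randomly initialized parameters, for illustrating shapes only.
weights = [rng.normal(size=(m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]

def forward(x):
    """Forward pass: ReLU on the hidden layers, linear output layer."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(W @ h + b, 0.0)
    return weights[-1] @ h + biases[-1]

# Crude scaling of (Z, A, Q_alpha) into (0, 1); the paper specifies only
# the target range, not the scaling constants.
x = np.array([118.0, 294.0, 11.81]) / np.array([130.0, 320.0, 15.0])
y = forward(x)
n_params = sum(W.size for W in weights) + sum(b.size for b in biases)
```

With these widths the network has only 202 trainable parameters, a deliberately small model space given the 164 available data points.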

### III. RESULTS

With 164 data points in hand, we randomly separate the data into a training set and a test set in the ratio 80:20. The training set is used to train the ANN, and the test set is used to check the predictive power, or credibility, of the process. Furthermore, part of the training set is randomly selected for validation tests. To avoid overfitting, we adopt early stopping, in which the training process stops when the validation error starts to increase. To estimate the accuracy of the calculation, we use the mean square error defined as

$$ \mathrm{MSE} = \frac{1}{N} \sum_{i=1}^{N} \left( \log_{10} \frac{T_{1/2}^{\rm Data}}{T_{1/2}^{\rm Cal}} \right)^2, $$

where $T_{1/2}^{\rm Cal}$ are the values calculated by phenomenological models or by machine learning.
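This error measure can be written down directly (the half-life values below are hypothetical, not taken from the data set):

```python
import numpy as np

def mse_log_half_life(t_data, t_cal):
    """MSE = (1/N) * sum_i [log10(T_data_i / T_cal_i)]^2."""
    ratio = np.asarray(t_data) / np.asarray(t_cal)
    return np.mean(np.log10(ratio) ** 2)

# Hypothetical half-lives in seconds:
t_data = [1.0e-3, 2.0e-2]
t_cal = [1.0e-2, 2.0e-2]   # first value off by one order of magnitude
err = mse_log_half_life(t_data, t_cal)   # (1**2 + 0**2) / 2 = 0.5
```

An MSE of order 0.3, as in Table 2, thus corresponds to a typical spread of roughly half an order of magnitude in the predicted half-lives.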

Table 2 shows the MSEs of our models. The small MSEs and the similarity between the training-set and test-set values indicate that the ANNs are well trained. The slightly larger MSEs for the test sets are understandable, as these data are not used for training. The unbiased ANN of Model (I) achieves accuracy comparable to that of the phenomenological VS formula. Our results also show that the MSE of Model (II) is smaller than that of Model (I), so that the accuracy is improved by about 15%. In other words, training the ANN with a theoretical guide is better than the naive approach. However, the improvement over the VS formula is not so impressive. This observation agrees with that of Ref. [16]. There may be several explanations for this. Probably the most significant factor is the limited amount of available experimental data on α decays. Unlike the nuclear-mass case, for which on the order of a thousand data points are available, the available data sets for α decay number only a few hundred.
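The 80:20 partition with a validation hold-out described at the beginning of this section can be sketched as follows (the 20% validation fraction inside the training set is an assumption for illustration; the text does not specify it):

```python
import numpy as np

rng = np.random.default_rng(42)

n_data = 164
indices = rng.permutation(n_data)

# 80:20 split between training and test sets.
n_train = int(0.8 * n_data)                       # 131 nuclides
train_idx, test_idx = indices[:n_train], indices[n_train:]

# Hold out part of the training set for validation (early stopping);
# the 20% fraction here is an assumed, illustrative choice.
n_val = int(0.2 * n_train)
val_idx, fit_idx = train_idx[:n_val], train_idx[n_val:]
```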

Table 2. Obtained mean square errors.

| | Model (I) | VS formula | Model (II) |
|--------------|-------|-------|-------|
| Training Set | 0.337 | 0.265 | 0.258 |
| Test Set | 0.368 | 0.370 | 0.355 |

In Table 3 we compare our calculations with several observed data among the 164 nuclides used in the present work. For this calculation we use the central values of the measured $Q_\alpha$ values. For comparison, the results of the VS formula are also presented. Our results show that the overall agreement is improved for Model (II) compared to Model (I), although not impressively. Again, this is partly due to the small number of samples other than the even-even nuclei. The number of data points is not large enough to expect a substantial improvement. We also observe that for several nuclides the gaps with respect to the experimental data are larger than for the other nuclei. This may indicate effects of nuclear structure that cannot easily be captured by machine learning. Nevertheless, our results show that the machine learning algorithm can give a reasonable description of the observed data as a whole.

Table 3. Observed α decay half-lives of heavy nuclei and the results of the present work. The half-life $T_{1/2}$ is in units of seconds. The experimental data are from Refs. [25,26].

((Z, A)*: nuclides belonging to the training set.)

| (Z, A) | $Q_\alpha^{\rm Expt}$ (MeV) | $\log_{10}T_{1/2}^{\rm Expt}$ | $\log_{10}T_{1/2}^{\rm Model\,(I)}$ | $\log_{10}T_{1/2}^{\rm VS}$ | $\log_{10}T_{1/2}^{\rm Model\,(II)}$ |
|------------|--------|----------|---------|---------|---------|
| (118, 294)* | 11.81 | -2.8539 | -2.9245 | -3.6475 | -3.6192 |
| (116, 293)* | 10.68 | -1.0969 | -0.5067 | -0.8421 | -0.8617 |
| (116, 292)* | 10.77 | -1.6198 | -0.7931 | -1.7074 | -1.7048 |
| (116, 291) | 10.89 | -1.5528 | -1.1299 | -1.3990 | -1.3728 |
| (116, 290) | 10.00 | -2.0969 | -1.4300 | -2.2416 | -2.2095 |
| (115, 288)* | 10.63 | -0.7212 | -0.7972 | -0.3369 | -0.3114 |
| (115, 287)* | 10.74 | -0.9208 | -1.1202 | -1.0450 | -1.0104 |
| (114, 289)* | 9.97 | 0.3802 | 0.7361 | 0.5676 | 0.5021 |
| (114, 288)* | 10.07 | -0.1249 | 0.4314 | -0.4126 | -0.4553 |
| (114, 287)* | 10.16 | -0.2840 | 0.1587 | 0.0185 | -0.0025 |
| (114, 286)* | 10.37 | -0.4559 | -0.3930 | -1.2087 | -1.1998 |
| (113, 284)* | 10.11 | -0.02548 | 0.0065 | 0.3821 | 0.3743 |
| (113, 283)* | 10.26 | -0.9914 | -0.4149 | -0.3823 | -0.3640 |
| (113, 282) | 10.78 | -1.1549 | -1.6643 | -1.2586 | -1.2196 |
| (112, 285)* | 9.32 | 1.5051 | 2.0813 | 1.9338 | 1.8264 |
| (112, 283)* | 9.67 | 0.5798 | 0.9233 | 0.8494 | 0.7970 |
| (111, 280)* | 9.89 | 0.5478 | 0.1308 | 0.3641 | 0.3425 |
| (111, 279)* | 10.52 | -0.7696 | -1.3793 | -1.6371 | -1.6015 |
| (111, 278)* | 10.85 | -2.3768 | -2.2056 | -1.9802 | -1.9374 |
| (110, 279)* | 9.84 | 0.3010 | 0.1283 | -0.2651 | -0.3057 |
| (109, 276)* | 9.814 | -0.14267 | -0.0650 | -0.0229 | -0.0492 |
| (109, 275) | 10.48 | -2.0132 | -1.6690 | -2.1184 | -2.0848 |
| (109, 274)* | 10.04 | -0.3526 | -0.7340 | -0.6128 | -0.5919 |
| (108, 275)* | 9.44 | -0.5376 | 0.6992 | 0.2937 | 0.2275 |
| (107, 272) | 9.14 | 0.9128 | 1.2396 | 1.1891 | 1.1196 |
| (107, 270)* | 9.06 | 1.7782 | 1.2923 | 1.4188 | 1.3762 |
| (106, 271)* | 8.66 | 2.2122 | 2.4594 | 2.1209 | 2.0043 |

### IV. SUMMARY AND CONCLUSION

In this work, we have applied machine learning techniques to investigate nuclear α decays. To this end, we applied the widely used artificial neural network to 164 data points, 80% of which were used for training the ANN. We employed two approaches: one is an unbiased approach, and the other makes use of the empirical VS formula as a reference. Our results show that the theory-guided approach gives a better description of the data. However, the improvement over the empirical VS formula is not noticeable. We ascribe this partly to the limited number of data points, and it also indicates that the effects of nuclear structure may be important for some nuclides. Encouraged by these observations, we extend our study to make predictions for unobserved nuclear α decays. Our results are in fair agreement with the previous estimates reported in Ref. [4].

Table 4. Predictions of the decay lifetimes of unobserved superheavy elements, in units of seconds. We refer to Ref. [4] for details on $T_{1/2}^{\rm SLy4}$, $T_{1/2}^{\rm D1S}$, and $T_{1/2}^{\rm DDME2}$.

| (Z, A) | Q (MeV) | $T_{1/2}^{\rm SLy4}$ [4] | $T_{1/2}^{\rm D1S}$ [4] | $T_{1/2}^{\rm DDME2}$ [4] | Model (I) | Model (II) |
|------------|--------|--------------|--------------|--------------|--------------|--------------|
| (122, 307) | 12.289 | 4.340 × 10⁻⁴ | 4.514 × 10⁻⁴ | 3.194 × 10⁻⁴ | 1.257 × 10⁻³ | 5.964 × 10⁻⁴ |
| (122, 306) | 12.420 | 2.517 × 10⁻⁴ | 2.688 × 10⁻⁴ | 1.891 × 10⁻⁴ | 5.348 × 10⁻⁴ | 9.887 × 10⁻⁵ |
| (122, 305) | 12.550 | 1.402 × 10⁻⁴ | 1.539 × 10⁻⁴ | 1.073 × 10⁻⁴ | 2.288 × 10⁻⁴ | 1.539 × 10⁻⁴ |
| (122, 304) | 12.679 | 7.919 × 10⁻⁵ | 8.911 × 10⁻⁵ | 6.193 × 10⁻⁵ | 9.840 × 10⁻⁵ | 2.839 × 10⁻⁵ |
| (122, 303) | 12.807 | 4.646 × 10⁻⁵ | 5.237 × 10⁻⁵ | 3.593 × 10⁻⁵ | 4.254 × 10⁻⁵ | 4.223 × 10⁻⁵ |
| (122, 302) | 12.935 | 2.646 × 10⁻⁵ | 3.000 × 10⁻⁵ | 2.099 × 10⁻⁵ | 1.839 × 10⁻⁵ | 8.585 × 10⁻⁶ |
| (121, 306) | 11.853 | 2.104 × 10⁻³ | 2.175 × 10⁻³ | 1.509 × 10⁻³ | 9.493 × 10⁻³ | 3.180 × 10⁻² |
| (121, 305) | 11.985 | 1.143 × 10⁻³ | 1.212 × 10⁻³ | 8.467 × 10⁻⁴ | 4.018 × 10⁻³ | 4.049 × 10⁻³ |
| (121, 304) | 12.117 | 6.082 × 10⁻⁴ | 6.787 × 10⁻⁴ | 4.700 × 10⁻⁴ | 1.701 × 10⁻³ | 8.985 × 10⁻³ |
| (121, 303) | 12.248 | 3.317 × 10⁻⁴ | 3.794 × 10⁻⁴ | 2.593 × 10⁻⁴ | 7.239 × 10⁻⁴ | 1.043 × 10⁻³ |
| (121, 302) | 12.378 | 1.834 × 10⁻⁴ | 2.093 × 10⁻⁴ | 1.439 × 10⁻⁴ | 3.097 × 10⁻⁴ | 2.614 × 10⁻³ |
| (121, 301) | 12.508 | 1.027 × 10⁻⁴ | 1.169 × 10⁻⁴ | 8.201 × 10⁻⁵ | 1.325 × 10⁻⁴ | 2.848 × 10⁻⁴ |
| (120, 304) | 11.546 | 5.792 × 10⁻³ | 6.146 × 10⁻³ | 4.349 × 10⁻³ | 3.083 × 10⁻² | 2.715 × 10⁻³ |
| (120, 303) | 11.679 | 2.987 × 10⁻³ | 3.331 × 10⁻³ | 2.289 × 10⁻³ | 1.298 × 10⁻² | 5.037 × 10⁻³ |
| (120, 302) | 11.812 | 1.561 × 10⁻³ | 1.761 × 10⁻³ | 1.217 × 10⁻³ | 5.467 × 10⁻³ | 7.189 × 10⁻⁴ |
| (120, 301) | 11.944 | 8.288 × 10⁻⁴ | 9.395 × 10⁻⁴ | 6.575 × 10⁻⁴ | 2.314 × 10⁻³ | 1.196 × 10⁻³ |
| (120, 300) | 12.076 | 4.465 × 10⁻⁴ | 5.053 × 10⁻⁴ | 3.520 × 10⁻⁴ | 9.797 × 10⁻⁴ | 1.867 × 10⁻⁴ |
| (120, 299) | 12.207 | 2.436 × 10⁻⁴ | 2.817 × 10⁻⁴ | 1.957 × 10⁻⁴ | 4.169 × 10⁻⁴ | 2.948 × 10⁻⁴ |
| (119, 298) | 11.772 | 1.131 × 10⁻³ | 1.322 × 10⁻³ | 8.986 × 10⁻⁴ | 3.132 × 10⁻³ | 1.480 × 10⁻² |
| (119, 297) | 11.904 | 5.932 × 10⁻⁴ | 1.610 × 10⁻³ | 4.795 × 10⁻⁴ | 1.326 × 10⁻³ | 1.884 × 10⁻³ |
| (119, 296) | 12.036 | 3.147 × 10⁻⁴ | 3.587 × 10⁻⁴ | 2.593 × 10⁻⁴ | 5.613 × 10⁻⁴ | 4.102 × 10⁻³ |
| (119, 295) | 12.167 | 1.643 × 10⁻⁴ | 1.913 × 10⁻⁴ | 1.405 × 10⁻⁴ | 2.388 × 10⁻⁴ | 4.898 × 10⁻⁴ |
| (119, 294) | 12.297 | 8.668 × 10⁻⁵ | 1.044 × 10⁻⁴ | 7.549 × 10⁻⁵ | 1.022 × 10⁻⁴ | 1.201 × 10⁻³ |
| (119, 293) | 12.427 | 4.775 × 10⁻⁵ | 5.767 × 10⁻⁵ | 4.168 × 10⁻⁵ | 4.371 × 10⁻⁵ | 1.349 × 10⁻⁴ |
| (118, 298) | 11.197 | 1.206 × 10⁻² | 1.373 × 10⁻² | 9.535 × 10⁻³ | 5.798 × 10⁻² | 5.871 × 10⁻³ |
| (118, 297) | 11.332 | 5.977 × 10⁻³ | 7.008 × 10⁻³ | 4.774 × 10⁻³ | 2.416 × 10⁻² | 1.095 × 10⁻² |
| (118, 296) | 11.466 | 3.013 × 10⁻³ | 3.481 × 10⁻³ | 2.423 × 10⁻³ | 1.012 × 10⁻² | 1.451 × 10⁻³ |
| (118, 295) | 11.600 | 1.500 × 10⁻³ | 1.762 × 10⁻³ | 1.244 × 10⁻³ | 4.239 × 10⁻³ | 2.426 × 10⁻³ |
| (118, 294) | 11.733 | 7.515 × 10⁻⁴ | 9.050 × 10⁻⁴ | 6.387 × 10⁻⁴ | 1.785 × 10⁻³ | 3.572 × 10⁻⁴ |
| (118, 293) | 11.865 | 3.832 × 10⁻⁴ | 4.644 × 10⁻⁴ | 3.289 × 10⁻⁴ | 7.556 × 10⁻⁴ | 5.701 × 10⁻⁴ |
| (117, 298) | 10.920 | 1.678 × 10⁻¹ | 1.916 × 10⁻¹ | 1.311 × 10⁻¹ | 2.234 × 10⁻¹ | 2.993 × 10⁻¹ |
| (117, 297) | 10.749 | 7.769 × 10⁻² | 9.001 × 10⁻² | 6.129 × 10⁻² | 4.665 × 10⁻¹ | 2.857 × 10⁻¹ |
| (117, 296) | 10.886 | 3.620 × 10⁻² | 4.330 × 10⁻² | 2.903 × 10⁻² | 1.924 × 10⁻¹ | 3.861 × 10⁻¹ |
| (117, 295) | 11.023 | 1.735 × 10⁻² | 2.035 × 10⁻² | 1.396 × 10⁻² | 7.931 × 10⁻² | 6.419 × 10⁻² |
| (117, 294) | 11.158 | 8.146 × 10⁻³ | 9.736 × 10⁻³ | 6.779 × 10⁻³ | 3.304 × 10⁻² | 1.001 × 10⁻¹ |
| (117, 293) | 11.293 | 3.885 × 10⁻³ | 4.752 × 10⁻³ | 3.244 × 10⁻³ | 1.377 × 10⁻² | 1.484 × 10⁻² |

In summary, we confirm that machine learning can give a global description of nuclear α decays. To achieve more reliable results, however, we may need more theoretical guidance. This includes more sophisticated and developed machine learning algorithms to overcome the limited number of data points, as well as theoretical studies of nuclear structure to estimate the nuclear-structure effects that cannot be captured by machine learning. More rigorous studies of various approaches, such as choosing different inputs and scanning hyperparameters, are therefore needed and will be reported elsewhere.

### ACKNOWLEDGEMENTS

The work of M.K. and Y.O. was supported by the National Research Foundation of Korea (NRF) under Grants No. NRF-2020R1A2C1007597 and No. NRF-2018R1A6A1A06024970 (Basic Science Research Program). The work of Y.-H.S. was supported by the Rare Isotope Science Project of the Institute for Basic Science, funded by the Ministry of Science and ICT (MSICT) and by the NRF of Korea (2013M7A1A1075764), and by the National Supercomputing Center with supercomputing resources including technical support (KSC-2020-CRE-0027).

² These inputs are scaled to be in the range (0, 1) in our Model (I) calculations.

### References

1. H. J. Mang, Ann. Rev. Nucl. Sci. 14, 1 (1964).
2. B. R. Holstein, Am. J. Phys. 64, 1061 (1996).
3. E. Shin, Y. Lim, C. H. Hyun and Y. Oh, Phys. Rev. C 94, 024320 (2016).
4. Y. Lim and Y. Oh, Phys. Rev. C 95, 034311 (2017).
5. G. Carleo, I. Cirac, K. Cranmer, L. Daudet, M. Schuld, N. Tishby, L. Vogt-Maranto and L. Zdeborová, Rev. Mod. Phys. 91, 045002 (2019).
6. S. Gazula, J. W. Clark and H. Bohr, Nucl. Phys. A 540, 1 (1992).
7. K. A. Gernoth, J. W. Clark, J. S. Prater and H. Bohr, Phys. Lett. B 300, 1 (1993).
8. S. Athanassopoulos, E. Mavrommatis, K. A. Gernoth and J. W. Clark, Nucl. Phys. A 743, 222 (2004).
9. R. Utama, J. Piekarewicz and H. B. Prosper, Phys. Rev. C 93, 014311 (2016).
10. R. Utama and J. Piekarewicz, Phys. Rev. C 96, 044308 (2017).
11. R. Utama and J. Piekarewicz, Phys. Rev. C 97, 014306 (2018).
12. R. Lasseri, D. Regnier, J.-P. Ebran and A. Penon, Phys. Rev. Lett. 124, 162502 (2020).
13. J. W. T. Keeble and A. Rios, Phys. Lett. B 809, 135743 (2020).
14. G. A. Negoita, J. P. Vary, G. R. Luecke, P. Maris, A. M. Shirokov, I. J. Shin, Y. Kim, E. G. Ng, C. Yang, M. Lockner and G. M. Prabhu, Phys. Rev. C 99, 054308 (2019).
15. W. G. Jiang, G. Hagen and T. Papenbrock, Phys. Rev. C 100, 054326 (2019).
16. P. S. A. Freitas and J. W. Clark, arXiv:1910.12345.
17. U. B. Rodríguez, C. Z. Vargas, M. Gonçalves, S. B. Duarte and F. Guzmán, J. Phys. G 46, 115109 (2019).
18. G. Saxena, P. K. Sharma and P. Saxena, J. Phys. G 48, 055103 (2021).
19. Z. M. Niu, H. Z. Liang, B. H. Sun, W. H. Long and Y. F. Niu, Phys. Rev. C 99, 064307 (2019).
20. M. Abadi et al., arXiv:1603.04467.
21. R. H. R. Hahnloser, R. Sarpeshkar, M. A. Mahowald, R. J. Douglas and H. S. Seung, Nature 405, 947 (2000).
22. D. P. Kingma and J. L. Ba, arXiv:1412.6980.
23. H. Geiger and J. M. Nuttall, Phil. Mag. Ser. 6 22, 613 (1911).
24. V. E. Viola Jr and G. T. Seaborg, J. Inorg. Nucl. Chem. 28, 741 (1966).
25. J. P. Cui, Y. L. Zhang, S. Zhang and Y. Z. Wang, Phys. Rev. C 97, 014316 (2018).
26. J. P. Cui, Y. Xiao, Y. H. Gao and Y. Z. Wang, Nucl. Phys. A 987, 99 (2019).