
Image sorting of nuclear reactions recorded on CR-39 nuclear track detector using deep learning



DOCUMENT INFORMATION

Basic information

Title: Image Sorting of Nuclear Reactions Recorded on CR-39 Nuclear Track Detector Using Deep Learning
Authors: Ken Tashiro, Kazuki Noto, Quazi Muhammad Rashed Nizam, Eric Benton, Nakahiro Yasuda
Institution: Research Institute of Nuclear Engineering, University of Fukui, Tsuruga, Fukui, Japan
Field: Nuclear Physics
Type: Research Article
Year: 2022
City: Fukui
Pages: 5
Size: 2.82 MB



Page 1

Available online 22 January 2022

This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

Image sorting of nuclear reactions recorded on CR-39 nuclear track detector using deep learning

Ken Tashiro a,*, Kazuki Noto a, Quazi Muhammad Rashed Nizam b, Eric Benton c, Nakahiro Yasuda a

a Research Institute of Nuclear Engineering, University of Fukui, Tsuruga, Fukui, Japan
b Department of Physics, University of Chittagong, Chittagong, Bangladesh
c Department of Physics, Oklahoma State University, Stillwater, OK, USA

A R T I C L E I N F O

Keywords:

CR-39 nuclear track detector

Deep learning

Object detection

Image merging

Total charge changing cross-section

A B S T R A C T

Deep learning has been utilized to trace nuclear reactions in the CR-39 nuclear track detector. Etch pit images on the front and back surfaces of the CR-39 detector were obtained sequentially by moving the objective lens of a microscope and merged into one image. This image merging makes it possible to combine information on the displacement of the positions of the etch pits produced by single particle traversals through a CR-39 layer in a single image, thereby making it easier to recognize corresponding nuclear fragmentation reactions. Object detection based on deep learning has been applied to the merged image to identify nuclear fragmentation events for measurement of the total charge changing cross-section, based on the number of incident particles (N_in) and the number of particles that passed through the target without any nuclear reaction (N_out). We verified the accuracy (correct answer rate) of the algorithms for extracting the two patterns of etch pits in merged images, which correspond to N_in and N_out, using learning curves expressed as a function of the number of trainings. The accuracies for N_in and N_out were found to be 97.3 ± 4.0% and 98.0 ± 4.0%, respectively. These results show that an object detection algorithm based on deep learning can be a strong tool for CR-39 etch pit analysis.

1. Introduction

The CR-39 solid-state nuclear track detector has been a powerful tool for measuring total charge changing cross-sections (Golovchenko et al., 2001, 2002; Cecchini et al., 2008; Duan et al., 2021; Huo et al., 2019; Zheng et al., 2021) and fragment emission angles (Giacomelli et al., 2004; Sihver et al., 2013; Zhang et al., 2018), since it has high charge resolution (Ota et al., 2011). In this experimental application, the CR-39 nuclear track detector is frequently used not only as a detector but also as a target (the material whose cross-sections are to be verified), since etch pits appear along the ion track on both the front and back surfaces after chemical etching. The front and back surface images are independently captured by a microscope. These images are analyzed to extract the positions and sizes of the etch pits to trace each ion track. By matching the positions of the etch pits obtained from independently captured images of the front and back surfaces of the detector, it has been possible to identify particles that have passed through the target or undergone nuclear reactions (Skvarč and Golovchenko, 2001; Ota et al., 2008). In order to identify a nuclear reaction, it is necessary to establish a one-to-one correspondence between the etch pits on the detector's front and back surfaces. The matching method requires accurate alignment of the etch pits on both surfaces, and the accuracy of this alignment puts a limit on the matching method.

Recently, we have developed a technique to take images of the front and back surfaces of the CR-39 detector sequentially by moving the objective lens of a microscope, without any treatment for alignment of the two surfaces (Rashed-Nizam et al., 2020). The position (matching) error is then due only to the verticality of the Z-axis movement of the objective lens. As a feasibility study, we applied object detection based on deep learning, which can simultaneously classify nuclear reaction images and detect object positions.


* Corresponding author. Research Institute of Nuclear Engineering, University of Fukui, 1-3-33 Kanawa, 914-0055, Tsuruga, Fukui, Japan.

E-mail address: tashiro0716@gmail.com (K. Tashiro).

https://doi.org/10.1016/j.radmeas.2022.106706

Received 10 September 2021; Received in revised form 15 January 2022; Accepted 19 January 2022

Page 2

Object detection is a computer technology that determines whether objects of a given class (such as humans, cars, or buildings) are present in digital images and movies. When such objects are present, it returns their positions and sizes. This technology has been researched based on human-designed features in the field of computer vision for the development of technologies such as face detection (Viola and Jones, 2004). Deep learning techniques, a method of automatically learning features from data, have since been developed (LeCun et al., 2015), and the performance of object detection has been improving annually (Liu et al., 2020; Zou et al., 2019).

Recent studies in the field of radiation measurement are also advancing research that applies deep learning technology, such as a new method of visualizing the ambient dose rate distribution using artificial neural networks (Sasaki et al., 2021). Methods have also been developed to analyze radon time-series sampling data by machine learning (Janik et al., 2018). For detectors that require image analysis, such as the nuclear emulsion and the fluorescent nuclear track detector (FNTD) (Akselrod et al., 2020), analysis methods based on image classification using deep learning have been developed as well. For nuclear emulsion, an efficient classifier was developed that sorts alpha-decay events from various vertex-like objects in an emulsion using an image processing technique involving convolutional neural networks (Yoshida et al., 2021).

In this study, we have developed a new methodology for tracing ion track penetration by merging images of both sides of a CR-39 detector without relying on pattern matching. Instead, object detection based on deep learning is applied to the etch pit analysis.

2. Materials and methods

2.1. Experimental

We used the CR-39 detector (HARZLAS TD-1) manufactured by Fukuvi Chemical Industry Co., Ltd. Layers of the CR-39 detector (0.45 mm thick) were cut into 50 mm × 50 mm squares and exposed to a 55 MeV/nucleon ion beam. After chemical etching, images of the front and back surfaces of the CR-39 detector were acquired using an FSP-1000 imaging microscope manufactured by SEIKO Time Creation Inc. The autofocus system of the microscope was used to capture images of the front and back surfaces. After capturing an image of the front surface, the objective lens moves to a lower depth (Z-axis of the microscope system) of the CR-39 detector, and the back surface image is captured for the same field of view (Rashed-Nizam et al., 2020). The images of both surfaces (2500 pixels × 1800 pixels) were obtained using a 20× magnification objective lens. Each pixel takes a value from black (0) to white (255) in a grayscale image with 256 gray levels.

2.2. Image merging of front and back surfaces on CR-39 detector

We have employed image merging, which is based on the method of detecting moving objects by comparing an observed image with a background image. The front (a) and back (b) surface images of the CR-39 detector were acquired from the microscope (Fig. 1). By subtracting each pixel value of the front image from the corresponding pixel value of the back image with 200 (gray level) added, we created a merged image (c).
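A minimal sketch of this merging step, assuming 8-bit grayscale captures and hypothetical file names (the paper does not publish its pipeline):

```python
# Merging step sketch: merged = (back + 200) - front, computed in int16
# to avoid uint8 wrap-around, then clipped back to the 0-255 gray range.
import cv2
import numpy as np

front = cv2.imread("front.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file name
back = cv2.imread("back.png", cv2.IMREAD_GRAYSCALE)    # hypothetical file name

merged = np.clip(back.astype(np.int16) + 200 - front.astype(np.int16),
                 0, 255).astype(np.uint8)

cv2.imwrite("merged.png", merged)  # front pits appear white, back pits black
```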

In the merged image, white and black circles represent the etch pits on the front and back surfaces, respectively. The displacement between the black and white etch pit positions indicates that the ion penetrated the CR-39 detector at a small angle. Here, it is easy to discriminate the corresponding etch pits formed on the front and back surfaces by the passage of an incident ion, without pattern matching based on alignment between the front and back surfaces. This method is able to produce incident angle information from the displacement and the distance (thickness) between the front and back surfaces, as described elsewhere (Rashed-Nizam et al., 2020), and also to indicate the presence or absence of nuclear reactions in a single image.
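Assuming a straight track, the incident angle follows from the displacement and the layer thickness; a sketch (the exact formulation in Rashed-Nizam et al. (2020) may differ):

```python
# Straight-track assumption: tan(theta) = displacement / layer thickness.
import math

def incident_angle_deg(displacement_um: float, thickness_um: float) -> float:
    """Angle between the ion track and the detector normal, in degrees."""
    return math.degrees(math.atan2(displacement_um, thickness_um))

# Example: a 20 um displacement across the 0.45 mm (450 um) CR-39 layer.
print(incident_angle_deg(20.0, 450.0))  # ~2.5 degrees
```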

Fig. 2 shows examples of etch pits in the merged image. Track events are classified into three categories: (a) the projectile passed through the CR-39 detector without producing any nuclear reaction; (b) no etch pits are observed on the back surface; one of the possible reactions is C → 6p + 6n, where the protons and neutrons escape detection due to the detection threshold; (c) several etch pits are observed on the back surface and are assumed to be the result of a nuclear reaction whose fragments can be detected.

Fig. 1. Images of the front (a) and back (b) surfaces were merged into the merged image (c) by image subtraction, after adding 200 (gray level) to each pixel value of the back image. In the merged image (c), white circles represent the etch pits on the front surface and black circles represent the etch pits on the back surface.

Fig. 2. Examples of etch pits in the merged image: (a) the projectile passed through the CR-39 detector without any reaction, producing two etch pits (white and black), one on each surface; (b) the projectile decays into several lighter fragments, and these fragments are not detected due to the detection threshold; (c) nuclear fragments are observed as three tracks indicated by white arrows.

Page 3

The probability that the projectile changes its charge due to a nuclear reaction is expressed by the total charge changing cross-section:

$$\sigma_{TCC} = -\frac{M}{\rho\, N_A\, X}\,\ln\!\left(\frac{N_{out}}{N_{in}}\right) \quad (1)$$

where ρ, X, and M are the density, thickness of the target, and its atomic or molecular mass, respectively, and N_A is Avogadro's number (Cecchini et al., 2008; Huo et al., 2019). N_in is the number of incident particles and N_out is the number of particles that passed through the target without undergoing any reaction. The ratio of the number of particles that passed through the detector without any nuclear reaction to the number of incident particles thus determines the total charge changing cross-section.
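As a worked sketch of Eq. (1), the material constants below are standard CR-39 values, while the counts and the resulting value are illustrative only, not the paper's measurement:

```python
# sigma_TCC = -(M / (rho * N_A * X)) * ln(N_out / N_in)
import math

N_A = 6.02214076e23  # Avogadro's number [1/mol]

def sigma_tcc(n_in: int, n_out: int, molar_mass: float,
              density: float, thickness: float) -> float:
    """Total charge changing cross-section in cm^2.

    molar_mass [g/mol], density [g/cm^3], thickness [cm].
    """
    return -(molar_mass / (density * N_A * thickness)) * math.log(n_out / n_in)

# CR-39 (C12H18O7): M ~ 274.3 g/mol, rho ~ 1.31 g/cm^3, 0.045 cm thick layer.
sigma = sigma_tcc(1229, 1203, 274.3, 1.31, 0.045)
print(f"{sigma * 1e24:.0f} barn")  # 1 barn = 1e-24 cm^2
```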

2.3. Object detection for etch pit images

One validation dataset and two training datasets were created from the merged images. The validation dataset consists of 256 merged images (2500 pixels × 1800 pixels) and includes the white and black etch pit (W/B) objects and the white-only etch pit (W) objects. It consists of 1229 white etch pit objects (1227 W/B and 2 W objects), as described above.

The training datasets (W/B and W) were prepared from merged images other than those in the validation dataset. The two training datasets consisted of 1200 white and black etch pit images (W/B images) and 1200 white etch pit images (W images), respectively. The W/B images (416 pixels × 416 pixels) were selected based on the differences of the positions of the etch pits on both surfaces and the distance between their centers. The W images contain single white etch pits. The two training datasets were used separately to train the object detection algorithm based on convolutional neural networks (Redmon and Farhadi, 2018). We used Python 3.7.11 as a machine learning package with the machine learning framework "Darknet", and OpenCV 4.1.2 on the Google Cloud Platform (Bisong, 2019).

The object detection algorithms were trained by inputting the training dataset (W/B) to extract the W/B objects from the validation dataset. Individual algorithms were prepared with the number of training images varied from N = 100 to 1200. We applied these algorithms to the validation dataset and counted the number of W/B objects detected by each algorithm. Accuracy was defined as the ratio of the number of W/B objects detected by the algorithm to the number of W/B objects in the validation dataset:
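For illustration, a trained Darknet model can be loaded and its detections counted with the high-level dnn API available in OpenCV 4.1.2; the cfg/weights file names here are assumptions, not released artifacts of this work:

```python
# Run a trained Darknet/YOLO detector over one merged image and count hits.
import cv2

net = cv2.dnn.readNetFromDarknet("yolov3_etchpit.cfg", "yolov3_etchpit.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0)  # YOLO-style preprocessing

img = cv2.imread("merged.png")
class_ids, confidences, boxes = model.detect(img, confThreshold=0.5,
                                             nmsThreshold=0.4)
print(f"{len(boxes)} W/B objects detected")
```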

Accuracy [%] = (The number of objects detected by the algorithm / The number of objects in the validation dataset) × 100     (3)

The accuracy of W object extraction was also verified by the algorithms using the training dataset (W) and the validation dataset.
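Applying Eq. (3) to the counts reported in Section 3.1 reproduces the stated accuracies:

```python
# Accuracy [%] = detected objects / objects in validation dataset * 100
def accuracy_percent(n_detected: int, n_total: int) -> float:
    return 100.0 * n_detected / n_total

print(f"W/B: {accuracy_percent(1203, 1227):.1f}%")  # 98.0%
print(f"W:   {accuracy_percent(1196, 1229):.1f}%")  # 97.3%
```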

3. Results and discussion

3.1. Object detection accuracy and error estimation of image sorting

As an evaluation of the algorithm, we employed a learning curve, which shows predictive accuracy on the test examples as a function of the number of training examples (Perlich, 2011). Fig. 3(a) shows the learning curve for the W/B object extraction algorithm, with the accuracy (in %) shown as a function of the number of training datasets. The accuracy increased as the number of training datasets increased, reaching a maximum of 98.0 ± 4.0%, calculated from the number of detected W/B objects (1203) and W/B objects (1227) in the validation dataset after 1000 trainings. Fig. 3(b) shows the learning curve for the W object extraction algorithm. The accuracy improved as the number of training datasets increased, reaching 97.3 ± 4.0%, calculated from the number of detected W objects (1196) and W objects (1229) in the validation dataset at 1000 trainings. Errors in the accuracy are statistical errors calculated from the ratio of the number of W/B (W) objects detected by the algorithms to the number of W/B (W) objects in the validation dataset.

Fig. 3. The learning curves of the (a) W/B and (b) W object extraction algorithms, respectively. The accuracies (in %) of these algorithms are shown as a function of the number of training datasets.

Page 4

Both learning curves saturated at 97–98%. The accuracies also repeatedly rose and fell as the training dataset grew. This phenomenon, often observed in deep learning, is called overfitting (overtraining), in which the algorithm becomes over-optimized to the training dataset and this optimization does not generalize to the validation dataset (Salman and Liu, 2019; Ying, 2019). Various approaches have been proposed to reduce this effect, such as changes to the neural network architecture in the algorithm and expansion of the training dataset to include more highly-varied images, which we expect to improve the accuracy in the future.

The maximum W/B and W object detection accuracies were found to be 98.0 ± 4.0% and 97.3 ± 4.0% at N = 1000, respectively. Errors in accuracy are statistical errors individually calculated from the ratio of the number of W/B (W) objects detected by the algorithms to the constant numbers of 1227 W/B and 1229 W objects in the validation dataset. As a result, the statistical errors in these figures vary between 3.5 and 4.0%. These statistical errors can be improved by increasing the number of validation datasets with suitable numbers of trainings. On the other hand, the systematic error is due to the fact that the results vary with the creation of different learning algorithms depending on how the training dataset is selected. Here, we evaluated how much the results would vary by randomly selecting 1000 training images from the 1200 available. For each of the training datasets (W/B and W), the training dataset was extracted, and it was applied to the validation dataset only after the algorithm was created. This was repeated ten times to determine the accuracies, and the resulting standard deviations were taken as the systematic errors for the W/B and W objects. The contribution of the systematic errors to the charge changing cross-section is expressed as follows:

$$\Delta\sigma_{TCC}(sys) = \frac{M}{\rho\, N_A\, X}\sqrt{\left(\frac{\Delta N_{in}(sys)}{N_{in}}\right)^{2} + \left(\frac{\Delta N_{out}(sys)}{N_{out}}\right)^{2}}$$

As a result, the systematic error of the charge changing cross-section can be estimated. It should be pointed out that these statistical and systematic errors can be affected by the etch pit density and the etching conditions (size of the etch pits), and it is necessary to optimize the algorithm for each set of conditions.
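A sketch of this error propagation; the ΔN(sys) values below are illustrative placeholders, not the measured standard deviations:

```python
# Delta sigma_TCC(sys) = (M / (rho * N_A * X)) *
#     sqrt((dN_in / N_in)^2 + (dN_out / N_out)^2)
import math

N_A = 6.02214076e23  # Avogadro's number [1/mol]

def delta_sigma_tcc_sys(n_in, d_n_in, n_out, d_n_out,
                        molar_mass, density, thickness):
    """Systematic uncertainty on sigma_TCC in cm^2 (same units as Eq. (1))."""
    prefactor = molar_mass / (density * N_A * thickness)
    return prefactor * math.hypot(d_n_in / n_in, d_n_out / n_out)

# Example: assumed 1% systematic spreads on both counts, CR-39-like target.
print(delta_sigma_tcc_sys(1229, 12.3, 1203, 12.0, 274.3, 1.31, 0.045))
```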

3.2. Classification of undetected objects for further improvements

The characteristics of the etch pits that could not be detected were classified for further improvement of the accuracy. Undetected objects (2% of the total W/B objects) were classified into four types, as shown in Fig. 4. (a) The W/B etch pits are close to each other and might be recognized as W objects, since the distance between the centers of the two etch pits is shorter than the radii of the individual etch pits. (b) The distance between the two etch pits is greater than expected as a result of multiple Coulomb scattering (Highland, 1975; Beringer et al., 2012). In this case, W/B object detection succeeded, but the pair is recognized as a chance coincidence of irrelevant etch pits, since the distance calculated from the scattering was 38.3 μm. (c) Multiple W/B objects are overlapping. This is an essential limitation of the CR-39 detection technique that should be improved by reduction of the exposure density and/or by shortening the etching time to avoid overlapping etch pits. (d) The W/B object is located at the edge of the image and also cannot be processed by pattern matching. It is necessary to take measures to reduce the relative number of objects located at the image edges by increasing the validation image size. The (c) and (d) cases require additional processing in order to be included in the cross-section measurement.

On the other hand, for W objects, undetected objects (2.7% of the total W objects) are shown in Fig. 5: etch pits due to α-particles from the environment (a), or dust and tiny scratches on the surface of the CR-39 (b), are imaged near the detection target (W object). Improvements such as shortening the exposure time to the environment and handling without damaging the surface can be considered. As a further usage of deep learning, in addition to the algorithm for extracting etch pits, it is possible to create an algorithm to distinguish the etch pits from such noise.

The conventional pattern matching method requires the measurement of etch pits from images obtained of both surfaces of the CR-39 detector and execution of the pattern matching algorithm within a position tolerance chosen such that multiple Coulomb scattering is taken into account. The presence or absence of a nuclear reaction can also be determined with high accuracy by using an image in which the front and back surfaces are merged.

Fig. 4. Four types of undetected objects: (a) the W/B etch pits are close to each other; (b) the distance between the two etch pits is greater than expected due to multiple Coulomb scattering; (c) multiple W/B objects are overlapping; and (d) the W/B object is located at the edge of the image.

Fig. 5. Examples of undetected W objects. Etch pits due to α-particles from the environment (a), and dust or tiny scratches on the surface of the CR-39 (b), indicated by arrows, are imaged near the etch pit.

Page 5

4. Conclusions

We have developed a new methodology for tracing ions recorded as etch pits to extract nuclear fragmentation events by merging microscopic images of the front and back surfaces of an exposed CR-39 detector. This enables us to obtain information on the displacement of the etch pit positions in a single image. We have also applied object detection based on deep learning to the merged image to identify nuclear fragmentation events. The accuracy of object detection was evaluated using a learning curve expressed as a function of the number of trainings. The accuracies of the algorithms for extracting the W/B and W objects, which correspond to N_out and N_in for the total charge changing cross-section, were verified statistically to be 98.0 ± 4.0% and 97.3 ± 4.0%, respectively, thereby indicating the effectiveness of the object detection algorithm based on deep learning for CR-39 particle detection. We plan to apply this technique in order to measure the total charge changing cross-section.

Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or non-profit sectors.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgment

We would like to thank the WERC personnel for their help and support during the experiment.

References

Akselrod, M., Fomenko, V., Harrison, J., 2020. Latest advances in FNTD technology and instrumentation. Radiat. Meas. 133, 106302. https://doi.org/10.1016/j.radmeas.2020.106302

Beringer, J., et al. (Particle Data Group), 2012. Review of Particle Physics (RPP). Phys. Rev. D 86, 010001. https://doi.org/10.1103/PhysRevD.86.010001

Bisong, E., 2019. Building Machine Learning and Deep Learning Models on Google Cloud Platform, pp. 59–64. https://doi.org/10.1007/978-1-4842-4470-8_7

Cecchini, S., Chiarusi, T., Giacomelli, G., Giorgini, M., Kumar, A., Mandrioli, G., Manzoor, S., Margiotta, A.R., Medinaceli, E., Patrizii, L., Popa, V., Qureshi, I.E., Sirri, G., Spurio, M., Togo, V., 2008. Fragmentation cross sections of Fe26+, Si14+ and C6+ ions of 0.3–10 A GeV on polyethylene, CR39 and aluminum targets. Nucl. Phys. A 807, 206–213. https://doi.org/10.1016/j.nuclphysa.2008.03.017

Duan, H.R., Wu, J.Y., Ma, T.L., Li, J.S., Li, H.L., Xu, M.M., Yang, R.X., Zhang, D.H., Zhang, Z., Wang, Q., Kodaira, S., 2021. Fragmentation of carbon on elemental targets at 290 AMeV. Int. J. Mod. Phys. E 30 (06), 2150046. https://doi.org/10.1142/S0218301321500464

Giacomelli, M., Sihver, L., Skvarč, J., Yasuda, N., Ilić, R., 2004. Projectilelike fragment emission angles in fragmentation reactions of light heavy ions in the energy region <200 MeV/nucleon: modeling and simulations. Phys. Rev. C 69, 064601. https://doi.org/10.1103/PhysRevC.69.064601

Golovchenko, A.N., Skvarč, J., Yasuda, N., Ilić, R., Tretyakova, S.P., Ogura, K., Murakami, T., 2001. Total charge-changing and partial cross-section measurements in the reaction of 110 MeV/u 12C with paraffin. Radiat. Meas. 34, 297–300. https://doi.org/10.1016/S1350-4487(01)00171-8

Golovchenko, A.N., Skvarč, J., Yasuda, N., Giacomelli, M., Tretyakova, S.P., Ilić, R., Bimbot, R., Toulemonde, M., Murakami, T., 2002. Total charge-changing and partial cross-section measurements in the reactions of ~110–250 MeV/nucleon 12C in carbon, paraffin, and water. Phys. Rev. C 66, 014609. https://doi.org/10.1103/PhysRevC.66.014609

Hatori, S., Ito, Y., Ishigami, R., Yasuda, K., Inomata, T., Maruyama, T., Ikezawa, K., Takagi, K., Yamamoto, K., Fukuda, S., Kume, K., Kagiya, G., Hasegawa, T., …

Huo et al., 2019. https://doi.org/10.1016/j.cjph.2019.04.022

Janik, M., Bossew, P., Kurihara, O., 2018. Machine learning methods as a tool to analyse incomplete or irregularly sampled radon time series data. Sci. Total Environ. 630, 1155–1167. https://doi.org/10.1016/j.scitotenv.2018.02.233

Kodaira, S., Morishige, K., Kawashima, H., Kitamura, H., Kurano, M., Hasebe, N., Koguchi, Y., Shinozaki, W., Ogura, K., 2016. A performance test of a new high-surface-quality and high-sensitivity CR-39 plastic nuclear track detector – TechnoTrak. Nucl. Instrum. Methods B 383, 129–135. https://doi.org/10.1016/j.nimb.2016.07.002

LeCun, Y., Bengio, Y., Hinton, G., 2015. Deep learning. Nature 521, 436–444. https://doi.org/10.1038/nature14539

Liu, L., Ouyang, W., Wang, X., Fieguth, P., Chen, J., Liu, X., Pietikäinen, M., 2020. Deep learning for generic object detection: a survey. Int. J. Comput. Vis. 128, 261–318. https://doi.org/10.1007/s11263-019-01247-4

Ota, S., Kodaira, S., Yasuda, N., Benton, E.R., Hareyama, M., Kurano, M., Sato, M., Shu, D., Hasebe, N., 2008. Tracking method for the measurement of projectile charge changing cross-section using CR-39 detector with a high speed imaging microscope. Radiat. Meas. 43, 195–198. https://doi.org/10.1016/j.radmeas.2008.04.058

Ota, S., Yasuda, N., Sihver, L., Kodaira, S., Kurano, M., Naka, S., Ideguchi, Y., Benton, E.R., Hasebe, N., 2011. Charge resolution of CR-39 plastic nuclear track detectors for intermediate energy heavy ions. Nucl. Instrum. Methods B 269 (12), 1382–1388. https://doi.org/10.1016/j.nimb.2011.03.018

Perlich, C., 2011. Learning curves in machine learning. In: Sammut, C., Webb, G.I. (Eds.), Encyclopedia of Machine Learning. Springer, Boston, MA. https://doi.org/10.1007/978-0-387-30164-8_452

Rashed-Nizam, Q.M., Yoshida, K., Sakamoto, T., Benton, E., Sihver, L., Yasuda, N., 2020. High-precision angular measurement of 12C ion interaction using a new imaging method with a CR-39 detector in the energy range below 100 MeV/nucleon. Radiat. Meas. 131, 106225. https://doi.org/10.1016/j.radmeas.2019.106225

Redmon, J., Farhadi, A., 2018. YOLOv3: an incremental improvement. arXiv preprint arXiv:1804.02767.

Salman, S., Liu, X., 2019. Overfitting mechanism and avoidance in deep neural networks. arXiv preprint arXiv:1901.06566.

Sasaki, M., Sanada, Y., Katengeza, E.W., Yamamoto, A., 2021. New method for visualizing the dose rate distribution around the Fukushima Daiichi Nuclear Power Plant using artificial neural networks. Sci. Rep. 11, 1857. https://doi.org/10.1038/s41598-021-81546-4

Sihver, L., Giacomelli, M., Ota, S., Skvarč, J., Yasuda, N., Ilić, R., Kodaira, S., 2013. Projectile fragment emission angles in fragmentation reactions of light heavy ions in the energy region <200 MeV/nucleon: experimental study. Radiat. Meas. 48 (1), 73–81. https://doi.org/10.1016/j.radmeas.2012.08.006

Skvarč, J., Golovchenko, A.N., 2001. A method of trajectory tracing of Z ≤ 10 ions in the energy region below 300 MeV/u. Radiat. Meas. 34 (1–6), 113–118. https://doi.org/10.1016/S1350-4487(01)00134-2

Viola, P., Jones, M.J., 2004. Robust real-time face detection. Int. J. Comput. Vis. 57, 137–154. https://doi.org/10.1023/B:VISI.0000013087.49260.fb

Yasuda, N., Namiki, K., Honma, Y., Umeshima, Y., Marumo, Y., Ishii, H., Benton, E.R., 2005. Development of a high speed imaging microscope and new software for nuclear track detector analysis. Radiat. Meas. 40, 311–315. https://doi.org/10.1016/j.radmeas.2005.02.013

Yasuda, N., Zhang, D.H., Kodaira, S., Koguchi, Y., Takebayashi, S., Shinozaki, W., Fujisaki, S., Juto, N., Kobayashi, I., Kurano, M., Shu, D., Kawashima, H., 2008. Verification of angular dependence for track sensitivity on several types of CR-39. Radiat. Meas. 43, 269–273. https://doi.org/10.1016/j.radmeas.2008.03.027

Yasuda, N., Kodaira, S., Kurano, M., Kawashima, H., Tawara, H., Doke, T., Ogura, K., Hasebe, N., 2009. High speed microscope for large scale ultra heavy nuclei search using solid state track detector. J. Phys. Soc. Jpn. 78, 142–145. https://doi.org/10.1143/JPSJS.78SA.142

Ying, X., 2019. An overview of overfitting and its solutions. J. Phys.: Conf. Ser. 1168, 022022. https://doi.org/10.1088/1742-6596/1168/2/022022

Yoshida, J., Ekawa, H., Kasagi, A., Nakagawa, M., Nakazawa, K., Saito, N., Saito, T.R., Taki, M., Yoshimoto, M., 2021. CNN-based event classification of alpha-decay events in nuclear emulsion. Nucl. Instrum. Methods A 989, 164930. https://doi.org/10.1016/j.nima.2020.164930

Zhang, D.H., Shi, R., Li, J.S., Kodaira, S., Yasuda, N., 2018. Projectile fragment emission in the fragmentation of 20Ne on C, Al and CH2 targets at 400 MeV/u. Nucl. Instrum. Methods B 435 (15), 174–179. https://doi.org/10.1016/j.nimb.2018.05.045

Zheng, S.H., Li, W., Gou, C.W., Wu, G.F., Yao, D., Zhang, X.F., Li, J.S., Kodaira, S., Zhang, D.H., 2021. Measurement of cross sections for charge pickup by 12C on elemental targets at 400 MeV/n. Nucl. Phys. A 1016, 122317. https://doi.org/10.1016/j.nuclphysa.2021.122317

Zou, Z., Shi, Z., Guo, Y., Ye, J., 2019. Object detection in 20 years: a survey. arXiv preprint arXiv:1905.05055.

