Enhancing COVID-19 prediction using transfer
learning from Chest X-ray images
1st Phuoc-Hai Huynh
Faculty of Information Technology
An Giang University, Vietnam National
University Ho Chi Minh City
hphai@agu.edu.vn
2nd Trung-Nguyen Tran
Department of General Planning
An Giang Regional General Hospital
it.bvcd@gmail.com
3rd Van Hoa Nguyen Faculty of Information Technology
An Giang University, Vietnam National University Ho Chi Minh City nvhoa@agu.edu.vn
Abstract—The COVID-19 pandemic is expanding and affecting human lives all over the world. Although many countries have carried out vaccination, the number of newly infected COVID-19 patients is still increasing. Detecting COVID-19 early can help find effective treatment plans using machine learning algorithms. We propose transfer learning models to detect the pneumonia caused by this virus from chest X-Ray images. A public dataset is used in this work, and new chest X-Ray images of COVID-19 patients were collected at An Giang Regional General Hospital. These images enrich the current public dataset and improve prediction performance. Six transfer learning architectures are investigated using the locally collected and public datasets. The experimental results show that the DenseNet121 transfer learning model outperforms the others, with accuracy, precision, recall, F1-score and AUC of 98.51%, 98.54%, 98.51%, 98.05% and 99.15%, respectively, on the augmented dataset, and most algorithms achieve improved performance when processing the new data.
Index Terms—COVID-19, transfer learning, imbalanced
dataset, X-Ray images
I. INTRODUCTION
From January 2020 to the present, the COVID-19 pandemic has caused the highest-priority health crisis in human history. The disease has affected the whole world, with over 251 million infections and more than 5 million deaths worldwide (as of November 2021) [1]. The coronavirus causes various symptoms, such as cough, fever, and, in more severe patients, difficulty breathing. These symptoms are very similar to those of other common pneumonias [2]. Thus, it is sometimes hard to distinguish between common pneumonia and COVID-19. COVID-19 patients suffer lung injury and respiratory failure [3]. Recognizing COVID-19 patients, isolating them and caring for them is a critical technique for better pandemic management. To detect COVID-19, the RT-PCR test [4] is the accepted standard for coronavirus disease 2019 diagnosis [5]. Nevertheless, RT-PCR is costly as well as time-consuming [6]. Therefore, many studies apply medical image classification as an effective method for detecting COVID-19 from digital X-Ray images [7]–[9].
According to the World Health Organization, X-Ray is currently one of the best available techniques for clinical diagnosis [10]. This method is widely used in COVID-19 prediction because it is not only quick but also cheap [11]. With the increasing number of digital X-Ray images in hospitals, these images are processed with machine learning technologies to support treatment [12], [13]. To diagnose COVID-19 patients, medical image collections are increasingly used in studies to construct artificial intelligence models that assist clinicians. In recent years, deep learning analysis of chest X-Ray images has become a promising method for COVID-19 prediction [7], [14]. It can identify early COVID-19 disease at risk of severe progression, which may facilitate personalized treatment plans. However, collecting and publishing large medical image datasets are challenging tasks for contributing to the development of computer-aided diagnosis systems for COVID-19 prediction. There are several issues in this work, including patient information security, misdiagnosis and the cost of collecting data annotations. An alternative method of training deep learning models is transfer learning, a machine learning technique in which a model developed for one task is reused as the starting point for a model on a second task. The benefit of this approach is that a pretrained deep learning model is used for initialization and then fine-tuned on the limited sample dataset, which can outperform fully trained networks under certain circumstances [15]. The purpose of this work is to investigate the usefulness of transfer learning for COVID-19 prediction at An Giang Regional General Hospital through the following contributions.
Firstly, we collected 750 chest X-Ray images of confirmed COVID-19 cases at An Giang Regional General Hospital (from September to 11 November 2021). This dataset is very helpful for enriching the public X-Ray image dataset for COVID-19 prediction.
Secondly, six transfer learning models are implemented to detect COVID-19, including VGG16 [16], VGG19 [17], DenseNet121 [18], Xception [19], InceptionV3 [20], and ResNet50 [21]. The main idea is to strengthen the prediction models by using the representations learned by a previously trained network to extract meaningful features from new images.
Thirdly, we have conducted a thorough evaluation of our approach in four experiments: (1) the public dataset is used for training and testing; (2) an augmented dataset, created by combining the new COVID-19 X-Ray images and the public dataset, is used to train and test the models; (3) the public dataset is used to train the models, and the local data is used for
testing only; (4) the augmented dataset is used for training, and
the local dataset is used for testing. The results show that the DenseNet121 model achieves efficient results compared to the other models for predicting COVID-19. Most algorithms achieve improved performance when processing our dataset.
The remainder of this paper is organized as follows. The proposed model is presented in Section 2. The experiments and numerical results are analyzed in Sections 3 and 4, and Section 5 concludes the paper.
II. PROPOSED MODEL
The proposed model consists of four steps: collecting data; data preprocessing; transfer learning using pretrained deep learning models (Xception [19], InceptionV3 [20], ResNet50 [21], VGG16 [16], VGG19 [17], and DenseNet121 [18]); and evaluating models. This study classifies the chest X-Ray images into three classes: COVID-19, normal and pneumonia. The stages are described below. Fig 1 shows the diagrammatic flow of the proposed model.
Fig 1: Experiment pipeline for preprocessing and classification: collect chest X-Ray images, data preprocessing, transfer learning using six models, evaluation, and COVID-19 prediction
First of all, the public dataset of 5144 samples published by Cohen [22] is collected. This dataset currently contains hundreds of frontal-view COVID-19 X-Ray images and is the largest public resource for COVID-19 images and prognostic data. Therefore, it is a necessary resource for developing and evaluating tools to aid in the treatment of COVID-19. The samples of this dataset are divided into three classes as follows: 2121 normal, 3735 pneumonia, and 576 COVID-19. The second dataset is collected at An Giang Regional General Hospital, which treats severe COVID-19 patients in An Giang province. This dataset contains 850 DICOM (Digital Imaging and Communications in Medicine) X-Ray images of COVID-19. The images are labeled based on the results of RT-PCR testing. The patient information is hidden and encoded for security. The DICOM images are converted to JPEG format to reduce their size and to be compatible with the transfer learning models. To convert the DICOM images, we used the PyDicom library in Python [23]. The images are resized to a dimension of 224 × 224 × 3. Fig 2 shows examples of COVID-19, pneumonia and normal images.
Fig 2: Samples of the chest X-Ray images: (a) normal, (b) pneumonia, (c) COVID-19
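The DICOM-to-JPEG conversion described above can be sketched as follows. This is a minimal illustration, not the hospital's actual pipeline: `to_uint8` is a hypothetical helper name, and the PyDicom/Pillow calls shown in the comments assume those libraries are installed.

```python
import numpy as np

def to_uint8(pixels: np.ndarray) -> np.ndarray:
    """Rescale a raw DICOM pixel array (often 12- or 16-bit) to 8-bit grayscale."""
    pixels = pixels.astype(np.float64)
    lo, hi = pixels.min(), pixels.max()
    if hi == lo:                      # constant image: avoid division by zero
        return np.zeros(pixels.shape, dtype=np.uint8)
    scaled = (pixels - lo) / (hi - lo) * 255.0
    return scaled.round().astype(np.uint8)

# With PyDicom and Pillow installed, the full conversion would look like:
#   import pydicom
#   from PIL import Image
#   arr = pydicom.dcmread("study.dcm").pixel_array
#   img = Image.fromarray(to_uint8(arr)).convert("RGB").resize((224, 224))
#   img.save("study.jpg", "JPEG")
```

The `convert("RGB")` step replicates the single grayscale channel three times, matching the 224 × 224 × 3 input shape the transfer learning models expect.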
Secondly, we implement six transfer learning models, including Xception [19], InceptionV3 [20], ResNet50 [21], VGG16 [16], VGG19 [17], and DenseNet121 [18].
The VGG models were proposed by Oxford University's Visual Geometry Group. The VGG architectures are widely used in image classification. In this work, VGG16 [16] and VGG19 [17] are used, with 16 and 19 weight layers respectively.
DenseNet (Dense Convolutional Network) [18] is an architecture that uses shorter connections between layers to make networks even deeper. Inside this network, each layer is connected to all deeper layers to enable maximum information flow between the layers of the network. The architecture consists of stacked dense blocks and transition layers. Notably, this model is often applied in the medical imaging field [23]–[25]. In this work, we use the DenseNet121 architecture, which has 121 layers.
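As an illustration of how such a transfer model can be assembled (a sketch assuming a Keras/TensorFlow environment, not the authors' exact code; the frozen backbone and the Adam optimizer are our assumptions), DenseNet121 with ImageNet weights and a three-class softmax head might look like:

```python
NUM_CLASSES = 3                 # COVID-19, normal, pneumonia
INPUT_SHAPE = (224, 224, 3)     # matches the resized X-Ray images

def build_transfer_model(weights="imagenet"):
    """DenseNet121 backbone with a softmax head and no extra hidden layer."""
    from tensorflow.keras import Model, layers
    from tensorflow.keras.applications import DenseNet121

    base = DenseNet121(weights=weights, include_top=False,
                       input_shape=INPUT_SHAPE, pooling="avg")
    base.trainable = False      # reuse the ImageNet features as-is
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(base.output)
    model = Model(base.input, outputs)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

The same pattern applies to the other five backbones by swapping the `DenseNet121` import for, e.g., `VGG16` or `Xception`.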
ResNet [21] is designed to handle the vanishing gradient problem. It implements skip connections between layers to make training more efficient. In our experiments, we use the ResNet50 model with 50 layers.
The InceptionV3 [20] model is a deep learning architecture that enlarges the depth and width of the network to make better use of computing resources. Its inception modules are stacked, with intermediate layers to reduce dimensionality. Another model, Xception [19], was developed by Google Inc. It has been shown to outperform Inception on a large-scale image classification dataset.
In this work, the six models transfer weights learned on ImageNet [26] to the COVID-19 prediction model. The main idea of the method is to improve classification results using public data and to exploit the benefits of transfer learning. It helps to address the limited size of the local dataset. In addition, it also eases the training process on modest devices.
To handle imbalanced classes, we calculate and set class weights for the models. This is a simple and effective method to address the problem. For binary classification, the class weight is computed from the frequencies of the positive and negative classes by inverting them. In this work, we use multi-label binarization to handle multi-class classification: the labels of the dataset are encoded into binary vectors. The class weight is a dictionary in the format {class label: class weight}.
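A minimal sketch of this inverse-frequency weighting, using the Experiment 1 training counts from Table I (the function name and the total/(n_classes × count) normalization are our assumptions, mirroring scikit-learn's "balanced" heuristic):

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Map each class label to total/(n_classes * count):
    rare classes receive proportionally larger weights."""
    counts = Counter(labels)
    total = len(labels)
    return {cls: total / (len(counts) * n) for cls, n in counts.items()}

# Training-set distribution from Table I, Experiment 1
train_labels = ["covid"] * 460 + ["pneumonia"] * 3418 + ["normal"] * 1266
weights = inverse_frequency_weights(train_labels)
# The minority COVID-19 class receives the largest weight, so its errors
# count more heavily in the loss (Keras: model.fit(..., class_weight=weights)).
```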
Finally, for evaluation, we use the five most commonly used multi-class classification metrics: accuracy, precision, recall, F1-score (F1) and Area under the ROC Curve (AUC) [27].
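For reference, the first four metrics can be computed as in the sketch below. This is a hand-rolled, macro-averaged illustration; the paper does not state which averaging it uses, and in practice scikit-learn's `classification_report` and `roc_auc_score` cover the same ground.

```python
def classification_metrics(y_true, y_pred, classes):
    """Accuracy plus macro-averaged precision, recall and F1.
    (AUC additionally needs predicted probabilities, e.g. via
    sklearn.metrics.roc_auc_score with multi_class='ovr'.)"""
    n = len(y_true)
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / n
    precs, recs, f1s = [], [], []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        precs.append(prec)
        recs.append(rec)
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    k = len(classes)
    return acc, sum(precs) / k, sum(recs) / k, sum(f1s) / k
```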
III. PERFORMANCE ANALYSES
This section presents a comparison and selection among the six transfer learning models for detecting COVID-19 in patients' X-Ray images. The performance of the models is evaluated in four experiments. Firstly, the public dataset [22] is used for training and testing. In the second experiment, the new images collected from the local hospital are fused with the public dataset and used to train and test the models. Next, we train the transfer learning models on the public dataset and test them on the local dataset. Last, the augmented dataset is used for training, and the local dataset is used for testing.
TABLE I: The number of training and testing subjects used
in four experiments
Exp. Training dataset Testing dataset
Covid Pneumonia Normal Covid Pneumonia Normal
1 460 3418 1266 116 317 855
2 835 3418 1266 491 317 855
3 460 3418 1266 375 317 855
4 835 3418 1266 375 317 855
In detail, Table I presents the number of samples per class used for training and testing. Fig 3 visualizes the class imbalance of the dataset. This problem is addressed by the class-weight setting described in Section II.
Fig 3: Number of samples per class in the training public dataset
A. Computing Equipment
The experiments are implemented using the Keras library [28] with a TensorFlow backend [29], on a computer with an NVIDIA 1080 Ti GPU (16GB memory).
B. Experiment setup
To find the most suitable model for COVID-19 prediction on the local dataset, we use six popular deep learning architectures: VGG16 [16], VGG19 [17], DenseNet121 [18], Xception [19], ResNet50 [21], and InceptionV3 [20]. These models are pretrained on the ImageNet dataset [26], and their weights are transferred to the prediction models.
We tuned the models over multiple hyperparameters. Learning rates were selected between 10^-6 and 10^-3, and batch sizes between 16 and 32 were applied. The models do not use an extra hidden layer in their architecture. The number of epochs of the classification models is set to 50.
C. Results and discussions
Tables II to V present the classification results of our experiments. The maximum values are bolded, while the second-best values are italicized.
For the first experiment, the six models are trained on 5144 images of the public dataset and evaluated on 1288 images of the public dataset. Table II and Fig 4 present the classification results of these models. It is clear that the DenseNet121 model achieves the best overall accuracy of 98.21%. In addition, the precision, recall, F1 and AUC of this model also outperform the other transfer learning models.
TABLE II: Performance of Experiment 1 on the public dataset
Model Accuracy Precision Recall F1 AUC
VGG19 96.27 96.27 96.27 96.27 96.69
Densenet121 98.21 98.22 98.21 98.2 98.83
Xception 97.2 97.22 97.2 97.21 97.45
ResNet50 95.65 95.69 95.65 95.61 97.25
InceptionV3 97.98 97.98 97.98 97.97 98.67
Fig 4: Comparison of classifying metrics on Experiment 1
For the second experiment, we use 750 images of the local dataset collected at An Giang Regional General Hospital. These images are split into two parts: 375 images are added to the public training dataset and 375 images are appended to the public testing dataset. Consequently, the augmented dataset combines the local and public datasets. In this experiment, the transfer learning models are trained and evaluated on the augmented dataset. The classification results are shown in Table III and Fig 5. The DenseNet121 model again has better accuracy than the other classifiers, with an accuracy of 98.62%. Tables II and III show that the accuracy rises from 98.21% to 98.62% when the locally collected COVID-19 X-Ray images are included. For the other evaluation metrics, this model also provides the best results, with a precision of 98.64%, a recall of 98.62%, an F1-score of 98.6% and an AUC of 99.18%.
Moreover, the local images enrich the current public dataset and improve the prediction performance of the models. In detail, the VGG16 model improves its accuracy from 94.1% to 97.65%, and the accuracy of VGG19 rises from 96.27% to 97.65%. On the augmented dataset, the ResNet50 model improves its accuracy by 2.49 percentage points over the public dataset. However, for the Xception and InceptionV3 models, the accuracy drops slightly to 95.91% and 97.96%. This indicates minor discrepancies between the images from the two datasets.
TABLE III: Performance of Experiment 2 on the augmented
dataset
Model Name Accuracy Precision Recall F1 AUC
VGG16 97.65 97.65 97.65 97.64 98.26
VGG19 97.65 97.66 97.65 97.63 98.38
Densenet121 98.62 98.64 98.62 98.6 99.18
Xception 95.91 95.9 95.91 95.85 97.03
ResNet50 98.14 98.13 98.14 98.12 98.64
InceptionV3 97.96 97.99 97.96 97.97 98.01
Fig 5: Comparison of classifying metrics on Experiment 2
For the third experiment, the six models are trained on the public dataset and evaluated on the local dataset. The local dataset has 375 COVID-19, 317 pneumonia and 855 normal samples. Table IV and Fig 6 show the performance comparison of the six models. In this experiment, the DenseNet121 model still has the best performance (Accuracy: 98.45%, Precision: 98.45%, Recall: 98.45%, F1: 98.45% and AUC: 98.44%) compared with the other models.
TABLE IV: Performance of Experiment 3 on the public training and local testing datasets
Model Accuracy Precision Recall F1 AUC
VGG16 94.38 94.41 94.38 94.29 95.92
VGG19 96.25 96.26 96.25 96.25 96.7
Densenet121 98.45 98.45 98.45 98.45 98.44
Xception 97.8 97.82 97.8 97.81 97.87
ResNet50 96.32 96.34 96.32 96.28 97.51
InceptionV3 96.12 96.15 96.12 96.13 96.51
Fig 6: Comparison of classifying metrics on Experiment 3
For the last experiment, the models are trained on the augmented dataset and evaluated on the local dataset. The performance comparison of the six models is shown in Table V and
Fig 7. Table V shows that the DenseNet121 model still has the best performance (Accuracy: 98.51%, Precision: 98.54%, Recall: 98.51%, F1: 98.05% and AUC: 99.15%) compared with the other models. It is clear that the DenseNet121 model improves the classification performance compared with the results of Table IV. The results show that the enriched data of the augmented dataset leads to an increase in the performance of the DenseNet121 model.
TABLE V: Performance of Experiment 4 on the augmented
training and the local testing datasets
Model Name Accuracy Precision Recall F1 AUC
VGG16 97.48 97.48 97.48 97.46 98.19
VGG19 97.61 97.62 97.61 97.58 98.43
Densenet121 98.51 98.54 98.51 98.05 99.15
Xception 95.99 95.99 95.99 95.94 97.18
InceptionV3 97.8 97.84 97.8 97.81 97.92
Fig 7: Comparison of classifying metrics on Experiment 4
Table VI shows the corresponding confusion matrices for the DenseNet121 model. These matrices demonstrate that the DenseNet121 model should be selected to predict COVID-19 because of its near-perfect prediction results.
TABLE VI: Confusion matrices of the DenseNet121 model, over the classes COVID-19, NORMAL and PNEUMONIA: (a) Experiment 3; (b) Experiment 4
In addition, transfer learning approaches train on new data faster than training from scratch. Regarding the training time of the models on the fused dataset, the DenseNet121 model is trained in 35 minutes. VGG16 and VGG19 are trained in 33 and 39 minutes. The Xception model needs more time, at 62 minutes. Notably, the ResNet50 model needs only 30 minutes for training.
Overall, transfer learning using the DenseNet121 architecture performs very well on the chest X-Ray images. Figs 4, 5, 6 and 7 show that DenseNet121 (third group of columns) has not only the best accuracy (blue column) but also the best values of the other metrics, including precision (red column), recall (yellow column), F1 (green column) and AUC (orange column).
IV. COMPARISON WITH RELATED WORKS
In the proposed study, six models, Xception [19], InceptionV3 [20], ResNet50 [21], VGG16 [16], VGG19 [17], and DenseNet121 [18], are used. Moreover, 850 new chest X-Ray images of COVID-19-positive patients are used to enrich the data. In addition, to compare with related studies, Table VII presents a comparison of the proposed technique with studies in the literature using chest X-Ray images to detect COVID-19.
TABLE VII: Comparison of studies for detecting COVID-19 [30]
Paper Chest X-Ray COVID-19 images Accuracy
Singh et al [31] 50, Cohen Dataset 94.7%
Sahinbas et al [32] 50, Cohen Dataset 80%
Narin et al [33] 341, Cohen Dataset 96.1%
Minaee et al [34] 71, Cohen Dataset –
Maguolo et al [35] 144, Cohen Dataset –
Hemdan et al [36] 25, Cohen Dataset 90%
Khasawneh et al [30] 368, Cohen Dataset 99%
Our work 850 new images are fused up to 98.5%
V. CONCLUSION
In conclusion, we investigated a transfer learning approach for COVID-19 prediction using X-Ray images. The results show that COVID-19 prediction on current limited datasets is improved by the addition of training images. Experiments demonstrate that the DenseNet121 model can detect COVID-19 automatically from chest X-Rays when trained with X-Ray images collected from COVID-19 patients, regular pneumonia patients, and people with normal chest X-Rays. Moreover, the outcomes demonstrate that this model outperformed the others, with the highest accuracy, precision, recall, F1-score and AUC of 98.51%, 98.54%, 98.51%, 98.05% and 99.15%, respectively.
In the future, we will develop a federated learning model for predicting clinical outcomes in patients with COVID-19, as well as use computer vision algorithms to improve performance.
ACKNOWLEDGMENT
The authors gratefully acknowledge An Giang Regional
General Hospital for providing access to Chest X-Ray images
in this research
REFERENCES
[1] H Ritchie et al., “Coronavirus pandemic (covid-19),” Our World in
Data, 2020, https://ourworldindata.org/coronavirus.
[2] H.-Y Wang, X.-L Li, Z.-R Yan, X.-P Sun, J Han, and B.-W Zhang,
“Potential neurological symptoms of covid-19,” Therapeutic advances
in neurological disorders, vol 13, p 1756286420917830, 2020.
[3] A Alharthy, M Abuhamdah, A Balhamar, F Faqihi, N Nasim,
S Ahmad, A Noor, H Tamim, S A Alqahtani, A A A S B.
Abdulaziz Al Saud et al., “Residual lung injury in patients recovering
from covid-19 critical illness: A prospective longitudinal point-of-care
lung ultrasound study,” Journal of Ultrasound in Medicine, vol 40, no 9,
pp 1823–1838, 2021.
[4] J Bachman, “Reverse-transcription pcr (rt-pcr),” Methods in enzymology, vol 530, pp 67–74, 2013.
[5] A Tahamtan and A Ardebili, “Real-time rt-pcr in covid-19 detection:
issues affecting the results,” Expert review of molecular diagnostics,
vol 20, no 5, pp 453–454, 2020.
[6] S Kameswari, M Brundha, and D Ezhilarasan, “Advantages and
disadvantages of rt-pcr in covid 19,” European Journal of Molecular
and Clinical Medicine, pp 1174–1181, 2020.
[7] S Bhattacharya, P K R Maddikunta, Q.-V Pham, T R Gadekallu,
C L Chowdhary, M Alazab, M J Piran et al., “Deep learning
and medical image processing for coronavirus (covid-19) pandemic: A
survey,” Sustainable cities and society, vol 65, p 102589, 2021.
[8] A Abbas, M M Abdelsamea, and M M Gaber, “Classification of
covid-19 in chest x-ray images using detrac deep convolutional neural
network,” Applied Intelligence, vol 51, no 2, pp 854–864, 2021.
[9] S Liang, H Liu, Y Gu, X Guo, H Li, L Li, Z Wu, M Liu, and
L Tao, “Fast automated detection of covid-19 from medical images
using convolutional neural networks,” Communications Biology, vol 4,
no 1, pp 1–13, 2021.
[10] W H Organization et al., “Standardization of interpretation of chest
radiographs for the diagnosis of pneumonia in children,” World Health
Organization, Tech Rep., 2001.
[11] S Albahli, “A deep neural network to distinguish covid-19 from other
chest diseases using x-ray images,” Current medical imaging, vol 17,
no 1, pp 109–119, 2021.
[12] A Z Khuzani, M Heidari, and S A Shariati, “Covid-classifier: An
automated machine learning model to assist in the diagnosis of
covid-19 infection in chest x-ray images,” Scientific Reports, vol 11, no 1,
pp 1–6, 2021.
[13] J Rasheed, A A Hameed, C Djeddi, A Jamil, and F Al-Turjman, “A
machine learning-based framework for diagnosis of covid-19 from chest
x-ray images,” Interdisciplinary Sciences: Computational Life Sciences,
vol 13, no 1, pp 103–117, 2021.
[14] M J Horry, S Chakraborty, M Paul, A Ulhaq, B Pradhan, M Saha,
and N Shukla, “Covid-19 detection through transfer learning using
multimodal imaging data,” IEEE Access, vol 8, pp 149 808–149 824,
2020.
[15] H Ravishankar, P Sudhakar, R Venkataramani, S Thiruvenkadam,
P Annangi, N Babu, and V Vaidya, “Understanding the mechanisms of
deep transfer learning for medical images,” in Deep learning and data
labeling for medical applications Springer, 2016, pp 188–196.
[16] K Simonyan and A Zisserman, “Very deep convolutional networks for
large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
[17] ——, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2015.
[18] G Huang, Z Liu, L van der Maaten, and K Q Weinberger, “Densely
connected convolutional networks,” 2018.
[19] F Chollet, “Xception: Deep learning with depthwise separable convolutions,” 2017.
[20] C Szegedy, V Vanhoucke, S Ioffe, J Shlens, and Z Wojna, “Rethinking
the inception architecture for computer vision,” 2015.
[21] K He, X Zhang, S Ren, and J Sun, “Deep residual learning for image
recognition,” 2015.
[22] J P Cohen, P Morrison, L Dao, K Roth, T Duong, and M Ghassemi,
“Covid-19 image data collection: Prospective predictions are the future,” MELBA, p 18272, 2020.
[23] D Mason, “Su-e-t-33: pydicom: an open source dicom library,” Medical Physics, vol 38, no 6Part10, pp 3493–3493, 2011.
[24] W Ausawalaithong, A Thirach, S Marukatat, and T Wilaiprasitporn,
“Automatic lung cancer prediction from chest x-ray images using the deep learning approach,” in 2018 11th Biomedical Engineering International Conference (BMEICON). IEEE, 2018, pp 1–5.
[25] S Guendel, S Grbic, B Georgescu, S Liu, A Maier, and D Comaniciu,
“Learning to recognize abnormalities in chest x-rays with location-aware dense networks,” in Iberoamerican Congress on Pattern Recognition Springer, 2018, pp 757–765.
[26] L Fei-Fei, J Deng, and K Li, “Imagenet: Constructing a large-scale image database,” Journal of vision, vol 9, no 8, pp 1037–1037, 2009.
[27] M Hossin and M N Sulaiman, “A review on evaluation metrics for data classification evaluations,” International journal of data mining & knowledge management process, vol 5, no 2, p 1, 2015.
[28] F Chollet et al (2015) Keras [Online] Available: https://github.com/fchollet/keras
[29] M Abadi et al., “TensorFlow: Large-scale machine learning
on heterogeneous systems,” software available from tensorflow.org [Online] Available: https://www.tensorflow.org/
[30] N Khasawneh, M Fraiwan, L Fraiwan, B Khassawneh, and A Ibnian, “Detection of covid-19 from chest x-ray images using deep convolutional neural networks,” Sensors, vol 21, no 17, p 5940, 2021.
[31] D Singh, V Kumar, V Yadav, and M Kaur, “Deep neural network-based screening model for covid-19-infected patients using chest x-ray images,” International Journal of Pattern Recognition and Artificial Intelligence, vol 35, no 03, p 2151004, 2021.
[32] K Sahinbas and F O Catak, “Transfer learning-based convolutional neural network for covid-19 detection with x-ray images,” in Data Science for COVID-19 Elsevier, 2021, pp 451–466.
[33] A Narin, C Kaya, and Z Pamuk, “Automatic detection of coronavirus disease (covid-19) using x-ray images and deep convolutional neural networks,” Pattern Analysis and Applications, pp 1–14, 2021.
[34] S Minaee, R Kafieh, M Sonka, S Yazdani, and G J Soufi, “Deep-covid: Predicting covid-19 from chest x-ray images using deep transfer learning,” Medical image analysis, vol 65, p 101794, 2020.
[35] G Maguolo and L Nanni, “A critic evaluation of methods for covid-19 automatic detection from x-ray images,” Information Fusion, vol 76,
pp 1–7, 2021.
[36] E E.-D Hemdan, M A Shouman, and M E Karar, “Covidx-net: A framework of deep learning classifiers to diagnose covid-19 in x-ray images,” arXiv preprint arXiv:2003.11055, 2020.