
Digital signal processing combined with machine learning in diabetes diagnosis


DOCUMENT INFORMATION

Basic information

Title: Digital signal processing combined with machine learning in diabetes diagnosis
Author: Do Cong Tuan
Supervisor: Assoc. Prof. PhD. Nguyen Thanh Tung
University: Vietnam National University, Hanoi International School
Major: Informatics and Computer Engineering
Document type: Graduation project
Year of publication: 2024
City: Hanoi
Format
Number of pages: 60
Size: 2.5 MB


Structure

  • Chapter 1 Overview
    • 1.1. The necessity of the topic
    • 1.2. Recent works on machine learning for Raman spectroscopy analysis
  • Chapter 2 Theory
    • 2.1. Diabetes
    • 2.2. Artificial Intelligence
      • 2.2.1. Introduction of Artificial Intelligence
      • 2.2.2. History of Artificial Intelligence in medicine
      • 2.2.3. Extra Tree
      • 2.2.4. The future of AI in healthcare
    • 2.3. Signal Processing in machine learning
      • 2.3.3. The introduction of signal processing
      • 2.3.4. Benefits of preprocessing signals in machine learning
    • 2.4. Raman Spectroscopy
      • 2.4.3. Introduction of Raman scattering
      • 2.4.4. Application of Raman scattering in healthcare
    • 2.5. Background correction in Raman spectroscopy
  • Chapter 3 Results
    • 3.1. Input Data for processing
      • 3.1.1. Data collection
    • 3.2. Polynomial Fitting Method for baseline determination
      • 3.2.1. Improved Modified Polynomial principles
      • 3.2.2. Project set up
      • 3.2.3. Baseline determination
    • 3.3. Background correction results
    • 3.4. Comparison of the signal before and after processing using SVM label classification
      • 3.4.2. Accuracy of the unprocessed signals
      • 3.4.3. Accuracy of the processed signals
      • 3.4.4. Adjustments and improvements
      • 3.4.5. Using Extra Tree classifier instead of SVM for comparison
  • Chapter 4 Discussion
    • 4.1. Potential clinical implications
    • 4.2. Challenges for real-world application of the developed system
  • Chapter 5 Conclusion

Content

Digital signal processing combined with machine learning in diabetes diagnosis

Overview

The necessity of the topic

Insulin is an essential hormone that regulates blood sugar levels and maintains metabolic balance in the body. Diabetes is a chronic condition characterized by the body's inability to properly use insulin or to produce it in sufficient amounts, leading to elevated blood glucose levels, known as hyperglycemia. This condition can severely damage various bodily systems, especially blood vessels and neurons.

According to the Institute for Health Metrics and Evaluation, global diabetes cases have surged dramatically, rising from 108 million in 1980 to 422 million in 2014, an almost fourfold increase in prevalence and associated health risks over roughly 35 years. This rise is more prominent in low- and middle-income countries, partly due to the growing prevalence of obesity and lack of physical activity.

As of 2014, 8.5% of adults aged 18 and older were affected by diabetes, leading to approximately 1.5 million deaths annually, with nearly half of these fatalities occurring in individuals under 70. Additionally, diabetes is linked to around 460,000 deaths from kidney disease, and roughly 1 in 5 deaths from cardiovascular disease can be attributed to diabetes (Institute for Health Metrics and Evaluation, 2019).

Between 2000 and 2019, the global age-standardized mortality rate from diabetes increased by 3%, with a notable 13% rise in lower-middle-income countries. In contrast, the global risk of dying from the major noncommunicable diseases (cancer, chronic respiratory diseases, diabetes, and cardiovascular diseases) between the ages of 30 and 70 declined by 22% during the same period.

According to the International Diabetes Federation (IDF), by 2021 approximately 537 million people worldwide were living with diabetes, representing 1 in 10 adults aged 20 to 79. In addition, about 1 in 6 babies born is affected by diabetes during fetal development, and up to 50% of adults living with diabetes remain undiagnosed (MINISTRY OF HEALTH, 2022).

In Vietnam, nearly 5 million individuals are affected by diabetes, with over 55% facing complications. A 2021 Ministry of Health survey indicates that the adult diabetes incidence is 7.1%, yet only about 35% of cases have been diagnosed and merely 23.3% are receiving proper management and treatment. According to projections from the International Diabetes Federation (IDF), the number of diabetes cases in Vietnam and worldwide is expected to rise rapidly.

Recent works on machine learning for Raman spectroscopy analysis

Research on disease prevention increasingly utilizes machine learning in conjunction with Raman spectroscopy. Despite variations in study approaches, data samples, and measurement tools, the integration of machine learning with Raman spectroscopy consistently produces favorable outcomes. This promising synergy lays a strong groundwork for advancing non-invasive methods of disease diagnosis.

In the research paper titled "Recent Progresses in Machine Learning Assisted Raman Spectroscopy," the authors highlighted the advantages of combining machine learning techniques with Raman spectroscopy to improve data analysis. The study evaluated various statistical methods, such as Principal Component Analysis, K-Nearest Neighbor, Random Forest, and Support Vector Machines, alongside deep learning algorithms like Artificial Neural Networks and Convolutional Neural Networks. This research underscored the extensive applicability of these advanced techniques in enhancing Raman spectroscopy data interpretation.

Machine learning is making significant advancements in materials science, biomedicine, and food science, enhancing analytical accuracy and bulk identification. The study also addresses its limitations and suggests potential avenues for future research.

Recent research indicates that integrating machine learning with Raman spectroscopy is an effective method for detecting and classifying breast cancer, a major health concern for women. Given that various breast cancer subtypes respond differently to treatments, precise classification of these subtypes is essential for improving treatment outcomes.

A recent study utilized Raman spectroscopy combined with machine learning techniques to effectively differentiate normal breast cells from cancerous ones and to classify various breast cancer subtypes. By collecting Raman spectra from cultured breast cancer cell lines and applying principal component analysis (PCA) with discriminant function analysis (DFA) and support vector machines (SVM), the study achieved over 97% accuracy in distinguishing normal from cancerous cells and over 92% accuracy in subtype classification. The research highlights the potential of specific Raman spectral features as biomarkers for cancer, noting increased intensity of intrinsic Raman bands in cancer cells. This approach provides a rapid method for analyzing breast cancer, revealing significant differences in intracellular composition and molecular structure across subtypes.

A recent study investigated the variations in concentrations of fructose, glucose, maltose, sucrose, and other carbohydrates in honey. Utilizing machine learning and Raman spectroscopy, the research presented an effective method for analyzing differences in these chemical components. Original honey samples were collected from local beekeepers to support the findings.

This research focused on analyzing honey samples from Suichang using Raman spectroscopy. The spectral data underwent Savitzky-Golay smoothing and was processed with the partial least squares (PLS) method. Key PLS features were selected based on their contribution rates for further analysis. To classify pure and fake honey samples, various machine learning techniques were employed, including support vector machine (SVM), probabilistic neural network (PNN), and convolutional neural network (CNN).

Traditional diabetes detection methods often rely on invasive blood tests, which, despite their accuracy, present challenges such as high costs, lengthy wait times for results, discomfort, and risks of blood-borne diseases. In response to these limitations, there is a growing interest in non-invasive testing techniques. My project seeks to harness artificial intelligence to analyze Raman spectra, focusing on enhancing Raman signals through noise filtering. The objective is to improve the effectiveness of Raman spectrum analysis using machine learning, ultimately providing more precise diagnostic outcomes.

Theory

Diabetes

Diabetes is a chronic condition characterized by the body's inability to effectively use insulin produced by the pancreas or insufficient insulin production, leading to difficulty in regulating blood sugar levels.

Glucose, with the chemical formula C6H12O6, is the most common monosaccharide and plays a central role in conditions like hyperglycemia and diabetes. Plants and most algae synthesize glucose from water and carbon dioxide through photosynthesis, and it is essential for energy metabolism in all living organisms. In plants, glucose is stored primarily as cellulose and starch, while animals store it as glycogen. Naturally occurring D-glucose is the biologically significant form, whereas L-glucose is produced artificially in smaller quantities and holds less importance.

Figure 2.1 Haworth projection of α-d-glucopyranose (Wikipedia, 2023)

The liver plays a vital role in regulating the body's glucose levels by acting as a reserve, synthesizing and storing glucose according to the body's needs. This process is primarily controlled by key hormones, including insulin and glucagon. After meals, elevated insulin levels and decreased glucagon levels prompt the body to store glucose in the form of glycogen.

During fasting periods, particularly at night or between meals, the liver converts glycogen back into glucose through glycogenolysis. Additionally, the liver synthesizes essential glucose from waste products, lipid byproducts, and amino acids via gluconeogenesis.

Figure 2.2 Glucose production by the liver during fasting conditions (gluconeogenesis and glycogenolysis) (University of California, 2007)

Artificial Intelligence

Artificial intelligence (AI) refers to the capability of machines and computer systems to exhibit intelligence similar to that of humans or animals. Defined by John McCarthy as "the science and engineering of creating intelligent machines, particularly intelligent computer programs," AI involves utilizing computers to understand human intelligence, extending beyond biologically observable methods.

Alan Turing's influential 1950 work "Computing Machinery and Intelligence" marked the commencement of the artificial intelligence discussion. Turing, often known as the "father of computer science," famously questioned whether machines can think and proposed the "Turing Test," in which a human interrogator attempts to differentiate between responses generated by a computer and those written by a human (Turing, 1950). Despite criticism, the Turing Test, which builds on concepts of language, stands as a pivotal milestone in the evolution of artificial intelligence and continues to be a relevant philosophical discussion today.

"Artificial Intelligence: A Modern Approach" by Stuart Russell and Peter Norvig significantly influences the conversation surrounding AI. The authors identify four key objectives or definitions of artificial intelligence, categorizing computer systems based on their reasoning capabilities and their ability to think or act (Norvig, 2020).

In recent years, artificial intelligence has experienced significant advancements, establishing its importance in scientific and business sectors. The launch of OpenAI's ChatGPT marks a pivotal moment, highlighting a revitalized and vigorous era in AI innovation.

2.2.2 History of Artificial Intelligence in medicine

In their article published in Gastrointestinal Endoscopy, Vivek Kaul, Sarah Enslin, and Seth A. Gross discuss how initial limitations of models have hindered the adoption of artificial intelligence (AI) in medicine. They emphasize that advancements in deep learning technology are gradually overcoming these barriers, paving the way for broader acceptance and utilization of AI in the medical field.

Deep learning has ushered in a new era for medicine by utilizing the robust computing capabilities of computers, analyzing complex algorithms, and incorporating self-learning mechanisms.

The integration of AI into clinical practice through risk assessment models significantly enhances diagnostic accuracy and improves workflow efficiency. The advent of deep learning in medicine represents a major breakthrough, enabling the healthcare sector to overcome previous challenges and substantially boost the potential of AI technologies.

Support Vector Machine (SVM) is a powerful supervised machine learning algorithm primarily utilized for classification tasks. It operates by identifying the optimal hyperplane that maximizes the separation between different classes in an N-dimensional space. Developed in the 1990s by Vladimir N. Vapnik and his team, who published "Support vector method for function approximation, regression estimation and signal processing" in 1995, SVM has become a fundamental technique in the field of machine learning.

SVM differentiates between two classes by identifying the optimal hyperplane that maximizes the margin between the closest data points of each class. The dimensionality of the input data dictates whether this hyperplane appears as a line in a 2D space or as a plane in an n-dimensional space. By maximizing the margin, SVM establishes the most effective decision boundary, enabling the model to generalize to unseen data and enhance prediction accuracy. The data points that lie closest to the optimal hyperplane are known as support vectors, as they are crucial in defining the maximum margin.

Support Vector Machines can manage both linear and nonlinear classification problems. When data cannot be separated linearly, kernel functions come into play, transforming the data into a higher-dimensional space to achieve linear separability. The effectiveness of this technique, often referred to as the "kernel trick," depends on selecting an appropriate kernel function (linear, polynomial, radial basis function (RBF), or sigmoid kernel) for the data characteristics and the specific use in each case (IBM, 2023).
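To make the kernel idea concrete, here is a small, hedged illustration (not taken from the thesis) using scikit-learn's SVC on a toy dataset of two concentric rings, where a linear kernel struggles but the RBF kernel separates the classes easily; the dataset and parameter choices are assumptions for demonstration only.

```python
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Toy non-linearly separable data: two concentric rings of points
X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Compare the kernels mentioned above on the same data
for kernel in ("linear", "poly", "rbf", "sigmoid"):
    clf = SVC(kernel=kernel).fit(X_train, y_train)
    print(kernel, round(clf.score(X_test, y_test), 3))
```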

2.2.3 Extra Tree

In machine learning, an extra tree is a predictive model that connects observations of an object to conclusions about its target value. Each internal node represents a variable, while the lines to child nodes indicate specific values for that variable. The leaf nodes provide the predicted target value based on the variable values along the path from the root to the leaf.

Extra trees are a popular technique in data mining, characterized by a tree structure where leaves denote classifications and branches illustrate attribute combinations leading to those classifications. The learning process involves recursively dividing the source set into subsets based on attribute values until no further splits are feasible or a single classification applies to each subset. Furthermore, a random forest classifier enhances classification accuracy by utilizing multiple extra trees.

Extra trees are valuable for calculating conditional probabilities and are characterized by their integration of mathematical and computational methods, which assist in the description, classification, and generalization of data sets.

2.2.4 The future of AI in healthcare

Artificial intelligence (AI) is poised to play a vital role in the future of healthcare, particularly in diagnosis and treatment recommendations. Despite existing challenges, advancements in AI's image analysis capabilities indicate that machines will increasingly assess radiology and pathology images, uncovering details that are often missed by humans. Additionally, the growing implementation of speech and text recognition technologies will enhance tasks like documenting clinical notes and improving patient interactions.

Integrating AI into clinical practice faces several challenges, including regulatory approvals, standardization for consistent functionality, clinician education, and the establishment of payment frameworks by organizations. Additionally, ongoing updates and enhancing user qualifications are essential for the widespread adoption of AI in healthcare. While addressing these issues may take time, it is clear that AI will act as a supportive tool for healthcare professionals, complementing their unique human skills like empathy and holistic understanding.

The rise of AI in healthcare may lead to temporary job displacements, yet it is clear that AI will enhance, rather than replace, the role of doctors in patient care. Although technology may change job responsibilities, the irreplaceable qualities of human healthcare professionals remain crucial for delivering compassionate and comprehensive care.

Signal Processing in machine learning

Signal processing is a crucial intermediary in machine learning, transforming raw data into a refined format suitable for analysis and modeling. By employing various techniques and algorithms, it enhances the accuracy and efficiency of machine learning models by extracting relevant features and reducing noise. The significance of signal processing lies in its ability to improve the quality of input data, which is essential for developing robust and precise models. Without effective signal processing, machine learning models may struggle with noisy or incomplete data, leading to poor results. Additionally, signal processing works alongside machine learning algorithms to uncover hidden patterns and trends, enabling machines to better understand and interpret data for informed decision-making and predictions.

2.3.3 The introduction of signal processing

Signal processing is a multidisciplinary domain focused on analyzing, modifying, and interpreting signals to extract meaningful information from various sources, including audio, video, images, and sensor readings. These signals can be analog or digital, continuous or discrete. The main objective of signal processing is to enhance signal quality, extract valuable features, and support effective data analysis through the application of mathematical techniques, algorithms, and tools.

Signal processing is divided into two main categories: analog signal processing, which handles continuous signals, and digital signal processing, which focuses on discrete signals represented as numerical sequences. In machine learning, effective signal processing is essential for data preprocessing, which prepares the data for learning algorithms. This process includes techniques to eliminate noise, filter out irrelevant information, and extract significant features from the signals, ensuring the accuracy and reliability of machine learning models.

Signal processing techniques are diverse and tailored to specific applications and signal characteristics. Key methods include filtering, noise reduction, feature extraction, time-frequency analysis, and pattern recognition, each playing a vital role in effectively analyzing and interpreting signals.

Filtering is essential for eliminating unwanted noise or interference from signals, employing methods like low-pass, high-pass, and band-pass filters tailored to specific frequency ranges. Effective noise reduction techniques, including adaptive filtering, spectral correction, and wavelet denoising, enhance signal clarity. Additionally, feature extraction plays a crucial role in signal processing by identifying and capturing relevant information, utilizing techniques such as Fourier analysis, wavelet transforms, and principal component analysis (PCA) to represent signals more concisely and meaningfully.
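As a brief, hedged illustration of two of the techniques named above (this example is not from the thesis), the snippet below applies Savitzky-Golay smoothing to noisy synthetic spectra with SciPy and then reduces them with PCA from scikit-learn; the signal shape, window length, and component count are arbitrary choices for demonstration.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
x = np.linspace(800, 1800, 1001)                          # wave numbers (cm^-1)
clean = np.exp(-((x - 1300) ** 2) / (2 * 30 ** 2))        # one synthetic Raman-like peak
spectra = clean + 0.05 * rng.standard_normal((20, x.size))  # 20 noisy copies

# Noise reduction: Savitzky-Golay smoothing of each spectrum
smoothed = savgol_filter(spectra, window_length=21, polyorder=3, axis=1)

# Feature extraction: project the smoothed spectra onto 5 principal components
features = PCA(n_components=5).fit_transform(smoothed)
print(features.shape)  # (20, 5)
```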

Signal processing is essential in machine learning, facilitating the preprocessing and enhancement of data for improved analysis accuracy. By utilizing diverse techniques, researchers can extract valuable insights from signals, fostering advancements across various sectors such as healthcare, finance, and telecommunications.

2.3.4 Benefits of preprocessing signals in machine learning

Signal processing plays a crucial role in machine learning by enhancing the quality and relevance of input data, leading to more accurate and reliable predictions and model training.

Signal processing effectively reduces noise and improves data quality. Instruments such as Raman spectrometers frequently capture data that includes unwanted elements, which can obscure the true signal and result in inaccurate outcomes.

To achieve accurate results from machine learning models, it is essential to employ techniques that filter out noise and clean the data. Signal processing plays a crucial role in enhancing the quality of input data, leading to improved predictions and insights.

Signal processing also supports feature extraction for machine learning, enabling the identification of key features with strong predictive capabilities. By employing various feature extraction techniques, signal processing helps algorithms concentrate on the most informative aspects of the data. This not only reduces dimensionality but also enhances performance and accelerates model training.

Signal processing techniques play a vital role in normalizing and preprocessing data, ensuring consistency and comparability across various signal types. This crucial step eliminates biases, promoting fair representation of data in machine learning models. By employing normalization methods like scaling, input data are aligned, leading to more reliable and unbiased predictions. Often, raw signals can be weak, distorted, or of low quality, complicating the extraction of meaningful information; signal processing techniques enhance these signals by reducing noise, sharpening edges, improving contrast, or amplifying specific features. This improves the signal-to-noise ratio, enabling machine learning models to detect patterns and make accurate predictions (Diniz, 2023).

In today's fast-paced world, machine learning applications increasingly rely on real-time data analysis and decision-making. Signal processing facilitates this by minimizing computational complexity and enabling swift, responsive analysis of incoming signals. This capability is especially vital in time-sensitive sectors like finance, healthcare, and robotics.

Signal processing lays the groundwork for precise data analysis and modeling in machine learning. It enhances the quality of input data through noise reduction, essential feature extraction, data preprocessing, and signal enhancement. These techniques significantly improve the performance and effectiveness of machine learning models, ensuring more accurate outcomes.

Raman Spectroscopy

Raman spectroscopy is named after the renowned Indian physicist Sir Chandrasekhara Venkata Raman, a pivotal figure in the study of light scattering. Born on November 7, 1888, in the former Madras Province of India, Sir Raman received the Nobel Prize in Physics in 1930 for his contributions to the field. He passed away on November 21, 1970, leaving a lasting legacy in the world of science.

In 1921, during his travels in Europe, Raman observed the distinctive blue hues of the Mediterranean Sea and of glaciers. Fascinated by this phenomenon, he performed experiments with monochromatic light from a mercury arc lamp to analyze the spectrum of transparent materials. His investigations uncovered distinct lines in the spectrum, which were subsequently recognized as Raman lines.

On March 16, 1928, during a scientific conference in Bangalore, C.V. Raman presented his findings, which were met with skepticism as some physicists found it difficult to replicate his results. However, Peter Pringsheim became the first German scientist to successfully reproduce Raman's work, which alleviated doubts and introduced the terms "Raman effect" and "Raman lines" to the scientific community.

Raman spectroscopy is a non-destructive analytical technique that utilizes scattered light to analyze the vibrational energy modes of materials, providing valuable insights into their chemical structure, phase, polymorphism, crystallinity, and molecular interactions. The method is based on the interaction of light with chemical bonds and was named after C. V. Raman, who, with K. S. Krishnan, first discovered Raman scattering in 1928.

Figure 2.4 Scheme of Raman scattering (Linley Li Lin, Xinyuan Bi, Yuqing Gu, Fu Wang, 2021)

Raman spectroscopy involves the scattering of high-intensity laser light by molecules in a sample. While the majority of this scattered light, known as Rayleigh scattering, retains the same wavelength as the laser and offers limited information, a tiny fraction, approximately 0.0000001%, scatters at different wavelengths; this is referred to as Raman scattering. This unique scattering provides insights into the sample's chemical structure and specific properties.

2.4.4 Application of Raman scattering in healthcare

Raman spectroscopy is an effective technique for early cancer detection through the analysis of tissue molecular composition. By identifying subtle biochemical changes linked to cancer growth, the method uses Raman scattering from the vibrational modes of molecular bonds to assess the chemical composition of cells and tissues. It offers a noninvasive, label-free approach to detect alterations in the molecular fingerprints of cells or tissues affected by disease transformation.

Raman spectroscopy also plays a role in the identification and characterization of microorganisms in infectious diseases, enhancing diagnostic accuracy. Viruses, regarded as "organisms at the edge of life," are associated with numerous human pandemics. Traditional viral detection methods often require skilled labor and lengthy culture-based techniques, posing significant challenges. Although polymerase chain reaction (PCR) is the gold standard for viral detection, it can be resource-intensive, highlighting the need for more efficient diagnostic approaches.

Raman spectroscopy has emerged as a powerful tool for analyzing biological samples, particularly for viral detection from bodily fluids. The technique captures nanoscale biochemical signatures that indicate viral infections. Various Raman spectroscopic methods, including surface-enhanced Raman spectroscopy (SERS), Raman tweezers, tip-enhanced Raman spectroscopy (TERS), and coherent anti-Stokes Raman scattering (CARS), are utilized to identify viral components, assess virus-induced immune responses, and observe changes in biomolecule distribution within body fluids. SERS-based approaches, in particular, have shown significant promise in detecting viral infections in humans, demonstrating the potential of Raman spectroscopy in this critical area (Jijo Lukose, 2023).

Raman spectroscopy is an essential tool in the pharmaceutical industry for ensuring drug quality by analyzing molecular composition and confirming the presence of active ingredients while identifying contaminants. This technique is integrated throughout the drug product life cycle, from laboratory drug discovery to production under good manufacturing practice (GMP) conditions. It enables real-time measurements that enhance active pharmaceutical ingredient (API) reaction analytics, release testing, and statistical process control, making it particularly valuable for innovative manufacturing approaches such as continuous manufacturing. This aligns with the US FDA's principles of process analytical technology (PAT), which emphasize the importance of measuring critical process parameters to maintain quality attributes effectively.

The pharmaceutical industry is experiencing a growing need for rapid post-market testing procedures to confirm the identity, safety, and efficacy of drug products. As the sector evolves, it faces increasingly complex challenges, particularly in ensuring the quality of biological drugs and non-biological complex drugs (NBCD), including nanomaterials. These challenges stem from the sophisticated structures and advanced production processes of these drugs, which often involve more steps than traditional drug delivery systems. Chemometrics-assisted Raman spectroscopy is expected to play a vital role in overcoming these obstacles, especially in addressing the analytical challenges posed by heterogeneous sample matrices and complex biologicals, such as the mRNA vaccine technology developed for the SARS-CoV-2 pandemic.

Future challenges in the pharmaceutical industry will necessitate the development of follow-on products, commonly referred to as "bio- and nano-similar" products, as well as combination therapies and personalized medicine. These evolving factors present new production challenges and are actively being discussed by international working groups.

Traditional intracellular imaging methods like electron microscopy, cryoelectron microscopy, and immunofluorescence microscopy are invasive, often requiring fixation or freezing and the application of dyes or biomarkers, which can harm the cells under study. To address these challenges, innovative label-free and nondestructive imaging techniques have emerged for biochemical research, including coherent anti-Stokes Raman scattering (CARS) microscopy, multiphoton microscopy, and confocal Raman microscopy.

A confocal Raman system, built on a conventional confocal light microscope setup, is limited in resolution only by the diffraction limit. Recent advancements in acquisition times allow for the real-time visualization of cellular compartments in living cells without fixation or drying. Traditionally, high-resolution Raman imaging employed near-infrared laser excitation to reduce tissue damage and prevent autofluorescence in biological samples. However, ongoing developments are expanding the usable spectral range into the visible spectrum.

Living cells and microorganisms possess distinct Raman spectra that serve as unique fingerprint-like signatures, enabling precise identification of various species and insights into their responses to environmental stressors. Utilizing in situ Raman imaging with highly sensitive specialized instruments, these spectral fingerprints are transformed into detailed snapshots that reveal specific physiological reactions and molecular species.

Raman spectroscopy is ideal for live specimen analysis, enabling time-lapse experiments essential for studying growth-dependent phenomena and metabolic responses to drugs or substrates. Its non-invasive approach, which eliminates direct contact with samples, presents valuable opportunities for advancements in biomedical research.

Background correction in Raman spectroscopy

The Raman background correction process is crucial for enhancing Raman signal processing by eliminating unwanted background noise while preserving the sample's distinctive Raman signal. One prevalent source of background noise in Raman spectra is fluorescence, which occurs when laser photons induce unwanted luminescence in the sample or on its surface, resulting in a long, strong background curve that can compromise the accuracy of Raman peaks. Additionally, Rayleigh noise is another type of interference, caused by the reflection of the original laser light from the sample without any Raman interaction.

Background correction is essential for enhancing Raman spectra by removing noise from sources like sample fluorescence and environmental interference. This process leads to a cleaner spectrum, which significantly improves the accuracy of analysis. Background correction also establishes a consistent baseline for the Raman spectrum, enhancing the visibility of sample characteristics, simplifying analysis, and aiding in the normalization of Raman spectra so that they are more comparable across different samples and yield consistent analytical results.

The background correction process begins with the preparation and collection of data from various human body parts, including nails, behind the ears, and veins. This data yields the Raman spectrum of the sample. To ensure accurate measurements, it is essential to implement methods that minimize background noise and allow for sufficient collection time to obtain a reliable signal.

To accurately determine the background signal, initial assessments are essential. This involves analyzing the collected spectrum to pinpoint regions that may harbor background signals. Generally, the background manifests as a continuous, smooth curve, in contrast to the characteristic Raman signals, which are identified by sharp peaks.

Mathematical techniques like polynomial fitting, asymmetric least squares, and smoothing methods are utilized for this analysis. Additionally, specialized software such as OriginLab, MATLAB, Renishaw WiRE, Horiba LabSpec, and Thermo Scientific OMNIC offers integrated tools and algorithms for effective background identification and subtraction.

Once background correction has been performed, it is necessary to check and fine-tune the result, adjusting parameters or methods and repeating the process to achieve optimal results.

Results

Input Data for processing

Numerous studies and publications have demonstrated that Raman spectroscopy can quantitatively distinguish substances. According to M. J. Pelletier, Raman spectroscopy is primarily known for its role in determining molecular structures and conducting qualitative analyses; nonetheless, it has been utilized for quantitative analysis of sample composition for more than 50 years, as documented in the technical literature. The concentrations of analytes in these applications usually fall within the range of 0.1 to 1.0, and under optimal conditions it is possible to accurately measure concentrations below 100 ppm, highlighting the ability to differentiate substance levels, particularly sugar concentrations in the human body (Pelletier, 2003).

In order to carry out my project, simulating blood sugar levels became essential, but I encountered challenges due to the absence of genuine data from real patients. As a temporary solution, I opted for a practical approach to work around the data limitations until a more comprehensive and reliable method can be developed.

This study employs Raman spectroscopy to measure glucose concentration in the same individual's thumbnails before and after breakfast. The objective is to capture the signals that indicate the increase in glucose levels following food intake, contrasting with the lower levels observed prior to eating. These signals will be analyzed using machine learning, specifically the SVM model, to assess its ability to differentiate between the two conditions and evaluate the accuracy of this differentiation. Successfully distinguishing glucose concentration variations could pave the way for training models to predict glucose levels across diverse populations, ultimately helping to differentiate between healthy individuals and those with diabetes.

To collect Raman spectra effectively, I began by powering on the Raman meter and control software, ensuring proper calibration per the manufacturer's guidelines. Next, I meticulously cleaned and dried the nails to remove any dirt or oil that could interfere with the signal. I then selected the suitable laser wavelength, usually between 785 nm and 1064 nm, and adjusted the signal acquisition time and laser intensity to achieve a robust signal while preventing any potential damage.

After that, I placed the volunteer's hand on a stable, flat surface, positioned the fingernail under the Raman probe, and used a stand or hand holder if available to prevent any movement; keeping the hand steady is essential for accurate results. After completing the measurement, the Raman signal appears in the software, enabling the analysis of the Raman spectrum to identify the chemical components present on or beneath the nail surface. Additionally, when utilizing lasers, it is crucial to avoid prolonged skin exposure to ensure both safety and precise measurement outcomes.

The raw data collected from volunteers' bodies was saved in CSV format, yielding 20 files per volunteer: 10 recorded before breakfast and 10 recorded 15 minutes after the meal. Each CSV file consists of two columns: the first column indicates the wave number (x value), i.e., the Raman shift in cm⁻¹, while the second column records the intensity of the Raman signal at each corresponding wave number (y value). This intensity reveals the presence and quantity of specific molecular vibrations. The raw data spans x values from 400 to 2300 cm⁻¹, but the processed data focuses on the range of 800 to 1800 cm⁻¹, which includes vibrational bands essential for identifying key chemical functional groups in biological molecules like proteins, lipids, nucleic acids, and carbohydrates. Overall, this study generated 20,000 data points per volunteer for the machine learning application: 2 labels, with 10,000 data points for each label.

Figure 3.1 A part of unprocessed Raman signals

Figure 3.2 The graph of unprocessed Raman signals

Polynomial Fitting Method for baseline determination

The Pybaselines library offers a streamlined approach to baseline determination through various polynomial algorithms. These algorithms are grounded in the general polynomial model

p(x) = \sum_{j=0}^{k} \beta_j x^j,

where \beta is the array of coefficients for the polynomial and k is the polynomial order.

For regular polynomial fitting, the polynomial coefficients that best fit the data are obtained by minimizing the least-squares cost

\min_{\beta} \sum_{i=1}^{N} \big( y_i - p(x_i) \big)^2,

where:

y_i and x_i are the measured data,

p(x_i) is the polynomial estimate at x_i,

N is the number of data points.

Polynomial fitting offers a straightforward and efficient approach, making it a preferred method for in vivo biomedical Raman applications due to its speed. However, its effectiveness is influenced by the selected spectral fitting range and the polynomial order used.

To enhance background correction, I utilize the Improved Modified Polynomial (IMP) method. The process begins with a single polynomial fit, denoted P(v), to the raw Raman signal O(v), where v represents the Raman shift in cm⁻¹. Subsequently, the residual R(v) = O(v) - P(v) and its standard deviation DEV are computed:

DEV = \sqrt{ \frac{ \sum_{i=1}^{n} \big( R(v_i) - \bar{R} \big)^2 }{ n } },

where n represents the number of data points on the spectral curve and \bar{R} is the mean of the residual.
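For intuition, here is a minimal, hedged numpy sketch of the iterative idea behind modified-polynomial baselines such as IMP (the project itself relies on the pybaselines implementation, and the published algorithm of Zhao et al. (2007) includes refinements not shown here): the polynomial is refit repeatedly while points rising more than one DEV above the fit, i.e. the Raman peaks, are clipped so they stop pulling the baseline upward.

```python
import numpy as np

def imp_baseline_sketch(x, y, poly_order=8, max_iter=100, tol=1e-3):
    """Simplified IMP-style baseline estimate (illustrative only)."""
    x = np.asarray(x, dtype=float)
    y_work = np.asarray(y, dtype=float).copy()
    prev_dev = np.inf
    baseline = y_work
    for _ in range(max_iter):
        coeffs = np.polyfit(x, y_work, poly_order)   # least-squares polynomial fit P(v)
        baseline = np.polyval(coeffs, x)
        residual = y_work - baseline                 # R(v) = O(v) - P(v)
        dev = residual.std()                         # DEV
        # clip peak regions: keep whichever is lower, the data or P(v) + DEV
        y_work = np.minimum(y_work, baseline + dev)
        if dev == 0 or abs(prev_dev - dev) / dev < tol:
            break
        prev_dev = dev
    return baseline
```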

To implement the algorithm for determining data baselines in my project, the first step is to install and import the Pybaselines library, which is essential for calculating, preprocessing, and correcting background data using the Polynomial Fitting Method. This library also allows for the configuration of the Improved Modified Polynomial as per the specified formula. Additionally, I utilize the numpy library for efficient array and matrix operations, and the matplotlib.pyplot library to generate high-quality graphs and charts from the processed data.

Figure 3.1 Workflow diagram of IMP fitting algorithm

(Jianhua Zhao, Harvey Lui, David I. McLean, Haishan Zeng, 2007)

I utilize the CSV library to efficiently read and write tabular data in CSV format, which includes essential functions for managing data input and output. Additionally, the OS library allows for seamless interaction with the operating system, enabling operations like creating, deleting, and moving files.

Figure 3.2 Built-in libraries for IMP Fitting in a launch file
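Putting the libraries listed above together, the imports in the launch file would look roughly like this (a sketch based on the description; the exact aliases in Figure 3.2 may differ):

```python
import csv                          # read/write the Raman CSV files
import os                           # file-system operations (create, delete, move files)

import numpy as np                  # array and matrix operations
import matplotlib.pyplot as plt     # plotting the raw and processed signals
import pybaselines as pbl           # polynomial baseline algorithms, including IMP (imodpoly)
```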

The initial step in the data processing pipeline is providing the input data. To extract information from CSV files, I define a function named `GetDataFromFileCSV` that takes three parameters: `source`, the path to the folder containing the file; `filename`, the name of the file; and `axit`, which determines the axis for data extraction, either 'x' or 'y'.

The function first initializes two empty lists to store the y-axis and x-axis data: `datay = []` and `datax = []`. It then opens the file for reading with `with open(f'{source}/{filename}', 'r') as csvfile` and creates a CSV reader object from the opened file, using a comma as the delimiter.

A loop then examines each line of the file. If a row is not empty, the first column's value is converted to an integer and appended to the `datax` list with `datax.append(int(row[0]))`, while the second column's value is converted to a float and added to the `datay` list with `datay.append(float(row[1]))`.

If the passed `axit` is 'x', the function returns the `datax` list; if `axit` is 'y', it returns the `datay` list.

Figure 3.3 Locate data file csv and extract unprocessed signals
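Assembled from the description above, a minimal sketch of this reader could look as follows (the actual code shown in Figure 3.3 may differ in details):

```python
import csv

def GetDataFromFileCSV(source, filename, axit):
    """Read one Raman CSV file and return the wave-number column ('x')
    or the intensity column ('y')."""
    datay = []
    datax = []
    with open(f'{source}/{filename}', 'r') as csvfile:
        reader = csv.reader(csvfile, delimiter=',')
        for row in reader:
            if row:                              # skip empty lines
                datax.append(int(row[0]))        # Raman shift (cm^-1)
                datay.append(float(row[1]))      # intensity
    if axit == 'x':
        return datax
    if axit == 'y':
        return datay
```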

I developed a function called Draw_Graph to visualize signal line graphs, allowing for easy observation of changes before and after signal processing. This function takes the Raman shift values and intensities and labels each component of the graph, including axis names, graph title, colors, and data points. The plt.plot method is employed to create the graph lines, where x_axis denotes the wave number, y_axis indicates intensity, color specifies the line color, and line_name is the label shown in the legend.

Figure 3.4 Method to draw graph based on received signal
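A possible sketch of this plotting helper, based only on the parameters named above (the axis titles and figure title are assumptions), is:

```python
import matplotlib.pyplot as plt

def Draw_Graph(x_axis, y_axis, color, line_name):
    """Plot one Raman signal: x_axis = wave number (cm^-1), y_axis = intensity."""
    plt.plot(x_axis, y_axis, color=color, label=line_name)
    plt.xlabel('Raman shift (cm^-1)')
    plt.ylabel('Intensity (a.u.)')
    plt.title('Raman signal')
    plt.legend()
```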

The SaveFlattenData function serves a purpose akin to the read function, but instead of reading data, it writes processed data into a CSV file line by line. This function identifies empty lines for data insertion, and when paired with the Autoname function I developed, it generates a complete CSV file that contains the processed signal data.

Figure 3.5 Method to save processed signals
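The exact behaviour of SaveFlattenData and Autoname is not spelled out here, so the following is only a rough, hypothetical sketch of a writer with that shape (the real functions in Figure 3.5 may insert into existing files and name them differently):

```python
import csv
import os

def Autoname(source, prefix='processed'):
    """Hypothetical helper: pick a file name that does not collide with existing files."""
    count = len([f for f in os.listdir(source) if f.startswith(prefix)])
    return f'{prefix}_{count + 1}.csv'

def SaveFlattenData(source, x_data, y_data):
    """Write a processed signal to a CSV file, one (wave number, intensity) pair per line."""
    filename = Autoname(source)
    with open(f'{source}/{filename}', 'w', newline='') as csvfile:
        writer = csv.writer(csvfile, delimiter=',')
        for x, y in zip(x_data, y_data):
            writer.writerow([x, y])
    return filename
```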

To effectively process noisy signals, establishing a baseline is essential for subtracting background noise and filtering the signal. In this project, I employ the IMP method in conjunction with the algorithm chart referenced earlier, supported by the Pybaselines library. The method is executed using the syntax illustrated in Figure 3.6.

In the code, x_raw and y_raw are arrays containing the Raman shift and intensity values obtained from the unprocessed measurement signal. The `pbl.polynomial.imodpoly` method directs the library to apply the IMP technique to these two inputs, with a custom polynomial order parameter that corresponds to the order of the polynomial β in the IMP formula above.

The polynomial order plays a crucial role in the accuracy of background modeling. A low polynomial order may lead to underfitting, failing to capture complex or non-linear backgrounds and missing important details. On the other hand, a high polynomial order offers greater flexibility for fitting intricate background shapes but risks overfitting, where the model captures noise along with the background, causing unwanted fluctuations in the estimated baseline.

Figure 3.6 Determine signals’ baselines and draw graphs
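Combining the helpers sketched earlier with the pybaselines call described here gives roughly the following (the `source`, `filename`, and polynomial order of 8 are placeholder choices, not values prescribed by the thesis):

```python
import numpy as np
import pybaselines as pbl

# load one raw signal using the reader sketched above
x_raw = np.array(GetDataFromFileCSV(source, filename, 'x'), dtype=float)
y_raw = np.array(GetDataFromFileCSV(source, filename, 'y'), dtype=float)

# estimate the baseline with the Improved Modified Polynomial (imodpoly) method
baseline, params = pbl.polynomial.imodpoly(y_raw, x_data=x_raw, poly_order=8)

# background correction: subtract the estimated baseline from the raw intensity
y_corrected = y_raw - baseline

Draw_Graph(x_raw, y_raw, 'blue', 'raw signal')
Draw_Graph(x_raw, baseline, 'red', 'estimated baseline')
Draw_Graph(x_raw, y_corrected, 'green', 'background-corrected signal')
```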

In this project, I established baselines for the unprocessed signals using polynomial orders between 3 and 16. Because selecting the optimal polynomial order from visual inspection of the graphs alone proved difficult, I picked representative orders within that range, namely 3, 8, and 16, to determine the sample baselines for background correction. I then processed the data and fed it into a machine learning model to evaluate the results. This approach streamlined the process, since manually calculating baselines with linear formulas would have been impractical given the large number of signal samples and the time constraints.

Figure 3.7 Determine the signal’s baseline with polynomial order equals 3

Figure 3.8 Determine the signal’s baseline with polynomial order equals 8

Figure 3.9 Determine the signal’s baseline with polynomial order equals 16

Background correction results

Following the establishment of the baseline, the next crucial step is background correction, which entails subtracting the baseline-determined background noise from the signal intensity. This process usually results in a decrease in the received signal intensity, as the subtracted background noise contributes positively to the overall intensity. The background noise encompasses various elements, including fluorescence, Rayleigh scattering, and other environmental factors.

Figure 3.12 Background correction result with polynomial order equals 3

Figure 3.13 Background correction result with polynomial order equals 8

Figure 3.14 Background correction result with polynomial order equals 16

I focus on x values ranging from 800 cm⁻¹ to 1800 cm⁻¹, as this interval encompasses vibrational bands that identify key chemical functional groups in biological molecules, including proteins, lipids, nucleic acids, and carbohydrates. This range offers valuable insights into the structure and chemical composition of biological tissues. In addition, the interference signal from water can also be avoided in this range (Valentina Teodolinda Vincoli, 2022).

Figure 3.15 Function to limit the range of Wave number.
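A simple way to implement this windowing (a sketch consistent with the description, not necessarily the code in Figure 3.15) is a boolean mask over the wave-number array:

```python
import numpy as np

def limit_range(x, y, low=800, high=1800):
    """Keep only the 800-1800 cm^-1 window used for the analysis."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mask = (x >= low) & (x <= high)
    return x[mask], y[mask]
```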

Comparison of the signal before and after processing using SVM label classification

3.4.1 Reorganizing the pre- and post-processing data for comparison

Figure 3.16 Reorganizing the data into a form with 2 labels

The signal sets are systematically reorganized before and after processing, as depicted in Figure 3.16. The first 10 lines of data are the samples measured from the individual prior to breakfast, while the following 10 lines are the samples measured after breakfast. Each sample includes 1001 data points. The labels "0" and "1" were respectively assigned to denote before and after breakfast.

In this project, label classification serves to compare the accuracy achievable with processed signals against unprocessed signals in machine learning. The two labels in these data sets correspond to the individual's glucose concentration levels before and after breakfast. The accuracy rate therefore reflects how well the machine learning model can classify, from a new measurement, whether the individual has eaten breakfast; at this stage it does not extend to predicting diabetes. Future developments aim to enable machine learning to distinguish between normal and diabetic cases by examining glucose concentration variations within the same individual. Consequently, machine learning models can be trained on processed signals from multiple individuals, enhancing disease detection across diverse populations.

3.4.2 Accuracy of the unprocessed signals

I imported the unprocessed Raman data into a Support Vector Machine (SVM) to evaluate its effectiveness before processing the data with varying polynomial orders. The initial accuracy assessment revealed that classifying the unprocessed signals remains challenging, even with just two labels.

In this project, I employed the Support Vector Machine (SVM) example available in Google Colab, making the necessary adjustments to the code for seamless integration with my input data. The model leverages the Scikit-learn library, a robust Python library renowned for its machine learning capabilities.

The library provides classes for Support Vector Classification and Support Vector Regression, allowing for the development and customization of SVM models tailored for classification tasks. Scikit-learn supports multiple kernel types, including linear, polynomial, RBF (Radial Basis Function), and sigmoid. Additionally, the same SVM can be used to assess the accuracy of processed signals when alternative signal processing methods are employed.

Figure 3.18 Accuracy result check with SVM of unprocessed Raman signals

The Raman signals were significantly affected by fluorescence and various noise sources, leading to inaccurate results. The Support Vector Machine (SVM) was executed multiple times, yielding accuracy rates between 48% and 55%. This low accuracy for the two-label input data set (before and after breakfast) suggests that the current machine learning model struggles to distinguish between these two labels effectively.
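The accuracy check itself can be sketched as follows with scikit-learn (a hedged illustration: the file name, train/test split, and kernel settings are placeholders rather than the exact Colab configuration used in the project):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# X: 20 samples x 1001 intensity values; y: 0 = before breakfast, 1 = after breakfast
X = np.loadtxt('reorganized_signals.csv', delimiter=',')   # hypothetical file name
y = np.array([0] * 10 + [1] * 10)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

clf = SVC(kernel='rbf')            # 'linear', 'poly', or 'sigmoid' are also possible
clf.fit(X_train, y_train)
print('accuracy:', accuracy_score(y_test, clf.predict(X_test)))
```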

3.4.3 Accuracy of the processed signals

Using polynomial orders of 8 or 16, the SVM demonstrated a notable improvement in performance, with the accuracy rate rising from approximately 49% to 75% for predicting the two labels. This indicates that effective signal processing plays a crucial role in improving prediction accuracy. However, it is essential to recognize that polynomial orders of 8 or 16 may not represent the optimal baseline. Therefore, I continued to experiment with varying polynomial orders within this range to determine the most effective configuration for this project.

Figure 3.19 Accuracy check with SVM of processed Raman signals with 8-polynomial order

Figure 3.20 Accuracy check with SVM of processed Raman signals with 16-polynomial order

3.4.4 Adjustments and improvements

The adjustment process is shaped by various factors such as the standard deviation, the weighting, and the collection of original signals. When employing the IMP method within the Pybaselines library, I gradually modify the polynomial order, usually between 8 and 16, to achieve optimal results.

The machine learning approach has consistently delivered good accuracy, with the 12-polynomial order demonstrating the best results in my tests. However, it is crucial to recognize that 12 may not always be the ideal polynomial order; adjustments should be tailored to the specific signal data, necessitating careful observation and evaluation of changes in each dataset.

Figure 3.21 Accuracy check with SVM of processed Raman signals with 12-polynomial order

3.4.5 Using Extra Tree classifier instead of SVM for comparison

I utilized the Extra Tree model for comparison, which was automatically generated through Google Colab. Additionally, I tailored certain aspects of the code to optimize it for input data featuring two labels.

Figure 3.23 Accuracy check with Extra Tree of processed Raman signals with 12-polynomial order.
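A comparable check with scikit-learn's Extra Trees classifier might look like this (again a sketch, reusing the X and y arrays from the SVM example above; the number of estimators is an assumed default, not a value taken from the thesis):

```python
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# X, y: the 12th-order IMP-corrected signals and their 2 labels, as prepared earlier
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

model = ExtraTreesClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print('Extra Trees accuracy:', accuracy_score(y_test, model.predict(X_test)))
```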

Discussion

Potential clinical implications

Raman spectroscopy presents a promising advancement in diabetes diagnosis through its non-invasive testing capabilities. By analyzing blood or interstitial fluid, it offers a comfortable alternative to traditional invasive blood tests. This approach facilitates more frequent glucose monitoring, enhancing patient compliance and reducing the risk of complications. Additionally, the rapid point-of-care testing enabled by Raman spectroscopy is invaluable in emergency situations and for patients needing precise glucose management.

Integrating machine learning with Raman spectroscopy allows for the accurate identification of early signs of diabetes, facilitating timely interventions before symptoms appear. This approach also enables the detection of biochemical changes associated with diabetes-related complications, aiding in their early diagnosis and management.

The integration of machine learning with Raman spectroscopy significantly boosts analytical capabilities, resulting in enhanced diagnostic accuracy and improved patient outcomes. Furthermore, the real-time monitoring capabilities of Raman spectroscopy, when used with live specimens, provide critical insights into the effectiveness of interventions by tracking metabolic responses to treatments.

Challenges for real-world application of the developed system

Raman spectroscopy holds significant potential for diabetes diagnosis; however, its clinical application faces several challenges. A primary hurdle is the technology's complexity, requiring advanced equipment, specialized knowledge, and intricate algorithms for accurate interpretation of Raman spectra. This technical sophistication can hinder its widespread use, especially in resource-constrained environments.

Cost also plays a significant role in the accessibility of Raman spectroscopy, particularly in low-income areas, due to high initial setup and ongoing maintenance expenses. Additionally, ensuring accurate calibration and standardization of these systems is crucial for reliable diagnostic results and to reduce inconsistencies.

Integrating an innovative diagnostic system into current healthcare IT infrastructure presents challenges, including the need for seamless interoperability with electronic health records (EHRs) and extensive training for healthcare professionals. Additionally, addressing regulatory approval processes and ensuring strong privacy and security measures for patient data are essential for the successful clinical deployment of the system.

To fully realize the potential of Raman spectroscopy for diabetes diagnosis, ongoing research, technological advancements, and collaborative efforts are essential to address these formidable challenges.

Conclusion

In my graduation project, I conducted extensive research to process Raman input signals using the IMP method, with the ultimate goal of training a machine learning model to detect diabetes. To improve system accuracy, I carefully fine-tuned algorithm parameters, including the polynomial order, weights, and standard deviation. Furthermore, I compared outcomes from a single machine learning model against multiple models to ensure the reliability of the results.

I successfully improved the baseline correction of the original Raman signal from humans, significantly boosting the predictive accuracy of the machine learning model from 54% to around 81% for 2-label data. However, challenges such as overfitting and underfitting related to the selection of polynomial orders remain unresolved, as the optimal choice of polynomial order is contingent on the data derived from each individual.

This project has provided me with a strong foundation in both medicine and artificial intelligence, enabling the creation of disease diagnostic models. As technology rapidly advances, sectors like healthcare are increasingly incorporating these innovations into various processes. However, challenges remain, including risks and ethical issues related to machine usage, highlighting the continued importance of human expertise in specialized fields.

References

A. Silge, K. W.-M. (2022, August 12). Trends in pharmaceutical analysis and quality control by modern Raman spectroscopic techniques. Trends in Analytical Chemistry, 153.

Cornell lectures. (n.d.). Retrieved 11 1, 2023, from https://courses.cit.cornell.edu/ece303/Lectures/lecture34.pdf

Cowie CC, C. S. (2018, August). Diabetes in America (3rd edition). NIDDK.

Diniz, P. S. (2023). Signal Processing and Machine Learning Theory. Elsevier Science.

GeeksforGeeks. (2023, August 31). ML | Underfitting and Overfitting. Retrieved from GeeksforGeeks: https://www.geeksforgeeks.org/underfitting-and-overfitting-in-machine-learning/

Horiba Scientific. (2022, July 12). History of Raman Spectroscopy. Retrieved 11 15, 2023, from Horiba Scientific: https://www.horiba.com/fra/scientific/technologies/raman-imaging-and-spectroscopy/history-of-raman-spectroscopy/

IBM. (2023, 12 27). What are support vector machines (SVMs)? Retrieved from IBM: https://www.ibm.com/topics/support-vector-machine#:~:text=A%20support%20vector%20machine%20(SVM,in%20an%20N%2Ddimensional%20space

IBM. (2023, December 19). What is machine learning? Retrieved 11 15, 2023, from IBM: https://www.ibm.com/topics/machine-learning

Institute for Health Metrics and Evaluation. (2019). GBD Results. Retrieved 11 14, 2023, from Global Burden of Disease (GBD): https://vizhub.healthdata.org/gbd-results/

Jianhua Zhao, Harvey Lui, David I. McLean, Haishan Zeng. (2007). Automated Autofluorescence Background Subtraction Algorithm. The Laboratory for Advanced Medical Photonics (LAMP), Department of Dermatology and Skin Science, University of British Columbia. doi:10.1366/000370207782597003

Jijo Lukose, A. K. (2023). Raman spectroscopy for viral diagnostics. PubMed Central.

Kalakota, T. D. (2019). The potential for artificial intelligence in healthcare. Future Healthcare Journal.

Katharina Klein, A. M. (2012). Label-Free Live-Cell Imaging with Confocal Raman Microscopy. Biophysical Journal.

Kaul V, E. S. (2020). The history of artificial intelligence in medicine. Gastrointestinal Endoscopy, 807-.

Leanne Bellamy, J.-P. C. (2009). Type 2 diabetes mellitus after gestational diabetes: a systematic review and meta-analysis. PubMed.

Lihao Zhang, Chengjian Li, Di Peng, Xiaofei Yi, Shuai He, Fengxiang Liu, Xiangtai Zheng, Wei E. Huang, Liang Zhao, Xia Huang. (2022). Raman spectroscopy and machine learning for the classification of breast cancers. Elsevier.

Linley Li Lin, Xinyuan Bi, Yuqing Gu, Fu Wang. (2021). Surface-enhanced Raman scattering nanotags for bioimaging. AIP Publishing.

McCarthy, J. (2007). What is Artificial Intelligence?, 2.

MINISTRY OF HEALTH. (2022). Nearly 5 million people in Vietnam are living with diabetes, a disease that causes severe complications including cardiovascular issues, neurological disorders, and amputations. Retrieved from the Ministry of Health Portal.

Norvig, S. J. (2020). Artificial Intelligence: A Modern Approach. Prentice Hall.

P.J. Cadusch, M.M. Hlaing, S.A. Wade, S.L. McArthur, P.R. Stoddart. (n.d.). Improved Methods for

Pelletier, M. J. (2003). Quantitative Analysis Using Raman Spectrometry. Applied Spectroscopy, 1-112.

Pezzotti, G. (2021). Raman spectroscopy in cell biology and microbiology. The Journal of Raman Spectroscopy.

Shuhan Hu, Hongyi Li, Chen Chen, Cheng Chen, Deyi Zhao, Bingyu Dong, Xiaoyi Lv, Kai Zhang, Yi Xie. (2022). Raman spectroscopy combined with machine learning algorithms to detect adulterated Suichang native honey. Scientific Reports.

Sishan Cui, S. Z. (2018). Raman Spectroscopy and Imaging for Cancer Diagnosis. PubMed Central.

Smith, S. (2013). Digital Signal Processing: A Practical Guide for Engineers and Scientists. Elsevier.

Tuo Wang, Liankui Dai. (2017). Background Subtraction of Raman Spectra Based on Iterative Polynomial

Turing, A. M. (1950). Computing Machinery and Intelligence. Mind, LIX, 433-460.

University of California. (2007). The Liver & Blood Sugar. Retrieved 11 13, 2023, from Diabetes Education.

University of Tartu. (2023, June 26). 3.2 Raman spectroscopy. Retrieved from University of Tartu: https://sisu.ut.ee/heritage-analysis/book/32-raman-spectroscopy

Valentina Teodolinda Vincoli, A. C. (2022). Electrospun Silk Fibroin Scaffolds for Tissue Regeneration: Chemical, Structural, and Toxicological Implications of the Formic Acid-Silk Fibroin Interaction. Frontiers in Bioengineering and Biotechnology.

Vinmec. (2023, March 29). Chỉ số Glucose trong máu ở mức bao nhiêu là mắc bệnh tiểu đường? [At what blood glucose level is diabetes diagnosed?]. Vinmec International General Hospital.

Vivek Kaul, S. E. (2020, January 18). History of artificial intelligence in medicine. Gastrointestinal Endoscopy.

WHO. (2023, 4 5). Diabetes. Retrieved from World Health Organization: https://www.who.int/news-room/fact-sheets/detail/diabetes#:~:text=Overview,hormone%20that%20regulates%20blood%20glucose

Wiggins, S. M., Robb, G., McNeil, B., Jaroszynski, D. A., Jones, D., & Jamison, S. (2002). Collective Rayleigh scattering from dielectric particles. Measurement Science and Technology, 13(3), 263-.
