Your search
Results: 34 resources
-
The global pandemic triggered by the Coronavirus Disease first detected in 2019 (COVID-19) entered its fourth year with many unknown aspects that the medical and academic communities must continue to study. According to the World Health Organization (WHO), by January 2023 more than 650 million cases had been officially recorded (with probably many more untested cases), with 6,656,601 deaths officially linked to COVID-19 as the plausible root cause. In this chapter, an overview of some relevant technical aspects of the COVID-19 pandemic is presented in three parts. First, advances are highlighted, including the development of new technologies in areas such as medical devices, vaccines, and computerized systems for medical support. Second, the focus is on relevant challenges, including a discussion of whether computerized diagnostic support systems based on Artificial Intelligence are in fact ready to effectively help in clinical processes, from the perspective of NASA's Technology Readiness Levels (TRL) model. Finally, two trends are presented: the increased need for computerized systems to deal with Long Covid, and the interest in Precision Medicine digital tools. Analyzing these three aspects (advances, challenges, and trends) may provide a broader understanding of the impact of the COVID-19 pandemic on the development of Computerized Diagnostic Support Systems.
-
COVID-19 hit an unprepared world as the deadliest pandemic of the century. Governments and authorities, as the leaders and decision makers fighting the virus, tapped enormously into the power of AI and its data analytics models for urgent decision support, on a scale never before seen in human history. This book showcases a collection of important data analytics models that were used during the epidemic, and discusses and compares their efficacy and limitations. Readers from both the healthcare industry and academia can gain unique insights into how data analytics models were designed and applied to epidemic data. Taking COVID-19 as a case study, readers, especially those working in similar fields, will be better prepared should a new wave of virus epidemic arise in the near future.
-
COVID-19 is a respiratory disorder caused by the coronavirus SARS-CoV-2. The WHO declared COVID-19 a global pandemic in March 2020, and several nations' healthcare systems were on the verge of collapsing. It therefore became crucial to screen COVID-19-positive patients to make the most of limited resources. Nucleic acid amplification tests (NAATs) and antigen tests are used to diagnose COVID-19 infections. NAATs reliably detect SARS-CoV-2 and seldom produce false-negative results. Because of its specificity and sensitivity, RT-PCR can be considered the gold standard for COVID-19 diagnosis. The test's complex equipment is expensive and time-consuming, however, requiring skilled specialists to collect throat or nasal mucus samples, as well as laboratory facilities and a machine for detection and analysis. Deep learning networks have been used for feature extraction and classification of chest CT scan images, and as an innovative detection approach in clinical practice. Because of the medical characteristics of COVID-19 CT scans, the lesions are widely spread and display a range of local features, which makes direct diagnosis with deep learning difficult. A module combining a Transformer and a Convolutional Neural Network is therefore presented to extract both local and global information from CT images. This chapter explains transfer learning with the VGG-16 network on CT examinations and compares convolutional networks with Vision Transformers (ViT). Using ViT increased the VGG-16 network's F1-score to 0.94.
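As a rough illustration of the transfer-learning setup described above, the sketch below fine-tunes a pretrained VGG-16 on CT slices with PyTorch. The dataset path, two-class labels, and hyperparameters are assumptions for the example, not the chapter's actual configuration.

```python
# Minimal transfer-learning sketch with a pretrained VGG-16 backbone.
# Assumptions (not from the chapter): binary COVID/non-COVID labels and
# an ImageFolder-style dataset of CT slices resized to 224x224.
import torch
import torch.nn as nn
from torchvision import models, transforms, datasets
from torch.utils.data import DataLoader

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load VGG-16 pretrained on ImageNet and freeze the convolutional features.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for param in model.features.parameters():
    param.requires_grad = False

# Replace the final classifier layer for two classes (COVID / non-COVID).
model.classifier[6] = nn.Linear(4096, 2)
model = model.to(device)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# "ct_slices/train" is a placeholder path with one sub-folder per class.
train_set = datasets.ImageFolder("ct_slices/train", transform=preprocess)
loader = DataLoader(train_set, batch_size=16, shuffle=True)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)

model.train()
for images, labels in loader:  # one epoch shown for brevity
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

Freezing the convolutional backbone and retraining only the classifier head is the simplest variant; the chapter's ViT comparison would swap the backbone for a pretrained vision transformer.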
-
This chapter describes an AutoML strategy to detect COVID-19 on chest X-rays, using Transfer Learning for feature extraction and the AutoML TPOT framework to identify lung illnesses (such as COVID-19 or pneumonia). MobileNet is a lightweight network that uses depthwise separable convolutions to deepen the network while reducing parameters and computation. AutoML (automated machine learning) automates the process of building an ML pipeline within a constrained computing framework; the term can mean different things depending on context. AutoML has risen to prominence in both industry and academia thanks to the ever-increasing capabilities of modern computers. The Tree-based Pipeline Optimization Tool (TPOT) is a Python-based ML tool that optimizes pipeline efficiency via genetic programming. We use TPOT to build models from MobileNet features extracted from COVID-19 image data. The resulting model classifies Normal, Viral Pneumonia, and Lung Opacity images with an F1-score of 0.79.
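A minimal sketch of this feature-extraction-plus-AutoML pipeline, assuming MobileNet embeddings fed into TPOT's genetic-programming search; the data arrays, label encoding, and search budget below are placeholders rather than the chapter's settings:

```python
# Sketch of the feature-extraction + AutoML idea: MobileNet embeddings
# fed to TPOT's genetic-programming pipeline search.
import numpy as np
import tensorflow as tf
from tpot import TPOTClassifier
from sklearn.model_selection import train_test_split

# MobileNet without its top layer acts as a fixed feature extractor;
# global average pooling yields one embedding vector per image.
extractor = tf.keras.applications.MobileNet(
    weights="imagenet", include_top=False, pooling="avg")

def embed(images):
    """images: float array (n, 224, 224, 3) in the 0-255 range."""
    x = tf.keras.applications.mobilenet.preprocess_input(images)
    return extractor.predict(x, verbose=0)

# Stand-in arrays; real inputs would be the chest X-ray dataset.
X_raw = np.random.rand(64, 224, 224, 3).astype("float32") * 255
y = np.random.randint(0, 3, size=64)  # 0=Normal, 1=Viral Pneumonia, 2=Lung Opacity

X = embed(X_raw)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# TPOT evolves a scikit-learn pipeline over the extracted features.
tpot = TPOTClassifier(generations=5, population_size=20,
                      scoring="f1_weighted", random_state=0, verbosity=2)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
tpot.export("best_pipeline.py")  # emits the winning pipeline as code
```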
-
The use of learning analytics (LA) in real-world educational applications is growing rapidly as academic institutions realize the positive potential of integrating LA into decision making. Education in schools on public health needs to evolve in response to new knowledge and th...
-
Crowdsensing exploits the sensing abilities offered by smartphones and users' mobility. With the aid of crowdsensing, users can help each other as a community. The potential of crowdsensing for improving public health has yet to be fully realized. A protocol based on gamification to encoura...
-
Association rule mining with the Apriori method has been one of the popular data mining techniques for decades: knowledge in the form of item-association rules is harvested from a dataset. The quality of item-association rules nevertheless depends on the concentration of frequent items in the input dataset; when the dataset becomes large, the items are scattered far apart. It is known from previous literature that clustering helps produce data groups that are concentrated with frequent items. Among all the data clusters generated by a clustering algorithm, there must be one or more clusters that contain suitable and frequent items. In turn, the association rules mined from such clusters are assured of better quality, in terms of higher confidence, than those mined from the whole dataset. However, it is not known in advance which cluster is the suitable one until all the clusters have been tried by association rule mining, and testing them by brute force is time-consuming. In this paper, a statistical property called prior probability is investigated as a way to select the best of the many clusters produced by a clustering algorithm, as a pre-processing step before association rule mining. Experimental results indicate that there is a correlation between the prior probability of the best cluster and the relatively high quality of the association rules generated from that cluster. The results are significant because they make it possible to know which cluster should be used for association rule mining instead of testing them all exhaustively. A sketch of the idea follows below.
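The following sketch illustrates the idea under stated assumptions: transactions are clustered with k-means, clusters are ranked by prior probability (their share of the dataset), and rules are mined from the top-ranked cluster using mlxtend's Apriori implementation as a stand-in. The toy data, cluster count, and thresholds are illustrative, and ranking by highest prior is a heuristic reading of the paper's finding:

```python
# Pre-clustering before association rule mining, ranked by prior probability.
import pandas as pd
from sklearn.cluster import KMeans
from mlxtend.frequent_patterns import apriori, association_rules

# One-hot transaction matrix: rows = transactions, columns = items.
data = pd.DataFrame([
    [1, 1, 0, 1], [1, 1, 0, 0], [0, 1, 1, 0],
    [1, 1, 1, 1], [0, 0, 1, 1], [1, 1, 0, 1],
], columns=["bread", "milk", "eggs", "butter"]).astype(bool)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(data)

# Prior probability of each cluster = its fraction of all transactions.
priors = pd.Series(labels).value_counts(normalize=True)
best = priors.idxmax()  # heuristic: mine the highest-prior cluster first

# Mine rules only from the selected cluster instead of the whole dataset.
cluster = data[labels == best]
frequent = apriori(cluster, min_support=0.5, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.8)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```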
-
In this chapter, a mathematical model generically explaining the propagation of a pandemic is proposed, helping to identify the fundamental parameters of an outbreak in general. Three free parameters are identified, which can be reduced to only two independent parameters. The model is inspired by the concept of spontaneous symmetry breaking, normally used in quantum field theory, and it makes it possible to analyze the complex data of the pandemic in a compact way. Data from 12 different countries are considered and the results presented. Applying nonlinear quantum physics equations to model epidemiologic time series is an innovative and promising approach.
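The chapter's exact equations are not reproduced in this abstract; as a purely hypothetical illustration of fitting a symmetry-breaking-style profile, the sketch below fits a tanh kink (the classic solution of a double-well potential) to cumulative case counts, where normalizing by the plateau leaves two independent shape parameters:

```python
# Hypothetical illustration only: the chapter's actual model is not
# reproduced here. A kink (tanh) profile is fitted to cumulative case
# counts; dividing by the plateau A leaves two parameters (t0, tau).
import numpy as np
from scipy.optimize import curve_fit

def kink(t, A, t0, tau):
    """Cumulative cases: plateau A, midpoint t0, width tau."""
    return 0.5 * A * (1.0 + np.tanh((t - t0) / tau))

# Synthetic stand-in data; real inputs would be per-country time series.
t = np.arange(0, 120)
observed = kink(t, 1e6, 60, 12) + np.random.normal(0, 5e3, t.size)

(A, t0, tau), _ = curve_fit(kink, t, observed, p0=[observed.max(), 50, 10])
print(f"plateau={A:.3e}, midpoint={t0:.1f} d, width={tau:.1f} d")
```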
-
At the beginning of 2020, the World Health Organization (WHO) started a coordinated global effort to counter the potential exponential spread of the SARS-CoV-2 virus, responsible for the coronavirus disease officially named COVID-19. This comprehensive initiative included a research roadmap published in March 2020 covering nine dimensions, from epidemiological research to diagnostic tools and vaccine development. In an unprecedented way, the areas of study related to the pandemic received funding and strong attention from different research communities (universities, government, industry, etc.), resulting in an exponential increase in the number of publications and results achieved in a small window of time. Outstanding research cooperation projects were implemented during the outbreak, and innovative technologies were developed and significantly improved. Clinical and laboratory processes were improved, while managerial personnel were supported by countless models and computational tools for decision making. This chapter aims to give an overview of this favorable scenario and to highlight a necessary discussion of ethical issues in research related to COVID-19, together with the challenge of low-quality research focused only on publishing techniques and approaches with limited scientific evidence or practical application. A legacy of lessons learned from this unique period of human history should influence and guide the scientific and industrial communities in the future.
-
Nowadays, the increasing amount of medical diagnostic and clinical data provides more complementary references for doctors when diagnosing patients. For example, with medical data such as electrocardiography (ECG), machine learning algorithms can be used to identify and diagnose heart disease, reducing the workload of doctors. However, ECG data are always exposed to various kinds of noise and interference in practice, and medical diagnostics based on one-dimensional ECG data alone are not trustworthy enough. By extracting new features from other types of medical data, we can implement enhanced recognition methods, an approach called multimodal learning. Multimodal learning helps models process data from a range of different sources, eliminates the need to train each learning modality separately, and improves the robustness of models through the diversity of the data. A growing number of articles in recent years have investigated how to extract data from different sources and build accurate multimodal machine learning or deep learning models for medical diagnostics. This paper reviews and summarizes several recent papers dealing with multimodal machine learning in disease detection and identifies topics for future research.
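As a generic illustration of the kind of model such reviews cover (not any specific architecture from the paper), the sketch below fuses an ECG encoder with a tabular-feature encoder by late concatenation; all dimensions and layer choices are assumptions:

```python
# Minimal late-fusion multimodal sketch: a 1-D CNN encodes an ECG
# segment, an MLP encodes tabular clinical features, and the
# concatenated embeddings feed a shared classifier head.
import torch
import torch.nn as nn

class MultimodalNet(nn.Module):
    def __init__(self, tab_dim=16, n_classes=2):
        super().__init__()
        self.ecg_encoder = nn.Sequential(      # 1-D CNN over the ECG signal
            nn.Conv1d(1, 16, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.tab_encoder = nn.Sequential(      # MLP over clinical features
            nn.Linear(tab_dim, 32), nn.ReLU())
        self.head = nn.Linear(32 + 32, n_classes)

    def forward(self, ecg, tab):
        # Late fusion: concatenate the two modality embeddings.
        z = torch.cat([self.ecg_encoder(ecg), self.tab_encoder(tab)], dim=1)
        return self.head(z)

model = MultimodalNet()
ecg = torch.randn(8, 1, 1000)   # batch of single-lead ECG segments
tab = torch.randn(8, 16)        # batch of tabular clinical features
logits = model(ecg, tab)        # shape (8, 2)
```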
-
COVID-19 hit an unprepared world as the deadliest pandemic of the century. Governments and authorities, as the leaders and decision makers fighting the virus, tapped enormously into the power of artificial intelligence and its predictive models for urgent decision support. This book showcases a collection of important predictive models that were used during the pandemic, and discusses and compares their efficacy and limitations. Readers from both the healthcare industry and academia can gain unique insights into how predictive models were designed and applied to epidemic data. Taking COVID-19 as a case study and showcasing the lessons learnt, this book will enable readers to be better prepared in the event of future virus epidemics or pandemics.
-
Computational tools for medical image processing are promising for effectively detecting COVID-19 as an alternative to expensive and time-consuming RT-PCR tests. For this task, chest X-rays (CXR) and chest CT scans (CCT) are the most common examinations used to support diagnosis through radiological analysis. With these images, it is possible to support diagnosis and determine the severity stage of the disease. Computerized COVID-19 quantification and evaluation require an efficient segmentation process. Essential tasks for automatic segmentation tools are precisely identifying the lungs, lobes, bronchopulmonary segments, and infected regions or lesions. Segmented areas can provide handcrafted or self-learned diagnostic criteria for various applications. This chapter presents different techniques for chest CT scan segmentation, reviewing the state of the art of UNet networks for segmenting COVID-19 CT scans, along with a segmentation experiment for network evaluation. Over 200 epochs, a Dice coefficient of 0.83 was obtained.
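For reference, the Dice coefficient reported above compares a predicted mask with the ground truth; a generic PyTorch implementation (not the chapter's code) might look like this:

```python
# Dice coefficient as commonly used to evaluate lesion segmentation masks.
import torch

def dice_coefficient(pred, target, eps=1e-6):
    """pred, target: binary masks of identical shape (0/1 tensors)."""
    pred = pred.float().flatten()
    target = target.float().flatten()
    intersection = (pred * target).sum()
    # 2|A ∩ B| / (|A| + |B|); eps guards against empty masks.
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Example: compare a thresholded UNet output against the ground truth.
pred_mask = (torch.rand(1, 1, 256, 256) > 0.5).int()
true_mask = (torch.rand(1, 1, 256, 256) > 0.5).int()
print(f"Dice: {dice_coefficient(pred_mask, true_mask):.3f}")
```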
-
The application of different tools for predicting the spread of COVID-19 cases has been widely considered during the pandemic. Comparing different approaches is essential to analyze their performance and the practical support they can provide for pandemic management. This work proposes using the SEAIRD compartmental model (susceptible, exposed, asymptomatic but infectious, symptomatic and infectious, recovered, deceased) with different learning models. The first analysis considers an unsupervised prediction based directly on the epidemiologic compartmental model. After that, two supervised learning models are considered, integrating computational intelligence techniques and control engineering: the fuzzy-PID and the wavelet-ANN-PID models. The purpose is to compare different predictor strategies in order to validate a viable predictive control system for the relevant COVID-19 epidemiologic time series. For each model, after setting the initial conditions for each parameter, the prediction performance is calculated on the presented data. The use of PID controllers is justified to avoid divergence in the system while the learning process is conducted. The wavelet neural network solution is considered here because of its rapid convergence rate. The proposed solutions are dynamic and can be adjusted and corrected in real time according to the output error. The results are presented in each subsection of the chapter.
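A sketch of the unsupervised, compartmental-model baseline, assuming one common SEAIRD rate structure (the chapter's exact parameterization may differ); all rates and initial conditions are illustrative:

```python
# Numerically integrating a SEAIRD compartmental model with scipy.
# The rate structure below is one common formulation (an assumption).
import numpy as np
from scipy.integrate import solve_ivp

def seaird(t, y, beta, sigma, p, gamma_a, gamma_i, mu, N):
    S, E, A, I, R, D = y
    infection = beta * S * (A + I) / N        # new exposures
    dS = -infection
    dE = infection - sigma * E                # incubation at rate sigma
    dA = p * sigma * E - gamma_a * A          # fraction p stays asymptomatic
    dI = (1 - p) * sigma * E - (gamma_i + mu) * I
    dR = gamma_a * A + gamma_i * I            # recoveries from both paths
    dD = mu * I                               # deaths from symptomatic cases
    return [dS, dE, dA, dI, dR, dD]

N = 1e6
y0 = [N - 10, 5, 3, 2, 0, 0]                  # illustrative initial state
params = (0.4, 1 / 5.2, 0.4, 1 / 7, 1 / 10, 0.002, N)
sol = solve_ivp(seaird, (0, 180), y0, args=params, dense_output=True)
t = np.linspace(0, 180, 181)
S, E, A, I, R, D = sol.sol(t)
print(f"peak symptomatic: {I.max():.0f} on day {t[I.argmax()]:.0f}")
```

The supervised fuzzy-PID and wavelet-ANN-PID predictors described in the chapter would sit on top of such a simulation, correcting its output in real time from the observed error.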
Explore
Resource type
- Book (3)
- Book Section (25)
- Conference Paper (1)
- Journal Article (4)
- Report (1)
Publication year
- Between 2000 and 2024 (34)
- Between 2010 and 2019 (1)
- 2018 (1)
- Between 2020 and 2024 (33)