Your search
Results: 38 resources
-
The global pandemic triggered by the coronavirus disease first detected in 2019 (COVID-19) entered its fourth year with many unknown aspects that the medical and academic communities must continue to study. According to the World Health Organization (WHO), as of January 2023 more than 650 million cases had been officially recorded (with likely many more untested cases), along with 6,656,601 deaths officially attributed to COVID-19 as the plausible root cause. In this chapter, an overview of relevant technical aspects of the COVID-19 pandemic is presented in three parts. First, advances are highlighted, including the development of new technologies in areas such as medical devices, vaccines, and computerized systems for medical support. Second, the focus turns to relevant challenges, including a discussion of whether computerized diagnostic support systems based on Artificial Intelligence are in fact ready to effectively support clinical processes, viewed through the lens of NASA's Technology Readiness Levels (TRL) model. Finally, two trends are presented: the growing need for computerized systems to deal with Long COVID and the increasing interest in digital tools for Precision Medicine. Analyzing these three aspects (advances, challenges, and trends) may provide a broader understanding of the impact of the COVID-19 pandemic on the development of Computerized Diagnostic Support Systems.
-
COVID-19 hit the world unprepared as the deadliest pandemic of the century. Governments and authorities, as the leaders and decision makers fighting the virus, tapped enormously into the power of AI and its data analytics models for urgent decision support, in an effort on a scale never before seen in human history. This book showcases a collection of important data analytics models that were used during the epidemic, and discusses and compares their efficacy and limitations. Readers from both the healthcare industry and academia can gain unique insights into how data analytics models were designed and applied to epidemic data. Taking COVID-19 as a case study, readers, especially those working in related fields, will be better prepared should a new epidemic arise in the near future.
-
COVID-19 is a respiratory disease caused by the coronavirus SARS-CoV-2. The WHO declared COVID-19 a global pandemic in March 2020, and several nations' healthcare systems were on the verge of collapsing. It therefore became crucial to screen COVID-19-positive patients in order to maximize limited resources. NAATs and antigen tests are used to diagnose COVID-19 infections. NAATs reliably detect SARS-CoV-2 and seldom produce false-negative results. Because of its specificity and sensitivity, RT-PCR can be considered the gold standard for COVID-19 diagnosis. However, the test requires expensive, complex equipment, is time-consuming, and relies on skilled specialists to collect throat or nasal swab samples, as well as laboratory facilities and a machine for detection and analysis. Deep learning networks have been used for feature extraction and classification of chest CT-scan images and as an innovative detection approach in clinical practice. Because of the medical characteristics of COVID-19 CT scans, lesions are widely dispersed and display a range of local features, making direct diagnosis with deep learning difficult. A combined Transformer and Convolutional Neural Network module is therefore presented to extract local and global information from CT images. This chapter explains transfer learning with the VGG-16 network on CT examinations and compares convolutional networks with Vision Transformers (ViT). Incorporating the ViT increased the VGG-16 network's F1-score to 0.94.
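A minimal transfer-learning sketch in the spirit of the approach described above: a frozen, ImageNet-pretrained VGG-16 backbone with a small classification head for COVID/non-COVID CT classification. The dataset path, image size, and hyperparameters are illustrative assumptions, not details taken from the chapter.

```python
# Transfer-learning sketch: frozen VGG-16 backbone + small head for CT classification.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

IMG_SIZE = (224, 224)

# Hypothetical directory of CT slices arranged as one subfolder per class.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "ct_scans/train", image_size=IMG_SIZE, batch_size=32)

base = VGG16(weights="imagenet", include_top=False,
             input_shape=IMG_SIZE + (3,))
base.trainable = False  # freeze the convolutional backbone

model = models.Sequential([
    layers.Rescaling(1.0 / 255),            # simple normalization (VGG's own preprocess_input could be used instead)
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),  # COVID vs. non-COVID
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```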
-
This chapter describes an AutoML strategy to detect COVID-19 on chest X-rays, using transfer-learning feature extraction and the AutoML TPOT framework to identify lung illnesses such as COVID-19 and pneumonia. MobileNet is a lightweight network that uses depthwise separable convolutions to deepen the network while decreasing parameters and computation. AutoML (automated machine learning) automates the process of building an ML pipeline within a constrained computing budget; the term can mean a number of different things depending on context. AutoML has risen to prominence in both industry and academia thanks to the ever-increasing capabilities of modern computers. TPOT (Tree-based Pipeline Optimization Tool) is a Python-based ML tool that optimizes pipelines via genetic programming. TPOT is used here to build models on features extracted by the MobileNet network from COVID-19 image data, classifying Normal, Viral Pneumonia, and Lung Opacity with an F1-score of 0.79.
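A hedged sketch of the two-stage pipeline described above: MobileNet as a frozen feature extractor, then TPOT searching for a classifier over those features. The placeholder arrays, search budget, and class labels are illustrative assumptions; in practice the features would come from the chest X-ray dataset after MobileNet preprocessing.

```python
# Two-stage sketch: MobileNet feature extraction, then TPOT pipeline search.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import MobileNet
from sklearn.model_selection import train_test_split
from tpot import TPOTClassifier

# Frozen ImageNet-pretrained MobileNet; global average pooling yields one
# 1024-dimensional feature vector per image.
extractor = MobileNet(weights="imagenet", include_top=False,
                      pooling="avg", input_shape=(224, 224, 3))

def extract_features(images):
    """images: float array of shape (n, 224, 224, 3), already preprocessed."""
    return extractor.predict(images, verbose=0)

# Placeholder data standing in for the real X-ray images and labels.
X_img = np.random.rand(64, 224, 224, 3).astype("float32")
y = np.random.randint(0, 3, size=64)  # Normal / Viral Pneumonia / Lung Opacity

X = extract_features(X_img)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Small genetic-programming search budget, for illustration only.
tpot = TPOTClassifier(generations=3, population_size=20,
                      scoring="f1_macro", random_state=0, verbosity=2)
tpot.fit(X_tr, y_tr)
print("Held-out score:", tpot.score(X_te, y_te))
```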
-
The use of learning analytics (LA) in real-world educational applications is growing very fast as academic institutions realize the positive potential of integrating LA into decision making. Education in schools on public health needs to evolve in response to the new knowledge and th...
-
Facial expression recognition (FER) is essential for discerning human emotions and is applied extensively in big data analytics, healthcare, security, and user experience enhancement. This paper presents an empirical study that evaluates four existing deep learning models—VGG16, DenseNet, ResNet50, and GoogLeNet—utilizing the Facial Expression Recognition 2013 (FER2013) dataset. The dataset contains seven distinct emotional expressions: angry, disgust, fear, happy, neutral, sad, and surprise. Each model underwent rigorous assessment based on metrics including test accuracy, training duration, and weight file size to evaluate its effectiveness in FER tasks. ResNet50 emerged as the top performer with a test accuracy of 69.46%, leveraging its residual learning architecture to effectively address challenges inherent in training deep neural networks. Conversely, GoogLeNet exhibited the lowest test accuracy among the models, suggesting potential architectural constraints in FER applications. VGG16, while competitive in accuracy, demonstrated lengthier training times and a larger weight file size (512 MB), highlighting the inherent balance between model complexity and computational efficiency.
-
Crowdsensing exploits the sensing abilities offered by smart phones and users' mobility. Users can mutually help each other as a community with the aid of crowdsensing. The potential of crowdsensing has yet to be fully realized for improving public health. A protocol based on gamification to encoura...
-
Association Rule Mining with the Apriori method has been one of the popular data mining techniques for decades, where knowledge in the form of item-association rules is harvested from a dataset. The quality of item-association rules nevertheless depends on the concentration of frequent items in the input dataset. When the dataset becomes large, the items are scattered far apart. It is known from previous literature that clustering helps produce data groups that are concentrated with frequent items. Among all the clusters generated by a clustering algorithm, there must be one or more that contain suitable, frequent items. In turn, the association rules mined from such clusters will have better quality, in terms of higher confidence, than those mined from the whole dataset. However, it is not known in advance which cluster is the suitable one until all the clusters have been tried by association rule mining, and testing them all by brute force is time-consuming. In this paper, a statistical property called prior probability is investigated as a way to select the best of the many clusters produced by a clustering algorithm, as a pre-processing step before association rule mining. Experiment results indicate a correlation between the prior probability of the best cluster and the relatively high quality of association rules generated from that cluster. The results are significant because they make it possible to know which cluster is best used for association rule mining instead of testing them all exhaustively.
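A hedged sketch of the pre-clustering idea described above: partition the transactions with k-means, treat each cluster's prior probability as its share of all records (an interpretation assumed here, not necessarily the paper's exact definition), and mine Apriori rules only from the highest-prior cluster. The data, cluster count, and thresholds are placeholders.

```python
# Cluster first, then mine association rules from the highest-prior cluster.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from mlxtend.frequent_patterns import apriori, association_rules

# Hypothetical one-hot transaction matrix: rows = transactions, cols = items.
rng = np.random.default_rng(0)
data = pd.DataFrame(rng.integers(0, 2, size=(500, 8)).astype(bool),
                    columns=[f"item_{i}" for i in range(8)])

k = 4
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(data)

# Prior probability of each cluster = fraction of transactions it holds.
priors = np.bincount(labels, minlength=k) / len(data)
best = int(np.argmax(priors))
cluster_df = data[labels == best]

# Mine Apriori itemsets and confidence-ranked rules from the chosen cluster only.
freq = apriori(cluster_df, min_support=0.2, use_colnames=True)
rules = association_rules(freq, metric="confidence", min_threshold=0.6)
print(f"cluster {best}: prior={priors[best]:.2f}, rules found: {len(rules)}")
```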
-
In this chapter, a mathematical model that generically explains the propagation of a pandemic is proposed, helping to identify the fundamental parameters of an outbreak in general. Three free parameters are identified for the pandemic, which can ultimately be reduced to only two independent parameters. The model is inspired by the concept of spontaneous symmetry breaking, normally used in quantum field theory, and it provides the possibility of analyzing the complex data of the pandemic in a compact way. Data from 12 different countries are considered and the results presented. The application of nonlinear quantum physics equations to model epidemiologic time series is an innovative and promising approach.
-
At the beginning of 2020, the World Health Organization (WHO) started a coordinated global effort to counter the potentially exponential spread of the SARS-CoV-2 virus, responsible for the coronavirus disease officially named COVID-19. This comprehensive initiative included a research roadmap, published in March 2020, covering nine dimensions from epidemiological research to diagnostic tools and vaccine development. In an unprecedented way, the areas of study related to the pandemic received funding and strong attention from different research communities (universities, government, industry, etc.), resulting in an exponential increase in the number of publications and results achieved in such a small window of time. Outstanding research cooperation projects were implemented during the outbreak, and innovative technologies were developed and significantly improved. Clinical and laboratory processes were improved, while managerial personnel were supported by countless models and computational tools for decision-making. This chapter aims to give an overview of this favorable scenario and to highlight a necessary discussion about ethical issues in research related to COVID-19 and the challenge of low-quality research that focuses only on publishing techniques and approaches with limited scientific evidence or practical application. A legacy of lessons learned from this unique period of human history should influence and guide the scientific and industrial communities in the future.
-
Nowadays, the increasing amount of medical diagnostic and clinical data provides complementary references for doctors when diagnosing patients. For example, with medical data such as electrocardiography (ECG), machine learning algorithms can be used to identify and diagnose heart disease and reduce the workload of doctors. However, ECG data is in practice exposed to various kinds of noise and interference, and medical diagnosis based only on one-dimensional ECG data is not sufficiently trustworthy. By extracting new features from other types of medical data, we can implement enhanced recognition methods, an approach called multimodal learning. Multimodal learning enables models to process data from a range of different sources, removes the need to train each modality separately, and improves model robustness through the diversity of the data. A growing number of articles in recent years have investigated how to extract data from different sources and build accurate multimodal machine learning or deep learning models for medical diagnostics. This paper reviews and summarizes several recent papers dealing with multimodal machine learning in disease detection and identifies topics for future research.
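A minimal multimodal-fusion sketch under stated assumptions: a 1-D convolutional branch for an ECG segment and a dense branch for tabular clinical features, fused by concatenation for a binary diagnosis. The input shapes and layer sizes are illustrative and not taken from the reviewed papers.

```python
# Multimodal fusion sketch: ECG branch + tabular branch, concatenated.
import tensorflow as tf
from tensorflow.keras import layers, Model

ecg_in = layers.Input(shape=(1000, 1), name="ecg")        # 1000-sample ECG window (assumed)
x = layers.Conv1D(16, 7, activation="relu")(ecg_in)
x = layers.MaxPooling1D(4)(x)
x = layers.Conv1D(32, 5, activation="relu")(x)
x = layers.GlobalAveragePooling1D()(x)

tab_in = layers.Input(shape=(12,), name="clinical")       # 12 tabular features (assumed)
t = layers.Dense(32, activation="relu")(tab_in)

fused = layers.Concatenate()([x, t])                      # late fusion by concatenation
fused = layers.Dense(64, activation="relu")(fused)
out = layers.Dense(1, activation="sigmoid", name="diagnosis")(fused)

model = Model(inputs=[ecg_in, tab_in], outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```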
-
Facial expression recognition (FER) is essential for discerning human emotions and is applied extensively in big data analytics, healthcare, security, and user experience enhancement. This study presents a comprehensive evaluation of ten state-of-the-art deep learning models—VGG16, VGG19, ResNet50, ResNet101, DenseNet, GoogLeNet V1, MobileNet V1, EfficientNet V2, ShuffleNet V2, and RepVGG—on the task of facial expression recognition using the FER2013 dataset. Key performance metrics, including test accuracy, training time, and weight file size, were analyzed to assess the learning efficiency, generalization capabilities, and architectural innovations of each model. EfficientNet V2 and ResNet50 emerged as top performers, achieving high accuracy and stable convergence using compound scaling and residual connections, enabling them to capture complex emotional features with minimal overfitting. DenseNet, GoogLeNet V1, and RepVGG also demonstrated strong performance, leveraging dense connectivity, inception modules, and re-parameterization techniques, though they exhibited slower initial convergence. In contrast, lightweight models such as MobileNet V1 and ShuffleNet V2, while excelling in computational efficiency, faced limitations in accuracy, particularly in challenging emotion categories like “fear” and “disgust”. The results highlight the critical trade-offs between computational efficiency and predictive accuracy, emphasizing the importance of selecting appropriate architecture based on application-specific requirements. This research contributes to ongoing advancements in deep learning, particularly in domains such as facial expression recognition, where capturing subtle and complex patterns is essential for high-performance outcomes.
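An illustrative sketch of the kind of evaluation the two FER studies above describe: fine-tuning an ImageNet-pretrained ResNet50 on FER2013 and reporting test accuracy. FER2013 images are 48x48 grayscale; here they are assumed to have been resized to 224x224 and replicated to three channels, and the data pipelines are hypothetical placeholders.

```python
# FER2013 evaluation sketch: ImageNet-pretrained ResNet50 with a 7-class head.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

NUM_CLASSES = 7  # angry, disgust, fear, happy, neutral, sad, surprise

base = ResNet50(weights="imagenet", include_top=False,
                input_shape=(224, 224, 3), pooling="avg")

model = models.Sequential([
    base,
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.4),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train_ds / test_ds would be tf.data pipelines built from the FER2013 CSV or
# image folders, yielding (image, label) batches of resized 3-channel images.
# model.fit(train_ds, validation_data=test_ds, epochs=10)
# test_loss, test_acc = model.evaluate(test_ds)
```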
-
COVID-19 hit the world unprepared as the deadliest pandemic of the century. Governments and authorities, as the leaders and decision makers fighting the virus, tapped enormously into the power of artificial intelligence and its predictive models for urgent decision support. This book showcases a collection of important predictive models that were used during the pandemic, and discusses and compares their efficacy and limitations. Readers from both the healthcare industry and academia can gain unique insights into how predictive models were designed and applied to epidemic data. Taking COVID-19 as a case study and showcasing the lessons learnt, this book will enable readers to be better prepared in the event of virus epidemics or pandemics in the future.
Explore
Resource type
- Book (3)
- Book Section (25)
- Conference Paper (1)
- Journal Article (8)
- Report (1)
Publication year
- Between 2000 and 2025 (38)
- Between 2010 and 2019 (1)
- 2018 (1)
- Between 2020 and 2025 (37)