Search results (86 resources)
-
The gold standard for detecting SARS-CoV-2 infection is testing based on the Polymerase Chain Reaction (PCR). Still, the time needed to confirm a patient's infection can be lengthy, and the process is expensive. In parallel, X-ray and CT scans play an important role in diagnosis and treatment. Hence, a trusted automated technique for identifying and quantifying the infected lung regions would be advantageous. Chest X-rays are two-dimensional images of the patient's chest that provide lung morphological information and other characteristics, such as ground-glass opacities (GGO), horizontal linear opacities, and consolidations, which are typical of pneumonia caused by COVID-19. This chapter presents an AI-based system that uses multiple transfer learning models for COVID-19 classification from chest X-rays. In our experimental design, all the classifiers demonstrated satisfactory accuracy, precision, recall, and specificity. The MobileNet architecture outperformed the other CNNs, achieving excellent results across the evaluated metrics, whereas SqueezeNet showed only mediocre recall. In medical diagnosis, false negatives are particularly harmful because they can lead to sick patients being classified as healthy. These results suggest that our deep learning classifiers can classify X-ray exams as normal or indicative of COVID-19 with high confidence.
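The four reported metrics follow directly from the binary confusion matrix. As a minimal sketch (the counts below are invented placeholders, not the chapter's actual results):

```python
# Computing accuracy, precision, recall, and specificity from a binary
# confusion matrix. The counts are illustrative, not the chapter's results.
def classification_metrics(tp, fp, fn, tn):
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)      # of predicted COVID-19 cases, how many were correct
    recall = tp / (tp + fn)         # sensitivity: fraction of true cases detected
    specificity = tn / (tn + fp)    # fraction of normal exams correctly cleared
    return accuracy, precision, recall, specificity

acc, prec, rec, spec = classification_metrics(tp=95, fp=5, fn=3, tn=97)
print(f"accuracy={acc:.3f} precision={prec:.3f} recall={rec:.3f} specificity={spec:.3f}")
```

Note that recall falls as the false-negative count grows, which is exactly why a classifier with weak recall is the clinically risky one.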
-
The Covid-19 pandemic evidenced the need for Computer-Aided Diagnostic (CAD) systems that analyze medical images, such as CT and MRI scans and X-rays, to assist specialists in disease diagnosis. CAD systems have been shown to be effective at detecting COVID-19 in chest X-ray and CT images, with some studies reporting high accuracy and sensitivity. Moreover, they can detect disease in patients who have no symptoms, helping to prevent the spread of the virus. There are several types of CAD systems, including machine learning-based, deep learning-based, and transfer learning-based approaches. This chapter proposes a pipeline for feature extraction and classification of Covid-19 in X-ray images, using the VGG-16 CNN for transfer-learning-based feature extraction and machine learning classifiers for classification. Five classifiers were evaluated on five metrics: accuracy, specificity, sensitivity, geometric mean, and area under the curve (AUC). The SVM classifier presented the best performance for Covid-19 classification, achieving 90% accuracy, 97.5% specificity, 82.5% sensitivity, 89.6% geometric mean, and 90% AUC. On the other hand, the Nearest Centroid (NC) classifier presented poor sensitivity and geometric mean results, achieving 33.9% and 54.07%, respectively.
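The classification stage of such a pipeline can be sketched as follows. This is a sketch under stated assumptions: synthetic 512-dimensional vectors stand in for real VGG-16 features, and scikit-learn's `SVC` with default settings stands in for the tuned classifier; the geometric mean is the square root of sensitivity times specificity:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-ins for CNN-extracted feature vectors (512-d per image).
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 512))
y = np.repeat([0, 1], 200)          # 0 = normal, 1 = Covid-19
X[y == 1] += 0.25                   # inject class structure so the demo is non-trivial

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
pred = SVC(kernel="rbf").fit(X_tr, y_tr).predict(X_te)

tp = np.sum((pred == 1) & (y_te == 1)); tn = np.sum((pred == 0) & (y_te == 0))
fp = np.sum((pred == 1) & (y_te == 0)); fn = np.sum((pred == 0) & (y_te == 1))
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
g_mean = np.sqrt(sensitivity * specificity)   # geometric mean of the two rates
print(f"sensitivity={sensitivity:.3f} specificity={specificity:.3f} g-mean={g_mean:.3f}")
```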
-
In the last few years, the tourism industry has experienced rapid expansion and diversification, making it one of the fastest-growing economic sectors in the world. Consequently, the hotel industry has a significant effect on the environment's long-term viability. Many hotels have begun voluntarily implementing environmentally sustainable practices as they become more aware of their ecological footprint. There has been a great deal of discussion about the effects of hotel operations on the environment and on tourism sustainability in Macau, and it is to minimize these negative impacts that hoteliers have adopted green practices. By developing sustainability reports, hotels can set goals, measure performance, and manage change, resulting in better sustainability. Reporting can also be viewed as a strategy to ensure stakeholders know what the company does. Based on an analysis of the official sustainability reports of four major hotel chains, the objective of this study is twofold. First, seven categories of sustainable practices effectively adopted by these hotel chains are identified and clustered. Second, the study presents the areas in which some hotels performed more efficiently than others, taking the UN Sustainable Development Goals (SDGs) as a reference. The results allow a comprehensive clustered analysis of the industry in a highly developed gaming and entertainment area of South China and a clear comparison between relevant players and their concerns about sustainability practices.
-
Monitoring signals such as fetal heart rate (FHR) are important indicators of fetal well-being. Computer-assisted analysis of FHR patterns has been successfully used as a decision-support tool. However, the absence of a gold standard for the building blocks of decision-making in the system design process impairs the development of new solutions. Here we propose a prognostic model based on advanced signal processing techniques and machine learning algorithms for fetal state assessment within a comprehensive evaluation process. Feature-engineering-based and time-series-based machine learning classifiers were modeled under three data segmentation schemas for the CTU-UHB, HUFA, and DB-TRIUM datasets, and generalization performance was assessed by a two-way cross-dataset evaluation. The feature-based algorithms outperformed the time-series ones in data-limited scenarios. The Support Vector Machine (SVM) obtained the best results on the individual datasets: specificity of 85.6% and sensitivity of 67.5%. On the other hand, the best generalization results were achieved by the Multi-Layer Perceptron (MLP), with a specificity of 71.6% and a sensitivity of 61.7%. The overall process combined techniques and methods that increased the final prognostic model's performance, achieving relevant results while requiring less data than state-of-the-art fetal status assessment solutions.
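The two-way cross-dataset protocol can be sketched as follows: fit on one dataset, test on the other, then swap directions. The synthetic datasets and the plain SVC stand in for the real FHR feature sets and tuned models; the deliberate distribution shift between the two datasets mimics the cross-dataset generalization gap:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

def make_dataset(n, shift):
    """Synthetic two-class feature set; `shift` controls class separation
    and differs per dataset to mimic distribution shift between sources."""
    X = rng.normal(size=(n, 20))
    y = rng.integers(0, 2, size=n)
    X[y == 1] += shift
    return X, y

datasets = {"A": make_dataset(300, 0.8), "B": make_dataset(300, 0.6)}

# Two-way cross-dataset evaluation: train on A / test on B, then the reverse.
for train_name, test_name in [("A", "B"), ("B", "A")]:
    X_tr, y_tr = datasets[train_name]
    X_te, y_te = datasets[test_name]
    acc = SVC().fit(X_tr, y_tr).score(X_te, y_te)
    print(f"train {train_name} -> test {test_name}: accuracy {acc:.2f}")
```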
-
Crowdsensing exploits the sensing abilities offered by smartphones and users' mobility. With the aid of crowdsensing, users can mutually help each other as a community. The potential of crowdsensing for improving public health has yet to be fully realized. A protocol based on gamification to encoura...
-
Association rule mining by the Apriori method has been one of the most popular data mining techniques for decades: knowledge in the form of item-association rules is harvested from a dataset. The quality of the rules nevertheless depends on the concentration of frequent items in the input dataset; when the dataset becomes large, the items are scattered far apart. It is known from the literature that clustering helps produce data groups that are concentrated with frequent items. Among all the clusters generated by a clustering algorithm, one or more will contain suitable, frequent items, and the association rules mined from such clusters are assured of better quality, in terms of higher confidence, than those mined from the whole dataset. However, it is not known in advance which cluster is the suitable one until every cluster has been tried by association rule mining, which is time-consuming if done by brute force. In this paper, a statistical property called prior probability is investigated as a means of selecting the best of the many clusters produced by a clustering algorithm, as a pre-processing step before association rule mining. Experimental results indicate a correlation between the prior probability of the best cluster and the relatively high quality of the association rules generated from it. The results are significant because they make it possible to know which cluster should be used for association rule mining instead of testing them all exhaustively.
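The effect of frequent-item concentration can be illustrated with a toy example: an itemset whose support falls below Apriori's minimum-support threshold in the whole dataset can exceed it inside a single concentrated cluster, so its rules become minable there. The transactions and the cluster split below are invented for illustration, not the paper's data:

```python
# Toy illustration of why clustering before association rule mining helps:
# frequent items scattered across a large dataset are concentrated in one
# cluster, where the itemset's support clears the Apriori threshold.
transactions = [
    {"bread", "butter"}, {"bread", "butter", "milk"}, {"bread", "butter"},
    {"bread", "milk"},
    {"tea", "sugar"}, {"tea", "lemon"}, {"coffee", "sugar"}, {"tea", "sugar", "lemon"},
]
clusters = {0: transactions[:4], 1: transactions[4:]}  # as if produced by a clustering step

def support(data, itemset):
    """Fraction of transactions containing every item of `itemset`."""
    return sum(1 for t in data if itemset <= t) / len(data)

itemset = {"bread", "butter"}
min_support = 0.5
print("whole dataset:", support(transactions, itemset))   # 0.375 -> pruned by Apriori
print("cluster 0    :", support(clusters[0], itemset))    # 0.75  -> survives pruning
```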
-
In this chapter, a mathematical model that generically explains the propagation of a pandemic is proposed, helping to identify the fundamental parameters of an outbreak. Three free parameters are identified, which can be reduced to only two independent parameters. The model is inspired by the concept of spontaneous symmetry breaking, normally used in quantum field theory, and it makes it possible to analyze the complex data of the pandemic in a compact way. Data from 12 different countries are considered and the results presented. The application of nonlinear quantum physics equations to model epidemiological time series is an innovative and promising approach.
-
At the beginning of 2020, the World Health Organization (WHO) started a coordinated global effort to counter the potential exponential spread of the SARS-CoV-2 virus, responsible for the coronavirus disease officially named COVID-19. This comprehensive initiative included a research roadmap, published in March 2020, covering nine dimensions, from epidemiological research to diagnostic tools and vaccine development. In an unprecedented way, the areas of study related to the pandemic received funding and strong attention from different research communities (universities, government, industry, etc.), resulting in an exponential increase in the number of publications and results achieved in a small window of time. Outstanding research cooperation projects were implemented during the outbreak, and innovative technologies were developed and significantly improved. Clinical and laboratory processes were refined, while managerial personnel were supported by countless models and computational tools for decision-making. This chapter introduces an overview of this favorable scenario and highlights a necessary discussion of ethical issues in research related to COVID-19 and the challenge of low-quality research that focuses only on publishing techniques and approaches with limited scientific evidence or practical application. A legacy of lessons learned from this unique period of human history should influence and guide the scientific and industrial communities in the future.
-
Even with more than 12 billion vaccine doses administered globally, the Covid-19 pandemic has caused severe economic, social, environmental, and healthcare impacts worldwide. Computer-Aided Diagnostic (CAD) systems can serve as a complementary method to help doctors identify regions of interest in images and detect diseases. In addition, these systems can help doctors analyze the status of a disease and check its progression or regression. To analyze the viability of using CNNs to differentiate Covid-19-positive from Covid-19-negative CT images, we used a dataset collected by Union Hospital (HUST-UH) and Liyuan Hospital (HUST-LH) and made available on the Kaggle platform. The main objective of this chapter is to present the results of applying two state-of-the-art CNNs to a Covid-19 CT scan image database, evaluating whether images with imaging features associated with Covid-19 pneumonia can be differentiated from images whose features are irrelevant to Covid-19 pneumonia. Two pre-trained neural networks, ResNet50 and MobileNet, were fine-tuned on the datasets under analysis. Both CNNs obtained promising results, with the ResNet50 network achieving a precision of 0.97, a recall of 0.96, an F1-score of 0.96, and 39 false negatives. The MobileNet classifier obtained a precision of 0.94, a recall of 0.94, an F1-score of 0.94, and a total of 20 false negatives.
-
It is known that probability is not a conserved quantity in the stock market, given that the market corresponds to an open system. In this paper we analyze the flow of probability in this system by expressing the ideal Black-Scholes equation in Hamiltonian form. We then analyze how the non-conservation of probability affects the stability of stock prices. Finally, we find the conditions under which probability might be conserved in the market, challenging in this way the non-Hermitian nature of the Black-Scholes Hamiltonian.
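The Hamiltonian form referred to can be sketched from the standard quantum-finance formulation (a reconstruction from the literature, not necessarily the paper's exact notation): writing the stock price as $S = e^{x}$ turns the ideal Black-Scholes equation for the option price $C(x,t)$ into a Schrödinger-like evolution equation,

```latex
\frac{\partial C}{\partial t} = \hat{H}_{BS}\, C, \qquad
\hat{H}_{BS} = -\frac{\sigma^{2}}{2}\frac{\partial^{2}}{\partial x^{2}}
  + \left(\frac{\sigma^{2}}{2} - r\right)\frac{\partial}{\partial x} + r,
```

where $\sigma$ is the volatility and $r$ the risk-free rate. The first-derivative (drift) term carries a real coefficient, so $\hat{H}_{BS} \neq \hat{H}_{BS}^{\dagger}$: the Hamiltonian is non-Hermitian and the associated evolution does not preserve total probability, which is precisely the non-conservation analyzed here.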
-
Nowadays, the increasing volume of medical diagnostic and clinical data provides complementary references for doctors when diagnosing patients. For example, with medical data such as electrocardiography (ECG), machine learning algorithms can be used to identify and diagnose heart disease, reducing doctors' workload. However, ECG data is exposed to various kinds of noise and interference in practice, and medical diagnosis based only on one-dimensional ECG data is not trustworthy enough. By extracting new features from other types of medical data, we can implement enhanced recognition methods, an approach called multimodal learning. Multimodal learning lets models process data from a range of different sources, eliminates the requirement to train each learning modality separately, and improves model robustness through the diversity of the data. A growing number of articles in recent years have investigated how to extract data from different sources and build accurate multimodal machine learning or deep learning models for medical diagnostics. This paper reviews and summarizes several recent papers dealing with multimodal machine learning in disease detection and identifies topics for future research.
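The core idea reviewed here, combining features from several sources, can be sketched in its simplest form, feature-level ("early") fusion by concatenation. The modalities and dimensions below are invented for illustration:

```python
import numpy as np

# Minimal sketch of early fusion: per-sample feature vectors from two
# modalities are concatenated into a single vector before classification.
rng = np.random.default_rng(2)
ecg_features = rng.normal(size=(100, 16))   # e.g. HRV/morphology features from ECG
img_features = rng.normal(size=(100, 32))   # e.g. features from an imaging modality

fused = np.concatenate([ecg_features, img_features], axis=1)
print(fused.shape)    # (100, 48) -> input to any downstream classifier
```

Later-stage ("late") fusion would instead train one model per modality and combine their predictions, trading richer cross-modal interactions for robustness to a missing modality.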
-
The continuous development of robust machine learning algorithms in recent years has improved solutions in many fields of medicine, including the rapid diagnosis and detection of high-risk patients with poor prognosis as coronavirus disease 2019 (COVID-19) spread globally, as well as early intervention for patients and the optimization of medical resources. Here, we propose a fully automated machine learning system to classify the severity of COVID-19 from electrocardiogram (ECG) signals. We retrospectively collected 100 five-minute ECGs from 50 patients in two different positions, upright and supine. We processed the surface ECG to obtain QRS complexes and HRV indices for the RR series, yielding a total of 43 features. We compared 19 machine learning classification algorithms, following the different approaches explained in the methodology section.
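Two of the standard time-domain HRV indices commonly derived from an RR series can be sketched as follows. The RR series here is synthetic, and the study's 43-feature set is far richer than these two indices:

```python
import numpy as np

# SDNN and RMSSD, two classic time-domain HRV indices, computed from a
# synthetic RR-interval series (in milliseconds). In the study, RR intervals
# come from QRS complexes detected on the surface ECG.
rng = np.random.default_rng(3)
rr_ms = 800 + rng.normal(0, 40, size=300)       # ~75 bpm with Gaussian variability

sdnn = np.std(rr_ms, ddof=1)                    # overall variability of RR intervals
rmssd = np.sqrt(np.mean(np.diff(rr_ms) ** 2))   # short-term, beat-to-beat variability
print(f"SDNN={sdnn:.1f} ms, RMSSD={rmssd:.1f} ms")
```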
-
In 2020, the World Health Organization declared Coronavirus Disease 19 a global pandemic. While detecting COVID-19 is essential in controlling the disease, prognosis prediction is crucial in reducing disease complications and patient mortality. For that, standard protocols consider adopting medical imaging tools to analyze cases of pneumonia and complications. Nevertheless, some patients develop different symptoms and/or cannot be moved to a CT scan room; in other cases, the devices are not available. The adoption of ambulatory monitoring examinations, such as electrocardiography (ECG), can be considered a viable tool to assess the patient's cardiovascular condition and to act as a predictor of future disease outcomes. In this investigation, ten non-linear features (Energy, Approximate Entropy, Logarithmic Entropy, Shannon Entropy, Hurst Exponent, Lyapunov Exponent, Higuchi Fractal Dimension, Katz Fractal Dimension, Correlation Dimension, and Detrended Fluctuation Analysis) were extracted from two ECG signals collected in two different patient positions. Windows of 1-second segments, under six windowing schemes of signal analysis crops, were evaluated through statistical analysis. Three categories of outcome are considered for patient status (Low, Moderate, and Severe), and four classification scenarios are tested: three pairwise comparisons (Low vs. Moderate, Low vs. Severe, Moderate vs. Severe) and one multi-class comparison (All vs. All). The results indicate statistically significant parameter distributions for all comparisons: Low vs. Moderate, Approximate Entropy (p = 0.0067 < 0.05); Low vs. Severe, Correlation Dimension (p = 0.0087 < 0.05); Moderate vs. Severe, Correlation Dimension (p = 0.0029 < 0.05); All vs. All, Correlation Dimension (p = 0.0185 < 0.05).
The non-linear analysis of the time-frequency representation of the ECG signal can be considered a promising tool for describing and distinguishing COVID-19 severity along its different stages.
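One of the listed features, the Katz fractal dimension, can be sketched from its standard formulation, KFD = log10(n) / (log10(n) + log10(d / L)), where L is the total curve length, d the maximum distance from the first sample, and n the number of steps. The input signal below is synthetic rather than an actual ECG window:

```python
import numpy as np

def katz_fd(x):
    """Katz fractal dimension of a 1-D signal, standard formulation."""
    x = np.asarray(x, dtype=float)
    L = np.abs(np.diff(x)).sum()        # total length of the curve
    d = np.max(np.abs(x - x[0]))        # farthest excursion from the first sample
    n = len(x) - 1                      # number of steps along the curve
    return np.log10(n) / (np.log10(n) + np.log10(d / L))

t = np.linspace(0, 1, 1000)
print(katz_fd(np.sin(2 * np.pi * 5 * t)))   # >= 1 for any 1-D curve
```

Since d can never exceed L, the logarithm in the denominator is non-positive and the dimension is always at least 1, rising as the waveform becomes more convoluted.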
-
COVID-19 hit the world unprepared as the deadliest pandemic of the century. Governments and authorities, as the leaders and decision-makers fighting the virus, tapped enormously into the power of artificial intelligence and its predictive models for urgent decision support. This book showcases a collection of important predictive models that were used during the pandemic, and discusses and compares their efficacy and limitations. Readers from both the healthcare industry and academia can gain unique insights into how predictive models were designed and applied to epidemic data. Taking COVID-19 as a case study and showcasing the lessons learnt, this book will enable readers to be better prepared for virus epidemics or pandemics in the future.