Your search
Results: 111 resources
-
Monitoring signals such as the fetal heart rate (FHR) are important indicators of fetal well-being. Computer-assisted analysis of FHR patterns has been successfully used as a decision-support tool. However, the absence of a gold standard for the decision-making building blocks in the system design process impairs the development of new solutions. Here we propose a prognostic model based on advanced signal processing techniques and machine learning algorithms for fetal state assessment within a comprehensive evaluation process. Feature-engineering-based and time-series-based machine learning classifiers were modeled under three data segmentation schemes for the CTU-UHB, HUFA, and DB-TRIUM datasets, and the generalization performance was assessed by a two-way cross-dataset evaluation. The feature-based algorithms outperformed the time-series ones in data-limited scenarios. The Support Vector Machine (SVM) obtained the best results on the individual datasets: a specificity of 85.6% and a sensitivity of 67.5%. On the other hand, the most effective generalization results were achieved by the Multi-Layer Perceptron (MLP), with a specificity of 71.6% and a sensitivity of 61.7%. The overall process combined techniques and methods that increased the final prognostic model's performance, achieving relevant results while requiring a smaller amount of data than state-of-the-art fetal status assessment solutions.
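The two-way cross-dataset protocol described above (train on one dataset, test on the other, in both directions, scoring sensitivity and specificity) can be sketched as follows. The nearest-centroid classifier is only a hypothetical stand-in for the SVM and MLP models used in the chapter, and the feature vectors are illustrative:

```python
import math

def nearest_centroid_fit(X, y):
    # Stand-in classifier: one feature centroid per class.
    # (The chapter uses SVM and MLP; this is only a placeholder.)
    centroids = {}
    for c in sorted(set(y)):
        rows = [x for x, label in zip(X, y) if label == c]
        centroids[c] = [sum(col) / len(rows) for col in zip(*rows)]
    return centroids

def predict(centroids, x):
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return min(centroids, key=lambda c: dist(centroids[c], x))

def cross_dataset_eval(ds_a, ds_b):
    # Two-way cross-dataset evaluation: fit on one dataset,
    # score on the other, in both directions.
    results = {}
    for name, (train, test) in {"A->B": (ds_a, ds_b), "B->A": (ds_b, ds_a)}.items():
        model = nearest_centroid_fit(*train)
        preds = [predict(model, x) for x in test[0]]
        tp = sum(p == t == 1 for p, t in zip(preds, test[1]))
        tn = sum(p == t == 0 for p, t in zip(preds, test[1]))
        fp = sum(p == 1 and t == 0 for p, t in zip(preds, test[1]))
        fn = sum(p == 0 and t == 1 for p, t in zip(preds, test[1]))
        results[name] = {
            "sensitivity": tp / (tp + fn) if tp + fn else 0.0,
            "specificity": tn / (tn + fp) if tn + fp else 0.0,
        }
    return results
```

The point of the protocol is that a model tuned on one hospital's recordings is scored on another's, so the reported numbers measure generalization rather than in-dataset fit.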
-
Crowdsensing exploits the sensing abilities offered by smartphones and users' mobility. Users can mutually help each other as a community with the aid of crowdsensing. The potential of crowdsensing has yet to be fully realized for improving public health. A protocol based on gamification to encoura...
-
Association rule mining by the Apriori method has been one of the popular data mining techniques for decades, where knowledge in the form of item-association rules is harvested from a dataset. The quality of item-association rules nevertheless depends on the concentration of frequent items in the input dataset. When the dataset becomes large, the items are scattered far apart. It is known from previous literature that clustering helps produce data groups that are concentrated with frequent items. Among all the clusters generated by a clustering algorithm, there must be one or more that contain suitable and frequent items. In turn, the association rules mined from such clusters are assured of better quality, in terms of higher confidence, than those mined from the whole dataset. However, it is not known in advance which cluster is the suitable one until every cluster has been tried by association rule mining, and testing them all by brute force is time-consuming. In this paper, a statistical property called prior probability is investigated with respect to selecting the best of the many clusters produced by a clustering algorithm, as a pre-processing step before association rule mining. Experimental results indicate a correlation between the prior probability of the best cluster and the relatively high quality of the association rules generated from that cluster. The results are significant because they make it possible to know which cluster is best used for association rule mining instead of testing them all exhaustively.
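The rule-quality measure at stake can be sketched minimally: for a rule A -> B, confidence is the support of the pair {A, B} divided by the support of the antecedent A. The cluster-selection idea would then compare rules mined from a candidate cluster against this whole-dataset baseline. The toy transaction list below is illustrative:

```python
from itertools import combinations

def rules_with_confidence(transactions, min_support=0.5):
    # Count single items and item pairs, then score each rule
    # A -> B by confidence = count(A, B) / count(A).
    n = len(transactions)
    item_count, pair_count = {}, {}
    for t in transactions:
        for i in set(t):
            item_count[i] = item_count.get(i, 0) + 1
        for a, b in combinations(sorted(set(t)), 2):
            pair_count[(a, b)] = pair_count.get((a, b), 0) + 1
    rules = []
    for (a, b), c in pair_count.items():
        if c / n >= min_support:          # keep only frequent pairs
            rules.append((a, b, c / item_count[a]))  # conf(a -> b)
            rules.append((b, a, c / item_count[b]))  # conf(b -> a)
    return rules
```

In the paper's setting, the prior probability of a cluster (its share of the whole dataset) is the signal used to pick which cluster to mine, avoiding running this procedure on every cluster.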
-
In this chapter, a mathematical model generically explaining the propagation of a pandemic is proposed, helping to identify the fundamental parameters related to an outbreak in general. Three free parameters for the pandemic are identified, which can finally be reduced to only two independent parameters. The model is inspired by the concept of spontaneous symmetry breaking, normally used in quantum field theory, and it provides the possibility of analyzing the complex data of the pandemic in a compact way. Data from 12 different countries are considered and the results presented. The application of nonlinear quantum physics equations to model epidemiologic time series is an innovative and promising approach.
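As a rough illustration of what a low-parameter epidemic curve looks like (this is the standard logistic growth function, not the authors' field-theoretic model), with carrying capacity K, growth rate r, and inflection time t0:

```python
import math

def logistic_cases(t, K, r, t0):
    # Cumulative cases under logistic growth: saturates at K,
    # grows at rate r, and inflects at t = t0 where C(t0) = K / 2.
    return K / (1 + math.exp(-r * (t - t0)))
```

A model with few independent parameters is exactly what lets heterogeneous country data be compared "in a compact way": each country's outbreak collapses to a small parameter tuple.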
-
At the beginning of 2020, the World Health Organization (WHO) started a coordinated global effort to counter the potential exponential spread of the SARS-CoV-2 virus, responsible for the coronavirus disease, officially named COVID-19. This comprehensive initiative included a research roadmap published in March 2020, covering nine dimensions, from epidemiological research to diagnostic tools and vaccine development. In an unprecedented way, the areas of study related to the pandemic received funding and strong attention from different research communities (universities, government, industry, etc.), resulting in an exponential increase in the number of publications and results achieved in such a small window of time. Outstanding research cooperation projects were implemented during the outbreak, and innovative technologies were developed and improved significantly. Clinical and laboratory processes were improved, while managerial personnel were supported by countless models and computational tools for the decision-making process. This chapter aims to introduce an overview of this favorable scenario and highlight a necessary discussion about ethical issues in research related to COVID-19 and the challenge of low-quality research that focuses only on publishing techniques and approaches with limited scientific evidence or practical application. A legacy of lessons learned from this unique period of human history should influence and guide the scientific and industrial communities in the future.
-
Even with more than 12 billion vaccine doses administered globally, the COVID-19 pandemic has caused several global economic, social, environmental, and healthcare impacts. Computer-Aided Diagnostic (CAD) systems can serve as a complementary method to aid doctors in identifying regions of interest in images and help detect diseases. In addition, these systems can help doctors analyze the status of the disease and check for its progression or regression. To analyze the viability of using CNNs for differentiating COVID-19 CT-positive images from COVID-19 CT-negative images, we used a dataset collected by Union Hospital (HUST-UH) and Liyuan Hospital (HUST-LH) and made available on the Kaggle platform. The main objective of this chapter is to present results from applying two state-of-the-art CNNs to a COVID-19 CT scan image database, to evaluate the possibility of differentiating images with imaging features associated with COVID-19 pneumonia from images with imaging features irrelevant to COVID-19 pneumonia. Two pre-trained neural networks, ResNet50 and MobileNet, were fine-tuned for the datasets under analysis. Both CNNs obtained promising results, with the ResNet50 network achieving a Precision of 0.97, a Recall of 0.96, an F1-score of 0.96, and 39 false negatives. The MobileNet classifier obtained a Precision of 0.94, a Recall of 0.94, an F1-score of 0.94, and a total of 20 false negatives.
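The Precision, Recall, and F1-score reported for the two networks follow the standard confusion-matrix definitions, which can be sketched as:

```python
def prf1(tp, fp, fn):
    # Precision, recall, and F1 from confusion-matrix counts:
    # precision = TP / (TP + FP), recall = TP / (TP + FN),
    # F1 = harmonic mean of precision and recall.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```

Note that these three scores are computed over the positive class only, which is why a model can post higher precision/recall overall while still accumulating more false negatives, as the ResNet50 vs. MobileNet comparison above shows.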
-
This research aims to predict consumer ad preferences by detecting seven basic emotions, together with attention and engagement, triggered by advertising, through the analysis of two specific physiological monitoring tools, electrodermal activity (EDA) and Facial Expression Analysis (FEA), applied to video advertising, offering a twofold contribution of significant value. First, we identify the most relevant physiological features for consumer preference prediction. We integrated a statistical module encompassing inferential and exploratory analysis tools, which identified emotions such as Joy, Disgust, and Surprise, enabling the statistical differentiation of preferences concerning various advertisements. Second, we present an artificial intelligence (AI) system founded on machine learning techniques, encompassing k-Nearest Neighbors, Support Vector Machine, and Random Forest (RF). Our findings show that the RF technique emerged as the top performer, with an 81% Accuracy, 84% Precision, 79% Recall, and an F1-score of 81% in predicting consumer preferences. In addition, our research proposes an eXplainable AI module based on feature importance, which discerned Attention, Engagement, Joy, and Disgust as the four most pivotal features influencing consumer ad preference prediction. The results indicate that computerized intelligent systems based on EDA and FEA data can be used to predict consumer ad preferences based on videos and can be effectively used as supporting tools for marketing specialists.
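Of the three classifiers compared, k-Nearest Neighbors is simple enough to sketch directly. The feature vectors below stand in for per-viewer EDA/FEA features (attention, engagement, emotion scores) and are purely illustrative:

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    # Plain k-Nearest Neighbors: rank training samples by
    # Euclidean distance to x and take a majority vote of the
    # k closest labels.
    dists = sorted(
        (math.dist(p, x), label) for p, label in zip(train_X, train_y)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]
```

kNN gives no global feature ranking, which is one reason tree ensembles like the study's best-performing Random Forest pair naturally with the feature-importance-based explainability module described above.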
-
It is known that probability is not a conserved quantity in the stock market, given that the market corresponds to an open system. In this paper we analyze the flow of probability in this system by expressing the ideal Black-Scholes equation in Hamiltonian form. We then analyze how the non-conservation of probability affects the stability of stock prices. Finally, we find the conditions under which probability might be conserved in the market, challenging in this way the non-Hermitian nature of the Black-Scholes Hamiltonian.
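Under the standard change of variable x = ln S, the Black-Scholes equation for an option price C can be written in the Schrödinger-like Hamiltonian form commonly used in the quantum-finance literature; the sketch below follows that convention (volatility sigma, risk-free rate r):

```latex
\frac{\partial C}{\partial t} = H_{BS}\, C,
\qquad
H_{BS} = -\frac{\sigma^{2}}{2}\frac{\partial^{2}}{\partial x^{2}}
       + \left(\frac{\sigma^{2}}{2} - r\right)\frac{\partial}{\partial x}
       + r,
\qquad x = \ln S .
```

The first-derivative (drift) term makes \(H_{BS}\) non-Hermitian, which is the structural source of the probability non-conservation the paper analyzes.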
-
Nowadays, the increasing amount of medical diagnostic and clinical data provides more complementary references for doctors to diagnose patients. For example, with medical data such as electrocardiography (ECG), machine learning algorithms can be used to identify and diagnose heart disease, reducing doctors' workload. However, ECG data are always exposed to various kinds of noise and interference in practice, and medical diagnostics based only on one-dimensional ECG data are not reliable enough. By extracting new features from other types of medical data, we can implement enhanced recognition methods, called multimodal learning. Multimodal learning helps models process data from a range of different sources, eliminates the requirement to train each learning modality separately, and improves the robustness of models through the diversity of the data. A growing number of articles in recent years have been devoted to investigating how to extract data from different sources and build accurate multimodal machine learning or deep learning models for medical diagnostics. This paper reviews and summarizes several recent papers dealing with multimodal machine learning in disease detection, and identifies topics for future research.
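Feature-level ("early") fusion, the simplest multimodal strategy surveyed in this area, amounts to concatenating per-sample feature vectors from each modality before training a single model. A minimal sketch, with hypothetical ECG and second-modality feature lists:

```python
def early_fusion(ecg_features, other_features):
    # Early (feature-level) fusion: for each sample, concatenate
    # the feature vectors from two modalities into one joint
    # representation that a single classifier can consume.
    return [e + o for e, o in zip(ecg_features, other_features)]
```

Late fusion, the usual alternative, instead trains one model per modality and combines their predictions; the review's point is that either route lets a noisy ECG channel be compensated by complementary data.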
-
Since early times, the effects of a booming sector on other sectors of a small economy have been of interest to scholars. There is a general perception that the booming Gaming sector has contributed to overall growth in Macau through the trickle-down effect, passing on the benefits of growth to other sectors. After the liberalization of the gaming industry in 2002, this booming sector experienced several years of exponential growth, becoming the driving industry of Macao's economy. Several scholars and researchers have dedicated their studies to the effects of the casino gaming industry as a booming sector in such a small economy. However, there is a gap when it comes to measuring the influence of the Gaming sector as a driving industry for several other sectors, or following industries, of Macau's economy. The purpose of this research study is to investigate to what extent the Gaming sector in Macao has leveraged the other economic sectors, and how the different industries of Macao's economy are related or correlated. A protocol-driven review of the state of the art on the interrelations between economic sectors, and of the different techniques used to study those interrelations, was conducted as a systematic literature review. Given the limited available data on Gross Value Added (GVA), or Gross Domestic Product (GDP) on the supply side, as a central measure of economic activity in the different sectors, several possible interpolation models using auxiliary high-frequency data (indicators) were compared to achieve the optimal interpolation model for each variable. Several forecasts of the future performance of Macau's four major economic sectors were presented, based on different regression techniques. Autoregressive Integrated Moving Average (ARIMA) models were developed to assess the dependence of the future performance of a sector's GVA on its past performance.
Optimal Vector Autoregressive (VAR) models were created to identify the explanatory power of some sectors of Macau's economy over others. Based on available high-frequency (quarterly) auxiliary data, it was possible to interpolate the quarterly GVA per economic sector, available only at low frequency (annually), for the major sectors of Macao's economy. Some sectors have considerable explanatory power over the performance of other sectors; however, the proposed regression models did not identify a clear relation between the performance of the Gaming sector and the performance of the other major sectors of Macao's economy.
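The simplest benchmark for this kind of interpolation is pro-rata temporal disaggregation: distribute the annual GVA total across quarters in proportion to a related high-frequency indicator. The study compares richer regression-based interpolation models, but the basic idea can be sketched as:

```python
def prorata_disaggregate(annual_total, quarterly_indicator):
    # Distribute an annual aggregate across four quarters in
    # proportion to a related quarterly indicator series, so the
    # quarterly values sum back to the annual total.
    s = sum(quarterly_indicator)
    return [annual_total * q / s for q in quarterly_indicator]
```

More sophisticated methods (e.g. regression-based disaggregation) keep this sum constraint while also smoothing the quarterly path, which matters when the indicator is noisy.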
-
Facial expression recognition (FER) is essential for discerning human emotions and is applied extensively in big data analytics, healthcare, security, and user experience enhancement. This study presents a comprehensive evaluation of ten state-of-the-art deep learning models—VGG16, VGG19, ResNet50, ResNet101, DenseNet, GoogLeNet V1, MobileNet V1, EfficientNet V2, ShuffleNet V2, and RepVGG—on the task of facial expression recognition using the FER2013 dataset. Key performance metrics, including test accuracy, training time, and weight file size, were analyzed to assess the learning efficiency, generalization capabilities, and architectural innovations of each model. EfficientNet V2 and ResNet50 emerged as top performers, achieving high accuracy and stable convergence using compound scaling and residual connections, enabling them to capture complex emotional features with minimal overfitting. DenseNet, GoogLeNet V1, and RepVGG also demonstrated strong performance, leveraging dense connectivity, inception modules, and re-parameterization techniques, though they exhibited slower initial convergence. In contrast, lightweight models such as MobileNet V1 and ShuffleNet V2, while excelling in computational efficiency, faced limitations in accuracy, particularly in challenging emotion categories like “fear” and “disgust”. The results highlight the critical trade-offs between computational efficiency and predictive accuracy, emphasizing the importance of selecting appropriate architecture based on application-specific requirements. This research contributes to ongoing advancements in deep learning, particularly in domains such as facial expression recognition, where capturing subtle and complex patterns is essential for high-performance outcomes.
-
The continuous development of robust machine learning algorithms in recent years has helped to improve solutions in many fields of medicine, enabling the rapid diagnosis and detection of high-risk patients with poor prognosis as coronavirus disease 2019 (COVID-19) spreads globally, as well as early intervention and the optimization of medical resources. Here, we propose a fully automated machine learning system to classify the severity of COVID-19 from electrocardiogram (ECG) signals. We retrospectively collected 100 five-minute ECGs from 50 patients in two different positions, upright and supine. We processed the surface ECG to obtain QRS complexes and HRV indices for the RR series, for a total of 43 features. We compared 19 machine learning classification algorithms, which yielded the different approaches explained in the methodology section.
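Two of the classic time-domain HRV indices derived from an RR series, SDNN and RMSSD, can be computed directly. A minimal sketch, assuming RR intervals in milliseconds (the study's full feature set of 43 features is much broader):

```python
import math

def sdnn(rr_ms):
    # SDNN: sample standard deviation of the RR intervals (ms).
    mean = sum(rr_ms) / len(rr_ms)
    return math.sqrt(sum((r - mean) ** 2 for r in rr_ms) / (len(rr_ms) - 1))

def rmssd(rr_ms):
    # RMSSD: root mean square of successive RR differences (ms),
    # a short-term variability index.
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))
```

Indices like these are position-sensitive, which is presumably why the protocol records each patient both upright and supine.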
-
In 2020, the World Health Organization declared Coronavirus Disease 2019 a global pandemic. While detecting COVID-19 is essential in controlling the disease, prognosis prediction is crucial in reducing disease complications and patient mortality. For that, standard protocols consider adopting medical imaging tools to analyze cases of pneumonia and complications. Nevertheless, some patients develop different symptoms and/or cannot be moved to a CT-scan room; in other cases, the devices are not available. The adoption of ambulatory monitoring examinations, such as electrocardiography (ECG), can be considered a viable tool to assess the patient's cardiovascular condition and to act as a predictor of future disease outcomes. In this investigation, ten non-linear features (Energy, Approximate Entropy, Logarithmic Entropy, Shannon Entropy, Hurst Exponent, Lyapunov Exponent, Higuchi Fractal Dimension, Katz Fractal Dimension, Correlation Dimension and Detrended Fluctuation Analysis) were extracted from two ECG signals (collected in two different patient positions). One-second signal windows, cropped under six different windowing schemes, were evaluated employing statistical analysis. Three categories of outcomes are considered for the patient status (Low, Moderate, and Severe), and four classification scenarios are tested: three pairwise comparisons (Low vs. Moderate, Low vs. Severe, Moderate vs. Severe) and one multi-class comparison (All vs. All). The results indicate that statistically significant parameter distributions were found for all comparisons (Low vs. Moderate: Approximate Entropy, p-value = 0.0067 < 0.05; Low vs. Severe: Correlation Dimension, p-value = 0.0087 < 0.05; Moderate vs. Severe: Correlation Dimension, p-value = 0.0029 < 0.05; All vs. All: Correlation Dimension, p-value = 0.0185 < 0.05).
The non-linear analysis of the time-frequency representation of the ECG signal can be considered a promising tool for describing and distinguishing COVID-19 severity across its different stages.
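Of the listed non-linear features, Shannon entropy is the simplest to sketch. The histogram-based estimator below is one common implementation choice, not necessarily the authors':

```python
import math
from collections import Counter

def shannon_entropy(signal, n_bins=8):
    # Shannon entropy (in bits) of the amplitude distribution,
    # estimated from a histogram of the signal values.
    lo, hi = min(signal), max(signal)
    width = (hi - lo) / n_bins or 1.0      # guard for constant signals
    bins = Counter(min(int((v - lo) / width), n_bins - 1) for v in signal)
    n = len(signal)
    return -sum((c / n) * math.log2(c / n) for c in bins.values())
```

A constant signal scores zero, while an amplitude distribution spread evenly across bins scores the maximum log2(n_bins); features like this quantify the irregularity that the severity comparisons above exploit.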
-
COVID-19 hit the world unprepared, as the deadliest pandemic of the century. Governments and authorities, as leaders and decision makers fighting the virus, tapped enormously into the power of artificial intelligence and its predictive models for urgent decision support. This book showcases a collection of important predictive models that were used during the pandemic, and discusses and compares their efficacy and limitations. Readers from both the healthcare industry and academia can gain unique insights into how predictive models were designed and applied to epidemic data. Taking COVID-19 as a case study and showcasing the lessons learnt, this book will enable readers to be better prepared in the event of virus epidemics or pandemics in the future.
Explore
USJ Theses and Dissertations
Academic Units
-
Faculty of Arts and Humanities (1)
- Álvaro Barbosa (1)
-
Faculty of Business and Law (90)
- Alexandre Lobo (90)
- Douty Diakite (2)
- Emil Marques (1)
- Ivan Arraut (3)
- Jenny Phillips (2)
- Sergio Gomes (2)
- Susana C. Silva (1)
-
Institute for Data Engineering and Sciences (2)
- George Du Wencai (2)
Resource type
- Book (3)
- Book Section (31)
- Conference Paper (16)
- Journal Article (40)
- Preprint (2)
- Thesis (19)
United Nations SDGs
Student Research and Output
Publication year
- Between 2000 and 2025 (111)