Results: 49 resources
-
In the wave of digital transformation, Chinese banks have prioritized digital banking services as key strategic goals, aiming to revolutionize the mobile banking experience. This study assesses the factors influencing the willingness to use the various financial and contextual services offered through digital banking. Specifically, it proposes a model based on users' perceptions of mobile banking scenarios and examines how the development of digital banking services influences users' willingness to use them. The study involved qualitative in-depth interviews with 12 mobile banking users, with the interview content analyzed using NVivo qualitative analysis software. The data analysis identified nine core coding categories: Financial Professionalism, Security, Marketing Stimulation, Innovative Products, Use Experience, Strong Relationship, Trust, Perceived Usefulness, and Willingness to Use. These categories were further refined to construct a theoretical model of user willingness in digital banking services, drawing on an optimized Technology Acceptance Model (TAM). The findings provide valuable insights for the banking industry in Macau, aiding in understanding customer needs and supporting the positive development of mobile finance and contextual digital banking services in the region.
-
Construction projects are complex endeavours, with potential obstacles that can cause delays with particularly profound implications for a company's financial health, business continuity, and reputation. It is increasingly recognised that delays are context-specific and multifaceted, requiring more industry-oriented perspectives. This work proposes the exploratory use of Machine Learning based on Classification and Regression Trees (CART) Decision Trees (DTs) to assess the predictive power of these approaches, considering surveys (primary data) collected from 100 specialists with different backgrounds and experience in the construction industry. Survey responses are discussed, followed by the CART DTs, which are used as predictors to clarify the underlying relationships among different variables in a project environment. The major issue identified relates to Project Design, with "The firm is not allowed to apply for an extension of contract period" having two possible predictors: the main factor is "Mistakes, inconsistencies, and ambiguities in specification and drawing", while the other is "Poor site supervision and management by the contractor". The results indicate that the correct use of Artificial Intelligence techniques with relevant data is a potential tool to support scenario analysis and the avoidance of project delays in Project Management.
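The CART approach described above can be sketched with scikit-learn, whose DecisionTreeClassifier implements CART. The toy survey data below is invented for illustration (two hypothetical delay predictors rated 1-5), not the study's 100-specialist dataset:

```python
from sklearn.tree import DecisionTreeClassifier

# Invented survey-style data: each row holds expert ratings (1-5) for two
# candidate delay predictors (e.g. design mistakes, poor site supervision);
# the label marks whether the project was delayed.
X = [
    [5, 4], [4, 5], [5, 5], [2, 1], [1, 2], [1, 1],
    [4, 2], [2, 4], [5, 1], [1, 5], [3, 3], [2, 2],
]
y = [1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0]

# scikit-learn's decision trees use an optimized CART algorithm.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(tree.predict([[5, 5], [1, 1]]))
```

A shallow tree like this exposes the learned split thresholds, which is how the predictor variables behind a delay issue can be read off the fitted model.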
-
China's growing awareness of sustainability has brought relevant aspects to the fore in the move towards a green environment. Since signing in 2016, China has been committed to implementing the Paris Agreement, and the Greater Bay Area (GBA) development plan prioritizes ecology and the pursuit of green development. The primary purpose of this research is to capture companies' insights concerning the implementation of sustainable building projects in Macau. For this multi-case study analysis, primary data were gathered from interviews with two groups involved in the construction project lifecycle, Consultants and Contractors, to analyze their different perceptions and concerns. The interviews considered two themes within the main topic: (1) Perception of Companies' Experience in Sustainable Projects; (2) Key Drivers towards the Implementation of Sustainable Building Projects. In conclusion, the analyzed data show that companies' backgrounds and market particularities affect their corporate performance, especially in connection with green construction frameworks. The data also indicate that regulations and policies are necessary to change corporate and public mindsets.
-
Artificial intelligence (AI) and deep learning (DL) are advancing in stock market prediction, attracting the attention of researchers in computer science and finance. This bibliometric review analyzes 525 articles published from 1991 to 2024 in Scopus-indexed journals, utilizing VOSviewer software to identify key research trends, influential contributors, and burgeoning themes. The bibliometric analysis encompasses a performance analysis of the most prominent scientific contributors and a network analysis of scientific mapping, which includes co-authorship, co-occurrence, citation, bibliographical coupling, and co-citation analyses enabled by the VOSviewer software. Significant hubs of knowledge production include China, the US, India, and the UK, highlighting the global relevance of the field. Various AI and DL technologies are increasingly employed in stock price predictions, with artificial neural networks (ANN) and other methods such as long short-term memory (LSTM), Random Forest, Sentiment Analysis, and Support Vector Machine/Regression (SVM/SVR) appearing among the 1399 keywords counted in the publications. Influential studies such as LeBaron (1999) and Moghaddam (2016) have shaped foundational research, reflected in 8159 citations. This review offers original insights into the bibliometric landscape of AI and DL applications in finance by mapping global knowledge production and identifying the critical AI methods advancing stock market prediction. It enables finance professionals to track technological developments and trends to enhance decision-making and gain market advantage.
-
The gold standard to detect SARS-CoV-2 infection considers testing methods based on Polymerase Chain Reaction (PCR). Still, the time necessary to confirm patient infection can be lengthy, and the process is expensive. In parallel, X-ray and CT scans play an important role in the diagnosis and treatment processes. Hence, a trusted automated technique for identifying and quantifying the infected lung regions would be advantageous. Chest X-rays are two-dimensional images of the patient's chest and provide lung morphological information and other characteristics, like ground-glass opacities (GGO), horizontal linear opacities, or consolidations, which are typical characteristics of pneumonia caused by COVID-19. This chapter presents an AI-based system using multiple Transfer Learning models for COVID-19 classification from chest X-rays. In our experimental design, all the classifiers demonstrated satisfactory accuracy, precision, recall, and specificity. The MobileNet architecture outperformed the other CNNs, achieving excellent results across the evaluated metrics, whereas SqueezeNet presented only moderate recall. In medical diagnosis, false negatives can be particularly harmful, because a false negative can lead to a patient being incorrectly diagnosed as healthy. These results suggest that our Deep Learning classifiers can accurately classify X-ray exams as normal or indicative of COVID-19 with high confidence.
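The four metrics cited above all derive from the confusion matrix, which also makes the role of false negatives explicit. A minimal sketch with hypothetical counts (not the chapter's actual results):

```python
def classification_metrics(tp, fp, tn, fn):
    """Compute the four reported metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)          # sensitivity: low recall means more false negatives
    specificity = tn / (tn + fp)
    return accuracy, precision, recall, specificity

# Hypothetical counts for a COVID-19 vs. normal X-ray classifier.
acc, prec, rec, spec = classification_metrics(tp=90, fp=5, tn=95, fn=10)
print(round(acc, 3), round(rec, 3))
```

Note how recall is the metric degraded by false negatives, which is why a model with "regular" recall is risky in this setting.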
-
The Covid-19 pandemic evidenced the need for Computer Aided Diagnosis (CAD) systems to analyze medical images, such as CT and MRI scans and X-rays, to assist specialists in disease diagnosis. CAD systems have been shown to be effective at detecting COVID-19 in chest X-ray and CT images, with some studies reporting high levels of accuracy and sensitivity. Moreover, such systems can also detect disease in patients who may not have symptoms, helping to prevent the spread of the virus. There are several types of CAD systems, such as Machine and Deep Learning-based and Transfer Learning-based. This chapter proposes a pipeline for feature extraction and classification of Covid-19 in X-ray images, using transfer learning with the VGG-16 CNN for feature extraction and machine learning classifiers. Five classifiers were evaluated against the metrics Accuracy, Specificity, Sensitivity, Geometric mean, and Area Under the Curve (AUC). The SVM classifier presented the best performance for Covid-19 classification, achieving 90% Accuracy, 97.5% Specificity, 82.5% Sensitivity, 89.6% Geometric mean, and 90% AUC. On the other hand, the Nearest Centroid (NC) classifier presented poor Sensitivity and Geometric mean results, achieving 33.9% and 54.07%, respectively.
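The Geometric mean reported above is the square root of the product of Sensitivity and Specificity; plugging in the SVM's figures roughly reproduces the reported 89.6%:

```python
import math

def g_mean(sensitivity, specificity):
    """Geometric mean of sensitivity and specificity."""
    return math.sqrt(sensitivity * specificity)

# The SVM figures from the abstract: 82.5% Sensitivity, 97.5% Specificity.
# sqrt(0.825 * 0.975) is about 0.8969, close to the 89.6% reported.
print(g_mean(0.825, 0.975))
```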
-
Even with more than 12 billion vaccine doses administered globally, the Covid-19 pandemic has caused several global economic, social, environmental, and healthcare impacts. Computer Aided Diagnostic (CAD) systems can serve as a complementary method to aid doctors in identifying regions of interest in images and help detect diseases. In addition, these systems can help doctors analyze the status of a disease and check for its progression or regression. To analyze the viability of using CNNs to differentiate Covid-19-positive CT images from Covid-19-negative ones, we used a dataset collected by Union Hospital (HUST-UH) and Liyuan Hospital (HUST-LH) and made available on the Kaggle platform. The main objective of this chapter is to present the results of applying two state-of-the-art CNNs to a Covid-19 CT scan image database, evaluating the possibility of differentiating images with imaging features associated with Covid-19 pneumonia from images with features irrelevant to it. Two pre-trained neural networks, ResNet50 and MobileNet, were fine-tuned on the datasets under analysis. Both CNNs obtained promising results, with the ResNet50 network achieving a Precision of 0.97, a Recall of 0.96, an F1-score of 0.96, and 39 false negatives. The MobileNet classifier obtained a Precision of 0.94, a Recall of 0.94, an F1-score of 0.94, and a total of 20 false negatives.
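The F1-scores above are the harmonic mean of Precision and Recall, which can be checked directly against the reported figures:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# ResNet50 figures from the abstract: P = 0.97, R = 0.96 -> F1 rounds to 0.96.
print(round(f1_score(0.97, 0.96), 2))
# MobileNet figures: P = R = 0.94 -> F1 = 0.94.
print(round(f1_score(0.94, 0.94), 2))
```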
-
It is plausible to assume that the component waves of ECG signals constitute a unique human characteristic, since the morphology and amplitudes of recorded beats are governed by multiple individual factors. To the best of our knowledge, the issue of automatically classifying different 'identities' of QRS morphology has not been explored in the literature. This work proposes five alternative mathematical models for representing different QRS morphologies, providing the extraction of a set of features related to QRS shape. The technique combines the Gaussian, Mexican-hat, and Rayleigh probability density functions and also includes a mechanism for clipping the waveform of those functions. Searching for the optimal parameters that minimize the normalized RMS error between each mathematical model and a given QRS search window enables finding an optimal model. Such modeling serves as a robust alternative for delineating heartbeats, classifying beat morphologies, detecting subtle and anomalous changes, and compressing QRS complex windows, among other uses. The validation process evaluates the ability of each model to represent different QRS morphology classes within 159 full ECG signal records from the QT database and 584 QRS search windows from the MIT-BIH Arrhythmia database. From the experimental results, we rank the winning rates for which each mathematical model best models and discriminates the most predominant QRS morphologies: Rs, rS, RS, qR, qRs, R, rR's and QS. Furthermore, the average time errors computed for QRS onset and offset locations when using the corresponding winning mathematical models for delineation were, respectively, 12.87±8.5 ms and 1.47±10.06 ms.
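The model-selection idea, fitting each candidate template to a QRS window and keeping the one with the smallest normalized RMS error, can be sketched as follows. The grid search, parameter ranges, and synthetic window are illustrative assumptions, not the paper's actual optimization:

```python
import numpy as np

def gaussian(t, mu, sigma):
    return np.exp(-((t - mu) ** 2) / (2 * sigma ** 2))

def mexican_hat(t, mu, sigma):
    x = (t - mu) / sigma
    return (1 - x ** 2) * np.exp(-x ** 2 / 2)

def best_model(window, t):
    """Grid-search each template's width and return the name of the model
    with the smallest normalized RMS error (illustrative parameters only)."""
    best = None
    for name, f in [("gaussian", gaussian), ("mexican_hat", mexican_hat)]:
        for sigma in np.linspace(0.05, 0.5, 20):
            model = f(t, 0.0, sigma)
            err = np.sqrt(np.mean((window - model) ** 2)) / (window.max() - window.min())
            if best is None or err < best[0]:
                best = (err, name)
    return best[1]

t = np.linspace(-1, 1, 200)
synthetic_qrs = gaussian(t, 0.0, 0.12)   # a monophasic, R-wave-like bump
print(best_model(synthetic_qrs, t))
```

The winning template's name then serves as a shape label for the window, which is the basis for discriminating morphology classes such as R versus QS.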
-
Association Rule Mining with the Apriori method has been one of the most popular data mining techniques for decades: knowledge in the form of item-association rules is harvested from a dataset. The quality of the item-association rules nevertheless depends on the concentration of frequent items in the input dataset; when the dataset becomes large, the items are scattered far apart. Previous literature shows that clustering helps produce data groups concentrated with frequent items. Among all the data clusters generated by a clustering algorithm, there must be one or more clusters that contain suitable and frequent items. In turn, the association rules mined from such clusters are assured of better quality, in terms of higher confidence, than those mined from the whole dataset. However, it is not known in advance which cluster is the suitable one until every cluster is tried by association rule mining, which is time-consuming if done by brute force. In this paper, a statistical property called prior probability is investigated with respect to selecting the best of the many clusters produced by a clustering algorithm, as a pre-processing step before association rule mining. Experiment results indicate that there is a correlation between the prior probability of the best cluster and the relatively high quality of the association rules generated from that cluster. The results are significant, as it becomes possible to know which cluster should be used for association rule mining instead of testing them all exhaustively.
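The rule confidence at the heart of the argument, and the effect of mining a frequent-item-dense cluster rather than the whole dataset, can be sketched with a toy basket dataset. The "cluster" here is hand-picked for illustration, not produced by a clustering algorithm:

```python
def confidence(transactions, antecedent, consequent):
    """confidence(A -> B) = support(A union B) / support(A)."""
    a = frozenset(antecedent)
    ab = a | frozenset(consequent)
    n_a = sum(1 for t in transactions if a <= t)
    n_ab = sum(1 for t in transactions if ab <= t)
    return n_ab / n_a if n_a else 0.0

whole = [frozenset(s) for s in (
    {"milk", "bread"}, {"milk", "bread", "butter"}, {"milk", "chips"},
    {"milk"}, {"beer", "chips"}, {"milk", "bread"},
)]
# A toy "cluster" of similar staple-goods baskets.
cluster = [whole[0], whole[1], whole[5]]

print(confidence(whole, {"milk"}, {"bread"}))    # diluted by the full dataset
print(confidence(cluster, {"milk"}, {"bread"}))  # higher within the dense cluster
```

The gap between the two values is exactly the quality improvement the paper attributes to mining the right cluster, and its prior probability is proposed as a way to pick that cluster without trying them all.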
-
This research aims to predict consumer ad preferences by detecting seven basic emotions, attention, and engagement triggered by advertising, through the analysis of two physiological monitoring tools, electrodermal activity (EDA) and Facial Expression Analysis (FEA), applied to video advertising, offering a twofold contribution of significant value. First, we identify the most relevant physiological features for consumer preference prediction: we integrated a statistical module encompassing inferential and exploratory analysis tools, which identified emotions such as Joy, Disgust, and Surprise, enabling the statistical differentiation of preferences concerning various advertisements. Second, we present an artificial intelligence (AI) system founded on machine learning techniques, encompassing k-Nearest Neighbors, Support Vector Machine, and Random Forest (RF). Our findings show that the RF technique emerged as the top performer, with 81% Accuracy, 84% Precision, 79% Recall, and an F1-score of 81% in predicting consumer preferences. In addition, our research proposes an eXplainable AI module based on feature importance, which discerned Attention, Engagement, Joy, and Disgust as the four most pivotal features influencing consumer ad preference prediction. The results indicate that computerized intelligent systems based on EDA and FEA data can be used to predict consumer ad preferences from videos and can be effectively used as supporting tools for marketing specialists.
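The Random-Forest feature-importance idea behind the eXplainable AI module can be sketched with scikit-learn on invented stand-in features (not the study's EDA/FEA data):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 200
# Invented stand-ins: column 0 ("joy") drives the preference label,
# column 1 ("noise") is pure noise.
joy = rng.random(n)
noise = rng.random(n)
X = np.column_stack([joy, noise])
y = (joy > 0.5).astype(int)

rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
# feature_importances_ sums to 1; the informative feature dominates.
print(rf.feature_importances_)
```

Ranking these importances is how a model's most pivotal predictors (here Attention, Engagement, Joy, and Disgust in the study) are surfaced.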
-
Facial expression recognition (FER) is essential for discerning human emotions and is applied extensively in big data analytics, healthcare, security, and user experience enhancement. This study presents a comprehensive evaluation of ten state-of-the-art deep learning models—VGG16, VGG19, ResNet50, ResNet101, DenseNet, GoogLeNet V1, MobileNet V1, EfficientNet V2, ShuffleNet V2, and RepVGG—on the task of facial expression recognition using the FER2013 dataset. Key performance metrics, including test accuracy, training time, and weight file size, were analyzed to assess the learning efficiency, generalization capabilities, and architectural innovations of each model. EfficientNet V2 and ResNet50 emerged as top performers, achieving high accuracy and stable convergence using compound scaling and residual connections, enabling them to capture complex emotional features with minimal overfitting. DenseNet, GoogLeNet V1, and RepVGG also demonstrated strong performance, leveraging dense connectivity, inception modules, and re-parameterization techniques, though they exhibited slower initial convergence. In contrast, lightweight models such as MobileNet V1 and ShuffleNet V2, while excelling in computational efficiency, faced limitations in accuracy, particularly in challenging emotion categories like “fear” and “disgust”. The results highlight the critical trade-offs between computational efficiency and predictive accuracy, emphasizing the importance of selecting an appropriate architecture based on application-specific requirements. This research contributes to ongoing advancements in deep learning, particularly in domains such as facial expression recognition, where capturing subtle and complex patterns is essential for high-performance outcomes.
-
Consumers' selections and decision-making processes are some of the most exciting and challenging topics in neuromarketing, sales, and branding. From a global perspective, multicultural influences and societal conditions are crucial to consider. The application of neuroscience in international marketing and consumer behavior is an emergent, multidisciplinary field aiming to understand consumers' thoughts, reactions, and selection processes in branding and sales. This study focuses on real-time monitoring of physiological signals using eye-tracking, facial expression recognition, and Galvanic Skin Response (GSR) acquisition methods to analyze consumers' responses, detect emotional arousal, measure attention or relaxation levels, and analyze perception, consciousness, memory, learning, motivation, preference, and decision-making. This research monitored human subjects' reactions to these signals during an experiment designed in three phases consisting of different branding advertisements. Non-advertisement exposure was also monitored, and survey responses were gathered at the end of each phase. A feature extraction module and a data analytics module were implemented to calculate statistical metrics and decision-support tools based on Principal Component Analysis (PCA) and Feature Importance (FI) determination using the Random Forest technique. The results indicate that, compared to image ads, video ads are more effective in attracting consumers' attention and creating more emotional arousal.
-
The gold standard to detect SARS-CoV-2 infection considers testing methods based on Polymerase Chain Reaction (PCR). Still, the time necessary to confirm patient infection can be lengthy, and the process is expensive. On the other hand, X-ray and CT scans play a vital role in the auxiliary diagnosis process. Hence, a trusted automated technique for identifying and quantifying the infected lung regions would be advantageous. Chest X-rays are two-dimensional images of the patient's chest and provide lung morphological information and other characteristics, like ground-glass opacities (GGO), horizontal linear opacities, or consolidations, which are characteristic of pneumonia caused by COVID-19. However, before a computerized diagnostic support system can classify a medical image, a segmentation task should usually be performed to identify the relevant areas to be analyzed and to reduce the risk of noise and misinterpretation caused by other structures present in the images. This chapter presents an AI-based system for lung segmentation in X-ray images using a U-Net CNN model. The system's performance was evaluated on unseen data using metrics such as cross-entropy, the Dice coefficient, and Mean IoU. Our study divided the data into training and evaluation sets using an 80/20 train-test split: the training set was used to train the model, and the held-out test set was used to evaluate the trained model. The evaluation showed that the model achieved a Dice Similarity Coefficient (DSC) of 95%, a cross-entropy score of 97%, and a Mean IoU of 86%.
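The Dice coefficient and IoU used above compare a predicted binary mask with the ground-truth mask; a minimal sketch on toy flattened masks:

```python
def dice_and_iou(pred, truth):
    """Dice = 2|A∩B| / (|A| + |B|); IoU = |A∩B| / |A∪B| over binary masks."""
    inter = sum(p and t for p, t in zip(pred, truth))
    a, b = sum(pred), sum(truth)
    dice = 2 * inter / (a + b)
    iou = inter / (a + b - inter)
    return dice, iou

# Toy flattened binary lung masks (1 = lung pixel), invented for illustration.
pred  = [1, 1, 1, 0, 0, 1, 0, 0]
truth = [1, 1, 0, 0, 0, 1, 1, 0]
print(dice_and_iou(pred, truth))
```

Dice weights the overlap more generously than IoU (here 0.75 vs. 0.6 on the same masks), which is why segmentation papers often report both.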
-
The continuous development of robust machine learning algorithms in recent years has helped to improve solutions in many fields of medicine, including the rapid diagnosis and detection of high-risk patients with poor prognosis as coronavirus disease 2019 (COVID-19) spreads globally, as well as early intervention and the optimization of medical resources. Here, we propose a fully automated machine learning system to classify the severity of COVID-19 from electrocardiogram (ECG) signals. We retrospectively collected 100 five-minute ECGs from 50 patients in two different positions, upright and supine. We processed the surface ECG to obtain QRS complexes and HRV indices for the RR series, totalling 43 features. We compared 19 machine learning classification algorithms, which yielded the different approaches explained in the methodology section.
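Two standard time-domain HRV indices of the kind derived from an RR series can be sketched as follows (the RR values are invented for illustration; the study's 43 features are not specified here):

```python
import math

def hrv_time_domain(rr_ms):
    """SDNN and RMSSD, two standard HRV indices over an RR-interval series (ms)."""
    n = len(rr_ms)
    mean = sum(rr_ms) / n
    # SDNN: sample standard deviation of the RR intervals.
    sdnn = math.sqrt(sum((r - mean) ** 2 for r in rr_ms) / (n - 1))
    # RMSSD: root mean square of successive differences.
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    rmssd = math.sqrt(sum(d ** 2 for d in diffs) / len(diffs))
    return sdnn, rmssd

# Invented RR intervals (ms) from a short recording segment.
rr = [812, 798, 805, 790, 820, 808, 795]
sdnn, rmssd = hrv_time_domain(rr)
print(round(sdnn, 2), round(rmssd, 2))
```

Indices like these, computed per recording position, are the kind of tabular features a bank of classifiers can then be compared on.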
-
In 2020, the World Health Organization declared Coronavirus Disease 2019 a global pandemic. While detecting COVID-19 is essential in controlling the disease, prognosis prediction is crucial in reducing disease complications and patient mortality. For that, standard protocols consider adopting medical imaging tools to analyze cases of pneumonia and complications. Nevertheless, some patients develop different symptoms and/or cannot be moved to a CT-scan room; in other cases, the devices are not available. The adoption of ambulatory monitoring examinations, such as Electrocardiography (ECG), can be considered a viable tool to assess the patient's cardiovascular condition and to act as a predictor of future disease outcomes. In this investigation, ten non-linear features (Energy, Approximate Entropy, Logarithmic Entropy, Shannon Entropy, Hurst Exponent, Lyapunov Exponent, Higuchi Fractal Dimension, Katz Fractal Dimension, Correlation Dimension, and Detrended Fluctuation Analysis) were extracted from two ECG signals, collected in two different patient positions. One-second segment windows, under six windowing schemes of signal analysis, were evaluated employing statistical analysis. Three categories of outcomes are considered for patient status (Low, Moderate, and Severe), and four classification scenarios are tested: three binary comparisons (Low vs. Moderate, Low vs. Severe, Moderate vs. Severe) and one multi-class comparison (All vs. All). The results indicate that statistically significant parameter distributions were found for all comparisons (Low vs. Moderate: Approximate Entropy, p-value = 0.0067 < 0.05; Low vs. Severe: Correlation Dimension, p-value = 0.0087 < 0.05; Moderate vs. Severe: Correlation Dimension, p-value = 0.0029 < 0.05; All vs. All: Correlation Dimension, p-value = 0.0185 < 0.05).
The non-linear analysis of the time-frequency representation of the ECG signal can be considered a promising tool for describing and distinguishing COVID-19 severity along its different stages.
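One of the ten listed features, Shannon Entropy, can be sketched over a signal's amplitude histogram (the binning scheme here is an assumption for illustration, not necessarily the paper's):

```python
import math
from collections import Counter

def shannon_entropy(signal, n_bins=8):
    """Shannon entropy (bits) of a signal's amplitude histogram."""
    lo, hi = min(signal), max(signal)
    width = (hi - lo) / n_bins or 1.0   # guard against a constant signal
    bins = Counter(min(int((x - lo) / width), n_bins - 1) for x in signal)
    n = len(signal)
    return -sum((c / n) * math.log2(c / n) for c in bins.values())

# A constant signal carries zero entropy; an alternating one carries one bit.
print(shannon_entropy([1.0] * 16), shannon_entropy([0.0, 1.0] * 8))
```

Such scalar descriptors, computed per one-second window, are what the statistical comparisons between the Low, Moderate, and Severe groups are run on.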
-
Detecting emotions is a growing field aiming to comprehend and interpret human emotions from various data sources, including text, voice, and physiological signals. The electroencephalogram (EEG) is a unique and promising approach among these sources. EEG is a non-invasive monitoring technique that records the brain's electrical activity through electrodes placed on the scalp's surface. It is used in clinical and research contexts to explore how the human brain responds to emotions and cognitive stimuli. Recently, its use has gained interest in real-time emotion detection, offering a direct approach independent of facial expressions or voice. This is particularly useful in resource-limited scenarios, such as brain–computer interfaces supporting mental health. The objective of this work is to evaluate the classification of emotions (positive, negative, and neutral) in EEG signals using machine learning and deep learning, focusing on Graph Convolutional Neural Networks (GCNN) and based on the analysis of critical attributes of the EEG signal: Differential Entropy (DE), Power Spectral Density (PSD), Differential Asymmetry (DASM), Rational Asymmetry (RASM), Asymmetry (ASM), and Differential Causality (DCAU). The dataset used in the research was the public SEED dataset (SJTU Emotion EEG Dataset), obtained through auditory and visual stimuli in segments from Chinese emotional movies. The experiment employed to evaluate the models was subject-dependent. In this setting, the Deep Neural Network (DNN) achieved an accuracy of 86.08%, surpassing SVM, albeit with significant processing time due to the optimization characteristics inherent to the algorithm. The GCNN algorithm achieved an average accuracy of 89.97% in the subject-dependent experiment.
This work contributes to emotion detection in EEG, emphasizing the effectiveness of different models and underscoring the importance of selecting appropriate features and the ethical use of these technologies in practical applications. The GCNN emerges as the most promising methodology for future research.
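Differential Entropy (DE), the first attribute listed, is commonly computed in SEED-based work under a Gaussian assumption, DE = 0.5 ln(2·pi·e·variance). A minimal sketch (the band-filtered signals here are invented for illustration):

```python
import math

def differential_entropy(signal):
    """Differential Entropy under the common Gaussian assumption:
    DE = 0.5 * ln(2 * pi * e * variance)."""
    n = len(signal)
    mean = sum(signal) / n
    var = sum((x - mean) ** 2 for x in signal) / n
    return 0.5 * math.log(2 * math.pi * math.e * var)

# A higher-variance band-filtered segment yields a higher DE value.
low = [0.1, -0.1, 0.05, -0.05, 0.08, -0.08]
high = [1.0, -1.0, 0.5, -0.5, 0.8, -0.8]
print(differential_entropy(low) < differential_entropy(high))
```

Asymmetry features such as DASM are then typically formed as differences of DE between symmetric left/right electrode pairs.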
-
Small and medium-sized enterprises (SMEs) can benefit significantly from open innovation, gaining access to a broader range of resources and expertise through absorptive capacity and increasing their visibility and reputation. Nevertheless, multiple barriers impact their capacity to absorb new technologies or adapt to develop them. This paper analyzes relevant topics and trends in Open Innovation (OI) and Absorptive Capacity (AC) in SMEs through a bibliometric review, identifying relevant authors and countries and highlighting significant research themes and trends. The defined query string is submitted to the Web of Science database, and the bibliometric analysis is performed with VOSviewer software. The results indicate that the number of scientific publications has consistently increased during the past decade, indicating the growing interest of the scientific community and reflecting industry interest in, and possibly adoption of, OI in connection with Absorptive Capacity. This bibliometric analysis can provide insights into the regions where these research areas are under the most intensive development.
Explore
USJ Theses and Dissertations
Academic Units
-
Faculty of Arts and Humanities
(1)
- Álvaro Barbosa (1)
-
Faculty of Business and Law
(48)
- Alexandre Lobo (48)
- Emil Marques (1)
- Ivan Arraut (1)
- Jenny Phillips (1)
- Sergio Gomes (1)
Resource type
- Book (2)
- Book Section (14)
- Conference Paper (12)
- Journal Article (20)
- Thesis (1)
Publication year
- Between 2000 and 2025 (49)