-
Consumer neuroscience analyzes individuals' preferences by monitoring physiological data, such as brain activity or other bio-signals, to assess purchase decisions. Traditional marketing tactics include customer surveys, product evaluations, and comments. For product or brand marketing and mass production, it is important to understand consumers' neurological responses when they see an ad or test a product. In this work, we use a biclustering method to reduce EEG noise and automated machine learning to classify brain responses. We analyze a neuromarketing EEG dataset containing product-evaluation recordings from 25 participants, collected with a 14-channel Emotiv EPOC+ device while they examined consumer items. The research methodology comprised four stages. First, the Welch transform was used to filter the raw EEG data. Second, the best biclusterings of the transformed signal were used to train different classification models, with each biclustering evaluated by a separate classifier using the F1-score. Third, the H2O.ai AutoML library was used to select the optimal biclustering and models. Finally, instead of traditional procedures, two thresholds were applied: values above the first threshold indicate customer satisfaction, values below the second threshold reflect dissatisfaction, and values between the two are classified as uncertain. We outperform the state of the art with an F1-score of 0.95.
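The two-threshold decision rule described above can be sketched as follows (a minimal illustration; the function name and the default threshold values are hypothetical, not taken from the paper):

```python
def label_preference(score, upper=0.7, lower=0.3):
    """Map a model's output score to a preference label using two thresholds.

    Scores at or above `upper` indicate customer satisfaction, scores at or
    below `lower` indicate dissatisfaction, and anything in between is
    treated as uncertain.
    """
    if score >= upper:
        return "like"
    if score <= lower:
        return "dislike"
    return "uncertain"
```

For example, a classifier output of 0.85 would be labeled "like", while 0.5 falls in the uncertain band between the two thresholds.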
-
The potential of blockchain technology extends beyond cryptocurrencies and has the power to transform various sectors, including accounting and auditing. Its integration into auditing practices presents opportunities and challenges, and auditors must navigate new standards and engage with clients effectively. Blockchain technology provides tamper-proof record-keeping and fraud prevention, enhancing efficiency, transparency, and security in domains such as finance, insurance, healthcare, education, e-voting, and supply chain management. This paper conducts a bibliometric analysis of the blockchain technology literature to gain insights into the current state and future directions of blockchain technology in auditing. The study identifies significant research themes and trends using keyword and citation analysis. The VOSviewer software was used to analyze the data and visualize the results. Findings reveal significant growth in blockchain research, particularly from 2021 onwards, with China emerging as a leading contributor, followed by the USA, India, and the UK. This study provides valuable insights into current trends, key contributors, and global patterns in blockchain technology research within auditing practices, and future research may explore these thematic areas in greater depth.
-
The spontaneous symmetry breaking phenomenon, applied to quantum finance, considers that the martingale state in the stock market corresponds to a ground (vacuum) state if we express the financial equations in Hamiltonian form. The original analysis of this phenomenon completely ignores the kinetic terms in the neighborhood of the minimum of the potential. This is correct in most cases. However, when we deal with the martingale condition, it turns out that the kinetic terms can also behave as potential terms and thus shift the effective location of the vacuum (martingale). In this paper, we analyze the effective symmetry-breaking patterns and the associated vacuum degeneracy in these special circumstances. Within the same scenario, we analyze the connection between the flow of information and the multiplicity of martingale states, providing powerful tools for analyzing the dynamics of stock markets.
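For orientation, the martingale-as-vacuum idea can be stated in the standard quantum-finance formulation (a sketch in the usual Black-Scholes Hamiltonian convention; this paper's notation and sign conventions may differ):

```latex
H_{BS} \;=\; -\frac{\sigma^{2}}{2}\,\frac{\partial^{2}}{\partial x^{2}}
  \;+\; \Big(\frac{\sigma^{2}}{2} - r\Big)\frac{\partial}{\partial x} \;+\; r,
\qquad
H_{BS}\, e^{x} \;=\; 0,
```

where the underlying price is $S = e^{x}$, $\sigma$ is the volatility, and $r$ the risk-free rate. Acting on $e^{x}$, the three terms contribute $-\sigma^{2}/2$, $\sigma^{2}/2 - r$, and $r$, which cancel: the martingale state is annihilated by the Hamiltonian and therefore plays the role of the ground (vacuum) state.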
-
Objective: To explore the application of artificial intelligence (AI) in predicting bone age from X-ray images. Method: The Interdisciplinary Methodology for the Development of Health Technologies (MIDTS) was used to develop a prediction tool. Training was performed with convolutional neural networks (CNNs) on a dataset of 14,036 X-ray images. Results: The tool achieved a coefficient of determination (R²) of 0.94807 and a Mean Absolute Error (MAE) of 6.97, highlighting its precision and potential for clinical application. Conclusion: The project demonstrated great potential for improving bone-age prediction, with room to evolve as the database grows and the AI becomes more sophisticated.
-
After the World Health Organization proclaimed a pandemic due to a disease that originated in China and advanced rapidly across the globe, studies predicting the behavior of epidemics became increasingly popular, mainly related to COVID-19. The critical point of these studies is to discuss the disease's behavior and the progression of the virus's natural course. However, predicting the actual number of infected people has proved to be a difficult task due to a wide range of factors, such as mass testing, social isolation, and underreporting of cases. Therefore, the objective of this work is to understand the behavior of COVID-19 in the state of Ceará in order to forecast the total number of infected people, aid government decisions to control the outbreak, and minimize the social and economic impacts caused by the pandemic. To this end, this work discusses forecasting techniques using machine learning, logistic regression, filters, and epidemiological models. It also brings a new approach to the problem, combining data from Ceará with data from China to generate a hybrid dataset, which provides promising results. Finally, the work compares the different approaches and techniques presented, opening opportunities for future discussion. The study obtains predictions with an R² score of 0.99 for short-term forecasts and 0.93 for long-term forecasts.
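As one concrete example of the kind of epidemiological curve referenced above, cumulative case counts are often modeled with a logistic growth curve (a generic sketch, not the paper's exact model; the parameter values below are illustrative):

```python
import math

def logistic_cases(t, K, r, t0):
    """Logistic growth curve for cumulative cases at time t (in days).

    K  -- carrying capacity (the forecast total number of infected people)
    r  -- growth rate
    t0 -- inflection point, where the curve reaches K / 2
    """
    return K / (1.0 + math.exp(-r * (t - t0)))
```

In practice, (K, r, t0) are estimated by a least-squares fit against the observed case series; at t = t0 the curve reaches exactly half of the carrying capacity K.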
-
Computational tools for medical image processing are promising for effectively detecting COVID-19 as an alternative to expensive and time-consuming RT-PCR tests. For this task, CXR (chest X-ray) and CCT (chest CT) scans are the most common examinations used to support diagnosis through radiological analysis. With these images, it is possible to support diagnosis and determine the disease's severity stage. Computerized COVID-19 quantification and evaluation require an efficient segmentation process. Essential tasks for automatic segmentation tools are precisely identifying the lungs, lobes, bronchopulmonary segments, and infected regions or lesions. Segmented areas can provide handcrafted or self-learned diagnostic criteria for various applications. This chapter presents different techniques for chest CT scan segmentation, reviewing the state of the art of U-Net networks for segmenting COVID-19 CT scans, along with a segmentation experiment for network evaluation. Over 200 training epochs, a Dice coefficient of 0.83 was obtained.
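The Dice coefficient used to evaluate the segmentation above can be computed from two binary masks as follows (a minimal sketch over flattened 0/1 sequences; real pipelines operate on image tensors):

```python
def dice_coefficient(pred, target):
    """Dice similarity between two binary masks given as flat 0/1 sequences:
    2 * |intersection| / (|pred| + |target|).

    Returns 1.0 when both masks are empty, by convention.
    """
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 2.0 * intersection / total if total else 1.0
```

A perfect prediction gives 1.0; the reported 0.83 means the predicted and ground-truth lung/lesion masks overlapped substantially but not perfectly.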
-
COVID-19 is a respiratory disorder caused by the coronavirus SARS-CoV-2. The WHO declared COVID-19 a global pandemic in March 2020, and several nations' healthcare systems were on the verge of collapsing. It therefore became crucial to screen COVID-19-positive patients to maximize limited resources. NAATs and antigen tests are used to diagnose COVID-19 infections. NAATs reliably detect SARS-CoV-2 and seldom produce false-negative results. Because of its specificity and sensitivity, RT-PCR can be considered the gold standard for COVID-19 diagnosis. However, the test's complex equipment is expensive and time-consuming, requiring skilled specialists to collect throat or nasal mucus samples, as well as laboratory facilities and a machine for detection and analysis. Deep learning networks have been used for feature extraction and classification of chest CT scan images and as an innovative detection approach in clinical practice. Because of the medical characteristics of COVID-19 CT scans, the lesions are widely spread and display a range of local aspects, so diagnosing directly with deep learning is difficult. Here, a Transformer and a convolutional neural network module are combined to extract local and global information from CT images. This chapter explains transfer learning with the VGG-16 network on CT examinations and compares convolutional networks with Vision Transformers (ViT). Using ViT increased the VGG-16 network's F1-score to 0.94.
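The F1-score reported above combines precision and recall; for reference, its standard definition from confusion counts (independent of the chapter's experiments, values below are illustrative):

```python
def f1_score(tp, fp, fn):
    """F1-score from confusion counts: the harmonic mean of precision
    (tp / (tp + fp)) and recall (tp / (tp + fn))."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

Because it is a harmonic mean, F1 penalizes an imbalance between precision and recall, which matters in screening tasks where false negatives are costly.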
-
This chapter describes an AutoML strategy to detect COVID-19 on chest X-rays, utilizing transfer learning for feature extraction and the AutoML TPOT framework to identify lung illnesses (such as COVID-19 or pneumonia). MobileNet is a lightweight network that uses depthwise separable convolutions to deepen the network while decreasing parameters and computation. AutoML is a concept of automated machine learning that automates the process of building an ML pipeline inside a constrained computing framework; the term can mean a number of different things depending on context. AutoML has risen to prominence in both industry and academia thanks to the ever-increasing capabilities of modern computers. The Tree-based Pipeline Optimization Tool (TPOT) is a Python-based ML tool that optimizes pipeline efficiency via genetic programming. We use TPOT to build models on MobileNet features extracted from COVID-19 image data. The pipeline classifies Normal, Viral Pneumonia, and Lung Opacity classes with an F1-score of 0.79.
-
Artificial intelligence (AI) and deep learning (DL) are advancing in stock market prediction, attracting the attention of researchers in computer science and finance. This bibliometric review analyzes 525 articles published from 1991 to 2024 in Scopus-indexed journals, utilizing VOSviewer software to identify key research trends, influential contributors, and burgeoning themes. The bibliometric analysis encompasses a performance analysis of the most prominent scientific contributors and a network analysis of scientific mapping, which includes co-authorship, co-occurrence, citation, bibliographic coupling, and co-citation analyses enabled by VOSviewer. Among the 693 country entries recorded, significant hubs of knowledge production include China, the US, India, and the UK, highlighting the global relevance of the field. Various AI and DL technologies are increasingly employed in stock price prediction, with artificial neural networks (ANN) and other methods, such as long short-term memory (LSTM), Random Forest, sentiment analysis, and Support Vector Machines/Regression (SVM/SVR), appearing among the 1399 keywords counted in the publications. Influential studies such as LeBaron (1999) and Moghaddam (2016) have shaped foundational research, reflected in 8159 citations. This review offers original insights into the bibliometric landscape of AI and DL applications in finance by mapping global knowledge production and identifying the critical AI methods advancing stock market prediction. It enables finance professionals to learn about technological developments and trends to enhance decision-making and gain market advantage.
-
It is plausible to assume that the component waves of ECG signals constitute a unique human characteristic, because the morphology and amplitudes of recorded beats are governed by multiple individual factors. To the best of our knowledge, the issue of automatically classifying different 'identities' of QRS morphology has not been explored in the literature. This work proposes five alternative mathematical models for representing different QRS morphologies, providing the extraction of a set of features related to QRS shape. The technique combines the Gaussian, Mexican-hat, and Rayleigh probability density functions and includes a mechanism for clipping the waveform of those functions. Searching for the optimal parameters that minimize the normalized RMS error between each mathematical model and a given QRS search window enables finding an optimal model. Such modeling serves as a robust alternative for delineating heartbeats, classifying beat morphologies, detecting subtle and anomalous changes, and compressing QRS complex windows, among other applications. The validation process evaluates the ability of each model to represent different QRS morphology classes within 159 full ECG records from the QT database and 584 QRS search windows from the MIT-BIH Arrhythmia database. From the experimental results, we rank the winning rates with which each mathematical model best represents and discriminates the most predominant QRS morphologies: Rs, rS, RS, qR, qRs, R, rR's, and QS. Furthermore, the average time errors computed for QRS onset and offset locations, when using the corresponding winning mathematical models for delineation, were 12.87±8.5 ms and 1.47±10.06 ms, respectively.
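The model-selection step — minimizing a normalized RMS error between each candidate waveform and a QRS search window — can be sketched as follows (illustrative only; the paper's exact normalization and parameter search may differ, and the candidate names are hypothetical):

```python
import math

def nrmse(model, window):
    """Normalized RMS error between a candidate waveform and a QRS window,
    normalized by the window's amplitude span."""
    rms = math.sqrt(sum((m - w) ** 2 for m, w in zip(model, window)) / len(window))
    span = max(window) - min(window)
    return rms / span if span else rms

def best_model(candidates, window):
    """Pick the candidate waveform (by name) with the lowest NRMSE."""
    return min(candidates, key=lambda name: nrmse(candidates[name], window))
```

In the paper, each candidate is generated from a parametric function (Gaussian, Mexican-hat, Rayleigh PDF, possibly clipped), and the parameter search drives each candidate's NRMSE down before the winner is chosen.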
-
The gold standard to detect SARS-CoV-2 infection considers testing methods based on the Polymerase Chain Reaction (PCR). Still, the time necessary to confirm patient infection can be lengthy, and the process is expensive. In parallel, X-ray and CT scans play an important role in the diagnosis and treatment processes. Hence, a trusted automated technique for identifying and quantifying the infected lung regions would be advantageous. Chest X-rays are two-dimensional images of the patient's chest that provide lung morphological information and other characteristics, such as ground-glass opacities (GGO), horizontal linear opacities, or consolidations, which are typical of pneumonia caused by COVID-19. This chapter presents an AI-based system using multiple transfer learning models for COVID-19 classification on chest X-rays. In our experimental design, all the classifiers demonstrated satisfactory accuracy, precision, recall, and specificity. The MobileNet architecture outperformed the other CNNs, achieving excellent results across the evaluated metrics, whereas SqueezeNet presented a weaker result in terms of recall. In medical diagnosis, false negatives can be particularly harmful because they can lead to sick patients being incorrectly diagnosed as healthy. These results suggest that our deep learning classifiers can accurately classify X-ray exams as normal or indicative of COVID-19 with high confidence.
-
The gold standard to detect SARS-CoV-2 infection considers testing methods based on the Polymerase Chain Reaction (PCR). Still, the time necessary to confirm patient infection can be lengthy, and the process is expensive. On the other hand, X-ray and CT scans play a vital role in the auxiliary diagnosis process. Hence, a trusted automated technique for identifying and quantifying the infected lung regions would be advantageous. Chest X-rays are two-dimensional images of the patient's chest that provide lung morphological information and other characteristics, such as ground-glass opacities (GGO), horizontal linear opacities, or consolidations, which are characteristic of pneumonia caused by COVID-19. Before a computerized diagnostic support system can classify a medical image, however, a segmentation task should usually be performed to identify the relevant areas to be analyzed and to reduce the risk of noise and misinterpretation caused by other structures present in the images. This chapter presents an AI-based system for lung segmentation in X-ray images using a U-Net CNN model. The system's performance was evaluated on unseen data using metrics such as cross-entropy, the Dice coefficient, and Mean IoU. Our study divided the data into training and evaluation sets using an 80/20 train-test split: the training set was used to train the model, and the evaluation set was used to assess the trained model's performance. The evaluation showed that the model achieved a Dice Similarity Coefficient (DSC) of 95%, a cross-entropy score of 97%, and a Mean IoU of 86%.
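The Mean IoU metric reported above averages intersection-over-union across classes; for a single pair of binary masks it reduces to the following (a minimal sketch over flattened 0/1 masks):

```python
def iou(pred, target):
    """Intersection over union (Jaccard index) for two binary masks given
    as flat 0/1 sequences. Returns 1.0 when both masks are empty."""
    intersection = sum(1 for p, t in zip(pred, target) if p and t)
    union = sum(1 for p, t in zip(pred, target) if p or t)
    return intersection / union if union else 1.0
```

IoU is always less than or equal to the Dice coefficient on the same pair of masks, which is why the reported Mean IoU (86%) sits below the DSC (95%).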
-
The COVID-19 pandemic evidenced the need for Computer-Aided Diagnostic (CAD) systems to analyze medical images, such as CT and MRI scans and X-rays, to assist specialists in disease diagnosis. CAD systems have been shown to be effective at detecting COVID-19 in chest X-ray and CT images, with some studies reporting high levels of accuracy and sensitivity. Moreover, they can also detect disease in patients who may not have symptoms, helping prevent the spread of the virus. There are several types of CAD systems, such as machine and deep learning-based and transfer learning-based systems. This chapter proposes a pipeline for feature extraction and classification of COVID-19 in X-ray images, using transfer learning with the VGG-16 CNN for feature extraction and machine learning classifiers. Five classifiers were evaluated using the metrics Accuracy, Specificity, Sensitivity, Geometric mean, and Area Under the Curve (AUC). The SVM classifier presented the best performance for COVID-19 classification, achieving 90% accuracy, 97.5% specificity, 82.5% sensitivity, 89.6% geometric mean, and 90% AUC. On the other hand, the Nearest Centroid (NC) classifier presented poor sensitivity and geometric mean results, achieving 33.9% and 54.07%, respectively.
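The geometric mean metric quoted above balances sensitivity and specificity in a single number; from confusion counts it is computed as follows (the standard definition; the counts below are illustrative, not the chapter's data):

```python
import math

def gmean(tp, tn, fp, fn):
    """Geometric mean of sensitivity and specificity from confusion counts:
    sqrt( tp/(tp+fn) * tn/(tn+fp) )."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return math.sqrt(sensitivity * specificity)
```

Because the geometric mean collapses to a low value when either rate is poor, the NC classifier's weak sensitivity (33.9%) drags its G-mean down to 54.07% even if its specificity was acceptable.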
-
Detecting emotions is a growing field aiming to comprehend and interpret human emotions from various data sources, including text, voice, and physiological signals. The electroencephalogram (EEG) is a unique and promising approach among these sources. EEG is a non-invasive monitoring technique that records the brain's electrical activity through electrodes placed on the scalp. It is used in clinical and research contexts to explore how the human brain responds to emotional and cognitive stimuli. Recently, its use has gained interest for real-time emotion detection, offering a direct approach independent of facial expressions or voice. This is particularly useful in resource-limited scenarios, such as brain-computer interfaces supporting mental health. The objective of this work is to evaluate the classification of emotions (positive, negative, and neutral) in EEG signals using machine learning and deep learning, focusing on Graph Convolutional Neural Networks (GCNN) and based on the analysis of critical attributes of the EEG signal: Differential Entropy (DE), Power Spectral Density (PSD), Differential Asymmetry (DASM), Rational Asymmetry (RASM), Asymmetry (ASM), and Differential Causality (DCAU). The dataset used was the public SEED dataset (SJTU Emotion EEG Dataset), obtained through auditory and visual stimuli drawn from segments of Chinese emotional movies. The experiment employed to evaluate the models was subject-dependent. In this setting, the Deep Neural Network (DNN) achieved an accuracy of 86.08%, surpassing the SVM, albeit with significant processing time due to the optimization characteristics inherent to the algorithm. The GCNN achieved an average accuracy of 89.97% in the subject-dependent experiment. This work contributes to emotion detection in EEG, emphasizing the effectiveness of different models and underscoring the importance of selecting appropriate features and the ethical use of these technologies in practical applications. The GCNN emerges as the most promising methodology for future research.
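Among the features listed, Differential Entropy (DE) has a simple closed form under a Gaussian assumption, DE = ½·ln(2πe·σ²); a minimal sketch for one band-filtered channel segment (illustrative only, not the SEED pipeline itself):

```python
import math

def differential_entropy(segment):
    """Differential entropy of an EEG segment, assuming its amplitudes are
    Gaussian-distributed: DE = 0.5 * ln(2 * pi * e * variance)."""
    n = len(segment)
    mean = sum(segment) / n
    variance = sum((x - mean) ** 2 for x in segment) / n
    return 0.5 * math.log(2 * math.pi * math.e * variance)
```

In EEG emotion pipelines, this is typically computed per channel and per frequency band (delta, theta, alpha, beta, gamma), and the resulting features feed the classifier.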
-
This work provides a comprehensive systematic review of optimization techniques using artificial intelligence (AI) for energy storage systems within renewable energy setups. The primary goals are to evaluate the latest technologies employed in forecasting models for renewable energy generation, load forecasting, and energy storage systems, alongside their construction parameters and optimization methods. The review highlights the progress achieved, identifies current challenges, and explores future research directions. Despite the extensive application of machine learning (ML) and deep learning (DL) in renewable energy generation, consumption patterns, and storage optimization, few studies integrate these three aspects simultaneously, underscoring the significance of this work. The review encompasses studies from Web of Science, Scopus, and Science Direct up to December 2023, including works scheduled for publication in 2024. Each study related to renewable energy storage was individually analyzed to assess its objectives, methodology, and results. The findings reveal useful insights for developing AI models aimed at optimizing storage systems. However, critical areas need further exploration, such as real-time forecasting, long-term storage predictions, hybrid neural networks for demand-based generation forecasting, and the evaluation of various storage scales and battery technologies. The review also notes a significant gap in research on large-scale storage systems in Brazil and Latin America. In conclusion, the study emphasizes the need for continued research and the development of new algorithms to address existing limitations in the field.
-
Consumers' selections and decision-making processes are some of the most exciting and challenging topics in neuromarketing, sales, and branding. From a global perspective, multicultural influences and societal conditions are crucial to consider. Neuroscience applications in international marketing and consumer behavior are an emergent and multidisciplinary field aiming to understand consumers' thoughts, reactions, and selection processes in branding and sales. This study focuses on real-time monitoring of different physiological signals using eye tracking, facial expression recognition, and Galvanic Skin Response (GSR) acquisition methods to analyze consumers' responses, detect emotional arousal, measure attention or relaxation levels, and analyze perception, consciousness, memory, learning, motivation, preference, and decision-making. This research monitored human subjects' reactions to these signals during an experiment designed in three phases, each consisting of different branding advertisements. Non-advertisement exposure was also monitored, and survey responses were gathered at the end of each phase. A feature extraction module and a data analytics module were implemented to calculate statistical metrics and decision-support tools based on Principal Component Analysis (PCA) and Feature Importance (FI) determination using the Random Forest technique. The results indicate that, compared to image ads, video ads are more effective in attracting consumers' attention and creating greater emotional arousal.
-
Small and medium-sized enterprises (SMEs) can benefit significantly from open innovation by gaining access to a broader range of resources and expertise through absorptive capacity, and by increasing their visibility and reputation. Nevertheless, multiple barriers impact their capacity to absorb new technologies or adapt to develop them. This paper analyzes relevant topics and trends in Open Innovation (OI) and Absorptive Capacity (AC) in SMEs through a bibliometric review, identifying relevant authors and countries and highlighting significant research themes and trends. The defined query string was submitted to the Web of Science database, and the bibliometric analysis was performed using VOSviewer software. The results indicate that the number of scientific publications has consistently increased during the past decade, indicating growing interest from the scientific community and reflecting industry interest in, and possibly adoption of, OI in connection with absorptive capacity. This bibliometric analysis provides insights into the regions where these research areas are under the most intensive development.
-
Association rule mining with the Apriori method has been one of the most popular data mining techniques for decades; it harvests knowledge in the form of item-association rules from a dataset. The quality of the rules nevertheless depends on the concentration of frequent items in the input dataset: when the dataset becomes large, the items are scattered far apart. It is known from the literature that clustering helps produce data groups concentrated with frequent items. Among all the clusters generated by a clustering algorithm, one or more will contain suitable, frequent items, and the association rules mined from such clusters are assured of better quality, in terms of higher confidence, than those mined from the whole dataset. However, it is not known in advance which cluster is the suitable one until every cluster is tried by association rule mining, and testing them all by brute force is time-consuming. In this paper, a statistical property called the prior probability is investigated for selecting the best of the many clusters produced by a clustering algorithm, as a pre-processing step before association rule mining. Experimental results indicate a correlation between the prior probability of the best cluster and the relatively high quality of the association rules generated from that cluster. The results are significant because they make it possible to know which cluster should be used for association rule mining instead of testing them all exhaustively.
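The rule-quality measures discussed above — support and confidence — can be computed directly over a set of transactions (a minimal sketch with toy data; Apriori itself adds frequent-itemset candidate pruning on top of these definitions):

```python
def support(itemset, transactions):
    """Fraction of transactions (sets of items) containing every item
    in `itemset`."""
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

def confidence(antecedent, consequent, transactions):
    """Confidence of the rule antecedent -> consequent:
    support(antecedent | consequent) / support(antecedent)."""
    return (support(antecedent | consequent, transactions)
            / support(antecedent, transactions))
```

Mining within a well-chosen cluster raises the denominator's concentration of co-occurring items, which is exactly why the paper's cluster selection yields rules with higher confidence than mining the whole dataset.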
-
There are many systematic reviews on stock prediction; however, each reveals a different portion of the hybrid AI analysis and stock prediction puzzle. The principal objective of this research was to systematically review the existing systematic reviews on Artificial Intelligence (AI) models applied to stock market prediction, to provide valuable inputs for developing stock market investment strategies. Keywords falling under the broad headings of AI and stock prediction were searched in the Scopus and Web of Science databases. We screened 69 titles and read 43 systematic reviews, covering more than 379 studies, before retaining 10 for the final dataset. This work revealed that support vector machines (SVM), long short-term memory (LSTM), and artificial neural networks (ANN) are the most popular AI methods for stock market prediction. In addition, time series of historical closing prices are the most commonly used data source, and accuracy is the most employed performance metric for the predictive models. We also identified several research gaps and directions for future studies. Specifically, we indicate that future research could benefit from exploring different data sources and combinations, and we suggest comparing different AI methods and techniques, as each may have specific advantages and applicable scenarios. Lastly, we recommend better evaluating different prediction indicators and standards to reflect the actual value and impact of prediction models.