  • Even with more than 12 billion vaccine doses administered globally, the Covid-19 pandemic has had far-reaching economic, social, environmental, and healthcare impacts. Computer-Aided Diagnostic (CAD) systems can serve as a complementary tool, helping doctors identify regions of interest in images and detect disease. These systems can also help doctors assess the status of a disease and monitor its progression or regression. To analyze the viability of using CNNs to differentiate Covid-19-positive from Covid-19-negative CT images, we used a dataset collected by Union Hospital (HUST-UH) and Liyuan Hospital (HUST-LH) and made available on the Kaggle platform. The main objective of this chapter is to present results from applying two state-of-the-art CNNs to a Covid-19 CT scan image database, evaluating whether images with imaging features associated with Covid-19 pneumonia can be distinguished from images whose features are irrelevant to Covid-19 pneumonia. Two pre-trained networks, ResNet50 and MobileNet, were fine-tuned on the datasets under analysis (a minimal transfer-learning sketch follows this list). Both CNNs obtained promising results: the ResNet50 network achieved a Precision of 0.97, a Recall of 0.96, an F1-score of 0.96, and 39 false negatives, while the MobileNet classifier obtained a Precision of 0.94, a Recall of 0.94, an F1-score of 0.94, and a total of 20 false negatives.

  • Emotion detection is a growing field that aims to comprehend and interpret human emotions from data sources such as text, voice, and physiological signals. Among these sources, the electroencephalogram (EEG) is a unique and promising approach. EEG is a non-invasive monitoring technique that records the brain's electrical activity through electrodes placed on the scalp. It is used in clinical and research contexts to explore how the human brain responds to emotional and cognitive stimuli. Recently, its use has gained interest for real-time emotion detection, offering a direct approach that is independent of facial expressions or voice. This is particularly useful in resource-limited scenarios, such as brain–computer interfaces supporting mental health. The objective of this work is to evaluate the classification of emotions (positive, negative, and neutral) in EEG signals using machine learning and deep learning, with a focus on Graph Convolutional Neural Networks (GCNN), based on the analysis of key attributes of the EEG signal: Differential Entropy (DE), Power Spectral Density (PSD), Differential Asymmetry (DASM), Rational Asymmetry (RASM), Asymmetry (ASM), and Differential Causality (DCAU) (a feature-extraction sketch also follows this list). The electroencephalography dataset used in the research was the public SEED dataset (SJTU Emotion EEG Dataset), recorded under auditory and visual stimuli from segments of Chinese emotional movies. The models were evaluated in a subject-dependent experiment. In this setting, the Deep Neural Network (DNN) achieved an accuracy of 86.08%, surpassing the SVM, albeit with significant processing time due to the optimization characteristics inherent to the algorithm. The GCNN achieved an average accuracy of 89.97% in the subject-dependent experiment. This work contributes to emotion detection in EEG, emphasizing the effectiveness of different models and underscoring the importance of selecting appropriate features and of the ethical use of these technologies in practical applications. The GCNN emerges as the most promising methodology for future research.
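For the first abstract, the sketch below illustrates the general transfer-learning pattern it describes: loading an ImageNet-pre-trained ResNet50 or MobileNet backbone, freezing it, and training a binary head for Covid-19-positive vs. Covid-19-negative CT slices. This is a minimal sketch, assuming a simple folder-per-class layout; the directory names, image size, optimizer, and all hyperparameters are illustrative assumptions, not the chapter's actual configuration.

```python
# Hedged transfer-learning sketch (TensorFlow/Keras). Paths, image size, and
# hyperparameters below are assumptions for illustration only.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)
BATCH = 32

# Assumed layout: data/train/<class_name>/*.png with two classes
# (Covid-19-positive vs. Covid-19-negative CT slices).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=IMG_SIZE, batch_size=BATCH)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/val", image_size=IMG_SIZE, batch_size=BATCH)

def build_classifier(backbone_name="resnet50"):
    """Wrap a pre-trained ImageNet backbone with a binary classification head."""
    if backbone_name == "resnet50":
        base = tf.keras.applications.ResNet50(
            include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
        preprocess = tf.keras.applications.resnet50.preprocess_input
    else:
        base = tf.keras.applications.MobileNet(
            include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
        preprocess = tf.keras.applications.mobilenet.preprocess_input

    base.trainable = False  # freeze the backbone for the fine-tuning stage
    inputs = layers.Input(shape=IMG_SIZE + (3,))
    x = layers.Lambda(preprocess)(inputs)      # backbone-specific preprocessing
    x = base(x, training=False)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dropout(0.3)(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)

    model = models.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.Precision(),
                           tf.keras.metrics.Recall()])
    return model

model = build_classifier("resnet50")   # or "mobilenet"
model.fit(train_ds, validation_data=val_ds, epochs=10)
```

Swapping the backbone name is the only change needed to compare the two architectures under the same head and training loop.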
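For the second abstract, the sketch below illustrates two of the named EEG features, Differential Entropy (DE) and PSD band power, and feeds them to a plain SVM baseline from scikit-learn. The sampling rate, frequency bands, synthetic data, and the SVM stand-in are assumptions for demonstration; they are not the SEED preprocessing or the GCNN/DNN models evaluated in the work.

```python
# Hedged sketch of DE and PSD band-power features plus an SVM baseline.
# All shapes, band limits, and data below are illustrative assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt, welch
from scipy.integrate import trapezoid
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

FS = 200  # assumed sampling rate (Hz)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
         "beta": (14, 31), "gamma": (31, 50)}

def bandpass(x, low, high, fs=FS, order=4):
    sos = butter(order, [low, high], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def differential_entropy(x):
    # For an approximately Gaussian band-limited signal:
    # DE = 0.5 * log(2 * pi * e * variance)
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

def band_power(x, low, high, fs=FS):
    freqs, psd = welch(x, fs=fs, nperseg=fs)
    mask = (freqs >= low) & (freqs < high)
    return trapezoid(psd[mask], freqs[mask])

def extract_features(segment):
    """segment: (n_channels, n_samples) array -> flat DE + PSD feature vector."""
    feats = []
    for ch in segment:
        for low, high in BANDS.values():
            feats.append(differential_entropy(bandpass(ch, low, high)))
            feats.append(band_power(ch, low, high))
    return np.asarray(feats)

# Synthetic stand-in for labeled EEG segments (62 channels, 4 s at 200 Hz),
# used only to make the sketch runnable end to end.
rng = np.random.default_rng(0)
segments = rng.standard_normal((120, 62, 4 * FS))
labels = rng.integers(0, 3, size=120)          # negative / neutral / positive

X = np.stack([extract_features(s) for s in segments])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```

In the study itself these per-band, per-channel features feed graph-based and deep models (GCNN, DNN); the SVM here only shows how the feature matrix would be consumed by a baseline classifier.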
