  • The area of clinical decision support systems (CDSS) is seeing a boost in research and development, driven by the increasing amount of data in clinical analysis together with new tools to support patient care. This creates a vibrant and challenging environment for medical and technical staff. This chapter presents a discussion of the challenges and trends of CDSS under big data and patient-centered constraints. Two case studies are presented in detail. The first covers the development of a big data and AI classification system for maternal and fetal ambulatory monitoring, composed of several components: an Internet of Things network of sensors and devices, a fuzzy inference system for emergency alarms, a feature extraction model based on signal processing of the fetal and maternal data, and finally a deep learning classifier with six convolutional layers, achieving an F1-score of 0.89 for the case in which both the maternal and fetal signals are harmful. The system was designed to support maternal–fetal ambulatory premises in developing countries, where demand is extremely high and the number of medical specialists is very low. The second case study considered two artificial intelligence approaches to providing efficient prediction of infections for clinical decision support during the COVID-19 pandemic in Brazil. First, LSTM recurrent neural networks were considered, with the model achieving R2=0.93 and MAE=40,604.4 on average, while the best result, R2=0.9939, was achieved for time series 3. Second, the open-source framework H2O AutoML was considered, where the “stacked ensemble” approach presented the best performance, followed by XGBoost. Brazil has been one of the most challenging environments of the pandemic, and one where efficient predictions may make the difference in saving lives. The presentation of such different approaches (ambulatory monitoring and epidemiology data) illustrates the broad spectrum of AI tools available to support clinical decision-making.
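
The F1-score reported for the deep learning classifier (0.89) is the harmonic mean of precision and recall. A minimal sketch of how that metric is computed from raw classification counts (the counts below are illustrative, not taken from the chapter):

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1-score from true-positive, false-positive, and false-negative counts."""
    precision = tp / (tp + fp)  # fraction of predicted positives that are correct
    recall = tp / (tp + fn)     # fraction of actual positives that are found
    return 2 * precision * recall / (precision + recall)

# Illustrative counts: 8 true positives, 2 false positives, 2 false negatives
print(round(f1_score(8, 2, 2), 2))  # 0.8
```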

  • COVID-19 hit the world unprepared, as the deadliest pandemic of the century. Governments and authorities, as the leaders and decision-makers fighting the virus, have tapped heavily into the power of AI and its data analytics models for urgent decision support, at a scale never before seen in human history. This book showcases a collection of important data analytics models that were used during the epidemic, and discusses and compares their efficacy and limitations. Readers from both the healthcare industry and academia can gain unique insights into how data analytics models were designed and applied to epidemic data. Taking COVID-19 as a case study, readers, especially those working in similar fields, will be better prepared should a new wave of virus epidemic arise in the near future.

  • After the World Health Organization proclaimed a pandemic due to a disease that originated in China and advanced rapidly across the globe, studies to predict the behavior of epidemics have become increasingly popular, mainly related to COVID-19. The critical point of these studies is to discuss the disease's behavior and the progression of the virus's natural course. However, predicting the actual number of infected people has proved to be a difficult task, due to a wide range of factors such as mass testing, social isolation, and underreporting of cases, among others. Therefore, the objective of this work is to understand the behavior of COVID-19 in the state of Ceará in order to forecast the total number of infected people and to aid government decisions to control the outbreak of the virus and minimize the social and economic impacts caused by the pandemic. To understand the behavior of COVID-19, this work discusses forecasting techniques using machine learning, logistic regression, filters, and epidemiologic models. This work also brings a new approach to the problem, combining data from Ceará with data from China to generate a hybrid dataset, which provides promising results. Finally, this work compares the different approaches and techniques presented, opening opportunities for future discussions on the topic. The study obtains predictions with an R2 score of 0.99 for short-term predictions and 0.93 for long-term predictions.
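
Among the forecasting techniques the abstract lists, a common baseline for cumulative case counts is the three-parameter logistic growth curve. A minimal sketch (the parameter values below are illustrative, not fitted to the Ceará data):

```python
import math

def logistic_curve(t: float, K: float, r: float, t0: float) -> float:
    """Cumulative cases at time t under a logistic epidemic model.

    K  -- final epidemic size (the plateau)
    r  -- intrinsic growth rate
    t0 -- inflection time, where daily growth peaks
    """
    return K / (1 + math.exp(-r * (t - t0)))

# At the inflection point the curve reaches exactly half the final size
print(logistic_curve(50, 1000, 0.1, 50))  # 500.0
```

Fitting K, r, and t0 to observed counts (e.g. by nonlinear least squares) then yields short- and long-term forecasts whose quality can be scored with R2, as in the study.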

  • The spontaneous symmetry breaking phenomenon applied to Quantum Finance considers that the martingale state in the stock market corresponds to a ground (vacuum) state if we express the financial equations in Hamiltonian form. The original analysis of this phenomenon completely ignores the kinetic terms in the neighborhood of the minimum of the potential terms. This is correct in most cases. However, when we deal with the martingale condition, it turns out that the kinetic terms can also behave as potential terms and thus produce a shift in the effective location of the vacuum (martingale). In this paper, we analyze the effective symmetry breaking patterns and the associated vacuum degeneracy under these special circumstances. Within the same scenario, we analyze the connection between the flow of information and the multiplicity of martingale states, providing in this way powerful tools for analyzing the dynamics of stock markets.
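
For context, in the Baaquie-style Hamiltonian formulation of quantum finance that this line of work builds on, the martingale (vacuum) state is the zero-energy eigenstate of the Black–Scholes Hamiltonian. A sketch of that condition, in standard notation assumed here rather than quoted from the paper:

```latex
% Black–Scholes Hamiltonian in the log-price variable x = \ln S
H_{BS} = -\frac{\sigma^2}{2}\,\partial_x^2
         + \left(\frac{\sigma^2}{2} - r\right)\partial_x + r

% Martingale condition: the state S = e^{x} is annihilated by H_{BS},
% i.e. it plays the role of the ground (vacuum) state in the abstract
H_{BS}\, e^{x} = 0
```

Substituting term by term confirms this: $-\frac{\sigma^2}{2}e^{x} + (\frac{\sigma^2}{2} - r)e^{x} + r\,e^{x} = 0$, which is why the martingale state can be treated as a vacuum whose location may shift when kinetic terms act as potential terms.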

  • It is plausible to assume that the component waves in ECG signals constitute a unique human characteristic, because the morphology and amplitudes of recorded beats are governed by multiple individual factors. To the best of our knowledge, the issue of automatically classifying different 'identities' of QRS morphology has not been explored in the literature. This work proposes five alternative mathematical models for representing different QRS morphologies, enabling the extraction of a set of features related to QRS shape. The technique incorporates mechanisms for combining the mathematical functions Gaussian, Mexican-Hat, and Rayleigh probability density function, as well as a mechanism for clipping the waveforms of those functions. Searching for the optimal parameters that minimize the normalized RMS error between each mathematical model and a given QRS search window enables finding the optimal model. Such modeling serves as a robust alternative for delineating heartbeats, classifying beat morphologies, detecting subtle and anomalous changes, and compressing QRS complex windows, among other tasks. The validation process evaluates the ability of each model to represent different QRS morphology classes within 159 full ECG signal records from the QT database and 584 QRS search windows from the MIT-BIH Arrhythmia database. From the experimental results, we rank the rates at which each mathematical model best represents and discriminates the most predominant QRS morphologies Rs, rS, RS, qR, qRs, R, rR's, and QS. Furthermore, the average time errors computed for QRS onset and offset locations when using the corresponding winning mathematical models for delineation were, respectively, 12.87±8.5 ms and 1.47±10.06 ms.
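
One of the five candidate models, the Gaussian, together with the normalized RMS error used to select the optimal model over a QRS search window, can be sketched as follows (function names and the sampling below are illustrative, not the paper's implementation):

```python
import math

def gaussian_model(t: float, a: float, mu: float, sigma: float) -> float:
    """Gaussian candidate model for a QRS waveform
    (amplitude a, center mu, width sigma)."""
    return a * math.exp(-((t - mu) ** 2) / (2 * sigma ** 2))

def normalized_rms_error(window: list, model: list) -> float:
    """RMS error between a QRS search window and a candidate model,
    normalized by the RMS amplitude of the window itself."""
    n = len(window)
    err = math.sqrt(sum((w - m) ** 2 for w, m in zip(window, model)) / n)
    ref = math.sqrt(sum(w ** 2 for w in window) / n)
    return err / ref

# A window that is exactly Gaussian yields zero error for the matching model
window = [gaussian_model(t, 1.0, 5.0, 1.0) for t in range(11)]
print(normalized_rms_error(window, window))  # 0.0
```

Minimizing this error over each model's parameters, and over the five model families, selects the winning model for a given beat.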

  • Association Rule Mining by the Apriori method has been one of the most popular data mining techniques for decades, whereby knowledge in the form of item-association rules is harvested from a dataset. The quality of item-association rules nevertheless depends on the concentration of frequent items in the input dataset. When the dataset becomes large, the items are scattered far apart. It is known from previous literature that clustering helps produce data groups that are concentrated with frequent items. Among all the data clusters generated by a clustering algorithm, there must be one or more clusters containing suitable, frequent items. In turn, the association rules mined from such clusters are assured of better quality, in terms of higher confidence, than those mined from the whole dataset. However, it is not known in advance which cluster is the suitable one until all the clusters have been tried by association rule mining, and testing them by brute force is time-consuming. In this paper, a statistical property called prior probability is investigated with respect to selecting the best of the many clusters produced by a clustering algorithm, as a pre-processing step before association rule mining. Experimental results indicate that there is a correlation between the prior probability of the best cluster and the relatively high quality of the association rules generated from that cluster. The results are significant because they make it possible to know which cluster should be used for association rule mining instead of testing them all exhaustively.
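
The quality measures discussed above, support and confidence of a rule, plus the prior probability of a cluster as its share of the dataset, can be sketched as follows (the transactions are illustrative toy data, not from the paper):

```python
def support(itemset: set, transactions: list) -> float:
    """Fraction of transactions containing every item in the itemset."""
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

def confidence(antecedent: set, consequent: set, transactions: list) -> float:
    """Confidence of the rule antecedent -> consequent."""
    return support(antecedent | consequent, transactions) / support(antecedent, transactions)

def cluster_prior(cluster: list, transactions: list) -> float:
    """Prior probability of a cluster: its share of the whole dataset."""
    return len(cluster) / len(transactions)

transactions = [{"a", "b"}, {"a", "b", "c"}, {"a"}, {"b"}]
print(support({"a"}, transactions))                    # 0.75
print(round(confidence({"a"}, {"b"}, transactions), 3))  # 0.667
```

Under the paper's hypothesis, the prior probability of each cluster is computed up front and used to pick the single cluster from which high-confidence rules are then mined.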

Last update from database: 5/2/24, 5:10 AM (UTC)