  • Stock movement prediction is one of the most challenging problems in time series analysis due to the stochastic nature of financial markets. In recent years, a plethora of statistical methods and machine learning algorithms have been proposed for stock movement prediction, and deep learning models in particular are increasingly applied to this task. The success of deep learning models relies on the assumption that massive training data are available. However, this assumption is impractical for stock movement prediction: a large number of stocks do not have enough historical data, especially companies that underwent an initial public offering in recent years. In these situations, the accuracy of deep learning models for stock movement prediction can suffer. To address this problem, we propose novel instance-based deep transfer learning models with an attention mechanism. In the experiments, we compare our proposed methods with state-of-the-art prediction models. Experimental results on three public datasets reveal that our proposed methods significantly improve the performance of deep learning models when limited training data are available. (A rough sketch of the instance-weighting idea appears after this list.)

  • The key challenge of Unsupervised Domain Adaptation (UDA) for time series data is to learn domain-invariant representations that capture complex temporal dependencies. In addition, existing UDA methods for time series data (e.g., R-DANN (Purushotham et al., 2017), VRADA (Purushotham et al., 2017), and CoDATS (Wilson et al., 2020)) are designed to align only the marginal distributions of the source and target domains; they neglect the conditional distribution discrepancy between the two domains, which leads to misclassification in the target domain. Therefore, to learn domain-invariant representations that capture temporal dependencies and to reduce the conditional distribution discrepancy between the two domains, this paper proposes ARADA-TK, a novel Attentive Recurrent Adversarial Domain Adaptation method with Top-k time series pseudo-labeling. In the experiments, the proposed method was compared with the state-of-the-art UDA methods (R-DANN, VRADA and CoDATS). Experimental results on four benchmark datasets revealed that ARADA-TK achieves superior classification accuracy compared to the competing methods. (A rough sketch of top-k pseudo-labeling appears after this list.)

  • As safety is one of the most important properties of drugs, chemical toxicity prediction has received increasing attention in drug discovery research. Traditionally, researchers rely on in vitro and in vivo experiments to test the toxicity of chemical compounds. However, not only are these experiments time-consuming and costly, but experiments that involve animal testing are increasingly subject to ethical concerns. While traditional machine learning (ML) methods have been used in the field with some success, the limited availability of annotated toxicity data is the major hurdle to further improving model performance. Inspired by the success of semi-supervised learning (SSL) algorithms, we propose a Graph Convolutional Neural Network (GCN) to predict chemical toxicity and train it with the Mean Teacher (MT) SSL algorithm. Using the Tox21 data, our optimal SSL-GCN models for predicting the twelve toxicological endpoints achieve an average ROC-AUC score of 0.757 on the test set, a 6% improvement over GCN models trained by supervised learning and over conventional ML methods. Our SSL-GCN models also exhibit superior performance compared to models constructed using the built-in DeepChem ML methods. This study demonstrates that SSL can increase the predictive power of models by learning from unannotated data; the optimal unannotated-to-annotated data ratio ranges between 1:1 and 4:1. The same technique is expected to benefit other chemical property prediction tasks by exploiting existing large chemical databases. Our optimal SSL-GCN model is hosted on an online server accessible at https://app.cbbio.online/ssl-gcn/home. (A rough sketch of the Mean Teacher update appears after this list.)
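
The instance-based transfer idea in the first abstract can be illustrated with a minimal PyTorch sketch: source-domain samples are re-weighted by an attention-style similarity to the small target-domain training set before entering the loss, so that the most relevant source instances dominate training. This is only an illustration under assumed names (`model`, `encoder` and the tensor arguments are hypothetical), not the authors' exact architecture.

    import torch
    import torch.nn.functional as F

    def instance_attention_weights(source_feats, target_feats, temperature=1.0):
        # Cosine similarity of every source instance to every target instance: (n_src, n_tgt).
        sims = F.normalize(source_feats, dim=1) @ F.normalize(target_feats, dim=1).T
        # Average similarity to the target set, softmax-normalised over source instances.
        return torch.softmax(sims.mean(dim=1) / temperature, dim=0)

    def weighted_transfer_loss(model, encoder, src_x, src_y, tgt_x, tgt_y):
        # `model` maps inputs to logits, `encoder` maps inputs to feature vectors (both hypothetical).
        with torch.no_grad():
            weights = instance_attention_weights(encoder(src_x), encoder(tgt_x))
        # Attention-weighted loss on source instances plus ordinary loss on the scarce target data.
        src_loss = (weights * F.cross_entropy(model(src_x), src_y, reduction="none")).sum()
        tgt_loss = F.cross_entropy(model(tgt_x), tgt_y)
        return tgt_loss + src_loss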
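
For the second abstract, top-k pseudo-labeling can be sketched as keeping only the k most confident target-domain predictions and treating them as labels; the attentive recurrent encoder and the adversarial domain-confusion loss of ARADA-TK are omitted here, and the function and argument names are illustrative assumptions rather than the paper's API.

    import torch
    import torch.nn.functional as F

    def topk_pseudo_labels(target_logits, k):
        # Confidence of each unlabeled target sample is its maximum softmax probability.
        probs = F.softmax(target_logits, dim=1)
        confidence, labels = probs.max(dim=1)
        # Keep the indices of the k most confident predictions and their pseudo-labels.
        k = min(k, confidence.numel())
        idx = confidence.topk(k).indices
        return idx, labels[idx]

The selected target samples can then be added to the supervised classification loss in the next training round, which is one common way to reduce the conditional distribution discrepancy the abstract refers to.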
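
The Mean Teacher training named in the third abstract keeps a teacher copy of the network whose weights are an exponential moving average (EMA) of the student's, and penalises disagreement between the two on unannotated inputs. The sketch below shows the generic Mean Teacher recipe (MSE consistency term, no input perturbations), not the exact SSL-GCN implementation; `student` and `teacher` are assumed to be identically shaped PyTorch modules.

    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def update_teacher(student, teacher, ema_decay=0.99):
        # Teacher weights are an exponential moving average of the student weights.
        for s_param, t_param in zip(student.parameters(), teacher.parameters()):
            t_param.mul_(ema_decay).add_(s_param, alpha=1.0 - ema_decay)

    def mean_teacher_loss(student, teacher, x_labelled, y, x_unlabelled, consistency_weight=1.0):
        # Supervised loss on the annotated examples.
        supervised = F.cross_entropy(student(x_labelled), y)
        # Consistency loss: the student should agree with the teacher on unannotated examples.
        with torch.no_grad():
            teacher_probs = F.softmax(teacher(x_unlabelled), dim=1)
        student_probs = F.softmax(student(x_unlabelled), dim=1)
        consistency = F.mse_loss(student_probs, teacher_probs)
        return supervised + consistency_weight * consistency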
