  • Personalized recommendation plays an important role in many online services. In tourism recommendation in particular, tourist attractions contain rich context and content information; these implicit features include not only text but also images and videos. To make better use of these features, researchers usually introduce richer feature information or more efficient feature representation methods, but introducing large amounts of feature information without restriction inevitably degrades the performance of the recommender system. We propose a novel heterogeneous multimodal representation learning method for tourism recommendation. The proposed model is based on a two-tower architecture in which the item tower handles multimodal latent features: a Bidirectional Long Short-Term Memory (Bi-LSTM) network extracts the text features of items, an External Attention Transformer (EANet) extracts their image features, and these feature vectors are concatenated with the item IDs to enrich the item feature representation. To increase the expressiveness of the model, we introduce a deep fully connected stack layer that fuses the multimodal feature vectors and captures the hidden relationships between them. Evaluated on three different datasets, the model outperforms the baseline models in NDCG and precision. (A minimal code sketch of this item tower follows the listing below.)

  • Abstract. Objective. Mild cognitive impairment (MCI) is a precursor stage of dementia characterized by mild cognitive decline in one or more cognitive domains without meeting the criteria for dementia; it is considered a prodromal form of Alzheimer's disease (AD). Early identification of MCI is crucial for both intervention and prevention of AD. To accurately identify MCI, a novel multimodal 3D imaging data integration graph convolutional network (GCN) model is designed in this paper. Approach. The proposed model uses 3D-VGGNet to extract three-dimensional features from multimodal imaging data (such as structural magnetic resonance imaging and fluorodeoxyglucose positron emission tomography), which are fused into feature vectors that serve as the node features of a population graph. Non-imaging features of the participants are combined with the multimodal imaging data to construct a sparse population graph. To optimize the connectivity of the graph, we employ the pairwise attribute estimation (PAE) method to compute the edge weights from the non-imaging data, thereby enhancing the effectiveness of the graph structure. A population-based GCN then integrates the structural and functional features of the different imaging modalities into each participant's features for MCI classification. Main results. Experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset yielded accuracies of 98.57%, 96.03%, and 96.83% for the normal controls (NC)-early MCI (EMCI), NC-late MCI (LMCI), and EMCI-LMCI classification tasks, respectively. The AUC, specificity, sensitivity, and F1-score are also superior to those of state-of-the-art models, demonstrating the effectiveness of the proposed model. Furthermore, applied to the ABIDE dataset for autism diagnosis, the model achieves an accuracy of 91.43% and outperforms state-of-the-art models, indicating excellent generalization capability. Significance. This study demonstrates the proposed model's ability to integrate multimodal imaging data and to recognize MCI, which will help enable early warning for AD and intelligent diagnosis of other neurodegenerative brain diseases. (A second code sketch below outlines this pipeline.)
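
For the first abstract above (the tourism recommender), the following is a minimal PyTorch sketch of the two-tower item encoder it describes. It is an assumed layout, not the authors' implementation: the ExternalAttention, ItemTower, and UserTower modules, the embedding sizes, and the mean-pooling choices are all illustrative, since the abstract does not specify them.

# Minimal PyTorch sketch of the two-tower item encoder described above.
# All names, layer sizes, and pooling choices are illustrative assumptions;
# the abstract does not specify the authors' exact configuration.
import torch
import torch.nn as nn

class ExternalAttention(nn.Module):
    # Simplified external attention: two shared linear "memory" layers.
    def __init__(self, dim, mem_size=64):
        super().__init__()
        self.mk = nn.Linear(dim, mem_size, bias=False)
        self.mv = nn.Linear(mem_size, dim, bias=False)

    def forward(self, x):                        # x: (batch, patches, dim)
        attn = torch.softmax(self.mk(x), dim=1)  # attend over patches
        attn = attn / (attn.sum(-1, keepdim=True) + 1e-9)
        return self.mv(attn)

class ItemTower(nn.Module):
    def __init__(self, vocab_size, n_items, text_dim=128, img_dim=128, id_dim=32, out_dim=64):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, text_dim)
        self.bilstm = nn.LSTM(text_dim, text_dim // 2, batch_first=True, bidirectional=True)
        self.img_attn = ExternalAttention(img_dim)
        self.id_emb = nn.Embedding(n_items, id_dim)
        # deep fully connected stack that fuses the multimodal vectors
        self.fuse = nn.Sequential(
            nn.Linear(text_dim + img_dim + id_dim, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, out_dim),
        )

    def forward(self, text_ids, img_patches, item_ids):
        h, _ = self.bilstm(self.word_emb(text_ids))       # (B, T, text_dim)
        text_vec = h.mean(dim=1)                          # pool Bi-LSTM states
        img_vec = self.img_attn(img_patches).mean(dim=1)  # pool attended patches
        id_vec = self.id_emb(item_ids)
        return self.fuse(torch.cat([text_vec, img_vec, id_vec], dim=-1))

class UserTower(nn.Module):
    def __init__(self, n_users, out_dim=64):
        super().__init__()
        self.emb = nn.Embedding(n_users, out_dim)

    def forward(self, user_ids):
        return self.emb(user_ids)

if __name__ == "__main__":
    items = ItemTower(vocab_size=10000, n_items=500)
    users = UserTower(n_users=1000)
    score = (users(torch.tensor([3])) * items(
        torch.randint(0, 10000, (1, 20)),   # toy tokenized description
        torch.randn(1, 49, 128),            # toy image patch features
        torch.tensor([42]))).sum(-1)        # dot-product relevance score
    print(score.shape)                      # torch.Size([1])

Scoring by the dot product of the user and item vectors follows the usual two-tower retrieval setup; video features mentioned in the abstract are omitted here for brevity.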

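For the second abstract (the MCI classifier), the sketch below outlines the population-graph pipeline under similar assumptions: Small3DEncoder stands in for the paper's 3D-VGGNet, a simple phenotype-similarity kernel stands in for the PAE edge weighting, and all layer sizes and the sparsification threshold are illustrative.

# Minimal PyTorch sketch of the population-graph GCN pipeline described above.
# The 3D encoder stands in for 3D-VGGNet, and a simple phenotype-similarity
# kernel stands in for the PAE edge weighting; sizes are illustrative only.
import torch
import torch.nn as nn

class Small3DEncoder(nn.Module):
    # VGG-style 3D feature extractor (stand-in for the paper's 3D-VGGNet).
    def __init__(self, out_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(1),
        )
        self.fc = nn.Linear(16, out_dim)

    def forward(self, vol):                  # vol: (B, 1, D, H, W)
        return self.fc(self.conv(vol).flatten(1))

class GCNLayer(nn.Module):
    # Plain GCN propagation: H' = ReLU(A_hat @ H @ W).
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, h, a_hat):
        return torch.relu(a_hat @ self.lin(h))

def normalized_adjacency(weights, threshold=0.5):
    # Sparsify the population graph, add self-loops, symmetric normalization.
    a = (weights > threshold).float() * weights
    a.fill_diagonal_(0)
    a = a + torch.eye(a.size(0))
    d_inv_sqrt = a.sum(1).clamp(min=1e-9).pow(-0.5)
    return d_inv_sqrt[:, None] * a * d_inv_sqrt[None, :]

if __name__ == "__main__":
    n_subjects = 30
    mri = torch.randn(n_subjects, 1, 32, 32, 32)   # toy sMRI volumes
    pet = torch.randn(n_subjects, 1, 32, 32, 32)   # toy FDG-PET volumes

    enc_mri, enc_pet = Small3DEncoder(), Small3DEncoder()
    node_feats = torch.cat([enc_mri(mri), enc_pet(pet)], dim=1)  # fused node features

    # Edge weights from non-imaging attributes (age, sex, site, ...).
    phenotype = torch.randn(n_subjects, 3)
    sim = torch.exp(-torch.cdist(phenotype, phenotype))
    a_hat = normalized_adjacency(sim)

    layer1, layer2 = GCNLayer(128, 64), nn.Linear(64, 2)
    logits = layer2(layer1(node_feats, a_hat))     # per-subject NC vs. MCI logits
    print(logits.shape)                            # torch.Size([30, 2])

As in the abstract, the graph is built once over all participants from their non-imaging data, and classification is a node-level task on that population graph.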