  • Medical and healthcare Internet communities contain a large number of symptom-consultation texts, and Chinese word segmentation in the health domain is complex, which leads to low accuracy in existing medical text classification algorithms. Deep learning models are effective at extracting abstract features from text. However, for large volumes of complex text, and especially for words with ambiguous meanings in Chinese medical diagnosis, word-level neural network models are insufficient. Therefore, to support patient triage and precise treatment, we present an improved Double Channel (DC) mechanism as an enhancement to Long Short-Term Memory (LSTM). In this DC mechanism, two channels simultaneously receive word-level and character-level embeddings, respectively. A hybrid attention mechanism is proposed that combines the output at the current timestep with the current cell state and uses attention to compute weights: a probability distribution over the inputs at each timestep yields the weight scores, which are then used in a weighted summation. Finally, the input at each timestep undergoes trade-off learning to improve the generalization ability of the model. We conduct an extensive performance evaluation on two different datasets, cMedQA and Sentiment140. The experimental results show that the proposed DC-LSTM model achieves significantly better accuracy and ROC than the basic CNN-LSTM model.
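
    A minimal PyTorch sketch of how such a double-channel LSTM with hybrid attention could look is given below. The layer sizes, the shared attention projection, and the use of nn.LSTMCell to expose the cell state at every timestep are assumptions for illustration, not the authors' exact architecture.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class DCLSTM(nn.Module):
            """Sketch: word and char channels, hybrid attention over [h_t ; c_t]."""
            def __init__(self, word_vocab, char_vocab, emb=128, hidden=128, n_classes=2):
                super().__init__()
                self.word_emb = nn.Embedding(word_vocab, emb, padding_idx=0)
                self.char_emb = nn.Embedding(char_vocab, emb, padding_idx=0)
                # LSTMCell is used so the cell state c_t is available at each step
                self.word_cell = nn.LSTMCell(emb, hidden)
                self.char_cell = nn.LSTMCell(emb, hidden)
                self.attn = nn.Linear(2 * hidden, 1)       # scores [h_t ; c_t]
                self.fc = nn.Linear(4 * hidden, n_classes)

            def _channel(self, cell, x):
                h = x.new_zeros(x.size(0), cell.hidden_size)
                c = x.new_zeros(x.size(0), cell.hidden_size)
                states = []
                for t in range(x.size(1)):
                    h, c = cell(x[:, t, :], (h, c))
                    states.append(torch.cat([h, c], dim=-1))   # hybrid: output + cell state
                states = torch.stack(states, dim=1)            # (B, T, 2H)
                alpha = F.softmax(self.attn(states).squeeze(-1), dim=1)  # weight per timestep
                return (alpha.unsqueeze(-1) * states).sum(dim=1)         # weighted summation

            def forward(self, word_ids, char_ids):
                w = self._channel(self.word_cell, self.word_emb(word_ids))
                ch = self._channel(self.char_cell, self.char_emb(char_ids))
                return self.fc(torch.cat([w, ch], dim=-1))

        model = DCLSTM(word_vocab=5000, char_vocab=3000)
        logits = model(torch.randint(1, 5000, (4, 30)), torch.randint(1, 3000, (4, 60)))
        print(logits.shape)   # torch.Size([4, 2])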

  • To address the problem that tourism route planning often pursues the shortest distance one-sidedly while ignoring the tourist experience, an improved ant colony optimization algorithm for tourism route planning is proposed. Because contextual information about scenic spots significantly affects people's choice of destination, the pheromone update strategy incorporates contextual information such as the weather and comfort level of each scenic spot while searching for the globally optimal route, so that pheromone accumulates on paths that suit tourists. At the same time, to avoid falling into local optima, a sub-path support degree is introduced. Experimental results show that the optimized tourism route greatly improves the tourist experience: compared with the basic algorithm, the route distance is shortened by 20.5% and the convergence speed is increased by 21.2%, demonstrating that the improved algorithm is notably effective.
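
    A toy Python/NumPy sketch of the idea of context-aware pheromone updating with a sub-path support term is shown below. The synthetic distance matrix, the combined weather/comfort score, and the exact deposit and support formulas are illustrative assumptions, not the paper's definitions.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 8                                             # number of scenic spots (toy)
        coords = rng.random((n, 2))
        dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1) + np.eye(n)
        context = rng.random(n)                           # assumed weather/comfort score in (0, 1)

        tau = np.ones((n, n))                             # pheromone matrix
        alpha, beta, rho, Q = 1.0, 2.0, 0.1, 1.0
        edge_count = np.zeros((n, n))                     # basis for the sub-path support degree

        def build_route():
            route = [0]
            while len(route) < n:
                i = route[-1]
                mask = np.array([j not in route for j in range(n)], dtype=float)
                # desirability mixes pheromone with context-per-distance
                p = (tau[i] ** alpha) * ((context / dist[i]) ** beta) * mask
                route.append(int(rng.choice(n, p=p / p.sum())))
            return route

        def length(r):
            return sum(dist[r[k], r[k + 1]] for k in range(n - 1))

        best = None
        for _ in range(200):
            tau *= (1 - rho)                              # evaporation
            for r in [build_route() for _ in range(10)]:
                L = length(r)
                for k in range(n - 1):
                    i, j = r[k], r[k + 1]
                    edge_count[i, j] += 1
                    support = edge_count[i, j] / edge_count.sum()
                    # context-aware deposit, damped by support to avoid local optima
                    tau[i, j] += Q * context[j] / (L * (1 + support))
                if best is None or L < best[0]:
                    best = (L, r)

        print("best length:", round(best[0], 3), "route:", best[1])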

  • The number of tourist attraction reviews, travel notes and other texts has grown exponentially in the Internet age. Effectively mining users' latent opinions and emotions about tourist attractions, and thereby providing better recommendation services, is of great practical significance. This paper proposes Pre-BiLSTM, a multi-channel neural network model combined with a pre-training mechanism. The model combines coarse- and fine-granularity strategies to extract features from texts such as reviews and travel notes, improving the performance of text sentiment analysis. First, we construct three channels and use an improved BERT and skip-gram with negative sampling to vectorize the text at the word level and vocabulary level, respectively, obtaining richer textual information. Second, we use BERT's pre-training mechanism to generate deep bidirectional language representations. Third, the vectors from the three channels are fed into BiLSTM networks in parallel to extract global and local features. Finally, the model fuses the text features of the three channels and classifies them with a softmax classifier. Numerical experiments demonstrate that Pre-BiLSTM outperforms the baselines by 6.27%, 12.83% and 18.12% on average in terms of accuracy, precision and F1-score, respectively.
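
    The parallel-channel part of such a model could look like the PyTorch sketch below, which assumes the three vectorizations (BERT token vectors plus the two skip-gram granularities) are computed upstream and passed in as tensors; the dimensions, max-pooling, and concatenation fusion are assumptions for illustration rather than the authors' exact design.

        import torch
        import torch.nn as nn

        class MultiChannelBiLSTM(nn.Module):
            """Sketch: three BiLSTM channels over pre-computed embeddings, fused for classification."""
            def __init__(self, dims=(768, 300, 300), hidden=128, n_classes=3):
                super().__init__()
                self.channels = nn.ModuleList(
                    [nn.LSTM(d, hidden, batch_first=True, bidirectional=True) for d in dims]
                )
                self.fc = nn.Linear(2 * hidden * len(dims), n_classes)

            def forward(self, xs):
                # xs: three tensors (B, T_i, dim_i), e.g. BERT token vectors and
                # skip-gram word/vocabulary vectors produced upstream
                feats = []
                for lstm, x in zip(self.channels, xs):
                    out, _ = lstm(x)                       # (B, T, 2H) bidirectional context
                    feats.append(out.max(dim=1).values)    # simple max-pool per channel
                return self.fc(torch.cat(feats, dim=-1)).softmax(dim=-1)

        # toy usage with random tensors standing in for the three vectorizations
        model = MultiChannelBiLSTM()
        xs = [torch.randn(2, 20, 768), torch.randn(2, 20, 300), torch.randn(2, 20, 300)]
        print(model(xs).shape)   # torch.Size([2, 3])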

  • Text classification is an important topic in natural language processing. With the development of social networks, many question-and-answer pairs about healthcare and medicine flood social platforms, and it is of great social value to mine and classify medical text and provide targeted medical services for patients. Existing text classification algorithms can handle text with simple semantics, but Chinese medical text in particular has a complex structure and contains a large amount of medical nomenclature and professional terminology that is difficult for patients to understand. We propose a Chinese medical text classification model that combines ZEN, a BERT-based Chinese text encoder enhanced by N-gram representations, with a capsule network: the text is represented with the ZEN model and features are extracted by the capsule network. We also design an N-gram medical dictionary to enhance medical text representation and feature extraction. The experimental results show that, compared with the baseline models, the precision, recall and F1-score of our model improve by 10.25%, 11.13% and 12.29% on average, respectively, demonstrating that our model performs better.
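
    A minimal capsule-routing sketch in PyTorch is shown below; it treats each contextual token vector from an encoder such as ZEN as a primary capsule and applies dynamic routing to class capsules. The dimensions, the shared transformation tensor, and the routing-by-agreement loop are generic capsule-network choices assumed for illustration, not the authors' exact design.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        def squash(s, dim=-1):
            # standard capsule squashing non-linearity
            n2 = (s ** 2).sum(dim=dim, keepdim=True)
            return (n2 / (1.0 + n2)) * s / (n2.sqrt() + 1e-8)

        class CapsuleClassifier(nn.Module):
            """Sketch: dynamic routing from token capsules to class capsules."""
            def __init__(self, in_dim=768, n_classes=5, out_dim=16, iters=3):
                super().__init__()
                self.iters = iters
                # one transformation matrix per class capsule (shared over tokens)
                self.W = nn.Parameter(0.01 * torch.randn(n_classes, in_dim, out_dim))

            def forward(self, tokens):                        # tokens: (B, T, in_dim)
                u_hat = torch.einsum("btd,cde->btce", tokens, self.W)   # (B, T, C, E)
                b = torch.zeros(u_hat.shape[:3], device=tokens.device)  # routing logits
                for _ in range(self.iters):
                    c = F.softmax(b, dim=2)                              # couple tokens to classes
                    v = squash((c.unsqueeze(-1) * u_hat).sum(dim=1))     # (B, C, E)
                    b = b + (u_hat * v.unsqueeze(1)).sum(dim=-1)         # agreement update
                return v.norm(dim=-1)                                    # capsule length = class score

        # toy usage: random vectors standing in for contextual token embeddings
        caps = CapsuleClassifier()
        print(caps(torch.randn(2, 32, 768)).shape)   # torch.Size([2, 5])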
