Traditional collaborative filtering recommendation algorithms suffer from data sparsity and poor scalability. To address this problem, an improved bisecting k-means collaborative filtering algorithm is proposed. The algorithm first fills the unrated entries of the rating matrix in a preprocessing step based on the Weighted Slope One algorithm, reducing its sparsity. The preprocessed rating data are then clustered with the bisecting k-means algorithm, which narrows the nearest-neighbor search space of the target user by grouping similar objects, thereby improving the algorithm's scalability. Finally, the recommendation algorithm generates the final result. Experimental results show that the improved bisecting k-means algorithm improves recommendation quality.
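The Weighted Slope One preprocessing step mentioned above can be sketched as follows. This is a generic implementation of the standard Weighted Slope One predictor, not the paper's exact code: each missing rating is predicted as a weighted average of (user's rating + item-pair deviation), weighted by how many users co-rated each item pair.

```python
import numpy as np

def weighted_slope_one_fill(R):
    """Fill unrated (NaN) entries of a user-item rating matrix R using
    Weighted Slope One: predict r(u, j) as the average of r(u, i) + dev(j, i)
    over items i that user u rated, weighted by the co-rating count."""
    R = np.asarray(R, dtype=float)
    n_users, n_items = R.shape
    rated = ~np.isnan(R)

    # dev[j, i]: mean of (r_uj - r_ui) over users who rated both items;
    # card[j, i]: number of such co-rating users (the weight).
    dev = np.zeros((n_items, n_items))
    card = np.zeros((n_items, n_items), dtype=int)
    for j in range(n_items):
        for i in range(n_items):
            if i == j:
                continue
            both = rated[:, j] & rated[:, i]
            if both.any():
                dev[j, i] = np.mean(R[both, j] - R[both, i])
                card[j, i] = both.sum()

    filled = R.copy()
    for u in range(n_users):
        for j in range(n_items):
            if rated[u, j]:
                continue  # keep observed ratings as-is
            num = den = 0.0
            for i in range(n_items):
                if rated[u, i] and card[j, i] > 0:
                    num += (R[u, i] + dev[j, i]) * card[j, i]
                    den += card[j, i]
            if den > 0:
                filled[u, j] = num / den
    return filled
```

With the densified matrix, the bisecting k-means clustering can then operate on complete rating vectors, restricting the neighbor search for a target user to the user's own cluster.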
Currently, there is a lack of voice samples in the speech emotion recognition field, which leads to poor recognition rates and over-fitting. Motivated by this, we propose speech emotion recognition based on data augmentation. The Berlin Emotional Corpus is augmented in two directions: the time domain and the frequency domain. Features are extracted from the samples and used for training. We study and analyze the recognition rates of two classifiers, K-Nearest Neighbor and Support Vector Machine. Experiments show that performance after data augmentation is better.
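Common time-domain and frequency-domain augmentations of the kind described above can be sketched as follows. These are generic illustrative transforms (time shifting, additive noise at a chosen SNR, and SpecAugment-style frequency masking), not necessarily the exact operations used in the paper:

```python
import numpy as np

def time_shift(wave, shift):
    """Time-domain augmentation: circularly shift the waveform by `shift` samples."""
    return np.roll(wave, shift)

def add_noise(wave, snr_db, rng):
    """Time-domain augmentation: add white Gaussian noise at the given SNR (dB)."""
    sig_power = np.mean(wave ** 2)
    noise_power = sig_power / (10 ** (snr_db / 10))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=wave.shape)
    return wave + noise

def freq_mask(spec, f0, width):
    """Frequency-domain augmentation: zero out `width` frequency bins starting
    at bin `f0` of a (freq, time) spectrogram (SpecAugment-style masking)."""
    out = spec.copy()
    out[f0:f0 + width, :] = 0.0
    return out
```

Each transform yields a new training sample with the same emotion label, multiplying the effective corpus size before feature extraction.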
As one of the most critical tasks in natural language processing (NLP), emotion classification has a wide range of applications in many fields. However, constrained by corpus limitations, semantic ambiguity, and other factors, researchers in emotion classification face many difficulties, and the accuracy of multi-label emotion classification is not ideal. In this paper, to improve the accuracy of multi-label emotion classification, especially when semantic ambiguity occurs, we propose a fusion model for text based on self-attention and topic clustering. We use pre-trained BERT to extract the hidden emotional representations of a sentence, and use an improved LDA topic model to cluster the topics at different levels of the text. We then fuse these hidden representations of the sentence and use a classification neural network to compute its multi-label emotional intensities. Extensive experiments on the Chinese emotion corpus Ren_CECps demonstrate that our model outperforms several strong baselines and related works. The F1-score of our model reaches 0.484, which is 0.064 higher than the best result in similar studies.
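The fusion-and-scoring step can be sketched abstractly as below. This is a minimal illustration, assuming the simplest fusion strategy (concatenation of a sentence embedding, e.g. from a pre-trained encoder such as BERT, with a topic-distribution vector, e.g. from LDA) followed by a linear layer with a sigmoid to produce per-label emotion intensities; the paper's actual network may fuse and classify differently:

```python
import numpy as np

def fuse_and_score(sent_vec, topic_vec, W, b):
    """Concatenate a sentence embedding with a topic-distribution vector,
    then apply a linear layer + element-wise sigmoid so each emotion label
    gets an independent intensity in (0, 1) -- the multi-label setting."""
    fused = np.concatenate([sent_vec, topic_vec])
    logits = W @ fused + b
    return 1.0 / (1.0 + np.exp(-logits))
```

A sigmoid per label (rather than a softmax over labels) is what allows a sentence to carry several emotions at once, which is essential for multi-label intensity prediction.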