Computational Intelligence and Neuroscience

Special Issue

Advanced Signal Processing and Adaptive Learning Methods


Research Article | Open Access

Volume 2019 | Article ID 1604392 | 10 pages | https://doi.org/10.1155/2019/1604392

Adaptive Learning Emotion Identification Method of Short Texts for Online Medical Knowledge Sharing Community

Academic Editor: Zoran Stamenkovic
Received: 09 Jan 2019
Revised: 10 Apr 2019
Accepted: 08 May 2019
Published: 25 Jun 2019

Abstract

The medical knowledge sharing community provides users with an open platform for accessing medical resources and sharing medical knowledge, treatment experience, and emotions. Compared with the recipients of general commodity reviews, the recipients in the medical knowledge sharing community pay more attention to the intensity of the emotional vocabulary in the comments on aspects such as treatment effects, prices, and service attitudes. Therefore, the overall rating is not the key factor in medical service comments; rather, the semantic polarity of the emotional vocabulary is what most affects recipients of the medical information. In this paper, we propose an adaptive learning emotion identification method (ALEIM) based on mutual information feature weight, which captures the correlation and redundancy of features. To evaluate the proposed method's effectiveness, we use four basic corpus libraries crawled from the Haodf online platform and employ the National Taiwan University NTUSD Simplified Chinese Emotion Dictionary for emotion classification. The experimental results show that our proposed ALEIM method performs better at identifying the redundant features formed by low-frequency words in comments of the online medical knowledge sharing community.

1. Introduction

More and more comments, opinions, suggestions, ratings, and feedback are produced on social networks with the rapid development of the Internet [1]. Although such content is meant to be useful, extracting its value requires text mining and emotion analysis techniques. Until now, emotion analysis and evaluation still face several challenges [2], which are summarized in Table 1. These challenges are obstacles to accurately analyzing emotional polarity.


Table 1

Author | Year | Domain oriented | Challenge type | Review structure
Jia et al. [3] | 2009 | Health/medical domain | Theoretical | Semistructured
Hogenboom et al. [4] | 2011 | Movie reviews | Theoretical | Unstructured
Alexandra and Ralf [5] | 2009 | Online news reviews | Theoretical | Semistructured/unstructured
Mukherjee and Bhattacharyya [6] | 2012 | Products | Technical | Semistructured
Chetan and Atul [7] | 2014 | Tweets | Technical | Unstructured
Doaa and Osama [8] | 2015 | Scientific papers | Theoretical + technical | Structured

In recent years, more and more research has been done on emotion analysis. Among the targets of this research, unstructured natural language texts have received the widest attention from scholars [9]. Emotion analysis is the inference of users' opinions, positions, and attitudes from written or spoken content [10]. Emotion analysis tasks are typically solved with dictionary-based and learning-based approaches [11, 12]. The dictionary-based approach analyzes the relevance of each word to a particular emotion by using a predefined dictionary [13]. Learning-based methods typically use labeled samples to train specific-purpose models under supervision [14].

Emotion analysis is increasingly used to analyze human emotions, but current methods have two shortcomings: they offer little improvement at the aspect level of granularity, and they are rarely applied to online knowledge communities, especially medical knowledge communities. It is therefore necessary to find an emotion classification method suited to medical knowledge communities. In light of these considerations, we propose an adaptive learning emotion identification method (ALEIM) based on mutual information feature weight, which captures the correlation and redundancy of features. Its effectiveness is verified on datasets crawled from the Haodf online platform, in which the eigenvalues corresponding to the feature nouns are assigned according to the NTUSD emotion dictionary compiled by National Taiwan University. Finally, the experimental results show that our proposed ALEIM method achieves better performance.

The remainder of this paper is organized as follows. Section 2 reviews the related work of our study. Section 3 presents our proposed ALEIM method, which contains problem description and assumptions, feature selection based on mutual information, and emotional polarity selection based on mutual information weight. Section 4 presents the datasets, evaluation measures, experimental performance, and the discussion. Finally, Section 5 presents the conclusions.

2. Related Work

2.1. Feature Extraction

Natural language processing and text analysis techniques are used to extract emotion features from emotion comments [9]. Feature selection based on mutual information is an information-entropy estimation method that is independent of classifiers and datasets and has been reported to outperform other feature extraction methods [15, 16]. A redundancy-aware algorithm for constructing a mutual information feature subset was proposed and used to improve emotion classification accuracy [17]. The maximal relevance and minimal redundancy (mRMR) algorithm, built on the principle of mutual information, was compared with SVM classification [18, 19] and three-ratio classification methods; its accuracy was superior to the traditional methods and its recognition speed faster than the intelligent methods [20].

2.2. Emotion Analysis

Emotion analysis has been widely used in many fields [21, 22], such as consumer management, precision marketing, and social networks. Unsupervised learning algorithms and, foremost, supervised learning algorithms have been used to classify the emotion polarity of comments [23]. Moreover, emotion analysis is performed at many levels: document level [24], sentence level [25], and word/term or aspect level [26].

Until now, emotion classification methods can be roughly divided into three groups: machine learning methods, emotion dictionary-based methods [27], and deep learning emotion classification approaches [28]. Common classifiers for machine learning methods are decision trees [29], Bayesian classifiers [30], and support vector machines [31]. The emotion dictionary-based approach achieves classification by using the polarity of emotion words at different granularities. Common emotion lexicons include SentiWordNet [32], General Inquirer [33], SenticNet [34], Opinion Lexicon, the HowNet Emotional Dictionary, Subjective Lexicon, the DUTIR emotional vocabulary ontology library, and NTUSD [35]. However, it is very difficult to construct a complete emotion dictionary that covers the polarity of all emotion words. Therefore, it is necessary to obtain the polarity of emotional words from context. Deep learning approaches are usually used to achieve emotion classification at the aspect level. In natural language processing, deep learning has shown far superior performance to classic machine learning [18], as demonstrated in text recognition [36] and semantic mining [37]. More recently, deep learning, especially convolutional neural networks, has been widely used to improve emotion analysis accuracy [38–40].

3. Semisupervised Emotion Classification Method

3.1. Problem Description and Assumptions

Let the basic corpus be denoted by $D$, and let the source review set of the domain contain $N$ comments $C=\{c_1, c_2, \ldots, c_N\}$, where $c_n$ is the $n$th comment and $N$ is the total number of comments. The feature noun set of the comments is denoted by $F=\{f_1, f_2, \ldots, f_M\}$, where $f_i$ is a comment feature and $M$ is the total number of feature nouns. Among them, the overall characteristic of a review (patient satisfaction, efficacy) is also called the identification category and is recorded as $Y$. The range of eigenvalues is $V$, which together with $C$ and $F$ forms an information function $g: C \times F \to V$. Let $x_n = (x_{n1}, x_{n2}, \ldots, x_{nM})$; then $x_n$ is the eigenvalue vector of the comment $c_n$, where $x_{ni} \in V_i$ and $k_i = |V_i|$ is the number of eigenvalues of the feature noun $f_i$. Here $x_{ni}$ is the eigenvalue of the comment $c_n$ on $f_i$ (the eigenvalue is related to the adjective corresponding to the noun). A new comment is recorded as $c_{N+1}$, and the comment feature matrix can be defined as $X = (x_{ni})_{N \times M}$. The data in the comments are heterogeneous, so it is necessary to normalize the eigenvalues.

We number all the adjectives contained in each feature and substitute these numbers into the matrix as eigenvalues for the subsequent calculations.

Let $a_{ni}$ be the adjective describing feature $f_i$ in the comment $c_n$; then $a_{ni}$ is converted to its index number $x_{ni} \in V_i$.

Due to the uncertainty of the adjective choice in the comment library, a probability is used to describe its distribution characteristics. Let $p(v_{ij})$ be the probability that feature $f_i$ takes the value $v_{ij}$; after a commenter's emotional polarity is determined, the specific wording is still uncertain, and this probability is used to eliminate the influence of the commenters' individual wording decisions.
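To make the encoding and probability estimation concrete, the following Python sketch (hypothetical variable names and toy comments, not the authors' code) numbers the adjectives observed for each feature noun, builds the comment-feature eigenvalue matrix, and estimates the empirical probability of each feature value:

```python
from collections import defaultdict

# Toy comment data: each comment is a list of (feature_noun, adjective) pairs
# produced by the "noun + adjective" segmentation step (hypothetical examples).
comments = [
    [("attitude", "kind"), ("effect", "good")],
    [("attitude", "impatient"), ("price", "high")],
    [("effect", "good"), ("price", "reasonable")],
]
features = ["attitude", "effect", "price"]

# Number the adjectives observed for each feature noun (the eigenvalues).
value_index = {f: {} for f in features}
for comment in comments:
    for noun, adj in comment:
        value_index[noun].setdefault(adj, len(value_index[noun]) + 1)

# Build the comment-feature eigenvalue matrix; 0 marks a feature not mentioned.
matrix = []
for comment in comments:
    row = dict(comment)
    matrix.append([value_index[f].get(row.get(f), 0) for f in features])

# Empirical probability of each eigenvalue, modelling the uncertainty of the
# commenters' adjective choices.
probs = {}
for j, f in enumerate(features):
    counts = defaultdict(int)
    for row in matrix:
        counts[row[j]] += 1
    probs[f] = {v: c / len(matrix) for v, c in counts.items()}

print(matrix)
print(probs["attitude"])
```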

The uncertainty of the emotional polarity of comments is concentrated in the feature redundancy of the comment set. Mutual information can effectively measure the redundancy between variables in a feature set. It is thus possible to find a set of input features that has a large mutual information value with the identification category and low redundancy with the other features. The feature Relation-Redundancy Coefficient (R2C) is used for this discrimination, considering both the range of feature values and the distribution of values.

In the feature selection process, multiple candidate features act jointly on the category $Y$, owing to the redundancy among them. In this paper, the redundancy between a candidate feature $f_i$ and a selected feature $f_s$ in the selected set $S$, together with the redundancy among all features in $S$, is collectively referred to as the redundancy of the feature, denoted by $R(f_i)$. If the number of eigenvalues of the feature $f_i$ is $k$, then its information entropy can be denoted as

$$H(f_i) = -\sum_{j=1}^{k} p(v_{ij}) \log p(v_{ij}).$$

If $f_i, f_j \in F$, $v_{is} \in V_i$, and $v_{jt} \in V_j$, then, according to the joint distribution $p(v_{is}, v_{jt})$, the conditional entropy can be denoted as

$$H(f_i \mid f_j) = -\sum_{s}\sum_{t} p(v_{is}, v_{jt}) \log p(v_{is} \mid v_{jt}).$$

Definition 1. Comment space mutual information.
In the comment space, the mutual information between $f_i$ and $f_j$ in the feature set $F$ can be expressed as

$$I(f_i; f_j) = \sum_{s}\sum_{t} p(v_{is}, v_{jt}) \log \frac{p(v_{is}, v_{jt})}{p(v_{is})\, p(v_{jt})}.$$

The larger $I(f_i; f_j)$ is, the closer the relationship between the feature random variables $f_i$ and $f_j$; when $I(f_i; f_j)$ approaches zero, the two are independent of each other. The relationship between mutual information and information entropy can be expressed as

$$I(f_i; f_j) = H(f_i) - H(f_i \mid f_j) = H(f_j) - H(f_j \mid f_i).$$
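To make Definition 1 concrete, the following Python sketch (illustrative only, not the authors' MATLAB implementation) estimates entropy, conditional entropy, and mutual information from two discrete eigenvalue columns and checks the identity relating them:

```python
import math
from collections import Counter

def entropy(xs):
    """Empirical entropy H(X) of a discrete sample, in bits."""
    n = len(xs)
    return -sum((c / n) * math.log2(c / n) for c in Counter(xs).values())

def mutual_information(xs, ys):
    """Empirical mutual information I(X;Y) of two aligned discrete samples."""
    n = len(xs)
    pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Two eigenvalue columns from the comment-feature matrix (toy values).
f_i = [1, 1, 2, 2, 1, 2]
f_j = [1, 1, 2, 1, 1, 2]

mi = mutual_information(f_i, f_j)
# Check I(f_i; f_j) = H(f_i) - H(f_i | f_j), with the conditional entropy
# obtained as H(f_i, f_j) - H(f_j).
cond = entropy(list(zip(f_i, f_j))) - entropy(f_j)
assert abs(mi - (entropy(f_i) - cond)) < 1e-9
print(round(mi, 4))
```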

3.2. Feature Selection Based on Mutual Information

Definition 2. Let $r_i$ be the ratio of the mutual information between the selected feature $f_i$ and the identification category $Y$ to the information entropy of the feature $f_i$; then $r_i = I(f_i; Y)/H(f_i)$, with $0 \le r_i \le 1$.
$r_i$ has the following characteristics:
(i) When the range of feature values is the same, the more uniform the distribution of values is, the less important the feature is.
(ii) When the feature values are evenly distributed, the larger the value range is, the less important the feature is.

The mutual information criterion with feature redundancy in the MIFS-U method is expressed as

$$J(f_i) = I(f_i; Y) - \beta \sum_{f_s \in S} \frac{I(f_s; Y)}{H(f_s)}\, I(f_i; f_s),$$

where $S$ is the set of already-selected features. The ratio of the mutual information of maximum correlation to that of minimum redundancy denotes the ratio of feature correlation and redundancy. Here $\beta \ge 0$ is a constant used to measure the degree to which redundancy between features in the feature set influences classification accuracy, and it can be set according to the actual situation. The parameter that measures the redundancy of the selected feature set, called the feature Relation-Redundancy Coefficient (R2C), is expressed by a nonnegative number $\lambda_i$:

$$\lambda_i = \frac{I(f_i; Y)}{\beta \sum_{f_s \in S} \dfrac{I(f_s; Y)}{H(f_s)}\, I(f_i; f_s)}.$$

In the comment space, the Relation-Redundancy Coefficient has the following four effects:
(i) When $\lambda_i = 0$, the correlation between the candidate feature $f_i$ and the identification category $Y$ is zero; $f_i$ is an irrelevant feature of $Y$.
(ii) When $\lambda_i < 1$, the redundancy between the candidate feature $f_i$ and the selected features is stronger than its correlation with $Y$; it is then a redundancy feature.
(iii) When $\lambda_i > 1$, the correlation between the candidate feature $f_i$ and the identification category $Y$ is stronger than the redundancy between $f_i$ and the selected features, and $f_i$ brings new information for classification; it is then called a correlation feature. Here we set a threshold $\theta$ based on the actual values for the correlation features: the features are strong correlation features when $\lambda_i \ge \theta$; otherwise, they are weak correlation features.
(iv) When $S = \varnothing$, it is only necessary to analyze the mutual information between each candidate feature and the identification category $Y$; the feature corresponding to the maximum value is selected into the set $S$.

According to the abovementioned effects, the optimal feature set $S$ containing the selected features is finally obtained; a procedural sketch of this selection loop is given below. Given the mutual information and redundancy of the features, an empirical index is provided by the expert, and the mutual information method is then used to obtain the comprehensive weight $w_i$ of the feature $f_i$ in the comment space. As an important parameter of the model, $w_i$ plays an important role in the accuracy of the classification.
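The selection loop described by the four effects can be sketched as follows. This is our own illustrative reading in Python: relevance is taken as $I(f; Y)$, redundancy as the $\beta$-weighted MIFS-U term, and the $\theta$ threshold for strong versus weak correlation features is omitted; the helper names, the value of $\beta$, and the toy data are assumptions, not the authors' code.

```python
import math
from collections import Counter

def entropy(xs):
    """Empirical entropy of a discrete sample, in bits."""
    n = len(xs)
    return -sum((c / n) * math.log2(c / n) for c in Counter(xs).values())

def mutual_information(xs, ys):
    """Empirical mutual information of two aligned discrete samples, in bits."""
    n = len(xs)
    pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def r2c(candidate, selected, target, columns, beta=0.5):
    """Relation-Redundancy Coefficient (illustrative reading): relevance I(f; Y)
    divided by the beta-weighted MIFS-U redundancy with selected features."""
    relevance = mutual_information(columns[candidate], target)
    redundancy = beta * sum(
        (mutual_information(columns[s], target) / entropy(columns[s]))
        * mutual_information(columns[candidate], columns[s])
        for s in selected)
    if redundancy == 0:
        return float("inf") if relevance > 0 else 0.0
    return relevance / redundancy

def select_features(columns, target, beta=0.5):
    """Greedy selection: start from the most relevant feature (effect iv), then
    keep every candidate whose R2C exceeds 1 (effect iii); the rest are treated
    as irrelevant or redundant features (effects i and ii)."""
    remaining = set(columns)
    first = max(remaining, key=lambda f: mutual_information(columns[f], target))
    selected = [first]
    remaining.discard(first)
    for f in sorted(remaining):
        if r2c(f, selected, target, columns, beta) > 1.0:
            selected.append(f)
    return selected

# Toy eigenvalue columns and an identification category (overall satisfaction).
columns = {
    "attitude": [1, 1, 2, 2, 1, 2],
    "effect":   [1, 2, 2, 2, 1, 2],
    "price":    [1, 1, 1, 2, 2, 2],
}
satisfaction = [1, 1, 2, 2, 1, 2]
print(select_features(columns, satisfaction))
```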

3.3. Emotional Polarity Selection Based on Mutual Information Weight

Based on the corpus data in the basic database, we obtain the optimal subset with the least feature redundancy and the relative weight of each feature in the feature set, and we then calculate the emotion value of the unmarked corpus on the marked features based on these weights. The specific steps are as follows (steps (iii) and (iv) are sketched in code below):
(i) Extract the emotional words from the unmarked corpus and convert them into a basic corpus.
(ii) According to the basic corpus, filter out the optimal features, including their weights, after removing redundant features.
(iii) Assign the eigenvalues corresponding to the feature nouns according to the NTUSD emotion dictionary compiled by National Taiwan University: a positive word is assigned 1 and a negative word is assigned −1. Calculate the emotion value according to the weights (ignoring the impact of adverbs or grammatical structure on the emotional value) and set the emotional threshold based on the basic corpus.
(iv) Judge the polarity and accuracy of the test corpus according to the weights and emotional thresholds obtained from the training library.
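A minimal sketch of steps (iii) and (iv), assuming a tiny in-memory stand-in for the NTUSD lexicon and illustrative feature weights (in practice the dictionary entries, the weights, and the threshold come from the preceding steps):

```python
# Stand-in fragments of a positive/negative lexicon (the real NTUSD lists are
# far larger); the feature weights are assumed outputs of the selection step.
positive_words = {"good", "kind", "patient", "effective"}
negative_words = {"bad", "impatient", "expensive", "useless"}
feature_weights = {"attitude": 0.35, "effect": 0.45, "price": 0.20}

def emotion_value(comment_pairs):
    """Weighted emotion value of one comment given (feature, adjective) pairs.
    Positive adjectives contribute +1 and negative ones -1, scaled by the
    feature weight; adverbs and grammatical structure are ignored, as above."""
    score = 0.0
    for feature, adj in comment_pairs:
        polarity = 1 if adj in positive_words else -1 if adj in negative_words else 0
        score += feature_weights.get(feature, 0.0) * polarity
    return score

def classify(comment_pairs, threshold):
    """Judge polarity against the threshold learned from the training corpus."""
    return "positive" if emotion_value(comment_pairs) >= threshold else "negative"

print(classify([("attitude", "kind"), ("price", "expensive")], threshold=0.0))
```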

4. Numerical Experiment

Our experimental analysis compares the mutual information method with the emotion lexicon, TF-IDF, and SVM methods. Four datasets crawled from the Haodf online platform are used to evaluate the performance of our proposed ALEIM method. The experiment covers four aspects:
(i) the datasets used in the experiments;
(ii) the overall flow and evaluation measures of the experiments;
(iii) the description of the experimental details using the four datasets crawled from the Haodf online platform;
(iv) the discussion of the experiments.

4.1. Datasets

The experimental datasets are crawled from the Haodf online platform. The medical service comments are extracted using the Octopus collector, the word segmentation results are then reorganized using Java programming, and each sentence in a comment is split into a metamatrix structure of "noun + verb." We first select 100 doctors and randomly collect 750 comments from their comment areas, and we construct four basic corpus training libraries with different numbers of comments based on these data, as shown in Figure 1.
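The crawling and segmentation code is not part of the paper; purely as an illustration, the following Python sketch uses the jieba part-of-speech tagger as a stand-in for the Java-based processing described above, splitting a Chinese comment into noun/descriptor pairs:

```python
import jieba.posseg as pseg  # pip install jieba

def noun_descriptor_pairs(comment):
    """Split a comment into (noun, following descriptor) pairs: each noun is
    matched with the next adjective or verb that follows it, approximating the
    "noun + verb" meta-structure described above."""
    pairs, pending_noun = [], None
    for token in pseg.cut(comment):
        if token.flag.startswith("n"):                       # noun: remember it
            pending_noun = token.word
        elif token.flag.startswith(("a", "v")) and pending_noun:
            pairs.append((pending_noun, token.word))         # pair noun + descriptor
            pending_noun = None
    return pairs

print(noun_descriptor_pairs("医生态度很好，疗效不错，价格偏高"))
```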

The number of positive and negative comments in the four basic corpus training libraries varies, and the proportion of positive comments is higher than that of negative ones. Because the comment data are randomly extracted to form the corpus training libraries, the distribution of positive and negative comments in each training library is uncertain. Such randomly extracted training data can test not only the dependence of different classification algorithms on the number of instances per category but also the ability to learn a specific marked category from a small sample. In addition, 400 comments are prepared as test data in Table 2, including 200 positive comments and 200 negative comments, to test the accuracy of emotion classification under the different algorithms trained on each library.


Table 2

Number of data | Positive | Negative | Number of features | Used for
100 | 70 | 30 | 37 | Training corpus
150 | 100 | 50 | 39 | Training corpus
200 | 120 | 80 | 41 | Training corpus
300 | 180 | 120 | 41 | Training corpus
400 | 200 | 200 | 42 | Test corpus

When feature extraction is performed, the features extracted from the 100-comment corpus are all included in the larger corpora; the features extracted from the 150-comment corpus are all included in the corpora of 200, 300, and 400 comments; the 200- and 300-comment corpora yield the same features; and the 400-comment test corpus yields 42 features, one more than the 200- and 300-comment corpora.

Since the data are randomly crawled, the corpora have low repeatability with each other, so it can be approximated that the probability of new features appearing decreases rapidly as the amount of selected comment corpus data increases. Therefore, once a suitable amount of training corpus data is determined, the extracted features can cover almost all the features contained in medical comments (the few special features extracted with small probability are often unrelated to the medical service itself). This indicates that, owing to the uniformity and standardization of medical services, the features in medical comments are more limited than those in traditional commodity comments. General commodity comments are not fixed because of the feature attributes of products: products differ greatly, and different products often contain unique features that affect the overall polarity of the comments. Therefore, commodity comments place high demands on feature extraction, and the extracted features must be continuously updated on the basis of a large amount of data to achieve accurate classification of emotional polarity. Since medical services do not exhibit the variability of general commodities, the features of their comments are limited, and extracting features from a certain amount of data can cover almost all the features in medical service comments.

4.2. Experimental Design and Evaluation Measures

We employ the National Taiwan University NTUSD Simplified Chinese Emotion Dictionary for emotion classification. The overall flowchart of the experiments in this paper is shown in Figure 2.

The SVM and feature weight algorithms used in this experiment are implemented in MATLAB. The mutual information algorithm and the IDF algorithm calculate the feature weights from the basic corpus, combine them with the NTUSD emotion dictionary to calculate the emotion value of the corpus in the training library, and set the emotional threshold according to the corpus data (the positive and negative comments are evaluated separately, and the weighted average of the two emotional means is used as the emotional threshold; a sketch of this step follows below). The emotional polarity of the test corpus is then judged based on the feature weights and the threshold. We selected the following indicators as evaluation indicators:
(i) True positive: originally positive emotion, classified as positive emotion.
(ii) True negative: originally negative emotion, classified as negative emotion.
(iii) False positive: originally negative emotion, classified as positive emotion.
(iv) False negative: originally positive emotion, classified as negative emotion.
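The threshold step mentioned above can be written out as a short sketch (taking the class sizes as the weights in the weighted average is our assumption):

```python
def emotional_threshold(pos_scores, neg_scores):
    """Emotional threshold from the training corpus: the mean emotion value is
    computed for positive and negative comments separately, then combined as a
    weighted average (class sizes as weights is an assumption of this sketch)."""
    pos_mean = sum(pos_scores) / len(pos_scores)
    neg_mean = sum(neg_scores) / len(neg_scores)
    n_pos, n_neg = len(pos_scores), len(neg_scores)
    return (n_pos * pos_mean + n_neg * neg_mean) / (n_pos + n_neg)

# Toy training-corpus emotion values (e.g., a 70 positive / 30 negative split).
print(emotional_threshold([0.8, 0.6, 0.9], [-0.4, -0.7]))
```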

The accuracy reflects the ability of the classifier to judge the entire sample, that is, to classify positive instances as positive and negative instances as negative, and can be expressed as

$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}.$$

The precision reflects the proportion of true positive samples among the instances judged positive by the classifier and can be expressed as

$$\text{Precision} = \frac{TP}{TP + FP}.$$

The recall reflects the proportion of positive instances that are correctly judged among all actual positive instances and can be expressed as

$$\text{Recall} = \frac{TP}{TP + FN}.$$
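The three measures follow directly from the four confusion counts; a short sketch with hypothetical labels:

```python
def evaluate(y_true, y_pred):
    """Accuracy, precision, and recall from true/predicted polarity labels."""
    tp = sum(t == "pos" and p == "pos" for t, p in zip(y_true, y_pred))
    tn = sum(t == "neg" and p == "neg" for t, p in zip(y_true, y_pred))
    fp = sum(t == "neg" and p == "pos" for t, p in zip(y_true, y_pred))
    fn = sum(t == "pos" and p == "neg" for t, p in zip(y_true, y_pred))
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return accuracy, precision, recall

print(evaluate(["pos", "pos", "neg", "neg"], ["pos", "neg", "neg", "pos"]))
```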

4.3. Implementation Details of Experiments

Figure 3 shows that, for the four basic corpus libraries, the accuracies of the IDF and mutual information classification algorithms, which consider the feature weights, and of the emotion dictionary-based classification algorithm are significantly higher than that of the SVM algorithm with a Gaussian kernel function. As the number of samples increases, the accuracy of the emotion lexicon remains basically constant, whereas the accuracy of the mutual information method increases rapidly and exceeds that of the other three methods. As can be seen from Figure 3, the performance of the mutual information method is better than that of the other three methods. The SVM algorithm requires the numbers of different classes in the training database to be substantially the same to achieve optimal learning. However, online medical service comments have a highly unbalanced proportion of positive and negative polarities, so it is difficult for the support vector machine to reach the optimal data ratio. Constructing the training library according to the actual ratio often leads to poor identification of the under-represented negative polarity data and thus to lower overall accuracy.
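For reference, a rough Python stand-in for the SVM baseline (the paper's implementation is in MATLAB; scikit-learn's RBF-kernel SVC is assumed here), trained on toy eigenvalue rows with an unbalanced label ratio:

```python
import numpy as np
from sklearn.svm import SVC

# Toy eigenvalue matrix rows (features as columns) and polarity labels,
# mirroring the unbalanced positive/negative ratio of the training corpora.
X_train = np.array([[1, 1, 2], [1, 2, 2], [2, 2, 1], [1, 1, 1], [2, 1, 2], [2, 2, 2]])
y_train = np.array([1, 1, 1, 1, -1, -1])
X_test = np.array([[1, 2, 1], [2, 1, 1]])

clf = SVC(kernel="rbf", gamma="scale")  # Gaussian (RBF) kernel baseline
clf.fit(X_train, y_train)
print(clf.predict(X_test))
```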

Table 3 reports the significance test results (p values) for accuracy between the mutual information method and the other three methods on the four basic corpus libraries. As can be seen from the table, the mutual information method is superior to the other three methods on the 150-, 200-, and 300-comment datasets. The results show that when the sample size increases, the p values between the mutual information method and the other three methods are less than 0.05, which means that the classification results of the mutual information method are significantly better than those of the other three methods.


Table 3

Datasets | Metrics | MI and emotion lexicon | MI and TF-IDF | MI and SVM (RBF)
100 data | p value | 0.0906 | 0.0063 | 0.1304
150 data | p value | 0.0487 | 0.0197 | 0.0043
200 data | p value | 0.0435 | 0.0437 | 0.0226
300 data | p value | 0.0255 | 0.0432 | 0.0021
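The paper does not state which significance test produced these p values; as one plausible choice, the sketch below runs a paired t-test over hypothetical per-run accuracies with scipy:

```python
from scipy.stats import ttest_rel

# Hypothetical per-run accuracies of two methods evaluated on the same splits.
mi_accuracy    = [0.86, 0.88, 0.85, 0.87, 0.89]
tfidf_accuracy = [0.83, 0.84, 0.82, 0.85, 0.84]

t_stat, p_value = ttest_rel(mi_accuracy, tfidf_accuracy)
print(f"paired t-test: t = {t_stat:.3f}, p = {p_value:.4f}")
```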

Figure 4 shows that the precision of the IDF and mutual information classification algorithms, which consider the feature weights, is slightly higher than that of the other two algorithms. The mutual information algorithm has lower precision when the training data are few; its precision improves as the training data increase but remains slightly lower than that of the IDF weighting algorithm.

Table 4 reports the significance test results (p values) for precision between the mutual information method and the other three methods on the four basic corpus libraries. As can be seen from the table, there is a significant difference between the mutual information method and both the emotion lexicon and SVM methods, because the corresponding p values are less than 0.05, but as the number of samples increases, there is no significant difference between the mutual information method and the TF-IDF method.


Table 4

Datasets | Metrics | MI and emotion lexicon | MI and TF-IDF | MI and SVM (RBF)
100 data | p value | 0.0413 | 0.0043 | 0.0343
150 data | p value | 0.0387 | 0.0667 | 0.0342
200 data | p value | 0.0234 | 0.0731 | 0.0106
300 data | p value | 0.0055 | 0.0902 | 0.0049

Figure 5 indicates that our proposed algorithm, which considers the weight of each feature, performs better than the comparison approaches. Since the training library contains fewer negative-polarity data, the recall of the emotion lexicon and SVM algorithms is extremely low, whereas the feature weight algorithms do not depend on the proportions of the data categories, so they learn better from the limited negative-polarity data and recognize the negative emotion data in the test set more reliably. The recall of the mutual information algorithm is significantly higher than that of the IDF algorithm, which shows that the mutual information algorithm considering the feature weights has a strong ability to recognize negative emotion.

Table 5 reports the significance test results (p values) for recall between the mutual information method and the other three methods on the four basic corpus libraries. From this table, it can be seen that there is a significant difference between the mutual information method and both the emotion lexicon and SVM methods, because the corresponding p values are less than 0.05, but there is no significant difference between the mutual information method and the TF-IDF method.


Table 5

Datasets | Metrics | MI and emotion lexicon | MI and TF-IDF | MI and SVM (RBF)
100 data | p value | 0.0313 | 0.1025 | 0.0034
150 data | p value | 0.0478 | 0.0706 | 0.0147
200 data | p value | 0.0443 | 0.0831 | 0.0321
300 data | p value | 0.0142 | 0.0502 | 0.0079

Figure 6 compares the weights of the 41 features in the 300-comment training corpus under the mutual information weighting algorithm and the TF-IDF algorithm. It can be seen that the weights produced by the two algorithms differ considerably for feature 1, feature 25, feature 35, feature 5, and feature 41, which correspond to condition, attitude, doctor, side effects, and consultation, respectively.

The mutual information weights are significantly higher than the IDF weights for the first three features, which are common in medical comments. The IDF algorithm treats such high-frequency features as less important and filters them by assigning small weights, whereas the mutual information algorithm assigns them high weights because of their high mutual information with the identification category and low redundancy; these weights cause the mutual information algorithm to have slightly lower accuracy than IDF in identifying positive emotional polarity. As basic features of the comments, they tend to have little influence on the emotional polarity of the reviewer in positive comments but play a primary role in orienting the emotional polarity of negative comments.

For the latter two features, the mutual information weights are significantly lower than the IDF weights. These are low-frequency features, appearing 6 and 7 times, respectively, in the 300 comments. The IDF algorithm assumes that low-frequency words affect the emotional polarity of comments more strongly across the comment library as a whole, whereas the mutual information algorithm treats these features as having small mutual information values and high redundancy and therefore assigns them low weights. The experiments show that these two features actually weaken the emotional polarity of the comments: the IDF algorithm misclassifies all six test comments containing these two features, whereas the mutual information method recognizes them with 100% accuracy.

4.4. Discussion of Experiments

From the above experimental analysis, we conclude that mutual information is the most appropriate method for this problem. It shows good accuracy as the number of samples increases and requires only a moderate computational cost for emotion classification of short texts in the online medical knowledge sharing community. In terms of precision and recall, there is no significant difference between the mutual information method and the TF-IDF method, but Figure 6 shows that the accuracy of the IDF algorithm in identifying negative emotional polarity is significantly lower than that of the mutual information algorithm. The experiments show that low-frequency words in medical review data are often redundant features, and the mutual information algorithm identifies such redundant features more accurately. However, our experiments need to be further extended, because only four basic corpus libraries were involved. We therefore plan to crawl more types of comments from the online medical knowledge sharing community to optimize the parameters and further improve the method's performance.

5. Conclusions

Emotion analysis has been widely used in many fields and has become an important tool for extracting emotional information from comments. Compared with general commodities, emotion analysis in the medical knowledge sharing community is still relatively lacking. The information recipients in the medical knowledge sharing community are more concerned with the intensity of the emotional words in the comments than with the overall evaluation. In this research, we propose an adaptive learning emotion identification method based on mutual information feature weight, which captures the correlation and redundancy of features. Its effectiveness is verified on datasets crawled from the Haodf online platform, using the National Taiwan University NTUSD Simplified Chinese Emotion Dictionary for emotion classification. Finally, the experimental results show that the proposed ALEIM method achieves good performance, especially for low-frequency word feature extraction in comments of the online medical knowledge sharing community.

Data Availability

The experimental data come from the Haodf online platform and can be crawled from https://www.haodf.com.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This research was supported by the National Natural Science Foundation of China (grant no. 71571105) and the Fundamental Research Funds for China’s Central Universities (grant no. 63172074).

References

  1. Y. Wang, Y. Rao, X. Zhan, H. Chen, M. Luo, and J. Yin, “Sentiment and emotion classification over noisy labels,” Knowledge-Based Systems, vol. 111, pp. 207–216, 2016.
  2. D. M. El-Din and M. Hussein, “A survey on sentiment analysis challenges,” Journal of King Saud University Engineering Sciences, vol. 30, no. 4, pp. 330–338, 2018.
  3. L. Jia, C. Yu, and W. Meng, “The effect of negation on sentiment analysis and retrieval effectiveness,” in Proceedings of the 18th ACM Conference on Information and Knowledge Management (CIKM ’09), Hong Kong, China, November 2009.
  4. A. Hogenboom, P. Van Iterson, and B. Heerschop, “Determining negation scope and strength in sentiment analysis,” in Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, Anchorage, AK, USA, October 2011.
  5. B. Alexandra and S. Ralf, “Rethinking sentiment analysis in the news: from theory to practice and back,” in Proceedings of WOMSA 2009, pp. 1–12, Seville, Spain, November 2009.
  6. S. Mukherjee and P. Bhattacharyya, Feature Specific Sentiment Analysis for Product Reviews, Springer, Berlin, Germany, 2012.
  7. K. Chetan and M. Atul, “A scalable lexicon based technique for sentiment analysis,” International Journal in Foundations of Computer Science & Technology, vol. 4, no. 5, pp. 267–307, 2014.
  8. M. Mohey, M. O. Hoda, and O. Ismael, “Online paper review analysis,” International Journal of Advanced Computer Science and Applications, vol. 6, no. 9, pp. 99–107, 2015.
  9. B. Agarwal, N. Mittal, P. Bansal, and S. Garg, “Sentiment analysis using common-sense and context information,” Computational Intelligence and Neuroscience, vol. 2015, Article ID 715730, 9 pages, 2015.
  10. G. Katz, N. Ofek, and B. Shapira, “ConSent: context-based sentiment analysis,” Knowledge-Based Systems, vol. 84, pp. 162–178, 2015.
  11. A. Weichselbraun, S. Gindl, and A. Scharl, “Enriching semantic knowledge bases for opinion mining in big data applications,” Knowledge-Based Systems, vol. 69, pp. 78–85, 2014.
  12. F. Bravo-Marquez, M. Mendoza, and B. Poblete, “Meta-level sentiment models for big social data analysis,” Knowledge-Based Systems, vol. 69, pp. 86–99, 2014.
  13. E. Cambria, B. Schuller, Y. Xia, and C. Havasi, “New avenues in opinion mining and sentiment analysis,” IEEE Intelligent Systems, vol. 28, no. 2, pp. 15–21, 2013.
  14. B. Pang, L. Lee, and S. Vaithyanathan, “Thumbs up? Sentiment classification using machine learning techniques,” in Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), Philadelphia, PA, USA, July 2002.
  15. C. Pascoal, M. R. Oliveira, A. Pacheco, and R. Valadas, “Theoretical evaluation of feature selection methods based on mutual information,” Neurocomputing, vol. 226, pp. 168–181, 2017.
  16. Y. Wu, B. Liu, and W. Wu, “Grading glioma by radiomics with feature selection based on mutual information,” Journal of Ambient Intelligence & Humanized Computing, vol. 9, pp. 1–12, 2018.
  17. G. U. Chao, Y. Yang, and X. X. Zhang, “Feature selection for transformer fault diagnosis based on maximal relevance and minimal redundancy criterion,” Advanced Technology of Electrical Engineering & Energy, vol. 37, no. 7, pp. 84–89, 2018.
  18. S. Poria, A. Gelbukh, E. Cambria, A. Hussain, and G.-B. Huang, “EmoSenticSpace: a novel framework for affective common-sense reasoning,” Knowledge-Based Systems, vol. 69, pp. 108–123, 2014.
  19. L. Shen, H. Chen, Z. Yu et al., “Evolving support vector machines using fruit fly optimization for medical data classification,” Knowledge-Based Systems, vol. 96, pp. 61–75, 2016.
  20. V. Loia and S. Senatore, “A fuzzy-oriented sentic analysis to capture the human emotion in web-based content,” Knowledge-Based Systems, vol. 58, pp. 75–85, 2014.
  21. T. Chalothom and J. Ellman, “Simple approaches of sentiment analysis via ensemble learning,” in Information Science and Applications, Springer, Berlin, Germany, 2015.
  22. F. N. Ribeiro, M. Araújo, P. Gonçalves et al., “SentiBench: a benchmark comparison of state-of-the-practice sentiment analysis methods,” EPJ Data Science, vol. 5, no. 1, 2016.
  23. P. D. Turney, “Thumbs up or thumbs down? Semantic orientation applied to unsupervised classification of reviews,” in Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), Philadelphia, PA, USA, July 2002.
  24. A. Yessenalina, Y. Yue, and C. Cardie, “Multi-level structured models for document-level sentiment classification,” in Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, Cambridge, MA, USA, October 2010.
  25. N. Farra, E. Challita, and R. Assi, “Sentence-level and document-level sentiment mining for Arabic texts,” in Proceedings of the IEEE International Conference on Data Mining Workshops, Vancouver, Canada, December 2011.
  26. H. Zhou and F. Song, “Aspect-level sentiment analysis based on a generalized probabilistic topic and syntax model,” in Proceedings of FLAIRS 2015, Hollywood, FL, USA, May 2015.
  27. W. Medhat, A. Hassan, and H. Korashy, “Sentiment analysis algorithms and applications: a survey,” Ain Shams Engineering Journal, vol. 5, no. 4, pp. 1093–1113, 2014.
  28. H. H. Do, P. Prasad, A. Maag, and A. Alsadoon, “Deep learning for aspect-based sentiment analysis: a comparative review,” Expert Systems with Applications, vol. 118, pp. 272–299, 2019.
  29. N. Liu, E.-S. Qi, M. Xu, B. Gao, and G.-Q. Liu, “A novel intelligent classification model for breast cancer diagnosis,” Information Processing & Management, vol. 56, no. 3, pp. 609–623, 2019.
  30. K. Ravi and V. Ravi, “A survey on opinion mining and sentiment analysis: tasks, approaches and applications,” Knowledge-Based Systems, vol. 89, pp. 14–46, 2015.
  31. N. Liu, J. Shen, M. Xu et al., “Improved cost-sensitive support vector machine classifier for breast cancer diagnosis,” Mathematical Problems in Engineering, vol. 2018, Article ID 3875082, 13 pages, 2018.
  32. B. Agarwal and N. Mittal, “Prominent feature extraction for review analysis: an empirical study,” Journal of Experimental & Theoretical Artificial Intelligence, vol. 28, no. 3, pp. 485–498, 2014.
  33. H. Zhang, D. Wang, H. Xu, and S. Sun, “Sentiment classification of micro-blog public opinion based on convolution neural network,” Journal of the China Society for Scientific and Technical Information, vol. 37, no. 7, pp. 695–702, 2018.
  34. C. Hung, “Word of mouth quality classification based on contextual sentiment lexicons,” Information Processing & Management, vol. 53, no. 4, pp. 751–763, 2017.
  35. B. Rojas, “Deep learning for sentiment analysis,” Language & Linguistics Compass, vol. 10, no. 12, pp. 205–212, 2016.
  36. Z. Yang, R. Salakhutdinov, and W. Cohen, “Multi-task cross-lingual sequence tagging from scratch,” 2016, https://arxiv.org/abs/1603.06270.
  37. D. Marcheggiani, A. Frolov, and I. Titov, “A simple and accurate syntax-agnostic neural model for dependency-based semantic role labeling,” in Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), Vancouver, Canada, August 2017.
  38. M. Hu and B. Liu, “Mining and summarizing customer reviews,” in Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Seattle, WA, USA, August 2004.
  39. M. Saeidi, G. Bouchard, M. Liakata, and S. Riedel, “SentiHood: targeted aspect based sentiment analysis dataset for urban neighbourhoods,” in Proceedings of the 26th International Conference on Computational Linguistics (COLING 2016), Osaka, Japan, December 2016.
  40. S. Poria, I. Chaturvedi, E. Cambria, and F. Bisio, “Sentic LDA: improving on LDA with semantic similarity for aspect-based sentiment analysis,” in Proceedings of the 2016 International Joint Conference on Neural Networks (IJCNN), Vancouver, Canada, July 2016.

Copyright © 2019 Dan Gan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

