Research Article
A Neural Network-Inspired Approach for Improved and True Movie Recommendations
Table 13
Comparison between sentiment classification models.
| Classification models | IMDB Accuracy | IMDB RMSE | Yelp 2013 Accuracy | Yelp 2013 RMSE | Yelp 2014 Accuracy | Yelp 2014 RMSE |
|---|---|---|---|---|---|---|
| **Without using user and product information** | | | | | | |
| Majority | 0.196 | 2.495 | 0.411 | 1.060 | 0.392 | 1.097 |
| Trigram | 0.399 | 1.783 | 0.569 | 0.814 | 0.577 | 0.804 |
| Text feature | 0.402 | 1.793 | 0.556 | 0.845 | 0.572 | 0.800 |
| AvgWordvec + SVM | 0.304 | 1.985 | 0.526 | 0.898 | 0.530 | 0.893 |
| SSWE + SVM | 0.312 | 1.973 | 0.549 | 0.849 | 0.557 | 0.851 |
| Paragraph vector | 0.341 | 1.814 | 0.554 | 0.832 | 0.564 | 0.802 |
| RNTN + recurrent | 0.400 | 1.764 | 0.574 | 0.804 | 0.582 | 0.821 |
| CNN without UP (UPNN) | 0.405 | 1.629 | 0.577 | 0.812 | 0.585 | 0.808 |
| NSC | 0.443 | 1.465 | 0.627 | 0.701 | 0.637 | 0.686 |
| NSC + LA | 0.487 | 1.381 | 0.631 | 0.706 | 0.630 | 0.715 |
| **Using user and product information** | | | | | | |
| Trigram + UPF | 0.404 | 1.764 | 0.570 | 0.803 | 0.576 | 0.789 |
| Text feature + UPF | 0.402 | 1.774 | 0.561 | 1.822 | 0.579 | 0.791 |
| JMARS | N/A | 1.773 | N/A | 0.985 | N/A | 0.999 |
| UPNN (CNN) | 0.435 | 1.602 | 0.596 | 0.784 | 0.608 | 0.764 |
| UPNN (NSC) | 0.471 | 1.443 | 0.631 | 0.702 | N/A | N/A |
| NSC + UMA | 0.533 | 1.281 | 0.650 | 0.692 | 0.667 | 0.654 |
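The two evaluation metrics in Table 13 can be sketched as follows: accuracy is the fraction of reviews whose predicted rating class matches the gold label, and RMSE is the root-mean-square error between predicted and gold rating values. The ratings below are made-up illustrative values, not data from the table.

```python
import math

def accuracy(gold, pred):
    # Fraction of exact matches between predicted and gold rating classes.
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

def rmse(gold, pred):
    # Root-mean-square error; lower is better, and it penalizes
    # large rating deviations more heavily than small ones.
    return math.sqrt(sum((g - p) ** 2 for g, p in zip(gold, pred)) / len(gold))

# Hypothetical 1-10 star ratings for a handful of IMDB-style reviews.
gold = [8, 3, 10, 6, 7]
pred = [8, 4, 9, 6, 7]
print(accuracy(gold, pred))  # 0.6
print(round(rmse(gold, pred), 3))  # 0.632
```

Note that the two metrics can disagree: a model with slightly lower accuracy can still achieve a lower RMSE if its misclassifications stay close to the true rating, which is why the table reports both.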