Wireless Communications and Mobile Computing

Special Issue: Deep Feature Learning for Big Data

Research Article | Open Access

Volume 2021 |Article ID 5514003 | https://doi.org/10.1155/2021/5514003

Weina Jiang, Qi Yong, Ning Liu, Yuze Luo, "RAPOT: An Adaptive Multifactor Risk Assessment Framework on Public Opinion for Trial Management", Wireless Communications and Mobile Computing, vol. 2021, Article ID 5514003, 11 pages, 2021. https://doi.org/10.1155/2021/5514003

RAPOT: An Adaptive Multifactor Risk Assessment Framework on Public Opinion for Trial Management

Academic Editor: Amr Tolba
Received: 08 Jan 2021
Revised: 08 Apr 2021
Accepted: 17 Apr 2021
Published: 17 May 2021

Abstract

Since public opinion on social media has a growing impact on trials and their supervision, risk assessment of public opinion is increasingly important in refined trial management. However, the tremendous amount of public opinion and the insufficient historical logs of trial procedures bring challenges to risk assessment of public opinion. To address this, we propose an adaptive multifactor risk assessment framework for public opinion based on fuzzy numbers. Initially, we establish a multilayer indicator model for assessing the risk of public opinion (POR) with multilayer analysis and decision methods. Then, we explore the association rules hidden in the process logs to update the indicator model periodically. Moreover, we design a public opinion analysis module for indicator evaluation, including analysis of public opinion sentiment, hot search, and social media coverage, to deal with big data on social media. In particular, public opinion sentiment is classified by a topic-based BiLSTM (T-BiLSTM), which improves classification accuracy. Finally, fuzzy number similarity is employed to determine the POR's level in the nine-level risk system. Experimental results validate the efficiency of our framework in assessing the POR.

1. Introduction

Serious and complicated cases bring severe challenges to trial management nowadays. Some of them have attracted much attention due to their case type, related parties, and well-known judges. Simultaneously, people are used to expressing their opinions on the cases of concern on platforms such as Facebook, WeChat, Weibo, and Twitter. The mass of public opinion has both positive and negative impacts on trial procedures. Hence, public opinion assessment and supervision are crucial for credible trials. Public opinion on social media has its own characteristics, such as massive volume, fast propagation, and chaotic content. Furthermore, the mass of data on social media reveals the inherent information we are concerned about. After analyzing multisource public opinion comprehensively, we can figure out its propagation mode and issue POR warnings earlier. Therefore, POR assessment is beneficial for responding early to negative public opinion and improving the court's ability to act proactively. Accomplishing this involves two main tasks: one is to handle public opinion with big data techniques, and the other is to conduct the risk assessment with insufficient historical data.

For the explosive number of comments that emerge on social media, sentiment analysis has become a research hotspot. Besides, sentiment analysis of comments about hot cases plays a vital role in promoting trial management. Thus, it is crucial to develop an efficient analysis and supervision method for comments about cases. So far, research on machine learning-based sentiment analysis has produced many achievements, such as KNN [1], maximum entropy [2], SVM [3], and Bayes [4]. Nowadays, with the rapid development and outstanding performance of deep learning, many researchers concentrate on methods based on CNN [5], RNN [6], and LSTM [7] to improve classification accuracy and have made significant progress.

For risk assessment, due to insufficient historical data together with the fuzziness and uncertainty of risks, researchers adopt fuzzy set theory to analyze the risk [8]. Singh et al. [9] propose an assessment framework for risk analysis of flood disasters based on fuzzy similarity, and they quantitatively calculate the risk level of each target separately. The fuzzy similarity-based method performs well in quantitative risk assessment for trial cases.

However, there still exist several challenges in achieving the assessment of POR. Firstly, there is no suitable indicator model for this task: an efficient assessment relies on fine-grained indicators with objective weights, and this remains unsolved. Secondly, comments on social media about cases in trial have many characteristics that make them hard to analyze; hence, much work remains to ensure the accuracy of sentiment classification for this specific use. Thirdly, evaluating risks quantitatively is not easy but crucial.

To address these issues, this paper implements a Risk Assessment framework on Public Opinion for Trial management (RAPOT). The framework provides a fine-grained risk assessment based on fuzzy numbers. By computing fuzzy number similarities, the framework decides the risk level in the nine-level assessment system. Our main contributions in this paper are as follows:
(i) Fine-Grained Risk Rating System. We employ fuzzy number similarities to achieve risk assessment with little historical data in trial procedure management. At first, a multilayer risk indicator model is established based on the analytic hierarchy process (AHP) and the extended technique for order preference by similarity to an ideal solution (extended TOPSIS). The model contains a fine-grained indicator layer, and each element contains a risk indicator and its impact factor. When assessing the risks, we transform both impact factors and indicator values into fuzzy numbers. Then, we aggregate the fuzzy numbers into one and rank the integrated number in the nine-level assessment system.
(ii) Adaptive Indicator Model. Considering that the system logs accumulated during trial processing contain many latent association rules of the procedures, we propose the RApriori algorithm to explore these rules. The discovered rules are used to update the indicator model, improving its applicability and robustness.
(iii) Efficient Comment Sentiment Analysis. We define three kinds of input sources and submodules for indicator evaluation. Significantly, the sentiment of public opinion is classified based on topics. The sentiment analysis we propose consists of single-pass-based topic clustering and T-BiLSTM-based sentiment analysis, which is precise and more comprehensive. Besides, our framework has extensive indicators such as the topic's heat and media coverage.
(iv) Experimental Evaluations. To demonstrate the performance of RAPOT, we conduct a case study with three cases that have received much attention recently. The results illustrate that our framework is applicable and efficient in practical cases, producing reasonable assessment levels.

The rest of this paper is structured as follows. We talk about the related work in Section 2. The RAPOT framework is described in Section 3. In Section 4, we illustrate the experimental results, and we conclude the paper in Section 5.

2. Related Work

Due to the fuzziness and uncertainty of risks, researchers adopt fuzzy set theory to analyze risk. The theory of fuzzy numbers has been widely applied in risk analysis [10], approximate reasoning [11], and risk pattern recognition [12]. For risk analysis, the existing methods can be divided into fuzzy ranking-based [13], fuzzy inference-based [14, 15], fuzzy matrix-based [16, 17], and fuzzy number similarity-based [9] risk assessment models. Zhang et al. [13] identify the risky area within a water security evaluation framework by comparing the risks of related areas; hence, the qualitative analysis measures the risk level comparatively. Nevertheless, in trials, the PORs of two cases cannot be compared on the same scale. Karasan et al. [14] propose the safety and critical effect analysis (SCEA), which adopts Pythagorean fuzzy sets [18] to provide a comprehensive risk assessment. However, fuzzy inference-based methods are usually used in industry and are not suitable for trial applications. Can et al. [16] present a three-stage fuzzy risk matrix-based risk assessment and dynamically combine multicriteria decision-making with fuzzy logic. Though fuzzy matrix-based methods can reduce risk ties [19] efficiently, they still provide a qualitative assessment that is not precise enough. As for similarity-based methods, Khorshidi and Nikfalazar [20] present an improved method to compute the degree of similarity between generalized fuzzy numbers; the method has been used for fuzzy risk analysis and can determine each manufacturer's risk level. In summary, the similarity-based model is suitable for quantitative risk assessment of an individual object. At the same time, risks in the trial process management system (TPMS) are in fact quite fuzzy and uncertain, and the historical data have not been digitalized well. Therefore, we adopt a fuzzy number similarity-based model to achieve risk assessment.

The existing fuzzy number similarity-based methods always have three main modules: the risk indicator model, risk aggregation, and risk level determination. Among them, fuzzy number similarity calculation is important for determining the risk level precisely. Regarding fuzzy number similarities, researchers have defined various features of generalized fuzzy numbers (GFNs) to distinguish them, such as the center of gravity (COG) [21], the area [20], and the radius of gyration (ROG) [22]. Then, researchers adopt geometric distance, Hausdorff distance [23], and so on to measure the similarity of the feature values. Xu et al. [24] present a COG-based method, albeit with the limitation that two different fuzzy numbers may have the same COG. To address this limitation, Yong et al. [25] employ the ROG of the area to measure the similarities. Moreover, Chutia and Gogoi [10] extend GFNs with left and right heights to further distinguish traditional GFNs with the same COG. However, these two methods still suffer from invalid results. Therefore, we select a similarity measure on generalized fuzzy numbers to map the integrated fuzzy number into a linguistic term in the nine-level risk system [26]. The similarity measure algorithm we employ constrains the similarity of two fuzzy numbers to the range [0, 1], produces fewer invalid results, and at the same time has high distinguishability.

3. Framework of Risk Assessment of Public Opinion

In this section, we discuss the critical issues in assessing the POR. Firstly, we present the risk indicator model in Sections 3.1 and 3.2. Secondly, we discuss the evaluation of the risk indicators and the public opinion sentiment analysis in Section 3.3. Then, we explain the fuzzy number similarity-based risk assessment method in Section 3.4.

Figure 1 shows the framework of RAPOT. It includes a risk indicator model, an indicator evaluation module, and a risk aggregation module. In the beginning, a multilayer indicator model is built to define the fine-grained risk indicators with corresponding impact factors, and the model is dynamically updated by exploring new association rules from system logs. Then, the indicator evaluation module figures out the indicator values based on process data, data from the public opinion analysis module, and the other systems. The indicator aggregation module determines the risk level from the impact factors and the risk probabilities.

3.1. Risk Indicator Model Initialization

To overcome the difficulty of lacking historical data, we employ AHP and extended TOPSIS to construct an initial risk indicator model. The hierarchical model defines a set of risk indicators along with their impact factors. Figure 2 describes the procedure for building our indicator model. First, a hierarchy structure is built, and an evaluation dataset for risk indicators is collected based on AHP. Then, an evaluation matrix for the risk indicators is constructed from the collected dataset. We adopt extended TOPSIS to analyze the evaluation dataset and calculate the impact factors of the indicators. Our model construction method combines AHP and extended TOPSIS to work out a group of accurate impact factors with limited historical data.

3.1.1. Hierarchical Structure Determination

AHP is an efficient multilayer analysis and decision method [27, 28]. It first decomposes the decision problem into a hierarchy of subproblems, each of which can be treated independently. Once the hierarchy is built, the expert group evaluates the elements in the same layer by comparing them to each other according to their impact on the parent element. Table 1 shows the 1-9 scale used to evaluate each element's impact factor. AHP transforms the evaluations into numerical values that can be calculated over the decision problem's entire range. Finally, a priority is derived for each element in the hierarchy by iteratively verifying the comparison matrix's consistency after each adjustment of the priorities.


Intensity of importance | Definition

1 | Equal importance
2 | Weak
3 | Moderate importance
4 | Moderate plus
5 | Strong importance
6 | Strong plus
7 | Very strong or demonstrated importance
8 | Very, very strong
9 | Extreme importance

At first, we refer to expertise, existing laws and regulations, and classical hot cases to form the set of risks R = {r_1, r_2, ..., r_n}, where n is the number of risks. Then, the hierarchical structure is established based on AHP. As shown in Figure 3, our risk indicator model consists of three layers:
(i) Objective Layer (OL). Risk assessment of public opinion for trial management is the objective of our work. We need to figure out the impacts of public opinion on the trial procedure.
(ii) Criteria Layer (CL). The elements in this layer are the judge, the parties involved, the case, and the public opinion. The expert group defines the elements with reference to the existing documents.
(iii) Indicator Layer (IL). This layer contains the indicators through which public opinion would impact the trial procedure. Each indicator belongs to its parent element in the criteria layer.

After that, an evaluation dataset is collected to gain the indicators' impact factors, and the impact factor represents the indicator's weight when integrating the POR. To evaluate the impact factors accurately, the expert compares the risk indicators in pairs to complete a comparison matrix A = (a_ij)_{n x n}, where a_ij is the comparison value of r_i against r_j. The expert assigns the value according to Table 1. Then, the consistency of A has to be verified through the consistency index CI = (λ_max - n) / (n - 1), where λ_max is the maximum eigenvalue of A and n is the dimension of the matrix. The consistency is complete when CI = 0 and degrades as CI increases. Then, AHP uses a random consistency indicator RI to define a refined consistency ratio CR = CI / RI.

When CR < 0.1, the matrix is considered consistent; RI is a predefined dictionary of random consistency values [29]. If the validation fails, the expert has to adjust the comparison matrix until the validation succeeds.
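The consistency check above can be sketched in a few lines. This is a minimal illustration with a hypothetical 3x3 comparison matrix; the RI values follow Saaty's standard random-index table, and the dominant eigenvalue is approximated by power iteration.

```python
# Standard random consistency indices (Saaty) indexed by matrix dimension.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def max_eigen(A, iters=200):
    """Approximate the dominant eigenvalue of a positive matrix by power iteration."""
    n = len(A)
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam

def consistency_ratio(A):
    """CR = CI / RI with CI = (lambda_max - n) / (n - 1)."""
    n = len(A)
    ci = (max_eigen(A) - n) / (n - 1)
    return ci / RI[n] if RI[n] > 0 else 0.0

# Hypothetical pairwise comparisons on the 1-9 scale (reciprocal matrix).
A = [[1.0, 3.0, 5.0],
     [1 / 3, 1.0, 3.0],
     [1 / 5, 1 / 3, 1.0]]
print(consistency_ratio(A) < 0.1)  # matrix is acceptably consistent
```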

The eigenvector of the approved evaluation matrix ranks the risk indicators by their impact factors. For risk assessment with fuzzy numbers, the expert assigns a linguistic term in LT = {"AbsolutelyLow (AL)", "VeryLow (VL)", "Low (L)", "FairlyLow (FL)", "Medium (M)", "FairlyHigh (FH)", "High (H)", "VeryHigh (VH)", "AbsolutelyHigh (AH)"} to each risk indicator based on the order.

3.1.2. Impact Factor Calculation

Several law experts evaluate the impact factors according to our hierarchical structure and construct an evaluation dataset. The dataset contains one evaluation item per expert, each covering every risk indicator in the set R. Then, we employ TOPSIS to aggregate the evaluations of different experts. TOPSIS is a multicriteria decision analysis method that identifies weights for each criterion by calculating the geometric distances from each alternative to the positive ideal solution and the negative ideal solution, respectively [30]. When evaluating the risk indicator's impact factor, the positive ideal solution is defined as the lowest impact on cost optimization; namely, a lower impact of the risk indicator brings less cost in risk prevention and control. Hence, we adopt the extended TOPSIS [31], designed for the trial scene, to calculate the impact factors for the POR assessment.

First, an evaluation matrix D = (d_ij)_{m x n} with linguistic terms is established based on the dataset, where m and n are the numbers of experts and risk indicators, respectively. In the matrix, d_ij is given by expert i for indicator r_j to measure the importance of the indicator. Then, each d_ij is transformed into a fuzzy number according to Table 2 for weight fusion of impact. After that, we get an evaluation matrix with fuzzy numbers, where each entry is a generalized trapezoidal fuzzy number (a_1, a_2, a_3, a_4; w) with 0 ≤ a_1 ≤ a_2 ≤ a_3 ≤ a_4 ≤ 1 and 0 < w ≤ 1.


Linguistic terms | Generalized fuzzy numbers

AbsolutelyLow | (0.0, 0.0, 0.0, 0.0; 1.0)
VeryLow | (0.0, 0.0, 0.02, 0.07; 1.0)
Low | (0.04, 0.1, 0.18, 0.23; 1.0)
FairlyLow | (0.17, 0.22, 0.36, 0.42; 1.0)
Medium | (0.32, 0.41, 0.58, 0.65; 1.0)
FairlyHigh | (0.58, 0.63, 0.80, 0.86; 1.0)
High | (0.72, 0.78, 0.92, 0.97; 1.0)
VeryHigh | (0.93, 0.98, 1.0, 1.0; 1.0)
AbsolutelyHigh | (1.0, 1.0, 1.0, 1.0; 1.0)

In the extended TOPSIS, the positive ideal solution A+ and the negative ideal solution A- are defined over the fuzzy evaluation matrix as the highest and lowest attainable fuzzy ratings, respectively.

Then, the geometric distance d+_j between the evaluations of indicator r_j and the positive ideal solution is calculated.

Similarly, the geometric distance d-_j between the evaluations and the negative ideal solution is computed; A+, A-, and the matrix entries are all generalized fuzzy numbers. After that, we obtain the weight of each alternative by normalizing the distance ratio d-_j / (d+_j + d-_j).

Finally, the impact factor of indicator r_j is calculated by the weighted sum of the alternatives.
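As an illustration of this weight-fusion step, the sketch below maps hypothetical expert ratings to the trapezoidal fuzzy numbers of Table 2 and computes normalized impact factors. The plain vertex distance used here is only a stand-in for the geometric distance of the extended TOPSIS in [31], and impact is treated as a benefit criterion for simplicity.

```python
import math

# Linguistic term -> (a1, a2, a3, a4) from Table 2 (all heights 1.0).
FUZZY = {
    "VL": (0.0, 0.0, 0.02, 0.07), "L": (0.04, 0.1, 0.18, 0.23),
    "FL": (0.17, 0.22, 0.36, 0.42), "M": (0.32, 0.41, 0.58, 0.65),
    "FH": (0.58, 0.63, 0.80, 0.86), "H": (0.72, 0.78, 0.92, 0.97),
    "VH": (0.93, 0.98, 1.0, 1.0),
}

def dist(x, y):
    """Vertex distance between two trapezoidal fuzzy numbers."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)) / 4)

def impact_factors(ratings):
    """ratings[e][i]: linguistic rating of indicator i by expert e."""
    n = len(ratings[0])
    cols = [[FUZZY[row[i]] for row in ratings] for i in range(n)]
    pos, neg = (1.0,) * 4, (0.0,) * 4   # ideal and anti-ideal solutions
    closeness = []
    for col in cols:
        dp = sum(dist(f, pos) for f in col)   # distance to positive ideal
        dn = sum(dist(f, neg) for f in col)   # distance to negative ideal
        closeness.append(dn / (dp + dn))
    total = sum(closeness)
    return [c / total for c in closeness]     # normalized impact factors

w = impact_factors([["H", "M", "L"], ["VH", "M", "FL"]])
print(w[0] > w[1] > w[2])  # higher-rated indicators receive larger weights
```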

3.2. Risk Indicator Model Update

Considering that the trial process is strict and complicated, POR's initial indicator model can hardly remain applicable to POR assessment over time. Also, the system logs accumulated during trial processing contain many latent association rules of the procedures. Figure 4 shows a fragment of the trial process; each block is a process node, and each ellipse represents a risk confirmation. Therefore, we propose a reversed Apriori (RApriori) algorithm to explore the association rules hidden in the system logs. The association rule we search for takes the form {p_1, ..., p_k} => c, where each p_i represents a failed rule check in a process node and c is a risk confirmation node. By investigating the practical TPMS, we figure out that the process nodes are arranged in a single sequence. Accordingly, we optimize the classical Apriori by ordering the nodes and extending the association set in reverse. The details of the proposed RApriori are shown in Algorithm 1.

Algorithm 1: RApriori.
Require: system logs generated from T1 to T2
Ensure: association rules
[numbered pseudocode steps omitted; the procedure is described in the text]

In the algorithm, we assign numerical codes to both process nodes and risk confirmation nodes based on their sequence in the trial. Firstly, the search for latent association rules always starts from a frequent risk confirmation node c, which is set as the root of the tree shown in Figure 5. Secondly, the frequent process nodes whose numerical codes are less than that of c are sorted in reverse order into a candidate list. Thirdly, we join each item in the list with c to form a candidate set and check the corresponding support score to create layer 2. The support score of a set S is defined as sup(S) = count(S) / N, where count(S) is the number of logs containing S and N is the total number of logs. Fourthly, the tree moves to the next layer by orderly combining a set in the current layer with items in the candidate list whose codes are less than the minimum node in the set. Then, the height of the tree is increased iteratively until no new set satisfies the support threshold. At last, we calculate the support scores of the satisfied sets and work out the association rules.

The RApriori method is executed regularly, and the discovered association rules are added to update the indicator model of POR. The experimental results show that our algorithm decreases the computational complexity significantly.
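The reverse extension idea described above can be sketched as follows. This is an illustrative approximation, not the authors' exact algorithm: toy logs are encoded as sets of node codes, candidate sets grow only with process-node codes smaller than the set's current minimum, and candidates are pruned by support.

```python
def support(itemset, logs):
    """Fraction of logs that contain every node in itemset."""
    return sum(1 for log in logs if itemset <= log) / len(logs)

def rapriori_like(logs, r, min_sup=0.3):
    """Grow frequent sets rooted at risk-confirm node r by reverse extension."""
    # Frequent process-node candidates: codes below r, reverse-sorted.
    nodes = sorted({n for log in logs for n in log if n < r}, reverse=True)
    frequent, frontier = [], [frozenset([r])]
    while frontier:
        nxt = []
        for s in frontier:
            for n in nodes:
                # Extend only with codes below the set's current minimum.
                if n < min(s - {r}, default=r) and support(s | {n}, logs) >= min_sup:
                    nxt.append(s | {n})
        frequent.extend(nxt)
        frontier = nxt
    # Each frequent set yields a rule: failed process checks -> confirmation r.
    return [(sorted(s - {r}), r) for s in frequent]

# Toy logs; node 9 plays the role of a risk confirmation node.
logs = [frozenset(l) for l in ([1, 2, 9], [1, 2, 9], [2, 9], [1, 3], [2, 9])]
rules = rapriori_like(logs, r=9, min_sup=0.4)
print(rules)
```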

3.3. Risk Indicator Evaluation and Public Opinion Analysis

Besides the impact factor, we have to calculate the probability of indicator occurrence, which we call the indicator value. The data sources for value computing can be divided into three categories: (1) social media, (2) manual input, and (3) document analysis. For indicator C3.1, the judge can report the POR during the trial. As for C1.2, C1.4, C2.1, and C3.3, the indicator values are determined by the other subsystems in the TPMS, for instance, the case division system. Apart from them, the values of indicators C1.1, C1.3, C2.2, and C3.2 are inferred from the social media analysis module. Figure 6 illustrates the structure of our module for social media analysis. It is composed of three parts:
(i) Analysis in Public Opinion Sentiment. This part explores how interested people are in the case and how intensely they discuss the related topics. If the public cares much about the case and shows negative sentiment in their expressions, the indicator value will be large; on the contrary, it will come near zero.
(ii) Analysis in Hot Search. The judge or the parties being frequently searched on social media is an important indicator that the case may carry POR during the trial.
(iii) Analysis in Media Coverage. If a medium in our maintained important-media list has taken part in the related topic, the case's media coverage will increase. The POR level increases once the coverage reaches a threshold.

In this section, we mainly describe the topic-based public opinion sentiment analysis. To address it, the comments collected from social media related to the case are divided into topics. Then, the texts and the related topics are fed into a neural network to train a classifier for sentiment analysis. The details are as follows.

3.3.1. Input Embedding

Firstly, a short text is split into a word sequence w_1, w_2, ..., w_n containing n words. After that, we transform the words into vectors with a Word2vec model [32] and obtain the embedding matrix consisting of all word embeddings.

3.3.2. Topic Clustering

Single-pass clustering [33] with cosine similarity is employed to iteratively partition short texts into clusters; the topics can be represented as T = {t_1, t_2, ..., t_k}, where each t_i is a set of keywords. The similarity is calculated as sim(u, v) = (u · v) / (||u|| ||v||), where u and v are the vectors of two short texts. Then, the keywords in the clusters are detected to be the topics. Moreover, we get the embedding matrix that contains all keyword embeddings of a topic through word embedding.
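The single-pass step can be sketched as follows, assuming each short text is already embedded as a vector (toy 2-dimensional vectors stand in for the Word2vec embeddings): a text joins the most cosine-similar existing cluster above a threshold, otherwise it seeds a new cluster.

```python
import math

def cosine(u, v):
    """Cosine similarity sim(u, v) = (u . v) / (||u|| ||v||)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def single_pass(vectors, threshold=0.9):
    clusters = []  # each cluster: {"centroid": [...], "members": [...]}
    for idx, v in enumerate(vectors):
        best, best_sim = None, threshold
        for c in clusters:
            sim = cosine(c["centroid"], v)
            if sim >= best_sim:
                best, best_sim = c, sim
        if best is None:
            clusters.append({"centroid": list(v), "members": [idx]})
        else:
            best["members"].append(idx)
            m = len(best["members"])   # incremental centroid update
            best["centroid"] = [(c * (m - 1) + x) / m
                                for c, x in zip(best["centroid"], v)]
    return clusters

topics = single_pass([[1, 0], [0.99, 0.1], [0, 1], [0.1, 0.95]], threshold=0.9)
print(len(topics))  # two topic clusters emerge
```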

3.3.3. T-BiLSTM-Based Comment Sentiment Analysis

Since BiLSTM [34] has been proven efficient for sentiment analysis, we propose the T-BiLSTM network to train a text sentiment classifier. Figure 7 illustrates the structure of T-BiLSTM. On the right side, we employ a BiLSTM layer to capture the contextual features of the text. On the left side, we adopt an LSTM layer to explore the contextual features of the topic. Next, we concatenate the outputs of both sides and feed the result into a softmax layer, i.e., y = softmax(W [h_text; h_topic] + b), where W and b are the weight matrix and bias, respectively. In addition, we use the cross-entropy loss to guide network training.

3.3.4. Evaluation of Indicator C1.1

The public opinion sentiment for each topic is defined over the topic's negative comments.

Here, S_t is the number of negative comments in topic t, and θ is the threshold used to test whether a topic is discussed widely. The evaluation of indicator C1.1 is then calculated by aggregating the per-topic sentiment, where n_t is the count of texts in topic t and N is the total number of texts in the case.
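The evaluation of C1.1 can be sketched as follows. Since the exact formula is not fully specified here, the code encodes one plausible reading of the prose (an assumption, not the authors' formula): each widely discussed topic contributes its negative-comment ratio, weighted by the topic's share of all texts.

```python
def c11_value(topics, threshold):
    """topics: list of (n_texts, n_negative) per topic; threshold: min topic size."""
    total = sum(n for n, _ in topics)
    score = 0.0
    for n, neg in topics:
        if n >= threshold:                      # topic is discussed widely enough
            score += (n / total) * (neg / n)    # topic share times negative ratio
    return score

# Two hypothetical topics: 80 texts (60 negative) and 20 texts (2 negative).
print(round(c11_value([(80, 60), (20, 2)], threshold=10), 3))
```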

3.4. Risk Assessment on Public Opinion for Trial Management

In this section, we describe the fuzzy number similarity-based risk assessment module, which evaluates the risk level in the nine-level risk system. At first, the risk indicator evaluations discussed in Section 3.3 are converted into fuzzy numbers.

Here, each evaluation is mapped to a linguistic term in LT, and each term corresponds to a generalized fuzzy number defined in Table 2. Since the risk of public opinion has various indicators, the risk assessment module aggregates the risk of each indicator by the weighted average method.

As Figure 8 shows, the selected method's similarity drops smoothly as the distance increases, compared with the other algorithms. The risk level is determined as the linguistic term whose fuzzy number is most similar to the aggregated one.
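The aggregation and ranking steps can be sketched as follows. The simple 1 minus mean-vertex-distance similarity below is only a stand-in for the similarity measure of [26]; the term table comes from Table 2.

```python
# Trapezoidal fuzzy numbers of the nine-level system (Table 2, height 1.0).
TERMS = {
    "AbsolutelyLow": (0.0, 0.0, 0.0, 0.0), "VeryLow": (0.0, 0.0, 0.02, 0.07),
    "Low": (0.04, 0.1, 0.18, 0.23), "FairlyLow": (0.17, 0.22, 0.36, 0.42),
    "Medium": (0.32, 0.41, 0.58, 0.65), "FairlyHigh": (0.58, 0.63, 0.80, 0.86),
    "High": (0.72, 0.78, 0.92, 0.97), "VeryHigh": (0.93, 0.98, 1.0, 1.0),
    "AbsolutelyHigh": (1.0, 1.0, 1.0, 1.0),
}

def aggregate(indicators):
    """Weighted average of fuzzy numbers; indicators: (weight, fuzzy) pairs."""
    wsum = sum(w for w, _ in indicators)
    return tuple(sum(w * f[k] for w, f in indicators) / wsum for k in range(4))

def risk_level(fuzzy):
    """Map a fuzzy number to the most similar nine-level linguistic term."""
    sim = lambda a, b: 1 - sum(abs(x - y) for x, y in zip(a, b)) / 4
    return max(TERMS, key=lambda t: sim(TERMS[t], fuzzy))

# Three hypothetical indicators with impact factors 0.5, 0.3, and 0.2.
agg = aggregate([(0.5, TERMS["Medium"]), (0.3, TERMS["FairlyLow"]),
                 (0.2, TERMS["Low"])])
print(risk_level(agg))
```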

4. Experiment

In this section, we discuss the results of the three experiments: (A) efficiency of algorithm RApriori, (B) efficiency of the classifier T-BiLSTM, and (C) the case study of the whole framework RAPOT.

4.1. Efficiency of RApriori

To validate the efficiency of RApriori, we compare it with the classical Apriori and FP-Growth. There are three subexperiments in this section: (a) time costs with different rule lengths, (b) time costs with different rule counts, and (c) time costs with different datasets. We carry out these experiments on simulation datasets generated with the parameters shown in Table 3. In experiment (a), we employ Apriori, FP-Growth, and RApriori to work out rules of different lengths. Figure 10 shows that the time costs of Apriori and FP-Growth increase sharply with longer rules. In experiment (b), we compare the three methods on different counts of rules. Figure 11 illustrates that our method's time cost grows more slowly than the other methods'. In experiment (c), we run the three algorithms on three datasets of different sizes. Figure 12 shows that our method is more efficient than Apriori and FP-Growth while tolerating data explosion.


Parameters | Value

Count of process nodes | 80
Count of confirm nodes | 80
Error rate | 0.15
Confirm rate | 0.15

4.2. Efficiency of T-BiLSTM

We train the classifier for public opinion sentiment analysis on a dataset containing 18,000 positive and 18,000 negative comments from Weibo. The validation set has 3,600 positive and 3,600 negative items. In addition, we compare the T-BiLSTM-based sentiment classifier with KNN, maximum entropy, Bayes, SVM, and the traditional BiLSTM. We adopt accuracy, positive-precision, positive-recall, and Macro-F1 as the evaluation metrics. Accuracy is the number of correct predictions divided by the total number of valid samples. TP and FP denote the numbers of samples predicted as "Positive" that are correct and incorrect, respectively; TN and FN are defined analogously for the "Negative" class. Macro-F1 is defined as the average of the F1 scores of the positive and negative classes and is used to evaluate the efficiency of each classifier comprehensively. Table 4 shows the comparison results, and we can see that our T-BiLSTM exceeds the other methods.


Classifier | Acc | Precision | Recall | Macro-F1

T-BiLSTM | 0.88 | 0.90 | 0.85 | 0.88
BiLSTM | 0.87 | 0.88 | 0.86 | 0.87
ME | 0.81 | 0.82 | 0.79 | 0.81
Bayes | 0.84 | 0.81 | 0.89 | 0.84
KNN | 0.73 | 0.67 | 0.90 | 0.72
SVM | 0.80 | 0.79 | 0.82 | 0.80
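The metrics above reduce to standard confusion-matrix counts; a quick sketch using TP/FP/FN/TN for the "Positive" class:

```python
def metrics(tp, fp, fn, tn):
    """Accuracy, positive precision/recall, and Macro-F1 from confusion counts."""
    acc = (tp + tn) / (tp + fp + fn + tn)
    p_pos, r_pos = tp / (tp + fp), tp / (tp + fn)   # "Positive" class
    p_neg, r_neg = tn / (tn + fn), tn / (tn + fp)   # "Negative" class
    f1 = lambda p, r: 2 * p * r / (p + r)
    macro_f1 = (f1(p_pos, r_pos) + f1(p_neg, r_neg)) / 2
    return acc, p_pos, r_pos, macro_f1

# Hypothetical confusion counts for a balanced 200-sample validation set.
acc, p_pos, r_pos, macro_f1 = metrics(tp=90, fp=10, fn=15, tn=85)
print(round(acc, 3))  # 0.875
```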

4.3. Case Study of RAPOT

In this section, we evaluate the efficiency and applicability of RAPOT with a case study. It includes three sets of short texts corresponding to three cases; the sizes of the three sets are 764, 306, and 156. At first, the risk indicator model of RAPOT is shown in Figure 3. There are nine indicators covering the aspects of the case, the related parties, and the judge. Then, we figure out the indicator values for each case, and the mapped linguistic terms are shown in Table 5. In the next step, the linguistic terms are turned into the corresponding fuzzy numbers. Then, the impact factors and evaluations of the indicators are aggregated into one fuzzy number for each case. Finally, we compute the fuzzy number similarities to figure out the risk level.


Indicators | Impact factors | Case 1 | Case 2 | Case 3

C1.1 | VL | VL | FH | FH
C1.2 | H | AL | AL | AL
C1.3 | H | M | L | VL
C1.4 | M | AL | AH | AH
C2.1 | H | AH | AH | AL
C2.2 | M | AH | AH | AL
C3.1 | FH | AL | AH | AL
C3.2 | M | AL | AL | AL
C3.3 | L | AL | AL | AL

Table 6 lists the similarities. Therefore, the POR of case 1 is fairly low, the POR of case 2 is medium, and the POR of case 3 is low. Combined with Table 5, case 3 has the least heat. Meanwhile, the judge and the parties have no unique identities. Even though the case type is at high risk, without hot discussion, the POR is low. As for case 1, the public opinion is quite positive, so the risk assessment result is "FairlyLow". Referring to case 2, one of the related parties has a unique identity and has attracted much attention on social media. Nevertheless, media coverage is low, which illustrates that the issue has not become widespread yet. As we can see, RAPOT recognizes the POR successfully and distinguishes the three cases in risk measurement. To validate our framework's efficiency, we compare five similarity measure algorithms. As shown in Figure 9, the selected method's output agrees with the majority, without outliers.


Risk level | Case 1 | Case 2 | Case 3

Absolutely low | 0.4970 | 0.3506 | 0.5813
Very low | 0.5229 | 0.3412 | 0.6386
Low | 0.6917 | 0.4318 | 0.8592
Fairly low | 0.9146 | 0.5872 | 0.7900
Medium | 0.6866 | 0.7997 | 0.5561
Fairly high | 0.4700 | 0.7347 | 0.3880
High | 0.3895 | 0.5955 | 0.3246
Very high | 0.3185 | 0.4770 | 0.2712
Absolutely high | 0.3108 | 0.4881 | 0.2616

5. Conclusion

The accurate and fine-grained risk assessment of public opinion in the trial procedure is crucial for refined trial management. The framework proposed in this paper provides an objective and efficient assessment of POR in the trial without requiring a large amount of historical data, which is quite scarce, and we propose T-BiLSTM to analyze public opinion sentiment based on topics. The method is more comprehensive than the traditional BiLSTM in practice. The risk assessment framework for POR consists of three modules: (1) an adaptive multifactor indicator model for POR assessment, (2) an indicator evaluation module with accurate public opinion analysis, and (3) an objective risk ranking module. The experimental results show the efficiency and practicability of our framework. In the future, we will further exploit the considerable amount of process logs in the TPMS to improve our indicator model's adaptability and robustness.

Data Availability

The dataset used to support the findings of this study is available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The authors gratefully acknowledge the support of the National Key R&D Program of China under grant No. 2018YFC0830500.


Copyright © 2021 Weina Jiang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
