Abstract

Since public opinion on social media has a growing impact on trials and their supervision, risk assessment of public opinion is increasingly important in refined trial management. However, the tremendous amount of public opinion and the insufficient historical logs of trial procedures pose challenges to such assessment. To address this, we propose an adaptive multifactor risk assessment framework for public opinion based on fuzzy numbers. First, we establish a multilayer indicator model for assessing the public opinion risk (POR) with multilayer analysis and decision methods. Then, we mine the association rules hidden in the process logs to update the indicator model periodically. Moreover, we design a public opinion analysis module for indicator evaluation, covering sentiment analysis, hot-search analysis, and media-coverage analysis, to cope with the big data on social media. In particular, public opinion sentiment is classified by a topic-based BiLSTM (T-BiLSTM), which improves classification accuracy. Finally, fuzzy number similarity is employed to determine the POR level within a nine-level risk system. Experimental results validate the efficiency of our framework in assessing the POR.

1. Introduction

Serious and complicated cases bring severe challenges to trial management nowadays. Some of them attract much attention owing to their case type, the parties involved, or well-known judges. Meanwhile, people are used to expressing their opinions on cases of concern via platforms such as Facebook, WeChat, Weibo, and Twitter. This mass of public opinion has both positive and negative impacts on trial procedures. Hence, public opinion assessment and supervision are crucial for credible trials. Public opinion on social media has characteristic properties, such as its massive volume, fast propagation, and chaotic content. Furthermore, this mass of social media data carries the inherent information we care about: after analyzing multisource public opinion comprehensively, we can figure out its propagation mode and issue POR warnings earlier. Therefore, POR assessment helps courts respond early to negative public opinion and improves their initiative. Accomplishing it involves two main tasks: one is to handle public opinion with big data techniques, and the other is to conduct risk assessment with insufficient historical data.

For the explosive number of comments that emerge on social media, sentiment analysis has become a research hotspot. Sentiment analysis of comments about hot cases plays a vital role in promoting trial management, so it is crucial to develop efficient analysis and supervision methods for such comments. Research on machine learning-based sentiment analysis has yielded many achievements, such as KNN [1], maximum entropy [2], SVM [3], and Bayes [4]. Nowadays, with the rapid development and outstanding performance of deep learning, many researchers concentrate on methods based on CNN [5], RNN [6], and LSTM [7] to improve classification accuracy and have made significant progress.

For risk assessment, due to insufficient historical data together with the fuzziness and uncertainty of risks, researchers adopt fuzzy set theory to analyze risk [8]. Singh et al. [9] propose an assessment framework for the risk analysis of food disasters based on fuzzy similarity, in which the risk level of each target is calculated quantitatively. Such fuzzy similarity-based methods lend themselves well to quantitative risk assessment for trial cases.

However, there still exist several challenges in achieving POR assessment. First, there is no suitable indicator model for this task; an efficient assessment relies on fine-grained indicators with objective weights, and this remains unsolved. Second, comments about cases under trial on social media have many characteristics that make them hard to analyze, so much work remains to ensure the accuracy of sentiment classification for this specific use. Third, evaluating risks quantitatively is crucial but not easy.

To address these issues, this paper implements a Risk Assessment framework on Public Opinion for Trial management (RAPOT). The framework provides a fine-grained risk assessment based on fuzzy numbers: by computing fuzzy number similarities, it decides the risk level within a nine-level assessment system. Our main contributions are as follows:
(i) Fine-Grained Risk Rating System. We employ fuzzy number similarities to achieve risk assessment with little historical data in trial procedure management. First, a multilayer risk indicator model is established based on the analytic hierarchy process (AHP) and the extended technique for order preference by similarity to an ideal solution (extended TOPSIS). The model contains a fine-grained indicator layer, in which each element comprises a risk indicator and its impact factor. When assessing the risks, we transform both impact factors and indicator values into fuzzy numbers, aggregate them into a single fuzzy number, and rank the integrated number within the nine-level assessment system.
(ii) Adaptive Indicator Model. Considering that the system logs accumulated during trial processing contain many latent association rules of the procedures, we propose the RApriori algorithm to mine these rules. The discovered rules are fed back into the indicator model to improve its applicability and robustness.
(iii) Efficient Comment Sentiment Analysis. We define three kinds of input sources and submodules for indicator evaluation. Notably, the sentiment of public opinion is classified per topic: our sentiment analysis consists of single-pass-based topic clustering and T-BiLSTM-based sentiment classification, which makes the analysis more precise and comprehensive. Besides, our framework has extensive indicators such as a topic's heat and media coverage.
(iv) Experimental Evaluations. To demonstrate the performance of RAPOT, we conduct a case study with three cases that have recently attracted much attention. The results illustrate that our framework is applicable and efficient in practical cases and yields reasonable assessment levels.

The rest of this paper is structured as follows. We talk about the related work in Section 2. The RAPOT framework is described in Section 3. In Section 4, we illustrate the experimental results, and we conclude the paper in Section 5.

2. Related Work

Due to the fuzziness and uncertainty of risks, researchers adopt fuzzy set theory to analyze risk. The theory of fuzzy numbers has been widely applied in risk analysis [10], approximate reasoning [11], and risk pattern recognition [12]. For risk analysis, the existing methods can be divided into fuzzy ranking-based [13], fuzzy inference-based [14, 15], fuzzy matrix-based [16, 17], and fuzzy number similarity-based [9] risk assessment models. Zhang et al. [13] identify risky areas with a water security evaluation framework by comparing the risks of related areas; such qualitative analysis measures risk levels only comparatively, whereas in the trial setting the PORs of two cases cannot be compared on the same footing. Karasan et al. [14] propose the safety and critical effect analysis (SCEA), which adopts Pythagorean fuzzy sets [18] to provide a comprehensive risk assessment. However, fuzzy inference-based methods are usually used in industry and are not suitable for trial applications. Can et al. [16] present a three-stage fuzzy risk matrix-based risk assessment that dynamically combines multicriteria decision-making with fuzzy logic. Though fuzzy matrix-based methods can reduce risk ties [19] efficiently, they still provide a qualitative assessment that is not precise enough. As for similarity-based methods, Khorshidi and Nikfalazar [20] present an improved method to compute the degree of similarity between generalized fuzzy numbers; it has been used for fuzzy risk analysis and can determine each manufacturer's risk level. In summary, the similarity-based model is suitable for quantitative risk assessment of an individual object. At the same time, risks in the trial process management system (TPMS) are in fact quite fuzzy and uncertain, and the historical data have not been well digitalized. Therefore, we adopt a fuzzy number similarity-based model to achieve risk assessment.

Existing fuzzy number similarity-based methods usually comprise three main modules: the risk indicator model, risk aggregation, and risk level determination. Among them, the fuzzy number similarity calculation is key to determining the risk level precisely. For fuzzy number similarities, researchers have defined various features of generalized fuzzy numbers (GFNs) to distinguish the numbers, such as the center of gravity (COG) [21], the area [20], and the radius of gyration (ROG) [22]. They then adopt the geometric distance, the Hausdorff distance [23], and so on to measure the similarity of the feature values. Xu et al. [24] present a COG-based method, but it has the limitation that two different fuzzy numbers may share the same COG. To address this, Yong et al. [25] employ the ROG of the area to measure the similarities. Moreover, Chutia and Gogoi [10] extend GFNs with left and right heights to further distinguish traditional GFNs with the same COG. However, these two methods still suffer from invalid results. Therefore, we select a similarity measure on generalized fuzzy numbers to map the integrated fuzzy number onto a linguistic term in the nine-level risk system [26]. The similarity measure we employ constrains the similarity of two fuzzy numbers to the range [0, 1], produces fewer invalid results, and at the same time has high distinguishability.

3. Framework of Risk Assessment of Public Opinion

In this section, we discuss the critical issues in assessing the POR. First, we present the risk indicator model in Sections 3.1 and 3.2. Second, we describe the evaluation of risk indicators and the public opinion sentiment analysis in Section 3.3. Then, we explain the fuzzy number similarity-based risk assessment method in Section 3.4.

Figure 1 shows the framework of RAPOT. It includes a risk indicator model, an indicator evaluation module, and a risk aggregation module. In the beginning, a multilayer indicator model is built to define the fine-grained risk indicators with their corresponding impact factors, and the model is dynamically updated by mining new association rules from the system logs. Then, the indicator evaluation module computes the indicator values based on process data, data from the public opinion analysis module, and the other systems. The indicator aggregation module decides the risk level from the impact factors and the risk probabilities.

3.1. Risk Indicator Model Initialization

To overcome the difficulty of lacking historical data, we employ AHP and extended TOPSIS to construct an initial risk indicator model. The hierarchical model defines a set of risk indicators along with their impact factors. Figure 2 describes the procedure for building the indicator model. First, a hierarchy structure is built and an evaluation dataset for the risk indicators is collected based on AHP. Then, an evaluation matrix for the risk indicators is constructed from the collected dataset, and extended TOPSIS is applied to this matrix to calculate the impact factors of the indicators. This construction method combines AHP and extended TOPSIS to work out a group of accurate impact factors with limited historical data.

3.1.1. Hierarchical Structure Determination

AHP is an efficient multilayer analysis and decision method [27, 28]. It first decomposes the decision problem into a hierarchy of subproblems, each of which can be treated independently. Once the hierarchy is built, the expert group evaluates the elements in the same layer by comparing them to each other according to their impact on the parent element. Table 1 shows the 1-9 scale used to evaluate each element's impact factor. AHP converts the evaluations into numerical values that can be compared over the decision problem's entire range. Finally, a priority is derived for each element in the hierarchy by iteratively verifying the comparison matrix's consistency and adjusting the priorities each time.

First, referring to expertise, existing laws and regulations, and classic hot cases, we form the set of risks as $R = \{r_1, r_2, \ldots, r_n\}$, where $n$ is the number of risks. Then, the hierarchical structure is established based on AHP. As shown in Figure 3, our risk indicator model consists of three layers:
(i) Objective Layer (OL). Risk assessment of public opinion for trial management is the objective of our work; we need to figure out the impact of public opinion on the trial procedure.
(ii) Criteria Layer (CL). The elements in this layer are the judge, the parties involved, the case, and the public opinion. The expert group defines these elements with reference to the existing documents.
(iii) Indicator Layer (IL). This layer contains the indicators through which public opinion can impact the trial procedure. Each indicator belongs to its parent element in the criteria layer.

After that, an evaluation dataset is collected to obtain the indicators' impact factors; the impact factor represents the indicator's weight when integrating the POR. To evaluate the impact factor accurately, the expert compares the risk indicators in pairs to complete a comparison matrix $A = (a_{ij})_{n \times n}$, where $a_{ij}$ is the comparison value of $r_i$ against $r_j$, assigned by the expert according to Table 1. Then, the consistency of $A$ is verified with the consistency index $CI = (\lambda_{\max} - n)/(n - 1)$, where $\lambda_{\max}$ is the maximum eigenvalue of $A$ and $n$ is the dimension of the matrix. The consistency is perfect when $CI = 0$ and worsens as $CI$ increases. AHP then uses a random consistency indicator $RI$ to define the refined consistency ratio $CR = CI/RI$.

When $CR < 0.1$, the matrix is considered consistent; $RI$ is taken from a predefined dictionary [29]. If the validation fails, the expert has to adjust the comparison matrix until the validation succeeds.
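To make the consistency check concrete, the following is a minimal NumPy sketch (not the authors' implementation) that derives the priority vector and verifies $CR < 0.1$; the RI lookup table uses the commonly published Saaty values, which we assume here rather than take from [29].

```python
import numpy as np

# Random consistency indices for dimensions 1..9 (commonly published Saaty
# values; assumed here, not taken from the paper's dictionary [29]).
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_priorities(A, threshold=0.1):
    """Return the normalized priority (impact factor) vector of comparison
    matrix A if it passes the consistency check, otherwise None."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = int(np.argmax(eigvals.real))
    lambda_max = eigvals.real[k]              # maximum eigenvalue
    ci = (lambda_max - n) / (n - 1)           # consistency index CI
    cr = ci / RI[n]                           # consistency ratio CR = CI / RI
    if cr >= threshold:
        return None                           # expert must adjust the matrix
    w = np.abs(eigvecs[:, k].real)
    return w / w.sum()

# A hypothetical 3x3 comparison matrix built from the 1-9 scale in Table 1.
print(ahp_priorities([[1, 3, 5], [1/3, 1, 3], [1/5, 1/3, 1]]))
```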

The eigenvector of the approved evaluation matrix gives a ranking of the risk indicators by their impact factors. For risk assessment with fuzzy numbers, the expert assigns a linguistic term from LT = {"AbsolutelyLow (AL)", "VeryLow (VL)", "Low (L)", "FairlyLow (FL)", "Medium (M)", "FairlyHigh (FH)", "High (H)", "VeryHigh (VH)", "AbsoluteHigh (AH)"} to each risk indicator based on this ranking.

3.1.2. Impact Factor Calculation

Several law experts evaluate the impact factors according to our hierarchical structure and construct an evaluation dataset. The dataset consists of the evaluation items given by the experts; each item comes from one law expert for one risk indicator in the set $R$. Then, we employ TOPSIS to aggregate the evaluations of the different experts. TOPSIS is a multicriteria decision analysis method which identifies weights by calculating the geometric distances from each alternative to the positive ideal solution and the negative ideal solution, respectively [30]. When evaluating a risk indicator's impact factor, the positive ideal solution is defined as the lowest impact in terms of cost optimization; namely, a lower impact of the risk indicator incurs less cost in risk prevention and control. Hence, we adopt the extended TOPSIS [31] to calculate the impact factors for the POR assessment designed for the trial scene.

First, an evaluation matrix with linguistic terms $X = (x_{ij})_{m \times n}$ is established from the dataset, where $m$ and $n$ are the numbers of experts and risk indicators, respectively. In the matrix, $x_{ij}$ is the linguistic term given by expert $i$ for indicator $r_j$ to measure the indicator's importance. Each $x_{ij}$ is then transformed into a fuzzy number $\tilde{x}_{ij}$ according to Table 2 for the fusion of impact weights, which yields an evaluation matrix $\tilde{X} = (\tilde{x}_{ij})_{m \times n}$ of fuzzy numbers. Here, $\tilde{x}_{ij}$ is a generalized fuzzy number represented as $\tilde{x}_{ij} = (a_{ij}^{(1)}, a_{ij}^{(2)}, a_{ij}^{(3)}, a_{ij}^{(4)}; w_{ij})$ with $0 < w_{ij} \le 1$.

In the extended TOPSIS, the positive ideal solution $\tilde{A}^{+} = (\tilde{v}_1^{+}, \tilde{v}_2^{+}, \ldots, \tilde{v}_n^{+})$ and the negative ideal solution $\tilde{A}^{-} = (\tilde{v}_1^{-}, \tilde{v}_2^{-}, \ldots, \tilde{v}_n^{-})$ are defined per indicator: $\tilde{v}_j^{+}$ and $\tilde{v}_j^{-}$ are the fuzzy evaluations of indicator $r_j$ with the lowest and the highest impact, respectively.

Then, the distance between expert $i$'s evaluations and the positive ideal solution is calculated as $d_i^{+} = \sum_{j=1}^{n} d(\tilde{x}_{ij}, \tilde{v}_j^{+})$.

Similarly, the geometric distance between expert $i$'s evaluations and the negative ideal solution is $d_i^{-} = \sum_{j=1}^{n} d(\tilde{x}_{ij}, \tilde{v}_j^{-})$; here, $\tilde{x}_{ij}$, $\tilde{v}_j^{+}$, and $\tilde{v}_j^{-}$ are generalized fuzzy numbers. After that, we obtain the weight of each alternative by normalizing the distance ratios as $\omega_i = d_i^{-}/(d_i^{+} + d_i^{-})$.

Finally, the impact factor of indicator $r_j$ is calculated by the weighted sum over the alternatives as $\beta_j = \sum_{i=1}^{m} \omega_i \, \tilde{x}_{ij}$.
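To illustrate this expert aggregation, the sketch below assumes trapezoidal generalized fuzzy numbers stored as (a1, a2, a3, a4, w), a simple vertex distance between them, and defuzzification by the mean of the four vertices; the distance, the min/max choice of ideal solutions, and the final defuzzified weighted sum are illustrative assumptions rather than the paper's exact formulas.

```python
import numpy as np

def gfn_distance(x, y):
    """Illustrative vertex distance between two generalized fuzzy numbers."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.sqrt(np.mean((x[:4] - y[:4]) ** 2)) + abs(x[4] - y[4]))

def extended_topsis(fuzzy_matrix):
    """fuzzy_matrix[i][j]: expert i's fuzzy evaluation of indicator j.
    Returns one weight per expert from the normalized distance ratios."""
    n = len(fuzzy_matrix[0])
    # Positive ideal = lowest impact (cost view), negative ideal = highest.
    pos = [min((row[j] for row in fuzzy_matrix), key=lambda g: sum(g[:4])) for j in range(n)]
    neg = [max((row[j] for row in fuzzy_matrix), key=lambda g: sum(g[:4])) for j in range(n)]
    d_pos = [sum(gfn_distance(row[j], pos[j]) for j in range(n)) for row in fuzzy_matrix]
    d_neg = [sum(gfn_distance(row[j], neg[j]) for j in range(n)) for row in fuzzy_matrix]
    w = [dn / (dp + dn + 1e-12) for dp, dn in zip(d_pos, d_neg)]
    return [x / sum(w) for x in w]

def impact_factors(fuzzy_matrix, expert_weights):
    """Expert-weighted average of defuzzified evaluations per indicator."""
    defuzz = lambda g: sum(g[:4]) / 4.0 * g[4]
    n = len(fuzzy_matrix[0])
    return [sum(w * defuzz(row[j]) for w, row in zip(expert_weights, fuzzy_matrix))
            for j in range(n)]

experts = [  # two hypothetical experts rating two indicators
    [(0.3, 0.4, 0.5, 0.6, 1.0), (0.7, 0.8, 0.9, 1.0, 1.0)],
    [(0.2, 0.3, 0.4, 0.5, 0.9), (0.6, 0.7, 0.8, 0.9, 1.0)],
]
print(impact_factors(experts, extended_topsis(experts)))
```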

3.2. Risk Indicator Model Update

Considering that the trial process is strict and complicated, the initial indicator model of POR can hardly remain applicable to POR assessment continuously. Moreover, the system logs accumulated during trial processing contain many latent association rules of the procedures. Figure 4 shows a fragment of the trial process, in which each block is a process node and each ellipse represents a risk confirmation. Therefore, we propose a reversed Apriori (RApriori) algorithm to mine the association rules hidden in the system logs. The association rule we search for is of the form $p_i \Rightarrow c_j$, where $p_i$ represents a failed rule check at a process node and $c_j$ is a risk confirmation node. By investigating the practical TPMS, we find that the process nodes are arranged in a single sequence. Accordingly, we optimize the classical Apriori algorithm by ordering the nodes and extending the association sets in reverse. The details of the proposed RApriori are shown in Algorithm 1.

Algorithm 1: RApriori.
Require: system logs generated from T1 to T2.
Ensure: association rules.
(The pseudocode iterates over the frequent risk confirmation nodes; for each one, it builds a reversely sorted candidate list of frequent process nodes, joins the candidates with the confirmation node, and repeatedly extends the candidate sets layer by layer until the candidate set is empty, finally emitting the association rules.)

In the algorithm, we assign numerical codes to both process nodes and risk confirmation nodes based on their sequence in the trial. Firstly, the search for latent association rules always starts from a frequent risk confirmation node $c_j$, which is set as the root of the tree shown in Figure 5. Secondly, the frequent process nodes whose numerical codes are less than that of $c_j$ are reversely sorted into a candidate list. Thirdly, we join each item $p_i$ in the list with $c_j$ to form a set, such as $\{p_i, c_j\}$, and then check the corresponding support score to create layer 2. The support score of a set $S$ is defined as $support(S) = count(S)/N$, where $count(S)$ is the number of logs containing $S$ and $N$ is the total number of logs. Fourthly, the tree moves to the next layer by combining, in order, a set in the current layer with candidate items whose codes are less than the minimum node already in the set. The height of the tree is increased iteratively until no new set satisfies the support requirement. At last, we compute the confidence of each satisfied set to work out the association rules, defined as $confidence(P \Rightarrow c_j) = support(P \cup \{c_j\})/support(P)$, where $P$ is the set of process nodes in the rule's antecedent.
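Since the original pseudocode of Algorithm 1 is not reproduced here, the following is a minimal Python sketch of the reversed search described above, assuming each log is a set of integer-coded node IDs and that rules take the form process-node set ⇒ confirmation node; the thresholds and data layout are illustrative.

```python
def support(itemset, logs):
    """Fraction of logs that contain every node in itemset."""
    itemset = set(itemset)
    return sum(itemset <= log for log in logs) / len(logs)

def rapriori(logs, confirm_nodes, min_support=0.1, min_conf=0.6):
    """Reversed Apriori sketch: grow antecedents backwards from each frequent
    risk confirmation node c. logs is a list of sets of integer node codes."""
    rules = []
    for c in confirm_nodes:
        if support({c}, logs) < min_support:
            continue
        # frequent process nodes preceding c, reversely (descending) sorted
        candidates = sorted({p for log in logs for p in log
                             if p < c and support({p}, logs) >= min_support},
                            reverse=True)
        layer = [frozenset({c, p}) for p in candidates
                 if support({c, p}, logs) >= min_support]
        while layer:
            next_layer = []
            for s in layer:
                antecedent = s - {c}
                conf = support(s, logs) / support(antecedent, logs)
                if conf >= min_conf:
                    rules.append((set(antecedent), c, conf))
                # extend only with candidates smaller than the minimum node in s
                for p in candidates:
                    if p < min(antecedent):
                        new = s | {p}
                        if support(new, logs) >= min_support:
                            next_layer.append(new)
            layer = next_layer
    return rules

logs = [{1, 3, 7, 9}, {1, 3, 9}, {2, 3, 9}, {1, 7}]   # hypothetical coded logs
print(rapriori(logs, confirm_nodes=[9]))
```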

The RApriori method is executed regularly, and the discovered association rules are used to update the POR indicator model. The experimental results show that our algorithm decreases the computational complexity significantly.

3.3. Risk Indicator Evaluation and Public Opinion Analysis

Besides the impact factor, we have to calculate the probability of each indicator's occurrence, which we call the indicator value. The data sources for value computation fall into three categories: (1) social media, (2) manual input, and (3) document analysis. For indicator C3.1, the judge can report the POR during the trial. As for C1.2, C1.4, C2.1, and C3.3, the indicator values are determined by the other subsystems in the TPMS, for instance, the case division system. Apart from them, the values of indicators C1.1, C1.3, C2.2, and C3.2 are inferred by the social media analysis module. Figure 6 illustrates the structure of our module for social media analysis. It is composed of the following three parts:
(i) Analysis of Public Opinion Sentiment. This part explores how interested people are in the case and how intensely they discuss the related topics. If the public cares much about the case and expresses negative sentiment, the indicator value will be large; otherwise, the indicator value will be close to zero.
(ii) Analysis of Hot Searches. Frequent searches for the judge or the parties on social media are an important signal that the case may carry POR during the trial.
(iii) Analysis of Media Coverage. If media outlets on our maintained important-media list have taken part in the related topic, the case's media coverage increases. The POR level rises once the coverage reaches a threshold.

In this section, we mainly describe the topic-based public opinion sentiment analysis. The comments collected from social media that relate to the case are first divided into topics. Then, the texts together with their related topics are fed into a neural network to train a classifier for sentiment analysis. The details are as follows.

3.3.1. Input Embedding

Firstly, a short text is split into a word sequence $\{w_1, w_2, \ldots, w_l\}$ which contains $l$ words. After that, we transform the words into vectors with a Word2vec model [32] and obtain the embedding matrix $M_t$, which consists of all word embeddings of the text.
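As a hedged illustration of this step, the snippet below uses gensim's Word2Vec together with the jieba tokenizer to build an embedding matrix for a short comment; the tokenizer choice and the model hyperparameters are assumptions, not the configuration reported in [32] or in this paper.

```python
import numpy as np
import jieba
from gensim.models import Word2Vec

# Hypothetical corpus of short comments collected from social media.
corpus = ["这个案件的判决非常公正", "对这个判决结果感到非常失望"]
tokenized = [list(jieba.cut(text)) for text in corpus]

# Train (or load) a Word2vec model; vector_size and window are illustrative.
w2v = Word2Vec(sentences=tokenized, vector_size=100, window=5, min_count=1)

def embedding_matrix(words, model):
    """Stack the word vectors of a text into a (num_words, dim) matrix."""
    return np.stack([model.wv[w] for w in words if w in model.wv])

M = embedding_matrix(tokenized[0], w2v)
print(M.shape)
```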

3.3.2. Topic Clustering

Single-pass clustering [33] with cosine similarity is employed to iteratively partition the short texts into clusters; the topics can be represented as $T = \{T_1, T_2, \ldots\}$, where each $T_k$ is a set of keywords. The similarity between two short texts is calculated as $sim(t_1, t_2) = \frac{v_1 \cdot v_2}{\|v_1\|\,\|v_2\|}$, where $v_1$ and $v_2$ are the vectors of the two short texts. Then, the keywords detected in each cluster are taken as its topic. Moreover, through word embedding, we get the embedding matrix $M_T$, which contains all keyword embeddings of a topic.
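A minimal single-pass clustering sketch follows, assuming every short text has already been reduced to a single dense vector (e.g., the mean of its word embeddings); the similarity threshold is an assumed parameter.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def single_pass(vectors, threshold=0.7):
    """Assign each text vector to the most similar existing cluster centroid,
    or open a new cluster when no similarity reaches the threshold."""
    centroids, clusters = [], []
    for i, v in enumerate(vectors):
        v = np.asarray(v, dtype=float)
        sims = [cosine(v, c) for c in centroids]
        if sims and max(sims) >= threshold:
            k = int(np.argmax(sims))
            clusters[k].append(i)
            # keep the centroid as the running mean of its member vectors
            centroids[k] = centroids[k] + (v - centroids[k]) / len(clusters[k])
        else:
            centroids.append(v)
            clusters.append([i])
    return clusters

print(single_pass(np.random.rand(10, 100)))
```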

3.3.3. T-BiLSTM-Based Comment Sentiment Analysis

Since BiLSTM [34] has been proven efficient for sentiment analysis, we propose the T-BiLSTM network to train a text sentiment classifier. Figure 7 illustrates the structure of the T-BiLSTM. On the right side, we employ a BiLSTM layer to capture the contextual features of the text. On the left side, we adopt an LSTM layer to explore the contextual features of the topic. Next, we concatenate the outputs of both sides, $h_{text}$ and $h_{topic}$, and feed them into a softmax layer, which can be written as $y = \mathrm{softmax}(W[h_{text}; h_{topic}] + b)$, where $W$ and $b$ are the weight matrix and bias, respectively. In addition, we use the cross-entropy loss to guide the network training.
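The PyTorch sketch below mirrors the two-branch structure described above: a BiLSTM over the comment embeddings, an LSTM over the topic-keyword embeddings, concatenation, and a softmax output trained with cross-entropy; the hidden sizes and the use of the last time step as the contextual feature are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TBiLSTM(nn.Module):
    def __init__(self, embed_dim=100, hidden=128, num_classes=2):
        super().__init__()
        # Right branch: BiLSTM over the comment's word embeddings.
        self.text_lstm = nn.LSTM(embed_dim, hidden, batch_first=True,
                                 bidirectional=True)
        # Left branch: LSTM over the topic's keyword embeddings.
        self.topic_lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.fc = nn.Linear(2 * hidden + hidden, num_classes)

    def forward(self, text_emb, topic_emb):
        # text_emb: (batch, text_len, embed_dim); topic_emb: (batch, topic_len, embed_dim)
        text_out, _ = self.text_lstm(text_emb)
        topic_out, _ = self.topic_lstm(topic_emb)
        # concatenate the last-step features of both branches
        h = torch.cat([text_out[:, -1, :], topic_out[:, -1, :]], dim=-1)
        return torch.log_softmax(self.fc(h), dim=-1)

model = TBiLSTM()
loss_fn = nn.NLLLoss()   # cross-entropy on the log-softmax outputs
logits = model(torch.randn(4, 30, 100), torch.randn(4, 8, 100))
loss = loss_fn(logits, torch.tensor([0, 1, 1, 0]))
```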

3.3.4. Evaluation of Indicator C1.1

The public opinion sentiment for topic $T_k$ is defined as $s_k = n_k^{-}/n_k$ if $n_k > \theta$, and $s_k = 0$ otherwise.

Here, $n_k^{-}$ is the number of negative comments in topic $T_k$, and $\theta$ is the threshold used to test whether a topic is discussed widely. The evaluation of indicator C1.1 is then calculated as $v_{C1.1} = \frac{\sum_k s_k \, n_k}{N}$, where $n_k$ is the count of texts in topic $T_k$ and $N$ is the total number of texts about the case.
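To make the C1.1 evaluation concrete, here is a small Python sketch under the reading given above (topic sentiment is the negative-comment share of topics discussed more than θ times, weighted by topic size); the exact functional form should be treated as our assumption.

```python
def topic_sentiment(neg_count, total_count, theta=50):
    """Negative-comment share of a topic, counted only when the topic is
    discussed widely enough (more than theta comments)."""
    return neg_count / total_count if total_count > theta else 0.0

def indicator_c1_1(topics, theta=50):
    """topics: list of (negative_count, total_count) pairs, one per topic.
    Returns the topic-size-weighted negative share for the whole case."""
    total = sum(n for _, n in topics)
    if total == 0:
        return 0.0
    return sum(topic_sentiment(neg, n, theta) * n for neg, n in topics) / total

# Hypothetical example: three topics clustered for one case.
print(indicator_c1_1([(120, 300), (10, 40), (200, 424)]))
```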

3.4. Risk Assessment on Public Opinion for Trial Management

In this section, we describe the fuzzy number similarity-based risk assessment module, which evaluates the risk level within the nine-level risk system. At first, the risk indicator evaluations discussed in Section 3.3 are converted into fuzzy numbers: each evaluation is mapped to a linguistic term $lt_i \in LT$, and the term is replaced with the corresponding generalized fuzzy number $\tilde{v}_i$ defined in Table 2.

Since the risk of public opinion has various indicators, the risk assessment module aggregates the risks of the individual indicators with the weighted average $\tilde{R} = \frac{\sum_{i=1}^{n} \beta_i \, \tilde{v}_i}{\sum_{i=1}^{n} \beta_i}$, where $\beta_i$ is the impact factor of indicator $r_i$.

As Figure 8 shows, the selected similarity measure drops smoothly as the distance increases, compared with the other algorithms. The risk level is finally determined as the linguistic term whose fuzzy number is most similar to the aggregated one, that is, $level = \arg\max_{lt \in LT} S(\tilde{R}, \tilde{lt})$, where $S(\cdot, \cdot)$ is the fuzzy number similarity [26].
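The sketch below walks through the aggregation and level determination, assuming trapezoidal generalized fuzzy numbers for the linguistic terms and a simple distance-based similarity; the membership values stand in for Table 2 and the similarity function stands in for the measure of [26], so both are assumptions.

```python
import numpy as np

# Hypothetical trapezoidal fuzzy numbers (a1, a2, a3, a4, w) for a subset of
# the nine linguistic terms; the real values come from Table 2.
LT = {
    "AbsolutelyLow": (0.00, 0.00, 0.02, 0.07, 1.0),
    "Low":           (0.04, 0.10, 0.18, 0.23, 1.0),
    "Medium":        (0.32, 0.41, 0.58, 0.65, 1.0),
    "High":          (0.72, 0.78, 0.92, 0.97, 1.0),
    "AbsoluteHigh":  (0.93, 0.98, 1.00, 1.00, 1.0),
}

def aggregate(indicators):
    """Weighted average of indicator fuzzy numbers.
    indicators: list of (impact_factor, fuzzy_number) pairs."""
    total = sum(b for b, _ in indicators)
    vertices = sum(b * np.asarray(fn[:4]) for b, fn in indicators) / total
    height = min(fn[4] for _, fn in indicators)
    return tuple(vertices) + (height,)

def similarity(x, y):
    """Illustrative distance-based similarity between two fuzzy numbers."""
    x, y = np.asarray(x), np.asarray(y)
    return 1.0 - float(np.mean(np.abs(x[:4] - y[:4])))

def risk_level(aggregated):
    """Linguistic term whose fuzzy number is most similar to the aggregate."""
    return max(LT, key=lambda term: similarity(aggregated, LT[term]))

agg = aggregate([(0.35, LT["Medium"]), (0.25, LT["Low"]), (0.40, LT["High"])])
print(risk_level(agg))
```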

4. Experiment

In this section, we discuss the results of the three experiments: (A) efficiency of algorithm RApriori, (B) efficiency of the classifier T-BiLSTM, and (C) the case study of the whole framework RAPOT.

4.1. Efficiency of RApriori

To validate the efficiency of RApriori, we compare it with the classical Apriori and FP-Growth. There are three subexperiments in this section: (a) time costs with different rule lengths, (b) time costs with different rule counts, and (c) time costs with different datasets. We carry out these experiments on simulated datasets generated with the parameters shown in Table 3. In experiment (a), we employ Apriori, FP-Growth, and RApriori to mine rules of different lengths. Figure 10 shows that the time costs of Apriori and FP-Growth increase sharply as the rules get longer. In experiment (b), we compare the three methods on different counts of rules. Figure 11 illustrates that our method's time cost grows more slowly than that of the other methods. In experiment (c), we run the three algorithms on three datasets of different sizes. Figure 12 shows that our method is more efficient than Apriori and FP-Growth and tolerates the growth in data size better.

4.2. Efficiency of T-BiLSTM

We train the classifier for public opinion sentiment analysis on a dataset containing 18,000 positive comments and 18,000 negative comments collected from Weibo; the validation set has 3,600 positive items and 3,600 negative items. In addition, we compare the T-BiLSTM-based sentiment classifier with KNN, maximum entropy, Bayes, SVM, and the traditional BiLSTM. We adopt accuracy, positive-precision, positive-recall, and Macro-F1 as the evaluation metrics. Accuracy is the ratio of correct predictions to the total number of valid samples. Positive-precision and positive-recall are defined as $TP/(TP + FP)$ and $TP/(TP + FN)$, where $TP$ and $FP$ denote the predicted "Positive" samples that are correct and incorrect, respectively, and $TN$ and $FN$ are defined analogously for the "Negative" class. Macro-F1 is the average of the F1 scores of the positive and negative classes and is used to evaluate each classifier comprehensively. Table 4 shows the comparison results, where our T-BiLSTM exceeds the other methods.
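For reference, the metrics can be computed with scikit-learn as in the sketch below; the function calls are standard sklearn APIs, while the label encoding (1 for positive comments) is an assumption.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [1, 1, 0, 0, 1, 0, 1, 0]   # 1 = positive comment, 0 = negative comment
y_pred = [1, 0, 0, 0, 1, 1, 1, 0]

acc = accuracy_score(y_true, y_pred)                     # correct / total
p_pos = precision_score(y_true, y_pred, pos_label=1)     # positive-precision
r_pos = recall_score(y_true, y_pred, pos_label=1)        # positive-recall
macro_f1 = f1_score(y_true, y_pred, average="macro")     # mean F1 of both classes
print(acc, p_pos, r_pos, macro_f1)
```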

4.3. Case Study of RAPOT

In this section, we evaluate the efficiency and applicability of RAPOT with a case study. The study includes three sets of short texts corresponding to three cases; the sizes of the three sets are 764, 306, and 156. The risk indicator model of RAPOT is the one shown in Figure 3, with nine indicators covering the aspects of the case, the related parties, and the judge. Then, we compute the indicator values for each case, and the mapped linguistic terms are shown in Table 5. In the next step, the linguistic terms are turned into the corresponding fuzzy numbers. Then, the impact factors and the indicator evaluations are aggregated into one fuzzy number for each case. Finally, we compute the fuzzy number similarities to determine the risk level.

Table 6 lists the similarities. The POR of case 1 is fairly low, the POR of case 2 is medium, and the POR of case 3 is low. Combined with Table 5, case 3 has the least heat; meanwhile, the judge and the parties do not have unique identities. Even though the case type is high-risk, without hot discussion the POR is low. As for case 1, the public opinion is quite positive, so the risk assessment result is "FairlyLow". Referring to case 2, one of the related parties has a unique identity and has attracted much attention on social media; nevertheless, the media coverage is low, which indicates that the issue has not become widespread yet. As we can see, RAPOT recognizes the POR successfully and distinguishes the three cases in risk measurement. To validate our framework's efficiency, we compare five similarity measure algorithms. As shown in Figure 9, the selected method's output agrees with the majority and produces no outliers.

5. Conclusion

Accurate and fine-grained risk assessment of public opinion in the trial procedure is crucial for refined trial management. The framework proposed in this paper provides an objective and efficient assessment of POR in trials without requiring a large amount of historical data, which is in fact scarce. We also propose T-BiLSTM to analyze public opinion sentiment on a per-topic basis, which is more comprehensive than the traditional BiLSTM in practice. The risk assessment framework for POR consists of three modules: (1) an adaptive multifactor indicator model for POR assessment, (2) an indicator evaluation module with accurate public opinion analysis, and (3) an objective risk ranking module. The experimental results show the efficiency and practicability of our framework. In the future, we will further exploit the considerable amount of process logs in the TPMS to improve our indicator model's adaptability and robustness.

Data Availability

The dataset used to support the findings of this study is available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The authors gratefully acknowledge the support of the National Key R&D Program of China under grant No. 2018YFC0830500.