Security remains an open challenge in cooperative spectrum sensing (CSS) because it relies on a common reporting channel and a central controller. Spectrum sensing data falsification (SSDF) attacks are particularly difficult to counter because they are mounted by several different types of attackers. To address this issue, the sifting and evaluation trust management (SETM) algorithm is proposed. The first phase of the algorithm eliminates the need to compute trust for all the secondary users (SUs). The second phase is executed to differentiate random attackers from genuine SUs. This reduces the computation and overhead costs. Simulations and complexity analyses have been performed to prove the efficiency and appropriateness of the proposed algorithm for combating SSDF attacks.

1. Introduction

Exploitation of a licensed spectrum by unlicensed users without interference is a long-standing goal for wireless communications. Cognitive radio (CR) technology addresses the underutilisation of the spectrum through dynamic spectrum access (DSA). CR is anticipated to be a vital technology for fifth-generation (5G) communications. CR technology enables the unlicensed or secondary users (SUs) to opportunistically access the prescribed channels of the licensed or primary users (PUs) without interference. DSA involves spectral sensing, analysis, and allocation by sensing the white spaces (holes) in the frequency bands. Cooperative spectrum sensing (CSS) increases the accuracy of PU occupancy detection by collecting sensing information from the SUs [1]. However, CSS paves the way for malicious attacks that modify the sensing reports, given that the channel used for reporting is common.

The alteration of the sensing report by malicious users (MUs) is the spectrum sensing data falsification (SSDF) attack, which typically occurs at the data link layer. The SSDF attack can be classified into five classes [2]: (a) always yes: in the absence or presence of the PU, the malicious user always reports the presence of a PU to divest the genuine SUs; (b) always no: to induce interference, the malicious user always reports the absence of the PU even in its presence; (c) alternator: the malicious user always contradicts the genuine SUs' detection output; (d) selfish: at times, the malicious attacker does not participate in the sensing process to save energy; and (e) contingent: the malicious user reports data falsely in a random manner. Such attackers play smartly, making their identification difficult.
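The five behaviours above can be condensed into a single per-timeslot report function. The following is a minimal Python sketch; the function name, the 50% skip/falsification rates chosen for the selfish and contingent classes, and the `None` convention for a skipped sensing round are illustrative assumptions, not part of the original model:

```python
import random

def report(true_state, attacker, rng):
    """Return the sensing bit one SU submits for one timeslot.

    true_state: 1 if the PU is present, 0 if absent.
    Returns None when a selfish SU skips the sensing round.
    """
    if attacker == "genuine":
        return true_state                # honest report
    if attacker == "always_yes":
        return 1                         # PU always reported present
    if attacker == "always_no":
        return 0                         # PU always reported absent
    if attacker == "alternator":
        return 1 - true_state            # always contradicts the true state
    if attacker == "selfish":
        # skips roughly half the rounds to save energy (rate is an assumption)
        return true_state if rng.random() < 0.5 else None
    if attacker == "contingent":
        # falsifies at random (rate is an assumption), which hides the pattern
        return true_state if rng.random() < 0.5 else 1 - true_state
    raise ValueError(attacker)
```

The contingent class is the hardest to filter precisely because its reports are correct often enough to mimic a genuine SU, which is what motivates the trust evaluation phase later in the paper.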

2. Methodology

Trust is a firm belief in reliability. Thus, the genuine nature of a user (SU) can be quantified by measuring its degree of trust. However, evaluating trust for all the SUs may increase the complexity with respect to time and space. To address this particular issue in trust-based security systems, a new algorithm, referred to as sifting and evaluation trust management (SETM), is proposed in this study. The algorithm comprises two phases: (i) sifting and (ii) evaluation. In the first phase, MUs belonging to the first four categories are identified by screening the sensing reports of the SUs against the fusion centre (FC) outputs over a period of time. Thus, most of the MUs are filtered in this phase. The second phase aids in distinguishing contingent attackers from the genuine SUs by evaluating four categories of trust: past event, registry, requite, and reliability. Additionally, the efficiency and appropriateness of SETM are proven using simulations and complexity analyses.

The rest of the study is organised as follows: Section 3 reviews related works pertaining to trust-based systems. Section 4 provides insights on the system model, the spectrum sensing process, and the identification of MUs. Section 5 presents the proposed SETM algorithm. Finally, the results, discussion, and conclusions are presented in Sections 6 and 7.

3. Related Works

Chen et al. [3] have proposed a reputation-based cooperative spectrum sensing model in which the reputation of each SU is set by the FC based on the sensing results. The SU is then weighted according to its reputation value. However, the reputation degree is not adaptable. Kar et al. [4] proposed multifactor trust management wherein the MUs are identified based on the evaluation of history, action, incentive, and consistency trust. However, no profiteering technique is employed, and selfish nodes cannot be identified. A trust scheme based on a noncooperative game is given by Bennaceur et al. [2], wherein faulty SUs are penalised by exempting the MUs from participating in the sensing process for a particular period of time. However, genuine SUs that report falsely owing to channel conditions would face considerable penalties. Wang et al. [1] have advocated a certificate-based trust scheme with respect to the behaviour of SUs in a distributed manner. Feng et al. [5] have presented an XOR distance analysis to suppress the collusive SSDF attack. Zhao et al. have addressed a dynamic collusive SSDF attack based on a trust fluctuation clustering analysis that measures the similarities between two attackers as reflected in their trust values [6]. A distance outlier detection approach is given by Singh et al. [7] for CSS–SSDF attacks. The methodology exploits the data stored in the FC to identify the MUs, but prior knowledge of the user is needed for its implementation. Furthermore, an adaptive reputation evaluation for individual and collaborative SSDF attackers based on a linear weighted combination scheme has been proposed by Wan et al. [8]. Consistency measures are executed to detect outliers; however, relying on a weighted scheme may not prove appropriate in random attack scenarios. Also, Wang et al. [9] have proposed a technique for combating SSDF by calculating the suspicious and consistency trust levels of all the SUs that participate in the sensing. However, all types of SSDF attackers cannot be detected by measuring consistency levels alone.

4. System Model

Let “S” be the number of SUs who sense the spectrum over a time slot “ts,” some of whom are MUs pursuing any of the attacking strategies described above. The system model is depicted in Figure 1. Additionally, the FC is considered to be the most powerful user, with considerable potential and accumulated data to identify all types of SSDF attackers. The channel conditions are assumed to be perfect for transmission (control channels), and all the SUs use the energy detection technique to detect the presence and absence of the PU. Furthermore, the path loss is neglected given that the area of coverage of the CR system is assumed to be small [10].

4.1. Spectrum Sensing of SU

The presence and absence of the PU are determined by the FC based on hypothesis testing (statistical testing). Mathematically, this can be expressed according to Equations (1) and (2) as

H0: y(n) = w(n), (1)
H1: y(n) = s(n) + w(n), (2)

where y(n) is the received signal and w(n) is the white Gaussian noise function at each of the “S” SUs, considered to have a zero mean and a variance σ². s(n) is the nth sample of the primary user signal detected by the SUs. H0 represents the absence of the PU, and H1 represents the existence of the PU. Each SU then forms the energy test statistic of Equation (3),

E = (1/N) Σn=1..N |y(n)|², (3)

which is compared against a detection threshold λ to decide between the two hypotheses.
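The two-hypothesis energy detection model can be illustrated with a small Monte Carlo sketch. The following is a minimal Python example under stated assumptions: the function name, the deterministic PU sample derived from the SNR, and the default sample count are illustrative choices, not the paper's exact simulation setup:

```python
import math
import random

def sense_energy(pu_present, n_samples=100, snr=1.0, noise_var=1.0, rng=None):
    """Collect n_samples of y(n) and return the normalised energy statistic.

    H0: y(n) = w(n);  H1: y(n) = s(n) + w(n), with w(n) ~ N(0, noise_var).
    """
    rng = rng or random.Random(0)
    sigma = math.sqrt(noise_var)
    amp = math.sqrt(snr * noise_var)       # PU amplitude implied by the SNR
    energy = 0.0
    for _ in range(n_samples):
        s = amp if pu_present else 0.0     # simplification: constant PU sample
        y = s + rng.gauss(0.0, sigma)      # received sample y(n)
        energy += y * y
    return energy / n_samples              # test statistic compared with lambda
```

An SU would then report binary “1” when the returned statistic exceeds the detection threshold λ, and “0” otherwise.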

In this study, a majority voting (“n out of S”) hard decision fusion rule is applied. This rule has an advantage in terms of bandwidth and implementation. Each of the SUs sends a binary output to the FC indicating the presence (binary “1”) or absence (binary “0”) of the PU for the formulation of a final decision. The FC decides in favour of the PU's presence if at least n out of the S users vote for it. The FC forms its global decision based on Equation (4) [11]:

Y = Σi=1..S SUir, decide H1 if Y ≥ n. (4)
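The fusion rule can be sketched directly. In the snippet below, the function name and the `None` convention for an SU that did not report are assumptions for illustration:

```python
def fuse(reports, n_required):
    """n-out-of-S hard-decision fusion at the FC.

    reports: binary decisions from the SUs (None = SU did not report).
    Returns 1 (PU present) if at least n_required SUs voted 1, else 0.
    """
    votes = sum(1 for r in reports if r == 1)
    return 1 if votes >= n_required else 0

# Majority voting with S = 5 SUs corresponds to n_required = 3:
# fuse([1, 1, 1, 0, 0], 3) decides the PU is present.
```

Only binary votes cross the reporting channel, which is the bandwidth advantage of hard-decision fusion over soft combining.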


The detection probability of the SU can be considered one of its performance measures. It is mathematically expressed by Equation (5) [12]:

Pd = Qu(√(2γ), √λ). (5)

Similarly, the probability of false alarm can be represented by Equation (6):

Pf = Γ(u, λ/2)/Γ(u). (6)

Moreover, the probability of miss-detection can be represented by Equation (7),

Pm = 1 − Pd, (7)

where γ is the signal-to-noise ratio (SNR) of the received signal, λ is the threshold, u is the product of time (T) and bandwidth (W), Γ(·,·) is the upper incomplete gamma function, and Qu(·,·) is the generalised Marcum Q-function. The cumulative probabilities of detection and false alarm at the FC are represented by Equations (8) and (9):

Qd = Σl=n..S C(S, l) Pd^l (1 − Pd)^(S−l), (8)
Qf = Σl=n..S C(S, l) Pf^l (1 − Pf)^(S−l). (9)
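The false-alarm expression and the cumulative (fused) probabilities can be evaluated with the standard library alone. This sketch assumes an integer time-bandwidth product u, for which the regularised upper incomplete gamma function reduces to a finite sum; the function names are illustrative:

```python
import math

def p_false_alarm(lam, u):
    """Pf = Gamma(u, lam/2) / Gamma(u) for integer time-bandwidth product u."""
    x = lam / 2.0
    # For integer u: Gamma(u, x)/Gamma(u) = exp(-x) * sum_{n=0}^{u-1} x^n / n!
    return math.exp(-x) * sum(x**n / math.factorial(n) for n in range(u))

def cumulative(prob, s, n_required):
    """Probability that at least n_required of s SUs report 1.

    Plugging in Pd or Pf gives the Qd / Qf pattern of Equations (8)/(9).
    """
    return sum(math.comb(s, l) * prob**l * (1 - prob)**(s - l)
               for l in range(n_required, s + 1))
```

Note how fusion suppresses false alarms: with Pf = 0.1 per SU and a 3-out-of-5 rule, the fused false-alarm probability `cumulative(0.1, 5, 3)` is far below 0.1.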

4.2. Identification of MU’s

Trust is a time-varying parameter expressing the assured reliance on someone’s ability, strength, and truth. Thus, the study of the past events of an SU reveals its worthiness. In our proposed system, the trust analysis is pursued and completed in two phases, namely, the sifting and evaluation phases. In the sifting phase, the worthiness of the SU is observed by recording past SU events over a period of time. Consider a sliding time window from 0 to T comprising k equal subslots denoted by tsi. During each timeslot, spectrum sensing is performed by each SU. Let the sensing output of each SU be SUir, and let the observation time range from ts0 to tsk, where T = k·ts and k is an integer. The FC accumulates all the reports in a record table, as shown in Table 1.

The FCO is the final decision made by the FC based on the “n out of S” rule. The details listed in the different rows of the table show the outputs of the “S” SUs at different timeslots, and the details listed in the different columns show the sensing results of each SU.

5. The SETM Algorithm

5.1. Sifting Phase

In this study, the SETM algorithm is presented to identify all the aforementioned types of MUs through sifting followed by evaluation. As the name indicates, sifting means filtering. By analysing the column of reports of each SU, it is possible to infer the nature of its malicious behaviour. For example, by analysing the first column of Table 1 for time slots ts0 to tsk, it can be inferred that the SU belongs to the MU class “always yes” if every entry in the column is set to binary “1.” After “k” timeslots, this phase continues into the next window, during which an MU can even change its behaviour from class MUi to MUj, where i, j ∈ {1, 2, 3, 4, 5}. Thus, during the sifting phase, the need to calculate trust values is avoided, which reduces the computational complexity. The following algorithm performs the sifting phase of the SETM method.

Input: SU1r to SUSr from ts0 to tsk, T = k·ts, k is an integer, w(i).
Output: FCO: Final decision of FC, categorisation of types of MUs and genuine SUs.
 (1) Pile the sensing reports of ‘S’ SUs for the mentioned timeslot in the table
 (2) Compute the FCO by majority voting
 (3) Let d(i) be the FCO output at tsi
 (4) Let w(i) represent the count of participation in sensing
 (5) for d(i) = 0 or 1 & w(i) = k,
   if SUir = 1 ∀ tsk, then
    SUi = MU1 (ALWAYS YES MU)
   else go to next step
 (6) for d(i) = 0 or 1 & w(i) = k,
   If SUir = 0 ∀ tsk, then
    SUi = MU2 (ALWAYS NO MU)
  else go to next
 (7) for w(i) = k,
   if d(i) = 0 and SUir = 1 ∀ tsi &
   if d(i) =1 and SUir = 0 ∀ tsi
  then SUi = MU3 (ALTERNATOR)
  else go to next step
 (8) for d(i) = 0 or 1 & w(i) ≠ k,
   then SUi = MU4 (SELFISH)
     else SUi = Random or genuine
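The sifting steps above can be sketched in Python. The dictionary-based record table, the label strings, and the check order (selfish first, so skipped slots never reach the equality tests) are implementation assumptions of this sketch, not prescribed by the algorithm listing:

```python
def sift(record, fco):
    """Classify each SU from its column of the record table (Table 1 pattern).

    record maps an SU id to its sensing outputs over k timeslots
    (None = skipped slot); fco holds the FC's final decision d(i) per slot.
    SUs matching none of the first four MU patterns are passed on to the
    evaluation phase.
    """
    labels = {}
    for su, col in record.items():
        if any(r is None for r in col):
            labels[su] = "MU4 (selfish)"         # skipped sensing rounds
        elif all(r == 1 for r in col):
            labels[su] = "MU1 (always yes)"
        elif all(r == 0 for r in col):
            labels[su] = "MU2 (always no)"
        elif all(r != d for r, d in zip(col, fco)):
            labels[su] = "MU3 (alternator)"      # always contradicts the FCO
        else:
            labels[su] = "random or genuine"     # needs the evaluation phase
    return labels
```

No trust value is computed anywhere in this phase; the classification comes from pattern matching on stored reports, which is the source of the complexity saving the paper claims.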
5.2. Evaluation Phase

This evaluation process is conducted only for SUs categorised as random or genuine. The evaluation of trust is based on the weighted average of four different types of trust: past event trust (Tpe), registry trust (Trg), requite trust (Trq), and reliability trust (Trl).

(i) Past event trust (Tpe): evaluated using Equation (10),

Tpe = T0 + Σi=1..k c(i), where c(i) = +1 if SUir = d(i) and −1 otherwise, (10)

where T0 is the initial trust or the cumulative trust of the previous time slots, SUir represents the spectrum sensing report of SUi over the timeslot tsi, and d(i) represents the final decision of the FC at time slot tsi.

(ii) Registry trust (Trg): a measure of the activity of the SU, as selfish SUs may skip their participation in spectrum sensing to save energy. This participation trust is evaluated as the ratio of the total number of active participations (Np) to the total number of time slots. Mathematically,

Trg = Np/k. (11)

(iii) Requite trust (Trq): provides a reward to the genuine SU or a punishment to the malicious SU. Equation (12) describes the evaluation of the requite trust, where Nf represents the number of falsely reported decisions by SUi, and k is the number of timeslots.

(iv) Reliability trust (Trl): reliability is the quality of being trustworthy on a consistent basis. Thus, estimating this trust indicates the extent to which an SU is consistently genuine; additionally, the reliability of an MU changes often. Consistency can be measured by summing the squared deviation of each value in a distribution and dividing by the number of values. Equation (13) quantifies this consistency, where Tβ is the trust considered based on the beta distribution, and Tsum is the summation of all the individual trusts considered over the time slot tsi.

After the calculation of all the above trust indices, the weighted combination of trust is estimated based on Equation (14):

Tc = w1·Tpe + w2·Trg + w3·Trq + w4·Trl, (14)

where w1, w2, w3, and w4 are the weights, and w1 + w2 + w3 + w4 = 1.

Input: SUir from ts0 to tsk, d(i), T0, Np, Nf, w1 to w4
Output: Cumulative trust calculation and inference about SUi
 (1) Calculate the past events trust using equation (10)
 (2) Calculate the participation trust using equation (11)
 (3) Calculate the requite trust using equation (12)
 (4) Calculate the reliability trust using equation (13)
 (5) Find the cumulative Trust using equation (14)
 (6) If Tc ≥ λ (threshold), then SUi = Genuine  else SUi = Random malicious attacker
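The evaluation phase can be sketched end to end. Since the exact forms of Equations (10)–(13) are not reproduced here, the per-index formulas below are illustrative stand-ins (±1 accumulation for past event trust, participation ratio for registry trust, correct-report ratio for requite trust, and one minus the variance of per-slot agreement for reliability trust); only the weighted combination of Equation (14) follows the text directly:

```python
def cumulative_trust(reports, fco, t0=0.5, weights=(0.25, 0.25, 0.25, 0.25)):
    """Weighted combination of four trust indices for one SU (Eq. (14) pattern).

    reports: the SU's sensing bits per slot (None = skipped slot);
    fco: the FC decisions d(i) per slot. All per-index formulas are
    assumed stand-ins, not the paper's exact Equations (10)-(13).
    """
    k = len(fco)
    active = [(r, d) for r, d in zip(reports, fco) if r is not None]
    agree = [1.0 if r == d else 0.0 for r, d in active]
    t_pe = t0 + sum(1 if r == d else -1 for r, d in active)   # past event trust
    t_rg = len(active) / k                                    # registry trust
    n_false = sum(1 for r, d in active if r != d)
    t_rq = (k - n_false) / k                                  # requite trust
    mean = sum(agree) / len(agree) if agree else 0.0
    var = sum((a - mean) ** 2 for a in agree) / len(agree) if agree else 1.0
    t_rl = 1.0 - var                                          # reliability trust
    w1, w2, w3, w4 = weights
    return w1 * t_pe + w2 * t_rg + w3 * t_rq + w4 * t_rl
```

Comparing the returned Tc against the threshold λ then separates genuine SUs from random attackers, as in step (6) of the algorithm.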

6. Results and Discussions

The efficacy and expediency of a trust management system can be evaluated based on performance augmentation parameters, such as low false-alarm and miss-detection rates, low overhead, and high sensing performance. A complexity analysis further proves the appropriateness of the proposed system [13].

6.1. Performance Augmentation

In this section, the efficacy and expediency of the proposed SETM algorithm are discussed based on the variation of the probability of false alarm in conjunction with the trust value and threshold. The simulations were executed initially with ten SUs in MATLAB (2014, MathWorks, Natick, MA, USA). The SUs were assumed to be randomly sited around the PU whose spectrum had to be sensed. The simulation parameters for the evaluation are given in Table 2.

During the sifting phase of the SETM algorithm, the sensing reports from all the SUs are given to the FC within the prescribed time slots. The FC identifies the different types of MUs simply by comparing the sensing outputs of the SUs over the given time slots with its final decision (FCO). The FC sorts an SU as an "always yes" MU if the sensed output of the SU is “1” for all the time slots, and as an "always no" MU if the sensed output is “0” for the prescribed timeslots. Further, if the SU sensing output contradicts the FCO in every time slot, it is sorted as an alternator MU, and if the SU fails to sense during any of the sensing times, it is sorted as a selfish MU. However, if an SU is not sorted as an MU during the sifting phase, the evaluation phase is conducted to prove the honesty of the SU. During this phase, five types of trust (past event, registry, requite, reliability, and cumulative) are evaluated. Depending on the value of the cumulative trust, the SU is classified as a random MU or as a genuine SU.

Table 3 provides insight into the variation of the past event, requite, reliability, and cumulative trusts as the percentage of malicious behaviour increases. The registry trust is considered to be “1” on the assumption that the SU provides all the sensing results to the FC within the given time slot. Further, the cumulative trust is calculated by considering the weights w1, w2, w3, and w4, as described by Equation (14). Specifically, when an SU reports falsely only once (10%), the cumulative trust value is 1.67; when an SU reports falsely more than once (20% to 90%), the cumulative trust decreases from 1.434 to −0.264. The inference is that as the maliciousness of an SU increases, its trust value decreases. Thus, the cumulative trust can be used as a performance metric to prove whether the SU is genuine or malicious. Figure 2 shows the variation of trust as a function of the percentage of malicious behaviour in each SU based on the SETM algorithm. Specifically, for every 10% increase in the malicious behaviour of an SU, there is a decrease of approximately 15% in the value of the cumulative trust.

Theoretically, the threshold for finding the MU is calculated by setting the variance to unity (σ² = 1) and the probability of false alarm to 0.1 in Equation (6). The theoretical threshold value is 1.57, signifying that for a trust value above 1.57, the SU is genuine, and below it, malicious. From Table 3, it is clear that an SU having a trust value of 1.67 or above is considered genuine; otherwise, it is considered malicious. Thus, the evaluated and theoretical thresholds more or less satisfy the criteria for finding the MU. Figure 3 shows the variation of the threshold as a function of the probability of false alarm (Pf). As the probability of false alarm increases, the threshold value decreases.
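The inverse relation between the false-alarm target and the threshold can be verified numerically. The sketch below assumes an integer time-bandwidth product u (so the upper incomplete gamma reduces to a finite sum) and inverts Pf(λ) by bisection, which is valid because Pf is monotonically decreasing in λ; the function names are illustrative:

```python
import math

def pf(lam, u):
    """Pf = Gamma(u, lam/2)/Gamma(u), closed form for integer u."""
    x = lam / 2.0
    return math.exp(-x) * sum(x**n / math.factorial(n) for n in range(u))

def threshold_for(pf_target, u):
    """Invert Pf(lam) by bisection over a wide bracket."""
    lo, hi = 0.0, 1000.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if pf(mid, u) > pf_target:
            lo = mid           # Pf still too high -> raise the threshold
        else:
            hi = mid
    return (lo + hi) / 2.0
```

As the false-alarm target grows, the returned threshold shrinks, reproducing the trend reported in Figure 3.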

The variations of the different types of trust at different timeslots are shown in Figures 4, 5, 6, and 7. The variation of the past event trust (Tpe) at different timeslots for different types of SUs is shown in Figure 4. The genuine SUs exhibit a variation between the upper and lower thresholds, where the upper threshold indicates that the respective SU is 100% genuine (all sensing outputs in the previous time slots were correct) and the lower threshold indicates that the SU is 90% genuine (one sensing result was reported falsely). Thus, any SU whose past event trust fluctuates between the two thresholds (8.5 to 10.5) cannot be malicious. Another inference is that a selfish SU (MU) exhibits a trust variation between alternate timeslots. As a selfish SU does not participate in all the sensing rounds for a particular interval of time to save its resources, its Tpe never reaches the upper threshold value. Also, the trust value (Tpe) for a random attacker (MU) lies much lower than the lower threshold, specifically below 8.5. Thus, just by evaluating Tpe, these three types of SUs can be identified.

The identification of selfish users is also based on the calculation of the requite trust and the registry trust. Figure 5 shows the variation of the requite trust (Trq) as a function of time for a genuine SU and a selfish SU. The Trq of a genuine SU increases consistently with respect to time. However, the Trq of a selfish user does not yield a consistent increase; furthermore, there is a dip of about 10% in its value compared with the genuine SU response. Figure 6 shows the variation of the registry trust (Trg) with the time slot. As time increases, the Trg value for the genuine SU stays constant, but for the selfish SU, the Trg varies with a maximum dip of about 33% from the constant value of the genuine SU.

Reliability measures the consistency of a node. Figure 7 shows the variation of the reliability trust (Trl) as a function of time. For the MU, the Trl never reaches unity, but for the genuine SU, the value stays constant for most of the time. Specifically, the Trl value for the genuine SU never falls below 90%, whereas for the MU, the maximum value is only 86%. Thus, by comparing Trl, the genuine SU and the MU can be identified.

6.2. Complexity Analysis

This section investigates the appropriateness of the proposed SETM algorithm in terms of message, storage, time, and computational complexities.

(i) Message complexity (MC): defined as the number of spectrum sensing outputs required by the FC to formulate the final decision. The majority rule is used by the FC to calculate the final output. As the number of SUs considered for the timeslot is “S,” the message complexity is O(S).

(ii) Storage complexity (SC): defined as the amount of memory needed for the storage of the trust values. Let “b” be the size of a register in the register table of the FC for storing a trust value. With regard to the SETM algorithm, only the trust values of random MUs, selfish MUs, and genuine SUs need to be stored. Thus, the total storage complexity is (S − m)·b, where m is the number of malicious users filtered in the first phase of the SETM algorithm.

(iii) Time complexity (TC): the timeslots needed by the FC to receive the spectrum-sensed outputs of the SUs. Let “ts” be the predefined sampling period needed by the FC.

(iv) Computational complexity (CC): a measure of the maximum number of times SETM is executed for the identification of malicious users. SETM has two phases, each with a complexity of O(S), where S is the total number of SUs; O(S) represents linear complexity.
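The per-round figures can be tabulated for concrete parameters. In this small sketch, the function name and the exact expressions are assumptions consistent with the definitions above (one report per SU per round, trust stored only for unfiltered SUs, one linear pass per phase):

```python
def setm_complexity(s, m, b):
    """Per-round complexity figures for SETM (Section 6.2 pattern).

    s: number of SUs, m: MUs filtered out in the sifting phase,
    b: register size (bits) used by the FC to store one trust value.
    """
    return {
        "message": s,                  # one sensing report per SU per round
        "storage_bits": (s - m) * b,   # trust stored only for unfiltered SUs
        "computational": s,            # each phase makes one O(s) pass
    }
```

For example, with ten SUs of which four are filtered in the sifting phase and 8-bit trust registers, only 48 bits of trust storage remain, illustrating the overhead saving.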

Tables 4 and 5 give the parameters and complexity measured for the SETM algorithm. The message complexity is the same as the number of SUs.

Storage is needed only for the genuine SUs and the random attackers, and not for the other types of MUs. Thus, the storage requirement and overhead are minimised. Further, the time complexity depends purely on the sampling period of the CR (a few milliseconds). Also, the computational complexity is linear, the lowest complexity of any such algorithm [14]. Thus, with its minimal computational complexity, the SETM algorithm proves to be a low-computation trust management system.

7. Conclusion

In this study, the SETM algorithm was proposed to combat SSDF attacks by identifying all types of malicious users that contribute to these attacks. In the first phase, four types of SSDF attackers (always yes, always no, selfish, and alternator) were determined without the need to evaluate any trust values. The second phase of the SETM algorithm evaluates the trust to differentiate random attackers from genuine SUs. Thus, our method eliminates the need to evaluate trust for all SUs, thereby providing a low-computation trust-based system. Additionally, only the trust values for genuine SUs and random attackers need to be stored, thus lowering the overhead. As future work, the analysis of trust can be extended by considering more than one PU.

Data Availability

The authors confirm that the data supporting the findings of this study are available within the article.

Conflicts of Interest

The authors of this manuscript and St. Joseph’s Group of Institutions declare no competing interests.