Abstract

The basis of this paper is Wei and Tirkkonen, 2012, in which expressions for the key performance metrics of the sphericity test applied to the multiantenna cooperative spectrum sensing of multiple primary transmitters in cognitive radio networks over nonfading channels are provided. The false alarm and detection probabilities were derived in Wei and Tirkkonen, 2012, based on approximations obtained by matching the moments of the test statistics to the Beta distribution. In this paper we show that the model adopted in Wei and Tirkkonen, 2012, does not apply directly to fading channels and is considerably inaccurate for some system parameters and channel conditions. Nevertheless, we show that the original expressions from Wei and Tirkkonen, 2012, can be simply and accurately applied to a modified model that considers fixed or time-varying channels with any fading statistic. We also analyze the performance of the sphericity test and other competing detectors with a varying number of primary transmitters, considering different situations in terms of the channel gains and channel dynamics. Based on our results, we correct several interpretations from Wei and Tirkkonen, 2012, regarding the performance of the detectors, both over a fixed-gain additive white Gaussian noise channel and over a time-varying Rayleigh fading channel.

1. Introduction

The cognitive radio (CR) [1] concept has emerged as a promising solution for alleviating the problem of spectrum scarcity in wireless communication systems and is one of the key enabling technologies of the fifth generation (5G) of such systems [2]. In this concept, unused spectrum bands in the primary (incumbent) network can be opportunistically used by secondary CR networks. To accomplish this task, a spectrum sensing [3] technique detects unused bands so that the CRs can use them without causing harmful interference to the primary users. In order to increase the reliability of the decisions upon the occupancy of a given channel, cooperative spectrum sensing has become the main choice [3].

As pointed out in [4], most of the literature on cooperative spectrum sensing predominantly adopts the assumption of a single primary transmitter. However, this assumption may fail in most real networks, where more than one primary transmitter is usually active. In [4] the authors give an important contribution to the theoretical analysis of the performance of spectrum sensing under multiple primary users. Specifically, they consider multiantenna cooperative spectrum sensing and adopt the covariance-based technique known as the sphericity test. Expressions for the false alarm probability and the detection probability were derived in [4] by means of approximations for the distributions of the test statistics under the hypotheses of absence and presence of the primary signals. The approximations were obtained by matching the moments of the test statistics to the Beta distribution. It is claimed in [4] that the derived approximations are easily computable and that they are accurate for the considered sensor sizes, numbers of samples, numbers of primary users, and corresponding signal-to-noise ratios (SNRs). Empirical results were compared with analytical ones in order to validate these claims. As an incremental result, in [4] the sphericity test detector was also compared with competing detectors in the presence of multiple primary users.

Opportunities for Extended Results. In the numerical results section of [4], it is stated that the channel between the primary transmitters and the secondary sensors is considered fixed during the sensing interval, which is a commonly adopted and reasonable assumption if the sensing time is considerably smaller than the coherence time of the channel. It is also stated that the channel gains were independently drawn from a complex Gaussian distribution, corresponding to Rayleigh fading. However, in the sequel the authors consider that the channel is the same in all Monte Carlo simulation runs, which contradicts the fading channel assumption. The simulated channel is in fact a fixed-gain additive white Gaussian noise (AWGN) channel with configurable SNRs from the primary transmitters to the sensors. This fact has a major impact on the results, as we show later in this paper.

Moreover, in [4] the channel gains are normalized to have unitary second moment, which further prevents the derived expressions from being used when the channel is in fact time-varying. This is because such normalization changes the fading statistics, making them depart from the predefined ones.

Not less important, to apply the expression of the detection probability derived in [4], one must use a covariance matrix that relies on a single realization of the channel gains. When the channel is considered fixed and the same in all sensing intervals (all simulation runs), and the above-mentioned gain normalization is applied, a good agreement is achieved between theoretical and empirical results. This is because the same covariance matrix applies to all simulation runs and, thus, remains consistent with the theoretical calculations. Nevertheless, it is reasonable to accept that in a fading channel one must not rely on a single channel realization to predict the system performance over the varying channel gains. In fact, if the channel is made variable and no normalization is applied to the channel gains, only rare, coincidental agreements are achieved between the empirical results and the theoretical results obtained from the expressions in [4].

We also identify that some results in [4] are considerably inaccurate for some system parameters different from those originally reported. It was not claimed in [4] that accuracy is achieved for any system parameter, but it was not mentioned either that inaccuracy could result depending on the choice of these parameters. Moreover, in spite of the fact that the authors of [4] have claimed that the derived approximations are easily computable, computation errors may result depending on the parameters chosen and on the software package used.

In Section 4.2 of [4], the performance of the sphericity test is compared with other competing detectors in terms of receiver operating characteristic (ROC) curves. In Figures 3–5 of that paper, such comparison has been made under the assumption of a fixed and normalized channel, using a single channel realization for all results and all simulation runs. However, different channel realizations change the detection probabilities and, as a consequence, modify the corresponding ROC curves. Since different detection techniques can be differently affected by the channel gains, the performance ranking, the performance gaps, or both can be modified from one channel realization to another.

We add that the approach adopted in [4], which motivated the present paper, was also adopted in [5]. Thus, most of the above comments also apply to [5].

The sphericity test alone is also considered in [6], where the authors claim that their contribution is the first to address the cooperative spectrum sensing problem in a multiple primary user scenario, considering multipath fading channels and using eigenvalue-based detectors. In [6], the expression for the probability of detection derived in [4] for the AWGN channel is numerically averaged over the probability density function of the Rayleigh fading SNR. However, this approach is not correct because the probability of detection does not depend on the SNR in such a direct manner. In fact, it is a function of the determinant of a covariance matrix, which in turn depends on the channel gains from each primary transmitter to each secondary receiver. Thus, the SNR influence is implicit in these channel gains.

Contributions. Having highlighted the limitations of the analytical results in [4], we can list the following main contributions of this paper:
(i) A thorough analysis of the effects of the channel normalization and the channel dynamics on the performance of the sphericity test is made, considering the detection of multiple primary transmitters. Specifically, we analyze four possible channel conditions: fixed and normalized gains, fixed and nonnormalized gains, time-varying and normalized gains, and time-varying and nonnormalized gains. Interpretations of a large number of new results are given as a consequence of this analysis.
(ii) Numerical problems regarding the computation of the expressions derived in [4] are explored and guidelines are given to solve them.
(iii) We also provide a number of examples and discussions regarding the situations in which the expressions derived in [4] are not accurate.
(iv) We propose a simple semianalytic method that makes use of the original expressions and show that the method is accurate enough for analyzing the performance of the sphericity test in fixed as well as in time-varying channels with any fading statistic. Our method is validated by simulation considering a Rayleigh fading channel as a case study. This method also corrects the one proposed in [6], where the average behavior of the fading was not correctly taken into account in the derivation of the probability of detection over a Rayleigh fading channel.
(v) We analyze exemplifying situations in which different detection techniques are differently affected by the channel gains, influencing the performance ranking of these techniques, the performance gaps, or both from one channel realization to another. We modify or correct accordingly the related interpretations given in [4].
(vi) Last, but not least, we also give new interpretations concerning the performance of the sphericity test and other competing detectors when applied to the detection of multiple primary users. Some of these interpretations ratify those in [4], while others contradict those provided in [4].

Along with the expressions in [4], the new results and discussions reported here constitute important tools for the understanding, the design, and the analysis of the sphericity-test-based cooperative spectrum sensing and other competing detectors over fading and nonfading channels, in the presence of multiple primary users.

Paper Organization. The remainder of this paper is organized as follows. In Section 2 we reproduce, in a condensed way, the main results from [4] concerning the system model and the expressions for the false alarm and detection probabilities of the sphericity test. Section 3 presents results for validating the analytical and empirical computations throughout the paper. Section 4 is devoted to the analysis of the effects of the channel normalization and the channel dynamics on the performance assessment of the sphericity test. In Section 5 our semianalytic method for computing the detection probability of the sphericity test over time-varying fading channels is described. The performance of the sphericity test and other competing detectors is investigated in Section 6, for both the fixed-gain and the time-varying fading channels. Section 7 concludes the paper, summarizing the main achievements of our work.

2. Main Results from [4]

In this section we describe the system model and provide the main expressions derived in [4] for computing the false alarm and detection probabilities of the sphericity test (ST). The aim is to make this paper self-contained and to facilitate the understanding and the application of such expressions. We use the same notation as [4] for the sake of consistency.

2.1. System Model

The system model is the standard one, in which multiple sensors cooperatively sense the presence of multiple primary transmitters. The sensors may be receive antennas in one secondary receiver, single-antenna secondary devices, or any combination of these. A realization of the received data vector is formed by the product of the channel matrix, which represents the channels between the primary transmitters and the sensors, and the vector of zero-mean transmitted signals from the primary users, plus a noise vector. The noise is complex Gaussian with zero mean and covariance matrix given by the noise power multiplied by the identity matrix whose order is the number of sensors. By collecting a number of i.i.d. (independent and identically distributed) observations of the received vector, the received data matrix is formed.

Under the assumption that the channel matrix is constant during the sensing interval and that the primary user signals follow an i.i.d. zero-mean Gaussian distribution and are uncorrelated with the noise, the population covariance matrix of the received signal under the hypothesis of absence (H0) of the primary signals is the noise covariance matrix, whereas under the hypothesis of presence (H1) it is the noise covariance matrix plus a term formed by the channel vectors weighted by the transmission powers of the primary users (the exact expressions, written in terms of the expectation operation and of the complex conjugate transpose of the channel matrix, are given in [4]). In this case, the received SNR of each primary user is defined in [4] as a function of its transmission power, the Euclidean norm of its channel vector, and the noise power.
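As a minimal illustration of this model, the sketch below builds the two population covariance matrices from a given channel matrix, per-user transmit powers, and noise power. The variable names (H, p, sigma2) and the Python/NumPy setting are illustrative assumptions and do not reproduce the notation of [4].

```python
# Illustrative sketch (assumed notation, not that of [4]): population covariance
# matrices of the received vector under H0 (noise only) and H1 (signal plus noise).
import numpy as np

def population_covariances(H, p, sigma2):
    """H: (sensors x transmitters) complex channel matrix;
    p: per-user transmit powers; sigma2: noise power."""
    m = H.shape[0]
    Sigma0 = sigma2 * np.eye(m, dtype=complex)      # noise-only covariance
    Sigma1 = Sigma0 + (H * np.asarray(p)) @ H.conj().T   # adds sum_i p_i * h_i h_i^H
    return Sigma0, Sigma1
```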

Since the population covariance matrix is positive definite, the ST tests the null hypothesis that this matrix is proportional to the identity matrix (noise only) against all other alternatives. However, since the population covariance matrix is not available in practice, the sphericity test relies on the sample covariance matrix estimated from the received data matrix. The test statistic of the ST-based detector is a function of the determinant and the trace of the sample covariance matrix or, equivalently, of its ordered eigenvalues (the exact expression is given in [4]). If the test statistic is greater than some threshold, the detector declares H1; it declares H0 otherwise.

Interestingly enough, the decision threshold of the ST lies within a fixed interval, no matter the system parameters chosen. This distinguishes the ST from most other tests for spectrum sensing and represents a clear advantage in practice.
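For concreteness, the sketch below computes a sphericity-type statistic from the eigenvalues of the sample covariance matrix. The specific form adopted (the ratio of the arithmetic to the geometric mean of the eigenvalues, raised to the number of sensors) is a common convention assumed here for illustration only; the exact definition used in [4] is given there.

```python
import numpy as np

def st_statistic(R):
    """Sphericity-type statistic computed from the eigenvalues of the sample
    covariance matrix R (assumed form: (tr(R)/m)^m / det(R), i.e., the m-th
    power of the arithmetic-to-geometric mean ratio of the eigenvalues).
    Larger values indicate a stronger departure from sphericity (H1)."""
    lam = np.linalg.eigvalsh(R)          # real eigenvalues of the Hermitian matrix R
    m = lam.size
    return float(np.mean(lam) ** m / np.prod(lam))
```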

2.2. False Alarm Probability of the Sphericity Test

From Proposition 1 in [4], for any sensor size and sample size, the cumulative distribution function (CDF) of the test statistic under H0 is approximated by a Beta distribution whose two parameters are obtained by matching the first two moments of the test statistic. The resulting CDF is written in terms of the incomplete Beta function, which is in turn defined from the Beta and gamma functions; the complete expressions are given in [4]. The false alarm probability as a function of the threshold is then one minus this CDF evaluated at the threshold.

2.3. Detection Probability of the Sphericity Test

From Proposition 3 in [4], for any sensor size and sample size, the CDF of the test statistic under H1 is likewise approximated by a Beta distribution whose two parameters are obtained by matching the first two moments of the test statistic. In this case, the moments depend on the population covariance matrix under H1, specifically on its ordered eigenvalues; the complete expressions are given in [4].

The detection probability as a function of the threshold is then one minus this CDF evaluated at the threshold.

3. Validation, Counterexamples, and Possible Numerical Problems

In this section we reproduce some results from [4] so that subsequent results reported here can be trusted. We also give some counterexamples in which the theoretical results from the expressions in [4] cannot be obtained due to numerical limitations, or do not match empirical results.

3.1. Validation

Figures 1 and 2 show analytical and empirical results for the false alarm and the detection probability of the sphericity test as a function of the detection threshold, respectively, for some values of the sensor size and the sample size. The adherence between analytical and empirical results is the same as that observed in [4]; one should expect possible shifts of the curves when compared with those in [4], since the realization of the channel matrix used in [4] was almost surely different from the one used to produce the corresponding results here. As in [4], we assume three primary users with the SNR values (in dB) adopted therein. The entries of the channel matrix are independently drawn from a standard complex Gaussian distribution. The channel is fixed during the sensing interval, and the channel gains are normalized to unitary second moment, as in [4]. The power of the noise is set to 1, without loss of generality. Thus, the transmission power of each primary user is computed from its SNR via (3), and the population covariance matrices under H0 and H1 are formed according to (2). The empirical results were obtained from the same number of Monte Carlo simulation runs used in [4], keeping the same realization of the channel matrix in all runs (one simulation run corresponds to a single sensing event during which the prescribed number of samples per sensor is collected). Thus, AWGN channels with fixed SNRs and fixed gains are considered from the primary transmitters to the sensors. The entries of the transmitted signal matrix and of the noise vector are drawn from a zero-mean complex Gaussian distribution; the entries of each row of the signal matrix have variance equal to the transmission power of the corresponding primary user, and the entries of the noise vector have unitary variance. In each curve there are 1000 threshold values whose minimum and maximum were, respectively, obtained from the minimum and maximum values of the test statistics under H0 and H1. These values were precomputed from a separate Monte Carlo simulation with 10000 runs.
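The Monte Carlo procedure just described can be sketched as follows for the fixed-channel case. The function names, the reuse of the st_statistic sketch given earlier, and the way the per-user powers are passed in are illustrative assumptions; the actual setup is the one of [4].

```python
# Sketch of one Monte Carlo experiment for the fixed (AWGN-like) channel: the same
# realization of the channel matrix H is reused in every run, X = H S + V is
# generated, and the empirical detection probability is the fraction of runs in
# which the (assumed) test statistic exceeds the threshold.
import numpy as np

def crandn(rng, *shape):
    """Zero-mean, unit-variance circularly symmetric complex Gaussian samples."""
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

def empirical_pd(H, p, sigma2, n_samples, gamma, runs=10000, seed=0):
    """H: fixed (sensors x users) channel; p: per-user powers (obtained from the
    desired SNRs); n_samples: samples per sensor; gamma: detection threshold."""
    rng = np.random.default_rng(seed)
    m, q = H.shape
    p = np.asarray(p, dtype=float)
    hits = 0
    for _ in range(runs):
        S = np.sqrt(p)[:, None] * crandn(rng, q, n_samples)   # primary signals
        V = np.sqrt(sigma2) * crandn(rng, m, n_samples)       # sensor noise
        X = H @ S + V                                         # received samples
        R = (X @ X.conj().T) / n_samples                      # sample covariance
        hits += st_statistic(R) > gamma
    return hits / runs
```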

In Section 4.1 of [4], where the corresponding setup is described, the authors inadvertently state that the considered channel is a Rayleigh fading channel. It is also where the channel normalization is defined, the same normalization that modifies the predefined fading statistics when a different channel realization is used in each sensing interval.

We also correct a typo in Section 4.1 of [4], where the authors state that they have “set the powers of the zero mean Gaussian signal and noise to be 1.” From (3), if the noise power is 1 and the channel gains are normalized to unitary second moment, the signal powers must differ from 1 whenever the SNRs differ from 0 dB.

The results shown in Figure 1 are in perfect agreement with those given in Figure 2 of [6].

3.2. Counterexamples

In this subsection we provide some counterexamples in which the theoretical results obtained from the expressions in [4] do not match empirical results, due to some inaccuracy of the Beta approximation under H1. It can be seen from Figure 3 that when the SNRs are large or the number of samples is small, or both, a nonnegligible disagreement between theoretical and empirical results appears in the case of the detection probability of the sphericity test. This is evidence that the Beta approximation under H1 is not accurate for all system parameters, whereas the approximation under H0 is always accurate.

3.3. Numerical Limitations

In spite of the fact that it is claimed in [4] that the derived expressions are easily computable, errors or limitations may result depending on the parameters chosen, on the software package used, and on how these expressions are entered in the software environment.

We have made simulations and computed the expressions of [4] using the Mathcad 15 and the Matlab R2009a software packages. If the expressions are entered in their original forms, computations are interrupted and floating-point error messages are prompted in Mathcad, and “NaN” (not a number) or “inf” (infinite) values are attributed to variables in Matlab, for moderately large arguments of the gamma function, which limits the choice of important system parameters: the number of sensors and the number of samples. These errors occur in the computation of the moments, due to large values (greater than about 10^307) of the gamma function that are not properly handled by these software packages in double-precision floating-point representation. To avoid such errors, we had to do, and recommend, the following (see also the illustrative sketch after this list):
(i) Simplify the quotient in (7) and (12) in order to avoid using the function defined in (8), so that common gamma factors cancel before any numerical evaluation.
(ii) Replace the gamma function by its natural logarithm. The gamma function grows rapidly for moderately large arguments, which can cause numerical instabilities and errors. Many computing environments include a function that returns the natural logarithm of the gamma function (which is the case of Mathcad and Matlab). This function grows much more slowly than the gamma function and allows for adding and subtracting logarithms instead of multiplying and dividing very large values.
(iii) Compute the CDFs (5) and (10) by applying the alternative definition of the incomplete Beta function, which applies directly to (5) and (10) through the use of a built-in Mathcad and Matlab function.
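The sketch below illustrates recommendations (ii) and (iii) in a generic form (the actual moment expressions are those of [4] and are not reproduced here): ratios of gamma functions are evaluated through the log-gamma function, and the Beta CDF is obtained from the built-in regularized incomplete beta function.

```python
import numpy as np
from scipy.special import gammaln, betainc

def gamma_ratio(numerator_args, denominator_args):
    """prod Gamma(numerator_args) / prod Gamma(denominator_args), evaluated via
    log-gamma to avoid overflow of the plain gamma function (which already
    exceeds the double-precision range for arguments around 172)."""
    return np.exp(np.sum(gammaln(numerator_args)) - np.sum(gammaln(denominator_args)))

def beta_cdf(x, a, b):
    """CDF of a Beta(a, b) variable: the regularized incomplete beta function I_x(a, b)."""
    return betainc(a, b, x)

# Example: Gamma(500)/Gamma(498) = 499*498 = 248502, not computable with gamma() directly.
print(gamma_ratio([500.0], [498.0]))
print(beta_cdf(0.3, 5.0, 12.0))
```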

We have also computed the expressions of [4] using the software package Mathematica, and no computation errors have been produced. However, it is by far more difficult to implement the complete Monte Carlo simulation in the Mathematica environment than in Matlab or Mathcad.

4. Effect of the Channel Normalization and the Channel Dynamics

In this section we analyze the effect of normalizing or not normalizing the channel gains, combined with the effect of considering a single fixed channel realization or random channel realizations in each sensing interval. This yields four scenarios, described in the sequel. It is worth remembering that the false alarm probability is not affected by the channel and, thus, does not need to be taken into account in the present analysis.

4.1. Fixed and Normalized Channel

This is the scenario considered in [4] and in the previous section. Just one channel realization is used both to compute the theoretical results and in all simulation runs; the entries of the channel matrix are independently drawn from any complex distribution, and the channel gains are normalized to unitary second moment. From a theoretical viewpoint, the fixed channel realization keeps the empirical results consistent with the theoretical ones; that is, the same channel gains that define the population covariance matrix used in the theoretical computations are used in all simulation runs. Although this may result in a good agreement between theoretical and empirical results, depending on the system parameters, a different channel realization used each time the detection probability is calculated will produce shifts in the curves. Some results are shown in Figure 4, considering three primary users with fixed SNRs and given numbers of sensors and samples. The entries of the channel matrix in each channel realization were drawn from a zero-mean complex Gaussian distribution. Notice the different positions of the curves for different realizations of the channel matrix.

A correct way of applying the scenario of fixed and normalized channel is to consider that the channel matrix does not change; that is, its entries, even if initially drawn from any distribution, should be stored and reused to represent the same set of channel gains of the AWGN channels from the primary transmitters to the sensors.

4.2. Fixed and Nonnormalized Channel

Again, just one channel realization is used both to compute the theoretical results and in all simulation runs, but now the channel gains are not normalized. The entries of the channel matrix are independently drawn from any complex distribution, and the transmission power of each primary user is computed from (3). Similarly to the previous scenario, a different channel realization used each time the detection probability is calculated will produce shifts in the curves, as can be noticed in Figure 5. The system parameters are the ones used in the previous subsection. The fixed channel realization keeps the empirical results consistent with the theoretical ones, and a good agreement between theoretical and empirical curves can be obtained, depending on the system parameters. Notice, however, that the spread of the curve positions is even larger in comparison with those in Figure 4. This is caused by the larger variability of the nonnormalized channel gains with respect to the normalized ones.

A careful look at Figures 3 and 4 in the region of low probability values allows one to see small disagreements between theoretical and empirical results for some channel realizations. This is to say that the Beta approximation under H1 might not be accurate for every realization of the channel matrix, even if the system parameters are not changed.

4.3. Time-Varying and Normalized Channel

The channel gains are normalized, but a new channel matrix is randomly chosen in each simulation run. Again, the entries of the channel matrix are independently drawn from any complex distribution. In order to obtain the desired SNRs, the transmission power of each primary user is computed taking into account the second moment of the magnitude of the original (nonnormalized) channel gains.

In this case a time-varying fading channel is being considered, but this time variability cannot be captured by the theoretical results, since a single channel realization is used to compute the population covariance matrix in (12). A disagreement between theoretical and empirical results is expected, as can be noticed from Figure 6. Notice also that all empirical results merge together, as expected, since the simulation captures the average influence of the time-varying channel. Moreover, the normalizing quantity is now a random variable, which means that the normalized channel gains no longer follow the original and desired fading statistics. This can be observed in Figure 7, where a Rayleigh probability density function (PDF) is plotted along with empirical PDFs of the magnitude of an entry of the normalized and the nonnormalized channel matrix. In this exemplifying situation, the entries of the channel matrix were independently drawn from a zero-mean complex Gaussian distribution, which would correspond to a Rayleigh fading channel if no channel normalization were made.
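The distortion caused by per-realization normalization can be verified numerically, as in the sketch below. The normalization convention used here (scaling each column so that its squared norm equals the number of sensors) is an assumption for illustration only; the convention adopted in [4] is defined there.

```python
# Sketch: draw many Rayleigh (complex Gaussian) channel columns, renormalize each
# realization, and observe that the magnitude of an entry no longer behaves like
# a Rayleigh variable (e.g., it becomes bounded), even though its second moment
# remains approximately 1.
import numpy as np

rng = np.random.default_rng(1)
m, trials = 6, 200000

g = (rng.standard_normal((trials, m)) + 1j * rng.standard_normal((trials, m))) / np.sqrt(2)
h = np.sqrt(m) * g / np.linalg.norm(g, axis=1, keepdims=True)   # per-realization normalization

print(np.mean(np.abs(g[:, 0]) ** 2))   # ~1.0: Rayleigh magnitude, unbounded support
print(np.mean(np.abs(h[:, 0]) ** 2))   # ~1.0 on average, but the distribution is changed
print(np.max(np.abs(h[:, 0])))         # never exceeds sqrt(m), unlike the Rayleigh tail
```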

4.4. Time-Varying and Nonnormalized Channel

This is the most realistic scenario from a practical point of view. The channel gains are not normalized, and a new channel matrix is randomly chosen in each simulation run. Again, the entries of the channel matrix are independently drawn from any complex distribution. The nonnormalized channel gains now follow the original and desired fading statistics. The transmission powers are computed in the same way described in the previous scenario. As in the previous scenario, the time variability of the channel cannot be captured by the theoretical analysis, and large disagreements between theoretical and empirical results appear, as can be noticed from Figure 8. Again, all empirical results merge together, since the simulation captures the average influence of the time-varying channel. The system parameters are the same as those adopted in the previous scenario, considering a Rayleigh fading channel and the same three primary user SNRs (in dB).

5. A Simple Method for Computing the Detection Probability of the ST over Fading Channels

A well-known method for considering the random variations of some parameter in the computation of a given quantity is to average the expression for computing that quantity, conditioned on the parameter, over the probability density function of the parameter. In the present analysis, from (12) it can be seen that the detection probability computed via (14) can be regarded as being conditioned on the determinant of the population covariance matrix and can be rewritten as a conditional detection probability. Then, when the channel matrix is random in order to represent a time-varying fading channel, the average detection probability can be computed by averaging this conditional expression over the PDF of the determinant, which in turn depends on the channel statistics. The analytical derivation of this average probability of detection is beyond our reach at present. Nevertheless, we give a simple alternative numerical computation of this average, as described and exemplified in the sequel. We call it a semianalytic solution. In [6], the average probability of detection was computed by averaging the expression of the detection probability from [4] over the PDF of the fading SNR. Since our semianalytic approach was validated here with simulation results, there is strong evidence that the averaging procedure adopted in [6] is not the correct one, though at first glance it seems to be.

Given the desired fading channel statistics, a number of random channel matrices is generated. The entries of each of these matrices are independently drawn from any complex distribution; the second moment of the magnitude of each entry must be equal to 1 to guarantee the desired received SNRs. For each of the channel matrices, a population covariance matrix under H1 is computed using the nonnormalized columns of that channel matrix. We stress that the original channel normalization used in [4] must not be applied, so that the desired fading statistics are preserved.

For each of these covariance matrices, detection probabilities for all threshold values are computed using (10)–(14), and an estimate of the average detection probability is obtained by averaging these conditional detection probabilities over all generated channel matrices, as illustrated in the sketch below.
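A minimal sketch of the semianalytic method follows. The helper pd_given_sigma1, which evaluates the detection probability of [4] conditioned on a given covariance matrix, is a hypothetical placeholder; the channel is drawn here from a Rayleigh (complex Gaussian) distribution as a case study, but any fading statistic can be used.

```python
# Sketch of the semianalytic averaging: draw channel matrices from the desired
# fading distribution (no normalization), build the H1 population covariance
# matrix for each, evaluate the conditional detection probability of [4], and
# average the results.
import numpy as np

def average_pd(gamma, p, sigma2, m, pd_given_sigma1, n_channels=1000, seed=0):
    """gamma: threshold; p: per-user powers; sigma2: noise power; m: sensors;
    pd_given_sigma1(gamma, Sigma1): placeholder for the expression of [4]."""
    rng = np.random.default_rng(seed)
    p = np.asarray(p, dtype=float)
    q = p.size
    acc = 0.0
    for _ in range(n_channels):
        H = (rng.standard_normal((m, q)) + 1j * rng.standard_normal((m, q))) / np.sqrt(2)
        Sigma1 = sigma2 * np.eye(m) + (H * p) @ H.conj().T   # per-realization covariance
        acc += pd_given_sigma1(gamma, Sigma1)
    return acc / n_channels
```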

To illustrate the accuracy of this simple method, in Figure 9 we present semianalytic and empirical results for some sets of system parameters. The channel is a Rayleigh fading channel and no channel normalization was applied, meaning that misfits similar to those shown in Figure 8 would be produced by directly applying the expressions for the detection probability given in [4]. We have used a large number of channel realizations in (17), but for practical purposes good fits can be achieved with a considerably smaller number.

Notice in Figure 9 that a misfit between the semianalytic and the empirical results has been produced for one set of system parameters, the same set that also produced a misfit in the case of the fixed-gain AWGN channel, as shown in Figure 3. However, this misfit has been inherited from the inaccuracy of the Beta approximation under H1 for the corresponding parameters, not from the average computed via (17).

We emphasize that the method just described can be used with any distribution of the entries of the channel matrix, even with nonzero mean, broadening the applications of the expressions derived in [4].

6. Performance of the ST Detector and Other Competing Detectors

Complementing the numerical results in [4], in this section the sphericity test (ST) detector is compared with the eigenvalue ratio (ER) detector, John's detector (JD), the energy detector (ED), the largest eigenvalue (LE) detector, and the scaled largest eigenvalue (SLE) detector in terms of ROC curves. The test statistics for these competing detectors are given in [4].
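For reference, commonly used forms of these test statistics are sketched below, computed from the sample covariance matrix and the nominal noise power. These forms are assumptions made here for illustration; the exact definitions adopted in [4] (and used in our results) are given there.

```python
import numpy as np

def competing_statistics(R, sigma2):
    """Commonly used forms of the ER, JD, ED, LE, and SLE test statistics,
    computed from the sample covariance matrix R and the nominal noise power."""
    lam = np.sort(np.linalg.eigvalsh(R))        # ascending eigenvalues of R
    m = lam.size
    tr = float(lam.sum())
    return {
        "ER": lam[-1] / lam[0],                 # ratio of extreme eigenvalues
        "JD": float(np.sum(lam ** 2)) / tr**2,  # John's detector (one common normalization)
        "ED": tr / (m * sigma2),                # average energy per sensor over noise power
        "LE": lam[-1] / sigma2,                 # largest eigenvalue (Roy's test)
        "SLE": lam[-1] / (tr / m),              # scaled largest eigenvalue
    }
```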

As in [4], here we also consider the effect of the worst-case noise uncertainty on the performances of the ED and the LE tests. In this case the noise variances under H0 and H1 are scaled by a factor determined by the noise uncertainty in dB. These modified noise variances are associated with H0 and H1 in [4] in the opposite way with respect to what is done here. Since the results in [4] are consistent with the correct association, this means that, in the comparison of the test statistics for the ED and the LE with the threshold, the noise variance information has been used in [4] differently from here. This is because, also differently from here, these test statistics were defined in [4] without making the noise variance information explicit. Here, the worst-case noise variance under H0 clearly must be the largest one within the uncertainty range, so that the test statistic is increased and the false alarm probability is also increased. Similarly, the worst-case noise variance under H1 must be the smallest one, so that the test statistic is decreased and the detection probability is also decreased.

6.1. Performance over Fixed-Gain AWGN Channels

In Section 4.2 of [4], the sphericity test is compared with the above-mentioned detectors. In Figures 3–5 of that paper, such comparison has been made under the assumption of a fixed and normalized channel, using a single channel realization for all results and all simulation runs. A performance ranking has been established under this scenario. However, as previously shown, different channel realizations change the detection probabilities and, as a consequence, modify the corresponding ROC curves. Since different detection techniques can be differently affected by the channel gains, the performance ranking, the performance gaps, or both can be modified from one channel realization to another. In this section we give an example of this situation and discuss the related interpretations given in [4].

In Figure 10 we present ROC curves for all detectors under analysis, considering that the channel matrix is the one in (18). Since we do not know which channel matrix has been used by the authors in [4], we have found this matrix through an exhaustive search process, attempting to approximately reproduce Figure 5(b) of [4]. All the system parameters are the same as those considered in [4] for the case at hand.

In Figure 11 we give ROC curves for the channel matrix in (19) and the same parameters considered for constructing Figure 10. The modification in the original ranking and in the relative performance gaps of the detectors is apparent. From this counterexample we can state that the interpretations in [4] cannot be considered general, since most of them are strongly dependent on the channel realization used to assess the performance of the detectors. This is not meant to state that the original expressions of [4] cannot be applied to AWGN channels with randomly chosen channel gains. These gains must carry some useful information on the physical conditions of the channel in what concerns their magnitudes and phase rotations. Simply drawing an arbitrary channel realization from a given distribution does not contribute much to the performance analysis of the cooperative spectrum sensing.

Particularly referring to the comparison between the performances of the ST and the JD, the variation of the detection probability versus the SNR for a fixed false alarm probability is reported in Table 1 of [4] for both detectors. The system parameters are those adopted in [4] for that table. From that table the authors of [4] have concluded that “when the SNRs of the primary users increase (eigenvalues of [the covariance matrix] become more distinct), ST detector achieves better performance than John's detector, though the difference is small.” Using the above parameters and a given channel matrix, we have found results approximately equal to those in Table 1 of [4]. Our results are shown in Table 1 and certify the above-mentioned conclusion for the given channel matrix. The bold-face numbers indicate the higher detection probability for a given SNR; the SNR corresponding to the crossing point between the performances of the JD and the ST is shifted from 0.5 dB in [4] to 2 dB here. On the other hand, by using a different channel matrix we have obtained the results shown in Table 2, from which we readily see the superior performance of the JD for all SNR values (again, the bold-face numbers indicate the higher detection probability for a given SNR). In fact, we have observed that this superiority has prevailed not only for the false alarm probability considered in the table, but over the whole range of false alarm probabilities. The difference in favor of the JD is higher than the differences in favor of the ST or the JD reported in [4]. As stated in [4], their performance gap is not expected to be large due to their identical asymptotic performance as measured by the Pitman efficiency. Notice, however, that the numbers of sensors and samples are not large enough for the asymptotic regime to apply, which justifies the larger gaps observed in Table 2. Moreover, from Figure 11 the superior performance of the JD for the channel matrix in (19) is clear, in the situation in which the number of primary users is larger than the number of sensors. This is in contrast with [4].

From the previous paragraph we see that the corresponding conclusions drawn in [4] relative to the performances of the ST and the JD are not always valid, for they depend on the channel matrix realization and on the numbers of sensors and samples.

In the case of two sensors, the JD and the ST detectors indeed achieve the same performance, since their test statistics are the same up to a linear transformation in this case. Their performances thus change with the channel realization, but remain identical to each other. In the case of a single primary transmitter, the JD is indeed preferable, no matter the realization of the channel matrix. These statements are in agreement with [4].

Comparing the SLE and the ER detectors, it is concluded in [4] that, when the number of active primary users is more than one, the ST detector outperforms the SLE detector. However, from Figure 11 one can see a contradiction.

It is also stated in [4] that the ST always outperforms the ER detector, which is justified by the fact that, “for the ER detector, the test statistic depends only on the extreme eigenvalues of the sample covariance matrix , whereas the test statistic of the ST detector is a function of all the eigenvalues of .” The superiority of the ST against the ER can also be verified from our results.

Also agreeing with [4], when there is no noise uncertainty, the ED and the LE detectors almost always outperform the ST detector. However, the performances of the ED and the LE detectors are very sensitive to noise uncertainty.

6.2. Performances over Time-Varying Fading Channels

In this subsection, the performances of the detectors over fading channels are assessed. New interpretations are given about the performance ranking and gaps. When considering a time-varying fading channel, the interpretations are general for the considered fading statistics, since the average detection and false alarm probabilities capture the average influence of the fading statistics, in addition to the average influence of the transmitted signal and noise statistics. However, there is still the possibility of variations in the performance ranking and gaps due to different configurations of the SNRs, which correspond to different primary transmit powers, even if the sum of these powers is kept unchanged. This situation is not considered as influential in [4].

We consider Rayleigh fading channels from the primary transmitters to the sensors. The channels are fixed during the sensing interval, and the entries of the channel matrix are independently drawn from a zero-mean complex Gaussian distribution with unitary second moment. This situation corresponds to the fourth scenario described in Section 4 (time-varying and nonnormalized channel). The entries of the transmitted signal matrix and of the noise vector are drawn from a zero-mean complex Gaussian distribution; the entries of the noise vector have unitary variance. In order to obtain the desired SNRs, the transmission power of each primary user is computed from its SNR, and the entries of the corresponding row of the transmitted signal matrix have variance equal to this power. The analytic results for the ST were computed according to Section 5. To plot each empirical ROC curve, 1000 equally spaced threshold values were used in a Monte Carlo simulation; the minimum and maximum threshold values were, respectively, obtained from the minimum and maximum values of the test statistics under H0 and H1, precomputed from a separate Monte Carlo simulation with 10000 runs. In the simulations, the threshold for the ST was varied throughout the corresponding range, and the thresholds for the remaining detectors were computed from the ST thresholds (taking the ER test as an example, the other ones were determined analogously); a sketch of the empirical ROC construction is given after this paragraph.
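The sketch below shows one way of building an empirical ROC curve for a single detector: the test statistic is collected under H0 and H1, a grid of thresholds spanning the observed extremes is swept, and the empirical (Pfa, Pd) pairs are recorded. The threshold mapping between detectors mentioned above is not reproduced in this sketch; each detector's own threshold range is simply swept.

```python
import numpy as np

def empirical_roc(stats_h0, stats_h1, n_thresholds=1000):
    """stats_h0, stats_h1: arrays with the test statistic collected under H0 and H1."""
    stats_h0 = np.asarray(stats_h0)
    stats_h1 = np.asarray(stats_h1)
    lo = min(stats_h0.min(), stats_h1.min())
    hi = max(stats_h0.max(), stats_h1.max())
    thresholds = np.linspace(lo, hi, n_thresholds)
    pfa = np.array([(stats_h0 > g).mean() for g in thresholds])   # empirical false alarm rate
    pd = np.array([(stats_h1 > g).mean() for g in thresholds])    # empirical detection rate
    return pfa, pd
```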

In Figures 12–14 we have adopted the same system parameters used to plot Figures 3–5 of [4], respectively, but only for two of the configurations (specified in dB) considered there. To plot Figures 15 and 16 we have used the parameters adopted in [4] for Figures 4 and 5, respectively, but only for one of the cases and for equal SNRs (equal primary transmitters' powers), keeping the same total transmit power. These parameters are summarized in the captions of these figures.

The first important observation from Figures 12–16 is the close agreement between the empirical and semianalytic performances of the ST, again certifying the accuracy of the method proposed in Section 5. It is worth highlighting that the small disagreement observed in Figures 13 and 15 (and earlier in Figure 3) is a consequence of the inaccurate Beta approximation under H1, not of the method described in Section 5.

As in the previous subsection, let us first compare the performances of the ST and the JD detectors. From Figures 12–16 one can see that their performance gap is not large, but the JD outperforms the ST in all situations analyzed, with a very small gap in the case of Figure 16. Even in higher SNR regimes, we did not find a situation in which the ST performs better than the JD in the fading channel, as was found in the case of the nonfading channel. However, from the small gap shown in Figure 16, we believe that such a situation can show up in the fading environment. As highlighted in [4], a complete understanding of the conditions under which the JD outperforms the ST, or vice versa, seems difficult due to the nonexistence of an accurate analytical ROC computation for the JD test.

In the case of two sensors, since the JD and the ST test statistics are the same, up to a linear transformation, their performances are the same also in the fading channel.

From Figures 13–16 one can observe that the SLE detector outperforms the ER detector only in the situation depicted in Figure 13. This partially contradicts the conclusion in [4] which states that when the number of active primary users is more than one, the ST detector outperforms the SLE detector.

Agreeing with the arguments in [4], which also apply to the fading channel, one can see that the ST always outperforms the ER detector.

Also agreeing with [4], when there is no noise uncertainty, the ED and the LE detectors always outperform the ST detector. However, the performances of the ED and the LE detectors are very sensitive to noise uncertainty.

Finally, we investigate the possibility of variations in the performance ranking and gaps due to different configurations of the SNRs, which correspond to different primary transmit powers. Figures 15 and 16 report results for equal SNRs, keeping the same total transmit power adopted for the scenarios depicted in the lower part of Figures 13 and 14, respectively. In both figures one can notice changes in the performance ranking and gaps. All tests had their performances degraded from the situation of unequal SNRs to equal SNRs, except the ED, which showed an improvement. This is an expected result, since the combination of the energies from each sensor becomes optimum when the SNRs are the same [7]. The LE detector seems to be less sensitive to the change from the unequal to the equal SNRs. The JD, ST, ER, and SLE have approximately the same sensitivity, although the slightly smaller sensitivity of the ER has been enough to allow it to be beaten by the SLE detector when going from the equal-SNR to the unequal-SNR scenario. Also, from Figures 15 and 16, it can be noticed that the sensitivity to the change from the unequal to the equal SNRs is larger for the JD, ST, ER, and SLE tests when the number of primary transmitters is larger.

The variations in the performance ranking and gaps due to different configurations of the SNRs can be justified as follows: although the eigenvalues of the received signal covariance matrix carry the interaction between the signal eigenvalues and the noise eigenvalues, their spread is highly influenced by the transmit powers. Since the test statistics operate differently on the eigenvalues, it is reasonable to expect that they will be affected by different amounts for different sets of transmit powers. This fact has also been noticed in [8], in the context of eigenvalue-based estimation of the number of sources, where one can see the large influence of the source powers on the accuracy of the estimate.

If the results considering a fading channel are compared with those considering nonfading scenarios, one can notice the small performance degradation when fading is taken into account. This expected small degradation is due to the diversity gain (which grows with the number of sensors) produced by the cooperative spectrum sensing over fading channels.

Last but not least, if we compare the performances of the ST over Rayleigh fading channels, as shown in Figures 12, 13, and 14, with those in Figures 3, 4, and 5 of [6], respectively, we can see that they are not in agreement. This confirms our belief that the expression used in [6] to compute the probability of detection is not correct, since our analytic results were validated by simulations; no simulation results were given in [6] for validating the results reported there.

7. Conclusions

In this paper we have shown that the system model adopted in [4] does not apply directly to fading channels and is considerably inaccurate for some system parameters and channel conditions. We have shown that the original expressions from [4] can be simply and accurately applied to a modified model that considers fixed or time-varying channels with any fading statistic. We have also analyzed the performance of the sphericity test and other competing detectors with a varying number of primary transmitters, considering different situations in terms of the channel gains and channel dynamics. Based on a large set of new results, we have corrected several interpretations from [4] concerning the performance of the detectors, not only over a fixed-gain additive white Gaussian noise channel, but also over a time-varying Rayleigh fading channel. Some typos identified in [4] have been corrected as well.

The main conclusions drawn from our results are related to the comparison among the sphericity test and the other detectors. Although some well-grounded interpretations given in [4] were verified in this paper, some of them were contradicted, mainly because of the high influence of the channel realization used for computing the detection probability over a fixed-gain AWGN channel. Specifically, a single channel realization randomly obtained from a given distribution cannot be used to assess the performance of the spectrum sensing, unless this single realization carries some physical meaning in what concerns the actual channel over which the system is expected to operate. Based on this fact, we have identified the possibility of changes in the performance ranking and performance gaps of the detectors depending on the chosen channel matrix realization. Moreover, we have also identified that the performance ranking and gaps are affected by the way in which the primary transmission powers are distributed, which further prevents all conclusions and interpretations given in [4] from being taken as general ones.

Along with the expressions in [4], the results reported here constitute important tools for the understanding, the design and the analysis of the sphericity-test-based cooperative spectrum sensing and other competing detectors over fading and nonfading channels, in the presence of multiple primary users.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was partially supported by Finep/Funttel Grant no. 01.14.0231.00, under the Radiocommunication Reference Center (Centro de Referência em Radiocomunicações, CRR) Project of the National Institute of Telecommunications (Instituto Nacional de Telecomunicações, Inatel), Brazil.