Abstract

Cognitive Radio (CR) is a novel technology that permits secondary users (SUs) to transmit alongside primary users (PUs). The communications of the PUs remain unaffected, whereas the SUs perform spectrum sensing and adaptive transmission to avoid collisions. Ultra-wideband sensing is of primary importance for an SU to sense and opportunistically access several bands at a time. Reliable detection over wide geographical regions requires collaborative sensing. Optimal collaborative multiband sensing is not analytically solvable unless approximations and restrictions of the solution domain are applied to exploit convexity. In this paper, we demonstrate that such convexity constraints are detrimental to performance. We propose an alternative optimization technique based on genetic algorithms. The genetic search finds the optimal solution directly, without approximations or restrictions of the solution domain. As a consequence, collaborative multiband sensing can be consistently optimized without limitations. Additionally, the genetic optimization exploits the correlation of time-varying channels for fast adaptive convergence.

1. Introduction

Recent studies have revealed a deep underutilization of the electromagnetic resource due to the static allocation of spectrum band licenses [1]. Large portions of spectrum remain unexploited during certain periods of time and in certain geographical regions, yielding an average 90% idleness of the licensed bands, the so-called spectral holes. The few unlicensed ISM bands are rapidly becoming overloaded due to the boom of wireless applications demanding sporadic access to the spectrum. Cognitive Radio (CR) is a novel technology introduced with the intent of intelligently exploiting the unused spectrum [2, 3]. In a cognitive network, secondary users (SUs) are allowed to operate in licensed bands, under the condition that they do not interfere with the transmissions of the primary users (PUs), the legal lessees of the license. Cognitive SUs detect the presence or absence of transmissions and identify the unused portions of spectrum. They may then adapt their time-frequency transmission parameters in order to occupy the detected spectral holes. Some new standards are already adopting this paradigm. IEEE P.1900, for instance, is developing a whole class of ad-hoc wireless networks based on Dynamic Spectrum Access (DSA) to the available spectrum [4]. The IEEE 802.22 standard is considering the TV bands of the Ultra-High Frequencies (UHFs) for Wireless Regional Area Networks (WRANs) [5, 6].

Spectrum sensing is performed to detect the existing transmissions and identify the spectral holes. Detection should be reliable and extensive in order to avoid unwanted collisions and find as many free resources as possible. Observing an ultra-wide range of frequencies at a time is a challenging task since it requires expensive high-speed RF equipment. Although there have been some proposals of wavelet decomposition for multiresolution analysis [7], a common approach is to use tunable bandpass filters and observe one band at a time. Multiband Joint Detection, proposed by Quan et al. in [8], applies narrowband sensing techniques to perform an opportunistic optimization of the throughput over multiple independent bands. The channel is divided into nonoverlapping narrow subbands which may be utilized by distinct primary systems or may be sensed blindly, that is, without knowing who is transmitting there. Energy thresholds for each subband have to be chosen for optimal detection aimed at maximizing the throughput and limiting the interference.

The sensing results of a single CR can be affected by shadowing or multipath fading [9]. An SU may cause harmful interference to the primaries because of an unreliable decision. Multiple geographically distributed CRs can perform collaborative sensing by combining their results and improve the detection performance. In Linear Statistics Combination (LSC), a simple fusion rule of the power levels is applied to produce a single, reliability-enhanced decision. The weights of the linear combination have to be determined by formulating an optimization problem. LSC has been shown to noticeably enhance the performance of individual sensors [10].

By applying LSC to multiband detection, Quan proposed the Spatial-Spectral Joint Detection [11]. Multiple independently sensed levels are combined band by band in order to increase the reliability in each subband. As a consequence, the useful throughput is increased while further limiting the interference.

Optimal multiband detection is achieved by formulating an optimization problem. Since in general the aforementioned formulations are nonconvex, some methods have been proposed to transform or approximate the problems and limit the solution domain in order to exploit the hidden convexity [8]. Such constraints bound the maximum interference and the minimum channel utilization, with a loss of generality in practical detection configurations.

In this paper we first propose an alternative formulation of the multiband sensing and then an implementation of genetic algorithms that solves the presented formulations of the detection problems. Our formulation of the multiband sensing interprets the limited-interference regime as a bound on the interference caused in each subband, rather than on the aggregate disturbance throughout the wideband channel. This yields a direct solution for the noncollaborative multiband problem. It also reduces the collaborative multiband sensing problem to a set of simple narrowband LSC optimizations, still keeping a controlled interference regime and the aggregate throughput as the objective of the maximization. We then propose genetic programming as a solving technique that avoids reformulations, approximations, and limitations. Genetic Algorithms (GAs) are extensively used to find true or approximate solutions in different communications engineering applications such as, for instance, network design [12] and adaptive modulations [13]. GAs perform a direct search of the best solution by considering sets of candidates and evaluating them individually by means of a fitness measure, the objective of the maximization. They then iteratively drop unfit elements and select the fittest ones for combination (reproduction) and random alteration (genetic mutation) [14]. GAs work one step above mathematical analysis, dealing well with nondifferentiable functions as well as functions with multiple local maxima. Exploitation of hidden convexity is not necessary, so no reformulations or approximations of the problem are performed. We show how genetic programming, by working directly at the root of the problem, can be an efficient and powerful optimization and analysis tool for all possible CR systems. Additionally, GAs are shown to exploit the correlation between different detection conditions in time-varying channels. A moving CR senses a channel with variable statistics. GAs exploit the statistical dependence of consecutive sensing events by performing an optimization that starts from the result of the previous instant. The sensing precision is drastically improved, as well as the convergence time.

The paper is organized as follows. In Section 2, the multiband detection framework is presented. Section 3 introduces collaborative sensing within the opportunistic multiband framework. Section 4 analyzes the genetic algorithms and the possibilities of application. Section 5 presents the general numerical results, Section 6 introduces the results for adaptive optimization, and Section 7 concludes the paper.

2. Opportunistic Multiband Sensing

Multiband sensing considers a channel divided into narrow subbands where one or more primary communication systems may be transmitting. A cognitive SU applies narrowband sensing to each subchannel in order to maximize its own transmission without harming the PU.

2.1. Narrowband Signal Detection

A CR constantly senses the spectrum to discover which of the subchannels are free of primary transmissions (Figure 1). Deciding for the condition of the single $k$th subband means posing the following binary hypothesis test [15]:

$\mathcal{H}_{k,0}: \ Y_k(n) = V_k(n)$,
$\mathcal{H}_{k,1}: \ Y_k(n) = H_k S_k(n) + V_k(n)$, (1)

where $\mathcal{H}_{k,0}$ represents the absence of the primary signal (only Gaussian noise $V_k$) and $\mathcal{H}_{k,1}$ represents the presence of the primary signal $S_k$, corrupted by the Gaussian noise $V_k$. Capital letters indicate that we are considering the frequency spectra. $H_k$ is the channel gain between the primary transmitter and the secondary receiver. The presence or absence of the primary signal in each subband is then verified as follows:

$T_k = \sum_{n=1}^{N} |Y_k(n)|^2 \ \gtrless \ \gamma_k$, (2)

where $\gamma_k$ is the subband energy threshold. The test statistic $T_k$ can be considered asymptotically normally distributed if the number of samples $N$ is large enough. The performance of the detection is calculated in terms of
(i) probability of identifying the spectral hole: $1 - P_{fa,k}(\gamma_k)$;
(ii) probability of missed detection (interference): $1 - P_{d,k}(\gamma_k)$,

where $P_{fa,k}$ is the probability of false alarm and $P_{d,k}$ is the probability of detecting a primary signal. The test statistic $T_k$ has a Gaussian distribution $\mathcal{N}(\mu_{k,0}, \sigma_{k,0}^2)$ under hypothesis $\mathcal{H}_{k,0}$ and $\mathcal{N}(\mu_{k,1}, \sigma_{k,1}^2)$ under $\mathcal{H}_{k,1}$.

Thus, $P_{fa,k}$ and $P_{d,k}$ can be calculated as the tail of a normal distribution:

$P_{fa,k}(\gamma_k) = Q\!\left(\frac{\gamma_k - \mu_{k,0}}{\sigma_{k,0}}\right)$, $\qquad P_{d,k}(\gamma_k) = Q\!\left(\frac{\gamma_k - \mu_{k,1}}{\sigma_{k,1}}\right)$.

Increasing the utilization of the channel implies higher interference to the PU. The thresholds have to be optimized in order to maximize the utilization while limiting the interference.
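As an illustration, the following minimal sketch evaluates the two tail probabilities under the Gaussian approximation of the test statistic; the function and variable names are illustrative assumptions and not part of the original framework.

# Sketch: per-subband false-alarm and detection probabilities of the energy
# detector under the Gaussian approximation (Q(x) is the Gaussian tail).
from scipy.stats import norm

def detection_probabilities(gamma, mu0, sigma0, mu1, sigma1):
    # gamma: energy threshold of the subband
    # (mu0, sigma0), (mu1, sigma1): mean and standard deviation of the test
    # statistic under H0 and H1, respectively
    p_fa = norm.sf((gamma - mu0) / sigma0)  # Q((gamma - mu0) / sigma0)
    p_d = norm.sf((gamma - mu1) / sigma1)   # Q((gamma - mu1) / sigma1)
    return p_fa, p_d

Raising the threshold lowers both probabilities at once, which is the trade-off between channel utilization and interference discussed above.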

2.2. Multiband Detection

Opportunistic multiband detection optimizes the aggregate throughput over the subbands while limiting the interference (Figure 1). The design objective is to find an optimal vector of thresholds $\boldsymbol{\gamma} = [\gamma_1, \dots, \gamma_K]^T$ such that the free spectrum holes are efficiently exploited and a controlled level of interference is produced. The probabilities of false alarm and detection can be represented compactly as the vectors $\mathbf{P}_{fa}(\boldsymbol{\gamma}) = [P_{fa,1}(\gamma_1), \dots, P_{fa,K}(\gamma_K)]^T$ and $\mathbf{P}_{d}(\boldsymbol{\gamma}) = [P_{d,1}(\gamma_1), \dots, P_{d,K}(\gamma_K)]^T$. Given that $1 - P_{fa,k}$ denotes the probability of detecting the free $k$th subband and $r_k$ denotes the achievable throughput over the $k$th subband, we can express the aggregate opportunistic throughput reached over the $K$ subbands as

$R(\boldsymbol{\gamma}) = \sum_{k=1}^{K} r_k \left(1 - P_{fa,k}(\gamma_k)\right)$. (8)

Similarly, if $1 - P_{d,k}$ is the probability of interference and $c_k$ is a measure of the cost caused by transmitting in the $k$th subband, then the aggregate interference can be expressed as

$I(\boldsymbol{\gamma}) = \sum_{k=1}^{K} c_k \left(1 - P_{d,k}(\gamma_k)\right)$. (9)

The maximization of (8) and the minimization of (9) are conflicting tasks. According to the formulation suggested by Quan et al. in [8], we limit the per-band interference ($1 - P_{d,k} \leq \alpha_k$), as well as the aggregate interference (9), and ensure a minimum per-band utilization ($1 - P_{fa,k} \geq \beta_k$). The optimization becomes finding the appropriate thresholds that maximize (8) with the aforementioned bounds [8]:

(P1) $\max_{\boldsymbol{\gamma}} \ R(\boldsymbol{\gamma})$ subject to $I(\boldsymbol{\gamma}) \leq \varepsilon$, $\ 1 - P_{d,k}(\gamma_k) \leq \alpha_k$, $\ 1 - P_{fa,k}(\gamma_k) \geq \beta_k$, $\ k = 1, \dots, K$.

The subchannel interference and utilization bounds can be translated into the linear constraint

$\gamma_{k,\min} \leq \gamma_k \leq \gamma_{k,\max}$, with $\gamma_{k,\min} = \mu_{k,0} + \sigma_{k,0}\,Q^{-1}(1 - \beta_k)$ and $\gamma_{k,\max} = \mu_{k,1} + \sigma_{k,1}\,Q^{-1}(1 - \alpha_k)$, (11)

to be imposed on (P1). Problem (P1) is convex if the utilization is at least 50% ($\beta_k \geq 0.5$) and the interference at most 50% ($\alpha_k \leq 0.5$). Although these prerequisites are reasonable in many cases, convex maximization is not able to solve (P1) without these impositions.
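The following sketch shows how a candidate threshold vector can be scored against the objective (8) and the constraints of (P1); the names (rates, costs, alpha, beta, eps) are assumed for illustration, with one array entry per subband.

# Sketch: aggregate throughput (8), aggregate interference (9), and a
# feasibility check for the constraints of (P1).
import numpy as np
from scipy.stats import norm

def aggregate_metrics(gamma, mu0, sig0, mu1, sig1, rates, costs):
    p_fa = norm.sf((gamma - mu0) / sig0)          # per-band false alarm
    p_d = norm.sf((gamma - mu1) / sig1)           # per-band detection
    R = np.sum(rates * (1.0 - p_fa))              # opportunistic throughput (8)
    I = np.sum(costs * (1.0 - p_d))               # aggregate interference (9)
    return R, I, p_fa, p_d

def feasible_P1(gamma, mu0, sig0, mu1, sig1, rates, costs, alpha, beta, eps):
    R, I, p_fa, p_d = aggregate_metrics(gamma, mu0, sig0, mu1, sig1, rates, costs)
    per_band_ok = np.all(1.0 - p_d <= alpha) and np.all(1.0 - p_fa >= beta)
    return per_band_ok and (I <= eps)

Such a scoring routine is exactly what a genetic search evaluates for each genotype, as discussed in Section 4.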

In Section 4, we introduce the genetic algorithms to solve the optimization for nonconvex CR systems, including the cases $\beta_k < 0.5$ and $\alpha_k > 0.5$.

3. Collaborative Multiband Sensing

The introduction of cooperative sensing aims at improving the sensing reliability in order to reduce the interference. Let us now consider a set of $M$ collaborating CRs operating in the primary region (Figure 2). The sensors individually detect the presence of primary transmissions by sensing the surrounding channel in each narrow subband and posing the binary condition (1).

3.1. Reliable Sensing

The receiving conditions of one single CR are subject to fading, so the sensing of only one terminal can produce unreliable results (Figure 2) [9].
(i) Hidden terminal: a sensor (CR1) is located behind an obstacle. It senses a low power signal from PTx and may thus decide for $\mathcal{H}_0$ even when the transmission is present, affecting the reception of PRx.
(ii) Far terminal: a sensor (CR2) lies outside the primary range. It receives a low power level due to the distance and thus decides for $\mathcal{H}_0$. Its transmission can produce interference to PRx, which is inside the primary range.

Space diversity is exploited in collaborative sensing to increase the reliability of the decision by combining the power levels sensed over a spatially distributed channel.

3.2. Collaborative Detection

Let us consider the signal received by each CR in one single narrow subband. Each CR applies the narrowband energy detector (2) to the $k$th subchannel, producing a test statistic $T_{k,m}$ for the $m$th sensor.

The sensing results are collected channelwise into a vector $\mathbf{T}_k = [T_{k,1}, \dots, T_{k,M}]^T$. Then, for each band, the sensed levels are linearly combined with a vector of weights $\mathbf{w}_k$ and compared to the subband threshold [10]:

$\mathbf{w}_k^T \mathbf{T}_k \ \gtrless \ \gamma_k$.

The weight factors represent the contributions of the different CRs in the respective subchannels. A high weight is more likely to correspond to a sensor with good SNR. The probabilities of false alarm (5) and detection (6) are now expressed as functions of both the weights and the threshold, $P_{fa,k}(\mathbf{w}_k, \gamma_k)$ and $P_{d,k}(\mathbf{w}_k, \gamma_k)$. Throughput and interference now depend on the weight matrix $\mathbf{W} = [\mathbf{w}_1, \dots, \mathbf{w}_K]$ and the threshold vector $\boldsymbol{\gamma}$:

$R(\mathbf{W}, \boldsymbol{\gamma}) = \sum_{k=1}^{K} r_k \left(1 - P_{fa,k}(\mathbf{w}_k, \gamma_k)\right)$, (15)

$I(\mathbf{W}, \boldsymbol{\gamma}) = \sum_{k=1}^{K} c_k \left(1 - P_{d,k}(\mathbf{w}_k, \gamma_k)\right)$. (16)
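A minimal sketch of the per-band linear combination and of the resulting collaborative probabilities is given below, assuming jointly Gaussian sensed levels; the input names (levels, weights, mean vectors, covariance matrices) are illustrative.

# Sketch: LSC in one subband and its false-alarm / detection probabilities,
# assuming the combined statistic is Gaussian under both hypotheses.
import numpy as np
from scipy.stats import norm

def lsc_decision(levels, weights, gamma):
    # levels, weights: length-M arrays (one entry per collaborating CR)
    return np.dot(weights, levels) > gamma   # True -> decide primary present

def lsc_probabilities(weights, gamma, mu0, cov0, mu1, cov1):
    m0, s0 = weights @ mu0, np.sqrt(weights @ cov0 @ weights)
    m1, s1 = weights @ mu1, np.sqrt(weights @ cov1 @ weights)
    p_fa = norm.sf((gamma - m0) / s0)
    p_d = norm.sf((gamma - m1) / s1)
    return p_fa, p_d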

Coherently with the noncollaborative formulation, the collaborative multiband detection proposed by Quan et al. in [8] is

(P2) $\max_{\mathbf{W}, \boldsymbol{\gamma}} \ R(\mathbf{W}, \boldsymbol{\gamma})$ subject to $I(\mathbf{W}, \boldsymbol{\gamma}) \leq \varepsilon$, $\ 1 - P_{d,k}(\mathbf{w}_k, \gamma_k) \leq \alpha_k$, $\ 1 - P_{fa,k}(\mathbf{w}_k, \gamma_k) \geq \beta_k$, $\ k = 1, \dots, K$.

The previous considerations apply. This formulation introduces further complications for convex optimization. The maximization is not convex and has to be lower bounded with another, convex function, which brings a loss of performance in terms of achievable throughput [8].

In the next section we introduce an alternative formulation of the multiband detection problem that can be solved by maximizing the LSC.

Then in Section 4 we introduce GA to solve directly the joint detection problem without the limitation of the convexity constraints.

3.3. Multiband Detection without Aggregate Constraint: LSC Optimization

Multiband detection without the aggregate constraint can be seen as a collaborative detection with multiband aggregation. This alternative formulation of the multiband detection problem aims at achieving optimal performance with linear-combination maximization applied to the single bands, while still maximizing the aggregate throughput. The LSC problem is less complex than the aforementioned formulations, which results in a faster and more precise solution with GA.

Let the secondary CRs sense $K$ subchannels, each one licensed to a different primary system. Then the aggregate interference (16) caused by the SU is not a relevant index of the caused harm, because it is distributed among distinct PUs. It is rather more critical to limit the interference in the individual bands and look for the best weights and thresholds that maximize the throughput (15):

(P3) $\max_{\mathbf{W}, \boldsymbol{\gamma}} \ R(\mathbf{W}, \boldsymbol{\gamma})$ subject to $1 - P_{d,k}(\mathbf{w}_k, \gamma_k) \leq \alpha_k$, $\ k = 1, \dots, K$.

No minimum utilization is imposed, given that it may not always be necessary. A weight optimization for the linear statistic combination must be performed in each subband in order to maximize the subchannel utilization:

$\max_{\mathbf{w}_k, \gamma_k} \ 1 - P_{fa,k}(\mathbf{w}_k, \gamma_k)$ subject to $1 - P_{d,k}(\mathbf{w}_k, \gamma_k) \leq \alpha_k$.

We can make the threshold explicit from the interference constraint by means of (14) and solve for $\mathbf{w}_k$ the following unconstrained maximization:

$\max_{\mathbf{w}_k} \ 1 - Q\!\left(\frac{\mathbf{w}_k^T(\boldsymbol{\mu}_{k,1} - \boldsymbol{\mu}_{k,0}) + Q^{-1}(1 - \alpha_k)\sqrt{\mathbf{w}_k^T \boldsymbol{\Sigma}_{k,1} \mathbf{w}_k}}{\sqrt{\mathbf{w}_k^T \boldsymbol{\Sigma}_{k,0} \mathbf{w}_k}}\right)$, (18)

where $\boldsymbol{\mu}_{k,0}$ and $\boldsymbol{\Sigma}_{k,0}$ are the vector of the average power levels and the matrix of covariances relative to the $k$th subband in case of $\mathcal{H}_{k,0}$, and $\boldsymbol{\mu}_{k,1}$ and $\boldsymbol{\Sigma}_{k,1}$ in case of $\mathcal{H}_{k,1}$, respectively.
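Under the assumptions above, the unconstrained objective can be evaluated numerically as sketched below; the closed form follows the reconstruction in (18), so the exact expression used in the paper may differ slightly, and all variable names are illustrative.

# Sketch: per-band utilization 1 - P_fa(w) once the threshold is fixed by the
# interference constraint 1 - P_d = alpha (Gaussian model, cf. (18)).
import numpy as np
from scipy.stats import norm

def subband_utilization(w, mu0, cov0, mu1, cov1, alpha):
    s0 = np.sqrt(w @ cov0 @ w)
    s1 = np.sqrt(w @ cov1 @ w)
    gamma = w @ mu1 + norm.isf(1.0 - alpha) * s1   # threshold from the constraint
    return 1.0 - norm.sf((gamma - w @ mu0) / s0)   # utilization to be maximized

Maximizing this function over the weights of each subband (with a GA, or with convex methods where applicable) solves (P3) band by band.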

Function (18) is again nonconvex in general. Convex maximization is feasible in the case $\alpha_k \leq 0.5$, where the hidden convexity is exploitable. Then separate convex subdomains may be considered for the application of techniques such as Semidefinite Programming (SDP), which requires complex reformulations in matrix form [16]. The eventuality of a system with nonexploitable convexity is not far from reality, especially when few CRs collaborate. Other solving techniques have to be implemented.

3.4. CR Systems Classification

A common classification of CR systems is based on the per-band utilization and interference (Figure 3) [10].
(i) Conservative system: a conservative CR system has a utilization less than or equal to 50% ($\beta_k \leq 0.5$) and a probability of interference smaller than 50% ($\alpha_k < 0.5$).
(ii) Aggressive system: an aggressive CR system achieves a utilization higher than 50% ($\beta_k > 0.5$) and an interference less than 50% ($\alpha_k < 0.5$).
(iii) Hostile system: a hostile CR system targets more than 50% of spectrum utilization ($\beta_k > 0.5$), causing at least 50% of interference ($\alpha_k \geq 0.5$).

The $Q(x)$ function is convex if $x \geq 0$ and concave if $x \leq 0$. Consequently, a necessary condition for (P1) and (P2) to have a convex objective with concave constraints is that $\beta_k \geq 0.5$ and $\alpha_k \leq 0.5$, that is, the system must be aggressive. For the problem (P1) this is also a sufficient condition, whereas when considering the weights (P2) the objective throughput has to be additionally lower bounded by a convex function, for which the aforementioned conditions are sufficient for convexity. The lower bound brings a loss of performance, as documented in [8]. The convexity region includes the boundary values of the conservative region, where $\beta_k = 0.5$, and of the hostile systems, where $\alpha_k = 0.5$ (Figure 3). The inside of the other regions is in any case not solvable with convex maximization.

The $\alpha_k$ and $\beta_k$ values translate, by means of (11), into bounds on the maximum and minimum per-band threshold, which in turn limit the aggregate interference and throughput from above and from below. An exemplary behavior is that an aggressive system may not transmit with a low bitrate, because a minimum per-band utilization is imposed. Nor can it cause less disturbance than implied by the lower threshold deriving from the utilization bound.

Section 4 thus introduces the genetic algorithm for solving these three sensing problems without convexity limitations.
(i) Individual Multiband Detection (P1).
(ii) Collaborative Multiband Detection (P2).
(iii) Collaborative Multiband Detection without aggregate interference constraint, LSC (P3).

We show how the genetic approach performs a direct search of the solution that provides the highest throughput by generating, comparing, and discarding candidate solutions. The advantage of such an approach is that it does not need any problem reformulations or mathematical constraints. It is thus suggested as an acceptable way of solving the sensing problems directly for all CR system classes.

4. Optimization Using Genetics

Genetic Algorithms (GAs) belong to the class of evolutionary models for solving optimization and maximization problems (OPs) [14]. The natural evolution processes are simulated by means of natural selection and survival of the fittest in order to find the maximum of a function. GAs have demonstrated excellent performance on complex multidimensional OPs. The nonconvex maximizations (P1), (P2), and (P3) are problems where many methods fail or manage to solve only a subset of cases.

4.1. The Genetic Algorithms

GAs consider a population of potential solution vectors to the OP and iteratively select good elements and drop unfit ones to let the best survive from generation to generation. The independent variables are called genes and they form a vector called the chromosome $\mathbf{x}$. The numerical realization of such variables (the genotype) is a potential solution to the problem, also referred to as an individual or an element of a population. The genes are different for the noncollaborative and collaborative problems. Namely, the chromosome is

$\mathbf{x} = \boldsymbol{\gamma}$ for the noncollaborative problem, $\qquad \mathbf{x} = [\mathbf{w}_1^T, \dots, \mathbf{w}_K^T, \boldsymbol{\gamma}^T]^T$ for the collaborative one. (20)

The genes are the independent variables of the throughput function (8), which is the objective of the maximization. A generation is the set of genotypes at the $t$th iteration. The subscript distinguishes the genotypes of a certain generation:

$G_t = \{\mathbf{x}_1^{(t)}, \mathbf{x}_2^{(t)}, \dots, \mathbf{x}_P^{(t)}\}$.

The index of the generation is $t$. In order to determine which genotypes fit the problem, a fitness score is calculated for each element:

$f_i^{(t)} = f(\mathbf{x}_i^{(t)})$.

The fitness function $f$ is the objective of the maximization, that is, the throughput (15) in (P1) and (P2) and (18) in (P3). Consecutive generations of solutions and their scores are examined iteratively in order to determine which individuals are suitable for surviving to the next generation, as shown in Figure 4.

Generation zero is created randomly, whereas each subsequent generation is created recursively in three steps.
(1) Selection: an intermediate population is created by performing the (natural) selection. The individuals with the best fitness scores are duplicated (in a predefined percentage) and the rest are discarded.
(2) Crossover: the intermediate population is recombined to simulate the reproduction. Surviving elements are taken pairwise and, according to a mixing criterion, a number of children members are created from each couple.
(3) Mutation: a percentage of the offspring randomly mutates to create new genotypes. Spreading the search at each generation avoids getting trapped in a local maximum due to deception (see Section 4.2).

After these steps a new generation is created from the previous one, with elements closer to the optimal solution (Figure 4). The children elements that do not respect the constraints are dropped. An equal number of elements is regenerated and the constraints are reevaluated, which can increase the computational load.
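A minimal sketch of the generation loop described above follows; the population size, elite fraction, mutation rate, and crossover rule are illustrative choices, and constraint handling (dropping and regenerating unfit children) is omitted for brevity.

# Sketch: plain genetic maximization of a fitness function with selection,
# uniform crossover, and random mutation.
import numpy as np

def genetic_maximize(fitness, n_genes, pop_size=60, generations=200,
                     elite_frac=0.2, mutation_rate=0.1, bounds=(0.0, 1.0),
                     rng=np.random.default_rng()):
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, n_genes))       # generation zero
    for _ in range(generations):
        scores = np.array([fitness(x) for x in pop])           # fitness evaluation
        order = np.argsort(scores)[::-1]
        elite = pop[order[:max(2, int(elite_frac * pop_size))]]   # selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = elite[rng.integers(len(elite), size=2)]      # parent couple
            child = np.where(rng.random(n_genes) < 0.5, a, b)   # uniform crossover
            if rng.random() < mutation_rate:                    # genetic mutation
                child = child.copy()
                child[rng.integers(n_genes)] = rng.uniform(lo, hi)
            children.append(child)
        pop = np.vstack([elite, np.array(children)])
    scores = np.array([fitness(x) for x in pop])
    return pop[np.argmax(scores)]                               # fittest genotype

In the detection problems, the fitness would be the throughput evaluated as in the sketches of Sections 2 and 3, with infeasible genotypes penalized or regenerated.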

The evaluation and generation steps are performed iteratively in order to increase the percentage of fit members. The computation is terminated when the fitness of the population remains unchanged for a sufficient number of generations or when the maximum number of generations is reached. The genotype with the highest fitness is chosen as the final solution. The convergence precision depends on multiple factors and directly influences the time consumption of the process, as discussed in the following sections. For more details about GAs refer to [14, 17].

4.2. Feasibility of the Genetic Approach

GAs are introduced because of the limitations observed when treating the intrinsically nonconvex detection problem with convex optimization methods. There is a single convex domain in (P1) and (P2), delimited by the constraints (19) [8]. LSC (18) has two convex subdomains under a maximum interference bound, to be treated separately. Each one can be optimized with complex methods such as the SDP technique [16]. Other methods, such as the so-called hill-climbing methods, need the objective function to be well behaved, that is, continuous along with its derivatives. The function also has to be unimodal, that is, with only one peak, because with many peaks the search may stop at the first relative maximum encountered.

The main motivation for using GAs is that they solve such complex multidimensional problems without in-depth function study, constraints, or reformulations. GAs do not have mathematical limitations such as the convexity requirement. They abstract from the smoothness of the objective function, because they evaluate isolated points, ignoring discontinuities, cusps, and inflections. GAs also perform well in the presence of multiple relative maxima (e.g., in the presence of ripples) by spreading the population and evaluating as much genotype variety as possible. This way of working one step above the complications of function analysis makes GAs suitable for solving the multiband detection problem with any values of $\alpha_k$ and $\beta_k$.

Drawbacks of genetic programming are deception and the computational load. Deception is the survival of an apparently fit subpopulation that leads the search away from the global optimum [18]. This is equivalent to saying that local maxima may cause ambiguity that lets a GA converge away from the global maximum or not converge at all in reasonable time (slow finishing). Although this is in general not desirable, it has been proven that it is worth using a GA if the OP to be solved presents a certain degree of deception, whereas regularly behaved problems are better solved with other methods [18]. In fact, GAs have a strong aptitude for escaping from local maxima by spreading the search around. Appropriately setting the parameters of our GA is fundamental to finding an optimal configuration for each kind of problem.

GAs also have a characteristic that makes them profitable in time-varying channels, such as the mobile radio channel. The channel statistics vary due to the movement of the sensors with respect to the PTx. At each sensing instant, reliable cooperation requires an optimization of the weights to adapt to the new statistics. Performing a new optimization procedure from scratch at each sensing instant is inefficient if it is done with SDP or other convex maximization methods. Since the statistics are not supposed to vary too much, the weights of each sensing instant are correlated with the preceding ones. GAs can keep memory of the previous weights and use them as the starting point for the new elaboration. The convergence is faster since the starting vector should already be close to the new optimum.
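A sketch of this memory mechanism is given below: the new initial population is built around the weights found at the previous sensing instant instead of being drawn at random (the perturbation spread is an illustrative parameter).

# Sketch: seeding the GA of the current sensing instant with the previous
# solution, to exploit the correlation of the time-varying channel.
import numpy as np

def seeded_population(previous_best, pop_size, spread=0.05,
                      rng=np.random.default_rng()):
    prev = np.asarray(previous_best, dtype=float)
    pop = prev + spread * rng.standard_normal((pop_size, prev.size))
    pop[0] = prev                      # keep the previous optimum unchanged
    return pop

The memoryless alternative simply redraws generation zero at random at every instant, as in the basic loop of Section 4.1.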

4.3. Computational Cost

The computational effort is measured as the number of function evaluations that have to be performed to complete a computation. As function evaluations we count both fitness evaluations and constraint verifications indistinctly, since they have the same form and imply the same number of floating-point operations. Comparisons and data duplications are negligible. In order to converge with a certain accuracy, our GA may need large populations and/or more generations, whereas small populations may result in insufficient precision or slow finishing.

The number of computations grows linearly with the population size. Since the aggregate interference constraint cannot be made explicit, it has to be computed for each member. Members that do not respect the constraint are regenerated and a further constraint evaluation is performed. In (P1) the verification of the per-band interference and per-band utilization (10) is made in the genotype space, so it does not require further function evaluations and does not increase the computational load. All constraints in (P2) have instead to be verified for each generation. In general, if we denote by $n_e$ the number of function evaluations that have to be computed for each member, then the total number of computations for a genetic optimization of $P$ members over $G$ generations is

$C = n_e \cdot P \cdot G$.

In unconstrained optimization such as (P3), $n_e = 1$. In the multiband problems (P1) and (P2), $n_e \geq 2$, because of one fitness evaluation and one constraint verification. The value of $n_e$ grows with the average number of regenerations needed to find one constraint-fitting member. This depends on the crossover criterion, on the random mutation, and on the stochastic realizations of the process, as well as on the problem itself. This index of computational effort is quite critical because the algorithm can undergo several computations before establishing a valid offspring.
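The bookkeeping above can be summarized in a one-line cost estimate; the symbols follow the notation reconstructed in this subsection.

# Sketch: total number of function evaluations C = n_e * P * G.
def evaluation_count(n_e, pop_size, generations):
    # n_e: evaluations per member (fitness plus constraint checks, including
    # the extra evaluations due to regenerated members)
    return n_e * pop_size * generations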

5. Simulation Results

We now analyze the performance achieved by a multiband collaborative detection framework. The optimization is conducted with variable bounds in order to compare convexity-limited and nonconvex systems. The testbed channel has subbands, secondary rates between and kbps, and individual interference costs between and . The SNR in each band is between and dB. The length of the detection interval is . Throughput-interference (I-R) characteristics are depicted for the three CR classes in Figure 5. The collaborative and noncollaborative systems differ in the LSC applied in the subbands. Interference and utilization in the single bands are linked together, as shown by the curves in Figure 6. The collaborative case has an improved reliability, which corresponds to a smaller interference. On the other hand, (P1) optimizes only the thresholds, whereas (P2) requires the joint optimization of a nonhomogeneous set of variables (weights and thresholds), inevitably yielding a more complex problem. Two procedures have been proposed in [8] to solve (P2).
(1) Sequential optimization performs at first a spatial optimization that maximizes the modified deflection coefficient,

$d_m^2(\mathbf{w}_k) = \frac{\left[\mathbf{w}_k^T(\boldsymbol{\mu}_{k,1} - \boldsymbol{\mu}_{k,0})\right]^2}{\mathbf{w}_k^T \boldsymbol{\Sigma}_{k,1} \mathbf{w}_k}.$

It then performs a spectral optimization of the thresholds as if there were only one sensing CR. Sequential optimization performs close to the optimal global solution. The spectral optimization is actually the same procedure followed to solve the threshold optimization in the noncollaborative case.
(2) Joint optimization finds thresholds and weights directly for a global maximization of the throughput. It is optimal in a global sense. Exploitation of hidden convexity needs heavy approximations, so that the final performance is compromised [8].

The single-band weight optimization (P3) is then also analyzed by discussing the throughput graphs.

5.1. Analysis of Nonconvex Classes of Multiband Detection Systems

The multiband frameworks (P1) and (P2) suffer from the same limitations imposed by the convexity constraints. The latter additionally has to be lower bounded when not solved with sequential optimization, which reduces the achievable throughput [8]. The characteristics are analogous for the two problems, because the utilization and interference bounds have the same implications, so the presented results are valid for both frameworks. With the genetic solution, the I-R characteristics are calculated for the three CR system classes with the aggregate interference as the independent variable. Figure 5 shows four case studies for the problem (P1). The case studies are also presented schematically in Figure 3.

The $\alpha_k$ and $\beta_k$ values are bounds for the maximum per-band interference and the minimum per-band utilization. By choosing a minimum subband utilization we also impose a minimum interference, and vice versa, since these two quantities are both determined by the threshold $\gamma_k$. Figure 6 shows the utilization-interference characteristics. If convexity imposes operating inside the shaded region, then it is impossible to reduce the interference below a certain value, resulting in compromised performance. The region with less utilization but also less unnecessary interference, excluded by the convexity constraints, is included in the genetic optimization. Convexity exploitation through the utilization constraint is therefore counterproductive for the performance of the system; hence the importance of a nonconvex maximization tool such as a GA.

Different CR systems show different achievable throughput because more or less interference and utilization is permitted in the single bands, as shown in Figure 5.

Conservative systems break the convexity with values of utilization below 50%. By reducing $\beta_k$ we allow transmitting with small bitrates over the subchannels with poor SNR, which are the cause of a high interference. A higher percentage of false alarms is implicitly provoked, but the trade-off is more favorable, so that the operative point has a higher throughput and lower interference as $\beta_k$ decreases. The affected region is mainly that of low aggregate interference, whereas, asymptotically, the systems have the same characteristic. The case commonly used in the literature for demonstration purposes [8] is largely outperformed. Aggressive systems with $\beta_k$ higher than 0.5 show even less favorable characteristics, with low rates at high interferences.

Hostile systems have a maximum interference beyond 50%. By increasing this value we permit more per-band interference with a gain in the throughput. The missed detections increase, so the bitrate increases with a higher slope, because more disturbance in the individual bands is permitted. The other systems, with low $\alpha_k$, increase with a poorer slope. This mainly affects the higher interferences, whereas small interferences lie well within the bound and the operative point remains similar.

Aggressive systems may not transmit with low bitrate and low interference because a minimum is imposed on both of them. They pay the mild per-band interference and the acceptable per-band utilization with a worse I-R characteristic.

Figure 7 shows the bar diagrams of the thresholds and the per-band average interference and utilization, for one case of each system in the multiband problem. We notice that the limits for convexity are exceeded. On one hand, more interference is caused in some bands, but with a gain in the throughput that is reflected in the I-R characteristics in Figure 5. On the other hand, less utilization is permitted in other bands, especially those with poor SNR (2nd and 5th bands), but with lower interference. Such systems bring an improvement with respect to the aggressive case for higher and lower interferences, respectively.

Not all combinations of $\alpha_k$ and $\beta_k$ are admitted. Some combinations are unfeasible if the utilization limit implies an interference that does not respect the condition on $\alpha_k$. A low SNR may favor the appearance of such cases simply by bringing the Gaussian pdfs of the sensed levels close to each other.

5.2. Analysis of Multiband Detection without Aggregate Constraint

By removing the constraint on the aggregate interference, the problem of maximizing the utilization with a per-band interference constraint becomes a series of independent LSC optimizations. Optimum weights are calculated to provide the maximum probability of transmission in each subband with a fixed probability of interference. The thresholds are implicit. LSC provides the transmission-versus-interference probability curves in Figure 6. The case with one CR is equivalent to a multiband aggregation problem, but the solution is immediate by means of (17). The aggregate throughput is then calculated afterwards as a linear combination of the subchannel rates. The total rate against the interference to each subchannel system is plotted in Figure 8, which is a direct result of the reliable detection whose characteristics are shown in Figure 6. Besides, LSC is far simpler to solve, both for GAs, which converge more easily, and for convex maximization, when working with interference below 50%.

5.3. Genetic Design

Setting up the parameters of our GA is important to optimize the computation, both in terms of the expenditure of resources and of the convergence precision.

When using a GA, as with any iterative solving method, a finite difference between the true maximum and the one computed by the GA is expected, because of the finite number of iterations and individual evaluations. The optimization is considered solved when it approaches a negligible error. An error of tens of kilobits (out of some megabits) on the aggregate throughput is considered acceptable. For the aggregate throughput distance in (P1) and (P2) we use a relative measure, the Mean Absolute Percentage Error (MAPE):

$\mathrm{MAPE} = \frac{100\%}{N_{\exp}} \sum_{i=1}^{N_{\exp}} \frac{|R_i - R^*|}{R^*}$.

From the collaborative side, a small squared error on the subband fractional utilization ($1 - P_{fa,k}$) is also acceptable. The subchannel utilization error is measured with the Mean Squared Error (MSE):

$\mathrm{MSE} = \frac{1}{N_{\exp}} \sum_{i=1}^{N_{\exp}} \left(u_i - u^*\right)^2$,

where $R_i$ and $u_i$ are the values calculated by the GA during the $i$th experiment and $N_{\exp}$ is the number of experiments. $R^*$ and $u^*$ represent a solution of the maximization several orders of magnitude more precise than the other optimizations. They are obtained at the expense of a practically unfeasible algorithm but are accurate enough to evaluate the precision of the other computations.
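The two accuracy measures can be computed as sketched below, assuming the reference values are available from a separate high-precision run; the names are illustrative.

# Sketch: MAPE of the aggregate throughput and MSE of the subband utilization
# over repeated experiments, against high-precision reference values.
import numpy as np

def mape(throughputs_ga, throughput_ref):
    r = np.asarray(throughputs_ga, dtype=float)
    return np.mean(np.abs(r - throughput_ref) / throughput_ref) * 100.0

def mse(utilizations_ga, utilization_ref):
    u = np.asarray(utilizations_ga, dtype=float)
    return np.mean((u - utilization_ref) ** 2)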

We vary the dimension of ordinary elaborations in order to find a compromise between complexity and accuracy. By setting the value of the population size $P$ we control the dimension of the GA in order to evaluate a wider range of genotypes and generate a fitter population. By optionally setting a limit on the maximum number of generations $G$ we avoid the algorithm running for a long time before converging.

The mean number of function evaluations needed to complete an elaboration, $C$, is our index of complexity; it depends directly on the population size and on the number of generations.

We also examine different crossover functions, since a well-chosen crossover criterion converges faster and with higher accuracy.

5.3.1. Spectral Optimization

This consists of the optimization of the thresholds for the noncollaborative problem (P1), as well as of the spectral step of the sequential optimization of (P2), with the weight part solved separately (modified deflection coefficient or GA applied to LSC). The characteristics for noncollaborative detection are shown in Figure 5. A MAPE of less than 0.1% is enough to infer that the algorithm has converged with acceptable precision. Figure 9 shows the convergence precision while varying the population size, in terms of the Cumulative Distribution Function (CDF) of the relative error.

5.3.2. Joint Optimization

This consists of the joint optimization of the spatial and spectral variables in (P2). The optimized characteristic is analogous to the one shown in Figure 5. The genes are not homogeneous (weights and thresholds) according to (20), so the evolution is more complex. An expedient that helps the convergence is to set some initial points as the starting population of the GA. We first obtain the initial weights from the maximization of the modified deflection coefficient (with a simple GA or with the procedure in [10]):

$\mathbf{w}_k^{(0)} = \arg\max_{\mathbf{w}_k} \ d_m^2(\mathbf{w}_k)$.

The initial thresholds are then drawn uniformly between the minimum and maximum calculated by means of (11):

$\gamma_k^{(0)} \sim \mathcal{U}\left(\gamma_{k,\min}, \gamma_{k,\max}\right)$,

where $\gamma_{k,\min}$ and $\gamma_{k,\max}$ derive from the per-band utilization and interference bounds. With these starting points the GA needs only a few generations of joint elaboration to converge to the global optimal solution. Figure 10 shows the CDF of the squared error with different population sizes.
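A sketch of this seeding strategy follows; the weight part of every chromosome starts from the deflection-coefficient solution and the threshold part is drawn uniformly between assumed per-band bounds.

# Sketch: initial population for the joint optimization of (P2), with
# chromosomes [w, gamma] seeded from w0 and uniform thresholds.
import numpy as np

def initial_population_joint(w0, gamma_min, gamma_max, pop_size,
                             rng=np.random.default_rng()):
    w0 = np.asarray(w0, dtype=float).ravel()
    gamma_min = np.asarray(gamma_min, dtype=float)
    gamma_max = np.asarray(gamma_max, dtype=float)
    K = gamma_min.size
    pop = np.empty((pop_size, w0.size + K))
    pop[:, :w0.size] = w0                                        # spatial part
    pop[:, w0.size:] = rng.uniform(gamma_min, gamma_max,
                                   size=(pop_size, K))           # spectral part
    return pop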

5.3.3. Weights Optimization

This consists of the optimization of the subband weights (18), whose result is shown in Figure 6. It solves the alternative multiband optimization without aggregate constraint (P3). An acceptable convergence is said to be reached with a sufficiently small MSE. Figure 11 shows the CDF of the squared error reached by the GA in the weight optimization with various dimensions of the population. Since the weight optimization is unconstrained, the computational load is exactly $C = P \cdot G$.

6. Variation of the Channel Statistics

Let us now consider the variation of the receiving conditions of the sensors due to movement. Channel statistics in the presence of moving sensors are supposed to vary in time with a certain correlation. A simulated variation in time of the SNR at the radio interface of one CR is shown in Figure 12. The statistic of the received level in the presence of a transmission has a lognormal distribution, as it derives from long-term fading and shadowing. The cutoff of the sensor is also simulated, in case of a sudden loss of the sensing contribution. A similar variation is followed by each node in the simulation. The correlation of the measurements is exploited by keeping the result of one elaboration and refining it at the successive instant. Figure 13 shows the precision reached by running the GA for the LSC optimization with a fixed number of generations at each instant. The two curves correspond to a genetic optimization started from random points every time (memoryless) and one started from the previous weights (with memory). We can see that an order of magnitude of MSE is gained by iteratively updating the objective function (with the channel statistics) and keeping track of the previously calculated weights. Figure 14 shows instead the number of generations needed to reach a certain precision, for the memoryless optimization and for the optimization that uses at each instant the previous weights as the starting point. On average, 20 fewer generations are needed to converge, since the weights are tightly correlated between consecutive instants.

7. Conclusion

GAs were proposed as a valid technique for solving the detection problem efficiently and without convexity constraints. The solution is practically equivalent to the true mathematical maximum. GAs are able to exploit the correlation of the mobile channel. Impractical limits due to mathematically unfeasible problems are avoided. Conservative systems are shown to outperform aggressive systems, and the throughput increases as the minimum subband occupancy is reduced. In general, the complexity of our GA proves to be sustainable and controllable. The computational load does not increase excessively as the sensing problem grows or as the GA dimension increases.