Abstract

This work addresses the reduction of energy consumption in the receiver chain by limiting the use of the equalizer. The aim is to make the radio receiver aware of its environment and able to decide whether to turn the equalizer on or off according to whether it is necessary. When the equalizer is off, the computational complexity is reduced, and the rate of reduction depends on the percentage of time during which this component is disabled. In order to implement this scenario of adapting the use of the equalizer, we need a decision-making technique that provides the receiver with awareness of, and adaptability to, the state of its environment. To this end, we improve a technique based on a statistical modeling of the environment by defining two metrics as channel quality indicators that evaluate the effect of the intersymbol interference and of the channel fading. The statistical modeling technique makes it possible to take into account the impact of the uncertainties of the estimated metrics on the decision making.

1. Introduction

The requirements in wireless communications are constantly growing in terms of services and transmission quality. Technological advances in this direction, both in the nodes and in the base stations, have improved the quality of service offered to the users of wireless communication networks. However, these advances are not without side effects; with this development and with the growing number of mobile phones, wireless communications consume more and more energy, causing an increase in emissions. Faced with this phenomenon, solutions that make communications less energy consuming are needed for the preservation of the environment. With this aim, the concept of green communications, or green radio, seeks to develop radio equipment that consumes less energy. One way to obtain a green radio is to make it able to adapt dynamically to its environment and to use this adaptability to reduce energy consumption [1]. Indeed, according to the state of its environment, a radio receiver with cognitive capacities can take reconfiguration decisions, and can therefore select the reconfigurations that require less energy than others.

Within this context, we focus in this paper on reducing energy consumption in the radio receiver through its adaptation to its environment. In particular, we consider the equalizer component and seek to limit its use in order to reduce energy consumption. The idea is to make the radio receiver aware of its environment so that it can decide to turn off the equalizer when it is not useful and to turn it on as soon as it becomes necessary, so that the receiver performance does not degrade. Dispensing with this component for part of the communication time should reduce the energy consumed in the receiver chain. We therefore also try to determine from what fraction of time with the equalizer disabled we actually start to save energy.

The remainder of this paper is organized as follows. Section 2 describes previous works that considered switching off the equalizer for various reasons. In Section 3 we develop the decision-making method used to decide when to turn off the equalizer. In Section 4 we present the results of the computational complexity reduction together with other simulation results.

2. Previous Works

In the literature, the idea of limiting the use of the equalizer during part of the communication is not new; some previous research works have addressed it for reasons other than the reduction of energy consumption. This is the case in [2], where the authors adapt the use of the equalizer to the state of the channel in order to avoid the presence of this component in an environment without delay spread distortion, since it can then cause sensitivity degradation. The solution proposed in that work consists of two coherent detection algorithms; the first contains a decision feedback equalizer and the second operates without any equalizer. A detector selection algorithm dynamically selects the appropriate detector depending on whether delay spread distortion is present or not. For this, the correlation of the two kinds of detectors is measured and the selection algorithm chooses the one whose correlation exceeds a threshold. The drawback of this invention is that the equalizer is always running and an additional detector stage without equalizer is added, which makes the computational complexity of the solution very high. In [3], the authors proposed a radio receiver with a selectively disabled equalizer. The aim of that work is to develop a technique for selectively and automatically disabling the equalizer in particular situations in order to avoid a prejudicial impact of the equalization in these cases. In particular, the discussed case is to disable the equalizer as long as a spurious signal is present in an FM radio receiver. For this, an equalizer controller compares the demodulated RF signal with a predetermined distribution for the received signal in order to determine whether a spurious signal is present, and hence whether to disable or keep the equalizer. The energy consumption aspect was not considered in this work since it was not the essential purpose of the invention.

The purpose of reducing energy consumption started to be highlighted in [4]. In this work, the equalizer is powered off for reduced power consumption when the intersymbol interference is small, and it is powered on for better signal reception. The idea is based on the analysis of the received signal quality after demodulation: a decision circuit inserted at the receiver compares the quality of the demodulated signal to a predetermined threshold. Using an integrator combined with a low-pass filter, a decision-error estimate of the signal quality is compared to a threshold, which allows the receiver to decide whether the quality of the signal is poor or good. According to this decision, the equalizer is powered on or off. In determining the quality of the received signal, the authors do not specify whether the degradation of the signal is due to the intersymbol interference or to other phenomena; they rely on the total degradation. For the same reason of reducing energy consumption, [5] proposes a method called conditional equalization, which minimizes the power consumption by equalizing only when it is most needed. At each time slot, the receiver decides whether equalization is needed by evaluating two criteria that distinguish weakly dispersive channels from dispersive channels in the case of a two-path Rayleigh channel. The first criterion (C1) is based on the paths' energies: the receiver decides not to equalize the received signal only if the energy of one path is far greater than the total energy of the other paths. This criterion is simple to implement, so the additional computational complexity is small [6]. The second criterion (C2) is based on the contribution of the intersymbol interference to the degradation of the signal. For this, the path with the greatest energy must be determined, and the receiver decides not to equalize only if the projection of the other paths on the axis defined by this strongest path is far smaller, which means that the effect of the intersymbol interference is negligible. In this work, the authors assumed that the channel coefficients are perfectly estimated and known; in addition, they did not study the impact of the variation of the channel on the decision system. The reduction of energy consumption by disabling the equalizer was also addressed in [7], where the authors developed a receiver with a hybrid equalizer and RAKE receiver. The receiver enables or disables the equalizer according to the performance of the system for the RAKE alone and for the RAKE with the equalizer. A measure of the channel quality allows switching between the RAKE-only mode and the RAKE-with-equalizer mode. For this, two evaluations are taken into consideration: the first is the measure of the Signal to Interference plus Noise Ratio (SINR) and the second determines the speed of the channel. Finally, we mention the work [8], which proposes a technique for selecting between an equalizing demodulator and a nonequalizing demodulator in a receiver.

Although the previous works had the objective of reducing the energy consumption or the computational complexity, none of them quantified this reduction, nor even established that such a reduction exists for their solutions. In our work, we present precise results concerning the rate of complexity reduction obtained when the use of the equalizer in the receiver chain is limited, as well as the maximum gains that can be achieved. Moreover, our work is distinguished from the earlier works described above by its decision-making technique. We develop a technique that takes into account the uncertainty of the channel quality indicators used to decide whether to disable or keep the equalizer, since these indicators are estimated, unlike the other techniques which ignore this aspect either by assuming a perfect estimate of the metrics [5, 6] or by not considering the impact of these uncertainties on the decision [7, 8].

3. A Decision-Making Method to Adapt the Use of the Equalizer

Our idea is to statistically model and characterize the radio environment by using techniques of statistical inference such as statistical estimation [9] and statistical detection [10]. For this purpose, we define metrics that evaluate the quality of the channel. Based on these channel indicators and on their statistical characteristics, we can evaluate the current state of the channel and then decide whether the equalizer is necessary. Using the statistical characterization of the channel quality indicators makes it possible to take into account, in the decision-making system, the estimation errors of these metrics, which affect the decision performance. In the following paragraphs, we present the decision-making method based on the statistical modeling of the environment for the scenario of driving the equalizer in the receiver chain.

3.1. Definition of the Channel Quality Indicators

The first step is to define the channel quality indicators that are necessary to evaluate the state of the channel and then to choose whether or not to use the equalizer. We consider a multipath channel of length $L$ with additive Gaussian noise. The received signal is then expressed as in (1):

$$y(n) = \sum_{l=0}^{L-1} h_l\, x(n-l) + b(n) \qquad (1)$$

with $x(n)$ the transmitted signal, $y(n)$ the received signal, $b(n)$ the additive Gaussian noise, and $h_l$ the coefficients of the multipath channel.

We assume that the interference affecting the signal is limited to the intersymbol interference (ISI) caused by the delay dispersion of the multiple paths of the channel. At the receiver, the signal can thus be deteriorated by the noise and by the intersymbol interference. In order to distinguish between these two sources of signal degradation, we define two independent radio metrics. The first, which we denote $\mathrm{SNR}$, is the signal-to-noise ratio without taking into account the effect of the intersymbol interference, and the second, which we denote $P_{\mathrm{ISI}}$, is the power of the intersymbol interference. We consider two cases of the channel: the Rician channel and the Rayleigh channel.

3.1.1. In the Rician Channel

Since the Rician channel contains a direct path, we can express the received signal as in (2):

$$y(n) = h_0\, x(n) + \sum_{l=1}^{L-1} h_l\, x(n-l) + b(n) \qquad (2)$$

$\mathrm{SNR}$ evaluates the level of degradation of the received signal by the noise and the fading without considering the interference. This means that we consider only the fading of the signal observed on the direct path $h_0$. In this case, $\mathrm{SNR}$ is defined in (3):

$$\mathrm{SNR} = \frac{|h_0|^{2}\, \sigma_x^{2}}{\sigma_b^{2}} \qquad (3)$$

with $\sigma_x^2$ and $\sigma_b^2$ being, respectively, the variance of the input signal and the variance of the noise. Concerning the second metric $P_{\mathrm{ISI}}$, we are interested in the term $\sum_{l=1}^{L-1} h_l\, x(n-l)$ of relation (2), which describes the intersymbol interference phenomenon. From this expression, we can define a new metric that represents the power of the intersymbol interference, as described in (4). We assume that the input symbols follow a uniform distribution with zero mean:

$$P_{\mathrm{ISI}} = E\left[\left|\sum_{l=1}^{L-1} h_l\, x(n-l)\right|^{2}\right] \qquad (4)$$

where $E[\cdot]$ denotes the expectation operator.

Assuming that the transmitted symbols are independent and identically distributed, the evaluation of expression (4) leads to the following relation:

$$P_{\mathrm{ISI}} = \sigma_x^{2} \sum_{l=1}^{L-1} |h_l|^{2} \qquad (5)$$

3.1.2. In the Rayleigh Channel

The Rayleigh channel does not contain a direct path, so in order to define the metric $\mathrm{SNR}$ we consider the path with the highest energy, that is, the highest signal-to-noise ratio. We denote this path $l_0$, and the metric is then expressed in (6):

$$\mathrm{SNR} = \max_{l}\ \gamma_l = \gamma_{l_0}, \qquad \gamma_l = \frac{|h_l|^{2}\, \sigma_x^{2}}{\sigma_b^{2}} \qquad (6)$$

with $\gamma_l$ the signal-to-noise ratio of the $l$th path.

The rest of the paths add intersymbol interference, so we can define the metric $P_{\mathrm{ISI}}$ as follows:

$$P_{\mathrm{ISI}} = \sigma_x^{2} \sum_{l \neq l_0} |h_l|^{2} \qquad (7)$$

In conclusion, if we consider that the transmitted signal is normalized ($\sigma_x^2 = 1$), then our two channel quality indicators are expressed as follows for the Rician case and the Rayleigh case:

$$\text{Rician:}\ \ \mathrm{SNR} = \frac{|h_0|^{2}}{\sigma_b^{2}},\ \ P_{\mathrm{ISI}} = \sum_{l=1}^{L-1} |h_l|^{2}; \qquad \text{Rayleigh:}\ \ \mathrm{SNR} = \frac{|h_{l_0}|^{2}}{\sigma_b^{2}},\ \ P_{\mathrm{ISI}} = \sum_{l \neq l_0} |h_l|^{2} \qquad (8)$$
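To make the use of (8) concrete, the following minimal sketch (in Python, with numpy) evaluates the two indicators from a vector of channel coefficients; the function name, its arguments, and the example values are illustrative choices of ours, not part of the original formulation.

```python
import numpy as np

def channel_quality_indicators(h, noise_var, rician=True):
    """Evaluate the two indicators of (8) from channel coefficients h.

    h         : complex array of channel coefficients h_0 .. h_{L-1}
    noise_var : noise variance sigma_b^2 (transmit power normalized to 1)
    rician    : True  -> reference path is the direct path h_0
                False -> reference path is the strongest path (Rayleigh case)
    """
    h = np.asarray(h, dtype=complex)
    powers = np.abs(h) ** 2
    ref = 0 if rician else int(np.argmax(powers))   # direct or strongest path
    snr = powers[ref] / noise_var                    # SNR without ISI
    p_isi = powers.sum() - powers[ref]               # power of the interfering paths
    return snr, p_isi

# Example: a 3-path Rician channel with a dominant direct path.
snr, p_isi = channel_quality_indicators([1.0, 0.2 + 0.1j, 0.05j], noise_var=0.01)
print(snr, p_isi)
```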

In the next paragraphs, we focus on the estimation of these metrics, derived from the channel estimation, and then we present our decision system based on the statistical modeling of the environment.

3.2. Estimation of the Channel Quality Indicators

Since the metrics $\mathrm{SNR}$ and $P_{\mathrm{ISI}}$ are based on the channel coefficients $h_l$, their estimation requires a channel estimation algorithm. Many channel estimation techniques have been proposed in the literature; we can distinguish blind techniques, techniques with pilot symbols, adaptive techniques, and nonadaptive techniques. In our case we look for a technique that provides both a low estimation error and a low computational complexity. For this, we compare several estimation techniques and select the one that presents the best compromise between the mean-squared error of the estimation and the computational complexity, expressed in terms of the number of multiplication operations in the algorithm.

LS (Least-Squares) Channel Estimation [11]. This is a nonadaptive algorithm based on a known training sequence within the transmitted signal. For each received frame, a training matrix is built from the pilot symbols as follows:

$$\mathbf{X} = \big[\,x(k-l)\,\big]_{0 \le k < K,\ 0 \le l < L}, \qquad x(n) = 0 \ \text{for } n < 0 \qquad (9)$$

where $K$ is the number of pilot symbols.

The LS estimate is obtained by minimizing the quadratic error:

$$\hat{\mathbf{h}}_{\mathrm{LS}} = \big(\mathbf{X}^{H}\mathbf{X}\big)^{-1}\mathbf{X}^{H}\,\mathbf{y} \qquad (10)$$

where $(\cdot)^{H}$ and $(\cdot)^{-1}$ denote, respectively, the Hermitian transpose and the matrix inverse, $\mathbf{y}$ is the vector of received pilot samples, and $\mathbf{h}$ is the vector of the channel coefficients.
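As an illustration, a minimal numpy sketch of the LS estimate (10) is given below; the Toeplitz arrangement of the pilot symbols in the training matrix follows our reading of (9) and is an assumption of this sketch.

```python
import numpy as np
from scipy.linalg import toeplitz

def ls_channel_estimate(pilots, received, L):
    """Least-squares channel estimate (10) from K pilot symbols.

    pilots   : known transmitted pilot symbols x(0)..x(K-1)
    received : corresponding received samples y(0)..y(K-1)
    L        : assumed channel length
    """
    pilots = np.asarray(pilots, dtype=complex)
    # K x L Toeplitz training matrix: row k holds x(k), x(k-1), ..., x(k-L+1)
    X = toeplitz(pilots, np.r_[pilots[0], np.zeros(L - 1, dtype=complex)])
    y = np.asarray(received, dtype=complex)
    # h_hat = (X^H X)^{-1} X^H y, computed with a numerically stable solver
    h_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    return h_hat
```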

Technique by Intercorrelation. This is an estimation method that we developed, inspired by [12]. Its idea is based on the intercorrelations between the delayed transmitted pilot symbols and the received symbols. The intercorrelation product between the received symbol and the transmitted symbol delayed by $l$ is expressed as

$$E\big[y(n)\, x^{*}(n-l)\big] = h_l\, \sigma_x^{2} \qquad (11)$$

In practice, the expectation is approximated over a finite number of symbols corresponding to the training sequence, which provides the estimated values $\hat{h}_l$ of the coefficients.
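A possible implementation of this intercorrelation estimator, assuming zero-mean, unit-variance i.i.d. pilot symbols, is sketched below; the averaging is simply done over the length of the training sequence.

```python
import numpy as np

def xcorr_channel_estimate(pilots, received, L):
    """Intercorrelation channel estimate: h_l ~ mean over n of y(n) * conj(x(n-l)).

    Assumes zero-mean, unit-variance i.i.d. pilot symbols (sigma_x^2 = 1).
    """
    x = np.asarray(pilots, dtype=complex)
    y = np.asarray(received, dtype=complex)
    h_hat = np.zeros(L, dtype=complex)
    for l in range(L):
        # only use indices n >= l so that x(n - l) is defined
        h_hat[l] = np.mean(y[l:] * np.conj(x[: len(x) - l]))
    return h_hat
```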

LMS (Least Mean-Squares) Channel Estimation. This is an adaptive algorithm based on the stochastic gradient. Defining the error of the adaptive filter at time $n$ as $e(n) = y(n) - \hat{\mathbf{h}}^{T}(n)\,\mathbf{x}(n)$, with $\mathbf{x}(n) = [x(n), \ldots, x(n-L+1)]^{T}$, the update of the channel coefficient estimate is expressed in (12):

$$\hat{\mathbf{h}}(n+1) = \hat{\mathbf{h}}(n) + \mu\, e(n)\,\mathbf{x}^{*}(n) \qquad (12)$$

The parameter $\mu$ represents the step size and influences the convergence of the algorithm.
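A compact sketch of the LMS update (12) is shown below; the step-size value is arbitrary and chosen only for illustration.

```python
import numpy as np

def lms_channel_estimate(pilots, received, L, mu=0.05):
    """Adaptive LMS channel estimate following the update (12).

    pilots   : known transmitted symbols used as the filter input
    received : observed samples y(n)
    L        : assumed channel length
    mu       : step size (illustrative value)
    """
    x = np.asarray(pilots, dtype=complex)
    y = np.asarray(received, dtype=complex)
    h_hat = np.zeros(L, dtype=complex)
    for n in range(L - 1, len(x)):
        x_vec = x[n - L + 1 : n + 1][::-1]       # [x(n), x(n-1), ..., x(n-L+1)]
        e = y[n] - np.dot(h_hat, x_vec)          # a-priori error w.r.t. the model (1)
        h_hat = h_hat + mu * e * np.conj(x_vec)  # stochastic-gradient update
    return h_hat
```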

All these techniques rely on pilot symbols, which reduces the useful throughput, unlike blind techniques which do not require them. Among these blind methods, we can mention the constant modulus algorithm.

Constant Modulus Algorithm (CMA) [13, 14]. This is also an adaptive algorithm, but a blind one, which tries to minimize the cost function

$$J(n) = E\Big[\big(|z(n)|^{2} - R_{2}\big)^{2}\Big], \qquad R_{2} = \frac{E\big[|x(n)|^{4}\big]}{E\big[|x(n)|^{2}\big]} \qquad (13)$$

with $z(n)$ the output of the adaptive filter and $R_2$ the constant-modulus constant of the transmitted constellation.

The updated channel coefficient estimate is then obtained, as expressed in (14), by a stochastic-gradient descent on this cost function.

Then, from the obtained estimates $\hat{h}_l$ of the channel coefficients, we deduce the estimates of our metrics by replacing the channel coefficients in (8) with their estimated values:

$$\widehat{\mathrm{SNR}} = \frac{|\hat{h}_{l_0}|^{2}}{\sigma_b^{2}}, \qquad \widehat{P}_{\mathrm{ISI}} = \sum_{l \neq l_0} |\hat{h}_l|^{2} \qquad (15)$$

(with $l_0 = 0$ in the Rician case).

In order to select the most suitable channel estimation algorithm, we plot in Figures 1 and 2 the mean-squared error (MSE) curves of the estimation of $\mathrm{SNR}$ and $P_{\mathrm{ISI}}$ obtained with the LS, LMS, CMA, and intercorrelation methods. Since we are in a time-varying environment, we study the effect of the variation of the intersymbol interference and of the fading on the estimates of $\mathrm{SNR}$ and $P_{\mathrm{ISI}}$; for this, the MSE curves are drawn as a function of $P_{\mathrm{ISI}}$ for a fixed $\mathrm{SNR}$ and as a function of $\mathrm{SNR}$ for a fixed $P_{\mathrm{ISI}}$. These simulation curves do not depend on the type of the channel. On the other hand, since we also look for algorithms with minimum computational complexity, we determine the complexity equations of the estimation of $\mathrm{SNR}$ and $P_{\mathrm{ISI}}$ for each technique. Tables 1 and 2 give the computational complexity of the estimation operations, defined as the number of multiplication operations. Based on these two criteria (MSE and complexity), we choose the channel estimation algorithm.

We denote by (i) $\hat{m}_{\mathrm{LS}}$, $\hat{m}_{\mathrm{LMS}}$, $\hat{m}_{\mathrm{CMA}}$, and $\hat{m}_{\mathrm{inter}}$ the estimated value of the metric $m$ obtained with the LS, LMS, CMA, and intercorrelation channel estimation algorithms, where $m$ can be $\mathrm{SNR}$ or $P_{\mathrm{ISI}}$; (ii) $K$ the number of pilot symbols for the nonblind algorithms; (iii) $M$ the number of received symbols considered in the estimation for the blind algorithm; (iv) $L$ the length of the channel.

In particular, when these complexity expressions are evaluated for the values of $K$, $M$, and $L$ used in our simulations, the four techniques lead to markedly different numbers of multiplication operations.

It is true that, unlike the algorithms with pilot symbols, the blind CMA algorithm allows a higher useful throughput. However, this technique is very sensitive to both the intersymbol interference and the channel fading. According to the MSE curves in Figures 1(a) and 2(a), the higher the value of $P_{\mathrm{ISI}}$, the higher the mean-squared error of the CMA estimate. Similarly, Figures 1(b) and 2(b) show that the mean-squared error of this technique increases as the $\mathrm{SNR}$ degrades. Furthermore, this method has a relatively high computational complexity compared to the other methods.

Concerning the estimation technique by intercorrelation, although it is the least complex one, it presents an increased MSE, especially in Figures 1(a), 1(b), and 2(b). Finally, the LS technique has a good estimation performance, but it is the most complex one. In conclusion, the LMS technique offers the best compromise between estimation performance and complexity. In addition, it is an adaptive algorithm, which is well suited to the variations of the radio channel. In our simulations, we therefore use the LMS algorithm for the estimation of $\mathrm{SNR}$ and $P_{\mathrm{ISI}}$.

3.3. Statistical Modeling of the Environment and Decision Making

In this section, we describe the method that we have developed in order to derive, from the statistical modeling of the environment, the decision rule that decides whether or not to use the equalizer [15].

We consider $\mathrm{SNR}_i$ (resp., $P_{\mathrm{ISI},i}$) to be one estimate of $\mathrm{SNR}$ (resp., $P_{\mathrm{ISI}}$) at instant $i$, given by the sensing algorithm selected in Section 3.2. The vectors $(\mathrm{SNR}_1, \ldots, \mathrm{SNR}_N)$ (resp., $(P_{\mathrm{ISI},1}, \ldots, P_{\mathrm{ISI},N})$) then represent $N$ estimates of $\mathrm{SNR}$ (resp., $P_{\mathrm{ISI}}$), where the $\mathrm{SNR}_i$ (resp., $P_{\mathrm{ISI},i}$) are independent and identically distributed, following the same distribution as a random variable $X_{\mathrm{SNR}}$ (resp., $X_{P}$).

In order to statistically characterize our channel quality indicators $\mathrm{SNR}$ and $P_{\mathrm{ISI}}$, we have to determine the probability densities of the variables $X_{\mathrm{SNR}}$ and $X_{P}$. Statistical inference theory offers two classes of techniques to estimate the probability density of a random variable from its realizations: parametric estimation techniques and nonparametric estimation techniques [9, 10]. The nonparametric method estimates both the shape of the distribution and the statistical parameters of a set of observations, whereas the parametric method estimates only the statistical parameters when the distribution of the observations is known. In our case, the analysis that we performed on the estimates of the radio metrics $\mathrm{SNR}$ and $P_{\mathrm{ISI}}$ shows that a Gaussian distribution represents well the distributions of the observations provided by the metric sensors; the parametric method is therefore sufficient to determine the statistical parameters of these observations. Since the distributions of the observed metrics are known, we do not need the nonparametric method, which would only add complexity to estimate these distributions.

We denote by $(m_{\mathrm{SNR}}, \sigma^{2}_{\mathrm{SNR}})$ and $(m_{P}, \sigma^{2}_{P})$ the statistical parameters (mean and variance) of $X_{\mathrm{SNR}}$ and $X_{P}$. By applying the parametric technique with the maximum likelihood estimator, we find the following expressions:

$$\hat{m} = \frac{1}{N}\sum_{i=1}^{N} x_i, \qquad \hat{\sigma}^{2} = \frac{1}{N}\sum_{i=1}^{N}\big(x_i - \hat{m}\big)^{2} \qquad (16)$$

where $x_i$ denotes the $i$th realization ($\mathrm{SNR}_i$ or $P_{\mathrm{ISI},i}$) of the considered metric.

So, after $N$ realizations of $\mathrm{SNR}_i$ and $P_{\mathrm{ISI},i}$, we obtain the estimated values of our metrics, as described in (17):

$$\widehat{\mathrm{SNR}} = \frac{1}{N}\sum_{i=1}^{N} \mathrm{SNR}_i, \qquad \widehat{P}_{\mathrm{ISI}} = \frac{1}{N}\sum_{i=1}^{N} P_{\mathrm{ISI},i} \qquad (17)$$
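For illustration, the short helper below gathers a window of $N$ per-frame metric estimates and returns the maximum-likelihood mean and variance of (16); the mean is also the aggregated value of (17). The function name is ours.

```python
import numpy as np

def characterize_metric(estimates):
    """Maximum-likelihood Gaussian parameters of a window of metric estimates.

    estimates : sequence of N per-frame estimates of SNR or P_ISI
    Returns (mean, variance) as in (16); the mean is also the aggregated
    metric value of (17).
    """
    x = np.asarray(estimates, dtype=float)
    m_hat = x.mean()                      # ML estimate of the mean
    var_hat = np.mean((x - m_hat) ** 2)   # ML (biased) estimate of the variance
    return m_hat, var_hat
```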

Using these estimated values and their statistical parameters, we evaluate the channel quality and determine the decision rule that decides whether the equalizer is necessary or unnecessary. For this, the evaluation of the channel quality indicators is performed by asserting or refuting general hypotheses. The objective is to decide between two actions: turning the equalizer off or keeping it on.

It is necessary to determine the situations of the environment in which each of these actions is appropriate. According to the state of $\mathrm{SNR}$ and $P_{\mathrm{ISI}}$, we determine whether the received signal is deteriorated by the noise only, by the intersymbol interference only, or by both. This evaluation is done with respect to the performance thresholds $\mathrm{SNR}_{\mathrm{th}}$ and $P_{\mathrm{th}}$, which we define as the values of $\mathrm{SNR}$ and $P_{\mathrm{ISI}}$ corresponding to a minimum bit error rate $\mathrm{BER}_{\min}$. While the value of $\mathrm{SNR}_{\mathrm{th}}$ is normalized by the standard used in the communication, there is no such indication for the value of $P_{\mathrm{th}}$. In our work, we determined this threshold by taking the best measured value of $P_{\mathrm{ISI}}$ that still provides $\mathrm{BER}_{\min}$.

According to the values of $\mathrm{SNR}$ and $P_{\mathrm{ISI}}$ with respect to these thresholds, we can define three states of the environment. Table 3 presents these states together with the corresponding action for each one.

When the power of the intersymbol interference is greater than $P_{\mathrm{th}}$, the signal is affected and the equalizer is necessary to reduce the intersymbol interference. In the case where the $\mathrm{SNR}$ is below $\mathrm{SNR}_{\mathrm{th}}$, we also choose to keep the equalizer because, even though the bit error rate will remain above $\mathrm{BER}_{\min}$, the equalizer will still reduce it. The case where we can decide to turn off the equalizer is when the signal is deteriorated neither by the noise nor by the intersymbol interference.
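The three-state evaluation of Table 3 can be summarized by a small helper such as the one below; it only illustrates the mapping from the (ideal) metric values to an action and is not the statistical rule derived next.

```python
def equalizer_action(snr, p_isi, snr_th, p_th):
    """Map the channel state of Table 3 to an action on the equalizer.

    Returns "off" only when neither the noise nor the ISI degrades the
    signal; in every other state the equalizer is kept on.
    """
    if p_isi > p_th:
        return "on"    # ISI dominates: equalization is necessary
    if snr < snr_th:
        return "on"    # deep fading: the equalizer still reduces the BER
    return "off"       # clean channel: the equalizer can be disabled
```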

To translate this evaluation statistically and deduce the decision rule of our decision system, we construct the binary hypothesis test described in (18), where $H_0$ corresponds to the decision to turn off the equalizer and $H_1$ to the decision to turn it on:

$$H_0:\ \mathrm{SNR} \geq \mathrm{SNR}_{\mathrm{th}} \ \text{and}\ P_{\mathrm{ISI}} \leq P_{\mathrm{th}}, \qquad H_1:\ \text{otherwise} \qquad (18)$$

The resolution of this hypothesis test leads to the decision rule for disabling the equalizer; this requires identifying new thresholds, since the quantities $\widehat{\mathrm{SNR}}$ and $\widehat{P}_{\mathrm{ISI}}$ are estimated values and not the exact values. For this, we use the Neyman-Pearson test, relying on the statistical characteristics of the metrics estimated and modeled previously. The Neyman-Pearson technique consists of choosing the hypothesis with the highest probability of correct decision under the constraint that the false alarm probability stays below $P_{fa}$, the maximum probability of false alarm tolerated for the decision system, which is fixed in advance; the method itself gives no indication about how to set this value. The result is presented in (19), where $\delta_{\mathrm{off}}$ denotes the decision rule to turn off the equalizer and where (i) $\Phi^{-1}$ is the inverse of $\Phi$, the cumulative distribution function of the standard normal, (ii) $N$ is the number of realizations of the metrics' estimates used for one decision, (iii) $\widehat{\mathrm{SNR}}$ and $\widehat{P}_{\mathrm{ISI}}$ are Gaussian random variables with means close to the real values of $\mathrm{SNR}$ and $P_{\mathrm{ISI}}$ and with variances $\sigma^{2}_{\mathrm{SNR}}$ and $\sigma^{2}_{P}$, respectively (Figure 3), and (iv) $P_{fa}$ is the fixed probability of false alarm used in our simulations.

This decision rule is defined in such a way that it takes into account the impact of the uncertain measurements of $\mathrm{SNR}$ and $P_{\mathrm{ISI}}$ on the decision thresholds.
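The sketch below shows one plausible instantiation of such an uncertainty-aware rule: each threshold of Table 3 is shifted by a confidence margin built from $\Phi^{-1}(1 - P_{fa})$ and the standard deviation of the averaged estimate. The direction and form of the shift are our assumptions and only illustrate the principle, not the exact rule (19).

```python
import numpy as np
from scipy.stats import norm

def decide_equalizer_off(snr_est, p_isi_est, snr_th, p_th, p_fa=0.01):
    """Uncertainty-aware version of the turn-off decision (illustrative).

    snr_est, p_isi_est : windows of N per-frame estimates of SNR and P_ISI
    snr_th, p_th       : performance thresholds of Table 3
    p_fa               : tolerated probability of false alarm (illustrative value)

    The thresholds are shifted here so that the equalizer is not disabled on
    the basis of noisy estimates; this particular choice of direction is an
    assumption of the sketch.
    """
    snr_est = np.asarray(snr_est, dtype=float)
    p_isi_est = np.asarray(p_isi_est, dtype=float)
    n = len(snr_est)
    q = norm.ppf(1.0 - p_fa)                        # Phi^{-1}(1 - P_fa)
    snr_margin = q * snr_est.std() / np.sqrt(n)     # std of the averaged SNR estimate
    isi_margin = q * p_isi_est.std() / np.sqrt(n)   # std of the averaged P_ISI estimate
    snr_ok = snr_est.mean() >= snr_th + snr_margin  # confidently above the SNR threshold
    isi_ok = p_isi_est.mean() <= p_th - isi_margin  # confidently below the ISI threshold
    return snr_ok and isi_ok                        # True -> turn the equalizer off
```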

Now, in order to determine the theoretical performance of this decision system, we express the probabilities of false alarm and of correct decision of the rule.

By introducing the terms described in Figure 3, we developed these probabilities and obtained the corresponding closed-form relations.

From these expressions we can determine the conditions on the value of $N$ (the number of metric estimates used for one decision) that lead to situations where the probability of correct decision reaches 1 (no decision errors) (23) and situations where the probability of false alarm reaches 0 (24).

4. Simulation Results

4.1. Proposed Solutions for Driving the Equalizer

For the implementation of the decision-making method to drive the equalizer, we considered two solutions. The first solution is described in Figure 4(b); it starts with a decision-making block which contains the processing developed in Section 3 (metric estimation and decision rule). According to the output of this block, the equalizer is turned off or on. In the second solution, described in Figure 4(c), the computation of the coefficient vector of the equalizer's FIR filter is kept running permanently; according to the output of the decision block, only the equalizer's FIR filtering itself is turned off or on. With this solution, we aim to avoid the expected convergence delay of the equalizer when it is relaunched, especially in a time-varying channel.
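The difference between the two solutions can be summarized by the per-frame control loop sketched below; the objects decision_block and equalizer and their methods are illustrative placeholders, not components specified in the paper.

```python
def process_frame(frame, decision_block, equalizer, solution=1):
    """Per-frame control of the equalizer for the two proposed solutions.

    Solution 1: the coefficients are recomputed only when the equalizer is on,
    so a convergence delay appears each time it is relaunched.
    Solution 2: the coefficients are updated on every frame, and only the FIR
    filtering is switched off, which avoids the relaunch delay.
    """
    use_equalizer = decision_block.decide(frame)     # output of the decision rule

    if solution == 2:
        equalizer.update_coefficients(frame)         # always keep the FIR taps up to date
    elif use_equalizer:
        equalizer.update_coefficients(frame)         # recompute the taps only when needed

    if use_equalizer:
        return equalizer.filter(frame)               # equalized symbols
    return frame                                     # equalizer bypassed
```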

4.2. Decision-Making Result in a Time-Variable Channel

Since we are in a time-varying channel, the decision of the receiver to turn the equalizer off or on must follow in real time the variation of the channel state. We therefore have to determine the constraint on the number of metric estimates that allows a decision to be made without delay. For this, we consider a mobile terminal with speed $v$ in a multipath channel, whose Doppler frequency is described in (25):

$$f_d = \frac{v\, f_c}{c} \qquad (25)$$

with $c$ the speed of light and $f_c$ the carrier frequency.

The coherence time $T_c$ of the channel is defined as the period of time during which the channel does not change; it is inversely proportional to the Doppler frequency and is expressed in (26).

In order to follow the variation of the channel, the decision time must not exceed the coherence time. Let $T_f$ be the transmission time of one frame; the decision must then be made after a number of frames $N_f$ which verifies the condition $N_f\, T_f \leq T_c$ (27). If $K$ is the number of training symbols per frame used for the metric estimation, then the number of estimated values considered for one decision, $N$, is defined as in (28).
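As a numerical illustration of this constraint, the snippet below computes the Doppler frequency (25), an approximate coherence time, and the resulting bound on the number of frames per decision; the approximation $T_c \approx 1/f_d$ and the example values are our own choices.

```python
SPEED_OF_LIGHT = 3.0e8  # m/s

def max_frames_per_decision(speed_mps, carrier_hz, frame_duration_s):
    """Bound on the number of frames usable for one decision.

    Uses the Doppler frequency of (25) and the common approximation
    T_c ~ 1/f_d for the coherence time (an assumption of this sketch).
    """
    f_d = speed_mps * carrier_hz / SPEED_OF_LIGHT   # Doppler frequency (25)
    t_c = 1.0 / f_d                                 # approximate coherence time
    return int(t_c // frame_duration_s)             # frame durations fitting in T_c

# Example: pedestrian speed, 2 GHz carrier, 1 ms frames.
print(max_frames_per_decision(speed_mps=1.5, carrier_hz=2.0e9, frame_duration_s=1e-3))
```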

For the simulations, we consider a time-varying multipath channel of length $L$. We first consider the case of a Rician channel and then that of a Rayleigh channel. For each case, we simulate the scenario of decision making to drive the equalizer using first solution 1 and then solution 2, described in Figures 4(b) and 4(c). For each channel type, we take two examples: a slowly varying channel and a fast varying channel. The simulation conditions are described in Table 4.

4.2.1. A Case of Rician Channel

The two examples of the Rician channel that we use are represented in Figure 5, which describes the behavior of the channel paths over time when the channel varies slowly (Figure 5(a)) and rapidly (Figure 5(b)). We also present in Figures 6(a) and 6(b) the bit error rate at the receiver, without the equalizer, for the two examples of the channel. The results given by the decision-making block are presented in Figures 7(a) and 7(b); the output of this block is "1" when the decision is to turn off the equalizer and "0" when the decision is to turn it on. The decision block starts to deliver results only after the reception of the first frames required to boot the decision system. As we can see for the slow channel (Figure 7(a)), the decision follows the state of the channel shown by the bit error rate without equalization (Figure 6(a)): the equalizer is turned off over the interval of received frames for which this bit error rate stays at or below $\mathrm{BER}_{\min}$. We also note that the decision makes some false alarms for a few frames: although the estimated channel quality for these frames is right at the decision threshold, the decision is to turn on the equalizer. These false alarms do not affect the performance of the system, because the bit error rate remains under $\mathrm{BER}_{\min}$. Our decision method can also follow the change of the channel even when it is fast (Figure 7(b)), because we take a number of estimates $N$ that verifies the constraint (28).

The two solutions for configuring the decision scenario, discussed above, provide the same decision results, since the same decision-making method is used. However, the equalization performance after the decision differs from one solution to the other. Figures 8 and 9 present the bit error rate after the decision-making scenario, with or without the equalizer, for both solutions and for both channel examples. We notice that the equalization performance of solution 2 is better than that of solution 1. This is explained by the fact that in solution 1, when the equalizer is turned on, there is a period of time during which the computation of the equalizer's coefficients is relaunched; during this time the channel changes and the equalization is not efficient. This is why, in Figures 8(a) and 8(b), the bit error rate after equalization remains degraded for solution 1, whereas for solution 2 (Figures 9(a) and 9(b)) the bit error rate after equalization is always below $\mathrm{BER}_{\min}$.

Although solution 1 for driving the equalizer is less efficient in a time-varying channel, it reduces the computational complexity more than solution 2. We present the results of the computational complexity reduction in Table 5, which gives, for each channel example and for each solution, the percentage of complexity reduction compared with the permanent use of the equalizer. We specify that, in determining the computational complexity of the scenario of driving the equalizer, we include the processing performed to make the decisions (estimation of the metrics, modeling, and decision rule).

4.2.2. A Case of a Rayleigh Channel

For the simulation of this scenario in the case of a Rayleigh channel, we assume the same mobility conditions as described in Table 4 for the Rician channel. The two examples of the Rayleigh channel that we use are presented in Figure 10. As in the Rician case, the decision-making results are presented in Figures 11, 12, 13, and 14. From these figures, we can notice that, for the same channel mobility conditions, the Rayleigh channel is harsher than the Rician channel; there is less opportunity to turn off the equalizer in the Rayleigh case than in the Rician case. This can be seen in Table 6, where the percentage of time with the equalizer turned off is lower than for the Rician channel described in Table 5.

We notice in Figure 13(b) that the final bit error rate is sometimes greater than $\mathrm{BER}_{\min}$, but this is not due to a decision error. Indeed, if we observe the curves closely, the bit error rate is degraded while the equalizer is operating (decision output = 0), which means that this bit error rate is not due to the intersymbol interference but rather to the strong attenuation of the strongest path (degraded $\mathrm{SNR}$). Despite this, the equalizer contributes to the reduction of the bit error rate even if it remains higher than $\mathrm{BER}_{\min}$. This observation justifies the channel evaluation made when developing our decision-making method (Table 3), namely keeping the equalizer when the signal-to-noise ratio is degraded.

4.3. Computational Complexity Reduction

In this section we study the computational complexity reduction obtained with the scenario of driving the equalizer. The goal is to determine, for a given period of a communication, from what percentage of time without the equalizer the complexity starts to be reduced, and what the maximum complexity gain is. For this, we first determine the theoretical expression of the computational complexity reduction and then compare it with the simulation results.

We denote by (i) $C_{\mathrm{eq}}$ the computational complexity of the permanent use of the equalizer, (ii) $C_{\mathrm{dec}}$ the computational complexity of the additional processing required by the decision-making method (metric estimation, modeling, decision rule), which depends on the width $N$ of the observation window considered for the decision, (iii) $C_{\mathrm{total}}$ the total computational complexity of the scenario of driving the equalizer, (iv) $P_{\mathrm{off}}$ the percentage of time during which the equalizer is turned off, and (v) $R_c$ the rate of computational complexity reduction.

During a period of a communication in which the equalizer is turned off for a percentage of time $P_{\mathrm{off}}$, the total computational complexity is reduced by $R_c$ compared with the complexity of the permanent use of the equalizer. So we can express $C_{\mathrm{total}}$ as follows:

$$C_{\mathrm{total}} = (1 - R_c)\, C_{\mathrm{eq}} \qquad (29)$$

On the other hand, the total complexity of the scenario contains the complexity of the additional decision-making processing and the complexity of the equalizer when it is turned on. So we can also express $C_{\mathrm{total}}$ differently, as in (30):

$$C_{\mathrm{total}} = C_{\mathrm{dec}} + (1 - P_{\mathrm{off}})\, C_{\mathrm{eq}} \qquad (30)$$

From (29) and (30), we conclude that

$$(1 - R_c)\, C_{\mathrm{eq}} = C_{\mathrm{dec}} + (1 - P_{\mathrm{off}})\, C_{\mathrm{eq}} \qquad (31)$$

We then deduce the expression of the complexity reduction as follows:

$$R_c = P_{\mathrm{off}} - \frac{C_{\mathrm{dec}}}{C_{\mathrm{eq}}} \qquad (32)$$

From this equation, we can conclude that the complexity starts to be reduced ($R_c > 0$) when

$$P_{\mathrm{off}} > \frac{C_{\mathrm{dec}}}{C_{\mathrm{eq}}} \qquad (33)$$

We also notice that the maximum complexity reduction for the considered communication, obtained for $P_{\mathrm{off}} = 1$, is

$$R_{c,\max} = 1 - \frac{C_{\mathrm{dec}}}{C_{\mathrm{eq}}} \qquad (34)$$
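The break-even point (33) and the maximum gain (34) can be checked with a few lines of code; the complexity figures passed to the function are arbitrary illustrative values.

```python
def complexity_reduction(p_off, c_eq, c_dec):
    """Rate of complexity reduction R_c of (32) as a function of P_off.

    p_off : fraction of time during which the equalizer is off (0..1)
    c_eq  : complexity of the permanently enabled equalizer
    c_dec : complexity added by the decision-making processing
    """
    return p_off - c_dec / c_eq

# Illustrative values: the decision processing costs 10% of the equalizer.
c_eq, c_dec = 1000.0, 100.0
break_even = c_dec / c_eq                            # (33): reduction starts beyond this P_off
max_gain = complexity_reduction(1.0, c_eq, c_dec)    # (34): gain when the equalizer is always off
print(break_even, max_gain)                          # 0.1 and 0.9 with these values
```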

We plot in Figures 15 and 16 the curves of $R_c$, obtained theoretically from (32) and by simulation, first for solution 1 and then for solution 2.

For solution 1, the complexity of the receiver starts to be reduced once the equalizer is turned off during a sufficient fraction of the communication period; for solution 2, whose decision processing is heavier because the equalizer's coefficients are computed permanently, this break-even fraction is higher. When the receiver decides to turn off the equalizer during the whole communication period, the achievable complexity reduction is larger for solution 1 than for solution 2. As a result, solution 1 is better in terms of computational complexity reduction; however, its equalization performance is less efficient than that of solution 2, mainly when the channel varies quickly.

5. Conclusion

The work presented in this paper is part of green communications; it consists of reducing energy consumption within the receiver by making it able to adapt dynamically to the changes of its environment. We are particularly interested in adapting the use of the equalizer in the receiver chain according to the state of the channel, that is, making the receiver able to turn the equalizer on or off according to whether it is necessary. The purpose of dispensing with this component for part of the time is to reduce the energy consumed in the receiver chain. To achieve this objective, we had to develop a decision-making method that makes the receiver aware of its environment and able to make the right decision concerning the presence of the equalizer. We therefore defined a decision-making technique based on the statistical modeling of the environment. Within this method, we defined two metrics as channel quality indicators ($\mathrm{SNR}$ and $P_{\mathrm{ISI}}$) used to evaluate the state of the channel with respect to the intersymbol interference and the channel fading. These metrics are then statistically modeled by determining their respective probability densities from the sets of their estimated values. This statistical model is used to construct a statistical decision rule for deciding to turn off the equalizer; the rule is defined so as to take into account the impact of the uncertain measurements of the metrics on the decision thresholds. Once the decision method was developed, we applied it to adapt the use of the equalizer by proposing two solutions to drive this component. In the first solution, the receiver turns the equalizer off according to the result of the decision rule and turns it on again by restarting it, while in the second solution the computation of the equalizer's coefficients is kept running permanently. Using the adaptive LMS channel estimation algorithm, we simulated these two solutions in the case of a time-varying multipath channel (Rician and Rayleigh). We concluded, first, that the rate of reduction of the computational complexity obtained by limiting the use of the equalizer is a linear function of the percentage of time during which this component is off. Second, we noticed that the first solution reduces the complexity more than the second: for a period corresponding to the reception of 50 frames, the computational complexity starts to decrease beyond a smaller percentage of time with the equalizer off in the first solution than in the second. However, the equalizer of the second solution is more efficient: unlike in the first solution, it is not restarted when it is turned on, thanks to the permanent availability of the equalizer's coefficients, which avoids the restart delay when it begins to operate, especially when the channel varies rapidly. We also note that the smaller the size of the equalizer, the smaller the delay caused by the computation of its coefficients, and so the gap between the two solutions is reduced. With the same goal of reducing energy consumption in the radio receiver, we will address in future work other situations of adaptation of the receiver to its environment that reduce the computational complexity of the receiver chain.