Abstract

The number of Internet of Things (IoT) devices will grow rapidly in the near future, and power consumption and radio spectrum management will become the most critical issues in IoT networks. Long Term Evolution (LTE) is a promising technology for IoT networks due to its flat architecture, all-IP network, and greater spectrum efficiency. The 3rd Generation Partnership Project (3GPP) specified Discontinuous Reception (DRX) to reduce a device's power consumption. However, DRX may introduce unexpected communication delay because Physical Downlink Control Channel (PDCCH) information is missed in sleep mode. Recent studies mainly focus on optimizing DRX parameters to manage the tradeoff between energy consumption and communication latency. In this paper, we propose a fuzzy-based power saving scheduling scheme for IoT over LTE/LTE-Advanced networks that addresses radio resource management and power consumption from the scheduling and resource allocation perspective. The proposed scheme considers not only each IoT device's real-time requirement but also the overall network performance. Simulation results show that the proposed scheme meets the requirements of the DRX cycle and scheduling latency and saves about half of the energy consumption of IoT devices compared to conventional approaches.

1. Introduction

The concept of the Internet of Things (IoT) enables devices to connect to the Internet to sense data and interact with each other. Different types of IoT devices, such as fitness, entertainment, location and tracking, and surveillance devices, may have different real-time requirements for different purposes. It can be predicted that IoT devices, also referred to as IoT User Equipment (UE), will grow rapidly in number, and power consumption and radio spectrum management will become the critical issues in IoT networks. Moreover, wireless connectivity is a promising path to the Internet for IoT devices. Long Term Evolution (LTE), nowadays the dominant technology in wireless communication, is ideal for IoT applications due to its flat architecture, all-IP network, and greater spectrum efficiency than 2G or 3G [1].

In the LTE network, the 3rd Generation Partnership Project (3GPP) specified the Discontinuous Reception/Transmission (DRX/DTX) mechanism to alleviate the power consumption issue, which can severely affect the battery lifetime of IoT UEs. The DRX mechanism allows a UE to stop monitoring the Physical Downlink Control Channel (PDCCH) and to enter a low-power consumption mode to extend its battery lifetime. Without the DRX mechanism, an LTE UE has to continuously monitor the PDCCH in every subframe in order to check UE-specific scheduling assignments such as Downlink (DL) resource allocations, Uplink (UL) grants, and PRACH (Physical Random Access Channel) responses. Not only the UEs but also the Evolved Node B (eNodeB) benefits from the DRX mechanism, reducing power consumption and signaling, such as the Channel State Information (CSI) or Sounding Reference Signal (SRS), and improving resource utilization. However, the main issue of the DRX mechanism is that it may introduce unexpected delay if a UE is in sleep mode when scheduling information arrives. In particular, it should be noted that the PDCCH carries scheduling information rather than service data, and the scheduling information is interleaved in the PDCCH in order to reduce collisions in the UEs' blind decoding. This implies that extending the on duration time of the DRX mechanism may not help to overcome this issue. Unfortunately, the specifications say nothing about how the DRX parameters should be configured. Furthermore, unlike the single DRX cycle in the Universal Mobile Telecommunications System (UMTS), the LTE network introduces two types of DRX cycles, the short and long DRX cycles, which complicates DRX parameter configuration.

The DRX parameters must be configured to maximize power saving while the constraints on communication latency and system throughput are satisfied. The DRX cycles allow a UE to temporarily turn off its radio interface and enter a sleep mode for several Transmission Time Intervals (TTIs). On the one hand, a DRX cycle that is too short leads to inefficient power saving. On the other hand, a DRX cycle that is too long results in long communication latency, decreasing system throughput. Thus, the scheduler of an IoT network must manage the tradeoff among power consumption, communication latency, and system throughput.

In this paper, we design a fuzzy-based power saving scheduling scheme for IoT over LTE/LTE-Advanced networks to deal with radio spectrum management, power consumption, and unexpected delay from the perspective of UE scheduling and resource allocation instead of complicated DRX parameter adjustment. Not only each IoT UE's real-time requirement but also the overall network performance is taken into account. The main issue of DRX parameter adjustment is that it causes inter-parameter interaction. Although power saving can be achieved by adjusting the DRX parameters dynamically, higher signaling overhead is also introduced [2]. Since the UE's DRX parameters are semistatically configured by the eNodeB, the parameters can only be adjusted through a signaling procedure [3]. In addition, it is more difficult to adjust the DRX parameters dynamically because a UE enters the sleep mode automatically and the eNodeB does not know about this change. Moreover, because of the uncertainty of the wireless network and service traffic, optimizing the DRX parameters is challenging [2]. In contrast, dealing with these issues from the perspective of UE scheduling and resource allocation is more practical to realize. The main idea behind the proposed scheme is to schedule the UEs that are within their on duration period to prevent unexpected delay. To successfully deliver the scheduling information, reduce power consumption, and meet the real-time requirements of IoT applications, the proposed scheme considers the UE's DRX cycle and scheduling latency when allocating radio resources. Furthermore, in addition to satisfying each UE's requirements, maintaining the overall system throughput is necessary.

The major contributions of this paper are twofold: it is the first paper to take these two important issues, power consumption and radio spectrum management, into account for IoT over LTE/LTE-Advanced networks, and the proposed scheme, which benefits from the low complexity of fuzzy theory, can be realized in practice. In particular, unlike previous works, our design considers the key aspects of the LTE/LTE-Advanced networks. In addition, we propose a systematic simulation model to investigate DRX performance based on the concepts of the LTE networks. The simulation results show that the proposed scheme can make approximate decisions that meet the requirements of the DRX cycle and scheduling latency and save about half of the energy consumption of IoT UEs compared to conventional approaches.

The rest of this paper is organized as follows. Section 2 provides an overview on related work. Section 3 introduces the discontinuous reception mechanism used in LTE/LTE-Advanced networks. Section 4 formulates the problem of maximum radio resource utilization. Section 5 describes the proposed fuzzy-based power saving scheduling scheme. Section 6 presents the simulation model. The simulation results and the discussions are presented in Section 7. Finally, Section 8 concludes this paper.

2. Related Work

Recent studies [2–9] concentrate on modeling, investigating, or analyzing the DRX mechanism for 3GPP LTE/LTE-Advanced networks, and these studies show that the DRX mechanism has the ability to reduce a UE's power consumption. However, the research in [3] points out that many studies neglect key aspects of LTE. In [4, 5, 7], a semi-Markov chain is used to model and analyze the wake-up latency and power saving factor of the DRX mechanism. A light sleeping mode is proposed in [10]. From a hardware perspective, the light sleep mode allows a UE to turn off only its power amplifier in order to reduce energy consumption while satisfying the delay requirement of Quality of Service (QoS).

In [2], the authors proposed a burst-based scheduling scheme for the LTE networks to increase power saving efficiency while the desired QoS of the UEs is satisfied. The main idea is to prevent UEs from entering the opportunity-for-DRX period with no data reception by adjusting scheduling priorities using forward and backward strategies. However, the paper only considers fixed inactivity timers, and the multi-DRX-cycle scenario and the multiple access scheme of the LTE networks are not taken into account.

Guaranteeing quality of service is used to optimize the DRX parameters in [1, 11]. In [1], the authors proposed two schemes, a three-stage scheme and a packet scheduling scheme, for the UEs and the eNodeB to cooperate with each other, respectively. The UE's power is saved by reducing its wake-up period. To accomplish this goal, these algorithms must determine the UE's DRX parameters, such as the short and long DRX cycles, the on duration time, and the inactivity timer, in advance. However, the packet scheduling scheme may suffer from starvation, since the paper only considers higher channel rates and urgent data that are about to be swapped out from the buffer.

In [11], the authors proposed two decision algorithms, a DRX parameters decision algorithm and a scheduling-start offset decision algorithm. The former is used to determine the UE's on duration time, and the latter is used to disperse the UEs' on duration times. The main idea is to prevent the UEs' on duration times from overlapping in order to fully utilize the network resources. However, the algorithms might be hard to deploy in practice, since the UE's DRX parameters are semistatically configured by the eNodeB rather than by the UEs, and the parameters can only be adjusted through a signaling procedure [3]. In addition, the authors neglect key aspects of the multiple access scheme of the LTE networks.

From our understanding, however, the following two key aspects of the LTE networks are usually neglected by recent research. First, extending the on duration time may not help to improve the QoS of service traffic, since in the on duration period the UEs check the PDCCH for scheduling information rather than service data, and the scheduling information is interleaved in the PDCCH in order to reduce blind decoding collisions. Second, the UE's DRX parameters are semistatically configured by the eNodeB rather than by the UEs, and the parameters can only be adjusted through a signaling procedure [3]. In addition, most studies do not consider multicellular networks in their simulations.

3. Discontinuous Reception in LTE

The DRX mechanism is known as a sleep mode in which UEs can stop monitoring the PDCCH and enter low-power consumption states to extend their battery lifetime. As mentioned in Section 1, the PDCCH carries scheduling information, such as the UE-specific scheduling assignments for Downlink (DL) resource allocation, Uplink (UL) grants, and PRACH (Physical Random Access Channel) responses, rather than service data, and the scheduling information is interleaved in the PDCCH in order to reduce collisions in the UEs' blind decoding. Depending on whether a UE is in active or idle mode, two Radio Resource Control (RRC) states, RRC_IDLE and RRC_CONNECTED, are used in the DRX mechanism. When a UE is in the RRC_IDLE state, the eNodeB frees up its resources for other users to maximize the UE's battery life and knows nothing further about this UE. If the eNodeB later wishes to contact this UE, a paging procedure is typically used, and this procedure takes additional time to reestablish the connection. The eNodeB can negotiate the DRX parameters with the UE through the RRC configuration procedure. In this paper, we concentrate on the DRX issues in the RRC_CONNECTED state, since it is the state that enables data transmission and reception.

In the LTE networks, unlike the single DRX cycle used in the UMTS, two types of DRX cycles, the short and long DRX cycles, are introduced. The long DRX cycle offers a greater opportunity for power saving than the short DRX cycle. The short DRX cycle is optional. If both are configured by the eNodeB, the UE starts with 16 short DRX cycles, of 2 to 640 subframes each, and then enters a long DRX cycle, of 10 to 2560 subframes, if no scheduling information is received on the PDCCH.

The DRX parameters typically involve three timers, the inactivity timer, the on duration timer, and the opportunity-for-DRX timer, as shown in Figure 1. The UE starts the on duration timer in the subframe which satisfies (1) for the long DRX cycle and (2) for the short DRX cycle, respectively [12–14]:

[(SFN × 10) + subframe number] mod (long DRX cycle) = DRX start offset, (1)

[(SFN × 10) + subframe number] mod (short DRX cycle) = (DRX start offset) mod (short DRX cycle), (2)

where the System Frame Number (SFN), which counts frames of 10 subframes each, is reset to zero after it reaches 1023, and the DRX start offset indicates the start of the on duration timer.
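The start condition can be checked per subframe in a few lines. The sketch below implements the long-DRX-cycle condition from 3GPP TS 36.321; the short-cycle variant only changes the right-hand side of the comparison:

```python
def on_duration_starts(sfn, subframe, drx_cycle, drx_start_offset):
    """Check whether the On Duration timer should start in this subframe.

    Long-DRX start condition from 3GPP TS 36.321:
        [(SFN * 10) + subframe] mod drxCycle == drxStartOffset
    The SFN counts frames of 10 subframes and wraps after 1023.
    """
    return ((sfn * 10) + subframe) % drx_cycle == drx_start_offset

# With a 32-subframe cycle and offset 5, the timer starts at
# absolute subframes 5, 37, 69, ...
starts = [t for t in range(100)
          if on_duration_starts(t // 10, t % 10, 32, 5)]
```

The periodicity is visible immediately: consecutive starts are exactly one DRX cycle apart, which is what lets the eNodeB predict each UE's wake-up subframes.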

Three timers, the inactivity timer, the on duration timer, and the opportunity-for-DRX timer, drive the DRX mechanism. First, the UE stays awake for an inactivity time, 1 to 2560 subframes, restarted whenever PDCCH information is received. After the inactivity timer expires, the UE periodically monitors the PDCCH for an on duration time, 1 to 200 subframes, and may then sleep for an opportunity-for-DRX time. However, it should be noted that the DRX-to-connected latency is specified as less than 50 ms for the LTE networks and as a tighter 10 ms for the LTE-Advanced networks in the 3GPP specification [15]. This implies that the maximum DRX cycle must be 32 Transmission Time Intervals (TTIs), that is, 32 one-millisecond subframes, for the LTE networks. In addition, the DRX cycles are expressed as powers of 2, and the maximum DRX cycle is always a multiple of every other DRX cycle in order to maintain the periodicity of the DRX cycles. Therefore, six DRX cycle combinations with six DRX cycle granularities, 32, 16, 8, 4, 2, and 1 subframes, can be used in the LTE DRX mechanism as shown in Figure 2.

The selection of the DRX combination is based on the Control Channel Element (CCE) aggregation level of the PDCCH, which indicates the number of consecutive CCEs interleaved in the control region. The CCE aggregation level is set based on the UE's channel condition. The CCEs carry the Downlink Control Information (DCI), which contains resource assignments and control information for each UE. For example, for a UE with a good channel condition, one CCE might be sufficient, but eight CCEs might be required for a UE with a poor channel condition. Therefore, the UE's reported Channel Quality Indicator (CQI) should be taken into account in the scheduling decision to ensure the overall system performance.
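As an illustration of how the aggregation level might follow the channel condition, the sketch below maps a reported CQI to one of the four standard aggregation levels (1, 2, 4, 8). The CQI thresholds are our own illustrative assumptions, not values from the specification:

```python
def cce_aggregation_level(cqi):
    """Pick a PDCCH CCE aggregation level (1, 2, 4, or 8) from the
    reported CQI. Poorer channels need more CCEs so the DCI is sent
    at a lower effective code rate. The CQI thresholds here are
    illustrative assumptions, not values from the 3GPP specification.
    """
    if cqi >= 11:      # good channel: one CCE suffices
        return 1
    elif cqi >= 8:
        return 2
    elif cqi >= 4:
        return 4
    else:              # poor channel: maximum robustness
        return 8
```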

4. Problem Formulation

We consider the downlink transmission of the LTE networks in the proposed scheme. In this section, the problem of UE scheduling and radio resource allocation of the LTE networks is formulated as maximum radio resource utilization based on fuzzy theory. We aim to find the UEs with higher scheduling priority and then to allocate radio resources to them. Given an eNodeB and a set of UEs, each with its corresponding Channel Quality Indicator (CQI), the scheduling efficiency of the ith UE is a function of three metrics: its exact DRX timer, its CQI index, and its scheduling latency. The CQI index 0 represents out of range due to bad signal quality or DTX. The maximum radio resource utilization is then defined as the total scheduling efficiency summed over the Resource Blocks (RBs) in downlink transmission and over the UEs served by the eNodeB. The number of RBs depends on the system bandwidth of the eNodeB, and the constraint of the formulation entails that each resource block can be allocated to only one UE. In particular, the scheduling set is a fuzzy set rather than a crisp set in the proposed scheme in order to make an appropriate decision. The resource block is the smallest entity that can be scheduled for the UEs in the frequency domain. The primary goal of the formulation is to maximize the scheduling efficiency of the system bandwidth by adjusting the fuzzy set. The appropriate fuzzy set can be determined by using three fuzzy membership functions and a low-complexity intersection function, described in the next section.
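A minimal sketch of the allocation objective: assuming the per-UE scheduling efficiencies have already been computed from the three metrics, a greedy allocator assigns each available RB to the UE with the highest efficiency, respecting the one-UE-per-RB constraint. The greedy form is our simplification of the formal optimization:

```python
def allocate_rbs(efficiencies, n_rb):
    """Greedy sketch of the maximum-utilization objective: assign each
    of the n_rb downlink resource blocks to the UE with the highest
    scheduling efficiency, one UE per RB.

    efficiencies -- dict mapping UE id to its scheduling efficiency
                    (a combination of the exact DRX timer, CQI, and
                    scheduling latency of that UE).
    Returns the ids of the UEs granted an RB, best first.
    """
    ranked = sorted(efficiencies, key=efficiencies.get, reverse=True)
    return ranked[:n_rb]
```

With a 1.4 MHz system bandwidth (6 RBs per TTI, as in the simulation of Section 7), only the six most efficient UEs would be served in a given TTI.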

5. Proposed Fuzzy-Based Power Saving Scheduling Scheme

In this section, we introduce a fuzzy-based power saving scheduling scheme to deal with the issues of radio spectrum management, power consumption, and unexpected delay for IoT over LTE networks. The proposed scheme considers not only each IoT UE's real-time requirement but also the overall system performance. We intend to allocate RBs to those UEs that have stringent real-time requirements, an exact DRX cycle, and a higher CQI. The tradeoff among these goals must be determined to achieve this. Fortunately, fuzzy theory provides a means to make approximate decisions, especially in multiobjective problems. Data in different result spaces can be combined with each other through fuzzy set operations, such as union, intersection, and complement. An approximate decision is typically made according to the highest membership values produced by the fuzzy intersection.

A block diagram of the designed fuzzy-based approach is shown in Figure 3. The proposed scheme mainly consists of two fuzzy functions, the fuzzy membership function and the low-complexity fuzzy intersection function. The IoT UE's CQI, exact DRX cycle, and guaranteed scheduling latency are considered as the scheduling metrics. The approximate scheduling decision is made according to the result of the low-complexity intersection function and is performed by the scheduler and resource allocator of the Radio Resource Management (RRM) in the LTE/LTE-Advanced networks. In the LTE networks, the RRM comprises a scheduler and a resource allocator that are responsible for UE selection and radio resource assignment, respectively [16]. The guaranteed scheduling latency and the exact DRX cycle are used to ensure each IoT UE's requirements, such as power saving and real-time demand. In addition, we maintain the overall system performance according to the CQI. The details of the three metrics as well as the fuzzy membership function, the low-complexity fuzzy intersection function, and an adaptive power control function are described as follows.

5.1. Exact DRX Cycle

In the proposed scheme, the exact DRX cycle timer is used to prevent the unexpected delay caused by the sleep mode. To achieve this goal, the eNodeB must schedule the UEs within their on duration period so that they successfully receive the scheduling information. Because the DRX parameters are configured by the eNodeB, the eNodeB has the ability to track the UE's exact DRX cycle. The maximum DRX cycle, which is considered as the timer of the short and long DRX cycle combination, is shown in Figure 4. Because of the constraint of a 50-subframe DRX-to-connected latency, the maximum DRX cycle must lie between 1 ms and 32 ms for the LTE networks. The exact DRX cycle covers the period of the maximum DRX cycle minus the opportunity-for-DRX time. In the proposed scheme, when the exact DRX cycle expires, it is restarted once the IoT UE is scheduled and allocated RBs again by the eNodeB. The RB, which consists of 12 subcarriers of 15 kHz each, is the smallest entity that can be scheduled for the UEs in the frequency domain, and the 1 ms subframe, which consists of two RBs (one per 0.5 ms slot), is the smallest in the time domain. Furthermore, network operators can adjust the length of the maximum DRX cycle to apply the scheme to the LTE-Advanced networks.
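The per-UE timer behavior described above can be sketched as follows. The countdown/restart structure is our interpretation; the 32 ms default follows from the 50 ms DRX-to-connected latency constraint for LTE:

```python
class ExactDrxTimer:
    """Per-UE exact-DRX-cycle timer (a sketch). The timer counts down
    once per 1 ms TTI; when it expires the UE should be scheduled
    within its on duration, and the timer restarts once the eNodeB
    allocates RBs to the UE again.
    """

    def __init__(self, cycle_ms=32):
        self.cycle_ms = cycle_ms
        self.remaining = cycle_ms

    def tick(self):
        """Advance one TTI; return True if the cycle has expired."""
        self.remaining = max(0, self.remaining - 1)
        return self.remaining == 0

    def on_scheduled(self):
        """Restart the cycle when the UE is allocated RBs."""
        self.remaining = self.cycle_ms
```

The guaranteed scheduling latency timer of the next subsection follows the same expire-then-restart-on-scheduling pattern, only with a different bound.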

5.2. Guaranteed Scheduling Latency

In the proposed scheme, the guaranteed scheduling latency, configured by the IoT manufacturer, is used to specify and ensure the real-time requirement of the IoT UEs. The scheduling latency, a real-time indicator, is the nonproductive time between the end of the last allocated RB and the start of the next RB, as shown in Figure 5. The scheduling latency usually consists of the context switching time and the scheduling decision time. In the proposed scheme, each IoT UE has a guaranteed scheduling latency timer. When this timer expires, it is restarted once the UE is scheduled and allocated RBs again by the eNodeB.

5.3. Channel Quality Indicator

The four-bit channel quality indicator, ranging from 0 to 15 as shown in Table 1, indicates the highest Modulation and Coding Scheme (MCS) and the maximum data rate that the UE can support with a block error ratio of less than 10%. The CQI is calculated at the UE according to its Signal to Interference plus Noise Ratio (SINR) and is regularly reported from the UE to the eNodeB. The reporting interval of the CQI lies between 2 and 160 ms [13]. A higher CQI value results in a higher data rate and a higher MCS profile. In the proposed scheme, we keep the overall system performance, determined by the CQI, as high as possible. Since a relatively large buffer is used in the eNodeB, the channel quality rather than the service traffic dominates the QoS in cellular networks.

5.4. Fuzzy Membership Function

In the proposed scheme, the goal of the membership functions is to convert the three metrics, the UE's exact DRX cycle, scheduling latency, and CQI, into degrees of membership. Recall that, in the problem formulation, we attempt to maximize the scheduling efficiency by adjusting the fuzzy set. Since this fuzzy set is related to these three metrics, we fuzzify them into memberships so that the intersection of the three metrics can be computed by the intersection function. The three fuzzy membership functions used in the scheme take as inputs, without loss of generality, the exact DRX cycle, the guaranteed scheduling latency, and the CQI of the ith IoT UE, respectively, where the maximum CQI index equals 15.
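Since the exact functional forms of the memberships are given in the paper's equations, the sketch below assumes simple linear memberships: urgency grows as the DRX or latency timer nears expiry, and the channel membership is the normalized CQI index. The linear shapes are our assumption:

```python
def mu_drx(remaining_drx_ms, cycle_ms=32):
    """Membership for the exact DRX cycle: the closer the timer is to
    expiry, the higher the urgency. Linear form is an assumption."""
    return 1.0 - remaining_drx_ms / cycle_ms

def mu_latency(elapsed_ms, guaranteed_ms=50):
    """Membership for the guaranteed scheduling latency: grows as the
    nonproductive time approaches the guaranteed bound."""
    return min(1.0, elapsed_ms / guaranteed_ms)

def mu_cqi(cqi, cqi_max=15):
    """Membership for channel quality: normalized CQI index, with
    CQI 0 (out of range / DTX) mapping to membership 0."""
    return cqi / cqi_max
```

All three functions map their metric onto [0, 1], which is what allows the intersection function of the next subsection to combine quantities from different result spaces.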

5.5. Low-Complexity Fuzzy Intersection Function

In this paper, to reduce the computational complexity of the scheduling decision, we propose a low-complexity and simple intersection approach. The scheduling decision is made according to the three metrics, the UE's exact DRX cycle, scheduling latency, and CQI. From a capacity point of view, each IoT UE's membership intersection can be regarded as a volume in the exact DRX cycle, guaranteed scheduling latency, and CQI planes. To present our idea clearly, we illustrate an example of the IoT UEs' intersections in Figure 6, in which each node presents the product of the UE's three metric memberships.
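A minimal sketch of the intersection and the resulting priority ranking, assuming the product t-norm as the low-complexity intersection operator (consistent with Figure 6, where each node is the product of the three memberships):

```python
def intersection(mu_drx, mu_lat, mu_cqi):
    """Low-complexity fuzzy intersection: the product t-norm of the
    three memberships, interpretable as a 'volume' in the DRX cycle,
    scheduling latency, and CQI planes."""
    return mu_drx * mu_lat * mu_cqi

def rank_ues(memberships):
    """memberships: {ue_id: (mu_drx, mu_lat, mu_cqi)}.
    Returns UE ids ordered by descending intersection value, i.e.,
    highest scheduling priority first."""
    return sorted(memberships,
                  key=lambda u: intersection(*memberships[u]),
                  reverse=True)
```

Note how the product punishes any single weak metric: a UE with an excellent channel but no urgency ranks below a moderately urgent UE with a moderate channel.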

The IoT UEs that are closer to the upper right side have higher priority in the scheduling decision. The number of IoT UEs that can be scheduled and allocated RBs at a given time instant depends on the available system bandwidth of the eNodeB.

5.6. Adaptive Power Control Algorithm

In the future, a large number of IoT UEs are expected to be deployed indoors, where small cells, such as femtocells, are needed to achieve high spectral efficiency and to provide better QoS for IoT UEs. In our previous study [17], we proposed an effective Adaptive Smart Power Control Algorithm (ASPCA), which can be applied to cluster IoT UEs in a coverage hole and to deal with cross-tier interference by determining an appropriate serving range for each home eNodeB without requiring complicated negotiation among them. The ASPCA not only improves the overall system performance but also takes the QoS of the UEs into account.

6. Simulation Model

In this section, we introduce a heterogeneous simulation model in which an outdoor macro eNodeB is located in the center of the map and six femto eNodeBs are deployed around it. The macro eNodeB's coverage is overlaid with the six femto eNodeBs. About 70% of the IoT UEs are randomly deployed in an indoor area, and the others are randomly deployed within 100 meters of each femto eNodeB. In addition, all IoT UEs of the macro eNodeB are randomly deployed in the Range of Interest (ROI). The main simulation parameters are summarized in Table 2. The transmission power of the femto eNodeB is 20 dBm, and that of the macro eNodeB is 46 dBm. The channel fading model, propagation model, data rate calculation, scheduling latency parameters, and power consumption model are described as follows.

6.1. Channel Fading Model

To provide realistic results, we use the Rayleigh fading channel model to reflect the effect of urban environments. Because of multipath propagation, channel fading increases the Bit Error Rate (BER). The BER is defined as the number of received bit errors divided by the total number of transmitted bits over a communication channel. The analytical BER expression for the Rayleigh channel is given in [18], where M denotes the constellation size of the adopted M-ary QAM signal, which is determined by the adopted MCS. The adopted MCS is configured according to the CQI reported by each IoT UE.

6.2. Propagation Model and Data Rate Calculation

The propagation model specified by 3GPP [19] has been considered and implemented for the urban environment as

L = 40(1 − 4 × 10⁻³ hb) log10(R) − 18 log10(hb) + 21 log10(f) + 80 dB,

where R is the separation from the cell to the user in kilometers, f is the carrier frequency in MHz, and hb is the antenna height of the cell in meters. In our simulation, a lookup table, given in Table 3, is used to map the SINR estimate to spectral efficiency [20] in order to calculate the data rate.
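The SINR-to-data-rate step can be sketched as a threshold lookup. The table entries below are illustrative placeholders standing in for Table 3, and 180 kHz is used as the bandwidth of one RB:

```python
# Illustrative SINR (dB) -> spectral efficiency (bit/s/Hz) table;
# the threshold/efficiency pairs are assumptions standing in for
# Table 3 of the paper, ordered by increasing SINR.
SINR_TO_SE = [(-6.5, 0.15), (-2.6, 0.38), (1.0, 0.88),
              (4.5, 1.48), (8.1, 2.41), (11.3, 3.32), (15.8, 4.52)]

def data_rate_bps(sinr_db, bandwidth_hz):
    """Map an SINR estimate to spectral efficiency via the lookup
    table, then scale by the allocated bandwidth to get a data rate.
    An SINR below the lowest threshold yields rate 0 (out of range).
    """
    se = 0.0
    for threshold, eff in SINR_TO_SE:
        if sinr_db >= threshold:
            se = eff
        else:
            break
    return se * bandwidth_hz
```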

6.3. Scheduling Latency Parameters

The QoS Class Identifier (QCI) has been standardized [1] as shown in Table 4, in which the packet delay budget is the maximum latency a packet may experience between the IoT UE and the eNodeB. In the simulation, a fixed 50 ms delay budget is used as the scheduling latency of the IoT UEs for performance evaluation.

6.4. Power Consumption Model

In this section, we introduce a power efficiency indicator to emulate the power consumption of the DRX mechanism. In the simulation, unlike other studies, we focus on investigating the power consumption of scheduling information reception on the PDCCH. A lookup table that maps the CQI index to power consumption is shown in Table 5. The power efficiency indicator is defined in terms of the UE's power consumption of reception and the number of bits per subframe. By defining the power efficiency, we can calculate an IoT UE's power consumption according to its signal quality. In the case with DRX, the IoT UE's power consumption is defined as the power efficiency multiplied by the number of PDCCH bits. In the case without DRX, the power efficiency multiplied by the UE's scheduling latency is used instead.
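A sketch of the indicator and the two consumption cases described above; the division form of the efficiency (reception power per bit per subframe) and the unit handling are our assumptions, consistent with the two usages in the text:

```python
def power_efficiency(p_rx, bits_per_subframe):
    """Power efficiency indicator: the UE's reception power
    consumption (looked up from its CQI, cf. Table 5) divided by the
    number of bits per subframe. The division form is an assumption."""
    return p_rx / bits_per_subframe

def consumption_with_drx(eff, pdcch_bits):
    """With DRX: efficiency times the number of PDCCH bits, since the
    UE only decodes the scheduling information in its on duration."""
    return eff * pdcch_bits

def consumption_without_drx(eff, latency_subframes):
    """Without DRX: efficiency times the scheduling latency, since the
    UE monitors the PDCCH in every subframe of the latency window."""
    return eff * latency_subframes
```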

7. Results and Discussion

In this section, we present simulation results to verify that the proposed scheme has the ability to ensure the DRX constraint and the real-time requirement and to maintain the overall system performance for IoT over the LTE/LTE-Advanced networks. First, we investigate the scheduling performance in terms of the exact DRX cycle and the guaranteed scheduling latency, as shown in Figures 7–10. After that, we examine the power consumption efficiency of the proposed scheme, as shown in Figure 11. Then we investigate the performance of the DRX mechanism in terms of average data rate, as shown in Figure 12. Next, we compare the proposed scheme with two conventional schemes, Round-Robin (RR) and Max-C/I, as shown in Figure 13. Finally, we show that the requirements of the exact DRX cycle and the guaranteed scheduling latency can be satisfied, as shown in Figure 14.

In our simulation, the CQI does not vary, since low mobility of the IoT UEs is assumed and they are deployed in an indoor environment; hence no fast-fading effects are modeled. In addition, a 32 ms exact DRX cycle, a 50 ms guaranteed scheduling latency, and a 1 ms inactivity timer are used in the normal case. In the multi-DRX-cycle case, we examine the impact of six different DRX cycles, 32 ms, 16 ms, 8 ms, 4 ms, 2 ms, and 1 ms, which are randomly assigned to the IoT UEs.

Our simulator generates the three metrics, the CQI, the guaranteed scheduling latency, and the exact DRX cycle, as shown in Figures 7, 8, and 9. The results of the intersections of these three metrics are shown in Figure 10, in which we can clearly see that there are few intersections at about 32 ms, since a fixed 32 ms exact DRX cycle is used. Even though a fixed 50 ms guaranteed scheduling latency is adopted, the scheduling decision is still dominated by the short exact DRX cycle due to its more stringent time restriction. Comparing Figure 6 with Figures 7 and 8, we can observe that even though the outdoor UEs of femtocell 0, UE 4, UE 8, UE 16, UE 17, UE 18, and UE 19, have bad CQI, they can still meet their constraints of the guaranteed scheduling latency and exact DRX cycle. In addition, it should be noted that only 6 RBs can be allocated to IoT UEs per TTI due to the intentionally narrow 1.4 MHz system bandwidth.

A comparison of the average power consumption with and without the DRX mechanism is shown in Figure 11. The figure shows that the proposed scheme can save about half of the energy consumption compared to the case without the DRX mechanism.

A comparison of the single DRX cycle and the multi-DRX cycle is shown in Figure 12. We can observe that the multi-DRX cycle yields a more stable data rate than the single DRX cycle. Evidently, the diversity of metrics, such as multiple DRX cycles and multiple scheduling latencies, helps our scheme make better decisions.

The throughput comparison of the proposed scheme, RR, and the Max-C/I scheme is shown in Figure 13. Since the Round-Robin scheme serves the IoT UEs in turn without taking the instantaneous channel quality into account, it yields the worst throughput. Because the Max-C/I scheme prefers to serve the UEs with better channel conditions, it yields the best throughput but loses fairness. The proposed scheme provides both good throughput and fairness for the IoT UEs.

Finally, the satisfaction of the proposed scheme is shown in Figure 14, which shows that the requirements of the exact DRX cycle and the guaranteed scheduling latency can be met by our fuzzy-based power saving scheduling scheme.

8. Conclusions

In this paper, we proposed a fuzzy-based power saving scheduling scheme for IoT over the LTE/LTE-Advanced networks to deal with the issues of radio resource management and power consumption as well as the unexpected delay caused by the DRX mechanism from the scheduling and resource allocation perspective. The simulation results show that our scheme has the ability to balance the tradeoff among the three individual metrics and the overall system performance. Hence, the IoT UEs' requirements can be guaranteed. In addition, we examined the performance of the multi-DRX cycle and found that the diversity of the system metrics helps the fuzzy-based approach make better decisions. The proposed scheme can be readily applied to IoT over the LTE/LTE-Advanced networks in practice.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This work was partially supported by Ministry of Science and Technology, Taiwan, under Grants nos. NSC 102-2221-E-008-039-MY3 and 104-2221-E-008-039-MY3.