Abstract

With the rapid growth of Internet traffic and smart mobile terminals, ultradense networks have been adopted as a key technology of the fifth generation to enhance resource utilization and content distribution, while also causing a serious energy efficiency problem. Mobile edge computing has recently drawn great attention for its advantages in reducing transmission delay and network energy consumption by implementing caching and computing capabilities at the edge of mobile networks. To improve network energy efficiency and content transmission, in this paper, we propose a novel energy-efficient hierarchical collaborative scheme that considers in-network caching, request aggregation, and the joint allocation of caching, computing, and communication resources in a layered heterogeneous network comprising mobile users, small base stations, macro base stations, and the cloud. We formulate the energy consumption problem as a queuing theory-based centralized model, where identical content requests can be aggregated in the queue of each base station. Then, the optimal solution is analyzed based on the distribution characteristic of content popularity at the base stations. Simulation results show that our proposed model performs much better than existing cloud-edge cooperation solutions that do not consider the deployment of caching resources and request aggregation policies.

1. Introduction

With the rapid growth of Internet traffic driven by multimedia services (e.g., YouTube and Netflix) and smart mobile terminals (e.g., smartphones and tablets) [1], ultradense networks have become a key technology of the fifth-generation (5G) mobile communication system, as they can efficiently improve resource utilization and the efficiency of content distribution [2]. However, the resulting energy consumption problem has become increasingly prominent, caused by the dense deployment of base stations (BSs) and the strict quality-of-service (QoS) requirements of mobile traffic. It is therefore urgent and challenging to enhance network energy efficiency while ensuring the efficiency of content transmission [3, 4].

Although cloud computing improves content delivery by shortening the distance between content requesters and providers, it brings extra deployment and operation costs and huge energy consumption [5, 6]. As a lightweight extension of cloud computing, mobile edge computing (MEC) can further reduce transmission delay and network energy consumption by implementing caching and computing capabilities at the edge of mobile networks, e.g., at base stations (BSs), access routers, and switches [7]. Considering the obvious advantages but limited service capacity of MEC compared with cloud computing, hierarchical cloud-edge cooperation schemes have recently drawn great attention from academia and industry as a way to improve the energy efficiency of mobile networks and content delivery [8].

Cloud-edge collaboration solutions were initially explored in a two-layer network of edge devices and the cloud, ignoring the computing and caching capabilities of the BSs [9–11]. Zhang et al. [12] investigate energy-saving wireless backhaul bandwidth allocation and power allocation in heterogeneous small cellular networks and propose an optimal energy efficiency model, which can be solved iteratively. Distributed frameworks and centralized processing capabilities have then been developed to achieve green networks, in particular by reducing energy consumption through BS sleep policies. Qi and Wang [9] discuss the interference-aware user association problem under cell sleeping for heterogeneous cloud cellular networks. Han et al. [13] propose an energy sharing-based joint energy and user allocation method between macro base stations (MBSs) and small base stations (SBSs) in a heterogeneous network. Wang et al. [14] consider improving network performance in mobile networks by the joint management and allocation of caching, computing, and communication (3C) resources. Li et al. [15] propose an energy-efficient resource allocation scheme by orchestrating delay-sensitive tasks in a resource-constrained cloud-edge-end collaboration system. However, most current works mainly focus on the cooperative optimization of only two of the three kinds of resources above [7, 16]. Kai et al. [17] design a cooperative computation offloading policy under constrained serving and transmitting capabilities, where tasks are partially conducted at the mobile devices, edge servers, and the cloud. Chen et al. [18] develop a reinforcement learning-based resource allocation framework, which can balance energy consumption and network latency while satisfying power and computation constraints. Xu et al. [19] propose an IC-IoT network architecture based on the software-defined networking paradigm and use a deep Q-network (DQN) model to optimize the allocation of computing and cache resources for IC-IoT processes. Zhang et al. [20] formulate the joint offloading and resource allocation problem as a Markov decision process (MDP) to maximize the number of offloaded tasks while reducing energy consumption.

Although cloud-edge cooperation can improve energy efficiency and content distribution, most works study a three-tier topology of mobile users (MUs), BSs, and the cloud without considering the influence of in-network caching, request aggregation [21–24], and network heterogeneity, and mainly focus on the joint allocation of two kinds of resources. In this paper, we propose an energy-efficient hierarchical collaborative solution to improve content delivery, where in-network caching, request aggregation, and the cooperative allocation of 3C resources are analyzed in a layered heterogeneous network comprising MUs, SBSs, MBSs, and the cloud. The main contributions of the paper are as follows. (i) We formulate the energy efficiency problem as a queuing theory-based centralized model in a cloud-edge cooperation network, where in-network caching is considered and identical content requests can be aggregated in the queue of each base station; energy consumption is minimized by jointly optimizing the allocation of 3C resources while ensuring the QoS of content delivery. (ii) We analyze the minimal energy consumption problem based on the distribution characteristic of content popularity and identify the key factors that affect network performance. (iii) We evaluate the proposed energy efficiency model in heterogeneous network environments; simulation results show that our proposed solution performs much better than existing cloud-edge cooperation schemes that do not consider the deployment of caching resources and request aggregation policies.

The rest of this paper is organized as follows. In Section 2, the energy models of different network components are presented. In Section 3, the minimal energy consumption problem is formulated and analyzed. Simulation results are presented and discussed in Section 4. Finally, we conclude this study in Section 5.

2. System Model

In this section, we formulate the cloud-edge cooperation problem to minimize energy consumption and improve content delivery by jointly allocating 3C resources in a mobile network. Figure 1 describes a cloud-edge cooperation scenario in which both MBSs and SBSs have caching and computing capabilities. Content requests from an MU can be satisfied by its connected BS and the cloud, in that order. As shown in Figure 1, the total energy consumption consists of that of the MUs, the BSs, the cloud, and the network links, where the energy models of the different BSs and their attached MUs are discussed separately.

2.1. Energy Model of Mobile Users

We assume that there are M MBSs in the network, and that the i-th MBS is directly accessed by S_i SBSs and U_i MUs. U_{i,j} denotes the number of MUs connected to the j-th SBS of the i-th MBS, and F is the number of different network contents available in our system. Thus, the total energy consumption of the MUs, E_MU, can be written as the sum of the per-request energy over all MUs attached to the SBSs and all MUs attached directly to the MBSs, which are derived in the following two subsections.

2.1.1. Energy of Mobile Users in SBSs

The energy that the k-th user of the j-th SBS consumes for a request for content f, denoted E^S_{j,k,f}, is the product of its power and the delay between sending this request and receiving the corresponding data, which can be written as

E^S_{j,k,f} = P^S_{j,k} [ a_{j,f} T^S_{j,f} + (1 − a_{j,f}) b_{i,f} T^M_{i,f} + (1 − a_{j,f})(1 − b_{i,f}) T^C_f ],

where P^S_{j,k} is the power consumed by the k-th user of the j-th SBS for the content request, a_{j,f} and b_{i,f} are boolean variables indicating whether the request for content f is satisfied at the j-th SBS and the i-th MBS, respectively, x_{j,f} is equal to 1 if the j-th SBS buffers content f and 0 otherwise, y_{i,f} is equal to 1 if the i-th MBS caches content f and 0 otherwise, and T^S_{j,f}, T^M_{i,f}, and T^C_f are the delays for the content request to be satisfied at the j-th SBS, the i-th MBS, and the cloud, respectively.

To solve (3), a new queuing model with different service rates and arrival rates is designed, in which in-network caching and request aggregation are considered [3, 25]. In our proposed queuing model, identical requests arriving at a network node can be aggregated into a single request that is forwarded to fetch the corresponding content. If a node stores the requested data in its cache, the content is distributed to the end-users along the routing path in the opposite direction. Therefore, both in-network caching and request aggregation reduce the response time of content requests and further improve the energy efficiency of our system. We assume that λ^S_j, μ^S_j, and c^S_j are the request arrival rate, the service rate, and the number of servers of the j-th SBS after the introduction of the in-network caching and request aggregation policies [3], respectively. Therefore, the utilization of the j-th SBS can be written as ρ^S_j = λ^S_j / (c^S_j μ^S_j).
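
The aggregation behavior described above can be sketched as a pending-request table kept at each BS, in the spirit of a pending interest table; the class and method names below are illustrative, not from the paper.

```python
class RequestAggregator:
    """Aggregate simultaneous requests for the same content at a BS.

    Only the first request for a content still in flight is forwarded
    upstream; later duplicates wait locally and are answered together
    when the content arrives (a sketch, not the paper's exact policy).
    """

    def __init__(self):
        self.pending = {}  # content id -> list of waiting requester ids

    def request(self, content_id, requester_id):
        """Record a request; return True if it must be forwarded upstream."""
        if content_id in self.pending:
            self.pending[content_id].append(requester_id)
            return False  # aggregated with an in-flight request
        self.pending[content_id] = [requester_id]
        return True

    def deliver(self, content_id):
        """Content arrived: answer all aggregated requesters at once."""
        return self.pending.pop(content_id, [])
```

In this sketch, three MUs requesting the same content while it is in flight generate only one upstream request, which is how aggregation lowers the effective arrival rate seen by the upper layers.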

The probability that a request from an MU of the j-th SBS has to queue at this SBS because all of its servers are occupied can be written as

P^que_j = (c^S_j ρ^S_j)^{c^S_j} / (c^S_j! (1 − ρ^S_j)) · π_{j,0},

where π_{j,0} is the steady-state probability that zero request tasks exist in the j-th SBS.

Therefore, on the basis of (4) and (5), the response time for a content request to be satisfied at the j-th SBS can be written as

T^rsp_j = 1/μ^S_j + P^que_j / (c^S_j μ^S_j − λ^S_j),

where μ^S_j, λ^S_j, and c^S_j are the service rate, arrival rate, and number of servers of the j-th SBS, and P^que_j is the queueing probability given in (5).
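
Under standard M/M/c assumptions, the queueing probability of (5) is the Erlang C formula, and the mean response time of (6) follows from it. A minimal sketch, using generic names (lam, mu, c) in place of the paper's symbols:

```python
from math import factorial

def erlang_c(c, rho):
    """Probability that an arriving request must queue in an M/M/c
    system with c servers and utilization rho = lam / (c * mu)."""
    a = c * rho  # offered load in Erlangs
    # steady-state probability of an empty system (pi_0)
    pi0 = 1.0 / (sum(a**k / factorial(k) for k in range(c))
                 + a**c / (factorial(c) * (1 - rho)))
    return (a**c / (factorial(c) * (1 - rho))) * pi0

def mean_response_time(lam, mu, c):
    """Mean time in system: service time plus expected waiting time."""
    rho = lam / (c * mu)
    assert rho < 1, "queue must be stable"
    return 1.0 / mu + erlang_c(c, rho) / (c * mu - lam)
```

As a sanity check, for c = 1 this reduces to the M/M/1 response time 1/(μ − λ), so mean_response_time(0.5, 1.0, 1) evaluates to 2.0.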

When content f is stored at the j-th SBS, the total delay for an MU to obtain this content consists of the uplink and downlink transmission delays between the j-th SBS and the MU, the execution time, and the response delay in the j-th SBS, where R^up_j and R^down_j indicate the uplink and downlink transmission rates between the j-th SBS and the accessed MU, f^S_j denotes the CPU clock speed of the j-th SBS, n_f is the number of CPU cycles needed to process the task for content request f, s_f is the size of content f, and α_f is the proportion of the requested data in the total traffic generated by the task for content request f.
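
The additive delay composition above can be sketched as follows; all parameter names are placeholders for the paper's symbols, and the response time is the queueing term from (6).

```python
def sbs_delay(req_size, content_size, r_up, r_down, cycles, cpu_speed, resp_time):
    """Total delay when the content is served by the SBS: uplink request
    transmission + queueing/response at the SBS + execution + downlink data."""
    return (req_size / r_up            # send the request uplink
            + resp_time                # response (queueing) delay at the SBS
            + cycles / cpu_speed       # CPU execution time for the task
            + content_size / r_down)   # transmit the content downlink
```

For example, a 1-unit request over a rate-1 uplink, a 0.5 response delay, 100 cycles at clock speed 100, and 10 units of content over a rate-10 downlink give a total delay of 3.5 time units.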

Similarly, we assume that λ^M_i, μ^M_i, and c^M_i are the request arrival rate, the service rate, and the number of servers of the i-th MBS after the introduction of the in-network caching and request aggregation policies, respectively, and that λ^C, μ^C, and c^C are the request arrival rate, the service rate, and the number of servers of the cloud under in-network caching and request aggregation. Thus, the total delay of a content request from an SBS that is satisfied at the i-th MBS or the cloud can be written analogously to the SBS case, where R^up_{j,i} and R^down_{j,i} are the uplink and downlink transmission rates between the j-th SBS and the i-th MBS, R^up_C and R^down_C are the uplink and downlink transmission rates between the MBSs and the cloud, f^M_i and f^C are the CPU clock speeds of the i-th MBS and the cloud, and T^rsp_i and T^rsp_C are the response times for a content request to be satisfied at the i-th MBS and the cloud, respectively.

2.1.2. Energy of Mobile Users in MBSs

Based on the above analysis, the energy for a content request consumed by the k-th user of the i-th MBS, E^M_{i,k,f}, and its corresponding delay can be written in the same way, where P^M_{i,k} is the power consumed by the k-th MU of the i-th MBS when obtaining content f.

2.2. Energy Model of Base Stations

In the mobile network, the energy consumption of the BSs can be written as the running time multiplied by the total power of all BSs, where t represents the running time of the system, and P^S_j and P^M_i are the power consumption of the j-th SBS and the i-th MBS, respectively.

2.2.1. Energy of SBSs

The total power consumption of the j-th SBS can be written as the sum of its traditional power and cache power, where P^S_{tr,j} and P^S_{ca,j} represent the traditional power and the cache power at the j-th SBS, respectively.

The traditional power consumption of the j-th SBS can be written in terms of its load-dependent transmission power, where γ_j and R^max_j are the signal-to-noise ratio and the maximal transmission rate, q_{j,f} is the number of network requests for content f at the j-th SBS, and P^S_{0,j} and Δ_j are the static power consumption of the j-th SBS in the active mode and its corresponding slope parameter [26].

The cache power at the j-th SBS consists of two parts, the cache retrieval power and the content caching power, where P^re_{j,f} denotes the retrieval power consumption for content f in the buffer of the j-th SBS, and w_ca is the power efficiency parameter, which depends on the storage hardware technology [27].
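
As a sketch, the SBS power described above combines a static term, a load-proportional transmission term, and a cache term (retrieval power plus storage power proportional to the cached data volume); the parameter names are illustrative stand-ins for the paper's symbols.

```python
def sbs_power(p_static, slope, tx_power, retrieval_power, w_ca, cached_bits):
    """Total SBS power: traditional power (static + slope * transmit)
    plus cache power (retrieval + storage proportional to cached bits)."""
    traditional = p_static + slope * tx_power
    cache = retrieval_power + w_ca * cached_bits
    return traditional + cache
```

The same linear structure applies to the MBS power model in the next subsection, with MBS-specific parameter values.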

2.2.2. Energy of MBSs

Similarly, the total power consumption of the i-th MBS can be written as the sum of its traditional power and cache power, where P^M_{tr,i} and P^M_{ca,i} are the traditional power and the cache power consumption of the i-th MBS, respectively.

Therefore, the expressions of P^M_{tr,i} and P^M_{ca,i} take the same form as those of the SBS, where γ_i and R^max_i are the signal-to-noise ratio and the maximal transmission rate, q_{i,f} is the number of network requests for content f at the i-th MBS, P^M_{0,i} and Δ_i are the static power consumption of the i-th MBS in the active mode and its corresponding slope parameter, and P^re_{i,f} is the retrieval power consumption for content f in the buffer of the i-th MBS.

2.3. Energy Model of Cloud

The energy model of the cloud consists of its static power and the power for processing the requests that cannot be satisfied by the MBSs, where P^re_{C,f} is the retrieval power consumption for content f in the cloud.

2.4. Energy Model of Network Wired Links

As shown in Figure 1, the total energy consumption of the wired links can be written as the sum of two parts, where P^link_{j,i} is the power consumption of transmitting traffic between the j-th SBS and the i-th MBS, and P^link_{i,C} is the power consumption of distributing traffic between the i-th MBS and the cloud [27, 28].

3. Problem Formulation and Analysis

In this section, we formulate the minimal energy consumption problem as a cooperative cloud-edge resource allocation model for content delivery services. Then, this model is analyzed theoretically to present how to obtain the optimal solution.

3.1. Problem Formulation

Based on the energy models presented in Section 2, the energy-efficient hierarchical collaborative problem can be formulated as Eq. (21), where C^M_i and C^S_j are the maximal caching capacities of the i-th MBS and its accessed j-th SBS, respectively. The optimization objective of Eq. (21) is therefore to minimize the energy consumption of the system by designing optimal caching and routing strategies under limited caching, computation, and communication capacities.

In the constraints, the first group imposes the maximal cache-size limits of the SBSs and MBSs, the second group bounds the utilization of the SBSs, MBSs, and the cloud, the boolean variables related to the caching decisions are restricted to 0 or 1, and the proportion of the requested data in the total traffic generated by a request task is limited to between 0 and 1.

3.2. Model Analysis

In this part, we analyze the optimal solution of the proposed hierarchical energy consumption problem (21) based on the distribution characteristic of content popularity, which will provide a benchmark for online solutions to obtain near-optimal results in mobile heterogeneous networks.

We assume that network content popularity follows Zipf's distribution and that the BSs cache network contents according to the rank of content popularity [29]. Zipf's law is a statistical distribution observed in certain data sets; it states that the relative request probability of the r-th most popular content is p(r) = r^{−α} / Σ_{n=1}^{F} n^{−α}, where F is the number of different network contents and α is the skewness factor. A large value of α indicates that more requests are sent for the popular data. In our model, vertical collaborative caching between an MBS and the SBSs attached to it is used to optimize the cache hit rate and improve energy efficiency, which means that the most popular contents are stored in the SBSs and the less popular ones are cooperatively cached in their upper MBS. Therefore, the utilization of the j-th SBS and the i-th MBS under optimal caching can be written as in (22) and (23), where α^S_j and α^M_i are the skewness factors of content popularity at the j-th SBS and the i-th MBS. The number of requests for popular contents arriving at the BSs increases with the skewness factor.
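
A minimal sketch of the Zipf popularity model above, together with the cache hit rate obtained when a BS stores the top-ranked contents (function names are illustrative):

```python
def zipf_popularity(num_contents, alpha):
    """Request probability of each content, ranked from most popular
    (rank 1) to least popular, under Zipf's law with skewness alpha."""
    norm = sum(n ** -alpha for n in range(1, num_contents + 1))
    return [r ** -alpha / norm for r in range(1, num_contents + 1)]

def hit_rate(num_contents, alpha, cache_size):
    """Cache hit rate when the cache_size most popular contents are stored."""
    return sum(zipf_popularity(num_contents, alpha)[:cache_size])
```

With alpha = 0 the distribution is uniform and a cache holding 10% of the contents yields a 10% hit rate; larger alpha concentrates requests on the head of the ranking, which is why a small edge cache can satisfy most requests.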

Based on the rewritten utilization expressions in (22) and (23), the minimal delay can be obtained under optimal caching. Similarly, we can rewrite the formulas related to network power in Section 2 on the basis of content popularity. Therefore, the minimal energy consumption can be achieved to improve content delivery in the system.

4. Simulation and Results

In this section, we evaluate the performance of our proposed model in heterogeneous scenarios with respect to, e.g., cache size, content popularity, the number of different contents, and the arrival rate of network requests. In the simulation, the cache size of a BS is abstracted as a proportion, i.e., its size relative to the total number of different contents [30, 31]. The content popularity follows the Zipf distribution with varying skewness factors [29, 32]. In addition, the request arrival rate of an MBS is twice that of its accessed SBSs. The simulation demonstrates the advantages of the proposed solutions "Offline with Cache" and "Online with Cache" over the "OPT without Cache and Aggregation" and "OPT without Cache" schemes. "Offline with Cache" and "Online with Cache" are the corresponding offline and online models of Eq. (21), which operate under limited 3C resource capacities using the optimal and least recently used (LRU) caching policies, respectively, while considering request aggregation. "OPT without Cache and Aggregation" denotes the existing cloud-edge cooperation scheme that considers neither the deployment of caching resources nor request aggregation, while request aggregation is adopted in "OPT without Cache" [3, 33]. The advantages of in-network caching and aggregation have been discussed in Section 2.1. Through the performance comparison, we can assess their respective influence on energy consumption.
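
"Online with Cache" relies on LRU eviction; a minimal sketch of the LRU behavior assumed in the simulation, built on Python's OrderedDict (the class name is illustrative):

```python
from collections import OrderedDict

class LRUCache:
    """Least-recently-used cache with a fixed capacity (in contents)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # content id -> cached flag, oldest first

    def get(self, content_id):
        """Return True on a hit and refresh the entry's recency."""
        if content_id not in self.store:
            return False
        self.store.move_to_end(content_id)
        return True

    def put(self, content_id):
        """Insert a content, evicting the least recently used one if full."""
        if content_id in self.store:
            self.store.move_to_end(content_id)
            return
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)  # evict the LRU entry
        self.store[content_id] = True
```

Unlike the offline policy, which pins the optimally chosen popular contents, LRU adapts online to the observed request stream, which explains the performance gap between the two schemes in the figures below.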

Figure 2 shows the network energy consumption of the four solutions under different cache sizes. As shown in Figure 2, our proposed model performs much better than "OPT without Cache" and "OPT without Cache and Aggregation," where the energy consumption of the latter always remains the same. The reason is that no caches are deployed in the access networks, so all requests are routed to the cloud to fetch data. Because the same content requests can be aggregated at the BSs, the network delay of "OPT without Cache" is significantly better than that of "OPT without Cache and Aggregation." As the cache size increases, more popular contents are buffered at the network edge, which reduces the performance gap between "Offline with Cache" and "Online with Cache." With the growth of cache size, the energy efficiency of the BSs in the proposed model converges to a stable state, because additionally cached unpopular contents have little effect on the energy efficiency of the system.

Figure 3 shows the network energy consumption of the four solutions under different content popularity. As shown in Figure 3, we can see that content popularity has no effect on the performance of “OPT without Cache and Aggregation,” because each request must be routed to the cloud to obtain the corresponding content. As content popularity grows, energy efficiency is improved in “Offline with Cache” and “Online with Cache.” The reason is that a larger Zipf skewness parameter means more popular contents are requested by the MUs, which makes the majority of requests directly satisfied by the cached data in the SBSs and MBSs. Moreover, the performance of “OPT without Cache” is improved as content popularity increases. The reason is that more content requests are aggregated in the queue.

Figure 4 shows the network energy consumption of the four solutions when the number of different contents varies. As shown in Figure 4, the energy consumption of our proposed solution increases with the number of different contents. The reason is that a larger number of different contents means more unpopular contents are requested by the MUs, which leaves more requests unsatisfied at the BSs with limited cache capacity and forces them to obtain their contents from the cloud. Besides, larger content diversity narrows the performance gap between "Offline with Cache" and "Online with Cache," because the increase in requests for unpopular contents reduces the impact of caching resources on network performance to a certain extent. Compared with in-network caching, the growth of content diversity has a limited effect on request aggregation, which makes the energy consumption of "OPT without Cache" increase slowly. Because each request fetches the corresponding content from the cloud in "OPT without Cache and Aggregation," its performance is not affected by the number of different contents.

Figure 5 shows the network energy consumption of the four solutions under different request arrival rates. As shown in Figure 5, the energy efficiency of the four schemes declines as the request arrival rate increases, because larger queuing delays result in higher energy consumption. Because popular contents are always cached at the network edge in "Offline with Cache," the performance gap between "Offline with Cache" and the other solutions enlarges with the growth of the request arrival rate. The gap between "OPT without Cache" and "OPT without Cache and Aggregation" shows a similar trend, because the effect of request aggregation improves as the arrival rate increases.

5. Conclusions

In this paper, we propose a novel energy-efficient hierarchical collaborative model for content delivery services that considers in-network caching, request aggregation, and the cooperative allocation of caching, computing, and communication resources in a layered heterogeneous network of MUs, SBSs, MBSs, and the cloud. Firstly, we formulate the energy efficiency problem as a centralized model, which can aggregate identical content requests in the queue of each BS and achieve minimal energy consumption by jointly optimizing the different resources on the basis of cloud-edge cooperation and the request queue. Then, the optimal energy efficiency problem is analyzed based on the distribution characteristic of content popularity. Simulation results show that our proposed model performs much better than existing cloud-edge cooperation schemes that do not consider the deployment of caching resources and request aggregation policies.

In future work, we will design a more efficient online scheme to approximate the optimal solution, e.g., predictive and proactive caching of network contents. Besides, we intend to develop a weighted model of network power and delay to achieve a better tradeoff between them. Moreover, the mobility behavior of network users will be considered to improve our proposed model. Finally, the full-dimensional collaboration problem will be investigated to verify the system performance in more complex network environments.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work is partially supported by the Scientific Research Plan of Beijing Municipal Commission of Education under Grant KM201910005026, Beijing Nova Program of Science and Technology Z191100001119094, and Beijing Natural Science Foundation L202016.