Abstract

Live streaming services usually deliver content via mobile edge computing (MEC) to reduce network latency and save backhaul capacity. Given the limited resources, MEC servers must collaborate with each other and form an overlay to realize more efficient delivery. The critical challenge is how to optimize the topology among the servers and allocate the link capacity so that the cost is minimized under delay constraints. Previous approaches rarely consider server collaboration for live streaming services, and the scheduling delay is usually ignored in MEC, leading to suboptimal performance. In this paper, we propose a popularity-guided overlay model which takes the scheduling delay into consideration and utilizes MEC collaboration to achieve efficient live streaming service. The links and servers are shared among all channel streams, and each stream is pushed from cloud servers to MEC servers via trees. Since the optimization problem is NP-hard, we propose an effective optimization framework called cost optimization for live streaming (COLS), which predicts channel popularity with an LSTM model fed with multiscale input data. Finally, we compute the topology graph with a greedy scheme and allocate the capacity with convex programming. Experimental results show that the proposed approach achieves higher prediction accuracy and reduces the capacity cost by more than 40% with an acceptable delay compared with state-of-the-art schemes.

1. Introduction

With the rapid popularization of smart devices, Internet traffic has undergone explosive growth [1], and almost 82% of all network traffic comes from video [2]. The increasing traffic puts enormous pressure on cloud data centers and complicates server optimization [3], especially for latency-sensitive services such as live streaming. To address this, mobile edge computing (MEC) has been introduced as a new technology for live streaming services to reduce network latency and relieve backhaul capacity pressure [4]. Internet service providers (ISPs) place MEC servers at the network edge so that users can visit these nearby servers instead of remote cloud servers and get a better experience. Due to the limited resources of a single MEC server, multiple servers usually collaborate with each other and form an overlay to deliver the content [5]. The cost of deploying such an overlay mainly comes from the link capacity cost (when an overlay is first deployed, there is an additional server cost for purchasing hardware; compared with the link capacity cost, it is a one-time deployment cost, so we ignore it in this paper. We assume that servers' hardware resources, i.e., computing and upload capacities, are sufficient and will not be an optimization bottleneck). When the link capacity is higher, the source-to-end delay, defined as the time elapsed from the cloud server to the MEC server, is lower, but the cost increases rapidly. A critical problem is how to construct the topological graph among these servers and allocate the bandwidth capacity to each link so that the cost is minimized while the delay stays under a certain bound.

Existing works fail to address the problem above and mainly suffer from the following deficiencies.

In the MEC environment, most works focus on optimizing static content [6, 7], such as video-on-demand (VoD) streaming or image caching, which is delay insensitive. These approaches usually fail to address the interests of MEC application providers or pursue different objective functions, leading to improper results when applied to cost optimization for live streaming.

Some works optimize resource allocation by predicting the popularity of contents [8-10], considering that a few popular videos usually contribute most of the bandwidth consumption [11]. However, for a similar reason, these models usually focus on predicting the popularity of static content, so it is still hard to meet the real-time requirements of live streaming.

Although the server's scheduling delay is a considerable factor in any optimization strategy, it is usually ignored or insufficiently modeled in most existing works [12-14], which renders these models inaccurate and inefficient in practice.

To tackle the above challenges, we construct a multisource multichannel overlay model which covers popularity prediction, topology generation, and link bandwidth allocation. We combine a deep neural network with a mathematical model to optimize the overlay deployment cost: we first predict the popularity of each live channel with an LSTM model and identify which channels each MEC server should subscribe to; then, we compute the topology graph and allocate the link capacity by mathematical optimization methods.

In the proposed approach, each channel stream has a heterogeneous rate that is constant during transmission, and all packet lengths are equal (the channel rate and the packet length may vary in reality, but they are not variables in our model and do not affect the problem solving; hence, we simplify them). Without loss of generality, we suppose that a channel can only originate from one source, while a cloud server can be the source of multiple channels. A subscriber (defined as a MEC server which subscribes to channels) can receive a channel stream from a cloud server, from another MEC server subscribing to the same channel, or from a helper (a MEC server which forwards unsubscribed channel streams). Therefore, some nodes (helpers) can be included in or excluded from a channel's transport path, and all channel trees combine into a mesh.

In practice, the link cost is usually charged by the maximum rented capacity. The source-to-end delay actually consists of four types of delay: link propagation delay, server transmission delay, server processing delay, and server queuing delay. More specifically, the link propagation delay is the time a packet travels over a physical connection, usually reflected by the round-trip time. The server processing delay can be omitted when the computing resource is sufficient, as mentioned above. We then combine the server transmission delay and the server queuing delay into the server scheduling delay, defined as the time a packet is held on a server until it is completely transmitted out. In this way, the source-to-end delay consists of the link propagation delay and the server scheduling delay. Notably, a high scheduling delay can cause network congestion, so it must be taken into consideration.

Figure 1 depicts an example of the proposed overlay model, in which cloud (source) servers and MEC servers are placed in different regions. Cost optimization is carried out by a central optimizer which continuously collects network parameters and sends control message flows. The workflow is shown in Figure 2. The optimizer predicts the popularity of live channels and decides the subscribed list of each MEC server. Based on the obtained information and the delay requirements, the optimizer computes a topology graph with the least deployment cost. Finally, the optimizer informs the MEC servers of the optimized information, i.e., the subscribed lists, the topology graph, and the link capacities. Once the MEC servers receive the control messages, they collaborate with each other and form an overlay: in the example, one channel is delivered to two MEC servers and another channel to three. One channel stream is pushed to a distant MEC server through forwarding by an intermediate server that does not subscribe to the channel but relays its data as a helper, and a single link may transport two channels simultaneously.

In summary, we make the following key contributions:
(i) Aiming at live streaming services, we formulate the optimization problem and construct a multisource multichannel overlay model in which the MEC servers collaborate with each other. In addition, the scheduling delay is taken into consideration to construct an optimized topology graph among MEC servers, achieving a lower link capacity cost under delay constraints.
(ii) Instead of using a fixed live channel popularity for the optimization, we utilize a state-of-the-art LSTM model to learn the features of historical streaming data for adaptive and accurate popularity prediction. Furthermore, we also take the time and weekday information into consideration, employing multiscale input data to make the prediction more accurate and robust.
(iii) Cost optimization for live streaming (COLS) is proposed as a complete and efficient cost optimizer framework, which considers the whole systematic flow of the optimization in logical order, including accurate key parameter prediction, comprehensive overlay model formulation, optimal topology generation, and efficient capacity allocation. COLS achieves a lower cost in polynomial time while meeting the delay constraint.

The remainder of this paper is organized as follows. After reviewing related works in Section 2, we elaborate on the popularity prediction of live channels in Section 3. Following the mathematical formulation in Section 4, we present the topology computation in Section 5. Illustrative experiment results are given in Section 6. Finally, we conclude in Section 7.

2. Related Work

In this section, we review related works in the areas of MEC collaboration, popularity prediction, and scheduling delay model.

MEC collaboration. In the MEC environment, many works [6, 7] realize collaboration mechanisms among servers to achieve higher efficiency. For example, the approach proposed in [6] utilizes collaboration between MEC servers to cache static content in spare time (e.g., at midnight), which is impractical for live streaming services. Since these optimization methods aim to allocate resources efficiently, they usually have different optimization objectives in resource utilization and cannot be applied to live streaming services with real-time requirements.

Recently, some specially designed resource optimization methods [15-18] have been proposed for live streaming services. For instance, CCAS [15] proposes an auction-based algorithm to optimize the backhaul capacity and the caching space so as to improve live video quality. Zhang et al. [16] model the computational and wireless spectrum resources in edge-cloud networks and propose a Markov decision process to decrease the latency of live streaming services. Nevertheless, these approaches usually focus on resource optimization for a single MEC server and rarely consider collaboration among multiple servers, which potentially results in poor performance, such as network congestion, when user requests are high.

Popularity of video streams. As popularity is a key parameter, some works predict the popularity of videos by analyzing image frames. For example, TLRMVR [8] proposes a novel low-rank multiview embedding learning method to predict the popularity of micro-videos. MMVED [9] combines multiple features (image frames, acoustic, and textual information) and considers the randomness of the mapping from data to popularity. Although these approaches achieve efficient prediction, they usually target static, complete files and need to parse entire image frames, which is impractical for live streaming. Inspired by the success of deep learning techniques, Deepcache [19] predicts the popularity with an LSTM encoder-decoder to cache contents smartly. BSPP [10] presents a model for predicting the number of user requests based on a Markov model in MEC and further designs an offloading scheme based on this model. Although effective, these approaches utilize single-dimensional data to predict the popularity, which lacks sufficient robustness against random noise and outliers.

On the other hand, some approaches [20, 21] use both the popularity and the retention rate of video streams to maximize the video bitrate for efficient bandwidth utilization. Distinguished from these approaches, which emphasize bitrate adaptation, this paper focuses on topology optimization for the cooperation of MEC servers while additionally considering the scheduling delay to achieve a lower link capacity cost under delay constraints. Since our approach and these methods have different focuses, they can be combined to achieve better live streaming performance.

Scheduling delay model. Among mathematical models of overlays, most existing works [12-14] rarely formulate the relationship between the link capacity and the server scheduling delay. BSUM [12] considers the scheduling delay and constructs a topology graph among MEC servers to optimize resources, but it studies the scheduling delay insufficiently, without considering the impact of different link capacities on the scheduling delay, which consequently limits the optimization effect.

3. Popularity Prediction

In this section, we take advantage of a state-of-the-art LSTM model to predict the popularity of live channels, which is the first step of the optimizer, called COLS-P, as shown in Figure 2.

For time series data, recurrent neural networks (RNNs) have been widely used to capture temporal correlations and continuity constraints. As a variant of the RNN, the long short-term memory (LSTM) model solves the long-term dependence problem of general RNNs and enables the network to learn long-term dependences of time series by selectively memorizing their characteristic information. At each time step of the input time series, LSTM applies the following operations:

$$\begin{aligned}
f_t &= \sigma\left(W_f x_t + U_f h_{t-1} + b_f\right),\\
i_t &= \sigma\left(W_i x_t + U_i h_{t-1} + b_i\right),\\
o_t &= \sigma\left(W_o x_t + U_o h_{t-1} + b_o\right),\\
c_t &= f_t \circ c_{t-1} + i_t \circ \tanh\left(W_c x_t + U_c h_{t-1} + b_c\right),\\
h_t &= o_t \circ \tanh\left(c_t\right),
\end{aligned} \tag{1}$$

where the operator $\circ$ denotes the Hadamard product (element-wise product) and the subscript $t$ denotes the time step. $x_t$ represents the input at time $t$, and $h_t$ represents the hidden state. $f_t$, $i_t$, and $o_t$ represent the forget, input, and output gates, respectively, and $c_t$ denotes the memory cell state. $W$, $U$, and $b$ are the weight matrices and biases which need to be learned during training, and $\sigma$ indicates the activation function.
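For concreteness, the following minimal NumPy sketch applies one LSTM step following Equation (1); the dict-based parameter layout and names are our own illustration, not a particular library's API.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    # Gates from Equation (1); W, U, b are dicts keyed by gate name.
    f_t = sigmoid(W["f"] @ x_t + U["f"] @ h_prev + b["f"])     # forget gate
    i_t = sigmoid(W["i"] @ x_t + U["i"] @ h_prev + b["i"])     # input gate
    o_t = sigmoid(W["o"] @ x_t + U["o"] @ h_prev + b["o"])     # output gate
    c_cand = np.tanh(W["c"] @ x_t + U["c"] @ h_prev + b["c"])  # candidate state
    c_t = f_t * c_prev + i_t * c_cand  # '*' is the Hadamard product
    h_t = o_t * np.tanh(c_t)           # new hidden state
    return h_t, c_t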

The consecutive historical popularity data can be regarded as time series data. Intuitively, we employ the state-of-the-art LSTM model to predict the live channel popularity based on historical data. Specifically, Figure 3 shows the popularity of one live channel on one MEC server over 300 consecutive hours, and the records in the second red box (the 100-150 hour period) are further detailed in Figure 4, which presents the popularity changes within a single day. We observe from these figures that the time data have an impact on the channel popularity:
(i) Weekday impact. The data in the two red boxes of Figure 3 correspond to the channel popularity on Sunday and Wednesday, respectively. As presented in Figure 3, the popularity on Sunday is clearly higher than that on Wednesday, so the weekday can serve as valid supplementary information for accurate prediction.
(ii) Hour impact. As shown in Figure 4, we can easily distinguish the popularity changes in the third black box from the first two via their trends. However, the first two are hard to distinguish from each other by trend alone. Fortunately, as observed in Figure 4, the different periods of the day (different hours) can be an efficient indicator to resolve this confusion.

Therefore, different from previous works which only use the historical popularity data for training, we also consider the influence of time data (weekday and hour impacts) and utilize multiscale input data for training to achieve more accurate prediction.

We train the LSTM network for each channel individually. The raw data consists of consecutive tuples which contain the hour, weekday, and popularity record (i.e., the number of user requests) for each hour. The original data is processed with a sliding window, in which consecutive tuples are treated as the input sequence, and the popularity in the next hour is set as the label. As the popularity prediction in this paper is a regression task, the loss function of the network is set to the mean square error (MSE), the most common and widely used loss function for regression tasks.
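As an illustration of this training procedure, the PyTorch sketch below builds sliding-window samples from (hour, weekday, popularity) tuples and trains a one-layer LSTM with the MSE loss. The window length, hidden size, epoch count, and the synthetic history series are assumed values, not the paper's settings.

import torch
import torch.nn as nn

WINDOW = 24  # sliding-window length in hours (assumed)

class PopularityLSTM(nn.Module):
    def __init__(self, in_dim=3, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # regress next-hour popularity

    def forward(self, x):                  # x: (batch, WINDOW, 3)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :]).squeeze(-1)

def make_windows(tuples):
    # Each WINDOW-long run of tuples is an input; the next hour's
    # popularity (field index 2) is the label.
    xs = [tuples[t:t + WINDOW] for t in range(len(tuples) - WINDOW)]
    ys = [tuples[t + WINDOW][2] for t in range(len(tuples) - WINDOW)]
    return (torch.tensor(xs, dtype=torch.float32),
            torch.tensor(ys, dtype=torch.float32))

# Synthetic stand-in for one channel's raw hourly records.
history = [(t % 24, (t // 24) % 7, float(100 + t % 24)) for t in range(300)]
x, y = make_windows(history)
model, loss_fn = PopularityLSTM(), nn.MSELoss()   # MSE, as in the paper
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(50):                   # epoch count is illustrative
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()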

When the training is completed, for a channel $n$, we use the historical popularity records of one MEC server to predict its popularity $p_n$ in the next time period. Define $q_n = p_n \cdot r_n$, where $r_n$ is the streaming rate of channel $n$. Then, we sort the live channels by $q_n$ and select the first $K$ channels as the subscribed list of the MEC server.
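A minimal sketch of this selection rule follows; the product form $q_n = p_n \cdot r_n$ and the function names are our reading of the text rather than the paper's exact notation.

def subscribed_list(pred_pop, rate, k):
    # Rank channels by predicted bandwidth demand q_n = p_n * r_n
    # and subscribe to the top-k.
    q = {n: pred_pop[n] * rate[n] for n in pred_pop}
    return sorted(q, key=q.get, reverse=True)[:k]

# e.g., subscribed_list({"ch1": 120, "ch2": 40}, {"ch1": 2e6, "ch2": 8e6}, 1)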

4. Overlay Formulation

After learning from the LSTM network which channels each MEC server should subscribe to, we formulate the overlay model, including the topology model, the cost model, the delay model, and the joint optimization for live streaming.

The major symbols used in this paper are presented in Table 1. We regard the overlay as a directed complete graph $G = (V, E)$, where $V$ is the set of all servers (cloud servers and MEC servers). Let $S$ be the set of sources (cloud servers) and $M$ be the set of subscribers (MEC servers), so $E$ is the set of possible overlay connections and $V = S \cup M$. $N$ is the set of channels. Denote the rate of channel $n$ as $r_n$. Each stream is delivered to its subscribers $M_n$ via a tree. A subscriber receives a stream either from a cloud server, from a MEC server which subscribes to the same channel, or from a helper. There are $|N|$ trees in total, and we denote the tree of channel $n$ as $T_n$.
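To make the notation concrete, one possible in-memory representation of the overlay and its channels is sketched below; the field names are illustrative assumptions, not the paper's data structures.

from dataclasses import dataclass, field

@dataclass
class Channel:
    source: str                                  # s_n: the unique source server
    rate: float                                  # r_n: streaming rate
    subs: set = field(default_factory=set)       # M_n: subscribing MEC servers
    tree: dict = field(default_factory=dict)     # T_n as child -> parent edges

@dataclass
class Overlay:
    sources: set                                 # S: cloud servers
    subscribers: set                             # M: MEC servers
    prop_delay: dict                             # t_ij of each link (i, j)
    capacity: dict = field(default_factory=dict) # c_ij of each used link

    @property
    def servers(self):                           # V = S U M
        return self.sources | self.subscribers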

4.1. Topology Model

Equation (2) defines a binary variable $x_{ij}^n$ which indicates whether link $(i, j)$ is used in tree $T_n$:

$$x_{ij}^n = \begin{cases} 1, & \text{if link } (i, j) \text{ is used in tree } T_n, \\ 0, & \text{otherwise.} \end{cases} \tag{2}$$

All $x_{ij}^n$ are combined into a vector solution $\mathbf{x}$.

Equations (4) and (5) guarantee that each channel tree is connected and that there is no isolated server: every subscriber of channel $n$ has exactly one parent in $T_n$, while any other server (a potential helper) has at most one. There is also no loop in each tree, as enforced by the subtour elimination constraint in Equation (6):

$$\sum_{(i, j) \in E(U)} x_{ij}^n \le |U| - 1, \quad \forall U \subseteq V,\ n \in N, \tag{6}$$

where $M_n$ (with cardinality $|M_n|$) is the set of MEC servers which subscribe to channel $n$, $s_n$ is the source server of channel $n$, $U$ is a subset of servers, and $E(U)$ denotes the set of links connecting servers in $U$.

4.2. Cost Model

Denote the capacity of link $(i, j)$ as $c_{ij}$. Like $\mathbf{x}$, all $c_{ij}$ are combined into a vector $\mathbf{c}$. The cost of link $(i, j)$ is $\alpha_{ij} c_{ij}$, a linear function of $c_{ij}$. The total cost is the sum of all link capacity costs, i.e.,

$$C(\mathbf{c}) = \sum_{(i, j) \in E} \alpha_{ij} c_{ij}, \tag{7}$$

where $\alpha_{ij}$ is a constant coefficient.

4.3. Delay Model

We employ a sequential scheduling model in which a parent node transmits packets into one link after another sequentially [22]. Denote the worst-case scheduling delay of server $i$ as $d_i$, which is the maximum amount of time that a packet has to wait until it is transmitted out completely (for a given packet, its queuing delay is the sum of the other packets' transmission delays):

$$d_i = \sum_{j \in K_i} \frac{L}{c_{ij}}. \tag{8}$$

To avoid congestion, $d_i$ should be smaller than the time interval $L / r_i^{\max}$ between two sequential packets, where $L$ is the packet size [22]. Therefore, we set the following congestion constraint:

$$d_i \le \frac{L}{r_i^{\max}}, \quad \forall i \in V, \tag{9}$$

where $K_i$ is the set of children (with repetition) of server $i$ in all channel trees, $N_i$ is the set of channels that server $i$ streams out, and $r_i^{\max} = \max_{n \in N_i} r_n$ is the maximum streaming rate of these channels ($N_i$ may differ from the subscribed list of server $i$, since server $i$ can be the leaf of a tree, in which case it does not stream that channel out, or it may act as a helper relaying an unsubscribed stream).

We illustrate an example of the scheduling delay in Figure 5. Server $A$ is the parent node, while servers $B$ and $C$ are its children; $c_a$ and $c_b$ are the capacities of link $a$ (from $A$ to $B$) and link $b$ (from $A$ to $C$), respectively. Channel streams 1 and 2 are pushed from server $A$: server $B$ subscribes to both channels, and server $C$ subscribes to one. Hence, the child set of server $A$ is $K_A = \{B, B, C\}$ ($B$ is counted repeatedly). In a scheduling period, server $A$ transmits one packet into each tree edge, i.e., three packets in total. The worst-case scheduling delay is the maximum amount of time that the third packet has to wait until it is transmitted out completely, i.e., $d_A = 2L / c_a + L / c_b$. Denote the rates of channels 1 and 2 as $r_1$ and $r_2$, respectively; then we have $r_A^{\max} = \max(r_1, r_2)$ and the constraint $d_A \le L / r_A^{\max}$.
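The following sketch reproduces this example numerically under Equation (8); all capacities and the packet size are assumed values, not the paper's.

def scheduling_delay(children_links, capacity, L):
    # Worst-case scheduling delay (Equation (8)): one packet of size L is
    # pushed into each (link, channel) pair per period, so a link carrying
    # two channels is counted twice.
    return sum(L / capacity[link] for link in children_links)

L = 12000.0                        # packet size in bits (assumed)
capacity = {"a": 4e6, "b": 2e6}    # link capacities in bit/s (assumed)
children = ["a", "a", "b"]         # link a carries two channels
d_A = scheduling_delay(children, capacity, L)  # = 2L/c_a + L/c_b = 0.012 s
ok = d_A <= L / 2e6                # congestion check against L / r_max
# ok is False here: the capacities are too small, so COLS-C would enlarge them.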

Equation (10) shows the source-to-end delay constraint. Denote the propagation delay of link $(i, j)$ as $t_{ij}$ and the source-to-end delay of server $v$ in tree $T_n$ as $D_v^n$. $D_v^n$ is the sum of the source-to-end delay of its parent $u$ in tree $T_n$, the scheduling delay of $u$, and the propagation delay of link $(u, v)$. To ensure quality of service, the source-to-end delay of each node is bounded by a constant value $D^{\max}$:

$$D_v^n = D_u^n + d_u + t_{uv} \le D^{\max}, \quad \forall v \in T_n,\ n \in N. \tag{10}$$
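A short sketch of this recursion is given below, computing the source-to-end delay of every node from the child-to-parent map of one tree; the toy delays are assumptions.

def source_to_end_delays(parent, sched, prop, source):
    # Unroll Equation (10): D_v = D_u + d_u + t_uv, where u is v's parent.
    # 'parent' maps child -> parent; the source has delay 0.
    D = {source: 0.0}
    pending = dict(parent)
    while pending:
        for v, u in list(pending.items()):
            if u in D:                         # parent already resolved
                D[v] = D[u] + sched[u] + prop[(u, v)]
                del pending[v]
    return D

# Toy tree s -> a -> b with assumed scheduling and propagation delays (s).
D = source_to_end_delays({"a": "s", "b": "a"},
                         {"s": 0.004, "a": 0.006},
                         {("s", "a"): 0.02, ("a", "b"): 0.03}, "s")
# Feasible iff max(D.values()) <= the bound D_max (e.g., 0.5 s).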

4.4. Joint Optimization Model

Combining the above models, we formulate our cost optimization problem as follows:

$$\min_{\mathbf{x},\,\mathbf{c}} \ \sum_{(i, j) \in E} \alpha_{ij} c_{ij} \quad \text{s.t. constraints (3)-(6), (8), (9), and (10)}.$$

Our goal is to find overlay trees among the MEC servers for each live channel (optimizing $\mathbf{x}$) and to allocate the capacity (optimizing $\mathbf{c}$) so as to minimize the cost while the delay stays under the bound. However, this is a mixed integer nonlinear program, which is NP-hard [23]. Besides, from Equations (8) and (10) and Figure 5, we find that the source-to-end delay of a server can be affected by the capacity of a link that is not even incident to it, which means there are correlations among different link capacities. Moreover, even when a tree is completely constructed, its scheduling delays are not yet determined, since they are affected by trees constructed later. These factors make the problem difficult to solve directly, so we divide the original problem into two subproblems and solve them sequentially in Section 5.

5. Algorithm Design

To simplify the problem, we divide the original problem into two subproblems: topology generation (COLS-T) and capacity allocation (COLS-C). In COLS-T, we ignore the scheduling delay and the congestion constraints and construct an overlay that meets the delay bound. In COLS-C, we reallocate the capacity of each link so as to reach a lower cost based on the obtained topology.

5.1. Topology Generation (COLS-T)

In this step, we ignore the scheduling delay and the congestion constraint in Equation (9), focusing on the propagation delay to construct the trees. The problem is thus transformed into finding a series of hop-constrained Steiner minimum trees [24]. We use a greedy scheme to solve it in polynomial time.

There are $|N|$ channel trees in total. Each tree is initialized with only its source cloud server. The initial capacity of a tree is its channel rate (in COLS-C, we will reallocate the capacity). We expand the tree from the source server toward the subscribing servers by adding one server to the partially constructed tree in each iteration. We define a metric called the marginal unit cost (MUC) to determine which server is added. A server $j$ can join the tree in two ways (see the code sketch after Algorithm 1): (i) server $j$ is directly connected via a server $i$ already in the tree, and the MUC is the marginal capacity cost of the new link $(i, j)$; (ii) server $j$ is connected through a potential helper $h$ via server $i$, and the MUC accounts for the marginal cost of both links $(i, h)$ and $(h, j)$, discounted by the concurrent throughput already carried on a shared link.

COLS-T is outlined in Algorithm 1. In each iteration, we select the link which incurs the smallest MUC and connect the corresponding server to tree $T_n$. Then, we update the overlay parameters and continue with a new iteration until all subscribing servers are connected. Finally, we combine all Steiner trees into a mesh.

Algorithm 1: COLS-T topology generation.
Input: server set V; channel set N with sources and subscriber lists; per-unit link costs
Output: channel trees T_n, combined into a mesh
initialize each tree T_n with its source server s_n;
while some subscriber is not yet connected to its tree do
    foreach tree T_n, server i in T_n, and server j not in T_n do
        if j subscribes to channel n then
            compute the direct-connection MUC(i, j);
        else
            compute the MUC of reaching a remaining subscriber through helper j;
        end
    end
    select the candidate with the smallest MUC;
    if the candidate joins through a helper h then
        add helper h and node j into T_n via node i;
        update the capacities of links (i, h) and (h, j);
    else
        add node j into T_n via node i;
        update the capacity of link (i, j);
    end
    update the overlay parameters;
end
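The sketch below renders the greedy loop of Algorithm 1 in Python under a simplifying assumption: the MUC of a candidate is priced as the per-unit link cost times the channel rate (summed over both links in the helper case), without the concurrent-throughput discount of the paper's MUC definitions, which we do not reproduce here.

import math

def cols_t(channels, servers, alpha):
    # channels: n -> (source, rate, subscriber set);
    # alpha: per-unit capacity price of each directed link.
    trees = {}
    for n, (src, rate, subs) in channels.items():
        tree = {src: None}                 # node -> parent in T_n
        remaining = set(subs) - {src}
        while remaining:
            best, best_muc = None, math.inf
            for i in tree:                 # node already in the tree
                for j in remaining:
                    muc = alpha[(i, j)] * rate            # direct join
                    if muc < best_muc:
                        best, best_muc = (i, None, j), muc
                    for h in servers - tree.keys() - remaining:
                        muc = (alpha[(i, h)] + alpha[(h, j)]) * rate  # via helper
                        if muc < best_muc:
                            best, best_muc = (i, h, j), muc
            i, h, j = best
            if h is not None:
                tree[h] = i                # the helper joins first
            tree[j] = h if h is not None else i
            remaining.remove(j)
        trees[n] = tree
    return trees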
5.2. Capacity Allocation (COLS-C)

To achieve efficient capacity allocation, we take the scheduling delay into consideration based on the overlay given by COLS-T, and we reallocate the capacity of each link to make the most of the limited capacity and achieve a lower cost.

To achieve the optimal allocation, we first prove that the capacity allocation is a convex problem. Then, we take advantage of a classical optimization algorithm, sequential least squares quadratic programming (SLSQP), which is widely used to solve convex optimization problems.

Considering that the overlay topology has been constructed, $\mathbf{x}$ is constant, i.e., Equations (3), (4), (5), and (6) are always satisfied, so we can omit them. In this way, our objective is to find a vector $\mathbf{c}$ so that (7) attains its minimum value subject to constraints (9) and (10).

We prove that COLS-C is a convex problem as follows:
(i) Objective function (7) is a sum of linear (hence convex) functions, so (7) is obviously convex.
(ii) Scheduling delay constraint (9) can be rewritten as

$$g_i(\mathbf{c}) = \sum_{j \in K_i} \frac{L}{c_{ij}} - \frac{L}{r_i^{\max}} \le 0,$$

where $K_i$ is the children set of server $i$ and $L / r_i^{\max}$ is a constant. The second-order derivative of $g_i$ fulfills the condition of being convex:

$$\frac{\partial^2 g_i}{\partial c_{ij}^2} = \frac{2L}{c_{ij}^3} > 0.$$

Hence, $g_i$ is a convex function.
(iii) Total delay constraint (10) is similar to (9). It can be rewritten as

$$h_v(\mathbf{c}) = \sum_{u \in A_v} \sum_{j \in K_u} \frac{L}{c_{uj}} + P_v - D^{\max} \le 0,$$

where $A_v$ is the ancestor set of server $v$ (from the source $s_n$ of channel $n$ down to $v$), $P_v$ is a constant denoting the aggregated source-to-end propagation delay of server $v$, and $D^{\max}$ is the delay upper bound. $h_v$ is also a convex function, since it is a sum of the convex terms $L / c_{uj}$ plus a constant.

As demonstrated above, objective function (7) and constraints (9) and (10) are all convex, so the capacity allocation is a convex problem. We then utilize the SLSQP algorithm (which can be called directly from the SciPy library) to seek the minimum of objective function (7), the sum of all link capacity costs, while satisfying the delay constraints (9) and (10).
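A toy COLS-C instance showing how objective (7) and constraints (9) and (10) plug into SciPy's SLSQP solver is given below; every number (prices, rates, delays, bounds) is an illustrative assumption.

import numpy as np
from scipy.optimize import minimize

L = 12000.0                    # packet size (bits)
alpha = np.array([1.0, 1.5])   # per-unit capacity prices (objective (7))
r_max = 2e6                    # fastest channel streamed out (bit/s)
out_links = [0, 0, 1]          # link 0 carries two channels, link 1 one
D_MAX, prop = 0.5, 0.05        # delay bound and propagation delay (s)

def cost(c):                   # objective (7): linear in c
    return alpha @ c

def sched_delay(c):            # Equation (8)
    return sum(L / c[l] for l in out_links)

cons = [
    {"type": "ineq", "fun": lambda c: L / r_max - sched_delay(c)},     # (9)
    {"type": "ineq", "fun": lambda c: D_MAX - prop - sched_delay(c)},  # (10)
]
res = minimize(cost, x0=np.array([5e6, 5e6]), method="SLSQP",
               bounds=[(1e3, None), (1e3, None)], constraints=cons)
print(res.x, res.fun)          # optimized capacities and minimum cost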

5.3. Computational Complexity Analysis

The complexity of COLS is analyzed as follows. In the COLS-T step, one server is included into one tree in each round, and there are $O(|N| \cdot |V|)$ rounds in total. In each round, there are $O(|V|^2)$ candidate links to be calculated, and each link costs $O(|V|)$ time to select the helper; hence, adding a node in each round costs $O(|V|^3)$, and all rounds cost $O(|N| \cdot |V|^4)$.

In the COLS-C step, we use SLSQP to solve the convex problem. It uses the Han-Powell quasi-Newton method with a BFGS update of the B-matrix and an L1-test function in the step-length algorithm. It has an overall time complexity of $O(n^3)$, where $n$ is the number of variables [25-27]. There are $O(|V|^2)$ link capacity variables, so the time complexity is $O(|V|^6)$. In summary, the overall complexity of COLS is $O(|N| \cdot |V|^4 + |V|^6)$.

6. Illustrative Experiment Results

To evaluate the performance of COLS, we conducted extensive experiments in two aspects: popularity prediction and mathematical optimization. We present the detailed experiment settings and comparison schemes in Section 6.1 and illustrate the results in Section 6.2.

6.1. Simulation Setup

We compare COLS with the following state-of-the-art schemes:
(i) DEEPCACHE [19] builds an LSTM encoder-decoder model to predict the popularity of content. Its training data is one-dimensional, containing only historical playback records.
(ii) CCAS [15]. Each MEC server connects to the cloud server directly, and there is no collaboration. A MEC server may give up delivering some channels to save link capacity. The link capacity is a constant in CCAS, and the scheduling delay is not considered. To meet the delay constraint, we adapt CCAS by adding a capacity allocation step that increases the capacity iteratively by a fixed proportion until the constraint is met.
(iii) BSUM [12] constructs an overlay with MEC collaboration. It considers the scheduling delay insufficiently, ignoring the correlation between different link capacities, so the real scheduling delay is higher. Besides, as in CCAS, the link capacity is constant: BSUM only optimizes the topology and does not optimize the capacity. We add the same capacity allocation step as for CCAS.

The dataset used for the experimental evaluation comes from real scenes and is provided by a telecom operator. Since the source raw data is more than 2 TB in size and includes 931,964 files (live streaming playback records) [28], we randomly choose a certain number of records from several consecutive periods of time to construct the evaluation dataset. The detailed baseline parameters are shown in Table 2.

Specifically, the experiments are carried out in two parts:
(i) Popularity prediction. We compare COLS with DEEPCACHE. We randomly select 10 sequences as input to the designed model and output the corresponding predicted popularity. The mean square error is used to evaluate the prediction performance.
(ii) Cost optimization. To compare COLS with CCAS and BSUM, we randomly select some MEC servers from the raw data and generate the round-trip time (RTT, which denotes the propagation delay) matrix among servers. For the selected servers, we use the aforementioned LSTM network to predict the popularity and obtain the subscribed list of each MEC server. To ensure fair results, the channels that the MEC servers subscribe to in CCAS and BSUM remain the same as in COLS. Each scheme is evaluated 20 times, and the average result is reported. To evaluate the performance of each scheme, we focus on the cost metric, which is the sum of all link capacity costs.

6.2. Simulation Results

To evaluate the performance of the proposed approach, we conducted extensive experiments. Figures 6 and 7 demonstrate COLS's advantages (MEC collaboration, scheduling delay consideration, and capacity convex programming), which make it outperform the other schemes. Figure 6 illustrates the component proportions of the maximum server delay before capacity allocation. CCAS does not consider the scheduling delay, and BSUM considers it insufficiently; both omit part of the scheduling delay from their calculations, so their theoretical delays are lower than the bound while their real delays exceed it. To meet the delay constraint (500 ms in this paper), they need to allocate more capacity to reduce the scheduling delay, which increases the cost. As mentioned above, CCAS makes the MEC servers connect to the cloud server directly, leading to a higher scheduling delay and a lower propagation delay. In contrast, COLS and BSUM employ collaboration, and thus their scheduling delays are lower.

Figure 7 depicts the maximum server delays after capacity allocation. CCAS reduces the scheduling delay the most and incurs the highest capacity cost. BSUM uses a simple allocation algorithm, and its link capacity is redundant after the allocation; hence, its scheduling delay is lower than that of COLS, as shown in Figure 7. Compared with the other schemes, COLS uses convex programming to allocate capacities, which is more efficient: it makes the capacities less redundant and minimizes the cost while the delay stays under the bound (500 ms).

Figure 8 shows the cost versus the channel number. The results demonstrate that COLS outperforms the two competing schemes; its cost is only around half of BSUM's and one-third of CCAS's when the channel number reaches 6. CCAS gives up delivering some live channels, i.e., users receive those live streams from the cloud server directly, which increases the cloud server's scheduling delay but saves link capacity. When the channel number is low, the overlay topology is simple and few links connect to the cloud server, leaving more headroom for the scheduling delay. However, as the channel number increases, the topology becomes complicated and more links connect to the cloud server, so CCAS needs more capacity to reduce the scheduling delay; this capacity cost exceeds the cost saved by giving up live channels. In contrast, COLS has the lowest cost in this case because of MEC collaboration, scheduling delay consideration, and convex programming. It is more suitable for multichannel overlays, which are more common in reality, i.e., COLS is more practicable than the other schemes.

Figure 9 presents the prediction comparison between COLS and DEEPCACHE. The mean prediction error of our method is lower than that of DEEPCACHE. This is because COLS adds more information (hour and weekday) to predict the popularity, making it more accurate.

Figure 10 illustrates the cost versus the server number. It shows that the cost of COLS is the lowest (outperforming the others by at least 40%) and increases more slowly than those of the competing schemes. The reason is that COLS considers MEC collaboration, the scheduling delay, and efficient capacity allocation. In CCAS, all MEC servers connect to the cloud server directly; as analyzed in Section 5, the cloud server then carries too many links, leading to a high scheduling delay. To reduce the scheduling delay, CCAS has to increase the link capacity and thus incurs the highest cost. BSUM considers the scheduling delay insufficiently, so its real delay exceeds the bound and it has to allocate more capacity to meet the constraint; its allocation algorithm leaves redundant capacity, causing a higher cost.

We plot in Figure 11 the cost versus the delay constraint $D^{\max}$ with 50 servers. When $D^{\max}$ decreases, the topology graphs constructed by COLS and BSUM gradually become similar to the graph constructed by CCAS, i.e., MEC servers connect to the cloud server directly and there is no MEC collaboration. The reason is that with collaboration the propagation delay accumulates and would exceed a small constraint. CCAS, therefore, can give up some live channels to obtain a lower cost. When $D^{\max}$ increases, the costs of all three schemes decrease; COLS exploits collaboration and convex programming to reduce the cost more quickly than the others. In other words, COLS achieves large cost reductions by sacrificing a small amount of delay.

7. Conclusion

In this paper, we study the cost optimization of an overlay MEC network for live streaming. The proposed network has a more realistic delay and topology model, in which MEC servers collaborate with each other to deliver streams. We formulate the problem and propose a framework called COLS, which predicts the popularity with an LSTM model and solves the optimization problem by a greedy scheme and convex programming. Simulation results show that COLS achieves higher prediction accuracy and reduces the capacity cost by at least 40% compared with state-of-the-art schemes.

Data Availability

The source data used to support the findings of this study are currently under embargo while the research findings are commercialized. Requests for data, 6 months after publication of this article, will be considered by the corresponding author.

Disclosure

The abstract was presented at the 8th International Conference on Digital Home (ICDH) 2020.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by Guangxi Innovation Driven Development Special Fund Project (AA18118039).