Wireless Communications and Mobile Computing

Special Issue: Machine Learning in Mobile Computing: Methods and Applications

Research Article | Open Access

Volume 2021 | Article ID 9932977 | https://doi.org/10.1155/2021/9932977

Ying Yu, "Application of Mobile Edge Computing Technology in Civil Aviation Express Marketing", Wireless Communications and Mobile Computing, vol. 2021, Article ID 9932977, 11 pages, 2021. https://doi.org/10.1155/2021/9932977

Application of Mobile Edge Computing Technology in Civil Aviation Express Marketing

Academic Editor: Wenqing Wu
Received: 16 Mar 2021
Revised: 06 May 2021
Accepted: 18 May 2021
Published: 01 Jun 2021

Abstract

With the popularization of mobile terminals and the rapid development of mobile communication technology, many services that used to run on PCs place high demands on data processing and storage. Conventional cloud computing, which transfers data processing tasks to a remote cloud, cannot meet users' needs for low latency and high-quality service. In view of this, researchers have proposed the concept of mobile edge computing. Mobile edge computing (MEC) is based on the 5G evolution architecture: by deploying multiple service servers on the base station side, near the edge of the user's mobile core network, it provides nearby computing and processing services for user business. This article aims to use caching and MEC processing functions to design an effective caching and distribution mechanism across the network edge and to apply it to civil aviation express marketing. This paper focuses on mobile edge computing technology, combining it with data warehouse technology, clustering algorithms, and other methods to build an experimental model of an MEC-based caching mechanism applied to civil aviation express marketing. The experimental results show that, for a fixed cache space and number of service contents, the proposed LECC mechanism outperforms LENC, LRU, and RR among the five caching mechanisms considered in terms of cache hit rate, average content transmission delay, and transmission overhead. For example, with the same cache space, the ATC under the LECC mechanism is about 4%~9%, 8%~13%, and 18%~22% lower than that of LENC, LRU, and RR, respectively.

1. Introduction

In recent years, humans' ability to use computer technology to generate and collect information has improved greatly. Large-scale database systems have been widely used in the management of research companies, and their development momentum is very strong. This raises a new question for managers: how can we effectively manage and apply such a large amount of data? In particular, in today's highly competitive society, putting data to effective use has been placed on the agenda.

With the in-depth development of domestic and foreign markets, the direction of civil aviation express marketing has gradually shifted from "product driven" to "market driven" and "customer driven," which requires civil aviation express to adopt a marketing strategy centered on market demand. After gradually realizing the role of historical data resources in improving enterprise competitiveness and economic benefits, these companies have tried to obtain their own sales data and the sales data of competing companies to inform sales decisions, adjust sales strategies in a timely manner, and improve revenue. How can useful data be obtained faster to help companies analyze the actual needs of customers? By providing a variety of layered and personalized service solutions, the civil aviation industry must give priority to increasing sales revenue and profits while improving customer satisfaction and loyalty. In the marketing process, in order to make accurate and timely business decisions, relevant information must be fully obtained and used to assist the decision-making process.

Mobile edge computing can provide cloud and IT services near mobile subscribers and allows cloud servers to be used inside or near base stations. Using the MEC platform can therefore reduce the terminal delay perceived by mobile users; application developers can use real-time radio access network information from MEC to provide context-aware services; and MEC can support the execution of computationally intensive applications. Yu introduced the MEC platform's architecture, described its key functions, surveyed the relevant latest research results, and discussed open MEC research challenges [1]. The downside is that the article's detailed analysis stays at the level of the technology itself without extending it to applications. Tran and Pompili consider an MEC-enabled multicell wireless network in which each base station (BS) is equipped with an MEC server that can help mobile users perform computationally intensive tasks through task offloading. To maximize users' task-offloading revenue, they study the problem of joint task offloading and resource allocation [2]. However, the problem involves joint optimization of task-offloading decisions, and due to its combinatorial nature, finding the optimal solution for large-scale networks is difficult and impractical. Ahmed and Rehmani introduced and discussed researchers' definitions of MEC, the opportunities that MEC brings, and the research challenges in the MEC environment, and highlighted the motivation for MEC by discussing various applications [3]. However, their discussion is entirely theoretical, not combined with practice, and rather vague.

The innovations of this article are as follows: (1) big data technology is introduced into mobile edge caching; by bringing cloud computing and cloud storage closer to the edge of the network, the network can better cope with the impact of rapidly growing data traffic and provide high-quality network service and user experience. (2) This paper proposes a collaborative caching mechanism based on machine learning under the distributed MEC service system. The mechanism organizes the local caches of several MEC servers into a cooperative cache domain and uses a transfer learning method to predict the popularity of content. Based on this prediction, an MEC content cache model with the optimization goal of minimizing the average content transmission cost is established.

2. Application Methods in Civil Aviation Express Marketing

2.1. Mobile Edge Computing Technology

As a new architecture to improve the efficiency of computation offloading, mobile edge computing provides computing resources in the access network close to smart mobile devices [4, 5]. Mobile edge computing extends cloud computing toward the network edge: quasi-cloud computing resources are located close to the local network where the data source resides, and data is kept from being sent to the distant cloud as much as possible, so as to reduce transmission delay and network bandwidth costs. Currently, mobile edge computing is widely used in various fields. In the communications field, network operators combine telecommunications network services with edge computing and provide services to users by deploying MEC servers at access points (APs) or base stations (BSs).

Mobile edge computing allows mobile terminals to transfer computing tasks to network edge nodes, such as wireless access points and base stations. The basic framework of the mobile edge computing system is shown in Figure 1. Its design both meets the need of mobile terminals to expand their computing capability and compensates for the long transmission delay of cloud platforms [6].

It can be seen from Figure 1 that on the MEC platform, the edge computing server is installed on the side of the radio access network, which greatly shortens the distance between it and the user equipment. In traditional cloud computing, data must pass through the radio access network and the core network to reach the remote cloud server; after the cloud server processes the data, it returns the result to the user over an equally long link [7], so processing a task on the cloud platform takes considerable time. Because of the shorter transmission distance, MEC traffic no longer needs to traverse long mobile links and the core network, thereby reducing delay costs. On the other hand, because the processing power of the edge server is much higher than that of the mobile device, the processing time of the task is greatly reduced.

MEC technology and hierarchical microcloud (cloudlet) technology, both of which provide services such as computing and caching for users within their access range, are considered natural evolutions of mobile cloud technology. The MEC server can provide cloud services to mobile users within the coverage of its Wi-Fi access points. Applications deployed on terminal devices have ever-increasing demands for computing and storage resources, so offloading workloads from mobile terminals to the cloud has become the most effective solution to the insufficient performance of terminal devices [8]. The MEC server brings traditional core-cloud-centric computing closer to the edge of the network on the user side, accelerating content, services, and applications and thereby improving responsiveness from the edge. While providing users with a highly distributed computing environment, the MEC server can also perform special tasks that cannot be completed in the traditional Internet architecture, such as real-time big data analysis, perception performance optimization, and distributed cloud computing. For many emerging applications, such as intensive video coding and local streaming services, mobile operators can use MEC servers together with core cloud systems. When the local MEC server cannot meet users' needs, or its computing power is insufficient to support computation-intensive tasks offloaded by mobile terminals, the MEC server can request computing resources from the core cloud and complete the work collaboratively [9, 10]. This cooperation between the MEC server and the core cloud data center can make up for the core cloud's long distance to users, long transmission delay, and the small coverage of cloud Wi-Fi, allowing seamless task offloading and execution.

At present, MEC has been recognized by the 5G PPP organization as one of the three key emerging technologies for 5G networks, alongside network function virtualization (NFV) and software-defined networking (SDN).

2.2. Data Warehouse Technology

A data warehouse is a data processing architecture, built on relational database technology, that solves the problem of data integration. The currently accepted concept is the definition given by the originator of the data warehouse: a data warehouse is a subject-oriented, integrated, time-variant, and nonvolatile collection of data that supports the decision-making process [11]. Although domestic and foreign experts have no unified definition of the data warehouse, their views all point out that a data warehouse is subject-oriented, integrated, relatively stable, and historical.

2.2.1. Theme

Each theme corresponds to an analysis question raised by managers and is an analysis field at the macro level. A theme is the abstract concept that aggregates and categorizes data in that macro field for mining, analysis, and utilization [12]. Since the data in the data warehouse is subject-oriented, organizing the data involves two steps: determining the themes and determining the data content to be included in each theme. In a data warehouse, each theme corresponds to a relation, that is, a data table, so a theme can be expressed as a data table in a relational database. Subject-oriented data organization has two characteristics: independence and completeness.

2.2.2. Granularity

Granularity refers to the level of detail or summarization of the data held in the data warehouse and is a very important concept. The more detailed the data, the lower the granularity level; conversely, the more summarized the data, the higher the granularity level. The data in a data warehouse usually exists at several granularity levels, from highly detailed records to highly summarized aggregates. The choice of granularity directly affects the amount of data stored in the warehouse and the data tables and queries available to the analysis process [13], as shown in Figure 2.

The data granularity of a data warehouse is related to the level at which data is collected and organized under a specific theme. Granularity design must satisfy certain principles so that query and analysis requirements at all levels can be met. The granularity design principles are as follows: an optimized storage structure, high query performance, space saving, and the elimination of data loss in the existing structure.

2.2.3. Metadata

Metadata is "data about data," used to describe and identify the source of data elements, the activities of data elements as they move through the data warehouse, and the operations performed on the data (input, calculation, and output). To organize a data warehouse effectively, metadata with clear descriptive capability and rich content must be designed.

2.2.4. Data Segmentation

Data segmentation refers to reasonably separating all data into small physical units suitable for independent storage and management [14]. The main purpose of data segmentation is to improve the efficiency of organization and data analysis. There are many standards for data segmentation, such as date, characteristics, region, and business area, or a combination of two or more of these. Under normal circumstances, the date item should be regarded as the basic segmentation standard, since it segments the data naturally according to people's understanding and produces even segments. Data segmentation can be divided into two types: system level and application level [15]. System-level segmentation is done by most systems, such as database management systems and operating systems. Application-level segmentation is performed directly by data warehouse administrators or developers, and this level of segmentation is relatively more flexible.
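As a minimal sketch of date-based segmentation, each month of sales records becomes its own independently storable unit. The field names and sample records below are hypothetical, used only to illustrate the partitioning idea:

```python
from collections import defaultdict
from datetime import date

def segment_by_month(records):
    """Partition records into per-month physical units (date-based segmentation)."""
    segments = defaultdict(list)
    for rec in records:
        d = rec["date"]
        segments[(d.year, d.month)].append(rec)  # one unit per (year, month)
    return dict(segments)

# Hypothetical civil aviation express sales records
records = [
    {"date": date(2021, 3, 16), "route": "PEK-SHA", "revenue": 1200},
    {"date": date(2021, 3, 20), "route": "PEK-CAN", "revenue": 950},
    {"date": date(2021, 5, 6),  "route": "SHA-CTU", "revenue": 1100},
]
segments = segment_by_month(records)  # two units: (2021, 3) and (2021, 5)
```

Each resulting unit can then be stored, archived, or queried independently, which is the point of application-level segmentation.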

2.3. Clustering Algorithm

The clustering algorithm is the most studied and widely used algorithm in unsupervised learning. Its basic function is to divide a given data set into multiple subsets, that is, clusters, through a certain algorithm [16, 17]. Clustering is thus a process of dividing the samples in a data set into disjoint subsets. The purpose of the clustering algorithm is to classify a group of data into data clusters that meet the following two conditions: (1) tasks within the same cluster are as similar as possible; (2) the difference between tasks in different clusters is as large as possible.

The most commonly used criteria for measuring the similarity between points in a clustering algorithm are the Euclidean distance and the Mahalanobis distance between two points [18, 19]. In general, for points $x = (x_1, \dots, x_n)$ and $y = (y_1, \dots, y_n)$ in the $n$-dimensional feature space, the Euclidean distance can be expressed as

$$d_E(x, y) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2}.$$

The Mahalanobis distance, with sample covariance matrix $S$, can be expressed as

$$d_M(x, y) = \sqrt{(x - y)^{\mathsf{T}} S^{-1} (x - y)}.$$

The Minkowski distance of order $p$ is a generalization of this family of metrics and can be expressed as

$$d_p(x, y) = \left( \sum_{i=1}^{n} |x_i - y_i|^p \right)^{1/p}.$$

It can be seen that for the Minkowski distance, when $p = 1$ it corresponds to the Manhattan distance, and when $p = 2$ it corresponds to the Euclidean distance.
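These distance measures can be sketched in a few lines of NumPy (a minimal illustration; the variable names and sample points are our own):

```python
import numpy as np

def euclidean(x, y):
    """Straight-line distance in n-dimensional feature space."""
    return float(np.sqrt(np.sum((x - y) ** 2)))

def mahalanobis(x, y, cov):
    """Distance that accounts for feature correlations via a covariance matrix."""
    diff = x - y
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

def minkowski(x, y, p):
    """Order-p generalization: p=1 is Manhattan, p=2 is Euclidean."""
    return float(np.sum(np.abs(x - y) ** p) ** (1.0 / p))

x = np.array([1.0, 2.0])
y = np.array([4.0, 6.0])
d_euc = euclidean(x, y)               # 3-4-5 triangle: distance 5.0
d_man = minkowski(x, y, 1)            # |3| + |4| = 7.0
d_mah = mahalanobis(x, y, np.eye(2))  # identity covariance reduces to Euclidean
```

With an identity covariance matrix the Mahalanobis distance reduces to the Euclidean distance, which is a quick sanity check for an implementation.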

For the clustering problem in the MEC system, we need to determine the similarity metric before classification. Suppose there are currently $n$ tasks, numbered $1, 2, \dots, n$, and the total amount of computation required to complete task $i$ is $c_i$. Assume that the delay sensitivity of the tasks can be divided into $K$ categories, numbered $1, 2, \dots, K$, so that the delay sensitivity of task $i$ can be expressed as $s_i$, where $s_i \in \{1, \dots, K\}$ and $i = 1, \dots, n$. We denote the clusters obtained after classification by $C_1, \dots, C_m$ and hope that the maximum of the standard deviations of all clusters obtained after classification is as small as possible.

Assume that at a certain moment the tasks in the controller are numbered $1, \dots, n$, with corresponding delay sensitivities $s_1, \dots, s_n$. After normalizing the delay sensitivity, the delay sensitivity of task $i$ is $\hat{s}_i$, where

$$\hat{s}_i = \frac{s_i - \min_{j} s_j}{\max_{j} s_j - \min_{j} s_j}.$$

The total amounts of computation required to complete the tasks are $c_1, \dots, c_n$, respectively, and after normalizing the computation amount, the computation amount of task $i$ is $\hat{c}_i$, where

$$\hat{c}_i = \frac{c_i - \min_{j} c_j}{\max_{j} c_j - \min_{j} c_j}.$$

We define the Euclidean distance between tasks $i$ and $j$ in the same cluster as

$$d_{ij} = \sqrt{(\hat{s}_i - \hat{s}_j)^2 + (\hat{c}_i - \hat{c}_j)^2}.$$

The average of the pairwise distances in cluster $C_k$ can be expressed as

$$\bar{d}_k = \frac{2}{|C_k| \, (|C_k| - 1)} \sum_{\substack{i, j \in C_k \\ i < j}} d_{ij}.$$

Define the standard deviation corresponding to the tasks in the same cluster as $\sigma_k$; when cluster $C_k$ contains $|C_k|$ tasks, the corresponding standard deviation is

$$\sigma_k = \sqrt{\frac{2}{|C_k| \, (|C_k| - 1)} \sum_{\substack{i, j \in C_k \\ i < j}} (d_{ij} - \bar{d}_k)^2}.$$

We believe that, within the same cluster, the smaller the standard deviation over all tasks, the higher the similarity of the tasks. Therefore, in the results obtained by clustering, we hope that the maximum of the standard deviations over all clusters is as small as possible. That is, when all tasks are divided into clusters $C_1, \dots, C_m$, we seek

$$\min \; \max_{k = 1, \dots, m} \sigma_k.$$

Assuming that there are a total of $n_k$ tasks in the current $k$-th cluster, the average of the distances of all tasks in this cluster is

$$\bar{d}_k = \frac{2}{n_k (n_k - 1)} \sum_{\substack{i, j \in C_k \\ i < j}} d_{ij},$$

where $n_k = |C_k|$ denotes the number of tasks assigned to cluster $C_k$.

Let the weight vector of output neuron $j$ for the currently selected sample $x$ be $w_j = (w_{j1}, \dots, w_{jn})$, where $w_{ji}$ represents the connection weight between the $i$-th component of the current input sample and output neuron $j$. Therefore, the Euclidean distance between the weight vector and the input can be obtained by the following formula:

$$d_j = \lVert x - w_j \rVert = \sqrt{\sum_{i=1}^{n} (x_i - w_{ji})^2}.$$

Determine the winning neuron $j^*$ by the smallest distance between the output-layer neurons and the task sample $x$:

$$j^* = \arg\min_{j} d_j.$$

Adjust the connection weights: for the winning neuron $j^*$ and all output-layer neurons $j$ in its neighborhood $N_{j^*}(t)$, the connection weight vectors at the $t$-th execution of the algorithm are corrected in accordance with the formula

$$w_j(t + 1) = w_j(t) + \eta(t) \, h_{j, j^*}(t) \, \big( x - w_j(t) \big).$$

The connection weight between an input neuron and an output neuron is likewise a two-dimensional vector, and $j^*$ represents the winning neuron.

Update the learning rate $\eta(t)$ and the neighborhood function $h_{j, j^*}(t)$: in general, both gradually decrease as the number of iterations increases, which can be expressed as

$$\eta(t) = \eta_0 \, e^{-t / \tau_1}, \qquad h_{j, j^*}(t) = \exp\!\left( -\frac{\lVert r_j - r_{j^*} \rVert^2}{2 \, \sigma(t)^2} \right), \quad \sigma(t) = \sigma_0 \, e^{-t / \tau_2},$$

where $r_j$ is the grid position of neuron $j$.

The clustering algorithm serves as a strategy for grouping mobile terminal tasks in the MEC system. For the results obtained after clustering, it is hoped that the maximum distance standard deviation over all clusters is as small as possible [20]. This not only guarantees quality of service and improves the user experience but also reduces the queuing delay of subsequent tasks, which helps reduce system overhead.
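A compact sketch of this grouping step is shown below. It is a deterministic two-cluster k-means-style variant on min-max-normalized delay sensitivity and computation amount, not the paper's actual implementation; the task values and the cluster count are hypothetical:

```python
import numpy as np

def normalize(v):
    """Min-max normalize a 1-D array to [0, 1]."""
    v = np.asarray(v, dtype=float)
    rng = v.max() - v.min()
    return (v - v.min()) / rng if rng > 0 else np.zeros_like(v)

def cluster(points, iters=20):
    """Two-cluster k-means, initialized deterministically from the two
    most extreme points so the result is reproducible."""
    order = points.sum(axis=1)
    centers = np.stack([points[order.argmin()], points[order.argmax()]])
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        # assign each task to the nearest center (Euclidean distance)
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # recompute centers from current assignments
        for j in range(2):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels

# Hypothetical tasks: (delay sensitivity, computation amount)
sens = normalize([1, 1, 2, 5, 5, 4])
comp = normalize([10, 12, 11, 40, 42, 38])
points = np.stack([sens, comp], axis=1)
labels = cluster(points)
```

Light, delay-tolerant tasks land in one cluster and heavy, delay-sensitive tasks in the other, which is exactly the grouping the standard-deviation criterion above rewards.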

3. Application Experiment Based on Mobile Edge Computing Technology in Civil Aviation Express Marketing

3.1. Experimental Program Based on MEC Architecture

In order to verify the effectiveness of designing the collaborative caching mechanism under the MEC architecture around content popularity prediction, this article compares the proposed caching scheme with some known content caching schemes, including the least recently used (LRU) strategy and the randomized replacement (RR) strategy. The LRU replacement strategy estimates future requests from the content users have accessed in the recent past; when the amount of cached data exceeds the threshold, the least recently accessed content is deleted [21, 22]. The RR caching strategy randomly selects data content to cache.
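The LRU policy described above can be sketched with Python's `OrderedDict` (a minimal illustration; the content identifiers are hypothetical):

```python
from collections import OrderedDict

class LRUCache:
    """Least-recently-used cache: on overflow, evict the content
    that has gone longest without being requested."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def request(self, content_id):
        if content_id in self.store:
            self.store.move_to_end(content_id)  # refresh recency
            return True                          # cache hit
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)       # evict least recently used
        self.store[content_id] = True
        return False                             # miss: fetch from remote server

cache = LRUCache(capacity=2)
hits = [cache.request(c) for c in ["a", "b", "a", "c", "b"]]
# "a" hits on its repeat; "b" was evicted when "c" arrived, so it misses again
```

Tracing the request sequence by hand gives one hit out of five, which matches the eviction order the policy prescribes.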

In addition, to illustrate more intuitively the advantages of the transfer-learning-based content popularity prediction scheme proposed in this paper and the improvement in network performance from the MEC collaborative caching mechanism, this research also compares the proposed scheme with the LENC (learning-based noncooperative caching) scheme and the GT (popularity-aware greedy) scheme. Specifically, under the LENC scheme, each MEC server directly caches the content predicted to be most popular by this article's transfer-learning method until its cache space is full; that is, no collaborative caching is performed between MEC servers. The GT scheme assumes that the true content popularity is known in advance and applies the MEC collaborative greedy algorithm proposed in this paper for cache deployment [23]. In effect, the GT algorithm provides the performance upper limit (though not the true performance upper limit) under this article's greedy cooperative caching mechanism for the other four algorithms (LECC, LENC, LRU, and RR).

3.2. Performance Indicators

The comparison is made using three performance indicators, namely, the cache hit rate (HR), the average content delivery latency (ADL), and the average transmission cost (ATC).

3.2.1. Cache Hit Rate (HR)

In the MEC service system, user requests are handled in one of two ways. If the requested content has a backup in the MEC cooperative cache domain, the content is sent to the user directly; we call this a cache hit. If the content requested by the user is not cached in the local cooperative cache domain, the content must be obtained from the remote central server and then pushed to the user; this is called a cache miss [24–26].

3.2.2. Average Content Transmission Delay (ADL)

In the MEC service system, the average delay for users to obtain content is an important performance indicator to measure the quality of user experience. The smaller the average content delivery delay, the more user requests can be satisfied by the local MEC, and the higher the quality of user experience [27, 28].

3.2.3. Average Content Transmission Cost (ATC)

The average content transmission cost (ATC) indicator is the value of the objective function of the optimization problem. In the definition of that objective function, we set the cost of transmitting a unit of data content between MEC servers over a single-hop route to $c_1$, and the cost of transmitting a unit of data content from the remote cache server to an MEC server over a single-hop route to $c_2$. Since the exact values of $c_1$ and $c_2$ cannot be known when simulating real test scenarios, we use simulation to illustrate the impact of their settings on performance. We define the cost factor $\eta$ as the ratio

$$\eta = \frac{c_2}{c_1}.$$
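The three indicators can be computed from a request trace as follows (a minimal sketch; the latency and cost values are illustrative placeholders, not the paper's measured parameters):

```python
def cache_metrics(requests, cached, d_local=20.0, d_remote=100.0,
                  c_local=1.0, c_remote=10.0):
    """Hit rate, average delivery latency, and average transmission cost
    for a request trace served against a fixed cached-content set.
    Hits are served locally (low delay/cost); misses go to the remote server."""
    hits = sum(1 for r in requests if r in cached)
    n = len(requests)
    hr = hits / n
    adl = (hits * d_local + (n - hits) * d_remote) / n
    atc = (hits * c_local + (n - hits) * c_remote) / n
    return hr, adl, atc

# Hypothetical trace: 4 requests, only content "a" is cached locally
hr, adl, atc = cache_metrics(["a", "b", "a", "c"], cached={"a"})
```

With two hits out of four requests, the averages fall exactly halfway between the local and remote values, which illustrates why a higher hit rate simultaneously lowers ADL and ATC.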

3.3. Experimental Parameter Settings

Consider a scenario where video content services dominate mobile Internet applications. The simulation parameters are as follows. There are 3 randomly distributed BSs in the cooperative buffer domain, and the number of mobile users in each cell is 400. Assume that the content provider publishes a total of 800 video files whose popularity obeys the Zipf distribution model with skewness coefficient 0.52; this parameter describes the degree of skewness of the content popularity distribution. For the cooperative cache domain model, we consider 3 MEC servers in a domain, each receiving an average of 9000 content requests per day. The MEC servers in the domain are assumed to have cache spaces of uniform size, and the update period is 3 h. In addition, the transmission delays in the access network and core network follow the 3GPP LTE standard. In the simulation experiments, we study the impact of system parameters on cache performance, including the size of the cache space, the number of service contents, the skewness coefficient of the popularity distribution, and the cost coefficient [29, 30].
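A request workload of this shape can be generated as follows (a minimal sketch; NumPy's `Generator.choice` samples content indices from the normalized Zipf weights):

```python
import numpy as np

def zipf_popularity(n_contents, skew):
    """Normalized Zipf popularity: rank r gets weight proportional to r^(-skew)."""
    ranks = np.arange(1, n_contents + 1)
    weights = ranks ** (-skew)
    return weights / weights.sum()

def sample_requests(n_requests, popularity, seed=0):
    """Draw content requests (as content indices) from the popularity distribution."""
    rng = np.random.default_rng(seed)
    return rng.choice(len(popularity), size=n_requests, p=popularity)

# Parameters from the simulation setup: 800 contents, skew 0.52,
# 9000 requests per server per day
pop = zipf_popularity(800, skew=0.52)
reqs = sample_requests(9000, pop)
```

The skew coefficient controls how concentrated requests are on the top-ranked contents; a larger skew makes a small cache proportionally more effective, which is why it is varied as a system parameter.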

4. Application Experiment Analysis Based on Mobile Edge Computing Technology in Civil Aviation Express Marketing

4.1. Performance Comparison of Five Mechanisms in a Certain Cache Space

First, we compare the five caching mechanisms as the MEC cache space varies, in three respects: cache hit rate, average content transmission delay, and transmission overhead. The size of each MEC buffer space ranges from 10 to 100, the overhead coefficient is 10, the skew coefficient is 0.52, and the number of service contents in the system is 800. Table 1 records the variation of the cache hit rate with the size of the MEC cache space under each caching mechanism.


Cache hit rate (%) by cache space size and caching mechanism:

Cache space size    GT      LECC    LENC    LRU     RR
20                  21.6    20.3    18.6    11.8    7.5
40                  33.1    31.5    27.8    21.7    15.1
60                  43.3    39.8    34.2    28.1    21.9
80                  50.8    47.6    40.0    36.7    28.8
100                 57.5    54.8    44.3    42.1    34.9

According to the variation of the cache hit rate with the size of the MEC cache space under each cache mechanism recorded in Table 1, the trend change graph of the cache hit rate with the size of the MEC cache space under each cache mechanism in Figure 3 can be obtained.

It can be seen from Figure 3 that the cache hit rate under all strategies increases as the cache space increases. Experimental data shows that the LECC mechanism proposed in this paper is superior to LENC, LRU, and RR. Specifically, when the cache space is 60, the cache hit rate under the LECC mechanism is about 6%, 11%, and 18% higher than that of LENC, LRU, and RR, respectively. In addition, when the size of the cache space changes from 20 to 100, the cache hit rate under the LECC mechanism differs from the upper limit of the cache hit rate obtained by GT by only 3%-5%. This proves the effectiveness of the caching mechanism based on intelligent prediction of content popularity in this paper.

Table 2 records the variation of the average content transmission delay with the size of the MEC cache space under each caching mechanism.


Average content transmission delay by cache space size and caching mechanism:

Cache space size    GT      LECC    LENC    LRU     RR
20                  78.1    79.7    81.5    88.6    92.2
40                  66.7    69.3    73.2    79.1    85.2
60                  57.0    60.8    66.8    72.1    79.2
80                  49.3    53.9    60.0    64.1    72.2
100                 42.8    45.7    56.4    58.9    66.7

According to the variation of the average content transmission delay with the size of the MEC cache space under each caching mechanism recorded in Table 2, the trend graph in Figure 4 of the average content transmission delay with the size of the MEC cache space under each caching mechanism can be obtained.

It can be seen from Figure 4 that the average content transmission delay under all strategies decreases as the cache space increases. This is because as the cache space increases, more content can be cached locally so that more user requests can be directly served by the local cooperative cache domain. Experimental data shows that the LECC mechanism has the lowest ADL. For example, when the storage space ranges from 60 to 100, the ADL under the LECC mechanism is about 6% to 10%, 10% to 15%, and 18% to 22% lower than that of LENC, LRU, and RR, respectively.

Table 3 records the variation of the average content transmission overhead with the size of the MEC cache space under each caching mechanism [31, 32].


Average content transfer overhead by cache space size and caching mechanism:

Cache space size    GT     LECC    LENC    LRU    RR
20                  376    392     409     442    467
40                  334    347     365     387    425
60                  281    302     327     359    390
80                  244    261     301     322    358
100                 213    236     278     290    327

According to the changes in the average content transmission overhead with the size of the MEC cache space under each caching mechanism recorded in Table 3, the trend graph of the average content transmission overhead with the size of the MEC cache space under each caching mechanism in Figure 5 can be obtained.

It can be seen from Figure 5 that the average content transmission overhead under all strategies decreases as the cache space increases. Experimental data show that the LECC mechanism has the lowest ATC. For example, when the storage space ranges from 60 to 100, the ATC under the LECC mechanism is about 4%-9%, 8%-13%, and 18%-22% lower than that of LENC, LRU, and RR, respectively. The significant improvement in cache performance brought about by the LECC mechanism mainly comes from the transfer-learning-based content popularity prediction proposed in this paper and the MEC collaborative cache [33].

4.2. Performance Comparison of Five Mechanisms under a Certain Amount of Service Content

Tables 4–6 record the cache hit rate, average content transmission delay, and transmission overhead under the five caching mechanisms as the number of service contents changes. The number of service contents ranges from 500 to 4000, the cost coefficient is fixed at 10, the skew coefficient is 0.52, and the size of each MEC server's cache space is fixed at 60.


Cache hit rate (%) by number of service contents and caching mechanism:

Number of service contents    GT      LECC    LENC    LRU     RR
1000                          37.2    35.6    30.4    24.9    19.3
2000                          26.3    24.8    19.8    14.9    9.8
3000                          21.2    18.7    16.2    10.5    7.2
4000                          18.5    15.8    12.1    7.9     5.1

According to the changes in the cache hit rate with the number of MEC service contents under each caching mechanism recorded in Table 4, the trend graph of the cache hit rate with the number of service contents under each caching mechanism in Figure 6 can be obtained.

According to the variation of the average content transmission delay with the number of MEC service contents under each caching mechanism recorded in Table 5, the trend graph in Figure 7 of the average content transmission delay with the number of service contents under each caching mechanism can be obtained.


Average content transmission delay by number of service contents and caching mechanism:

Number of service contents    GT      LECC    LENC    LRU     RR
1000                          61.8    64.9    69.8    75.7    82.1
2000                          74.8    77.2    80.6    85.6    90.3
3000                          79.7    82.8    85.1    89.1    94.0
4000                          82.3    84.9    88.7    92.5    95.1

According to the variation of the average content transmission cost with the number of MEC service contents under each caching mechanism recorded in Table 6, the trend graph in Figure 8 of the average content transmission cost with the number of service contents under each caching mechanism can be obtained.


Average content transfer overhead by number of service contents and caching mechanism:

Number of service contents    GT     LECC    LENC    LRU    RR
1000                          312    325     349     375    413
2000                          371    383     402     425    451
3000                          398    412     422     445    470
4000                          415    423     440     462    476

Figures 6–8 describe the changes in the cache hit rate, average content transmission delay, and transmission overhead with the number of service contents under the five caching mechanisms. Experimental data show that the LECC mechanism proposed in this paper is superior to LENC, LRU, and RR in terms of HR, ADL, and ATC. It can be seen from Figure 6 that the cache hit rate under all strategies decreases as the number of service contents increases, while the average content transmission delay and transmission overhead increase with the number of service contents. This is due to the limited cache space of the MEC server: as the service content grows, more and more user requests cannot be satisfied from the local cache. From the numerical results in Figure 6, when the number of service contents changes from 1000 to 3000, the HR under the LECC mechanism is about 5%~9%, 11%~14%, and 16%~21% higher than that of LENC, LRU, and RR, respectively.

5. Conclusions

With the rapid development of the mobile Internet and the Internet of Things, mankind is about to usher in the 5G era. 5G technology demands "large capacity, large bandwidth, low latency, and low power consumption," and mobile edge computing is precisely a key technology by which 5G networks improve user experience: deployed at the edge of the network, close to the data source, it provides rapid feedback, sinks computing content and capability downward, and localizes services. This paper proposes a collaborative caching mechanism based on machine learning under the 5G MEC architecture. The mechanism organizes the local caches of several MEC servers into a cooperative cache domain and uses a transfer learning method to predict the popularity of content. Based on this prediction, an MEC content cache model with the optimization goal of minimizing the average content transmission cost is established, and the corresponding optimization problem is formulated. We use algorithms to solve for the content caching scheme and perform simulation verification. Experimental results show that the proposed LECC caching strategy can effectively improve the cache hit rate and reduce transmission overhead and content transmission delay.

The proposed mobile edge computing architecture provides storage, computing, and network services for mobile terminal users and offers a good platform for task migration. Task migration offloads all or part of a resource-intensive application from a mobile device, whose processing capability and battery power are limited, to the resource-rich platform. Mobile edge computing thus provides users with powerful computing capability and an ultra-high-speed network that can be accessed anytime and anywhere, with shorter migration paths and lower power consumption.

Data Availability

No data were used to support this study.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Y. Yu, “Mobile edge computing towards 5G: vision, recent progress, and open challenges,” China Communications, vol. 13, Supplement2, pp. 89–99, 2016. View at: Google Scholar
  2. T. X. Tran and D. Pompili, “Joint task offloading and resource allocation for multi-server mobile-edge computing networks,” IEEE Transactions on Vehicular Technology, vol. 68, no. 1, pp. 856–868, 2019. View at: Publisher Site | Google Scholar
  3. E. Ahmed and M. H. Rehmani, “Mobile edge computing: opportunities, solutions, and challenges,” Future Generation Computer Systems, vol. 70, pp. 59–63, 2016. View at: Google Scholar
  4. N. Ansari and X. Sun, “Mobile edge computing empowers Internet of things,” IEICE Transactions on Communications, vol. 101, no. 3, pp. 604–619, 2018. View at: Google Scholar
  5. W. Li, Z. Chen, X. Gao, W. Liu, and J. Wang, “Multimodel framework for indoor localization under mobile edge computing environment,” IEEE Internet of Things Journal, vol. 6, no. 3, pp. 4844–4853, 2019. View at: Publisher Site | Google Scholar
  6. S. Wang, Y. Zhao, L. Huang, J. Xu, and C. H. Hsu, “QoS prediction for service recommendations in mobile edge computing,” Journal of Parallel & Distributed Computing, vol. 127, pp. 134–144, 2017. View at: Google Scholar
  7. J. Zeng, J. Sun, B. Wu, and X. Su, “Mobile edge communications, computing, and caching (MEC3) technology in the maritime communication network,” China Communications, vol. 17, no. 5, pp. 223–234, 2020. View at: Publisher Site | Google Scholar
  8. K. Zhang, Y. Mao, S. Leng, Y. He, and Y. Zhang, “Mobile-edge computing for vehicular networks: a promising network paradigm with predictive off-loading,” IEEE Vehicular Technology Magazine, vol. 12, no. 2, pp. 36–44, 2017. View at: Publisher Site | Google Scholar
  9. T. Wang, H. Luo, X. Zheng, and M. Xie, “Crowdsourcing mechanism for trust evaluation in CPCS based on intelligent mobile edge computing,” ACM Transactions on Intelligent Systems and Technology, vol. 10, no. 6, pp. 1–19, 2019. View at: Google Scholar
  10. Q. V. Pham, L. B. Le, S. H. Chung, and W. J. Hwang, “Mobile edge computing with wireless backhaul: joint task offloading and resource allocation,” IEEE Access, vol. 7, no. 99, pp. 16444–16459, 2019. View at: Publisher Site | Google Scholar
  11. J. C. Yim, S. H. Kim, and C. S. Keum, “Personalized service recommendation for mobile edge computing environment,” The Journal of Korean Institute of Communications and Information Sciences, vol. 42, no. 5, pp. 1009–1019, 2017. View at: Publisher Site | Google Scholar
  12. Y. Mao, J. Zhang, S. H. Song, and K. B. Letaief, “Stochastic joint radio and computational resource management for multi-user mobile-edge computing systems,” IEEE Transactions on Wireless Communications, vol. 16, no. 9, pp. 5994–6009, 2017. View at: Publisher Site | Google Scholar
  13. Y. Jararweh, M. Al-Ayyoub, M. Al-Quraan, A. T. Lo’ai, and E. Benkhelifa, “Delay-aware power optimization model for mobile edge computing systems,” Personal & Ubiquitous Computing, vol. 21, no. 6, pp. 1–11, 2017. View at: Google Scholar
  14. M. Zeng and V. Fodor, “Energy minimization for delay constrained mobile edge computing with orthogonal and non-orthogonal multiple access,” Ad Hoc Networks, vol. 98, pp. 102060.1–102060.13, 2020. View at: Google Scholar
  15. Y. Zhai, T. Bao, L. Zhu, M. Shen, X. Du, and M. Guizani, “Toward reinforcement-learning-based service deployment of 5G mobile edge computing with request-aware scheduling,” IEEE Wireless Communications, vol. 27, no. 1, pp. 84–91, 2020. View at: Publisher Site | Google Scholar
  16. Q. Zhang, L. Gui, F. Hou, J. Chen, S. Zhu, and F. Tian, “Dynamic task offloading and resource allocation for mobile edge computing in dense cloud RAN,” IEEE Internet of Things Journal, vol. 7, no. 4, pp. 3282–3299, 2020. View at: Publisher Site | Google Scholar
  17. J. Zhang, H. Guo, and J. Liu, “Adaptive task offloading in vehicular edge computing networks: a reinforcement learning based scheme,” Mobile Networks and Applications, vol. 25, no. 4, pp. 1–10, 2020. View at: Google Scholar
  18. R. Fantacci and B. Picano, “Federated learning framework for mobile edge computing networks,” CAAI Transactions on Intelligence Technology, vol. 5, no. 1, pp. 15–21, 2020. View at: Publisher Site | Google Scholar
  19. N. Saranya, K. Geetha, and C. Rajan, “Data replication in mobile edge computing systems to reduce latency in Internet of things,” Wireless Personal Communications, vol. 112, no. 4, pp. 2643–2662, 2020. View at: Publisher Site | Google Scholar
  20. W. Wen, Y. Cui, T. Q. S. Quek, F. C. Zheng, and S. Jin, “Joint optimal software caching, computation offloading and communications resource allocation for mobile edge computing,” IEEE Transactions on Vehicular Technology, vol. 69, no. 7, pp. 7879–7894, 2020. View at: Publisher Site | Google Scholar
  21. Y. Liu, Y. Li, Y. Niu, and D. Jin, “Joint optimization of path planning and resource allocation in mobile edge computing,” IEEE Transactions on Mobile Computing, vol. 19, no. 9, pp. 2129–2144, 2020. View at: Publisher Site | Google Scholar
  22. J. Zhang, H. Guo, J. Liu, and Y. Zhang, “Task offloading in vehicular edge computing networks: a load-balancing solution,” IEEE Transactions on Vehicular Technology, vol. 69, no. 2, pp. 2092–2104, 2020. View at: Publisher Site | Google Scholar
  23. J. Shen, Y. Ren, J. Wan, and Y. Lan, “Hard disk drive failure prediction for mobile edge computing based on an LSTM recurrent neural network,” Mobile Information Systems, vol. 2021, Article ID 8878364, 12 pages, 2021. View at: Publisher Site | Google Scholar
  24. J. Ahn, J. Lee, S. Park, and H. S. Park, “Power efficient clustering scheme for 5G mobile edge computing environment,” Mobile Networks and Applications, vol. 24, no. 2, pp. 643–652, 2019. View at: Publisher Site | Google Scholar
  25. X. Meng, W. Wang, Y. Wang, V. K. N. Lau, and Z. Zhang, “Closed-form delay-optimal computation offloading in mobile edge computing systems,” IEEE Transactions on Wireless Communications, vol. 18, no. 10, pp. 4653–4667, 2019. View at: Publisher Site | Google Scholar
  26. Z. Lv and Q. Liang, “Optimization of collaborative resource allocation for mobile edge computing,” Computer Communications, vol. 161, pp. 19–27, 2020. View at: Publisher Site | Google Scholar
  27. J. Xue, Z. Wang, Y. Zhang, and L. Wang, “Task allocation optimization scheme based on queuing theory for mobile edge computing in 5G heterogeneous networks,” Mobile Information Systems, vol. 2020, Article ID 1501403, 12 pages, 2020. View at: Publisher Site | Google Scholar
  28. C. H. Wu and S. B. Tsai, “Using DEMATEL-based ANP model to measure the successful factors of E-commerce,” Journal of Global Information Management, vol. 26, no. 1, pp. 120–135, 2018. View at: Publisher Site | Google Scholar
  29. S. Wang, Q. Li, J. Hou, S. Meng, B. Zhang, and C. Zhou, “Active defense by mimic association transmission in edge computing,” Mobile networks & applications, vol. 25, no. 2, pp. 725–742, 2020. View at: Publisher Site | Google Scholar
  30. H. Sun, F. Zhou, and R. Q. Hu, “Joint offloading and computation energy efficiency maximization in a mobile edge computing system,” IEEE Transactions on Vehicular Technology, vol. 68, no. 3, pp. 3052–3056, 2019. View at: Google Scholar
  31. S. Li, D. Zhai, P. Du, and T. Han, “Energy-efficient task offloading, load balancing, and resource allocation in mobile edge computing enabled IoT networks,” Science China Information Sciences, vol. 62, no. 2, pp. 1–3, 2019. View at: Google Scholar
  32. M. Abdel-Basset, G. Manogaran, and M. Mohamed, “A neutrosophic theory based security approach for fog and mobile-edge computing,” Computer Networks, vol. 157, pp. 122–132, 2019. View at: Google Scholar
  33. B. Zhu, S. Ma, R. Xie, J. Chevallier, and Y. Wei, “Hilbert spectra and empirical mode decomposition: a multiscale event analysis method to detect the impact of economic crises on the European carbon market,” Computational Economics, vol. 52, no. 1, pp. 105–121, 2018. View at: Publisher Site | Google Scholar

Copyright © 2021 Ying Yu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
