Special Issue: Advanced Pattern Recognition Systems for Multimedia Data
Design of a Teaching Auxiliary System for Fine Art Painting Courses Aided by Edge Cloud and Mobile Auxiliary Equipment
With the development of information technology, the new smart campus integrates teaching, scientific research, office services, and many other applications, and online educational resources have grown explosively. In the cloud era, art and painting courses also face a series of digital challenges. Unlike other courses, art and painting courses place higher demands on audio and video transmission, and transmitting the audio and video generated during interactive teaching efficiently and with low latency is a challenge. To meet the needs of art painting course teaching, this paper builds a “double cloud” teaching auxiliary system based on the characteristics of edge cloud and mobile auxiliary equipment. By deploying different network layers across the cloud and the edge and designing an adaptive transcoding switching mechanism, the system supports smooth video streaming for both classroom and remote teaching, enables two-way interaction and instant feedback between teachers and students, and helps improve the teaching effect of art painting courses.
1. Introduction

With the rapid development of the Internet of Things and the popularity of 4G/5G wireless networks, the era of the Internet of Everything has arrived, and the rapid increase in the number of edge devices has pushed the data they generate to the zettabyte (ZB) level. The introduction and widespread application of cloud computing have changed the way people work and live. In this new era, the teaching mode of art and painting courses has also undergone tremendous changes, and the COVID-19 epidemic of recent years has made the demand for online learning more urgent. Remote online teaching of art and painting courses faces not only a large demand for video transmission but also problems such as lag caused by the many mobile devices involved. In particular, with the explosion of access devices and user requirements for low latency, the centralized big data processing of the cloud computing model cannot efficiently process the data generated by edge devices. The imbalance between resource-constrained devices and complex, computation-intensive applications has increasingly become a bottleneck for the quality of experience users expect and may hinder the development of mobile applications. In the era of edge big data processing, network edge devices produce huge amounts of real-time data, but these devices can be attached to an edge computing platform that supports real-time data processing and exposes services and functional interfaces; users can call these interfaces to obtain the required edge computing services, effectively reducing latency. Therefore, to exploit the computing power of the cloud while meeting the low-latency requirements of the user side, it is natural to build an art and painting course teaching auxiliary system on edge cloud and mobile auxiliary devices.
Based on this background, this paper designs an art and painting course teaching auxiliary system aided by edge cloud and mobile devices. The paper is divided into four chapters. Chapter 1 introduces the research background, the necessity of the research, and the organization of the paper. Chapter 2 reviews the current state of edge computing and its applications, focusing on image and video transmission, and introduces the transmission problem the teaching auxiliary system needs to solve. Chapter 3 analyzes the functions required of the teaching aid system given the particularity of art and painting course teaching, constructs the “double cloud” teaching auxiliary system model, and designs a cloud edge collaboration mechanism to achieve efficient, low-latency transmission. Chapter 4 implements and validates the designed cloud edge collaboration mechanism, analyzes the experimental results and errors, and draws conclusions. Experiments show that, through cloud edge deployment and the design of an adaptive transcoding switching mechanism, the system supports video playback for both classroom and distance teaching, enables two-way interaction and real-time feedback between teachers and students, and helps improve the teaching effect of art painting courses.
2. State of the Art
According to estimates by the Cisco Internet Business Solutions Group, some 50 billion wireless devices are now connected to the network. Application services based on the Internet of Everything require shorter response times and generate large amounts of data involving personal privacy. The traditional cloud computing model cannot efficiently support such services, so edge computing has emerged in response and has attracted wide attention from researchers in recent years. Edge computing is a computing paradigm in which computation is performed at the network edge, near the data source. It is increasingly clear that edge computing is becoming a popular alternative because of high workloads and high response latency in the cloud. The “edge” in edge computing refers to the computing and storage resources on the edge of the network, where edge devices already have sufficient computational power to process source data locally. However, edge computing is essentially local and small in scale; facing large amounts of data, it lacks a powerful computing platform like the cloud, so processing large volumes of data with edge computing alone is often insufficient and greatly increases computing delay. It is therefore necessary to combine edge computing with cloud computing, both to meet large-capacity computing needs and to maintain computing efficiency and effectiveness. The edge computing model can not only reduce the data transmission bandwidth but also better protect private data and reduce the risk of leaking sensitive terminal data. The cloud platform and the edge cloud platform will together become the supporting platform for emerging Internet of Everything applications, jointly supporting future big data applications.
Edge computing provides services near data sources, giving it a huge advantage in many mobile and Internet of Things applications. Alibaba Cloud has deployed a “city brain” in Hangzhou: data obtained from city cameras are processed with video analysis and visual analysis technology, and decisions are then made on the results to improve road traffic throughput. The analysis and training process of deep learning is placed in the cloud, while the generated model is executed on the edge gateway, realizing the combination of cloud and edge cloud. Tang et al. proposed a big data analysis framework for smart cities that works well on geographically widely distributed data. Wang et al. designed a lightweight plant disease identification model for edge computing, deploying a lightweight deep convolutional neural network in an embedded system with limited computing power and realizing a high-precision lightweight end model using quantization and imitation learning. Dong combined edge computing and video monitoring to build a video monitoring system based on edge computing and realized a face recognition algorithm based on convolutional neural networks on top of it. Chang et al. proposed a video surveillance framework based on edge computing that can judge video content at the front end, near the video source. Combining an edge computing platform with underwater image enhancement technology and an underwater target detection algorithm, Ren designed an underwater image processing system based on the edge computing platform to realize real-time detection of multiple underwater targets. In response to the needs of COVID-19 prevention and control, Jia et al. combined mobile edge computing (MEC) with a face mask wearing recognition algorithm to deploy the video image processing task of the video acquisition terminal in the edge cloud, realizing local shunt processing of real-time video image information and greatly reducing network bandwidth pressure. China also attaches great importance to the development of edge computing and continuous innovation, and is increasing its efforts to promote technology, standards, and industrial development in the fields of the industrial Internet and the Internet of Vehicles.
Nowadays, to promote education reform, scholars and enterprises at home and abroad must determine how to analyze and process large volumes of video data, reduce delay, and improve throughput in the face of ever-growing data. Edge computing is especially suitable when data and devices are distributed and information resources must be transmitted. For art and painting courses in particular, teaching must convey not only the teacher’s voice but also the painting process itself. With the popularity of intelligent mobile terminals, it has become a trend for students to use intelligent terminals to learn art and painting. Teachers also need modern means to show students the painting process and, at the same time, to grasp students’ learning progress immediately so as to better conduct teaching and guidance. With the support of embedded sensors and device cameras, the transmission of teaching audio and video and the bidirectional interaction impose higher requirements on video stream transmission. In distance teaching, the video stream places higher demands on network bandwidth. A reasonable way to solve this problem is to adopt the “double cloud” combination of the online cloud and the edge cloud, using the cloud data processing center and the edge computing of mobile auxiliary devices to jointly support this application.
3.1. System Framework of Art and Painting Course Auxiliary System
We use edge cloud and mobile auxiliary devices to build a “double cloud” teaching auxiliary system for the art painting course. The first step is to build a composite “online cloud edge cloud” framework, using cloud edge computing to improve data processing efficiency and help teachers and students obtain smoother, lower-delay service delivery from the cloud to mobile auxiliary devices. Based on the core capabilities of cloud computing and edge computing, a cloud computing platform is built on the edge infrastructure, forming a comprehensive elastic cloud platform for computing, networking, storage, and security in the edge zone. The edge cloud and the central cloud form a “cloud edge collaboration” technical framework that performs storage, computing, intelligent data analysis, and other tasks; it is configured around the network to minimize response latency, reduce bandwidth costs, and provide cloud services. Hence the present cloud edge architecture model generally has the three layers of “cloud, edge, and end.” The specific model is shown in Figure 1.
Edge computing infrastructure has initially reached an industry consensus: “end, edge, cloud” form the basic composition of the cloud edge collaborative computing architecture, and the “network” provides the necessary foundation for collaborative linkage among the three. On the cloud side, a centralized management platform is responsible for management and scheduling; on the edge side, edge applications are controlled, analyzed, and stored; on the end side, various mobile auxiliary devices form the terminal application layer of the teaching auxiliary system. The network is a crucial part of cloud edge collaborative computing, and its main role is to provide the basic capability for “cloud edge end” collaborative linkage. The central cloud and the edge cloud cooperate to realize edge cloud collaboration, network computing capability scheduling, unified network management, and other functions, truly realizing a “ubiquitous” cloud. In the art painting teaching course assistance system based on edge cloud and mobile auxiliary equipment, the edge cloud is built inside the school, and mobile terminals provide support on the end side. Accordingly, the auxiliary teaching system places no high requirements on the mobile auxiliary terminal, as long as it can meet basic drawing and video requirements. Figure 2 describes the logical relationship between the “network, cloud, and end” of the art and painting course teaching auxiliary system based on edge cloud and mobile auxiliary devices. It consists of three main components: end-user auxiliary devices, an edge learning server, and a deep learning cluster on the remote cloud. The system enables teachers and students to realize intelligent instant interaction and one-to-one personalized teaching using intelligent mobile auxiliary equipment and the application support platform.
3.2. Cloud Edge Collaboration Mechanism for Conventional Teaching in the Teaching Auxiliary System
Different from other categories of courses, art and painting courses involve more interaction between teachers and students and belong to a typical “interactive” teaching mode, one that pays particular attention to teacher student interaction. In short, teaching activities are regarded as communication among students as well as between teachers and students, and the educational process is a dynamic interactive process. However, this interaction generates large amounts of data and network traffic, such as images, audio, and video; the traffic depends on network load and congestion, varies across local networks, and must be processed with high accuracy. Processing images only in the cloud suffers from high wide-area latency and bandwidth costs and degrades precision and real-time performance. With the explosive growth of computing power, machine learning research has flourished; neural network methods for processing audio, video, and images have surpassed traditional algorithms and become the best current methods. This paper therefore builds an art and painting course auxiliary teaching system suited to “double cloud” collaboration by adding a deep neural network on top of the edge cloud collaboration framework to achieve a better user experience.
According to the “double cloud” system model in Figure 2, in an edge learning framework the data from end-user devices such as mobile phones, cameras, and Internet of Things devices may be noisy and highly redundant. The edge learning server collects large amounts of raw data from end users and performs preprocessing and preliminary learning to filter noise and extract key features from the raw data. The deep learning cluster is equipped with powerful, scalable GPU resources and, based on the output of the edge servers, performs deep learning tasks such as convolutional neural networks (CNN) and long short-term memory (LSTM) networks. Compared with state-of-the-art cloud-only architectures, edge learning servers reduce the workload of the network infrastructure. The network latency between an end-user device and the edge server is significantly shorter than that between the device and the cloud server, because the local edge server is close to the end-user device. The model trained in the cloud can be deployed on edge servers to provide timely services to end users, and new data can be continuously transmitted to the cloud to further update the model.
In the conventional teaching assistance mechanism of the designed teaching auxiliary system, we use a deep neural network to process the audio and video generated during teaching. On the basis of a traditional deep neural network, we add the edge cloud collaborative network to push part of the computation to the edge. In addition, the DNN is partitioned and deployed across the edge and the cloud for execution to improve the transmission efficiency of the teaching audio and video stream. For the specific deep neural network, we use the AlexNet network, which performs well in the field of image recognition. Figure 3 shows the structure of the AlexNet network, mainly comprising convolutional, pooling, activation, and fully connected layers, optimized with stochastic gradient descent.
In a neural network, the different layers simulate the neuronal structure of living organisms to achieve information transmission. In the AlexNet network, the convolutional layer changes the size of the input image by adjusting the number and size of its filters; it relies on different convolution kernels, and the convolution operation satisfies formula (1).
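Formula (1) itself is not reproduced in the text; assuming the standard 2-D convolution used in AlexNet-style networks, it presumably has the following form (the symbols and index conventions below are illustrative assumptions, not taken from the original):

```latex
% Standard 2-D convolution: the k-th output feature map at position (i, j),
% summing over input channels m and the kernel window (u, v), with bias b.
y^{(k)}_{i,j} = b^{(k)} + \sum_{m}\sum_{u}\sum_{v}
    w^{(k,m)}_{u,v}\, x^{(m)}_{i+u,\, j+v} \tag{1}
```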
The fully connected layer connects the outputs of all neurons together and adjusts the proportion of each neuron’s contribution to the output through learned weights, satisfying formula (2).
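Formula (2) is likewise missing from the text; a standard fully connected layer, which the description matches, can be written as follows (symbols are illustrative):

```latex
% Fully connected layer: every input neuron x_i contributes to output
% neuron j through a learned weight w_{ij}, plus a bias b_j, followed by
% an activation f.
y_j = f\!\left(\sum_{i} w_{ij}\, x_i + b_j\right) \tag{2}
```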
The main purpose of the pooling layer is to reduce dimensionality and complexity and to provide some translation invariance. The two common pooling operations are average pooling and maximum pooling; pooling layers differ mainly in the dimensions of their input, the size of the pooling region, and the stride with which the pool is applied.
The activation layer applies a nonlinear function separately to each of its input values, producing the same amount of data as output. Common activation layers include the Sigmoid layer (Sig), the Rectified Linear layer (ReLU), and the Hard Tanh layer (HTanh); their function expressions are shown in (3)–(5), and their element-wise operation satisfies formula (6).
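Formulas (3)–(6) are not reproduced in the text; the standard definitions of the three named activation functions, together with the element-wise application implied by formula (6), are presumably:

```latex
% Standard definitions (illustrative) of the named activation layers:
\mathrm{Sig}(x)   = \frac{1}{1 + e^{-x}}                 \tag{3}
\mathrm{ReLU}(x)  = \max(0,\, x)                         \tag{4}
\mathrm{HTanh}(x) = \max\!\bigl(-1,\ \min(1,\, x)\bigr)  \tag{5}
% Element-wise application of an activation g to the layer input x:
y_i = g(x_i), \qquad i = 1, \dots, n                     \tag{6}
```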
The mean squared error between the output of the neural network and the expected output can be expressed as formula (7).
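Formula (7) is absent from the text; assuming the usual mean squared error over N training samples, with network outputs and targets denoted as below (illustrative notation), it would read:

```latex
% Mean squared error over N samples, with network output \hat{y}_n and
% target output y_n.
E = \frac{1}{N} \sum_{n=1}^{N} \left( y_n - \hat{y}_n \right)^2 \tag{7}
```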
Backpropagation and stochastic gradient descent are used to optimize the network parameters, finding the coefficient matrix W, the bias vector b, and appropriate weight coefficients for the output and hidden layers. The update formula of gradient descent is shown in formula (8).
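Formula (8) is not shown in the text; the standard gradient descent update over the parameters θ = (W, b) with learning rate η is presumably intended:

```latex
% One gradient descent step with learning rate \eta on parameters
% \theta = (W, b), using the gradient of the error E.
\theta \leftarrow \theta - \eta \, \nabla_{\theta} E \tag{8}
```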
The output of the learning process can also be controlled by an output gate, as shown in formulas (9)–(11), where b_f, W_f, and U_f are the bias, input weights, and recurrent weights of the forget gate, respectively.
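Formulas (9)–(11) are missing from the text. Given the mention of the forget-gate parameters b_f, W_f, and U_f, the standard LSTM gate equations are presumably intended; the output-gate symbols b_o, W_o, U_o, the cell state s_t, and the hidden output h_t below follow common textbook notation and are assumptions:

```latex
% Forget gate (with bias b_f, input weights W_f, recurrent weights U_f),
% output gate q_t, and the hidden output h_t gated by q_t.
f_t = \sigma\!\left( b_f + W_f x_t + U_f h_{t-1} \right)  \tag{9}
q_t = \sigma\!\left( b_o + W_o x_t + U_o h_{t-1} \right)  \tag{10}
h_t = \tanh(s_t)\, q_t                                    \tag{11}
```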
3.3. Cloud Edge Collaboration Mechanism for Distance Teaching in the Teaching Auxiliary System
Distance teaching of art and painting courses is a teaching mode that uses computer networks. It delivers courses to one or more locations off campus, frees students from geographical restrictions, brings great convenience to remote areas, provides rich learning opportunities, and has become a normal teaching method under COVID-19. The typical structural framework for distance teaching is shown in Figure 4.
However, with the rapid growth of network users, limited bandwidth resources cannot be adjusted dynamically for users connected to the same network; moreover, if the video source is stored in the remote cloud center, uncertainty about the video content increases during transmission. Migrating cloud computing functions to an edge computing platform brings them closer to the client, localizes the network, and uses the storage, computing, and processing capacity of MEC to improve cache performance, transmission rate, and efficiency, which is conducive to smooth interaction during remote teaching of the art painting course. MEC moves computing and storage capacity to the edge of the network, closer to users and data sources; a user’s request no longer needs a long time to receive a response or a long journey through the transmission network to the remote core network, because the local MEC server offloads part of the traffic and processes and responds to the user directly, greatly reducing communication delay. Take video transmission as an example: in the traditional response mode, when a user requests video content on the client side, the request first goes through the base station and then links to the target content through the core network, which is sent back layer by layer to complete the interaction between the terminal request and the target content. With the MEC solution, an MEC server is deployed on the base station side near the user, and its storage resources are used to cache content on the MEC server, so users can obtain the target content directly from it. This greatly shortens data transmission time and improves the user’s quality of experience.
At present, network video transmission mainly adopts streaming media formats, and video compression mainly adopts the MPEG-4 standard. Rather than defining regions pixel by pixel, MPEG-4 uses lines and outlines to define regions and therefore needs only a narrow bandwidth. With its flexibility, interactivity, higher compression rate, and support for bidirectional interaction, it is very suitable for network video transmission. On top of HTTP transmission of video data, Dynamic Adaptive Streaming over HTTP (DASH) was developed to suit different network bandwidths. DASH comprises a server part and a client part: the server divides the compressed files into segments in the format supported by DASH and generates the corresponding MPD (Media Presentation Description) files; the client first requests the MPD file and from it learns the information about the linked video.
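As a concrete illustration of the DASH workflow just described, the following Python sketch models a server manifest and a client that picks a representation. The data structures and function names (`parse_mpd`, `choose_representation`) are hypothetical simplifications, not a real MPD parser:

```python
# Minimal sketch of the DASH manifest/client interaction (hypothetical
# data structures): the server's manifest lists each representation's
# bitrate, and the client picks the highest bitrate that fits the
# measured bandwidth.

def parse_mpd(mpd):
    """Return the available representations, sorted by bitrate (kbit/s)."""
    return sorted(mpd["representations"], key=lambda r: r["bitrate_kbps"])

def choose_representation(representations, bandwidth_kbps):
    """Pick the highest bitrate not exceeding the estimated bandwidth;
    fall back to the lowest representation if none fits."""
    feasible = [r for r in representations
                if r["bitrate_kbps"] <= bandwidth_kbps]
    return feasible[-1] if feasible else representations[0]

# Toy manifest with three representations (illustrative numbers).
mpd = {"representations": [
    {"id": "360p", "bitrate_kbps": 800},
    {"id": "1080p", "bitrate_kbps": 4500},
    {"id": "720p", "bitrate_kbps": 2500},
]}

reps = parse_mpd(mpd)
best = choose_representation(reps, bandwidth_kbps=3000)  # picks "720p"
```

A real client would re-estimate bandwidth after every segment download and repeat the selection per segment.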
4. Result Analysis and Discussion
4.1. Implementation and Verification of Conventional Teaching Assistance in the Teaching Auxiliary System
In the conventional teaching mechanism of the teaching auxiliary system, we deploy the AlexNet neural network in the cloud and on the mobile auxiliary terminal, respectively, conduct simple simulation experiments, and analyze the results. As can be seen from Figure 5, which shows transmission delay on the left and computation delay on the right, cloud processing incurs a large transmission delay while its computation takes only 6 ms, less than 6% of the total; edge computing is the opposite, producing 385 ms of computation delay but much less transmission delay. It can be seen that (1) the data transmission delay of the mobile edge is small, but computation produces a large delay; (2) cloud processing has significant computing advantages over mobile edge processing, but this does not necessarily translate into an end-to-end latency advantage, because the cost of data transmission dominates; (3) mobile edge execution generally has lower latency than cloud-only methods.
We further analyze the latency of each AlexNet layer and the size of its output data, as shown in Figures 6 and 7. Different network layers produce different latencies and output data sizes: the front-end layers (such as the convolutional and pooling layers) produce more output data than the back-end layers (the fully connected layers). Thus there is a unique partition point at which to divide the AlexNet neural network between the cloud and the edge.
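The partition-point idea can be made concrete with a small sketch: given per-layer edge and cloud latencies, per-layer output sizes, and the uplink bandwidth, evaluate every possible split and keep the cheapest. The function name and the toy numbers are illustrative assumptions, not measurements from the paper:

```python
def best_partition(edge_ms, cloud_ms, size_kb, bw_kbps):
    """edge_ms[i] / cloud_ms[i]: latency of layer i on the edge / cloud.
    size_kb[0]: raw input size; size_kb[i+1]: output size of layer i.
    Returns (k, total_ms): layers < k run on the edge, layers >= k in
    the cloud, and the data crossing the link is size_kb[k]."""
    n = len(edge_ms)
    best = (0, float("inf"))
    for k in range(n + 1):
        total = (sum(edge_ms[:k])                # edge computation
                 + size_kb[k] / bw_kbps * 1000   # uplink transfer (ms)
                 + sum(cloud_ms[k:]))            # cloud computation
        if total < best[1]:
            best = (k, total)
    return best

# Toy profile: edge is slow but late layers emit tiny outputs, so the
# best split sits right after the layer whose output shrinks.
k, total = best_partition(
    edge_ms=[50, 40, 30, 20], cloud_ms=[5, 4, 3, 2],
    size_kb=[600, 400, 300, 20, 4], bw_kbps=1000)
```

The exhaustive loop is fine here because DNNs have only tens of layers; the same search can be rerun whenever the measured bandwidth changes, which is exactly the dynamic adjustment described in the text.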
The edge cloud collaborative network model dynamically divides the DNN into two parts: one part of the layers is computed at the edge and the other in the cloud, so as to optimize end-to-end latency, reduce delay, and improve throughput. The partition point can be adjusted dynamically according to the actual situation, with the general requirement of minimizing total delay. The results are shown in Figure 8.
Figure 8 shows the end-to-end delays under different network bandwidths, analyzing the two deployment schemes at different network rates. Overall, when the network bandwidth decreases, only the delay of the cloud-only application increases; when the network bandwidth increases, the delays of both cloud-only inference and the edge cloud neural network model decrease. In addition, cloud-only inference is strongly affected by network bandwidth and varies with it, while the dynamic division of the neural network based edge cloud model makes bandwidth changes have little impact on application latency, maintaining consistently low latency. Essentially, the neural network deployed across the double cloud requires far less bandwidth than the traditional case.
4.2. Implementation and Verification of Remote Teaching Assistance in the Teaching Auxiliary System
In the remote teaching mode of the art and painting course based on edge cloud and mobile auxiliary devices, introducing a dynamic adaptive video cache scheme based on MEC effectively improves video caching under different bandwidth conditions and meets the needs of smooth teaching across different definition conversions.
Mobile auxiliary devices include smartphones, laptops, tablets, and wearable devices connected through base stations or access points, while an MEC server is deployed next to the base station or access point, combining MEC and DASH technologies. Because the MEC server is closer to the user side, when it receives a request it not only processes it but can also interact with the user directly or forward the request to the cloud or to other MEC servers. We use the MEC server for video caching and processing; conceptually, the MEC cache server is similar to a cache proxy server on the Internet. Specifically, DASH divides the video content into multiple segments, and in a video streaming session each segment can be encoded at different resolutions and bitrates and requested independently. Within its available computing power, the MEC server can transcode the video into different variants to meet users’ requirements and provide a smooth, high-quality experience.
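The cache-and-transcode behavior described here can be sketched as follows. The class and its callbacks are hypothetical stand-ins (a real MEC server exposes no such API), meant only to show the decision order: edge cache hit, then local transcode of another cached variant, then cloud fetch as a last resort:

```python
class MecVideoCache:
    """Toy model of an MEC segment cache. Segments are cached per
    (segment_id, variant); a full miss is forwarded to the cloud, while
    a cached variant of the same segment is transcoded locally instead
    of fetching again."""

    def __init__(self, fetch_from_cloud, transcode):
        self.store = {}                    # (seg_id, variant) -> data
        self.fetch_from_cloud = fetch_from_cloud
        self.transcode = transcode
        self.cloud_fetches = 0             # how often we had to go upstream

    def get(self, seg_id, variant):
        key = (seg_id, variant)
        if key in self.store:
            return self.store[key]         # edge cache hit
        # Any cached variant of the same segment can be transcoded locally
        # (a real system would only transcode from a higher quality down).
        for (sid, var), data in list(self.store.items()):
            if sid == seg_id:
                self.store[key] = self.transcode(data, variant)
                return self.store[key]
        self.cloud_fetches += 1            # full miss: fetch from the cloud
        self.store[key] = self.fetch_from_cloud(seg_id, variant)
        return self.store[key]

# Toy callbacks: data is just a "segment@variant" string.
cache = MecVideoCache(
    fetch_from_cloud=lambda seg, var: f"{seg}@{var}",
    transcode=lambda data, var: data.split("@")[0] + "@" + var)
a = cache.get("seg1", "4k")      # miss: one cloud fetch
b = cache.get("seg1", "1080p")   # served by local transcoding, no fetch
```

The point of the sketch is that the second request never reaches the cloud, which is the bandwidth saving the text attributes to MEC caching.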
In distance education scenarios, most of the traffic is generated in the form of video. The video stream’s code rate is usually constant, but the bandwidth it occupies keeps changing; therefore, DASH adaptive bitrate technology generates the appropriate rate for different network bandwidths and adjusts it automatically to provide users with a smooth viewing experience. According to the characteristics of art painting teaching, this paper requests an appropriate code rate from the server mainly based on the buffer occupancy. Specifically, upper and lower thresholds are set on the buffer occupancy. When conditions (12) and (13) are both met, the code rate of the video segment remains unchanged; otherwise, the code rate is gradually increased. The conditions involve the estimated bandwidth and a code-rate switching factor. When condition (14) is met, the maximum code rate of the video segment is reached.
When condition (15) is met, the code rate of the video segment is gradually reduced; otherwise, it remains unchanged. The condition involves a threshold for reducing the video code rate: when the estimated bandwidth cannot sustain the current bitrate, the resolution is reduced; when the current network bandwidth can sustain the current bitrate, the bitrate of the next video segment stays the same as the previous one. Through this dynamic adaptive adjustment, the code rate is adjusted automatically, and the quality of experience and satisfaction of users watching teaching videos are improved. During transcoding, the video incurs a certain delay. By playing 4K HD video, different bandwidth occupancy rates can be simulated for conversions of 4K to 4K, 4K to 2K, 4K to 1080P, 4K to 720P, and 4K to 360P, and end-to-end delay analysis can be conducted for each.
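Since conditions (12)–(15) are not reproduced in the text, the following sketch implements a plausible buffer-threshold adaptation rule in their spirit; the thresholds `b_low`/`b_high` and the switching factor `alpha` are illustrative assumptions:

```python
def next_bitrate(levels, cur, buffer_s, bw_kbps,
                 b_low=10, b_high=30, alpha=0.8):
    """Buffer-based bitrate selection sketch. levels: ascending bitrate
    ladder (kbit/s); cur: index of the current level; buffer_s: seconds
    of buffered video; bw_kbps: estimated bandwidth. Returns the index
    of the level to request for the next segment."""
    if buffer_s < b_low or bw_kbps < levels[cur]:
        return max(cur - 1, 0)      # buffer draining or bandwidth too low
    if (buffer_s > b_high and cur + 1 < len(levels)
            and alpha * bw_kbps >= levels[cur + 1]):
        return cur + 1              # healthy buffer and headroom: step up
    return cur                      # within thresholds: keep the rate

ladder = [800, 2500, 4500]          # illustrative 360p/720p/1080p ladder
up = next_bitrate(ladder, cur=1, buffer_s=35, bw_kbps=6000)    # steps up
down = next_bitrate(ladder, cur=1, buffer_s=5, bw_kbps=6000)   # steps down
hold = next_bitrate(ladder, cur=1, buffer_s=20, bw_kbps=3000)  # holds
```

Stepping one level at a time, rather than jumping straight to the estimated maximum, is what keeps the perceived quality stable when bandwidth fluctuates.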
Figure 9 shows the switching delay of the different models in the different conversion modes. As can be seen from the figure, for the traditional model, the lower the transcoding target, the lower the delay. The MEC dynamic adaptive video stream cache model also gradually reduces end-to-end delay across the transcoding levels, but the differences are small, reflecting a stable network effect. Overall, the end-to-end delay of the MEC dynamic adaptive video stream cache model during transcoding is lower than that of the traditional model; transcoding can significantly reduce the delay and improve the user experience, confirming that the proposed model performs better and can support video transmission under different network loads.
5. Conclusion

Given the digital characteristics of the current era, the education industry has introduced a large amount of digital equipment, and art and painting courses have introduced intelligent terminals, fundamentally changing the traditional teaching mode. Moreover, with education reform and the demands of epidemic prevention and control in recent years, the demand for online teaching has grown further, raising the requirements for interactive and distance teaching. In the development of digital teaching modes, art and painting courses differ from other courses in their higher requirements for audio and video transmission. To meet the needs of art painting course teaching, this paper builds a “double cloud” teaching auxiliary system based on the characteristics of edge cloud and mobile auxiliary equipment. By deploying different network layers across the cloud and the edge and designing an adaptive code switching mechanism, the system supports video streaming for both classroom and remote teaching, enables interaction and instant feedback between teachers and students, and helps improve the teaching effect of art painting courses. Although some work has been done in this paper, much remains in the design of teaching auxiliary systems, and how to design mobile applications on top of the auxiliary system remains a great challenge.
Data Availability

The labeled datasets used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments

This work was supported by the Xingtai Universities.
References

[1] B. Singh, S. Dhawan, and A. Arora, “A view of cloud computing,” International Journal of Computers & Technology, vol. 4, no. 2b1, pp. 50–58, 2013.
[2] S. Sigdel, A Summary on Above the Clouds: A Berkeley View of Cloud Computing, Kathmandu University, Bagmati, Nepal, 2016.
[3] D. Evans, The Internet of Things: How the Next Evolution of the Internet Is Changing Everything, Scientific Research, Atlanta, GA, 2011.
[4] J. Zhao, “Analysis of edge computing techniques in the internet of things environment,” Science and Education Guide: Electronic Edition, vol. 22, p. 1, 2020.
[5] M. Satyanarayanan, “The emergence of edge computing,” Computer, vol. 50, no. 1, pp. 30–39, 2017.
[6] D. Wang, “Alibaba builds IoT core capabilities with ‘cloud edge end’ collaborative computing,” Communication World, vol. 48, no. 9, 2018.
[7] B. Tang, Z. Chen, and G. Hefferman, “A hierarchical distributed fog computing architecture for big data analysis in smart cities,” in Proceedings of the ASE BigData & Social Informatics, Pisa, Italy, October 2020.
[8] G. Wang, J. Wang, and Y. Sun, “Lightweight plant disease identification model for edge-oriented calculation,” Journal of Zhejiang A&F University, vol. 35, 2020.
[9] L. Dong, Video Surveillance System Based on Edge Computing, University of Electronics, Beijing, China, 2020.
[10] G. E. Chang, G. W. Bai, and H. Shen, “Edge computing based video surveillance framework,” Computer Engineering and Design, vol. 40, 2019.
[11] Y. Ren, Research on Underwater Image Processing Method Based on Edge Computing Platform, Harbin Engineering University, Harbin, China, 2020.
[12] R. Jia, H. Du, and X. Kong, “The design and implementation of the intelligent system for mask wearing based on edge computing,” Modern Electronic Technology, vol. 44, no. 15, p. 5, 2021.
[13] Z. Wang, “Development status and prospects of intelligent edge computing,” ET Journal, vol. 5, 2022.
[14] K. Zhang, W. Huang, X. Hou, J. Xu, R. Su, and H. Xu, “A fault diagnosis and visualization method for high-speed train based on edge and cloud collaboration,” Applied Sciences, vol. 11, no. 3, Article ID 1251, 2021.
[15] C. A. Long, A. Jw, and B. Jz, “Long-term optimization for MEC-enabled HetNets with device–edge–cloud collaboration,” Computer Communications, vol. 166, pp. 66–80, 2021.
[16] J. Chen, Y. Ma, and L. Song, “Embedded memory dynamic troubleshooting data compression design,” Journal of Electronic Measurement and Instrument, vol. 32, no. 7, p. 7, 2020.
[17] X. Geng, “Review of mobile-edge computing techniques,” Shanxi Electronic Technology, vol. 2, 3 pages, 2020.
[18] C. Yang, Z. Yu, and X. Wang, “End-to-end streaming media transmission control technology research review,” Computer Engineering and Application, vol. 41, no. 8, p. 5, 2020.
[19] C. Ge, N. Wang, S. Skillman, G. Foster, and Y. Cao, “QoE-driven DASH video caching and adaptation at 5G mobile edge,” in Proceedings of the ACM ICN IC5G Workshop, New York, NY, USA, September 2016.