A Novel Resource Deployment Approach to Mobile Microlearning: From an Energy-Saving Perspective
Mobile Microlearning, a novel fusion of the mobile Internet, cloud computing, and microlearning, has become increasingly prevalent in recent years. However, its high deployment and operational costs make energy saving in the cloud a pressing issue. In this paper, a resource deployment approach to cloud service provision for Mobile Microlearning is proposed to reduce energy consumption. The Chinese Lexical Analysis System and a Dynamic Term Frequency-Inverse Document Frequency (D-TF-IDF) algorithm are adopted to implement resource classification. Resources are deployed to a 2-tier cloud architecture according to the classification results. The Grey Wolf Optimization (GWO) algorithm is used to forecast real-time energy consumption per byte. The simulation results show that, compared with the traditional algorithm, the classification accuracy of small sample categories is significantly improved; the difference between the forecast energy consumption values and the standard values is 7.67% in the private cloud and 2.93% in the public cloud; and the energy saving reaches 2.22% to 16.23% under 3G and 7.35% to 20.74% under Wi-Fi.
Millions of people are participating in Mobile Microlearning, and the number of students enrolled in a single course at the same time can be as high as tens of thousands. On the one hand, this indicates that Mobile Microlearning is becoming more prevalent. On the other hand, it increases the load and the energy consumption of the cloud platform. The core goal of Mobile Microlearning is to ensure that users (visitors) can receive all kinds of online learning resources provided by the cloud platform without limitations of time, space, or region. To achieve this goal, studying a green cloud resource deployment method is essential.
Introducing cloud computing into microlearning, so that the high computing power and huge storage of the cloud can remove obstacles in the Mobile Microlearning process, is a promising approach. However, current research focuses on deploying systems to existing cloud platforms [2–4], user behaviour collection, framework and data analysis, learning styles, or time management; energy consumption in Mobile Microlearning is rarely considered.
An effective way to address the energy consumption problem is to offload computation- and data-intensive tasks from a resource-poor private cloud to a resource-rich public cloud. Through the collaboration of private cloud and public cloud, low energy consumption can be achieved to a certain extent, but some problems remain. We describe these problems from three aspects: the Mobile Microlearning process in the mobile environment, Mobile Microlearning users, and service migration. In a wireless mobile communication network, private cloud platforms face limitations when accessing public cloud platforms, such as long delay, poor stability, and poor predictability, which give the wireless environment multi-cluster, fluctuating, and other nonstationary characteristics. At the same time, mobile terminal users have rich and individual ways of thinking, so their requests are usually personalized. For example, when the battery is low, some users may want the best application performance, while others are willing to sacrifice some performance for longer standby time. Therefore, the relationship between benefits and costs in Mobile Microlearning should be accurately measured. In addition, during service migration, the private cloud platform and the public cloud platform inevitably exchange and transmit information many times. The device status and network environment may change during the execution of an application, which further increases wireless traffic instability, bandwidth consumption, latency, network congestion, and the risk of cloud downtime.
Given users' dynamic and personalized demands for Mobile Microlearning, it is particularly important to research how to effectively deploy rich and diverse learning resources in the cloud platform so as to provide low-cost, efficient, and continuous cloud services to mobile users. Based on this consideration, we propose a new resource deployment approach to cloud service provision for Mobile Microlearning in this paper.
Firstly, we propose a resource deployment framework for Mobile Microlearning, which consists of a classification module, a 2-tier cloud architecture module, and a GWO forecast module. The classification module and the 2-tier cloud architecture module achieve resource classification and deployment, and the GWO forecast module finds the server with the lowest energy consumption cost, which then provides users with energy-saving services. Secondly, we propose the D-TF-IDF algorithm, which reduces the influence of an uneven training set distribution on the classification accuracy of the test set. Finally, we propose a green cloud resource deployment method for Mobile Microlearning based on the framework, and the simulation results demonstrate its superiority.
The rest of the paper is organized as follows. Related works are introduced in Section 2. Fundamental methods are described, and a resource deployment framework for Mobile Microlearning is proposed, in Section 3. The energy consumption problem in the Mobile Microlearning process is formulated in Section 4. Simulations and numerical analysis are carried out in Section 5. Finally, the conclusion is given in Section 6.
2. Related Works
A microlecture mobile learning system (MMLS) allows learners to access microlecture videos and other high-quality microlecture resources wherever and whenever they like. However, experimental results show that the completion rate of Mobile Microlearning is still low. Analysing the reasons, we find that the choice among massive learning resources is the primary factor influencing the popularization of Mobile Microlearning. Reference  introduces a cloud-based system which organizes learners into a better teamwork context and customizes microlearning resources to meet personal demands in real time. Chen et al. believe that how to organize microlearning units, so that they are easier to use and learn, is a critical issue; as a result, they proposed an approach based on process mining that organizes the learning units according to the situation of users. Kolas et al. added an interactive module to Mobile Microlearning and researched its value. Emanuel et al. researched the effects of country, race, gender, class, and income on the completion rate of Mobile Microlearning. Souza et al. researched the theory, concepts, and models of Mobile Microlearning. Kim et al. discussed the influence of mobile wireless technology on mobile learning. So far, energy consumption in Mobile Microlearning has rarely been discussed by scholars.
Although the low cost and ease of use of mobile cloud computing hold great potential for mobile cloud learning, it still faces a heavy load from large-scale user application calls. Rapid growth of the demand for computational power by scientific, business, and web applications has led to the creation of large-scale data centres consuming enormous amounts of electrical power. Moreno et al. introduced a dynamic resource provisioning mechanism to over-allocate the capacity of real-time cloud data centres based on customer utilization patterns; the main idea is to exploit the resource utilization patterns of each customer to decrease the waste produced by resource request overestimations. The drawback is that it may reduce the QoS experienced by users. Alahmadi et al. proposed a novel approach for scheduling, sharing, and migrating virtual machines (VMs) for a bag of cloud tasks to reduce energy consumption while guaranteeing execution time and high system throughput; however, it does not take into account the additional energy caused by dynamically monitoring cloud systems. Liu et al. proposed an approach based on ant colony optimization (ACO) to solve the VM placement (VMP) problem, named ACO-VMP, so as to use physical resources effectively and reduce the number of running physical servers; it requires a large number of virtual machines. Farahnakian et al. proposed a dynamic virtual machine consolidation algorithm that minimizes the number of active physical servers in a data centre to reduce energy cost; the method uses the k-nearest neighbour regression algorithm to predict resource usage in each host, but it relies heavily on the use of virtual machines. Knauth et al. used simulation to quantify the difference in energy consumption caused exclusively by virtual machine schedulers, which can reduce cumulative machine uptime by up to 60.1%. Zhao et al. presented online VM placement algorithms for increasing cloud provider revenue, but they did not consider the service migration situation. Mashayekhy et al. designed an auction-based online mechanism for VM provisioning, allocation, and pricing in clouds that considers several types of resources.
The above research provides good ideas for cloud resource allocation in the process of Mobile Microlearning. Given the rich Mobile Microlearning resources and highly personalized customer demand, this paper focuses on data processing, data storage, and service migration in the Mobile Microlearning process, and studies the energy consumption of Mobile Microlearning by referring to previous research results. The D-TF-IDF algorithm is used to improve classification accuracy. The GWO algorithm and a 2-tier cloud architecture are used to deploy resources. On a per-service basis, we minimize the cost of the energy consumed during resource deployment.
3. Mobile Microlearning Application Modelling
In this section, we first briefly describe the fundamental methods that we will use in the Mobile Microlearning framework, and then we introduce the cloud-based resource deployment framework for Mobile Microlearning in detail.
3.1. TF-IDF Algorithm
Known as one of the most effective keyword extraction technologies, the Term Frequency-Inverse Document Frequency (TF-IDF) algorithm was proposed by Salton. By multiplying the term frequency (TF) and the inverse document frequency (IDF), the TF-IDF value of a word is obtained. In fact, the TF-IDF value is the weight of a keyword: the larger the TF-IDF value of a word, the more important the word is, as shown in (1). This expression is more precise than the one given in .

$$\mathrm{TFIDF}(w, r) = \mathrm{TF}(w, r) \times \mathrm{IDF}(w) = \frac{n_{w,r}}{N_r} \times \log\frac{M}{M_w}, \tag{1}$$

where $w$ is a word and $r$ is a resource, $n_{w,r}$ is the number of times the word $w$ appears in the resource $r$, $N_r$ is the total number of times that all words appear in this resource, $M$ is the total number of resources in the training set, and $M_w$ is the number of resources that contain the word $w$.
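As an illustration, equation (1) can be computed directly from a bag-of-words representation; the following is a minimal Python sketch (the toy corpus and function name are ours, not the paper's):

```python
import math
from collections import Counter

def tf_idf(word, resource, corpus):
    """TF-IDF weight of `word` in `resource` against a corpus of resources.

    tf  = occurrences of word in resource / total words in resource
    idf = log(total resources / resources containing the word)
    """
    counts = Counter(resource)
    tf = counts[word] / len(resource)
    containing = sum(1 for r in corpus if word in r)
    idf = math.log(len(corpus) / containing)
    return tf * idf

# toy corpus: each resource is a list of (already segmented) words
corpus = [
    ["cloud", "energy", "cloud", "learning"],
    ["mobile", "learning", "video"],
    ["energy", "saving", "cloud"],
]
w = tf_idf("cloud", corpus[0], corpus)
```

Words that occur in many resources get a small IDF and hence a small weight, which is why uninformative words are poor keywords.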
3.2. GWO Algorithm
The Grey Wolf Optimizer (GWO) algorithm is a recent bio-inspired heuristic algorithm motivated by the strict social hierarchy and hunting behaviour of grey wolves. The algorithm has received wide attention since it was proposed [30–34]. Seeking the optimal solution with the GWO algorithm mainly involves three steps: tracking, encircling, and hunting. The process of GWO is shown in Figure 1.
In Figure 1, $t$ represents the current iteration of the algorithm. $\vec{D}(t)$ represents the closeness between the energy consumption value calculated by the GWO algorithm at iteration $t$ and the standard energy consumption value. We assume that the energy consumption of cloud platform $i$ for processing one byte at its best is $e_i$, $i \in \{0, 1\}$: $i = 0$ indicates that the task is executed on the private cloud platform, and $i = 1$ indicates that the task is executed on the public cloud platform. Because $\vec{A}$ and $\vec{C}$ are random coefficient vectors, the GWO algorithm has strong search ability and can search in the global scope.
According to the social hierarchy of grey wolves, the hunting (optimization) is guided by three wolves: $\alpha$, $\beta$, and $\delta$. The remaining $\omega$ wolves update their positions randomly around these three. In the initial state, $\alpha$, $\beta$, $\delta$, and $\omega$ randomly update their positions to approach $\vec{X}_p(t)$, the prey position at iteration $t$; this is the tracking process. During tracking, if a search agent discovers prey, the others approach it quickly. The process repeats until the prey stops moving, which is called encircling. Finally, all grey wolves surround the prey and complete the attack. Throughout the GWO process, $\alpha$, $\beta$, and $\delta$ hold the best solutions obtained so far.
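The tracking-encircling-hunting loop can be sketched in a few lines of Python. This is a minimal sketch of the canonical GWO update (leaders guide the pack, random coefficients A and C provide global search), minimizing a generic cost; the function names, parameters, and toy objective are illustrative, not the paper's implementation:

```python
import random

def gwo(fitness, dim, bounds, n_wolves=10, max_iter=200):
    """Minimal Grey Wolf Optimizer: tracking, encircling, hunting."""
    lo, hi = bounds
    pack = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_wolves)]
    for t in range(max_iter):
        pack.sort(key=fitness)
        leaders = [w[:] for w in pack[:3]]      # alpha, beta, delta snapshot
        a = 2.0 * (1 - t / max_iter)            # a decreases linearly from 2 to 0
        for i in range(n_wolves):
            for d in range(dim):
                x = 0.0
                for leader in leaders:
                    r1, r2 = random.random(), random.random()
                    A = 2 * a * r1 - a          # random encircling coefficient
                    C = 2 * r2
                    D = abs(C * leader[d] - pack[i][d])   # distance to a leader
                    x += leader[d] - A * D      # move guided by this leader
                pack[i][d] = min(hi, max(lo, x / 3))      # average of three guides
    return min(pack, key=fitness)

# toy example: minimize the 2-D sphere function
best = gwo(lambda v: sum(x * x for x in v), dim=2, bounds=(-5.0, 5.0))
```

While `a` is large the wolves explore globally; as `a` shrinks, |A| < 1 forces the pack to converge on the region around the three leaders.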
3.3. Resource Deployment Framework of Mobile Microlearning Based on Cloud
In this paper, the cloud is introduced into the Mobile Microlearning environment, and a cloud-based resource deployment framework is built. In the process of providing cloud services, different resource allocation strategies determine different service performance, while users follow the same request and response processes for cloud services. Therefore, the process by which users request service resources in this paper follows the literature . The framework is shown in Figure 2. Next, we explain in detail how the framework works.
First of all, the processing capability of the cloud data centre is not unlimited, so the system collects the user's microlearning request and temporarily stores it in the database of user requests. Secondly, to remove meaningless words and find the vocabulary set that best represents the features of the user's request, the system uses the Institute of Computing Technology Chinese Lexical Analysis System 2013 (ICTCLAS2013) and the Stop Words table of Harbin Institute of Technology to preprocess user requests. At the same time, a number of frequently used words are selected as the feature words of a specific request to carry out keyword extraction. Thirdly, the system matches the keywords of a new user request against the keyword database in the classification module and calculates the classification accuracy. Next, according to the classification accuracy and the collaboration of the 2-tier cloud architecture module, Mobile Microlearning users can, in the ideal case, find the learning resources. Finally, based on the current state of the equipment and the network environment, we use the GWO algorithm to find the server with the minimum energy consumption, and that server performs the resource deployment process. Through the above steps, users obtain the required services in an energy-saving way.
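The preprocessing and keyword-matching steps above can be sketched as follows. This is an illustrative Python sketch only: whitespace tokenization stands in for ICTCLAS2013 segmentation, and the stop-word list and per-category keyword database are hypothetical stand-ins for the real tables:

```python
STOP_WORDS = {"the", "a", "of", "to", "and"}   # stand-in for the HIT stop-word table

KEYWORD_DB = {   # hypothetical per-category keyword weights from the classifier
    "computer": {"cloud": 0.8, "algorithm": 0.6},
    "sports":   {"match": 0.7, "team": 0.5},
}

def classify_request(request):
    """Preprocess a user request and match it against the keyword database."""
    tokens = [t for t in request.lower().split() if t not in STOP_WORDS]
    scores = {cat: sum(kw.get(t, 0.0) for t in tokens)
              for cat, kw in KEYWORD_DB.items()}
    return max(scores, key=scores.get)          # best-matching category

cat = classify_request("the cloud algorithm of a team")
```

The chosen category then determines which tier of the cloud architecture serves the request.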
4. Problem Formulations
In this section, we first describe the sorting process of a large number of learning resources in Mobile Microlearning and deploy these resources using 2-tier cloud architecture. Then, we study the low energy consumption strategy and formulate the energy consumption problem in the Mobile Microlearning process.
4.1. The Classification Module
In the classification module, two main issues hinder the smooth completion of Mobile Microlearning. Firstly, with the advent of the big-data era, the learning resources of Mobile Microlearning are increasingly huge; it is therefore difficult for users to find the requested learning resources in the massive resource pool. Secondly, given the diversified backgrounds of Mobile Microlearning users, the cloud platform cannot accurately obtain users' real requirements. These issues lead to inconsistency between the services provided by cloud platforms and the content requested by users, which lowers user satisfaction and increases the energy consumption of the Mobile Microlearning process. Therefore, it is very important to extract key content information effectively from the large volume of mobile learning resources and to improve the classification accuracy. The classification module is shown in Figure 3.
Many researchers regard words as independent and meaningful elements of language. Ignoring meaningless symbols, a resource can be seen as a sequence of words. More importantly, keywords are generally considered the words that best embody the main ideas of a text. Therefore, we can use keywords as features for resource classification. Firstly, we use the category homogenizing method to process the training set. Secondly, because of its high segmentation accuracy and analysis speed, we use the ICTCLAS2013 segmentation system to segment the learning resources. Thirdly, we apply the Stop Words table to the segmentation results, mainly to eliminate meaningless words and make the keywords more representative. Fourthly, we count the frequency of each word in each category and establish a keyword database. Finally, we use D-TF-IDF to calculate the weights of the keywords. This process involves two important methods: category homogenization and the D-TF-IDF method. Next, we describe these two methods in detail.
Due to the characteristics of resource distribution, resources of some categories are relatively scarce. As a result, some keywords cannot accurately represent those categories, which affects the accuracy of classification. So the small sample categories are reorganized to form a new training set that is as balanced as possible; this is called the category homogenizing method. For example, suppose the training set is $T = \{C_1, C_2, C_3, C_4\}$ and the testing set is $S$, where $C_1$ and $C_2$ are large sample categories and $C_3$ and $C_4$ are small sample categories. Categories in the original training set are reorganized to form a new training set in which the category distribution is more uniform: the reorganized training set is $T' = \{C_1, C_2, C'\}$, where $C' = C_3 \cup C_4$. Then we perform the resource categorization process on $T'$. For a resource in the testing set $S$, if the classification result indicates that it belongs to one of the original large sample categories, such as $C_1$, we accept the classification result; if the classification result indicates that the resource belongs to the reorganized sample category $C'$, we classify again and decide whether the resource belongs to $C_3$ or $C_4$.
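The category homogenizing step can be sketched as follows; the threshold separating "large" from "small" categories and the merged-category label are illustrative assumptions, not values from the paper:

```python
def homogenize(training_set, threshold=100):
    """Merge small-sample categories into one combined category so that the
    reorganized training set is as balanced as possible."""
    large = {c: docs for c, docs in training_set.items() if len(docs) >= threshold}
    small = {c: docs for c, docs in training_set.items() if len(docs) < threshold}
    merged = [d for docs in small.values() for d in docs]
    if merged:
        large["__merged__"] = merged            # combined small-sample category
    return large, set(small)   # keep small labels for the second-stage classifier

# toy training set: two large and two small categories
training = {"C1": ["d"] * 500, "C2": ["d"] * 300, "C3": ["d"] * 20, "C4": ["d"] * 10}
new_set, small_labels = homogenize(training)
```

A test resource classified into `__merged__` would then be classified a second time among only the small-sample labels.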
The above process achieves training set preprocessing. However, for the resource deployment framework proposed in this paper, we focus on classification accuracy, because it is the primary condition for deploying resources to the 2-tier cloud module and modelling energy consumption. To obtain a classification accuracy that is relatively fair to both large and small sample categories, we propose an improvement of the traditional TF-IDF method that dynamically adjusts the TF and IDF denominators with two constants $m$ and $n$, called D-TF-IDF. The weight of a keyword in the D-TF-IDF algorithm is described in (2); the expression here is also revised compared to .

$$\mathrm{DTFIDF}(w, r) = \frac{n_{w,r}}{N_r + m} \times \log\frac{M}{M_w + n}, \tag{2}$$

where $m$ and $n$ are positive numbers.
Dynamically adjusting the weight amounts to adding constant values to the denominators, which suppresses the role of the denominators. For a small sample category, the denominators are small, so adding the same constants changes the weight far more than in a large sample category. In our worked example, for the same $m$ and $n$, the weight fluctuation of the large sample category is 0.00086, while the weight fluctuation of the small sample category is 0.00792. This shows that, for the same $m$ and $n$, the weight of a small sample category fluctuates over a much wider range than that of a large sample category: the D-TF-IDF method adjusts the weights of all words, but the effect is more pronounced for small samples. Hence, by dynamically adjusting $m$ and $n$, we can find a weight that is relatively fair to both large and small sample categories. For convenience of description, we take $m = n$ below.
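The effect can be checked numerically. The sketch below assumes the D-TF-IDF form in which positive constants `m` and `n` are added to the TF and IDF denominators (with `m = n = 0` recovering plain TF-IDF); all counts are toy values, not the paper's data:

```python
import math

def d_tfidf(count, total, N, df, m=0.0, n=0.0):
    """Assumed D-TF-IDF form: m and n are added to the two denominators."""
    return (count / (total + m)) * math.log(N / (df + n))

N = 1000                                    # resources in the training set
large = dict(count=40, total=5000, df=100)  # keyword in a large-sample category
small = dict(count=4, total=50, df=100)     # same relative frequency, small category

def fluctuation(params):
    """Range of the weight as m sweeps from 0 to 10 (n fixed)."""
    weights = [d_tfidf(N=N, m=m, n=1.0, **params) for m in (0.0, 10.0)]
    return max(weights) - min(weights)

# the small-sample weight moves far more than the large-sample weight
assert fluctuation(small) > fluctuation(large)
```

The small category's denominator (50) is dwarfed by the constant, while the large category's denominator (5000) barely notices it, reproducing the asymmetry described above.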
In summary, the classification accuracy $A_i$ of category $C_i$ is calculated by formula (3):

$$A_i = \frac{k_i}{K_i}, \tag{3}$$

where $K_i$ is the number of keywords extracted from $C_i$ and $k_i$ is the number of those keywords matched correctly.
Finally, we get the average classification accuracy $\bar{A}$ of the entire sample set by formula (4):

$$\bar{A} = \frac{1}{c}\sum_{i=1}^{c} A_i, \tag{4}$$

where $c$ is the number of categories.
4.2. The 2-Tier Cloud Architecture Module
In the 2-tier cloud architecture module, we deploy Mobile Microlearning resources to the public cloud platform or the private cloud platform according to the average classification accuracy produced by the classification module. The main purpose is to ease the limitations of deploying resources on a single platform, increase the possibility of users finding resources on the private cloud platform, and reduce the probability of service migration and the resulting energy consumption. We can thus achieve energy saving through collaboration between the private cloud and the public cloud without affecting QoS.
The schematic diagram of the 2-tier cloud architecture module is shown in Figure 4; it includes a private cloud and a public cloud. The architecture retains the scalability and flexibility of public cloud platforms, as well as the low latency, low power consumption, and fine granularity of private cloud platforms. Meanwhile, it takes into account the fact that the physical capacity of cloud data centres is limited; they are not composed of infinite resources. It therefore deploys resources according to the classification module. The first tier (private cloud) consists of nodes connected to access points. By deploying resources with high classification accuracy to the private cloud platform, users' probability of finding resources there improves, and the possibility of service migration is reduced. Meanwhile, resources with low classification accuracy are deployed to the second tier (public cloud), made up mainly of platforms such as Amazon. The aim is to make full use of the public cloud platform's richer resources and faster computing to discover Mobile Microlearning resources and better meet user requests. Deploying resources in the 2-tier cloud architecture can guarantee the QoS of the service to a certain extent on the premise of energy saving.
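The tier-assignment rule above amounts to splitting resources by their category's classification accuracy; a minimal sketch, in which the 0.8 cut-off and the category names are illustrative assumptions:

```python
def deploy(resources, accuracy, threshold=0.8):
    """Assign each resource to the private or public tier by its category's
    classification accuracy (threshold is an illustrative cut-off)."""
    private = [r for r in resources if accuracy[r] >= threshold]  # tier 1
    public = [r for r in resources if accuracy[r] < threshold]    # tier 2
    return private, public

# hypothetical per-category classification accuracies
acc = {"calculus": 0.95, "poetry": 0.60, "physics": 0.85}
private, public = deploy(list(acc), acc)
```

High-accuracy categories stay on the low-latency private tier, while hard-to-classify ones fall back to the public tier's richer search capacity.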
4.3. The GWO Module
The main function of the GWO module is to take the network environment and equipment capability into account to find the server with the minimum energy consumption; that server then completes the resource deployment process. To make the proposed model better reflect the network environment and device capability, the module requires the cloud platform to randomly generate probe bytes to simulate the user request process. To avoid the extra energy consumption a large probe would cause, we use 1 byte as the probe. Since the energy consumption of 1 byte is too small to measure and transmit on its own, we embed the probe byte into a network packet and tag the packet; the system then only needs to track the packet and its round-trip energy consumption, whose average value is the energy consumption of the probe byte. Next, we use the GWO algorithm to carry out the server optimization process.
We assume the number of requests is $N$ and divide them into $\alpha$, $\beta$, $\delta$, and $\omega$ groups, following the strict hierarchical structure of GWO. Next, we use the GWO algorithm to forecast the energy consumption per byte based on the current network status. This algorithm overcomes the limitation of the fixed energy consumption strategies adopted in current research. In addition, a dynamic energy consumption strategy based on real-time environment and equipment status better reflects the real energy consumption situation.
The energy consumption of the services is generated randomly according to changes in the supply-demand relationship. Due to the instability of the equipment state and network environment in a real cloud platform, the cloud platform may incur higher energy consumption when it completes the mobile learning resource deployment process. So we define a variable $e^*$, the value closest to the optimal per-byte energy consumption $e$; the main purpose of the GWO algorithm is to find $e^*$. For convenience, we define $e$ as the standard value.
We simulate and formulate the hunting process of the grey wolf. The energy consumption of Mobile Microlearning at iteration $t+1$ can be defined as

$$E(t+1) = E_p(t) - \vec{A} \cdot \vec{D}(t), \qquad \vec{D}(t) = \left|\vec{C} \cdot E_p(t) - E(t)\right|, \tag{5}$$

where $\vec{D}(t)$ represents the degree of approximation between the energy consumption calculated by the GWO algorithm and the optimal energy consumption $E_p(t)$ at iteration $t$.
In the optimization process, our goal is to find the server with the least energy cost. Therefore, we need to find the minimum in each iteration:

$$E(t+1) = \frac{E_\alpha + E_\beta + E_\delta}{3}, \tag{6}$$

$$E_j = E_j^{*}(t) - \vec{A}_j \cdot \vec{D}_j(t), \quad j \in \{\alpha, \beta, \delta\}, \tag{7}$$

where $E_\alpha^{*}(t)$, $E_\beta^{*}(t)$, and $E_\delta^{*}(t)$ are the three best energy consumption values found so far. The above process is repeated until the maximum number of iterations is reached or the optimal value is found.
To describe the general characteristics of the optimal value simply and directly, we define $\hat{e}$ as the forecast value; its calculation process is shown in formula (8).
4.4. Energy Consumption Formulation
In earlier research, the execution rate was predetermined, which had limitations because it could not truly reflect the network environment and equipment. To make the research closer to the real operating environment of Mobile Microlearning, the task execution rate in this paper is set according to the classification module.
According to the training set of the classification module, we get the total byte size of the training set, defined in formula (9):

$$B_{\text{total}} = \sum_{r \in T} \mathrm{count}(r), \tag{9}$$

where $\mathrm{count}(\cdot)$ is the counting function, which counts the bytes of the words in a resource.
At the same time, we get the byte size of the keywords extracted from the training set, defined in formula (10):

$$B_{\text{kw}} = \sum_{w \in W} \mathrm{count}(w), \tag{10}$$

where $W$ is the keyword set. Therefore, we get the proportion of keywords in the text, as described in formula (11):

$$p = \frac{B_{\text{kw}}}{B_{\text{total}}}. \tag{11}$$

Based on the above results, we can obtain the execution rate of the system, as shown in formula (12), where $B_{\text{mem}}$ is the total number of bytes in memory to be processed, $P_f$ is the number of processors the system will run in the future, $P_c$ is the number of processors running on the current system, and $t_c$ is the duration of resource classification according to the statistics of the classification module.
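Formulas (9)-(11) amount to byte counting over the training set; the following sketch assumes UTF-8 byte lengths and already-segmented word lists (the toy texts and keyword set are ours):

```python
def keyword_proportion(training_texts, keywords):
    """Proportion p = B_kw / B_total of keyword bytes in the training set."""
    total = sum(len(word.encode("utf-8"))            # B_total, formula (9)
                for text in training_texts for word in text)
    kw = sum(len(word.encode("utf-8"))               # B_kw, formula (10)
             for text in training_texts for word in text if word in keywords)
    return kw / total                                # p, formula (11)

texts = [["cloud", "energy", "x"], ["cloud", "learning"]]
p = keyword_proportion(texts, {"cloud", "learning"})
```

The proportion `p` then feeds the execution-rate estimate of formula (12).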
The process in which the cloud platform provides services to Mobile Microlearning users divides mainly into two situations. The first is the ideal resource deployment process, in which the private cloud can satisfy all requests of all Mobile Microlearning users. The second is the nonideal resource deployment process, in which the private cloud platform needs to migrate the service to the public cloud platform, resulting in additional energy consumption.
The time cost in the ideal situation is shown in formula (13), where $B_{\text{pri}}$ is the total number of bytes provided by the private cloud platform, $S_f$ is the number of servers that will run on the private cloud platform, and $S_c$ is the number of servers running on the private cloud platform in the current state.
Therefore, the ideal learning energy consumption is described by formula (14), where $\hat{e}_{\text{pri}}$ represents the per-byte energy consumption of the best server on the private cloud platform, which can be calculated by formula (8).
It is well known that if a user request goes unanswered for a long time, the user's Mobile Microlearning quality suffers severely. To solve this problem, the private cloud sets a time threshold based on experience. If the private cloud platform is unable to provide the required service to Mobile Microlearning users within the time threshold, the system model proposed in this paper automatically cuts off the private cloud platform's service opportunity and executes the service migration process. The time threshold thus effectively alleviates the high energy consumption caused by requests that consume private resources for too long. The time cost of Mobile Microlearning in the nonideal case is shown in formula (15), where $T_{\max}$ is the time threshold, i.e., the maximum delay for mobile cloud services that the private cloud platform can tolerate; $T_{\text{mig}}$ is the time cost of service migration; $B_{\text{pub}}$ is the byte size that the public cloud platform provides to the user; $v_{\text{pub}}$ is the execution rate of the public cloud; and $P_{\text{pub}}$ is the number of processors that will be opened in the public cloud.
In practice, the bandwidth, network status, and device status (such as CPU load) of the cloud platform may differ, which affects migration performance. Following , we need to consider the impact of VM memory size, memory contamination rate, and network transmission rate on migration performance. Therefore, we get the migration time cost in formula (16), where $B_{\text{mig}}$ represents the number of bytes of user requests migrating from the private cloud platform to the public cloud platform, and $R$ is the memory transmission rate during migration; the remaining quantities are defined in (17) and (18), where $d$ represents the memory contamination rate during migration, $d_{\max}$ is the memory contamination threshold, and $M$ is the size of memory.
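To make the interaction of memory size, contamination (dirty page) rate, and transmission rate concrete, here is a sketch of the standard pre-copy live-migration time estimate. This is an illustrative model under our own assumptions (iterative pre-copy with a dirty-fraction stop threshold, dirty rate below the transmission rate), not the paper's formulas (16)-(18):

```python
def migration_time(mem_bytes, rate, dirty_rate, dirty_threshold=0.05, max_rounds=30):
    """Pre-copy live migration: each round resends the memory dirtied during
    the previous round, until the dirty fraction falls below the threshold;
    the final dirty set is sent during the stop-and-copy pause."""
    remaining = mem_bytes
    total_time = 0.0
    for _ in range(max_rounds):
        t = remaining / rate                 # time to send the current dirty set
        total_time += t
        remaining = dirty_rate * t           # bytes dirtied while sending
        if remaining / mem_bytes < dirty_threshold:
            break
    return total_time + remaining / rate     # stop-and-copy of the last dirty set

# 4 GiB VM, 1 GiB/s link, pages dirtied at 0.2 GiB/s
t = migration_time(mem_bytes=4 * 2**30, rate=1 * 2**30, dirty_rate=0.2 * 2**30)
```

A higher contamination rate lengthens every round, which is why the contamination threshold is needed to force the migration to terminate.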
For resources stored in private or public clouds, users can obtain the required services through collaboration between the two platforms. Therefore, on the premise of guaranteeing the service quality of Mobile Microlearning, the energy consumption of Mobile Microlearning process is shown in formula (20).
5. Simulation and Numerical Results
To verify the validity of the method, we evaluate the performance of the proposed method from three aspects: the classification accuracy of D-TF-IDF, the forecast accuracy of the GWO algorithm, and the energy consumption of the Mobile Microlearning process. The experimental parameters and results are as follows.
The experimental data set is the corpus of Fudan University , which divides 19,637 texts into 20 categories. The training set includes 9,804 texts and the testing set includes 9,833 texts; the largest category contains 1,600 texts and the smallest contains 25. Previous research shows that the distribution of this corpus is consistent with the uneven distribution of resource types in real environments .
We conduct word segmentation and word frequency statistics on the training set to build the word frequency table. After preprocessing the data set, we obtain 20,185 words. Of these, the 20 words with the highest frequency are extracted as keywords for the text. The weight of each keyword in the 20 categories is calculated by TF-IDF. Parts of the results are shown in Tables 1 and 2.
Tables 1 and 2 show that the keyword weights of small sample categories are mostly 0, while the keyword weights of large sample categories lack good distinguishing features. For example, the keyword "OF" appears in the large sample categories C11, C19, C31, C32, and C34. Although the probability of the word appearing in C19 is higher, we cannot conclude that a text belongs to C19: "OF" is not a content word and has no substantive significance for any category, so it should not be a keyword. The same situation exists in the small sample classes. Therefore, finding representative keywords is essential for high classification accuracy.
To make the classification accuracy of small samples and large samples relatively fair, $m$ and $n$ vary between 0 and 10, with an interval of 0.2. The variation of the classification accuracy of the different samples with $m$ and $n$ is shown in Tables 3 and 4.
As can be seen from Table 3, the classification accuracy of large sample categories generally increases with $m$ and $n$; for example, the classification accuracy of C3 increases from 0 to 95%. Table 4 shows that the classification accuracy of some small sample categories decreases; for example, the classification accuracy of C6 drops from 80% to 0. It is worth noting that, as $m$ and $n$ increase, the classification accuracy also falls for some large sample categories, e.g., C7, C11, and C32. Therefore, we have to find values of $m$ and $n$ that lead to fair and optimal classification accuracy for both the large and the small sample categories.
To better display the effect of $m$ and $n$ on the classification accuracy of the various samples, Figure 5 shows the results of TF-IDF and D-TF-IDF.
Figure 5 shows that, except for C19 and C34, the classification accuracy of the D-TF-IDF algorithm is higher than that of the TF-IDF algorithm. For example, the classification accuracy of C32 increased from 47.5% to 83.9%, and that of C35 increased from 1.9% to 26.9%. For C19 and C34, the classification accuracy of the D-TF-IDF algorithm decreased by 1.03% and 9.07%, respectively. For C4 and C5, the classification accuracy was not improved because of the small sample size. On the whole, however, the gains in classification accuracy under the D-TF-IDF algorithm far outweigh the losses, which shows that the D-TF-IDF algorithm has more advantages in sample classification.
From Figures 6 and 7, we can see that classification accuracy varies with the number of samples. In Figure 6, for the TF-IDF algorithm, the minimum and maximum classification accuracies are 38% and 94.3%, respectively; for the D-TF-IDF algorithm, they are 70.5% and 99.2%. In Figure 7, for the TF-IDF algorithm, they are 0 and 11.3%; for the D-TF-IDF algorithm, they are 0 and 40.7%. Based on these experimental results, we find that although the classification accuracy of large sample categories decreases slightly, the classification accuracy of small sample categories improves significantly. Therefore, compared with the TF-IDF algorithm, the D-TF-IDF algorithm proposed in this paper is better at improving overall classification accuracy.
In order to bring the experiment closer to a real dynamic cloud environment, we use the GWO algorithm to forecast the energy consumption in the current environment. Because the energy consumption of a single byte is too small to measure, we forecast the energy consumption of 1 KB of data instead. The GWO parameters were fixed across all runs; the optimization was performed 20 times independently, with 1000 iterations per run.
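A minimal sketch of how GWO can search for a per-KB energy estimate is shown below. The fitness function, the measurement samples, and all parameter values are illustrative assumptions, not the paper's actual settings; the variant here also keeps the three leader wolves in place between iterations, which standard GWO does not require.

```python
import random

random.seed(1)  # deterministic runs for this sketch

def gwo(fitness, dim, n_wolves=20, iters=200, lb=0.0, ub=1.0):
    """Minimal Grey Wolf Optimizer (minimization). The three best wolves
    (alpha, beta, delta) guide the rest of the pack toward the optimum."""
    wolves = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(n_wolves)]
    for t in range(iters):
        wolves.sort(key=fitness)
        alpha, beta, delta = wolves[0], wolves[1], wolves[2]
        a = 2.0 - 2.0 * t / iters            # exploration factor decays from 2 to 0
        for i in range(3, n_wolves):         # keep the leaders, move the rest
            new_pos = []
            for d in range(dim):
                candidates = []
                for leader in (alpha, beta, delta):
                    r1, r2 = random.random(), random.random()
                    A = 2.0 * a * r1 - a
                    C = 2.0 * r2
                    D = abs(C * leader[d] - wolves[i][d])
                    candidates.append(leader[d] - A * D)
                new_pos.append(min(ub, max(lb, sum(candidates) / 3.0)))
            wolves[i] = new_pos
    wolves.sort(key=fitness)
    return wolves[0]

# Toy use: estimate a per-KB energy value x that best matches noisy
# measurements (hypothetical J/KB samples, not the paper's data).
samples = [0.051, 0.049, 0.052, 0.048]
best = gwo(lambda x: sum((x[0] - s) ** 2 for s in samples), dim=1)
# best[0] converges toward the sample mean (0.05)
```

In the paper's setting, the fitness would instead score a candidate per-KB energy value against measurements collected from the current network environment.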
Figures 8 and 9 show the relationship between the forecast value and the standard value in the 3G environment. The difference between the forecast energy consumption and the standard value is small: 7.67% in the private cloud and 2.93% in the public cloud. Therefore, the forecast accuracy of GWO is 92.33% on the private cloud platform and 97.07% on the public cloud.
Figures 10 and 11 show the relationship between the forecast value and the standard value in the Wi-Fi environment. From Figures 10 and 11, we can see that the difference between the forecast energy consumption and the standard value is 3.16% and 6.40%, respectively. Therefore, the forecast accuracy of GWO is 96.84% on the private cloud platform and 93.60% on the public cloud.
At the same time, across the 20 experiments, the variances of the forecast values in the four cases are 9.70515×10⁻⁷, 3.06072×10⁻⁶, 1.11715×10⁻⁷, and 1.29163×10⁻⁶, respectively, which demonstrates the stability of the forecasts. Therefore, we believe that these forecast values reliably reflect energy consumption in Mobile Microlearning.
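The stability claim amounts to computing the variance of the forecasts across the repeated runs. The sketch below uses hypothetical forecast values with a spread of about ±0.001 J/KB, chosen only so the resulting variance lands in the same order of magnitude as the values quoted above.

```python
def pvariance(xs):
    """Population variance of the forecast values from repeated runs."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Hypothetical per-KB forecasts from 20 independent runs, spread +/- 0.001
forecasts = [0.050 + 0.001 * (((i * 2) % 5) - 2) / 2 for i in range(20)]
v = pvariance(forecasts)
# a variance on the order of 1e-7 indicates stable forecasts across runs
```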
For the energy consumption of Mobile Microlearning resource deployment, the experimental results are mainly compared with the traditional algorithm proposed by Mohamed et al. To make the comparison fair, the number of requests and the file sizes are consistent with that work: the number of requests is 10000, and the file size ranges between 500 KB and 5 GB. The migration parameters are fixed across all runs. To make the results convincing, we conducted 20 independent experiments. The experimental results are shown below.
As shown in Figure 12, the energy consumption of the proposed method is less than that of the traditional algorithm. The maximum and minimum energy consumption of the proposed algorithm are 155504 J and 138015 J, respectively, while those of the traditional method proposed by Mohamed et al. are 164755 J and 159032 J. As a result, the energy saving ranges from 2.22% to 16.23%, which demonstrates the superiority of the proposed method in the 3G environment.
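The reported savings follow directly from the relative difference between the traditional and proposed energy values; for example, using the 3G figures quoted above:

```python
def saving_percent(traditional_j, proposed_j):
    """Relative energy saving of the proposed method over the traditional one."""
    return (traditional_j - proposed_j) / traditional_j * 100.0

# Figures reported for the 3G experiments (in joules)
best_case = saving_percent(164755, 138015)   # maxima compared: ~16.23%
worst_case = saving_percent(159032, 155504)  # minima compared: ~2.22%
```

The same computation applied to the Wi-Fi figures yields the 7.35% to 20.74% range reported below.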
As shown in Figure 13, the energy consumption of the proposed method is again less than that of the traditional algorithm. The maximum and minimum energy consumption of the proposed algorithm are 74412 J and 67897 J, respectively, while those of the traditional method proposed by Mohamed et al. are 85639 J and 79496 J. The energy saving thus ranges from 7.35% to 20.74%, which demonstrates the superiority of the proposed method in the Wi-Fi environment.
In order to better verify the influence of the network environment on Mobile Microlearning, we compared the energy consumption of the proposed algorithm in the 3G and Wi-Fi environments. The experimental results are shown in Figure 14: the energy consumption in Wi-Fi is far less than that in 3G. The maximum and minimum energy consumption of the proposed algorithm in the 3G environment are 155504 J and 138015 J, respectively; in Wi-Fi, they are 74412 J and 65931 J. This suggests that users should learn in a Wi-Fi environment whenever possible, since it saves 49.97% of the energy consumed in 3G.
Figure 15 compares the energy consumption of the proposed algorithm with that of the traditional algorithm. The proposed algorithm is superior because its energy consumption is lower in both the Wi-Fi and 3G environments: it saves 16.09% of energy in Wi-Fi and 10.54% in 3G.
Based on the green cloud service provisioning framework, in this paper we focus on the high energy consumption caused by resource deployment in Mobile Microlearning environments. First, the Chinese Lexical Analysis System and the TF-IDF method from text mining are applied to keyword mining and the statistics of user requests in Mobile Microlearning.
Meanwhile, in order to find the optimal classification accuracy for both the large sample category (popular resources) and the small sample category (less popular resources), we use the category homogenization method and the D-TF-IDF method, respectively. Second, a 2-tier cloud architecture is adopted to deploy resources according to the results of the classification module. Third, the GWO module forecasts the energy consumption of each byte in the current network environment and finds the server with the lowest energy cost. For each new user request, the classification module strives for the highest classification accuracy. The 2-tier cloud architecture, through the collaboration between the private cloud and the public cloud, mitigates the drawbacks of overloading a single cloud platform; it also increases the probability that users find resources in the private cloud and reduces the energy consumption caused by service migration. The experimental results show that the proposed method can improve classification accuracy and achieve the goal of saving energy. However, the algorithm can only make a rough estimation of resource classification and energy consumption. Therefore, in future work, the important modules involved in the Mobile Microlearning process need to be refined further to achieve overall energy saving throughout the process.
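The 2-tier deployment decision described above can be sketched as follows: look up the classified category in the private cloud first, fall back to the public cloud, and within a tier choose the server with the lowest forecast energy per byte. All names and data structures here are illustrative, not the system's actual API.

```python
def deploy(resource_category, private_index, public_index, energy_per_byte):
    """Route a classified request through the 2-tier cloud: try the private
    cloud first, then the public cloud, picking the candidate server with
    the lowest forecast energy per byte (illustrative sketch only)."""
    tiers = [("private", private_index), ("public", public_index)]
    for tier_name, index in tiers:
        servers = index.get(resource_category, [])
        if servers:
            cheapest = min(servers, key=lambda s: energy_per_byte[s])
            return tier_name, cheapest
    return None  # resource not deployed to either tier yet

# Hypothetical category-to-server indexes and GWO energy forecasts (J/KB)
private = {"C3": ["p1", "p2"]}
public = {"C3": ["q1"], "C6": ["q2"]}
epb = {"p1": 0.051, "p2": 0.048, "q1": 0.029, "q2": 0.031}

print(deploy("C3", private, public, epb))   # found in the private cloud
print(deploy("C6", private, public, epb))   # falls back to the public cloud
```

Serving "C3" from the private cloud even though a public server forecasts lower energy per byte reflects the design goal above: hits in the private cloud avoid the migration cost of going out to the public cloud.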
Data Availability
The corpus data used to support the findings of this study have been deposited in the Fudan University repository (http://nlp.fudan.edu.cn/).
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This work was supported in part by the National Natural Science Foundation of China (NSFC) under Grants no. 61602155, no. U1604155, no. 61871430, and no. 61370221, in part by the Henan Science and Technology Innovation Project under Grant no. 174100510010, and in part by the Industry University Research Project of Henan Province under Grant no. 172107000005.
References
N. Mäkitalo, T. Aaltonen, and T. Mikkonen, “Coordinating proactive social devices in a mobile cloud: lessons learned and a way forward,” in Proceedings of the IEEE/ACM International Conference on Mobile Software Engineering and Systems (MobileSoft 2016), pp. 179–188, USA, May 2016.
G. Sun, J. Shen, J. Luo, and J. Yong, “Evaluations of heuristic algorithms for teamwork-enhanced task allocation in mobile cloud-based learning,” in Proceedings of the 2013 IEEE 17th International Conference on Computer Supported Cooperative Work in Design (CSCWD 2013), pp. 299–304, Canada, June 2013.
M. F. Kamaruzaman and I. H. Zainol, “Behavior response among secondary school students development towards mobile learning application,” in Proceedings of the 2012 IEEE Colloquium on Humanities, Science and Engineering Research (CHUSER 2012), pp. 589–592, Malaysia, December 2012.
V. Tam, E. Y. Lam, and S. T. Fung, “Toward a complete e-learning system framework for semantic analysis, concept clustering and learning path optimization,” in Proceedings of the 12th IEEE International Conference on Advanced Learning Technologies (ICALT 2012), pp. 592–596, Italy, July 2012.
H. Khalil and M. Ebner, “MOOCs completion rates and possible methods to improve retention - a literature review,” in Proceedings of the World Conference on Educational Multimedia, Hypermedia and Telecommunications, pp. 1236–1244, 2014.
I. Nawrot and A. Doucet, “Building engagement for MOOC students: introducing support for time management on online learning platforms,” in Proceedings of the 23rd International Conference on World Wide Web (WWW 2014), pp. 1077–1082, Republic of Korea, April 2014.
G. Sun, T. Cui, S. Chen, W. Guo, and J. Shen, “MLaaS: a cloud system for mobile micro learning in MOOC,” in Proceedings of the 3rd IEEE International Conference on Mobile Services (MS 2015), pp. 120–127, USA, July 2015.
J. Chen, Y. Zhang, J. Sun, Y. Chen, F. Lin, and Q. Jin, “Personalized micro-learning support based on process mining,” in Proceedings of the 7th International Conference on Information Technology in Medicine and Education (ITME 2015), pp. 511–515, China, November 2015.
S. H. Kim, C. Mims, and K. P. Holmes, “An Introduction to Current Trends and,” AACE Journal, vol. 14, no. 1, pp. 77–100, 2006.
I. S. Moreno and J. Xu, “Customer-aware resource overallocation to improve energy efficiency in realtime Cloud Computing data centers,” in Proceedings of the 2011 IEEE International Conference on Service-Oriented Computing and Applications (SOCA 2011), USA, December 2011.
A. Alahmadi, A. Alnowiser, M. M. Zhu, D. Che, and P. Ghodous, “Enhanced first-fit decreasing algorithm for energy-aware job scheduling in cloud,” in Proceedings of the 2014 International Conference on Computational Science and Computational Intelligence (CSCI 2014), pp. 69–74, USA, March 2014.
X. F. Liu, Z. H. Zhan, K. J. Du, and W. N. Chen, “Energy aware virtual machine placement scheduling in cloud computing based on ant colony optimization approach,” in Proceedings of the 2014 Annual Conference on Genetic and Evolutionary Computation, pp. 41–48, 2014.
F. Farahnakian, T. Pahikkala, P. Liljeberg, and J. Plosila, “Energy aware consolidation algorithm based on K-nearest neighbor regression for cloud data centers,” in Proceedings of the 2013 IEEE/ACM 6th International Conference on Utility and Cloud Computing (UCC 2013), pp. 256–259, Germany, December 2013.
T. Knauth and C. Fetzer, “Energy-aware scheduling for infrastructure clouds,” in Proceedings of the 2012 4th IEEE International Conference on Cloud Computing Technology and Science (CloudCom 2012), pp. 58–65, Taiwan, December 2012.
L. Yang, R. J. Zheng, J. L. Zhu, M. C. Zhang, and Q. T. Wu, “A green cloud service provisioning method for mobile micro-learning,” International Journal of New Technology and Research, vol. 4, no. 4, pp. 63–69, 2018.
X. Xu, W. Wang, T. Wu, W. Dou, and S. Yu, “A virtual machine scheduling method for trade-offs between energy and performance in cloud environment,” in Proceedings of the 2016 International Conference on Advanced Cloud and Big Data, pp. 246–251, 2016.
N. Balasubramanian, A. Balasubramanian, and A. Venkataramani, “Energy consumption in mobile phones: a measurement study and implications for network applications,” in Proceedings of the 9th ACM SIGCOMM Conference on Internet Measurement (IMC '09), pp. 280–293, Chicago, Ill, USA, November 2009.
M. A. Mohamed and I. Y. Abdel-Baset, “Proposed model of smartphone energy-aware for mobile learning,” International Journal of Scientific & Engineering Research, vol. 8, pp. 526–535, 2017.