Nowadays, mobile cloud computing (MCC) has emerged as a new paradigm that enables offloading computation-intensive, resource-consuming tasks to a powerful computing platform in the cloud, leaving only simple jobs to capacity-limited thin client devices such as smartphones, tablets, Apple’s iWatch, and Google Glass. However, MCC still faces many challenges due to the inherent problems of thin clients, especially slow processing and poor network connectivity. A number of research studies have tried to eliminate these problems, yet few have proven efficient. In this paper, we present an enhanced architecture that takes advantage of the collaboration between thin clients and conventional desktop or laptop computers, known as thick clients, aiming particularly at improving cloud access. Additionally, we introduce a novel genetic approach for task scheduling that minimizes processing time while considering network contention and cloud cost. Our simulation shows that the proposed approach is more cost-effective and achieves better performance than existing ones.

1. Introduction

During the last decade, we have witnessed an explosion of smart devices. According to Gartner Inc., mobile devices are becoming the dominant platform, with approximately 2.4 billion cellular phones and tablets shipped in 2013, outstripping PC sales in the same period, which amounted to fewer than 315 million shipments [1]. At the same time, user demand for mobile applications and services keeps increasing and may require processing capacities beyond what even the most powerful smartphones can offer [2, 3]. Meanwhile, cloud computing (CC) [4, 5], which has gained more and more popularity in recent years and is widely known as the next generation’s computing infrastructure with virtually unlimited resource and service provision, offers a considerable complement to mobile thin clients such as smartphones, tablets, or even Google’s cutting-edge invention, Google Glass [6], and Apple’s iWatch [7], by expanding their power and extending the capability of mobile computing. This integration results in a new paradigm named mobile cloud computing, which allows mobile devices to run computation-intensive applications and provides end users with a rich computing experience [8]. However, it is still vulnerable to the inherent obstacles of thin devices: unstable accessibility, poor network connectivity, and especially weak processing capacity. These may degrade Quality of Service (QoS), particularly the performance of the system, and prevent mobile devices from satisfying the increasingly sophisticated applications demanded by users.

Recently, many researchers have investigated ways to overcome these weaknesses. In [9], for example, Siegemund et al. discuss how smart objects (e.g., smartphones) can leverage resources and computing capabilities from nearby nodes to extend their own processing ability. Similarly, in [10], a guideline to create a virtual MCC provider is proposed, although limited capacity and bandwidth remain a hurdle [11, 12]. This framework takes advantage of nearby thin clients to create on-the-fly connections, thus avoiding the need to connect directly to infrastructure-based clouds. The downside of these efforts, however, lies in the thin clients’ capacity restrictions and the low bandwidth between them and the cloud; therefore, they are not sufficient to eliminate the problems discussed above. On the other hand, thick clients, including powerful smartphones (multicore CPU, large memory, LTE-enabled), usually come with better hardware and network connectivity. Furthermore, many recent efforts have aimed to reduce the cost of using cloud services; for example, it would be very costly to process a workflow using solely cloud resources, for which users are charged based on the number of virtual machine (VM) instances and hours of usage [13–15]. Thus, it is understandably suggested that thin clients be coupled with thick clients to achieve desirable cloud access, as stated in [8], as well as to optimize the cloud cost.

However, task scheduling remains one of the main issues in [8], which may negatively affect performance, QoS, and user experience. Consequently, the crucial technical contribution of this paper, based on the idea in [8], is an enhanced solution with a novel genetic algorithm that exploits the collaboration between thin-thick clients and the cloud network to optimize task scheduling of the processing system, thereby improving QoS, user experience, and the reliability of the system. In particular, our proposal takes into account not only network contention but also the cost charged to cloud customers (CCs), since these two factors play important roles in satisfying users’ expectations [16]. Moreover, the approach is experimentally evaluated and compared with existing ones. The results show that our method guarantees task scheduling efficiency and has better cost-effectiveness than other approaches.

The remainder of this paper is structured as follows. Section 2 presents related work that has partly solved the discussed problems. Section 3 gives a motivation scenario in which our proposed model can apply thin-thick client collaboration to task scheduling. The system architecture is presented in Section 4. Section 5 details the problem formulation. Section 6 describes the implementation and performance evaluation. The last section concludes the paper and suggests future work.

2. Related Work

There have been numerous studies attempting to solve task scheduling problems, which are considered a variant of the quadratic assignment problem (QAP) [27, 28]. In [29–32], the authors propose task scheduling approaches for assigning processors to task graph templates prepared in advance. Gupta et al. in [26] suggest a new fault-tolerant scheduling algorithm, MaxRe, which incorporates reliability analysis into the active replication schema and exploits a dynamic number of replicas for different tasks. The limitation of these methods is that they do not consider network contention. Sinnen and Sousa [19] present an efficient task scheduling method based on network contention, although this study does not look attentively at the monetary cost paid by cloud customers for use of cloud resources.

In the heterogeneous CC environment, despite numerous efforts, task scheduling remains one of the most challenging problems [33]. The authors in [22] introduce a cost-efficient approach to select the most suitable system (private or public cloud) to execute a workflow; the selection also depends on the possibility of meeting each workflow’s deadline as well as the cost savings. Zeng et al. in [14] propose a budget-conscious scheduling algorithm with a Comparative Advantage (CA) function to satisfy strict budget constraints. In its first phase, each task is assigned to the best VM, whose CA1 value is maximal, to get a worthy tradeoff between cost saving and efficiency. In the second phase, the budget constraint is used in the CA2 function to evaluate the improvement in cost and performance of reassigning a task to another VM. However, CA is hard to apply to large-scale workflows. Meanwhile, Li et al. in [13] present a scheduling algorithm for large graph processing applications that considers both cost and schedule length; the cloud cost is used as a weight to calculate the earliest finish time (EFT) of each task. Senthil Kumar and Balasubramanie in [21] schedule tasks to the available fault-free resources according to historical values of the system to improve performance and thus enhance reliability; that system also considers the minimum cost for CCs. However, global optimality between cost and schedule length is not properly addressed in these approaches.

Recently, some GAs have been successfully applied to the global optimization problem in task scheduling. The authors in [25] propose approaches using genetic processes to find multiple solutions faster and ensure globally optimal usage of the processing system. Gupta et al. [26] developed a method based on a genetic algorithm (GA) to find optimal schedules, which was shown to be efficient at discovering optimal solutions, focusing on the quality of the solution and the effect of mutation probability on the performance of the GA. Nonetheless, in these proposals, monetary cost is not considered. Omara and Arafa in [34] present a new scheduling algorithm based on a fitness-adaptive, job-spanning-time adaptive genetic algorithm to enhance the overall performance of the cloud computing environment while considering communication cost and computation cost; however, the tradeoff between cost and schedule length is not addressed.

For ease of understanding, we present an overview of common scheduling approaches along with ours in Table 1.

As shown in Table 1, a desirable scheduling approach should consider all three factors, namely, execution time, network contention, and cloud cost, in the combination of thin-thick clients and clouds. Thus, in this paper, we aim to provide a scheduling scheme that takes all three factors into account. The main contribution of this work is the development of a genetic algorithm that determines the globally optimal schedule in the combination of thin-thick clients and clouds so as to improve QoS and system reliability.

3. Motivation Scenario

The scenario illustrated in Figure 1 exemplifies how a mobile cloud collaborating with thick clients can optimize task scheduling.

An HR manager is attending a meeting on the user behavior statistics of his/her company. In order to gain insight into the information in the report, the manager uses the Google Glass (thin client) he/she is wearing to take pictures of the presentation slides and sends them to the cloud for extraction and analysis of user activity patterns. The obtained data are used to derive valuable information such as site traffic, Internet access time, favorite subjects, and health insurance, on an hourly, daily, weekly, or even yearly basis. These patterns can be extremely useful for certain types of prediction or recommender systems.

Though the glass is Internet-enabled and can conveniently use an online search engine, its bandwidth is fairly limited, while the photos taken are numerous and large. Fortunately, the glass is preconfigured to connect to the company’s thick client computers, which offer a more powerful CPU, larger memory, and especially a high-speed Internet connection. It clearly makes sense, then, to let the glass transfer those photos to the thick clients and have them uploaded to cloud services over this fast connection. Once the pictures are uploaded to the cloud, they are processed and the outputs are used to look up related information; in particular, the cloud service handles the search for related user logs. It is noteworthy that the records of user behavior statistics, stored as logs on the cloud, are collected daily and are thus very large (up to several hundred GBs), making it operationally and even financially expensive to look up specific data and process them. To speed up the process, the log is divided into smaller sets of user logs, which are then processed in parallel by both a network made up of the company’s powerful thick clients (laptops, desktops, etc.) and cloud virtual machines. This not only significantly increases the processing pace but also reduces the cost of using cloud services, since relying only on cloud resources (VMs) would be costly. Eventually, the processed result is delivered to a thick client, which works as a broker, before being sent to the thin client. Thus, the manager obtains the results in a shorter time than he/she would have spent using only the Google Glass.

The above scenario is one potential example in which our proposed model can be applied, utilizing the joint work between thin clients and thick clients for parallel processing in a typical mobile cloud computing environment. Such collaboration promisingly increases the opportunity to use resources efficiently: the thin client takes advantage of multiple thick clients’ relay to enhance data distribution from cloud networks, consequently enhancing its computing capabilities. With that in mind, and with CC platforms ready and emerging on the market, this paper aims at architecting a network design based on thin-thick client collaboration that addresses the following issue: globally minimizing the schedule length in the collaboration of mobile cloud computing and thick clients while considering network contention and cloud cost.

4. System Architecture

The following section gives insight into our system architecture, which addresses the issues discussed above.

Our architecture has two layers, as illustrated in Figure 2: (1) the cloud provider layer, which contains virtual machines (VMs), and (2) the cloud customer layer, where thin clients and thick clients reside. In the second layer, a thick client functions as a centralized management node, also known as a broker, which (1) receives all computation requests of users, (2) manages processors’ profiles (processing capacity, network bandwidth) as well as computation costs together with the results of data queries returned from processors, and (3) accordingly creates the most reasonable schedule for an input workflow. In particular, the broker sends data to the cloud over a single connection, but when VMs send data to the cloud customer layer, the data are divided into parts of different sizes before being delivered to the thick clients over multiple connections, according to previous research [8]. Moreover, the system has to satisfy the following requirements:

(i) The network needs to support high-bandwidth, low-latency connections between thick clients and clouds so that data can be transferred at a fast rate.

(ii) STUN information and a communication library can be shared via P2P between thick clients and thin clients or vice versa.

(iii) Thick clients should store a copy of the persistent data of the cloud and keep it loosely synchronized.
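To illustrate the multiconnection delivery described above, the broker might split a payload among thick clients in proportion to their link bandwidths so that all parts finish transferring at roughly the same time. This is a minimal sketch; the function name `split_by_bandwidth` and the proportional policy are our own illustrative assumptions, not the exact mechanism specified in [8].

```python
def split_by_bandwidth(total_size, bandwidths):
    """Divide a payload of total_size data units among thick clients
    in proportion to their downlink bandwidths, so that the parts
    complete their transfers at approximately the same time."""
    total_bw = sum(bandwidths.values())
    return {client: total_size * bw / total_bw
            for client, bw in bandwidths.items()}
```

For example, with two thick clients whose bandwidths are 30 and 10 Mbps, a 100 MB payload would be split into 75 MB and 25 MB parts.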

In the next section, we formulate the problem and describe our proposed approach.

5. Problem Formulation and Solution

Task scheduling [35] on a target system with a network topology is defined as the problem of allocating the tasks of an application to a set of processors with diverse processing capabilities in order to minimize the total execution time. Thus, the input of task scheduling includes a task graph and a processor graph. The output is a schedule representing the assignment of a processor to each task node.

5.1. Problem Formulation

In this section, we first define the terms used in this paper; the notations are listed in the Notations section. Then, we explain how to formulate the problem. Finally, a genetic method for task scheduling is presented to solve the problem.

Definition 1. A task graph (e.g., as in Figure 3) is represented by a Directed Acyclic Graph (DAG), G = (V, E, w, c), where the set of vertices V represents the set of parallel subtasks and a directed edge e_ij ∈ E describes the communication between subtasks n_i and n_j; w(n_i), associated with task n_i, represents its computation time, and c(e_ij) represents the communication time between task n_i and task n_j with corresponding transferred data d_ij. One presumes that a task without any predecessors, pred(n_i) = ∅, is an entry task n_entry, and a task that does not have any successors, succ(n_i) = ∅, is an end task n_end. Each task n_i has a workload W_i, which delimits the amount of work processed with the computing resources, as well as a set of preceding subtasks pred(n_i) and a set of successive subtasks succ(n_i); ts(n_i, p) denotes the start time, and w(n_i, p) the execution time, of task n_i on processor p. Hence, the finish time of that task is given by tf(n_i, p) = ts(n_i, p) + w(n_i, p).
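A minimal encoding of such a task graph, with the predecessor/successor bookkeeping of Definition 1, might look as follows. This is a sketch in Python; the class name `TaskGraph` and its methods are our own illustrative choices, not part of the paper's implementation.

```python
class TaskGraph:
    """Directed acyclic task graph: tasks carry computation times,
    and directed edges (i, j) carry transferred-data sizes."""

    def __init__(self):
        self.comp = {}   # task -> computation time w(n_i)
        self.data = {}   # (i, j) -> transferred data d_ij
        self.pred = {}   # task -> set of preceding tasks
        self.succ = {}   # task -> set of successive tasks

    def add_task(self, name, w):
        self.comp[name] = w
        self.pred.setdefault(name, set())
        self.succ.setdefault(name, set())

    def add_edge(self, i, j, d):
        self.data[(i, j)] = d
        self.succ[i].add(j)
        self.pred[j].add(i)

    def entry_tasks(self):
        # tasks with no predecessors
        return {n for n in self.comp if not self.pred[n]}

    def end_tasks(self):
        # tasks with no successors
        return {n for n in self.comp if not self.succ[n]}
```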

Suppose that the following conditions are satisfied.

Condition 1. A task cannot begin its execution until all of its inputs have been gathered sufficiently. Each task appears only once in the schedule.

Condition 2. The ready time t_ready(n_j, p) is the time at which processor p completes its last assigned task and is ready to execute task n_j. Therefore,

t_ready(n_j, p) = max_{n_i ∈ exec(p) ∪ pred(n_j)} tf(n_i), (1)

where exec(p) is the set of tasks executed at processor p and pred(n_j) is the set of preceding tasks of n_j.

Condition 3. Let [A, B] be an idle time interval on processor p in which no task is executed. A free task n can be scheduled on processor p within [A, B] if

max{A, t_ready(n, p)} + w(n, p) ≤ B. (2)
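Condition 3 reduces to a one-line feasibility check. The sketch below assumes the standard insertion-scheduling form of the inequality (the task starts at the later of the interval start and its ready time, and must finish before the interval ends):

```python
def fits_idle_interval(a, b, ready, exec_time):
    """A free task with the given execution time fits into the idle
    interval [a, b] on a processor iff it can start no earlier than
    max(a, ready) and still finish by b."""
    return max(a, ready) + exec_time <= b
```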

Definition 2. A processor graph TG = (P, L), demonstrated in Figure 4, is a graph describing the topology of a network between vertices (processors) that are cloud virtual machines (VMs), thick clients, or thin clients. In this model, P is the finite set of vertices, and a directed edge l_ij ∈ L denotes a directed link from vertex p_i to vertex p_j, with p_i, p_j ∈ P. Each processor controls its processing rate and the bandwidth of the links connecting it to other processors.
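A processor graph with per-processor rates and per-link bandwidths might be encoded as follows. Again a sketch; the class name `ProcessorGraph` and its method names are illustrative assumptions.

```python
class ProcessorGraph:
    """Network topology: processors with processing rates and
    directed links with bandwidths."""

    def __init__(self):
        self.rate = {}       # processor -> processing rate (e.g., MIPS)
        self.bandwidth = {}  # (p, q) -> bandwidth of link p -> q

    def add_processor(self, p, rate):
        self.rate[p] = rate

    def add_link(self, p, q, bw):
        self.bandwidth[(p, q)] = bw

    def exec_time(self, workload, p):
        # execution time of a task with the given workload on p
        return workload / self.rate[p]

    def comm_time(self, data, p, q):
        # transfer time of `data` units over link (p, q);
        # zero when source and destination are the same processor
        return 0.0 if p == q else data / self.bandwidth[(p, q)]
```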

5.2. Proposed Approach

Given a task graph G = (V, E, w, c) and a processor graph TG = (P, L) with its network topology, our method chooses the most appropriate schedule to execute the tasks. Among the various guided random techniques, genetic algorithms (GAs) are the most widely used for the task scheduling problem [17].

A genetic algorithm [33], illustrated in Figure 5, is inspired by natural evolution. It is a robust search technique that allows a high-quality global solution to be derived from a large search space in polynomial time, in contrast to algorithms that find only locally optimal results. A GA combines the best solutions from past searches with the exploration of new regions of the solution space. In this algorithm, a feasible solution is represented by an individual (chromosome), and a set of assignments mapping tasks to nominated processors is considered a gene in a chromosome. The algorithm keeps a population of these randomly generated individuals that evolves over generations. The quality of an individual in the population is characterized by a fitness function, whose value specifies the fitness of an individual compared to others in the population; a higher fitness level indicates a better solution. Based on fitness, parents are selected to produce offspring for a new generation, so a fitter individual has a better chance to reproduce. A new generation has the same number of individuals as the previous one, which dies off once it is replaced. By applying the genetic operators, namely, selection, crossover, and mutation, to a population of chromosomes, the quality of the chromosomes can be improved, producing a new population of individuals. If well designed, this new population will converge to an optimal solution.
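The evolutionary loop just described can be sketched generically as follows. This is a minimal skeleton under our own assumptions (the name `genetic_search`, the half-population parent pool, and treating the fitness value as a tradeoff to *minimize*); it is not the paper's implementation.

```python
import random

def genetic_search(init, fitness, crossover, mutate, pop_size=20,
                   generations=50, mutation_rate=0.01, seed=0):
    """Generic GA loop: a population of random individuals evolves
    over generations; a lower fitness (tradeoff) value is better."""
    rng = random.Random(seed)
    pop = [init(rng) for _ in range(pop_size)]
    for _ in range(generations):
        # biased parent selection: only the fitter half reproduces
        parents = sorted(pop, key=fitness)[:max(2, pop_size // 2)]
        nxt = []
        while len(nxt) < pop_size:
            a, b = rng.sample(parents, 2)
            child = crossover(a, b, rng)
            if rng.random() < mutation_rate:
                child = mutate(child, rng)
            nxt.append(child)
        pop = nxt  # the old generation dies off
    return min(pop, key=fitness)
```

A toy usage: minimizing the number of ones in a bit string with a fixed single-point crossover and bit-flip mutation.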

The following section describes in detail each operator of the genetic algorithm.

5.2.1. Representation

Here, each individual in the population (as shown in Figure 6) illustrates a feasible solution to the problem and contains an array of task assignments, each consisting of a task and the processor assigned to it. The time frames of each task in an individual, such as the earliest start time and earliest finish time, can change, in turn adjusting those of its successive tasks; tracking these changes during the genetic algorithm would lead to a very complex state. Hence, our solution is to ignore the time frames while conducting genetic manipulation and to assign a time slot to each assignment afterwards in order to obtain a feasible schedule.

A one-dimensional array (Figure 6) may not be suitable for representing the workflow because it only defines which processor is allocated to each task and cannot show the order of task assignments on each processor. However, the execution order is very important, since it significantly impacts the workflow execution [36]. We therefore use a two-dimensional array to represent a schedule, as demonstrated in Figure 7, in which the order of tasks on each processor is shown. During genetic manipulation, the two-dimensional array is transformed into a one-dimensional array.
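The transformation between the two representations can be sketched as a pair of helper functions (an illustrative encoding of our own; the paper does not prescribe these names):

```python
def flatten(schedule):
    """Turn a per-processor ordered schedule (the 2-D representation)
    into a flat list of (task, processor) assignments suitable for
    genetic manipulation."""
    flat = []
    for proc, tasks in schedule.items():
        for task in tasks:
            flat.append((task, proc))
    return flat

def unflatten(assignments):
    """Rebuild the 2-D representation, preserving the per-processor
    task order implied by the flat list."""
    schedule = {}
    for task, proc in assignments:
        schedule.setdefault(proc, []).append(task)
    return schedule
```

Round-tripping a schedule through `flatten` and `unflatten` preserves both the processor allocation and the execution order on each processor.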

5.2.2. Establishing the Initial Population

The initial population is a set of individuals generated through a random heuristic. Each individual consists of pairs of tasks and processors on which the tasks are allocated.

5.2.3. Constructing a Fitness Function

A fitness function can be used to characterize the quality of each individual in a population based on its optimization value. According to the fitness value, parents are selected to generate new offspring. Since the purpose of our method is to minimize the schedule length while considering network contention and the cost for cloud users, the fitness function has to rely on the EFT and the cloud costs paid by CCs. The following paragraphs derive the EFT and the cost of a task on a processor from its start time and the component costs.

The start time of a task is determined when its last preceding task is completed. Hence, to determine that start time, the earliest idle interval [A, B] on processor p satisfying Conditions 2 and 3 has to be found. As a result, the start time of task n_j on processor p is set as

ts(n_j, p) = max{A, t_ready(n_j, p)}. (3)

Thus, the earliest start time (EST) of a task n_j executed on a processor p is computed as follows:

EST(n_j, p) = max{ts(n_j, p), max_{n_i ∈ pred(n_j)} [tf(n_i, q) + ct(q, p, n_j)]}, (4)

where q is the processor executing n_i and ct(q, p, n_j) is the communication time between processors q and p to execute task n_j, defined as

ct(q, p, n_j) = (d_in(q, n_j) + d_out(q, p, n_j)) / b_qp. (5)

Here, d_in(q, n_j) is the amount of input data stored at processor q and used for executing task n_j, d_out(q, p, n_j) is the amount of outgoing data produced at q and then transferred to p, and b_qp is the bandwidth of the link between q and p. Therefore, the earliest finish time (EFT) of the task is calculated as

EFT(n_j, p) = EST(n_j, p) + w(n_j, p). (6)
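The EST/EFT computation above can be sketched as a small function. This sketch assumes the predecessors' finish times and processors are known, and that communication time is zero when predecessor and candidate processor coincide; the parameter names are our own.

```python
def est_eft(task, proc, pred_finish, comm_time, exec_time, ready):
    """EST of `task` on `proc` is the latest of the processor's ready
    time and, over all predecessors, their finish time plus the
    communication time to `proc`; EFT adds the execution time.
    pred_finish maps predecessor -> (finish_time, its_processor)."""
    est = ready
    for pred, (tf, q) in pred_finish.items():
        arrival = tf + (0.0 if q == proc else comm_time(q, proc, task))
        est = max(est, arrival)
    return est, est + exec_time(task, proc)
```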

In addition, the algorithm also considers the cost paid by cloud customers for using cloud resources to execute the tasks. The cost of task n_i executed at a VM, a thin client, or a thick client p is defined by

C(n_i, p) = C_proc(n_i, p) + C_wait(n_i, p) + C_comm(n_i, p) + C_disc(n_i, p) + C_store(n_i, p) + C_mem(n_i, p). (7)

In (7), each cost is calculated as follows.

The cost of processing is expressed as

C_proc(n_i, p) = α_p · w(n_i, p), (8)

where α_p is the processing cost per time unit of workflow execution on processor p with its processing rate.

Let t_first be the finish time of the task that is completed first out of the parallel tasks, after which no task is available, let β_p be the waiting cost per time unit, and let tf(n_i, p) be the finish time of task n_i. Then the cost of waiting time is as follows:

C_wait(n_i, p) = β_p · (tf(n_i, p) − t_first). (9)

Suppose that the amount of money per time unit for transferring outgoing data from processor p is γ_p; then the cost of communication time is defined as follows:

C_comm(n_i, p) = γ_p · ct(q, p, n_i). (10)

We assume that the distribution of disconnection events between a cloud and clients is a Poisson distribution with parameter λ, which represents the stability of the network; the expected number of arrivals over an interval of length t is λt. Let X be a random variable for the length of an offline event, let E[X] be its mean length, and let δ_p be the disconnection cost per unit time. Therefore, the expected duration of disconnection events that can affect the processing time of task n_i is λ · w(n_i, p) · E[X]. Hence, the cost of disconnection can be derived as

C_disc(n_i, p) = δ_p · λ · w(n_i, p) · E[X]. (11)

Let ε_p be the storage cost per data unit and let s(n_i, p) be the storage size of task n_i on processor p. Then the storage cost of task n_i on processor p is calculated as

C_store(n_i, p) = ε_p · s(n_i, p). (12)

Further, we compute the cost of using the memory of processor p for task n_i as follows:

C_mem(n_i, p) = ζ_p · m(n_i, p), (13)

where m(n_i, p) is the size of the memory used and ζ_p is the memory cost per data unit.
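The six component costs can be combined into a single function. In this sketch, the rate names (`processing`, `waiting`, etc.) are invented placeholders for the per-unit prices defined above, and the disconnection term uses the Poisson expectation rate × execution time × mean offline length:

```python
def task_cost(exec_time, wait_time, comm_time, storage_size, memory_size,
              rates, lam=0.0, mean_offline=0.0):
    """Total cost of one task on one processor: the sum of processing,
    waiting, communication, disconnection, storage, and memory costs.
    Disconnections arrive as a Poisson process with rate `lam`; the
    expected offline time affecting the task is
    lam * exec_time * mean_offline."""
    c_proc = rates["processing"] * exec_time
    c_wait = rates["waiting"] * wait_time
    c_comm = rates["communication"] * comm_time
    c_disc = rates["disconnection"] * lam * exec_time * mean_offline
    c_store = rates["storage"] * storage_size
    c_mem = rates["memory"] * memory_size
    return c_proc + c_wait + c_comm + c_disc + c_store + c_mem
```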

Using this cost, we can calculate a fitness function that computes the tradeoff between the cost and the EFT as

tradeoff(I) = θ · C(I) + (1 − θ) · EFT(I), (14)

where C(I) is the total cost of the schedule represented by individual I, EFT(I) is its schedule length, and θ ∈ [0, 1] is a weighting coefficient.

By considering the above fitness function, which combines C(I) and EFT(I), we can determine which individual in a population best satisfies it: the most appropriate individual is the one whose combination of cost and schedule length yields the minimum tradeoff value.
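A weighted tradeoff of cost and schedule length might look like this. The weighted-sum form and the optional normalization scales are assumptions for illustration; the paper's exact fitness function may differ.

```python
def tradeoff(cost, eft, theta=0.5, cost_scale=1.0, time_scale=1.0):
    """Weighted tradeoff between cloud cost and schedule length (EFT);
    lower is better. theta in [0, 1] weights cost against time, and the
    scales can normalize the two quantities to comparable magnitudes."""
    return theta * cost / cost_scale + (1 - theta) * eft / time_scale
```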

5.2.4. Genetic Operators

(i) Selection. New individuals are selected according to their fitness, described by the tradeoff value of the utility function, after being compared to others in the population. The chance of being selected as a parent is proportional to fitness and inversely proportional to the tradeoff value: an individual with a lower tradeoff value is better than one with a higher tradeoff value. The fittest individuals survive as successive generations evolve. Note, however, that an excessively strong selection bias toward the fittest can lead to suboptimal solutions.
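Fitness-proportional (roulette-wheel) selection with inverse-tradeoff weights can be sketched as below; the function name and weighting scheme are our own illustrative choices.

```python
import random

def select_parent(population, tradeoffs, rng):
    """Roulette-wheel selection: the chance of being picked is
    inversely proportional to the tradeoff value, so individuals with
    lower tradeoff (fitter) are selected more often."""
    weights = [1.0 / t for t in tradeoffs]
    total = sum(weights)
    r = rng.random() * total
    acc = 0.0
    for individual, w in zip(population, weights):
        acc += w
        if acc >= r:
            return individual
    return population[-1]  # guard against floating-point round-off
```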

(ii) Crossover. Crossover operates at the individual level and is used to generate new offspring from two randomly selected individuals (parents) in the current population, in order to produce an even better individual in the subsequent generation. There are three methods of crossover: single-point crossover, two-point (or multipoint) crossover, and uniform crossover; for all of them, the crossover probability is generally between 0.6 and 1. As shown in Figures 8, 9, and 10, the crossover operator follows these rules:

(i) One, two, or multiple points are randomly chosen in the selected parents.

(ii) These random points divide each individual into left and right sections.

(iii) Crossover then swaps the left (or right) sections of the two individuals.

(iv) Two new offspring are created by recombining the sections taken from the two parents.

In particular, in single-point crossover (presented in Figure 8), one position in the individual is randomly chosen: the first child combines the head of the first parent with the tail of the second parent, and the second child combines the head of the second parent with the tail of the first parent. Meanwhile, in the two-point crossover operator, two positions in the individuals are randomly chosen, as shown in Figure 9. The benefit of two-point crossover is that it avoids the inherent problem of single-point crossover, wherein the genes at the head and the genes at the tail of a chromosome are always split when recombined.

In uniform crossover, a random mask of bits is generated (as illustrated in Figure 10). The mask determines which genes are copied from each parent: each bit corresponds to a position in the individual, and the bit density in the mask determines how much material is taken from each parent.
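The three crossover operators can be sketched as pure functions over gene lists. For clarity, the crossover points and mask are passed in explicitly here; in the GA they would be drawn at random.

```python
def single_point(a, b, point):
    # swap the tails after `point`
    return a[:point] + b[point:], b[:point] + a[point:]

def two_point(a, b, p1, p2):
    # swap the middle section between p1 and p2
    return (a[:p1] + b[p1:p2] + a[p2:],
            b[:p1] + a[p1:p2] + b[p2:])

def uniform(a, b, mask):
    # mask bit 1: gene from the first parent; 0: from the second
    c1 = [x if m else y for x, y, m in zip(a, b, mask)]
    c2 = [y if m else x for x, y, m in zip(a, b, mask)]
    return c1, c2
```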

(iii) Mutation. In genetic algorithms, mutation generates a new offspring from a single parent in the current population. Mutation maintains the diversity of individuals by exploring new and better genes than were previously considered, preventing the solutions in the population from converging to a local optimum; crossover alone can only explore the combinations already present in the gene pool. However, mutation rates are kept low, since the chance of mutation in a specific individual is small (approximately 0.001). There are two types of mutation: the replacing mutation and the swapping mutation.

The purpose of the replacing mutation is to reallocate a random task in an individual to a substitute processor. The substitute processor is also randomly chosen and must have enough capacity to execute the task. Figure 11(a) illustrates the replacing mutation: the processor allocated to the selected task is replaced by another processor. In contrast, the swapping mutation changes the execution order of independent tasks assigned to the same processor in an individual for the same time slot. The example of swapping mutation in Figure 11(b) shows one task taking over the initial time slot of another.
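Both mutation operators can be sketched as follows. Note that this sketch omits two checks the text requires: the capacity check for the substitute processor and the independence check for the swapped tasks.

```python
import random

def replacing_mutation(assignments, processors, rng):
    """Reallocate a randomly chosen task to a randomly chosen
    substitute processor (capacity check omitted in this sketch)."""
    out = list(assignments)
    i = rng.randrange(len(out))
    task, old = out[i]
    out[i] = (task, rng.choice([p for p in processors if p != old]))
    return out

def swapping_mutation(assignments, rng):
    """Swap the execution order of two assignments on the same
    processor (independence check omitted in this sketch)."""
    out = list(assignments)
    by_proc = {}
    for idx, (_, proc) in enumerate(out):
        by_proc.setdefault(proc, []).append(idx)
    candidates = [idxs for idxs in by_proc.values() if len(idxs) >= 2]
    if candidates:
        i, j = rng.sample(rng.choice(candidates), 2)
        out[i], out[j] = out[j], out[i]
    return out
```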

To verify the performance of our proposed genetic-algorithm-based approach, we have also implemented several other task scheduling algorithms, which minimize either the schedule length of the workflow or the cloud cost paid by CCs. Algorithm 1 is the greedy-for-cost algorithm, in which each task of the workflow is greedily assigned to the processor that minimizes the cost of the cloud resources executing that task. Algorithm 2, contention-aware scheduling [19], aims to create a schedule based on the EFT while considering network contention. Meanwhile, Algorithm 3 shows our approach, which considers both network contention and cloud cost as well as the tradeoff between execution time and cost; moreover, it aims for a globally optimal schedule for the workflow.

Input: Task graph G = (V, E, w, c), processor graph TG = (P, L)
Output: A new task schedule
Function greedyForCostScheduling(G, TG)
  Sort tasks into a list according to priority;
  for each task n in the list
    Find the processor p which minimizes the execution cost of task n;
    Assign n on p;
  return the new task schedule;

Input: Task graph G = (V, E, w, c), processor graph TG = (P, L)
Output: A new task schedule
Function networkCostScheduling(G, TG)
  Sort tasks into a list according to priority;
  for each task n in the list
    Find the processor p which allows the earliest EFT of n, taking account of network bandwidth usage;
    Assign n on p;
  return the new task schedule;

Input: Task graph G = (V, E, w, c), processor graph TG = (P, L)
Output: A new task schedule
Function geneticScheduling(G, TG)
  Generate initial population;
  Compute fitness of each individual according to (14);
  repeat // New generation
    for population_size
      Select two parents from old generation;
      // biased to the fitter ones
      Recombine parents for two offspring;
      Compute fitness of offspring;
      Insert offspring in new generation;
  until population has converged

6. Implementation and Analysis

6.1. Experimental Settings

This section presents our experiments, conducted via numerical simulations, to evaluate the efficiency of our approach, the cost-time aware genetic algorithm (CTaG), and to compare its performance with two others: contention aware scheduling (CaS) [19], which only takes account of network contention, and the greedy for cost (GfC) algorithm, which only concerns the monetary cost. As shown in Table 2, the experiments use task graphs whose size increases from 10 to 90 tasks and a heterogeneous processor graph combining 20 VMs with different configurations, 6 thick clients, and 4 thin clients located at the local system of the CCs. We developed the simulations in Java with JDK-7u7-i586 and NetBeans 7.2 using CloudSim [37], a framework for modeling and simulating cloud computing infrastructures and services. In our simulation, we use MIPS (Million Instructions per Second) to represent the processing capacity of processors.

6.2. Experimental Results

Figures 12–19 show the simulation results of our proposed genetic method for task scheduling compared against the other scheduling techniques. The graphs present the measured outcomes of systems based on the above methods. Their results differ from one another mainly because of the instability of the network bandwidth between heterogeneous processors and the increasing number of tasks.

Figure 12 shows that the GfC algorithm performs worst and CaS obtains the best result in terms of schedule length, while our approach lies in the middle. Specifically, our method is 19% better than GfC.

However, regarding the monetary cost paid by CCs (illustrated in Figure 13), although CaS provides the best performance, it has the highest cost, while the opposite is true for the GfC algorithm. Meanwhile, our solution is balanced between schedule length and cloud cost; compared with CaS, it can save nearly 20% of the cost for CCs.

We next measured the effect of an increasing number of processors on the cloud cost and the schedule length in the CTaG algorithm with a fixed number of tasks. The results in Figures 14 and 15 indicate that more processors result in better system performance but higher cost. Notably, the cost goes up from 300,000 to 327,400 as the number of processors increases from 15 to 20.

Similarly, regarding the number of generations, Figures 16 and 17 show that the schedule length of the workflow is reduced, together with a slight decrease in the execution cost, as the number of generations increases. This is because each selected individual has to consider the tradeoff between cloud cost and execution time.

Finally, we observe the performance of the CTaG algorithm with different numbers of individuals. When the number of individuals is varied from 20 to 90 (Figures 18 and 19), the increase in population size does not significantly affect the execution cost of the workflow, while the probability of finding a faster solution is higher. The cost only fluctuates between 55,500 and 60,500. Meanwhile, the scheduling time exhibits a downward trend from approximately 78 minutes to around 50 minutes.

7. Conclusion

This paper proposes a thin-thick client collaboration architecture that aims at optimizing cloud access in the mobile cloud platform. Furthermore, we presented a novel genetic method to improve task scheduling so as to provide reliable processing time while balancing network contention and cloud service cost. We conducted simulations to evaluate our approach; the results show that our solution is more cost-effective and achieves better performance than existing approaches. In future work, we will extend the proposed model to various circumstances to achieve higher reliability and efficiency.


Notations

G = (V, E, w, c): Directed Acyclic Graph
V: Set of parallel tasks
e_ij: Communication between subtasks n_i and n_j
pred(n_i): Set of preceding tasks of task n_i
succ(n_i): Set of successive tasks of task n_i
ts(n_i, p): Start time of task n_i on processor p
w(n_i, p): Execution time of task n_i on processor p
tf(n_i, p): Finish time of task n_i on processor p
t_ready(n_j, p): Time at which processor p completes its last assigned task and is ready to execute task n_j
EST: Earliest start time
EFT: Earliest finish time
C(n_i, p): Cost of task n_i executed on processor p
C_proc(n_i, p): Processing cost of task n_i executed on processor p
C_wait(n_i, p): Waiting cost of task n_i executed on processor p
C_comm(n_i, p): Communication cost of task n_i executed on processor p
C_disc(n_i, p): Disconnection cost of task n_i executed on processor p
C_mem(n_i, p): Cost of using the memory of processor p for task n_i
C_store(n_i, p): Storage cost of task n_i on processor p.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgments

This work was supported by the IT R&D program of MSIP/IITP (B0101-15-0535, Development of Modularized In-Memory Virtual Desktop System Technology for High Speed Cloud Service). This research was also supported by the Next-Generation Information Computing Development Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Science, ICT, and Future Planning (NRF-2010-0020725). The corresponding author is Professor Eui-Nam Huh.