Abstract

A quantum optimization scheme in network cluster server task scheduling is proposed. We explore and research the distribution theory of energy field in quantum mechanics; specially, we apply it to data clustering. We compare the quantum optimization method with genetic algorithm (GA), ant colony optimization (ACO), simulated annealing algorithm (SAA). At the same time, we prove its validity and rationality by analog simulation and experiment.

1. Introduction

Cluster technology is connecting multiple independent servers and providing services as a whole by a cluster. To achieve parallel program in a high efficiency, service request must be allocated to each server, reduce the access time, and optimize the overall performance. Load balancing mechanism is the core cluster technology.

In the literature [1], server cluster provides high reliability, availability, and scalability by gathering server nodes into one group. User requests need to be distributed to every server node fairly to maximize the characteristic of server cluster. In the same time, it proposes an efficient and adaptive load balancing arithmetic for the server cluster. The arithmetic computes the load of servers with the usages of computer resources or their weights. The weights are dynamically determined based on the usages of the statistics. Their experimental result shows that this arithmetic protect the bottleneck from server cluster efficiently compared with before arithmetics.

The state of web applications communicates and coordinates with lot of geographically distributed information resources offering information to great number of clients. Homogeneous server clusters are unable to meet the growing demand of the applications including real-time video and audio, ASP, JSP, and PHP. Moreover, it also provides better reliability by gracefully transferring the load from server which is unavailable due to failure or for preventive maintenance. Heterogeneity with scalability makes the system more complex. The literature [2] proposes a dynamic load balancing (DLB) algorithm for extensible heterogeneous server cluster for content awareness. The arithmetic considers server’s processing ability, queue length, utilizing ratio, and so forth, as load indices. As the clusters supports multiplex services, at the basic level, it has used content awareness forwarding arithmetic.

In the literature [3], online games are becoming more fashionable recently as the internet becomes popular, game platforms become different, and ubiquitous environment is supported. Wherefore, distributed technology is required to support huge numbers of concurrent game clients simultaneously. Specifically, when the users are playing games, a lot of unpredictable problems can arise, for example, a certain server handles more loads than what is recommended because much more game users crowded into a specific region of game world. The kinds of situations can lead to the game servers instability. Here, the global dynamic load balancing model and distributed massive multiplayer online game (MMOG) servers architecture are put forward to apply the load balancing arithmetic. Much more different experiments were achieved to test efficiency.

A load balancing arithmetic named dynamic weighed random (DWR) algorithm for the session initiation protocol (SIP) application server cluster is put forward in the literature [4]. It utilizes weighted hashing random arithmetic that supports dialog in the SIP protocol of distributing messages. The weight of every server is dynamic and adaptive with feedback mechanism. The arithmetic of DWR is efficient in the cluster balance, and it is much better than the limited resource vector (LRV) arithmetic and the minimum sessions first (MSF) arithmetic.

The literature [5] proposes a new server load balancing model. Server cluster load balancing is well known to be a critical mechanism for network-based information service. Most of previous schemes cannot take server's loadings into account, which might not make the loadings of all servers be balanced and drive the server system to work on the borderline of being overloaded and/or out of function. The proposed model is aimed at preventing the occurrence from malfunction and saving the power consumption of the cluster system under a low loading, when it provided a better performance. All of the connection requests are truly distributed into one server until a prespecified portion of the maximum allowed serving load is realized. The following requisitions are served by another one server in the same method. These experimental results illustrate the feasibility of the proposed model.

The consolidation of server is due to virtualization technology, which enables multiple servers to run on one platform. Moreover, virtualization may bring the overheads on performance. The prediction of virtualization performance is very important. The literature [6] proposes a general model for predicting the performance of consolidation. On the other hand, a load balancing problem is studied that arises in server consolidation. A certain amount of workloads are assigned to a few number of high performance target servers, and the workloads in every target server are balancing. It is as an integer linear programming that first models the load balancing problem. The fully polynomial time approximate scheme (FPTAS) is proposed to get the approximate optimal solution.

The response time of a website needs to be improved, which one replicates the site on multiple servers. It will depend on how the incoming requests are distributed among replicas, where the effectiveness of a replicated server system. In the literature [7], a testbed is described that can evaluate the performance of many different load-balancing strategies. A general architecture that allows different load-balancing methods to be supported easily is used in the testbed. It emulates a typical internet scenario and allows variable load production and performance measurement. They measure and illuminate the performance of some policies for load balancing in this testbed by some basic experiments.

A key issue for cluster system is the utilizing efficiency of system resources, in which the method of load balance is very important to realize the efficient resources. Based on server cluster system, the literature [8] proposes an improved self-adaptive arithmetic for network load balancing. The arithmetic can improve the utilizing efficiency of system resource by showing simulation results. In order to achieve the request of real time, when dealing with tasks and high availability of system, it can reduce the server's response time.

Effective load balancing mechanism can extend the “capacity” of the server and improve system throughput. In early studies of load balancing algorithm, genetic algorithm (GA), dynamic feedback algorithm (DFA), ant colony algorithm (ACO), simulated annealing algorithm (SAA), round robin (RR), Min-Min algorithm, Max-Min algorithm, and so on have some improvements in different degree at different perspective on the load balancing system.

They provide solutions to the problem of load balancing for server cluster by these arithmetics mentioned above. But these arithmetics have this or that problem such as local premature problem and divergence problem.

In order to overcome the instability of the above algorithms, the server load balancing method based on quantum algorithm is proposed. And we prove it better than GA, ACO, and SAA by simulation experiments.

2. Quantum Optimization Algorithm

The quantum optimization method of clustering algorithm is mainly used in this particle. The method is put forward by clustering idea based on quantum theory, which is a kind of unsupervised clustering method. It is also applied to the traditional clustering algorithm, by the study of the theory of energy distribution in quantum mechanics, theory study, we found that the microscopic particles distribution in the energy field relies on the potential energy which associates with particles themselves, the smaller the potential energy around the particles, the more they are absorbed. In the energy field with particle distribution described by a wave function, the particle distribution will ultimately depend on the potential energy in the energy field. For the design of clustering algorithm, the potential energy function is used and the cluster center is determined by the particle distribution. Similarly, how to determine the cluster center and the corresponding number of samples of clusters is also the main task of cluster analysis. Therefore, distribution of particles in space studied by quantum mechanics theory is similar to the distribution of samples studied by clustering algorithm. The known clustering process of sample distribution can be regarded as the known wave function which describes the particle distribution.

The clustering process can be expressed as follows: with the known wave function, solve the potential energy function by the Schrödinger equation. Particle distribution ultimately depends on the value of the potential energy function.

Quantum state of a particle wave function is as follows: where is the wave function to describe the particle quantum state, is the Hamilton operator that describes the system’s total energy, is the potential energy function on behalf of potential energy of the particle, is the operator’s energy proper value, is the Nabla operator, and is the parameter.

From formula (1), one can find that, with the same potential field distribution, determination of cluster centers is similar to the principle of quantum changing with the potential energy. To solve the particle distribution of potential energy function, the particle with minimum potential energy is determined. As the focal point for cluster, through (1), the potential energy function is shown as follows:

3. The Model of Server Task Scheduling

In cluster services, the loading balance can be described as follows: tasks need to be allocated to node servers with different loading and handling capacity for processing, in order to find an optimization schedule to minimize the total completion time. Mathematical model for the system is shown as follows.

Suppose there are m servers (or nodes) and n tasks. Each task has to be assigned to only one server. In this paper, denotes the servers, whereas denotes one of the servers or nodes; denotes the current load, whereas denotes the current load of node . For example, means that node has a current load of 0; that is to say, this node is idle. The tasks are denoted by , where is one of the tasks. We build an matrix between servers and tasks: , where is one of the elements and there are two states: where , .

We use to denote the time of processing on one task; that is to say, the time of task is processed on node . We use the processing time by where and .

Obviously, is also an matrix.

We consider that the optimal state occurs with these conditions: (1) the whole system has a relative short time of processing, and meanwhile (2) the throughput of system in unite time is relatively large. We can use the following equations to describe this state: where is the new task, is the current whole load at the node, is the length of ready queue at node , is the average processing time at node , and are constant, and is a function which can reflect the ability of node processing and required tasks. When the processing tasks and loading capacity at the node reach the maximal matching, the system is on the optimal running state.

4. Server Loading Balance Model Based on Quantum Optimization Algorithm (QOA)

4.1. The Model of Quantum Cluster Algorithm

The model of loading balance by quantum optimization can be described as a tetrad (task, weight, function, and scheduling); among them, task is the task to be assigned and scheduling is to assign the task according to the rules [912]. In quantum model, the task and weight are denoted by qubit and , respectively. A qubit state can be expressed as (6). Consider where and satisfy the condition Model establishing: there are and qubits in task and scheduling, respectively,

The relation of task and schedule is described as where .

Suppose , , is the set of cluster samples. Each sample of them belongs to one of the set modes according to some rule. And we use to denote set of samples in each type of mode and to denote the competition superiors corresponding to the mode in . Quantum state of cluster samples is as follows.

For cluster samples in Euclidean space, we define the transfer equation (10) to achieve the quantum description of cluster samples: where

Constraint rule in competition is as follows.

Definition 1. Suppose and are -dimension quantum state vectors, and we define similar coefficient of , as
According to Definition 1, the task samples and the weight vector of cluster mode sample have a similar coefficient as follows: Suppose node with maximum similar coefficient is the winner; then, satisfies where .
Adjust to move weight vector towards the direction of sample , and at last make the scheduling output of node indicate the mode type which represents.

4.2. Quantum Cluster Algorithm

Step 1. Set initial value for , , where , , and is random value in .

Step 2. Set the maximum step size as Max_length. Initialize the learning rate and neighborhood radius ; then, set initial loop counting .

Step 3. Calculate learning rate and neighborhood radius by the following equations:

Step 4. Take out a sample vector from training set in order, and calculate the superior node which is numbered according to the formula (13) and (14).

Step 5. In node array, neighborhood with as the center and as the chosen radius should adjust the weight vector by the following equations: Here, where and , and , are the probability amplitudes of and , respectively.

Step 6. Consider

Step 7. For a set of samples in one type , the center sample should be calculated by the following equations: where .

Step 8. Calculate the learning rate:

Step 9. Take out a type set from training set in order, and number the winner node of the center sample in this type by .

Step 10. Consider

5. Analog Simulation

5.1. The Condition of Circumstance of Analog Simulation

In order to compare quantum clustering optimization algorithm (QOA), genetic algorithm (GA), ant colony optimization (ACO), and simulated annealing algorithm (SAA), select five servers as nodes with the number of tasks from 0 (or 100) to 1000 (or 800) to compare the results of the three methods by MatLab. The correlation parameters of selected servers for experiments are in Table 1.

The topological structure of the network servers is as in Figure 1.

5.2. Results

Figure 2 shows that the system load balancing degree of QOA is better than GA, SAA, and SAA. And the more task quantity, the better result. The task quantity is from 100 to 1000.

Figure 3 shows the system throughput rate of GA, SAA, ACO, and QOA. That is to say, the QOA is bigger than GA, SAA, and ACO.

Figures 5, 6, 7, and 8 show the throughput of QOA is smoother than GA, SAA, and ACO.

From the results, it is clear that quantum optimization algorithm (QOA) is better in cluster server task scheduling than genetic algorithm (GA), simulated annealing algorithm (SAA), and ant colony optimization (ACO). QOA is more effective in task scheduling (Figure 4).

6. Conclusions

The paper gives a quantum optimization model and arithmetic on cluster server and proves their validity by analog simulation and experiments. The model and the arithmetic increase the throughput and efficiency of the system, and they had some merits than traditional model and arithmetic.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This study is supported by the National Natural Science Foundation of China (61173056).