Abstract

We present an algorithm of quantum restricted Boltzmann machine network based on quantum gates. The algorithm is used to initialize the procedure that adjusts the qubit and weights. After adjusting, the network forms an unsupervised generative model that gives better classification performance than other discriminative models. In addition, we show how the algorithm can be constructed with quantum circuit for quantum computer.

1. Introduction

The restricted Boltzmann machine (RBM) is a probabilistic network model that has one visible layer and one hidden layer. In the RBM structure, the connections between visible variables and hidden variables are excluded, so a visible variable is conditionally independent influenced by the units of hidden layer, and a hidden variable is conditionally independent influenced by the units of visible layer. The RBM has been applied in many fields, such as text [1, 2] and images [3]. In [4], using complementary priors, a fast, greedy algorithm can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory. In deep belief nets (DBN) for phone recognition [5], the mean covariance RBM is firstly introduced to learn features of speech data that serve as input into a DBN. The result of the proposed method is superior to all published results on speaker-independent TIMIT to date. For using RBM better, the research in RBM is introduced precisely for now in [6]. It provides a series of methods for the parameters of RBM to solve the existing problem. In [7], the spike and slab RBM are characterized by having both a real-valued vector, the slab, and a binary variable, the spike, associated with each unit in the hidden layer. The proposed method can achieve competitive performance on the CIFAR-10 object recognition task. As the classification RBM is established, different strategies for training and different approaches to estimating gradient are studied. The proposed method is applied in, namely, semisupervised and multitask learning [8]. The present paper describes a novel algorithm for automate brain MRI scan tissue segmentation which employs a continuous RBM. The results of a pilot performance test are presented. The proposed method can approach and possibly surpass the performance of existing segmentation algorithms [9]. For now, there are two ways. One is that a RBM is trained as an unsupervised neuron model for Deep Belief Network. The other is that a RBM is paired with some other evolution algorithms or computation methods to improve the performance of learning methods.

Quantum computation was firstly proposed by Kak [10]. Nielsen and Chuang extended the field and applied the algorithm in factoring [11]. In quantum computation, qubit is the basis of computation, which can contain many system conditions. The parallelism makes the algorithm execute fast. Therefore, quantum computation has better performance in computational complexity, convergence, and arithmetic speed. Nowadays, quantum computation combines other evolution algorithms, such as quantum Fourier transform [12, 13], quantum neural network [14, 15], and quantum genetic algorithm [16, 17]. Quantum algorithm is applied in bilevel multiobjective programming, parameter optimization, and pattern recognition [1820]. The best way to achieve quantum computation is to construct a quantum circuit for quantum computer. Maeda et al. [21] used a quantum circuit to construct a qubit neuron model to solve XOR problem. P. C. Li and S. Y. Li [22] established a learning algorithm of quantum neural network based on universal quantum gate, which got a superior performance in pattern recognition. Aiming at fast computation and efficient convergence, the quantum circuit has been applied in eigenvalues search [23], optimal method [24], probability calculation [25], and so on [26].

In this paper, we argue that quantum computation can provide a novel, fast, and stable framework for RBM to solve unsupervised nonlinear problems. An algorithm of quantum restricted Boltzmann machine (QRBM) network is proposed. In the proposed method, quantum representation is used as input, and quantum gates are used for data training and parameters updating. By modeling the quantum circuit of visible layer and hidden layer, QRBM will be used for locating the best global solution in the network outputs. We compare different number of hidden layers, which is the key of convergence speed and complexity. We also describe how the QRBM can be used for pattern recognition. Using experiments on fault diagnosis of gear box, we show that the classification performance of QRBM is more competitive than other classifiers such as standard RBM (SRBM) network and standard neural network (SNN).

The structure of the paper is organized as follows. In Section 2, the principle of RBM network is introduced. Section 3 shows the quantum computation and quantum gates for QRBM. In Section 4, the execution steps of QRBM are given, and it shows how the quantum circuit is composed of quantum gates, which provides a simulation progress. Section 5 shows the pattern recognition performance of QRBM for gear box with different hidden layers.

2. Restricted Boltzmann Machine Network

In RBM with one hidden layer, consider the state of visible layer and the state of hidden layer ; the energy function is given bywhere is the connection weight and and are the biases of two layers, , .

According to energy function, the joint probability of a visible and a hidden variable is assigned: where .

The probability for visible layer is given by summing the whole possible variables in hidden layer:

The probability that the RBM is assigned to a training set can be raised by finding the minimum energy in energy function. Define free energy as

Then the probability can be rewritten as

Because of probabilistic network, the performance of RBM, whose standard is likelihood function, is evaluated on probability theory. The method of maximum likelihood is used for estimation for model parameters, which makes the maximum probability for variables of visible layer and finds the minimum free energy of the RBM network:where .

For sampling problem in RBM, it starts at any random state in visible layer and performs Gibbs sampling [1]. In order to simplify the procedure of sampling, a gradient of objective function called contrastive divergence is shown, which needs fewer steps.

3. Quantum Computation

In quantum computation, because of the entanglement, superposition, and no-cloning of qubit, a random state can represent linear combination of two basic states:where and are the probability amplitude. The relationship between and satisfies the following:

In this paper, the independent variables can be described as follows:where .

In quantum computation, the quantum gates are used for random and complicated computation. In Hilbert space, a random unitary transformation can be decomposed into the combination of single qubit gates and two-qubit gates. The common quantum gates have Hadamard gate (), rotation gate (), and controlled-NOT gate (), which are shown in Figures 1, 2, and 3.

In order to carry out the proposed QRBM algorithm, the three quantum gates are the basis for constructing a quantum circuit to execute.

4. The Algorithm of Quantum Restricted Boltzmann Machine

4.1. The Quantum Representation

There is a set of dependent variables and a set of independent variables .

Definition 1. A formula which is to transform the real values into the quantum representation of variables is said to be where , .

In Formula (9) and Formula (10), one can see that . After transformation, the qubit sets are got:

4.2. Quantum Circuit for QRBM

Based on quantum gates, a quantum circuit can be constructed. Set a quantum register , which can store the state of assistant qubits. The specific quantum circuit of QRBM network is shown in Figure 4.

In Figure 4, we show the variables of visible layer and the variables of hidden layer and a quantum register contained qubits. Hadamard gate is the preprocessing step, which is the basis of unitary transformation. The matrix of quantum rotation gates is used for phase transformation to calculate the coefficients of variables in visible layer. With a controlled-NOT gate, a quantum state of each variable is switched, and the whole variables in visible layer are summed into one qubit. After phase transformation with Hadamard gate again, the states of one qubit in hidden layer are given, which represents the output matrix.

As we all see, the quantum circuit in Figure 4 is a simple circuit with one hidden layer. In order to enhance the performance of QRBM, we can increase the number of hidden layers to compose different QRBM network, such as QRBM1, QRBM2, and QRBMn. A suitable number of hidden layers can shorten execution time and maintain high classification accuracy. The number of hidden layers in learning algorithm is depended on the actual need for pattern recognition and fault diagnosis.

4.3. The Strategy of Parameter Update

In QRBM algorithm, there is a matrix of quantum rotation gate updated, which has parameters:

Set QRBM1 algorithm as an example; the output after quantum operations can be denoted aswhere , .

We put the coefficient of qubit as the actual output, so

Quantum computation is also a probability computation. So the probability formula of hidden layer for QRBM is obtained:

Definition 2. For , the free energy for hidden layer in QRBM can be said to be where is the expected probability and is the actual probability.

The bigger the is, the worse the robustness of neural network model is. It will be sensitive for the change of sample.

For the updating of parameter , the gradient descent method that can approach the minimum error is applied. Put where , .

So the formula of parameter updating is denoted as where is the iteration step.

5. Experiment

In this work, we applied the proposed algorithm, QRBM, into the pattern recognition of gear box. The computation operation has been carried out on a computer that had an Intel Core 2 Duo CPU with 2.00 GB of main memory. We compared quantum restricted Boltzmann machine (QRBM) with the other two traditional methods, standard neural network (SNN) and standard restricted Boltzmann machine (SRBM).

In gear box experiment for collecting vibration signals, we set four working conditions, that is, normal condition, gear tooth wear fault, gear root crack fault, and gear tooth broken fault. The speed is 600 r/min. The sampling number is 2048. The sampling frequency is 12800 Hz. The mode of sensor is CA-YD-185. For each working condition, 50 samples are collected, in which there are 20 samples for training and 30 samples for testing. Figures 5(a) and 5(b) give the demonstration of vibration signal in four working conditions.

Considering amplitude and its distribution, we can see some obvious differences in Figures 5(a)5(d). Meanwhile, the waves in Figure 5 are a little confused. In gear tooth wear fault, there is much noise. In gear root crack fault, there is a strong impact at the crack. In gear tooth broken fault, there is lots of noise and a stronger impact at the broken point.

In order to find the best number of hidden layers, we compared the performance of four QRBM algorithms in execution time and classification accuracy. The number of hidden layers is set at 1, 2, 3, and 4, which are called QRBM1, QRBM2, QRBM3, and QRBM4. The parameters in four algorithms are listed as follows. The initial value of is the random value in . The number of iteration is 1000. The structure of three algorithms is separately 2048-1800-4, 2048-1800-1024-4, 2048-1800-1024-800-4, and 2048-1800-1024-800-400-4. So, the comparison results are shown in Figure 6.

As we can see from Figure 6, QRBM4 had the lowest accuracy 85.6%, which took 11.27 s. QRBM3 had the accuracy 90.3%, which took 8.41 s. The classification accuracy of QRBM2 was the highest accuracy 97.1%, which took 6.13 s. QRBM1 had the accuracy 95.8%, which took 4.28 s. As the number of hidden layers increases, the classification accuracy is lower and the run time is longer. The reason that QRBM4 had lowest accuracy and longest execution time was that there are too many quantum gates calculated and the value is stuck in local optimum. For QRBM2, because of suitable quantum gates, it had the highest classification accuracy, but the execution time was longer than QRBM1. Therefore, in this paper, we choose QRBM1 as the main learning algorithm.

To evaluate the performance of QRBM1, we compared the classification performance to SNN and SRBM, which are the traditional methods for pattern recognition. In SNN, the parameters were set as follows. The structure is 2048-1024-4. The learning rate is 0.9. The number of iteration steps is 1000. The tolerated error is 0.05. In SRBM, the parameters were set as follows. The structure is 2048-1024-4. The number of iteration steps is 1000. The tolerated error is 0.05. The three algorithms were carried out 50 times, and these results are presented in Table 1.

In Table 1, SNN, SRBM, and QRBM1 can reach the accuracy rate 100%, but QRBM1 had the shortest execution time. Meanwhile, the performance of SRBM was better than SNN. With observing all the results of four working conditions, the best performance algorithm is QRBM1, which enhanced 2–4% of classification accuracy rate and saved 63% of execution time at most. So, QRBM algorithm has better feasibility and practicability.

6. Conclusions

The intention of this paper is to propose an algorithm of quantum restricted Boltzmann machine based on quantum gate. We considered the universality of quantum computation for RBM for pattern recognition and evaluated it. In particular, we highlighted the quantum circuit for QRBM and explored the suitable number of hidden layers. We also extended the application field where the QRBM can be employed and multiclass problem can be solved.

As discussed and analysed previously, the algorithm of QRBM is an effective and practical learning method, which can achieve superior classification performance. As a prognosis for quantum computation applied in RBM, with these improvements to QRBM, this review could help us construct a more precise quantum circuit for quantum computer in future.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to thank the editor and the assessors for their useful and pertinent comments. This paper is supported by the National Science Foundation of China (51305454, 51205405). Thanks are due to Professor Zhang for providing the core thoughts of the algorithm. Thanks are also due to Doctor Zhou for verifying the math derivation and formula in the whole paper.