Mathematical Problems in Engineering

Volume 2017 (2017), Article ID 9549323, 7 pages

https://doi.org/10.1155/2017/9549323

## A Cycle Deep Belief Network Model for Multivariate Time Series Classification

^{1}School of Information and Electrical Engineering, China University of Mining and Technology, Xuzhou, Jiangsu 221116, China

^{2}School of Computer Science and Technology, Jiangsu Normal University, Xuzhou, Jiangsu 221116, China

Correspondence should be addressed to Gang Hua

Received 27 May 2017; Revised 11 July 2017; Accepted 24 August 2017; Published 4 October 2017

Academic Editor: M. L. R. Varela

Copyright © 2017 Shuqin Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Multivariate time series (MTS) data are an important class of temporal data objects and can be easily obtained. However, MTS classification is a very difficult process because of the complexity of the data type. In this paper, we propose a Cycle Deep Belief Network model to classify MTS and compare its performance with DBN and KNN. The model exploits the representation learning ability of DBN and the correlation within the time series data. The experimental results show that the model outperforms four other algorithms: DBN, KNN_ED, KNN_DTW, and RNN.

#### 1. Introduction

Time series data are sequences of real-valued signals measured at successive time intervals. They can be divided into two kinds: univariate time series and multivariate time series (MTS). A univariate time series contains one variable, while an MTS has two or more variables. MTS are the more important type of time series data because they are widely used in many areas such as speech recognition, medical and biological measurement, financial and market data analysis, telecommunication and telemetry, sensor networks, motion tracking, and meteorology.

As the availability of MTS data increases, the problem of MTS classification has recently attracted great interest in the literature [1]. MTS classification is a supervised learning procedure aimed at labeling a new multivariate series instance according to the classification function learned from the training set [2]. However, the features in traditional classification problems are independent of their relative positions, while the features in time series are highly correlated. This results in the loss of important information when traditional classification algorithms, which treat each feature as an independent attribute, are applied to MTS. Many techniques have been proposed for time series classification. A method based on boosting is presented for multivariate time series classification in [3]. In [4], the authors proposed a DTW-based decision tree to classify time series, with an error rate of 4.9%. In [5], the authors applied a multilayer perceptron neural network to the control chart problem, and the best performance achieved is a 1.9% error rate. Hidden Markov Models are used on the PCV-ECG classification problem and achieve 98% accuracy [6]. A support vector machine combined with a Gaussian Elastic Metric Kernel is used for time series classification in [7]. The dynamics of recurrent neural networks (RNNs) for the classification of time series are presented in [8]. However, the simple combination of one-nearest-neighbor with DTW distance is claimed to be exceptionally difficult to beat [9].
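The strong 1-NN/DTW baseline mentioned above can be made concrete with a short sketch. The following is a minimal dynamic-time-warping distance for univariate series plus a one-nearest-neighbor classifier; the function names and this particular recursion layout are our own illustration, not code from the cited works:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance
    between two univariate series, using absolute difference as
    the local cost."""
    n, m = len(a), len(b)
    # cost[i, j] = DTW distance between prefixes a[:i] and b[:j]
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

def knn_dtw_classify(train_series, train_labels, query):
    """1-nearest-neighbor classification under DTW distance."""
    dists = [dtw_distance(s, query) for s in train_series]
    return train_labels[int(np.argmin(dists))]
```

Because DTW allows elastic alignment along the time axis, two series that differ only by local stretching can still receive a small distance, which is what makes this simple baseline so competitive.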

A Deep Belief Network (DBN) is a type of deep neural network with multiple hidden layers, introduced by Hinton et al. [10] along with a greedy layer-wise learning algorithm. The Restricted Boltzmann Machine (RBM), a probabilistic model, is the building block of the DBN. DBNs and RBMs have attracted increasing attention from researchers. They have already been applied to many problems with excellent performance, such as classification [11], dimensionality reduction [12], and information retrieval [13]. Taylor et al. [14] proposed the conditional RBM, an extension of the RBM, which is applied to human motion sequences. Chao et al. [15] evaluated the DBN as a tool for forecasting exchange rates. Längkvist et al. [16] applied DBN to sleep stage classification and evaluated its performance. The results illustrated that a DBN either with features (feat-DBN) or using the raw data (raw-DBN) performed better than the feat-GOHMM: the feat-DBN achieved 72.2% accuracy and the raw-DBN achieved 67.4%, while the feat-GOHMM achieved only 63.9%.

The raw-DBN does not need to extract features before classifying the sleep data, and the algorithm is easy to implement. However, it neglects the important sequential information in time series data, and its performance is not satisfactory. This paper proposes a Cycle DBN model for time series classification. The model possesses the ability of feature learning since it is developed on the basis of the DBN; meanwhile, the characteristics of time series data are taken into consideration.

The remainder of the paper is organized as follows. The next section reviews the background material. In Section 3, we detail the Cycle DBN model for multivariate time series. Section 4 evaluates the performance of our Cycle DBN on two real data sets. Section 5 concludes the paper.

#### 2. Background Material

A time series is a sequence of observations over a period of time. Formally, a univariate time series $X = (x_1, x_2, \ldots, x_n)$ is an ordered set of real-valued numbers, and $n$ is called the length of the time series $X$. Multivariate time series are more common in real life and more complex since they have two or more variables. A MTS is defined as a finite sequence of univariate time series:

$$\mathbf{X} = (X^1, X^2, \ldots, X^m).$$

The MTS $\mathbf{X}$ has $m$ variables, and the component corresponding to the $i$th variable is a univariate time series of length $n$:

$$X^i = (x_1^i, x_2^i, \ldots, x_n^i).$$

In this paper, we use boldface characters for MTS and regular fonts for univariate time series.

The time series classification problem is a supervised learning procedure. First we learn a function $f$ from the given training set $D = \{(\mathbf{X}_i, y_i)\}_{i=1}^{N}$. The training set includes $N$ samples, and each sample consists of an input $\mathbf{X}_i$ paired with its corresponding label $y_i$. Then we can assign a label to a new time series instance based on the function learned from the training set.
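This setup can be sketched in a few lines: each MTS is stored as an $m \times n$ array, and a learned function maps a query MTS to a label. The helper names below are hypothetical, and the nearest-neighbor rule under Euclidean distance stands in for the KNN_ED baseline used later in the paper, not for the proposed model:

```python
import numpy as np

def make_training_set(samples, labels):
    """Build D = {(X_i, y_i)}: each sample is an MTS stored as an
    (m variables x n time steps) array with a class label."""
    X = np.stack([np.asarray(s, dtype=float) for s in samples])
    y = np.asarray(labels)
    return X, y

def nearest_neighbor_f(X_train, y_train, query):
    """A trivial learned function f: predict the label of the nearest
    training MTS under Euclidean distance on the flattened series."""
    diffs = X_train.reshape(len(X_train), -1) - np.ravel(query)
    return y_train[int(np.argmin(np.linalg.norm(diffs, axis=1)))]
```

Flattening the MTS discards the temporal ordering entirely, which illustrates the point made in the introduction: treating each observation as an independent attribute loses exactly the correlation structure that the Cycle DBN is designed to exploit.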

A Deep Belief Network (DBN) consists of an input layer, a number of hidden layers, and finally an output layer. The top two layers have undirected, symmetric connections between them. The lower layers receive top-down, directed connections from the layer above.

The process of training a DBN includes two phases. Each pair of consecutive layers in the DBN is treated as a Restricted Boltzmann Machine with visible units $v$ and hidden units $h$. There are full connections between the visible layer and the hidden layer, but no visible-to-visible or hidden-to-hidden connections (see Figure 1). The visible and hidden units are connected by a weight matrix $W$ and have a visible bias vector $b$ and a hidden bias vector $c$, respectively. In the first phase, we train each RBM independently one after another and then stack them on top of each other. This procedure is also called pretraining. In the second phase, a BP network is set up at the last level of the DBN, receiving the output of the highest RBM as its input. We can then perform supervised learning in this phase. This procedure is called fine-tuning, since the parameters of the DBN are tuned using the error back-propagation algorithm.
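The pretraining phase described above can be sketched as one step of contrastive divergence (CD-1) for a single binary RBM. The update rule below is the standard one from the RBM literature, and the learning rate and seeding are illustrative choices, not the paper's exact configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, b, c, lr=0.1):
    """One CD-1 step for a binary RBM.
    v0: (batch, n_visible) data batch; W: (n_visible, n_hidden)
    weight matrix; b: visible bias; c: hidden bias."""
    # Positive phase: P(h = 1 | v0), then sample hidden states
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase: reconstruct P(v = 1 | h0), then P(h = 1 | v1)
    pv1 = sigmoid(h0 @ W.T + b)
    ph1 = sigmoid(pv1 @ W + c)
    # Gradient approximation: <v h>_data - <v h>_reconstruction
    batch = v0.shape[0]
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / batch
    b += lr * (v0 - pv1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
    return W, b, c
```

Pretraining a DBN stacks such RBMs: after the first RBM is trained, its hidden activations serve as the visible data for the next RBM, and fine-tuning then back-propagates the supervised error through the whole stack.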