Mathematical Problems in Engineering

Special Issue: Advanced Intelligent Fuzzy Systems Modeling Technologies for Smart Cities

Research Article | Open Access

Volume 2020 | Article ID 8885625 | https://doi.org/10.1155/2020/8885625

Mu Qiao, Zixuan Cheng, "A Novel Long- and Short-Term Memory Network with Time Series Data Analysis Capabilities", Mathematical Problems in Engineering, vol. 2020, Article ID 8885625, 9 pages, 2020. https://doi.org/10.1155/2020/8885625

A Novel Long- and Short-Term Memory Network with Time Series Data Analysis Capabilities

Academic Editor: Yi-Zhang Jiang
Received: 03 Sep 2020
Revised: 26 Sep 2020
Accepted: 30 Sep 2020
Published: 14 Oct 2020

Abstract

Time series data are an extremely important type of data in the real world. They accumulate dynamically over time and therefore tend toward high dimensionality and large scale. When performing cluster analysis on such data, traditional feature extraction methods fall short. To improve clustering performance on time series data, this study uses a recurrent neural network (RNN) to train on the input data. First, an RNN variant, the long short-term memory (LSTM) network, is used to extract features from the time series. Second, pooling technology is used to reduce the dimensionality of the features output by the last layer of the LSTM network. Because the sequences are long, the hidden layer of the LSTM network cannot retain the information of every time step, and it is therefore difficult for the last layer alone to provide a compressed representation of the global information; the information from earlier hidden units must be combined to cover all of the data. By stacking the hidden-unit information of all time steps and applying a pooling operation, the dimensionality of the hidden-unit information is reduced and the memory loss caused by excessively long sequences is compensated. Finally, considering that many time series datasets are imbalanced, the unbalanced K-means (UK-means) algorithm is used to cluster the dimensionality-reduced features. Experiments were conducted on multiple publicly available time series datasets. The results show that the combination used in this study, LSTM-based feature extraction, pooling-based dimensionality reduction, and clustering designed for imbalanced data, performs well on time series data.

1. Introduction

Time series data are a common type of data in work and life. A time series dataset is a collection of observations gathered at different moments with a particular collection technology at certain time intervals; each observation is therefore usually time stamped. With continuing improvements in computing and storage capabilities, storage devices now hold vast amounts of time series data, which are generated across many industries and fields. For example, camera systems in shopping malls collect a large amount of consumer-related information, and the data storage departments of large financial centers and securities companies collect a large amount of stock information. For environmental data, real-time weather information can be observed by artificial satellites, and large amounts of geological and mineral data can be detected by related instruments. In meteorology, the recorded precipitation, temperature, and air-quality pollution data of each city or region are all time series data. In e-commerce, customer consumption habits, commodity transaction volumes, logistics data, and commodity evaluation data are usually time series data. Research on time series data therefore spans all walks of life, and extracting the required information from such data has important practical significance.

Time series data are characterized by large volume, high dimensionality, and continuous updating, which makes them relatively complicated to analyze. Because of this complexity, analysis and research on time series data have progressed slowly. The study of time series analysis has developed through several major periods: descriptive time series analysis, statistical time series analysis, frequency-domain analysis, time-domain analysis, and time series data mining. Over the past 20 years, the analysis of time series data has received widespread attention, and a variety of research methods have been proposed. Related work includes time series similarity [1–3], time series search and query [4], dimensionality reduction [5, 6], segmentation [7, 8], anomaly detection [9, 10], topic discovery [11], prediction [12, 13], clustering [14–22], classification [23–25], and segmentation [26, 27].

This article focuses on the clustering analysis of time series data, using LSTM combined with pooling technology for feature extraction and UK-means for clustering. The contributions of this research are summarized as follows:

(1) The usual way to use an RNN for feature extraction is to let the last hidden unit of the network represent the data. However, the last layer alone cannot express the original time series well, so this article uses pooling technology to combine the hidden-layer representations of all time steps. This method effectively reduces the data dimension while preserving the original information to the greatest extent.

(2) A K-means clustering algorithm suited to imbalanced data is introduced. The algorithm first oversamples the dataset to construct multiple balanced training subsets. Second, traditional K-means is used to cluster each subset, yielding a clustering result for each training subset. Finally, an ensemble strategy combines these into the final clustering result.

(3) The dimensionality-reduced feature data are input into the UK-means model to obtain the clustering results for the time series data. By comparing the experimental results of different feature extraction methods, dimensionality reduction techniques, and clustering models, it is verified that the method used in this study analyzes time series data best.

2.1. Introduction to Time Series Data

Time series data refer to data composed of sequence values or events that change over time. Sequence values are usually measured at equal time intervals, such as the daily closing prices of stocks, daily fluctuations in material prices, futures trading data, and medical data. In reality, there are two main types of time series data: continuous and numerical. Continuous data mainly describe events or business activities that occur continuously in time, for example, the sales period of certain products or their development and production time. The numerical values generated in specific commercial marketing activities come in many types, such as categorical variables, numeric values, dates, currency amounts, durations, and counts. Subtype values mainly describe the relationship between a time series and other business activities or other derivative data, for example, whether the appearance or function of a certain product plays a key role in the sales of the entire enterprise.

2.2. Time Series Data Analysis Technology

Time series data analysis is mainly aimed at prediction, classification, and anomaly detection. Regardless of the specific purpose, the technology used can be roughly divided into two categories: one based on traditional analysis techniques and the other based on deep learning.

2.2.1. Traditional Analysis Technology

Traditional analysis methods can be divided into two categories: qualitative analysis and quantitative analysis. Qualitative analysis is often used to predict trends, and judgments can be made without referring to industry figures such as previous sales volume, market competition intensity, or product planning strategy. Industry experts draw on a large amount of existing data and then apply their own analysis to synthesize conclusions and judge the trend of an indicator in the next stage [28]. When market data are inaccurate or incomplete, and especially when no numerical measurements exist, this method is the main reference for product sales analysis in industry. It relies on the long-term experience and intuition of industry experts and can give results quickly, but it is not accurate enough. Qualitative forecasting is a common time series analysis method in practice [29].

Quantitative analysis is a numerical analysis method. The biggest difference is that quantitative analysis relies on objective numbers, whereas qualitative analysis relies on intuition. Quantitative analysis is more accurate than qualitative analysis, although its time complexity increases correspondingly. For more precise issues such as changes in product sales and fluctuations in traffic flow, quantitative analysis is more convincing, since it can objectively uncover inherent statistical laws from the numbers. Currently, quantitative analysis mainly comprises regression analysis methods [30], traditional statistical analysis methods [31–33], and machine learning methods [15–18, 34–39].

2.2.2. Deep Learning Technology

The theory of deep learning developed from the artificial neural network model. Deep learning methods introduce deeper network structures on top of artificial neural networks and analyze the extracted features and temporal relationships in the data more deeply. For example, the pretraining method proposed for the deep belief network model lets an artificial neural network initialize its weights before optimization so that they do not start too far from the ideal values; this processing significantly saves computing resources. After pretraining, fine-tuning is used for training, and a deep network can be realized through these two stages. Deep network models are generally trained layer by layer: a loss function is constructed from the output layer and the expected output and optimized with stochastic gradient descent, so that training proceeds from the outside in, layer by layer. The advantage of a deep network is that each layer may extract different features, which together form a powerful learning ability. Typical deep learning models can be found in references [40–43].

2.3. LSTM Network and Pooling Technology

The LSTM network contains a cell and three gates: an input gate, an output gate, and a forget gate. The function of the gates is to limit the flow of data using activation functions. In Figure 1, f and g denote activation functions.

Figure 1 shows that the input gate, output gate, forget gate, and cell-unit calculations all involve the input data and the hidden state. The cell state depends on the forget gate, the input gate, its own previous value, and the cell unit, while the hidden layer depends on the output gate and the activated cell state. The previous hidden state and the current input are gathered into the three gates, and after the weighted connections, an activation function squashes the values; in this way, the output of each gate at the current moment is obtained. The temporary cell information is computed in the same way as the three gates: from the previous hidden state and the current input, the activation function produces its current value. Then, the cell state at the previous moment, together with the output values of the input gate and the forget gate, yields the current cell state. Finally, the output of the current hidden layer is obtained from the output gate and the cell state.

Each parameter in the LSTM network is defined as follows. The meaning of each symbol is shown in Table 1 (symbols lost in extraction are restored here following standard LSTM notation).

Symbol          Meaning
Superscript t   Time step t
Subscript       Specific hidden layer unit
ι               Unit of the input gate output
i               Input unit (of size I)
h               Hidden unit (of size H)
φ               Output unit of the forget gate
c               Output of the cell state
ω               Output unit of the output gate
a_l^t           Value of the l-th network unit at time t
act             Activation function
y               Output
w               Weight
b               Value of a after activation

The forward formulas of the network are as follows, written here in the standard LSTM form consistent with the description of Figure 1.

Input gate:

$$a_\iota^t = W_\iota x^t + U_\iota h^{t-1}, \qquad b_\iota^t = \operatorname{act}(a_\iota^t).$$

Forget gate:

$$a_\phi^t = W_\phi x^t + U_\phi h^{t-1}, \qquad b_\phi^t = \operatorname{act}(a_\phi^t).$$

Output gate:

$$a_\omega^t = W_\omega x^t + U_\omega h^{t-1}, \qquad b_\omega^t = \operatorname{act}(a_\omega^t).$$

Cells:

$$a_c^t = W_c x^t + U_c h^{t-1}, \qquad s^t = b_\phi^t \odot s^{t-1} + b_\iota^t \odot \operatorname{act}(a_c^t).$$

Hidden layer:

$$h^t = b_\omega^t \odot \operatorname{act}(s^t).$$

Output:

$$y^t = \operatorname{act}(V h^t).$$
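To make the forward pass concrete, the following is a minimal NumPy sketch of a single LSTM step. It is a generic formulation with sigmoid gates and tanh activations standing in for act(·); the parameter names W_*, U_*, and b_* are illustrative, not taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, s_prev, p):
    """One LSTM step: the three gates and the temporary cell information
    are computed from the current input and the previous hidden state."""
    i = sigmoid(np.dot(p["W_i"], x_t) + np.dot(p["U_i"], h_prev) + p["b_i"])  # input gate
    f = sigmoid(np.dot(p["W_f"], x_t) + np.dot(p["U_f"], h_prev) + p["b_f"])  # forget gate
    o = sigmoid(np.dot(p["W_o"], x_t) + np.dot(p["U_o"], h_prev) + p["b_o"])  # output gate
    g = np.tanh(np.dot(p["W_c"], x_t) + np.dot(p["U_c"], h_prev) + p["b_c"])  # temporary cell info
    s = f * s_prev + i * g   # cell state: forget gate * old state + input gate * new info
    h = o * np.tanh(s)       # hidden state: output gate * activated cell state
    return h, s
```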

The RNN yields the required hidden-layer information at every time step, and average pooling then produces the dimensionality-reduced feature. Let the transformed feature be $z$ and the feature dimension be $C$; then

$$z = \frac{1}{T} \sum_{t=1}^{T} h^t, \qquad z \in \mathbb{R}^{C},$$

where $T$ is the length of the sequence.

Before clustering, the elements produced by the hidden layer need to be pooled. Pooling is generally used in convolutional neural networks to reduce the dimensionality of the feature vectors output by a convolutional layer, and it also helps prevent overfitting; its purpose is to represent a small region by a single value. Commonly used methods include maximum pooling, average pooling, and additive pooling. Maximum pooling is often applied to images because it preserves invariance under rotation and scaling of the image. For time series data, where the information of each time step should supplement the last one, average pooling or additive pooling is most appropriate [44]. Because additive pooling easily produces relatively large values and increases the amount of computation, this study uses average pooling. Figure 2 compares the principles of the maximum pooling and average pooling operations; a 2 × 2 window is selected in the figure.

Figure 2 shows that average pooling takes the mean value within the window, whereas maximum pooling takes the maximum value. Besides the size of the window, the pooling operation also requires choosing the stride, that is, how far the window moves across the matrix. The stride in Figure 2 is 2; in actual applications a stride of 1 is sometimes chosen, which more easily captures translational invariance.
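The pooling used here can be stated in a few lines of code. The sketch below pools the stacked hidden states over the full time axis, which is how this study applies pooling to sequences (the 2 × 2 windows in Figure 2 illustrate the image case); the function name is illustrative.

```python
import numpy as np

def pool_hidden_states(H, mode="average"):
    """H has shape (T, C): the hidden state of each of T time steps, stacked.
    Pooling over the time axis returns one C-dimensional feature vector."""
    if mode == "average":
        return H.mean(axis=0)   # z_c = (1/T) * sum_t h_c^t
    if mode == "max":
        return H.max(axis=0)
    raise ValueError("unknown pooling mode: %s" % mode)
```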

2.4. Unbalanced K-Means Algorithm

The traditional K-means algorithm is mainly aimed at clustering balanced data, whereas time series data are mostly unbalanced, and the performance of traditional K-means drops considerably on such data. To improve the clustering performance of the K-means algorithm on unbalanced data, this paper uses an unbalanced K-means algorithm (UK-means). The principle of the algorithm is shown in Figure 3.

As Figure 3 shows, UK-means first oversamples the dataset, randomly selecting M subsets from the majority class, each containing N samples. Each majority-class subset is fused with the minority-class sample set to form M datasets with a sample size of 2N. Second, K-means clustering is performed on each training subset to obtain its clustering result. Finally, the results of the training subsets are combined by weighted averaging to obtain the final clustering result.
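The following sketch implements the procedure just described, using scikit-learn's KMeans as the base clusterer. The combination step here, matching per-run cluster centers and averaging them, is one simple reading of the "weighted averaging" above; the function and parameter names are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.cluster import KMeans

def uk_means(X_major, X_minor, k, M=10, seed=0):
    """UK-means sketch: M balanced subsets, K-means on each, averaged ensemble."""
    rng = np.random.RandomState(seed)
    n = len(X_minor)                                  # N = minority-class size
    runs = []
    for _ in range(M):
        idx = rng.choice(len(X_major), size=n, replace=False)
        subset = np.vstack([X_major[idx], X_minor])   # balanced set of 2N samples
        runs.append(KMeans(n_clusters=k, n_init=10).fit(subset).cluster_centers_)
    aligned = [runs[0]]
    for c in runs[1:]:                                # match clusters across runs
        cost = np.linalg.norm(runs[0][:, None, :] - c[None, :, :], axis=2)
        _, col = linear_sum_assignment(cost)
        aligned.append(c[col])
    centers = np.mean(aligned, axis=0)                # averaged ensemble of centers
    X = np.vstack([X_major, X_minor])
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return dists.argmin(axis=1), centers              # final labels and centers
```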

3. Time Series Data Clustering Method

3.1. Framework of the Method Used

For the cluster analysis of time series data, this paper uses a clustering framework that combines LSTM and pooling technology; Figure 4 shows a diagram of this framework. In Figure 4, the original sequence is first input, step by step, into the LSTM for feature extraction, and the hidden states obtained at each moment are gathered. Second, average pooling technology reduces the dimensionality to obtain the transformed data representation. Finally, the unbalanced-data clustering algorithm clusters the converted data; this study uses the UK-means model (introduced in Section 2.4) for the clustering operation. In summary, the main idea of this research is to use LSTM to extract features and reduce the dimensionality of the datasets and then cluster the feature data with UK-means to obtain the final result. The flow chart of this method is shown in Figure 5.
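As a compact illustration of Figures 4 and 5, the sketch below chains the pieces together. It reuses the lstm_step, pool_hidden_states, and uk_means sketches given earlier; the parameter dictionary params, the hidden size, and the class masks are assumptions for illustration.

```python
import numpy as np

def extract_feature(series, params, hidden_size):
    """Run the LSTM over one series, then average-pool all hidden states."""
    h = np.zeros(hidden_size)
    s = np.zeros(hidden_size)
    states = []
    for x_t in series:                    # feed the sequence step by step
        h, s = lstm_step(np.atleast_1d(x_t), h, s, params)
        states.append(h)
    return pool_hidden_states(np.stack(states), mode="average")

# features = np.array([extract_feature(x, params, 64) for x in X_train])
# labels, centers = uk_means(features[major_mask], features[minor_mask], k=3)
```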

3.2. Parameter Update in the LSTM Model

The mean square error gives the objective:

$$L = \frac{1}{2} \sum_{t} \left\| \hat{y}^t - y^t \right\|^2,$$

where $\hat{y}^t$ is the target output. Differentiating with respect to the input gate, forget gate, output gate, and internal state gives the standard backpropagation-through-time expressions, reconstructed here from the forward formulas above.

Input gate:

$$\frac{\partial L}{\partial a_\iota^t} = \frac{\partial L}{\partial s^t} \odot \operatorname{act}(a_c^t) \odot \operatorname{act}'(a_\iota^t).$$

Forget gate:

$$\frac{\partial L}{\partial a_\phi^t} = \frac{\partial L}{\partial s^t} \odot s^{t-1} \odot \operatorname{act}'(a_\phi^t).$$

Output gate:

$$\frac{\partial L}{\partial a_\omega^t} = \frac{\partial L}{\partial h^t} \odot \operatorname{act}(s^t) \odot \operatorname{act}'(a_\omega^t).$$

Internal state:

$$\frac{\partial L}{\partial a_c^t} = \frac{\partial L}{\partial s^t} \odot b_\iota^t \odot \operatorname{act}'(a_c^t).$$

The cell state receives gradient both from the hidden layer and from the cell state at the next time step, so the cell state gradient is

$$\frac{\partial L}{\partial s^t} = \frac{\partial L}{\partial h^t} \odot b_\omega^t \odot \operatorname{act}'(s^t) + \frac{\partial L}{\partial s^{t+1}} \odot b_\phi^{t+1}.$$

The gradient update formula of the hidden layer is

$$\frac{\partial L}{\partial h^t} = V^{\top}\,\frac{\partial L}{\partial (V h^t)} + \sum_{g \in \{\iota,\phi,\omega,c\}} U_g^{\top}\,\frac{\partial L}{\partial a_g^{t+1}}.$$

The weight parameters mainly include $W_\iota$, $W_\phi$, $W_\omega$, $W_c$, $U_\iota$, $U_\phi$, $U_\omega$, $U_c$, and $V$. These weight parameters can be divided into 3 categories, namely, the input weights $W$, the recurrent weights $U$, and the output weight $V$. The gradient descent method updates each of them as

$$\theta \leftarrow \theta - \eta \frac{\partial L}{\partial \theta}, \qquad \theta \in \{W, U, V\},$$

where $\eta$ is the learning rate.
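In code, this update rule amounts to one line per parameter. A minimal sketch, assuming the BPTT gradients above have already been collected into a dictionary keyed like the parameters:

```python
def sgd_update(params, grads, lr=0.01):
    """Vanilla gradient descent: theta <- theta - lr * dL/dtheta,
    applied to each W (input), U (recurrent), and V (output) weight."""
    for name in params:
        params[name] -= lr * grads[name]
    return params
```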

4. Experimental Results and Analysis

4.1. Experimental Data

This article conducts experiments on the UCR public datasets. The UCR archive provides multiple datasets based on real scenes, each containing a training set and a test set in the same format. Each row of a dataset is one record: the first number is the category, and the remaining part is a time series. For ease of use, all data in the UCR archive are standardized using z-scores. All experiments use the UCR default training and test sets without redivision. The datasets used in the experiments are introduced in Table 2.


Dataset        Number of categories   Training set   Test set   Time series length

Adiac          37                     390            391        176
Beef           5                      30             30         470
CBF            3                      30             900        128
ChlorineCon    3                      467            3840       166
CinCECGTorso   4                      40             1380       1639
Coffee         2                      28             28         286
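Reading the row format described above takes only a few lines. A sketch is shown below; the comma delimiter matches the classic UCR text release, and the file path is hypothetical.

```python
import numpy as np

def load_ucr(path):
    """Each row: the category label first, then the z-normalized series."""
    data = np.genfromtxt(path, delimiter=",")
    labels = data[:, 0].astype(int)   # first number of each record is the category
    series = data[:, 1:]              # the remaining part is the time series
    return series, labels

# X_train, y_train = load_ucr("Adiac/Adiac_TRAIN")  # hypothetical path
```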

4.2. Experimental Environment and Evaluation Index

The implementation language for all algorithms in this study is Python 2.7. In the hardware environment, the CPU is an Intel Core i7-7700K at 4.4 GHz, the memory is 16 GB, and the graphics card is an NVIDIA GTX 1070. This study uses accuracy (Acc) and the F-score as the two indicators for evaluating the clustering performance of the framework on time series data. They are computed as

$$\mathrm{Acc} = \frac{TP + TN}{TP + TN + FP + FN}, \qquad P = \frac{TP}{TP + FP}, \qquad R = \frac{TP}{TP + FN}, \qquad F\text{-score} = \frac{2PR}{P + R}.$$

The larger the Acc value, the better the clustering performance. When P and R are both high, a larger F-score indicates a better clustering effect; the F-score is better suited to evaluating unbalanced data. Each parameter in the above formulas is described in Table 3, where P stands for positive, N for negative, T for true, and F for false.


                Prediction
                P      N

Actual     P    TP     FN
           N    FP     TN
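A direct transcription of these formulas from the confusion-matrix counts in Table 3:

```python
def acc_and_fscore(tp, fn, fp, tn):
    """Accuracy and F-score from the counts in Table 3."""
    acc = (tp + tn) / float(tp + tn + fp + fn)
    p = tp / float(tp + fp)           # precision
    r = tp / float(tp + fn)           # recall
    f = 2.0 * p * r / (p + r)
    return acc, f
```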

4.3. Experimental Results and Analysis
4.3.1. Experiments on Different Clustering Algorithms

The samples are input to the LSTM network for feature extraction, and average pooling technology is then used to reduce the dimensionality. Finally, different clustering algorithms are applied to the transformed features to explore their influence on the clustering effect for time series data. The comparison algorithms are K-means [17], fuzzy C-means clustering (FCM) [16], soft subspace clustering (SSC) [18], and unbalanced K-means clustering (UK-means) [38]. The experimental results are shown in Table 4.


Dataset        Index     K-means   FCM      SSC      UK-means

Adiac          Acc       0.6717    0.6803   0.6987   0.7097
               F-score   0.6385    0.6417   0.6542   0.6686
Beef           Acc       0.5895    0.5998   0.6231   0.8110
               F-score   0.6589    0.6676   0.6889   0.7874
CBF            Acc       0.6367    0.6432   0.8129   0.8724
               F-score   0.5764    0.5810   0.7675   0.8289
ChlorineCon    Acc       0.5577    0.5618   0.5820   0.6887
               F-score   0.6349    0.6446   0.6527   0.6510
CinCECGTorso   Acc       0.6726    0.6823   0.6956   0.6854
               F-score   0.5527    0.5598   0.5721   0.5938
Coffee         Acc       0.6832    0.6788   0.6890   0.6882
               F-score   0.7276    0.7142   0.7289   0.7245

The experimental results in Table 4 support three conclusions. (1) With the same feature extraction and dimensionality reduction technology, different clustering algorithms produce different clustering effects. (2) On the Adiac, Beef, CBF, and ChlorineCon datasets, the UK-means algorithm in the sixth column clusters best. This shows that the unbalanced clustering algorithm yields better results and is more suitable for the cluster analysis of time series data. (3) On the CinCECGTorso and Coffee datasets, the SSC algorithm clusters best, because SSC reflects not only the relationship between sample attributes and clusters but also the differences among related attributes. The UK-means results on these two datasets are only slightly lower than those of SSC, so UK-means can also be used on such datasets without significantly reducing clustering performance.

4.3.2. Experiments on Different Dimensionality Reduction Techniques

After a sample passes through the LSTM network, the dimension of its feature vector is still very large, so this study reduces it with an average pooling technique. To verify the effectiveness of this dimensionality reduction method, three settings are compared here: no dimensionality reduction, maximum pooling, and average pooling. The final clustering algorithm is UK-means in all cases. The comparative experimental results are shown in Table 5.


Dataset        Index     No dimensionality reduction   Maximum pooling   Average pooling

Adiac          Acc       0.6477                        0.7012            0.7097
               F-score   0.6039                        0.6594            0.6686
Beef           Acc       0.5787                        0.7658            0.8110
               F-score   0.6452                        0.8356            0.7874
CBF            Acc       0.6212                        0.8423            0.8724
               F-score   0.5644                        0.7790            0.8289
ChlorineCon    Acc       0.5456                        0.5478            0.6887
               F-score   0.5238                        0.5276            0.6510
CinCECGTorso   Acc       0.5998                        0.6212            0.6854
               F-score   0.5485                        0.5645            0.5938
Coffee         Acc       0.6335                        0.6021            0.6882
               F-score   0.6186                        0.5897            0.7245

The third column of Table 5 shows the clustering results without dimensionality reduction, the fourth column the results with maximum pooling, and the fifth column the results with average pooling. Clearly, the values in column 5 are significantly greater than those in column 3, which shows that using pooling technology to enhance the expressive ability of the hidden layer is very effective. The values in column 5 are also larger than those in column 4 in most cases, which shows that the average pooling technique is better suited to dimensionality reduction on time series data.

4.3.3. Feature Extraction Experiment

This research uses the LSTM model to extract the features of the input samples. To verify its effectiveness, the wavelet transform [45] is used as a contrast feature extraction method. Within the overall clustering framework, the dimensionality reduction still uses average pooling, and the clustering algorithm is UK-means. The final experimental results are shown in Table 6.


Dataset        Index     Wavelet transform   LSTM

Adiac          Acc       0.6870              0.7097
               F-score   0.6589              0.6686
Beef           Acc       0.7806              0.8110
               F-score   0.8535              0.7874
CBF            Acc       0.8832              0.8724
               F-score   0.8398              0.8289
ChlorineCon    Acc       0.6736              0.6887
               F-score   0.6062              0.6510
CinCECGTorso   Acc       0.6321              0.6854
               F-score   0.5752              0.5938
Coffee         Acc       0.6967              0.6882
               F-score   0.7345              0.7245

The fourth column of Table 6 shows the experimental results of feature extraction with the LSTM model, and the third column shows the results obtained with the wavelet transform. On the Adiac, ChlorineCon, and CinCECGTorso datasets, the values in the fourth column are all greater than those in the third, and on Beef, LSTM achieves the higher Acc. This shows that the LSTM-based feature extraction method works better on these four datasets. On CBF and Coffee, the values in the third column are greater, so the clustering results obtained by the wavelet transform are slightly better than those obtained by LSTM, although the difference is not large. On balance, it is more sensible to use the LSTM method for feature extraction.

5. Conclusions

Because time series data have many special properties, commonly used clustering algorithms cannot achieve satisfactory results when clustering them. The purpose of this research is to find suitable models for various kinds of time series data. Research on time series data generally focuses on their chronological nature, and to capture this property, this study trains the data with an RNN, which processes the data in chronological order. Because traditional RNNs suffer from gradient problems, they fall short in practical applications, so this study uses a special RNN, namely, LSTM, to learn a dimensionality-reduced representation of a time series. For long sequences, the hidden layer of the network cannot remember the information of all time steps, making it difficult to compress the global information into the last layer. In response to this problem, this research stacks the information of all hidden units and applies an average pooling operation to complete the dimensionality reduction of the data. Finally, the UK-means algorithm clusters the dimensionality-reduced features. Experiments on multiple UCR public datasets verify the effectiveness of the clustering method used. The method involves a large number of matrix operations, such as matrix addition and matrix multiplication, which demands high hardware performance; the next step will be to study how to improve the efficiency of the algorithm.

Data Availability

The labeled datasets used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This study was supported by the Scientific Research Project of Jilin Education Department.

References

1. W. R. Pearson and A. J. Mackey, "Using SQL databases for sequence similarity searching and analysis," Current Protocols in Bioinformatics, vol. 59, no. 1, pp. 1–22, 2017.
2. J. Tong, R. I. Sadreyev, J. Pei, L. N. Kinch, and N. V. Grishin, "Using homology relations within a database markedly boosts protein sequence similarity search," Proceedings of the National Academy of Sciences, vol. 112, no. 22, pp. 7003–7008, 2015.
3. D. Berndt and J. Clifford, "Finding patterns in time series: a dynamic programming approach," Advances in Knowledge Discovery & Data Mining, pp. 229–248, 1996.
4. T. Kahveci and A. K. Singh, "Optimizing similarity search for arbitrary length time series queries," IEEE Transactions on Knowledge and Data Engineering, vol. 16, no. 4, pp. 418–433, 2004.
5. E. Keogh, K. Chakrabarti, M. Pazzani, and S. Mehrotra, "Dimensionality reduction for fast similarity search in large time series databases," Knowledge and Information Systems, vol. 3, no. 3, pp. 263–286, 2001.
6. J. Song, G. Yoon, H. Cho, and S. M. Yoon, "Structure preserving dimensionality reduction for visual object recognition," Multimedia Tools and Applications, vol. 77, no. 18, pp. 23529–23545, 2018.
7. A. Janos, F. Balazs, N. Sandor, and A. Peter, "Modified Gath–Geva clustering for fuzzy segmentation of multivariate time-series," Fuzzy Sets and Systems, vol. 149, no. 1, pp. 39–56, 2005.
8. N. Wang, X. Liu, and J. Yin, "Improved Gath–Geva clustering for fuzzy segmentation of hydrometeorological time series," Stochastic Environmental Research and Risk Assessment, vol. 26, no. 1, pp. 139–155, 2012.
9. G. M. Weiss, "Mining with rarity," ACM SIGKDD Explorations Newsletter, vol. 6, no. 1, pp. 7–19, 2004.
10. M. Hu, Z. Ji, K. Yan et al., "Detecting anomalies in time series data via a meta-feature based approach," IEEE Access, vol. 6, pp. 27760–27776, 2018.
11. P. Langley, "Scientific discovery, causal explanation, and process model induction," Mind & Society, vol. 18, no. 1, pp. 43–56, 2019.
12. A. Bauer, M. Züfle, N. Herbst, A. Zehe, A. Hotho, and S. Kounev, "Time series forecasting for self-aware systems," Proceedings of the IEEE, vol. 108, no. 7, pp. 1068–1093, 2020.
13. A. D. Shatashvili, I. S. Didmanidze, G. A. Kakhiani, and T. A. Fomina, "A method of preliminary forecasting of time series of financial data," Cybernetics and Systems Analysis, vol. 56, no. 2, pp. 296–302, 2020.
14. P. Qian, Y. Jiang, Z. Deng et al., "Cluster prototypes and fuzzy memberships jointly leveraged cross-domain maximum entropy clustering," IEEE Transactions on Cybernetics, vol. 46, no. 1, pp. 181–193, 2015.
15. Y. Jiang, D. Wu, Z. Deng et al., "Seizure classification from EEG signals using transfer learning, semi-supervised learning and TSK fuzzy system," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 25, no. 12, pp. 2270–2284, 2017.
16. P. Qian, Y. Jiang, S. Wang et al., "Affinity and penalty jointly constrained spectral clustering with all-compatibility, flexibility, and robustness," IEEE Transactions on Neural Networks and Learning Systems, vol. 28, no. 5, pp. 1123–1138, 2016.
17. Y. Jiang, F. L. Chung, S. Wang, Z. Deng, J. Wang, and P. Qian, "Collaborative fuzzy clustering from multiple weighted views," IEEE Transactions on Cybernetics, vol. 45, no. 4, pp. 688–701, 2014.
18. P. Qian, K. Zhao, Y. Jiang et al., "Knowledge-leveraged transfer fuzzy C-means for texture image segmentation with self-adaptive cluster prototype matching," Knowledge-Based Systems, vol. 130, pp. 33–50, 2017.
19. Y. Jiang, F. L. Chung, H. Ishibuchi, Z. Deng, and S. Wang, "Multitask TSK fuzzy system modeling by mining intertask common hidden structure," IEEE Transactions on Cybernetics, vol. 45, no. 3, pp. 534–547, 2015.
20. P. Qian, S. Sun, Y. Jiang et al., "Cross-domain, soft-partition clustering with diversity measure and knowledge reference," Pattern Recognition, vol. 50, pp. 155–177, 2016.
21. Y. Zhang and J. Cai, "Fuzzy clustering based on automated feature pattern-driven similarity matrix reduction," IEEE Transactions on Computational Social Systems, vol. 99, pp. 1–10, 2020.
22. Y. Zhang, F.-L. Chung, and S. Wang, "A multiview and multiexemplar fuzzy clustering approach: theoretical analysis and experimental studies," IEEE Transactions on Fuzzy Systems, vol. 27, no. 8, pp. 1543–1557, 2019.
23. B. R. Bakshi and G. Stephanopoulos, "Representation of process trends-IV. Induction of real-time patterns from operating data for diagnosis and supervisory control," Computers & Chemical Engineering, vol. 18, no. 4, pp. 303–332, 1994.
24. A. Fullah Kamara, E. Chen, L. Qi, and Z. Pan, "Combining contextual neural networks for time series classification," Neurocomputing, vol. 384, pp. 57–66, 2020.
25. Y. Lei and Z. Wu, "Time series classification based on statistical features," EURASIP Journal on Wireless Communications & Networking, vol. 2020, no. 1, pp. 1–13, 2020.
26. E. Keogh and S. Kasetty, "On the need for time series data mining benchmarks: a survey and empirical demonstration," Data Mining and Knowledge Discovery, vol. 7, no. 4, pp. 349–371, 2003.
27. K. Joo-Chang and K. Chung, "Mining based time-series sleeping pattern analysis for life big-data," Wireless Personal Communications, vol. 105, no. 2, pp. 475–489, 2019.
28. A. Sengupta, Ap Prathosh, S. N. Shukla, V. Rajan, and K. Chandan, "Prediction and imputation in irregularly sampled clinical time series data using hierarchical linear dynamical models," in Proceedings of the 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 3660–3663, Jeju Island, Republic of Korea, July 2017.
29. S. S. Wulff, "Time series analysis: forecasting and control, 5th edition," Journal of Quality Technology, vol. 49, no. 4, pp. 418–419, 2017.
30. X. Chen, A. T. K. Wan, and Y. Zhou, "Efficient quantile regression analysis with missing observations," Journal of the American Statistical Association, vol. 110, no. 510, pp. 723–741, 2015.
31. B. Seong, "Smoothing and forecasting mixed-frequency time series with vector exponential smoothing models," Economic Modelling, vol. 91, pp. 463–468, 2020.
32. G. Zhang, X. Zhang, and H. Feng, "Forecasting financial time series using a methodology based on autoregressive integrated moving average and Taylor expansion," Expert Systems, vol. 33, no. 5, pp. 501–516, 2016.
33. A. K. Singh, "Fractionally delayed Kalman filter," IEEE/CAA Journal of Automatica Sinica, vol. 7, no. 1, pp. 169–177, 2020.
34. P. Qian, Y. Chen, J.-W. Kuo et al., "mDixon-based synthetic CT generation for PET attenuation correction on abdomen and pelvis jointly using transfer fuzzy clustering and active learning-based classification," IEEE Transactions on Medical Imaging, vol. 39, no. 4, pp. 819–832, 2020.
35. P. Qian, C. Xi, M. Xu et al., "SSC-EKE: semi-supervised classification with extensive knowledge exploitation," Information Sciences, vol. 422, pp. 51–76, 2018.
36. Y. Jiang, Z. Deng, F.-L. Chung et al., "Recognition of epileptic EEG signals using a novel multiview TSK fuzzy system," IEEE Transactions on Fuzzy Systems, vol. 25, no. 1, pp. 3–20, 2017.
37. P. Qian, J. Zhou, Y. Jiang et al., "Multi-view maximum entropy clustering by jointly leveraging inter-view collaborations and intra-view-weighted attributes," IEEE Access, vol. 6, pp. 28594–28610, 2018.
38. N. M. Faizah, L. F. Surohman, and R. P. Hendra, "Unbalanced data clustering with K-means and Euclidean distance algorithm approach case study population and refugee data," Journal of Physics: Conference Series, vol. 1477, Article ID 022005, 2020.
39. Y. Zhang, S. Wang, K. Xia, Y. Jiang, and P. Qian, "Alzheimer's disease multiclass diagnosis via multimodal neuroimaging embedding feature selection and fusion," Information Fusion, vol. 66, pp. 170–183, 2021.
40. N. M. Rezk, M. Purnaprajna, T. Nordström, and Z. Ul-Abdin, "Recurrent neural networks: an embedded computing perspective," IEEE Access, vol. 8, pp. 57967–57996, 2020.
41. J. Cai, J. Hu, X. Tang, T.-Y. Hung, and Y.-P. Tan, "Deep historical long short-term memory network for action recognition," Neurocomputing, vol. 407, pp. 428–438, 2020.
42. C. Hubert, A. Rivera, M. Farhadloo, and M. A. Pedroza, "Grape detection with convolutional neural networks," Expert Systems with Applications, vol. 159, Article ID 113588, 2020.
43. Y. Zhang, H. Ishibuchi, and S. Wang, "Deep Takagi–Sugeno–Kang fuzzy classifier with shared linguistic fuzzy rules," IEEE Transactions on Fuzzy Systems, vol. 26, no. 3, pp. 1535–1549, 2018.
44. L. Hubert and P. Arabie, "Comparing partitions," Journal of Classification, vol. 2, no. 1, pp. 193–218, 1985.
45. K. Stratis, C. Stavros-Richard, C. Alexander, and M. Fitzpatrick, "Detecting anomalies in time series data via a deep learning algorithm combining wavelets, neural networks and Hilbert transform," Expert Systems with Applications, vol. 85, pp. 292–304, 2017.

Copyright © 2020 Mu Qiao and Zixuan Cheng. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

