Abstract

In real application scenarios, the inherent impreciseness of sensor readings, the intentional perturbation of privacy-preserving transformations, and error-prone mining algorithms introduce considerable uncertainty into time series data. This uncertainty poses serious challenges for the similarity measurement of time series. In this paper, we first propose a model of uncertain time series inspired by Chebyshev inequality. It estimates the possible sample value range and the central tendency range in terms of a sample estimation interval and a central tendency estimation interval, respectively, at each time slot. In comparison with traditional models adopting repeated measurements and random variables, the Chebyshev model reduces the overall computational cost and requires no prior knowledge. We convert a Chebyshev uncertain time series into a matrix of certain time series, so that noise reduction and dimensionality reduction become applicable to uncertain time series. Second, we propose a new similarity matching method based on the Chebyshev model. It depends on the overlaps between the two sample estimation intervals and the overlaps between the central tendency estimation intervals of different uncertain time series. Finally, we conduct extensive experiments and analyze the results in comparison with prior work.

1. Introduction

Over the past decade, a large amount of continuous sensor data has been collected in many applications, such as logistics management, traffic flow management, astronomy, and remote sensing. In most cases, these applications organize the sequential sensor readings into time series, that is, sequences of data points ordered along the temporal dimension. The problem of processing and mining time series with incomplete, imprecise, and even error-prone measurements is of major concern in recent studies [16]. Typically, uncertainty occurs due to the impreciseness of equipment and methods during the physical data collection process. For example, the inaccuracy of a wireless temperature sensor follows a certain error distribution. In addition, intentional deviation introduced by privacy-preserving transformations also causes much uncertainty. For example, the real-time location information of a VIP may be perturbed [7, 8].

Managing and processing uncertain data were studied in the traditional database area during the 80s [9], and these techniques have been borrowed for the investigation of uncertain time series in recent years. Two methods are widely adopted for modeling uncertain time series. First, a probability density function (pdf) over the uncertain values, represented by a random variable, is estimated in accordance with a priori knowledge, among which the hypothesis of a Normal distribution is ubiquitous [10-12]; however, this hypothesis is quite limited in many applications, and uncertain time series data with Uniform or Exponential distributions is frequently found in other applications, for example, Monte Carlo simulation of power load and reliability evaluation of electronic components [13, 14]. Second, the unknown data distribution is summarized by repeated measurements (i.e., samples or observations) [15]; an accurate estimation of the data distribution requires a large number of repeated measurements, which causes high computational cost and more storage space.

In this paper, we propose a new model for uncertain time series by combining the two methods above and use descriptive statistics (i.e., the central tendency) to resolve the uncertainty. On this basis, we present an effective matching method to measure the similarity between two uncertain time series, which is adaptive to distinct error distributions. Our model estimates the sample value range and the central tendency range derived from Chebyshev inequality, extracting the sample estimation interval and the central tendency estimation interval from the repeated measurements at each time slot. Unlike traditional similarity matching methods for uncertain time series based on distance measurement, we adopt the overlap between sample estimation intervals and that between central tendency estimation intervals to evaluate similarity. If both estimation intervals from two uncertain time series at a corresponding time slot have a chance of taking the same value, the extent of similarity is larger than in the case where they can never be equal.

The rest of this paper is organized as follows. Section 2 reviews related work. In Section 3 we propose the model of Chebyshev uncertain time series. Section 4 covers the preprocessing of uncertain time series based on the Chebyshev model. Section 5 describes the similarity matching process with the new method. Section 6 addresses the experiments. Finally, Section 7 draws a conclusion.

To sum up, we list our contributions as follows:
(i) We propose a new model of uncertain time series based on the sample estimation interval and the central tendency estimation interval derived from Chebyshev inequality, and we convert a Chebyshev uncertain time series into a matrix of certain time series for dimensionality reduction and noise reduction.
(ii) We present an effective method to measure the similarity between two uncertain time series under distinct error distributions without a priori knowledge.
(iii) We conduct extensive experiments and demonstrate the effectiveness and efficiency of our new method in similarity matching between uncertain time series.

2. Related Work

The problem of similarity matching for certain time series has been extensively studied over the past decade; however, a similar problem arises for uncertain time series. Aßfalg et al. first propose a probabilistic bounded range query (PBRQ) [15]. Formally, let $\mathcal{D}$ be a set of uncertain time series, let $Q$ be an uncertain time series given as the query input, let $\varepsilon$ be a distance bound, and let $\tau$ be a probability threshold. The result set is given by
$$\mathrm{PBRQ}(Q, \mathcal{D}, \varepsilon, \tau) = \left\{ S \in \mathcal{D} \mid P\left(\mathrm{dist}(Q, S) \le \varepsilon\right) \ge \tau \right\}.$$

Dallachiesa et al. proposed the method called MUNICH [16]; the uncertainty is represented by means of repeated observations at each time slot [15]. An uncertain time series is regarded as a set of certain time series, in which each certain time series is constructed by choosing one sample observation for each time slot. The distance between two uncertain time series is defined as the set of distances between all combinations from one certain time series set to the other. Notice that the distance measures adopted by MUNICH are based on the $L_p$-norm and DTW distances; if $p = 2$, the $L_p$-norm is the Euclidean distance. The naive computation of the result set is not practical, since the large result space causes exponential computational cost.

PROUD [12] processes similarity queries over uncertain time streams. It employs the Euclidean distance and models the similarity measurement as the sum of the differences of the time series random variables. Each random variable represents the uncertainty of the value at the corresponding time slot. The standard deviation of the uncertainty and a single observation for each time slot are prerequisites for modeling uncertain time series. Sarangi and Murthy propose a new distance measure, DUST. It is derived from the Euclidean distance under the assumption that all time series values follow some specific distribution [11]. If the error of the time series values at different time slots follows a Normal distribution, DUST is equivalent to the weighted Euclidean distance. Compared to MUNICH, it does not need multiple observations and thus is more efficient. Inspired by the moving average, Dallachiesa et al. propose a simple similarity measurement that previous studies had not considered; it adopts Uncertain Moving Average (UMA) and Uncertain Exponential Moving Average (UEMA) filters to handle the uncertainty of time series data [16]. Although the experimental results show that these filters outperform the more sophisticated techniques above, a priori knowledge of the error standard deviation is indispensable.

Most of the above techniques are based on the assumption that the values of a time series are independent of one another. Obviously, this assumption is a simplification: adjacent values in a time series are correlated to a certain extent. The effect of correlations is studied in [16], and the research shows that there is a great benefit if the correlations are taken into account. Likewise, we implicitly embed correlations into the estimation intervals in terms of repeated observation values, adopting the degree of overlap to evaluate the similarity of uncertain time series. Our approach reduces the overall computational cost and outperforms the existing methods in accuracy; the new model requires no prior knowledge and makes dimensionality reduction applicable to uncertain time series.

3. Chebyshev Uncertain Time Series Modeling

As shown in [15], let $T = \langle t_1, t_2, \ldots, t_n \rangle$ be an uncertain time series of length $n$; each $t_i$ is a random variable represented by a set of measurements (i.e., random sample observations) $S_i = \{ s_{i1}, s_{i2}, \ldots, s_{im} \}$, where $m$ is denoted as the sample size of $t_i$. The distribution of the points in $S_i$ is the uncertainty at time slot $i$. The larger the sample size is, the more accurately the data distribution is estimated; however, the computational cost then becomes prohibitive. To solve this problem, we present a new model for uncertain time series by considering Chebyshev’s inequality below.

Lemma 1. Let $X$ (integrable) be a random variable with finite expected value $\mu$ and finite nonzero variance $\sigma^2$. Then, for any real number $k > 0$,
$$P\left(|X - \mu| < k\sigma\right) \ge 1 - \frac{1}{k^2}. \quad (2)$$

Formula (2) (Chebyshev’s inequality) [17] gives the lower bound of the probability of $|X - \mu| < k\sigma$; on condition that $\mu$ and $\sigma$ are known, the distribution information need not be considered. The real number $k$ has an important influence on the determination of the lower bound. For an appropriate $k$, the probability of the possible values of the random variable falling within the bounds satisfies a desired threshold. The estimation of the possible value range is as follows.

Theorem 2. Given a random variable $X$ with finite expected value $\mu$ and finite nonzero variance $\sigma^2$, if the $k$ in inequality (2) equals $\sqrt{10}$, then
$$P\left(\mu - \sqrt{10}\,\sigma < X < \mu + \sqrt{10}\,\sigma\right) \ge 0.9,$$
no matter which probability distribution $X$ obeys.

Proof. Consider
$$P\left(|X - \mu| < \sqrt{10}\,\sigma\right) \ge 1 - \frac{1}{\left(\sqrt{10}\right)^2} = 1 - \frac{1}{10} = 0.9.$$

The above proof shows that when $k$ equals $\sqrt{10}$, the probability of $X$ falling within the interval $[\mu - \sqrt{10}\,\sigma,\ \mu + \sqrt{10}\,\sigma]$ exceeds 0.9; nearly all possible measurements fall in this interval. We substitute this interval for the random variable $t_i$ to express the uncertainty.
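For illustration, a minimal Python sketch of this sample estimation interval could look as follows (the function name is ours, not from the paper, and the unknown $\mu$ and $\sigma$ are replaced by the sample mean and sample standard deviation, anticipating Definition 5; the implementation evaluated later in this paper is in MATLAB and C++):

```python
import numpy as np

def sample_estimation_interval(samples, k=np.sqrt(10.0)):
    """Chebyshev interval covering a value of the variable with probability >= 1 - 1/k^2.

    With k = sqrt(10), the guarantee is 0.9 regardless of the underlying distribution;
    mu and sigma are unknown, so they are estimated from the repeated observations.
    """
    samples = np.asarray(samples, dtype=float)
    mu_hat = samples.mean()            # estimate of the expected value
    sigma_hat = samples.std(ddof=1)    # unbiased sample standard deviation
    return mu_hat - k * sigma_hat, mu_hat + k * sigma_hat

# Example: 16 repeated readings of one sensor at a single time slot.
readings = 5.0 + 0.2 * np.random.default_rng(0).standard_normal(16)
print(sample_estimation_interval(readings))
```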

For the probability distribution of $t_i$, a description of the uncertainty by the possible value range alone is insufficient; a central or typical value is another feature of a probability distribution; it indicates a center or location of the distribution, called the central tendency [18]. The most common measure of central tendency is the arithmetic mean (mean for short), so the central tendency of a random sample set in the form of the mean is defined below.

Given a random sample set $\{X_1, X_2, \ldots, X_m\}$ drawn from $X$ with $E(X) = \mu$ and $D(X) = \sigma^2$, each sample satisfies the independent and identically distributed hypothesis; then
$$\bar{X} = \frac{1}{m}\sum_{j=1}^{m} X_j. \quad (5)$$

As a random variable, the expectation and variance of $\bar{X}$ are evaluated below:
$$E(\bar{X}) = \mu, \qquad D(\bar{X}) = \frac{\sigma^2}{m}. \quad (6)$$
Analogously, for the central tendency variable $\bar{X}$, in accord with Lemma 1, the corresponding estimation interval can be obtained.

Theorem 3. Given a random variable $X$ with $E(X) = \mu$ and $D(X) = \sigma^2$ and a random sample set of size $m$ drawn from the population of $X$, for the variable $\bar{X}$ with $E(\bar{X}) = \mu$ and $D(\bar{X}) = \sigma^2 / m$, if the $k$ in inequality (2) equals $\sqrt{10}$, then
$$P\left(\mu - \sqrt{10}\,\frac{\sigma}{\sqrt{m}} < \bar{X} < \mu + \sqrt{10}\,\frac{\sigma}{\sqrt{m}}\right) \ge 0.9.$$

Proof. Consider
$$P\left(\left|\bar{X} - \mu\right| < \sqrt{10}\,\frac{\sigma}{\sqrt{m}}\right) \ge 1 - \frac{D(\bar{X})}{\left(\sqrt{10}\,\sigma/\sqrt{m}\right)^2} = 1 - \frac{\sigma^2 / m}{10\,\sigma^2 / m} = 0.9.$$

In summary, the sample estimation interval of $t_i$ is the range of possible measurements, and the central tendency estimation interval is the range of the central tendency of $t_i$. The uncertainty of $t_i$ is represented by the combination of the two intervals at each time slot. An uncertain time series can be defined as below.

Definition 4. For an uncertain time series $T$ of length $n$, each element $t_i$ is a random variable with $E(t_i) = \mu_i$ and $D(t_i) = \sigma_i^2$, $\bar{s}_i$ is the central tendency of the random sample set $S_i$ drawn from the population corresponding to $t_i$, and a Chebyshev uncertain time series is defined below:
$$T = \langle c_1, c_2, \ldots, c_n \rangle, \qquad c_i = \left( \left[\mu_i - \sqrt{10}\,\sigma_i,\ \mu_i + \sqrt{10}\,\sigma_i\right],\ \left[\mu_i - \sqrt{10}\,\frac{\sigma_i}{\sqrt{m}},\ \mu_i + \sqrt{10}\,\frac{\sigma_i}{\sqrt{m}}\right] \right),$$
where $m$ is the cardinality of the random sample set $S_i$. Consider the Chebyshev uncertain time series above; $\mu_i$ and $\sigma_i$ are difficult to obtain because the distribution of the population is unidentified. We choose two statistics to estimate $\mu_i$ and $\sigma_i$: one is the arithmetic mean $\bar{s}_i$ of $S_i$, mentioned in (5); the other is the sample standard deviation $\hat{\sigma}_i$, calculated by the following equation:
$$\hat{\sigma}_i = \sqrt{\frac{1}{m-1}\sum_{j=1}^{m}\left(s_{ij} - \bar{s}_i\right)^2}. \quad (12)$$

Equations (12) and (6) show that $\hat{\sigma}_i^2$ and $\bar{s}_i$ are unbiased estimators for $\sigma_i^2$ and $\mu_i$, respectively. $\mu_i$ and $\sigma_i$ in Definition 4 can therefore be replaced with $\bar{s}_i$ and $\hat{\sigma}_i$; $T$ is rewritten as follows.

Definition 5. Given a sample set $S_i$ at time slot $i$, $t_i$ is represented as follows:
$$t_i = \left( \left[\bar{s}_i - \sqrt{10}\,\hat{\sigma}_i,\ \bar{s}_i + \sqrt{10}\,\hat{\sigma}_i\right],\ \left[\bar{s}_i - \sqrt{10}\,\frac{\hat{\sigma}_i}{\sqrt{m}},\ \bar{s}_i + \sqrt{10}\,\frac{\hat{\sigma}_i}{\sqrt{m}}\right] \right).$$

According to the descriptions above, the expression at each time slot can be transformed into a vector. It consists of four elements (excluding the time value), namely, $\bar{s}_i - \sqrt{10}\,\hat{\sigma}_i$, $\bar{s}_i - \sqrt{10}\,\hat{\sigma}_i/\sqrt{m}$, $\bar{s}_i + \sqrt{10}\,\hat{\sigma}_i/\sqrt{m}$, and $\bar{s}_i + \sqrt{10}\,\hat{\sigma}_i$, in ascending order, denoted as $v_i$; consider
$$v_i = \left( \bar{s}_i - \sqrt{10}\,\hat{\sigma}_i,\ \bar{s}_i - \sqrt{10}\,\frac{\hat{\sigma}_i}{\sqrt{m}},\ \bar{s}_i + \sqrt{10}\,\frac{\hat{\sigma}_i}{\sqrt{m}},\ \bar{s}_i + \sqrt{10}\,\hat{\sigma}_i \right).$$

Definition 6. An uncertain time series of length $n$ can be rewritten in terms of a matrix with the following formula:
$$T = \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix},$$
an $n \times 4$ matrix. Additionally, it can be expanded column-wise as follows:
$$T = \left( L^{S},\ L^{C},\ U^{C},\ U^{S} \right),$$
where $L^{S}$ is the lower bound sequence of the random variables, composed of the values $\bar{s}_i - \sqrt{10}\,\hat{\sigma}_i$; $L^{C}$ is referred to as the lower bound sequence of the central tendency variable; $U^{C}$ is named the upper bound sequence of the central tendency; and the upper bound sequence of the samples is denoted as $U^{S}$, as illustrated in Figure 1. Four certain time series constitute an uncertain time series based on the Chebyshev model.
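To make Definitions 5 and 6 concrete, the following Python sketch (names such as `chebyshev_vector` and `chebyshev_matrix` are ours, assuming the ascending ordering described above) builds the per-slot vector and stacks the vectors into the $n \times 4$ matrix:

```python
import numpy as np

K = np.sqrt(10.0)  # the k of Theorems 2 and 3

def chebyshev_vector(samples):
    """Four bounds of one time slot, in ascending order:
    sample lower, central-tendency lower, central-tendency upper, sample upper."""
    samples = np.asarray(samples, dtype=float)
    m = samples.size
    mean = samples.mean()
    std = samples.std(ddof=1)
    return np.array([mean - K * std,
                     mean - K * std / np.sqrt(m),
                     mean + K * std / np.sqrt(m),
                     mean + K * std])

def chebyshev_matrix(sample_sets):
    """Stack the per-slot vectors into an n x 4 matrix; its columns are the four
    certain bound sequences L^S, L^C, U^C, and U^S."""
    return np.vstack([chebyshev_vector(s) for s in sample_sets])

# Example: a length-3 uncertain time series with 16 observations per slot.
rng = np.random.default_rng(1)
sample_sets = [1.0 + 0.2 * rng.standard_normal(16),
               1.5 + 0.2 * rng.standard_normal(16),
               0.8 + 0.2 * rng.standard_normal(16)]
print(chebyshev_matrix(sample_sets).shape)  # (3, 4)
```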

4. Uncertain Time Series Preprocessing

4.1. Outlier Elimination from Sample Set

In the process of sample collection, the occurrence of outliers is inevitable. An outlier is an abnormal observation value that is distant from the others [19]. This may be ascribed to undesirable variability in the measurement or to experimental errors. Outliers can occur in any distribution; naive interpretation of statistics such as the sample mean and sample variance derived from a sample set that includes outliers may be misleading. Excluding outliers from the sample set enhances the effectiveness of these statistics. The definition of an outlier can be formalized below.

Definition 7. Given a sample set $S_i$ at time slot $i$, $S_i$ is sorted in ascending order. The sorted elements constitute a sample sequence, denoted as $S_i'$. Let $Q_1$ and $Q_3$ be the lower and upper quartiles of $S_i'$, respectively; then we define an outlier to be any sample outside the range
$$\left[\, Q_1 - c\,(Q_3 - Q_1),\ Q_3 + c\,(Q_3 - Q_1) \,\right]$$
for a nonnegative constant $c$, which adjusts the granularity of excluding outliers.
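A possible implementation of this quartile-based filter is sketched below (the default $c = 1.5$ is only the conventional Tukey fence, not a value prescribed by the paper):

```python
import numpy as np

def remove_outliers(samples, c=1.5):
    """Drop samples outside [Q1 - c*(Q3 - Q1), Q3 + c*(Q3 - Q1)] (Definition 7)."""
    samples = np.asarray(samples, dtype=float)
    q1, q3 = np.percentile(samples, [25, 75])   # lower and upper quartiles
    iqr = q3 - q1
    keep = (samples >= q1 - c * iqr) & (samples <= q3 + c * iqr)
    return samples[keep]

# Example: one wild reading among otherwise well-behaved observations.
obs = np.array([4.9, 5.1, 5.0, 5.2, 4.8, 9.7])
print(remove_outliers(obs))  # the 9.7 is excluded
```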

4.2. Exponential Smoothing for Noise Reduction

In the area of signal processing, noise is a general term for unwanted (and, in general, unknown) modifications introduced during signal capture, storage, transmission, processing, or conversion. To recover the original data from the noise-corrupted signal, filters applied to noise reduction are ubiquitous in the design of signal processing systems. An exponential smoothing filter assigns exponentially decreasing weights to the samples in time order and is effective [20-22]. In this subsection, we use exponential smoothing to process the noise in time series data. Given a certain time series $X = \langle x_1, x_2, \ldots, x_n \rangle$, $x_i$ is the observation at time slot $i$, ES is the smoothed sequence associated with $X$, and $es_i$ is the smoothed value at time slot $i$. If the first sample of the raw time series is chosen as the initial value and an appropriate smoothing factor $\alpha$ is picked, all values of the smoothed sequence ES are available iteratively. The single form of exponential smoothing is given by
$$es_1 = x_1, \qquad es_i = \alpha\, x_i + (1 - \alpha)\, es_{i-1}, \quad i > 1.$$
The raw time series begins at time slot 1; the smoothing factor $\alpha$ falls in the interval $(0, 1)$. On the basis of this equation, the exponential smoothing of an uncertain time series modeled as a Chebyshev matrix (Definition 6) is defined by applying the smoothing to each of the four bound sequences $L^{S}$, $L^{C}$, $U^{C}$, and $U^{S}$.

For example, a raw time series is chosen from the ECG200 dataset in the UCR time series collection [23]; after perturbation with standard deviation 0.2, it is modeled as a Chebyshev uncertain time series, illustrated in Figure 2; tiny fluctuations around the four lower and upper bound sequences reflect the presence of noise. We perform exponential smoothing on the uncertain time series, choosing the first sample of each bound sequence as the initial value and setting the smoothing factor to 0.3. Note that a higher value of $\alpha$ actually reduces the level of smoothing; in the limiting case $\alpha = 1$, the output series is just the same as the original series. After triple exponential smoothing, the uncertain time series becomes clearer, because triple exponential smoothing takes into account seasonal changes as well as trends, as illustrated in Figure 3.
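The sketch below illustrates only the single form applied column-wise to the Chebyshev matrix (our simplification; the triple smoothing used for Figure 3, which also models trend and seasonality, is not reproduced here):

```python
import numpy as np

def exp_smooth(x, alpha=0.3):
    """Single exponential smoothing: es_1 = x_1, es_i = alpha*x_i + (1-alpha)*es_{i-1}."""
    x = np.asarray(x, dtype=float)
    es = np.empty_like(x)
    es[0] = x[0]
    for i in range(1, len(x)):
        es[i] = alpha * x[i] + (1 - alpha) * es[i - 1]
    return es

def smooth_chebyshev_matrix(T, alpha=0.3):
    """Smooth each of the four bound sequences (columns) of the n x 4 Chebyshev matrix."""
    T = np.asarray(T, dtype=float)
    return np.column_stack([exp_smooth(T[:, c], alpha) for c in range(T.shape[1])])
```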

4.3. Dimensionality Reduction Using Wavelets

In the process of analyzing and organizing high-dimensional data, the main difficulty is the "curse of dimensionality" coined by Bellman and Dreyfus [24]. When the dimensionality of the data space increases, the data size soars, and thus the available data becomes sparse. Extracting these valid sparse data as feature vectors in a lower-dimensional feature space is the essence of dimensionality reduction. Time series, as a special kind of high-dimensional data, suffers from the curse of dimensionality as well. We adopt wavelets, frequently used in dimensionality reduction, to deal with the time series data [25-27].

Daubechies [28] finds that wavelet transforms can be implemented using a pair of Finite Impulse Response (FIR) filters, called a Quadrature Mirror Filter (QMF) pair. These filters are often used in the area of signal processing as they lend themselves to efficient implementation. Each filter is represented as a sequence of numbers, and the length of this sequence is the length of the filter. The output of a QMF pair consists of two separate components: a high-pass and a low-pass filter, which correspond to high-frequency and low-frequency output, respectively. Wavelet transforms are considered to be hierarchical since they operate stepwise. The input at each step is passed through the QMF pair. Both the high-pass and the low-pass components of the QMF output are half the length of the input. The high-pass component is naturally associated with details, while the low-pass component concentrates most of the energy or information of the data. The low-pass component is used as further input; hence the length of the input is reduced by a factor of 2 at each step. A single step is illustrated in Figure 4, where the length refers to the length of the signal sequence in general, not some concrete value.

For example, as shown in Figure 3, we choose the Haar wavelet to build the QMF pair; the low-pass output is a dimension-reduced uncertain time series whose length shortens from 270 to 135, illustrated in Figure 5. The filter sequences of the QMF pair based on the Haar wavelet are defined as follows:
$$h = \left(\frac{1}{\sqrt{2}},\ \frac{1}{\sqrt{2}}\right), \qquad g = \left(\frac{1}{\sqrt{2}},\ -\frac{1}{\sqrt{2}}\right).$$
Note that the low-pass output is obtained through the convolution of $h$ with the uncertain time series to be reduced in dimension, keeping every second coefficient; in the same manner, the convolution of $g$ with the uncertain time series gives the high-pass output.
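A minimal sketch of one Haar QMF decomposition step is given below (function names are ours; for simplicity an even-length input is assumed and boundary handling is kept trivial):

```python
import numpy as np

H_LOW = np.array([1.0, 1.0]) / np.sqrt(2.0)    # Haar low-pass (averaging) filter h
H_HIGH = np.array([1.0, -1.0]) / np.sqrt(2.0)  # Haar high-pass (detail) filter g

def haar_step(x):
    """One QMF step: convolve with the filter pair and keep every second coefficient,
    halving the length; the low-pass part carries most of the energy."""
    x = np.asarray(x, dtype=float)
    low = np.convolve(x, H_LOW)[1::2][: len(x) // 2]
    high = np.convolve(x, H_HIGH)[1::2][: len(x) // 2]
    return low, high

def reduce_chebyshev_matrix(T):
    """Apply the low-pass step to each bound sequence of the n x 4 Chebyshev matrix."""
    T = np.asarray(T, dtype=float)
    return np.column_stack([haar_step(T[:, c])[0] for c in range(T.shape[1])])
```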

5. Similarity Match Processing

We present a new matching method based on Chebyshev uncertain time series. As shown in Definition 5, without loss of generality, we use two variables $t_i^A$ and $t_i^B$ from different uncertain time series $A$ and $B$ at time slot $i$ to specify the matching procedure. Let $I_A = [l_A, u_A]$ and $I_B = [l_B, u_B]$ be the sample estimation intervals of $A$ and $B$ at time slot $i$ in Figure 6(a). If the two intervals overlap, as shown in Figure 6(b), $t_i^A$ and $t_i^B$ have a possibility of taking an identical value from the overlapping set; as the overlap increases in Figures 6(c) and 6(d) (expressed by the double-arrow solid lines), this possibility increases gradually. Thus, $t_i^A$ and $t_i^B$ become more similar in terms of the range of samples. The above analysis outlines the similarity measure based on the overlap of sample estimation intervals qualitatively; we now analyze the process quantitatively. The lengths of the two sample estimation intervals at an identical time slot are, in general, different. As shown in Figure 6, let $len_A = u_A - l_A$ and $len_B = u_B - l_B$ be the lengths of the sample estimation intervals of $t_i^A$ and $t_i^B$, respectively, and let $o$ denote the length of the overlap between $I_A$ and $I_B$ illustrated in Figures 6(b) and 6(c):
$$o = \min(u_A, u_B) - \max(l_A, l_B).$$
In Figure 6(d), where one interval contains the other, $o$ equals the length of the shorter interval, $\min(len_A, len_B)$. If the two observation intervals do not overlap, as in Figure 6(a), a problem arises; in fact, this case should be marked, so we keep the negative sign produced by the formula above. If $o < 0$, the two observation intervals have no overlap, and the lower $o$ is, the farther apart the two intervals are. Let the Overlap Ratio, denoted rop, be the ratio of the length of the overlap to the length of the observation interval, which quantifies the degree of overlap; thus,
$$rop_A = \frac{o}{len_A}, \qquad rop_B = \frac{o}{len_B},$$
where each of them is at most 1 (only when the length of the overlap equals the length of the observation interval, as in Figure 6(d), does the ratio equal 1).
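The two quantities can be sketched as follows (function names are ours; the no-overlap case is marked by the sign of the same expression):

```python
def overlap_length(a, b):
    """Signed overlap between intervals a = (l_a, u_a) and b = (l_b, u_b).

    Positive: length of the intersection (equal to the shorter interval's length when
    one interval contains the other, as in Figure 6(d)). Negative: the gap between the
    intervals, marking the no-overlap case of Figure 6(a).
    """
    (la, ua), (lb, ub) = a, b
    return min(ua, ub) - max(la, lb)

def overlap_ratios(a, b):
    """Overlap Ratio of each interval: overlap length over the interval's own length."""
    o = overlap_length(a, b)
    return o / (a[1] - a[0]), o / (b[1] - b[0])

# Example: partially overlapping sample estimation intervals.
print(overlap_ratios((0.0, 2.0), (1.0, 4.0)))  # (0.5, 0.333...)
```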

We combine $rop_A$ and $rop_B$ to construct a single quantity called the Overlap Degree of the sample estimation interval, denoted as $od_i$, so that it measures the overlaps linearly. Here is the definition:
$$od_i = \frac{rop_A + rop_B}{2},$$
where $od_i$ is likewise at most 1. The sum of $od_i$ denotes the degree of overlap between the two uncertain time series $A$ and $B$ of length $n$ such that
$$OD_S(A, B) = \sum_{i=1}^{n} od_i.$$

We will further discuss the similarity between $t_i^A$ and $t_i^B$. As illustrated in Figure 7, even if the two sample estimation intervals at time slot $i$ overlap entirely, it is difficult to determine whether the two variables are similar to a certain degree, because of the variety of possible overlaps between the central tendency estimation intervals $C_A$ and $C_B$. In other words, the degree of overlap between $C_A$ and $C_B$ determines the degree of similarity between $t_i^A$ and $t_i^B$ on condition of identical sample estimation intervals. As shown in Figure 7(c), the two variables, compared to the cases in Figures 7(a) and 7(b), are obviously more similar; the larger the overlap is, the more similar the two variables are. If the central tendency estimation intervals have no or little overlap while the sample estimation intervals overlap to some extent, the estimation of similarity cannot be obtained from the sample intervals alone. With regard to the above cases, the sample-interval Overlap Degree alone is not sufficient to measure the similarity; we further need to measure the similarity between the two variables with the central tendency estimation intervals.

As illustrated in Figure 8, there are three cases of overlapping. Let $o^{c}$ be the overlap between the two central tendency estimation intervals. In Figure 8(a), for the estimation intervals $C_A = [l_A^{c}, u_A^{c}]$ and $C_B = [l_B^{c}, u_B^{c}]$, the lengths of the estimation intervals are represented as
$$len_A^{c} = u_A^{c} - l_A^{c}, \qquad len_B^{c} = u_B^{c} - l_B^{c}.$$
With no overlap between them, $o^{c}$ is denoted as
$$o^{c} = \min\left(u_A^{c}, u_B^{c}\right) - \max\left(l_A^{c}, l_B^{c}\right) < 0.$$
In Figure 8(b), $C_A$ and $C_B$ overlap partially, as described below:
$$o^{c} = \min\left(u_A^{c}, u_B^{c}\right) - \max\left(l_A^{c}, l_B^{c}\right) > 0.$$
In Figure 8(c), $C_A$ contains $C_B$; the overlap is represented as follows:
$$o^{c} = \min\left(len_A^{c}, len_B^{c}\right).$$
Analogous to rop, the Overlap Ratio of the central tendency estimation intervals of $t_i^A$ and $t_i^B$ is defined as
$$rop_A^{c} = \frac{o^{c}}{len_A^{c}}, \qquad rop_B^{c} = \frac{o^{c}}{len_B^{c}}.$$
The Overlap Degree of the central tendency estimation intervals, namely, $od_i^{c}$, between $t_i^A$ and $t_i^B$ is depicted below:
$$od_i^{c} = \frac{rop_A^{c} + rop_B^{c}}{2}.$$

We sum up $od_i^{c}$ over the two uncertain time series $A$ and $B$ of length $n$; the sum, indicated by $OD_C$, is represented as
$$OD_C(A, B) = \sum_{i=1}^{n} od_i^{c}.$$

In conclusion, we combine $OD_S$ and $OD_C$ to evaluate the degree of similarity between two uncertain time series, which is signified by DOS and expressed as follows:
$$\mathrm{DOS}(A, B) = \lambda\, OD_S(A, B) + (1 - \lambda)\, OD_C(A, B),$$
where $\lambda$ is a factor in the range of $[0, 1]$; in different applications, $\lambda$ and $1 - \lambda$ refer to different weights; here $\lambda$ is fixed to a single value for all experiments. Consider $\mathrm{DOS}(A, B) \le n$ ($n$ is the length of the uncertain time series).
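Under these definitions, the whole similarity computation can be sketched as follows; the per-slot Overlap Degree as the mean of the two Overlap Ratios and the default weight `lam=0.5` reflect our reading of the description above rather than a verbatim transcription of the paper's formulas:

```python
import numpy as np

def overlap_length(low_a, up_a, low_b, up_b):
    """Signed overlap between [low_a, up_a] and [low_b, up_b] (vectorized)."""
    return np.minimum(up_a, up_b) - np.maximum(low_a, low_b)

def dos(A, B, lam=0.5):
    """Degree of similarity between two n x 4 Chebyshev matrices whose columns are
    (sample lower, CT lower, CT upper, sample upper); interval lengths are assumed > 0."""
    A, B = np.asarray(A, dtype=float), np.asarray(B, dtype=float)
    # sample estimation intervals
    o_s = overlap_length(A[:, 0], A[:, 3], B[:, 0], B[:, 3])
    od_s = 0.5 * (o_s / (A[:, 3] - A[:, 0]) + o_s / (B[:, 3] - B[:, 0]))
    # central tendency estimation intervals
    o_c = overlap_length(A[:, 1], A[:, 2], B[:, 1], B[:, 2])
    od_c = 0.5 * (o_c / (A[:, 2] - A[:, 1]) + o_c / (B[:, 2] - B[:, 1]))
    return lam * od_s.sum() + (1.0 - lam) * od_c.sum()
```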

6. Experimental Validation

In this section, we examine the effectiveness and efficiency of the new method proposed in this paper. First, we introduce the uncertain time series value generation and the experimental datasets; then we analyze the results of the experiments. All the methods are implemented in MATLAB and C++, and the experiments are run on a PC with a 3.1 GHz CPU and 4 GB of RAM.

6.1. Uncertainty Model and Assumption

As described in Definition 5, an uncertain time series is a time series including a sample estimation interval and a central tendency estimation interval derived from a set of observations at each time slot. Given a time slot $i$, the value of the uncertain time series is modeled as
$$t_i = r_i + e_i,$$
where $r_i$ is the true value and $e_i$ is the error. In general, the error can be drawn from distinct probability distributions; this is why we treat $t_i$ as a random variable at time slot $i$.
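A sketch of the perturbation used to generate uncertain observations from an exact series is shown below (the function name and the default of 16 observations per slot are our assumptions; the Uniform and Exponential errors are shifted and scaled so that they have zero mean and the requested standard deviation):

```python
import numpy as np

def perturb(series, std, dist="normal", m=16, rng=None):
    """Turn an exact time series into m zero-mean noisy observations per time slot."""
    rng = rng or np.random.default_rng()
    x = np.asarray(series, dtype=float)
    n = x.size
    if dist == "normal":
        e = rng.normal(0.0, std, size=(n, m))
    elif dist == "uniform":
        half = std * np.sqrt(3.0)                    # Uniform(-a, a) has std a/sqrt(3)
        e = rng.uniform(-half, half, size=(n, m))
    elif dist == "exponential":
        e = rng.exponential(std, size=(n, m)) - std  # shift to zero mean; std equals the scale
    else:
        raise ValueError(f"unknown error distribution: {dist}")
    return x[:, None] + e  # one row of observations per time slot
```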

6.2. Experimental Setup

Inspired by [11, 12, 15], we use real time series datasets of exact values and subsequently introduce uncertainty with the uncertainty model above through perturbation. In our experiments we consider Uniform, Normal, and Exponential error distributions with zero mean and vary the standard deviation within the interval [0.2, 2].

We selected 19 real datasets from the UCR classification dataset collection [23]; they represent a wide range of application areas: 50words, Adiac, Beef, CBF, Coffee, ECG200, Lighting2, SyncCtrl, Wafer, FaceFour, FaceAll, Fish, Lighting7, GunPoint, OliveOil, OSULeaf, SwedLeaf, Trace, and Yoga. The training and testing sets are reconfigured, and we acquired the time series sets shown in Table 1.

6.3. Accuracy

For the purpose of evaluating the quality of the results, we use the two standard measures of recall and precision. Recall is defined as the percentage of the truly similar uncertain time series that are found by the algorithm. Precision is the percentage of the similar uncertain time series identified by the algorithm that are truly similar. Accuracy is measured in terms of the harmonic mean of recall and precision to facilitate the comparison. The accuracy is defined as follows:
$$\mathrm{accuracy} = \frac{2 \times \mathrm{recall} \times \mathrm{precision}}{\mathrm{recall} + \mathrm{precision}}.$$

As mentioned in [11], an effective similarity measure on uncertain data allows us to reason about the original data without uncertainty. For the sake of validating the new method, we conduct experiments from different aspects.

In the first experiment, we examine the effectiveness of our approach for different error standard deviations and error distributions. In Figure 9, the results from different error distributions are averaged over all datasets and shown at various error standard deviations. The accuracy decreases linearly as the error standard deviation increases from 0.2 to 2, and the performance with the Uniform distribution is better than with the other two distributions. Bigger standard deviations introduce more uncertainty into the time series data.

Next, we verify the effectiveness on different datasets. In Figure 10, each time series from each dataset is perturbed with a different error distribution, that is, Normal, Uniform, or Exponential; combining the accuracy at standard deviation 1 (weighted 20%) with the accuracy at standard deviation 0.4 (weighted 80%) as the accuracy for relatively small standard deviations on each dataset, most datasets perform well (accuracy reaches 80% or so, and some reach 90%), with SyncCtrl being the best performer (accuracy = 96%), except for Beef, OliveOil, and SwedLeaf, which will be explained below. The trend is also verified with the Uniform and Exponential error distributions.

Figure 11 summarizes the performance on each dataset for relatively big error standard deviations, integrating the accuracy at standard deviation 2 (weighted 20%) with the accuracy at standard deviation 1.4 (weighted 80%). With the increase of the standard deviation, the accuracy on all datasets decreases. With the Normal error, the accuracy of Adiac drops the fastest, by nearly 50% (from 81% to 33%), and this tendency also holds with the Exponential error distribution. Coffee, FaceFour, SyncCtrl, and Yoga are exceptions; the increasing standard deviations have no significant impact on their accuracy. With the Uniform error, the accuracy of Fish drops the fastest, by up to 30.4%, the accuracy of Adiac drops by 25.8%, and ECG200 decreases by 14.4%; the accuracy of the other datasets falls slightly. With the Exponential error, most datasets drop fast, and the fastest is Adiac, by up to 41%. In conclusion, the Uniform error impacts all datasets only slightly with increasing standard deviation, compared to the Normal and Exponential errors.

As mentioned above, the datasets Beef, OliveOil, and SwedLeaf perform poorly, while Coffee, FaceFour, SyncCtrl, and Yoga perform well in Figures 10 and 11. We find that this is partially related to the average absolute value (AAV) of the respective disturbed datasets. As shown in Figure 12, we compute the average absolute values of all disturbed datasets; the AAVs of Beef and OliveOil are 0.0956 and 0.3337, respectively, smaller than the others. The AAV of the disturbed Coffee is 18.0541, which is the biggest among all datasets; the other three well-performing datasets also have big AAVs. In other words, datasets with large AAVs are hard to affect with small uncertainty even when the standard deviation of the error reaches 2. On the contrary, Beef and OliveOil are easily affected even if the standard deviation of the error is 0.2. However, SwedLeaf is different; its behavior may be ascribed to its wave form, which we will explore in future research.

We also consider the impact of the size of the observation sample, which is important because both kinds of estimation intervals stem from the observation samples. As described above, all the experimental results so far are based on a fixed size of observation sample; we now describe how the results change as the size of the observation sample grows. In Figure 13(a), with the Normal error, the accuracy for three sizes of observation sample is shown at various standard deviations. The result with 64 samples is the best, and the result with 32 samples is better than that with 16 samples. At relatively small standard deviations (0.2–0.8), the results of the three sizes differ little; as the deviation grows, the differences gradually become more observable. The results for the Uniform and Exponential distributions are similar to the Normal case and are reported in Figures 13(b) and 13(c). The differences among the three sizes with the Uniform error are smaller than with the other two distributions.

In Figure 14(a) we compare our approach under the Normal error distribution with other techniques, namely, PROUD, DUST, the Euclidean distance, UMA, and UEMA, following the methodology proposed in [16]. The results demonstrate that our approach is more effective than the other techniques under all three error distributions. With an error standard deviation of 0.2, UEMA and UMA outperform the remaining baselines; PROUD performs slightly better than DUST and Euclidean, but with larger error standard deviations its accuracy drops slightly below DUST and Euclidean. This trend also holds for the Uniform and Exponential distributions, illustrated in Figures 14(b) and 14(c).

We also compare the execution time of our approach with the other techniques mentioned above. Because the results of the three distributions are analogous, the Normal distribution is taken as an example to show the trend of the results. Figure 15 shows the CPU time per query for the Normal error distribution with the error standard deviation varying from 0.2 to 2. It shows that varying the standard deviation of the error basically does not impact the running time of these techniques. The performance of our approach is slightly better than DUST, UMA, and UEMA. The best time performer is Euclidean. Note that we do not apply PROUD to wavelet synopses; this may be the reason why it does not perform well.

In Figure 16, we report the CPU time per query for the Normal error distribution with the time series length varying between 50 and 1000. The time series of different lengths are obtained by reconstitution of the raw datasets. The figure shows that the execution time increases linearly with the time series length. The results of our approach are better than DUST and PROUD; Euclidean achieves the best performance.

7. Conclusion

In this paper, we propose a new model of uncertain time series and a new approach that measures the similarity between uncertain time series. It outperforms the state-of-the-art techniques, most of which employ a distance measure to evaluate the similarity.

We validate the new approach with three kinds of error distributions, with the standard deviation of the error spanning the range from 0.2 to 2; meanwhile, we compare the new approach with the techniques previously proposed in the literature. Our experiments are based on 19 real datasets. The results demonstrate that overlap measuring, based on the observation interval and the central tendency interval, outperforms the more complex alternatives. If the expected value of the error in the experiments is considered to be zero, the average of the samples may be a good estimate for the unknown value at each time slot; it characterizes the center of the data distribution.

In the future, we will make a deeper exploration of the modeling of uncertain time series data when the expected value of the error is zero. We will extend our work to indexing techniques for uncertain time series. We will also explore the influence of the wave characteristics of time series data and the management of large volumes of uncertain time series.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.