Abstract

This study demonstrates the existence of uncertainty in hydrometeorological trend analyses using historical rainfall from Kenya in East Africa. To examine the influence of short- and long-term persistence on trend analyses, a total of 13 forms of rank-based techniques, including the Mann-Kendall (MK), Spearman's Rho (SMR), and Cumulative Rank Difference (CRD) tests, were employed. CRD-based graphical diagnoses of trends and subtrends were performed. To assess data-related uncertainty, a resampling procedure was applied. It was shown that, at a selected significance level, the null hypothesis H0 (no trend) can be rejected for the trend direction while the evidence to reject H0 (zero trend magnitude) is statistically insufficient. Graphical and statistical approaches, when combined, yield more comprehensive information for inference than purely statistical results alone. Alongside an apparent linear trend, variations in the nonlinear (e.g., cyclical) component of the series may also not be due to natural randomness. Method-related uncertainty is not negligible, especially for series with persistent fluctuations. These findings highlight the need to assess uncertainty in trend results. Furthermore, it is recommended that the conclusiveness of trends be cautiously premised not only on statistical grounds but also on a more considered physical and/or theoretical understanding of the hydrometeorological processes.

1. Introduction

The changes observed in the earth's weather conditions (global warming) have strongly drawn the interest of the research community towards detection of changes in terms of variability and trends, so as to comprehend the influence of the climate system on hydrology, meteorology, and so forth. Examples of recent studies on the assessment of hydrometeorological changes include [1–4]. Many studies, especially those on trend analyses, tend to be limited to verifying whether or not a significant trend exists. In other words, apart from a few examples such as [5], little attention seems to be given to uncertainty assessment in trend analyses. This could be why relevant and comprehensive studies on uncertainty in trend results can hardly be found in the literature.

The uncertainties in trend analyses can be data- and/or method-related. The characterization of data by ties, persistence, and so forth may invalidate the trend test assumptions. On the other hand, a number of methods, both parametric and nonparametric, exist for trend detection. The linear regression test is a typical parametric method. Nonparametric methods include the Mann-Kendall (MK) [6, 7], Spearman's Rho (SMR) [8–10], and Cumulative Rank Difference (CRD) [11, 12] tests. How each method attempts to deal with the influence of persistence, data irregularities in the form of ties, and so forth on trend results also tends to differ. For instance, parametric approaches assume the data to be normally distributed. However, the tendency of hydrometeorological data to be nonnormally distributed could lower the accuracy of results from parametric tests. In contrast, the normal distribution requirement is often assumed to be unnecessary for the rank-based tests. Furthermore, because of the uncertainty of the statistical methods in accurately quantifying trend directions and their significance, various forms of the trend tests continue to be developed. For instance, several attempts have been made to improve the MK test with respect to prewhitening, test statistic variance correction considering short- and long-term persistence, and so forth. Nonetheless, many trend tests still rely on purely statistical results, though some recent graphical techniques also exist. Basically, the use of only one approach and/or trend test for change investigations can leave the analyst without insight into the influence of method selection on the results. If trend results are to inform decisions on operational practices, assessment of the associated uncertainties becomes vital to give an insight into the degree of confidence that can be accorded to the analyses and/or findings.
According to [13], reliable uncertainty estimation in hydrological investigation is vital for proper management of the environment and water resources. In the same vein, trend results on which the possible uncertainties are not investigated and clearly communicated may unnecessarily prompt lavish expenditure of the limited economic resources for environmental planning and management.

This study, with respect to trend detection, aims to explore the possible sources of uncertainty and to highlight insights about the influence of these uncertainties on the analyses. This paper is organized as follows. An introduction to the problem studied is given in Section 1. Section 2 explores the main trend test assumptions, how they can be violated, possible uncertainty sources, and so forth. Section 3 presents the methodology of a case study on the assessment of the uncertainties in trend detection. Finally, Section 4 comprises results and discussion from the case study.

2. Uncertainty in Trend Analyses

2.1. Sources of Uncertainty and Suggestions
2.1.1. Data-Related Sources

There are three main assumptions underlying the nonparametric trend tests commonly applied in hydrometeorology. It is the violation of these assumptions that constitutes the main source of data-related uncertainty, as explored next.

Firstly, the data points are assumed to occur over time in an independent and identically distributed way. This assumption can be violated when the data are characterized by ties and persistence, which influence the magnitude of the test statistic variance. An increase in data ties deflates the variance of the test statistic; this is synonymous with the effect of anti-autocorrelation in the series. On the other hand, positive autocorrelation inflates the sampling variance of the test statistic. In other words, whereas positive autocorrelation increases the type I error (i.e., the probability of rejecting the null hypothesis H0 while it is true), negative autocorrelation increases the type II error (i.e., the probability of accepting H0 while it is false). The most popular model assumed for addressing the influence of short-term persistence on trends is the lag-1 autoregressive AR(1) process. Based on the AR(1) model, various ways exist to deal with short-term persistence, for example, prewhitening and test statistic variance correction. The main challenge is that if the data under consideration follow other correlation models, a procedure based on the assumed AR(1) model may yield inaccurate rejection rates for the trend detection methods. Thus, it is vital to investigate and select the most suitable correlation model for the given series to enhance the accuracy of the procedure dealing with the influence of persistence on trend results.
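To make the AR(1)-based handling concrete, the following is a minimal Python sketch of trend-free prewhitening: a median-of-pairwise-slopes trend is removed, the detrended residuals are prewhitened with the sample lag-1 autocorrelation, and the trend is added back before testing. The function names and the AR(1) assumption are illustrative, not the exact procedure of any cited reference.

```python
import numpy as np

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation coefficient."""
    d = np.asarray(x, dtype=float)
    d = d - d.mean()
    return np.sum(d[:-1] * d[1:]) / np.sum(d * d)

def trend_free_prewhiten(y):
    """Trend-free prewhitening sketch under an AR(1) noise assumption:
    (1) remove a median-of-pairwise-slopes (Theil-Sen) trend,
    (2) prewhiten the detrended residuals with the lag-1 coefficient,
    (3) add the trend back before applying a trend test."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    t = np.arange(n)
    slopes = [(y[j] - y[i]) / (j - i)
              for i in range(n - 1) for j in range(i + 1, n)]
    b = float(np.median(slopes))          # trend magnitude
    resid = y - b * t                     # detrended series
    r1 = lag1_autocorr(resid)             # AR(1) coefficient estimate
    pw = resid[1:] - r1 * resid[:-1]      # prewhitened residuals
    return pw + b * t[1:], b, r1
```

If the true noise process is not AR(1), the lag-1 coefficient captures only part of the dependence, which is exactly the caveat raised above.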

Interestingly, [5] suggested that the classical framework of independent and identical distribution of events within the time series should be discarded. The authors argue that statistical uncertainty is dramatically increased by the presence of long-term persistence in the series. Indeed, the number of significant trends tested under the assumption of short-term persistence was shown to decrease remarkably when long-term persistence was accounted for in the trend detection [14–16]. To explain changes in hydrometeorological series in terms of long-term persistence, the fractional Gaussian noise model, which considers, for example, the Hurst exponent [17], can be used. For the SMR and the MK, the variance correction schemes under the scaling theory can be found in [18] and [15], respectively. For the CRD, the procedure for the correction of its test statistic variance considering long-term persistence can be found in Appendix B of this paper. As shown in Appendix B, long-term persistence inflates the test statistic variance when the scaling exponent exceeds 0.5; this inflation leads to an increase in the type I error of the trend analysis. Some of the challenges in the consideration of long-term persistence that may lower the accuracy of trend results include (1) inaccuracy in the determination of the persistence parameters, especially for samples of small size, and (2) the difficulty of deriving exact expressions for the modified variance of the test statistic amidst a significant number of groups of tied data points, especially for a rank-based trend test, for example, the MK, which considers ties in groups [15]. Even if an exact expression for the modified variance in case (2) were obtained, the interaction among the various groups of ties would make the variance computationally arduous for large datasets. In such circumstances, the only plausible option remains approximation of the effect of long-term persistence on trend.
Furthermore, although on purely statistical grounds even such a basis as the presence of long-term persistence in a series can be questionable [5], in the author's opinion the consideration of long-term persistence is undoubtedly vital for realistic hydrometeorological trend analyses. However, a more considered theory to characterize the natural and/or physical behaviour of the hydrometeorological processes needs to be developed. As the development of such a theoretical framework for change detection remains an open problem, methods which consider short- or long-term persistence continue to be used. Whereas the focus of this paper is not to develop a rigorous theoretical framework for attribution problems, the need to assess statistical uncertainties in trend analyses remains unquestionable regardless of how the influence of persistence is considered.
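For readers who wish to check whether a series exhibits scaling at all, a rough estimate of the Hurst exponent can be obtained with the aggregated-variance method. The sketch below is illustrative only (the cited works use more rigorous estimators), and the function name is an assumption:

```python
import numpy as np

def hurst_aggregated_variance(x, block_sizes=(1, 2, 4, 8, 16, 32, 64)):
    """Rough Hurst-exponent estimate by the aggregated-variance method:
    for a self-similar process the variance of block means scales as
    Var(X^(m)) ~ m^(2H - 2), so H follows from a log-log fit."""
    x = np.asarray(x, dtype=float)
    log_m, log_v = [], []
    for m in block_sizes:
        nblocks = len(x) // m
        means = x[:nblocks * m].reshape(nblocks, m).mean(axis=1)
        log_m.append(np.log(m))
        log_v.append(np.log(means.var(ddof=1)))
    slope = np.polyfit(log_m, log_v, 1)[0]
    return 1.0 + slope / 2.0
```

Values clearly above 0.5 suggest long-term persistence (and hence an inflated test statistic variance), but, as noted above, such estimates are themselves uncertain for small samples.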

Issues with the distribution of events for trend testing are not limited to persistence. Rank-based tests are commonly assumed to be distribution-free, as already highlighted in Section 1. In the same vein, [19] tested the validity of this assumption using series characterized by some of the common distribution types used in hydrology, including the Pearson type 3, the lognormal distribution, and the extreme value distributions (EV1, EV2, and EV3). The power of the trend test is comparable if the series have no trends [19], thereby confirming that rank-based trend tests do not require the data to be of Gaussian distribution. However, the authors claimed that, in the presence of trends, dramatic differences exist in the power of the test across the various distributions. If this finding is valid, it means that, for instance, data skewness, especially amidst an existing trend in the series, does affect the performance of the test. In such a case, assuming that rank-based tests are distribution-free would introduce some uncertainty into the trend results unless the influence of skewness in the data is corrected for.

The second test assumption requires the data points to be representative of the population. In trend detection studies, data samples (which are, of course, not the population) are used for the analyses. If the sample is of short record length (as tends to be the case in data scarce regions), the trend results cannot be representative of the true value of the population. Worse still, the true trend value of the given series is unknown, thereby rendering the estimation of the bias in results from the use of short-term records an impossible task. Perhaps the estimation of bias in trends from data of short record length requires the specification of a baseline period, which could be stipulated based on a well-informed consensus of relevant stakeholders dealing with global hydrometeorology. However, it was recently remarked by [2] that it remains unclear, and an open problem, which data record length gives bias-free trend results. Nonetheless, the fact is that the longer the data record length, the better the trend estimates.

Still with regard to the second assumption, apart from the data limitation stemming from short record length, observation errors also affect the reliability of the samples. For brevity, some elaboration can be given in terms of river flow and rainfall measurements. According to [20], the different sources of error which affect derived river flow observations include interpolation and extrapolation errors of the rating curve, errors in the measurement of the river stage and discharge used to parameterise the rating curve, errors due to the presence of unsteady flow conditions, and roughness or the seasonal variations of the vegetation state. For rainfall measurement, some of the uncertainties include sampling errors due to temporal gaps in observations and approximations of areal estimates from point measurements [21]. If the percentage of missing records in the hydrometeorological data (e.g., rainfall) is high, the uncertainty due to the in-filling procedure also becomes large. For spatial sampling (e.g., of areal rainfall), the observation-related uncertainties tend to increase with decreasing temporal accumulation of the variable [21]. For rain gauges with tipping buckets, data obtained from the conversion of the tips to rainfall using a tip interpolation scheme may differ from those based on the tip counting method. Other sources of uncertainty in the measurement of flow and rainfall include poor maintenance of the recording equipment, changes of the recording methodology and equipment, and so forth. Furthermore, trend analyses performed using data selected over different periods also tend to differ [2]. This can be due to the effect of scaling, in which trends may exhibit a regular behaviour in the series under the scaling assumption. To reduce the inconsistency in trend results due to the difference in data periods, long-term records (if available) should be used.
In summary, whereas data quality control may be performed to tackle measurement- or observation-related irregularities, generally low quality of the data limits the required understanding of the dataset, thereby leading to falsified (or uncertain) trend results, following the "garbage in, garbage out" principle.

Thirdly, the data for trend tests are implicitly assumed to be measured under the true conditions of the sampling times. The contravention of this assumption stems from the impacts of human activities and/or climate variability on hydrometeorology, which make the ideal conditions for measurement or derivation of the relevant observations difficult to achieve. Some examples of anthropogenic activities which alter the expected natural conditions for measurement of hydrometeorological variables include (1) river flow regulation through the construction of dams and reservoirs to add extra water storage, (2) land use changes, for example, transforming a forested area into an urban centre, and (3) cloud seeding, that is, using planes to spray chemicals into the atmosphere to enhance rainfall for farmers during dry periods. It becomes complex to analyse trends in series with imprints of anthropogenic influence when long-term persistence may also be shaping the climate system fluctuations. When trend analyses are characterized by large uncertainty, rigorous frameworks would be required to unearth and separate the various layers of synergistic influences in a systematic way (if possible) in the attribution problems.

2.1.2. Method-Related Sources

Because rank-based tests are often preferred to parametric approaches, the exploration of method-related uncertainties will be purposefully biased towards the MK [6, 7], SMR [8–10], and CRD [11, 12] tests, which are applied in the case study. First and foremost, it is worth noting that, under certain limited data structures (e.g., data without ties and/or nonautocorrelated series), the performances of nonparametric tests (for instance, those used in this study) are comparable across various circumstances of sample size, variation, slope, and so forth, as demonstrated by [19] for the MK and SMR and by [11] for the MK and CRD. For a brief explanation of how this performance agreement between the methods comes about, see Appendix A.2. However, considering the general data structure which takes into account the need to deal with the influence of both ties and persistence in a combined way (a typical requirement for trend analyses), the methods may differ in their performance. For data with ties, the MK considers the number of tied groups and how many observations are in each group. On the contrary, the CRD computes the measure of ties in a general and lumped way, because this is appreciably simpler than when ties are considered in groups. For the SMR, the ratio of the covariance of the times and observations to the product of their standard deviations may be used when ties exist in the data. Moreover, whereas the MK and CRD apply the correction for the influence of ties in the computation of the test statistic variance, the SMR corrects for ties in the calculation of the test statistic itself. When dealing with the influence of short-term persistence, the SMR uses an exact expression for the variance of the test statistic under the assumption of multivariate Gaussian dependence [18] or a theoretical formula for variance correction [22].
Furthermore, serial correlation can be tackled by prewhitening [23–25]; resampling techniques in terms of the sieve bootstrap [26], block bootstrap [27], or phase randomisation [28]; and variance correction using either a rank-based empirical formula [29] or direct application of the effective sample size [30]. Besides, the CRD corrects the variance of the test statistic based on a factor computed using the detrended series to obtain the effective sample size [12]. Considering long-term persistence, the MK can be applied based on the scaling theory [15]. A variance correction for the SMR test based on the fractional Gaussian noise model was also recently proposed [18]. For the CRD, the test statistic variance correction considering long-term persistence can be found in Appendix B of this paper. Apart from the use of statistical methods alone, as will be explained shortly, graphical methods are also important. In summary, it is vital to consider more than one method for the analyses of trends, as suggested by [31, 32] and implemented in [19, 33]. According to [24], the use of more than one approach in trend testing helps to interpret results amidst existing uncertainties.
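As an illustration of variance correction via the effective sample size, the following sketch computes the MK statistic S and a corrected standard score under an AR(1) assumption, using the classical factor (1 + r1)/(1 − r1) as an approximation of n/n_eff. This is a simplified stand-in for the procedures of the cited references (it ignores ties and uses only the lag-1 coefficient):

```python
import math
import numpy as np

def mann_kendall_ess(x):
    """Mann-Kendall S with an effective-sample-size variance correction
    sketch: the no-ties variance is inflated by the AR(1)-based factor
    (1 + r1) / (1 - r1), an approximation of n / n_eff."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var0 = n * (n - 1) * (2 * n + 5) / 18.0      # variance without ties
    d = x - x.mean()
    r1 = np.sum(d[:-1] * d[1:]) / np.sum(d * d)  # lag-1 autocorrelation
    var = var0 * max((1.0 + r1) / (1.0 - r1), 1e-6)
    z = (s - np.sign(s)) / math.sqrt(var) if s != 0 else 0.0
    return s, z
```

Note that estimating r1 on a series that contains a trend biases the correction upward, which is one reason prewhitening variants detrend before estimating the autocorrelation.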

Uncertainty may also exist in making inference on the trend results in a number of ways. According to [32], purely statistical trend results might sometimes be meaningless. Inferential uncertainty might stem from the dilemma about the nature of changes within the data, especially when the trend is analyzed in a purely statistical way using the long-term series as a whole. The short-durational changes within the series, which can be revealed through graphical exploration, may be of interest to an environmental practitioner, for example, in assessing the intervention of climate fluctuations on the hydrometeorology. In the analysis of monotonic trends using long-term data, it is often implicitly assumed that the change in the series is entirely in the same direction, that is, either positive or negative. This assumption is misleading and can potentially contribute to the uncertainty in trend analyses. It is known that a time series may comprise (1) seasonality in the form of rises and falls over known or fixed periods, (2) trend as a smooth function of time in terms of long-term decrease or increase, (3) cyclical rises and falls over irregular time periods, (4) a random component, and (5) combinations such as (1) and (2), or (2) and (4), and so forth. It is possible that the analysis of, say, a monotonic trend can be influenced by changes in other components of the series. For instance, in rainfall series, a trend may be apparent; that is, H0 (no trend) can be rejected at a significance level α% for datasets from some seasons but accepted at α% for series aggregated at the annual time scale. It would be vital to consider both seasonal and annual time scales for the conclusiveness of a trend in the series at a particular station. A plausible option would also be to decompose the given series into its various components in a stepwise way.
Whereas extraction and modeling of the seasonal (or periodic) component can be vital for making forecasts, analysis of the changes in the cyclical component can also be vital to gain insight into the effect of climate variability. Due to the impact of climate variability on hydrometeorology, the tendency of severe events to occur temporally clustered above or below the long-term mean of the variable brings about positive and negative effects on the trend in the series. Over time, these positive and negative effects cancel each other out within the series. Eventually, the overall trend from the full-time series is the net effect of such cancellations [11]. In the same vein, whereas H0 (no trend) may be accepted based on the full-time series, it can be rejected in several subtrends when analyses are done over separate short periods of the data. Moreover, if the given series has, for instance, a step jump in the mean, it is highly probable that H0 (no trend) will be rejected in a statistical test if the full-time series is considered. In such a case, the trend significance can be a signal from a "mistaken" cause. This makes graphical exploration of data vital for trend analyses. Exploratory assessment of changes within the data maximizes the knowledge of possibly fascinating details which may prompt further investigations to influence the decision-making that the trend results are to support. With respect to graphical identification of subtrends, an innovative method was recently introduced in which the series is divided into two halves, each half sorted and plotted against the other [34]. Such a plot can indicate whether high, medium, and low extreme events are changing between the periods of the first and second halves of the series. According to [34], the innovative method is free from restrictive data requirements, for example, an independent structure of the data points, normal distribution, and a long data record.
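The data preparation behind such an innovative trend plot can be sketched as follows; here both halves are sorted in the same (ascending) order before pairing, which yields the same diagnosis against the 1:1 line. The function name is illustrative:

```python
import numpy as np

def innovative_trend_points(x):
    """Prepare the scatter points for an innovative trend plot [34]:
    split the series into two halves, sort each half, and pair them.
    Points above the 1:1 line indicate an increase in that magnitude
    class between the two halves; points below indicate a decrease."""
    x = np.asarray(x, dtype=float)
    n = len(x) // 2
    return np.sort(x[:n]), np.sort(x[n:2 * n])
```

Plotting the second returned array against the first, together with the 1:1 line, reproduces the diagnostic described above.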
Importantly, separating subtrends, that is, short-durational changes in trend direction over unknown data subperiods (e.g., of decadal time scales), from the long-term trend (e.g., due to global warming) presents the possibility of ascertaining the likely drivers of change in the variable [11]. In summary, trend analyses should entail the combination of both graphical diagnoses and statistical approaches. Graphical techniques, as elaborated for the CRD approach in Appendix A (see Sections A.1.3(2) and A.2 for details), help to determine whether the trend in the series is monotonic or not, linear or nonlinear, and so forth. Of course, if the trend is visualized to be nonlinear, application of procedures for linear trend detection would lead to unrealistic trend results. Analysis of a nonlinear trend requires the fitting of a nonlinear function, for example, a polynomial of a suitable degree. For the CRD technique, approximation of the nonlinear trend is based on temporal convolution using a suitable time scale, for example, 10 years to analyze decadal anomalies for climate variability investigation and 30 to 35 years for an insight into climate fluctuations.

Inferential uncertainty might also be due to the tendency to test the significance of only the trend direction without considering, for example, how significant the trend magnitude (or slope) is and whether the sample-based trend slope is a representative estimate of that of the population. Thus, both trend direction and magnitude are important considerations for the analyses. To indicate whether the change in the series is monotonically increasing or decreasing, trend tests are used to analyze the directional sign of the change. How to judge the significance of the trend direction is well known (see (A.6) of Appendix A). The magnitude of a monotonic trend in the series can be quantified using the slope (assuming linear increase or decrease) based on the method of [35, 36]. For a given series, the trend direction can be significant while the slope is statistically not different from zero at a particular significance level α. Furthermore, the trend direction can be significant for a very small magnitude of increase or decrease which may be unimportant in practice [12]. On the contrary, the noise in the data may also make the trend insignificant for a change whose magnitude cannot be ignored in environmental management decision-making [37]. Therefore, apart from verifying H0 (no trend), the significance of the trend slope and the associated uncertainty on the change magnitude should also be investigated. Furthermore, to support the inference of the trend analyses, there should be a rigorous attribution procedure, for example, assessment of consistency and provision of a confidence statement [38], or systematic investigation of the possible drivers of the change, for example, through the multiple working hypotheses [39].

2.2. Uncertainty Assessment

A typical way to assess uncertainty in a given statistical result is to construct a confidence interval (CI) at a stipulated significance level. For a trend line fitted through the data, the least squares approach can be used to construct the CI on the trend slope. However, caution must be taken to use the slope estimate based on the method of [35, 36] rather than the ordinary least squares estimator. This is because the nonparametric approach is not only an unbiased estimator of the true slope but also more robust than the least squares method.

Among other approaches to assess uncertainty, the Monte Carlo technique is common. Some of the resampling methods that employ subsamples of a general statistic as the building block to quantify uncertainty include the typical value problem [40] and the Jack-knife method [41]. In the typical value principle, the unknown parameter is included with equal probability in the intervals between the ordered subsample statistics; in other words, the statistic values from the various subsamples obtained from a series approximate a "typical value" [42]. In the Jack-knife method, sample subsets are obtained without replacement; that is, each observation is deleted in a systematic way while the parameter is computed using the remaining data points. The CI can be estimated using Student's t-distribution on the assumption that the parameter estimates ("pseudo-values") from the subsamples are approximately independent and identically distributed normal random variables [42]. To strengthen the Jack-knife procedure, bootstrapping was introduced by [43]. In bootstrapping, subsets of the original data are obtained with replacement to estimate the sampling distribution of the parameter estimator.

It is notable that, in a given sample, the inclusion of every new data point influences the overall trend of the series. Along the same line, a resampling technique which considers the influence of the data points in a systematic way, that is, the Exclude-one and Estimate trend Slope (EES) method [12] as will be seen in Section 3.3, can be used to assess the uncertainty on the mean estimate of the population trend magnitude.

3. Methodology

3.1. Case Study and Data

To illustrate the existence of uncertainty in trend results, the case study was conducted using annual rainfall over Kenya (Figure 1), a country in East Africa. Annual station-based (observed) rainfall data at 23 locations were obtained from the Food and Agriculture Organization (FAO) [44]. For the obtained series, Table 1 shows the station ID, location, record length, and some statistical metrics including the long-term mean (LTM, mm/year), coefficient of variation (CV), skewness (CS), and actual excess kurtosis (KT). It is seen that the CV ranges from about 0.15 to 0.6, indicating moderate variability on a year-to-year basis in the study area. For a normal distribution, it is expected that CS = 0 and KT = 0. The rainfall at most of the locations exhibited slight to moderate skewness and positive excess kurtosis; in other words, the data were leptokurtic, that is, having longer and fatter tails with a higher central peak than the Gaussian distribution.
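The moment-based metrics in Table 1 can be reproduced along the following lines; this is a sketch using standardized sample moments, not necessarily the exact estimators the authors used (which are not stated):

```python
import numpy as np

def summary_metrics(x):
    """Long-term mean (LTM), coefficient of variation (CV), skewness
    (CS), and excess kurtosis (KT) from standardized sample moments."""
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    sd = x.std(ddof=1)
    z = (x - mu) / sd
    return {"LTM": mu,
            "CV": sd / mu,
            "CS": float(np.mean(z ** 3)),
            "KT": float(np.mean(z ** 4) - 3.0)}  # 0 for a Gaussian
```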

3.2. Method-Related Uncertainty in Trend Directions

Three rank-based tests (see Appendix A), namely, the MK [6, 7], SMR [8–10], and CRD [11, 12], were applied to the annual rainfall series. At the significance level α of 5%, the p values (see (A.6)) were calculated based on the standardised test statistics of the MK, CRD, and SMR tests, respectively. To assess the uncertainty due to differences between the methods, various forms of the tests were used. For the MK, the variance correction procedures for autocorrelation based on the effective sample size [29, 30] and the block bootstrap [27], as well as that considering the scaling theory [15], were used. Furthermore, prewhitening procedures based on various approaches, including trend-free prewhitening [25], simultaneous estimation of the trend slope and the autocorrelation coefficient [45], variance correction prewhitening [24], and modified trend-free prewhitening [46], were also applied. For the SMR, the variance inflation factors based on long-term persistence [18] and Gaussian dependence [18], and the theoretical formula based on the effective sample size [22] (analogous to the procedure of [30] for the MK), were applied. For the CRD, the variance correction approach based on short-term persistence [12], as well as that for long-term persistence (see Appendix B of this paper), was applied. In summary, 13 ways in which the selected trend tests (CRD, SMR, and MK) deal with the influence of both short- and long-term persistence were considered.

To find out whether results from the various methods differ, the null hypothesis H0 (the trend result of a particular method is not different from those of the other approaches) and the alternative hypothesis H1 (the trend result of each method is different from those of the other approaches) were tested. Considering φ as an array of the standardized trend statistic values (from the various forms of all the statistical tests) obtained using a given series, with the mean and the standard deviation sφ computed over the nφ values of the statistic in φ, the upper and lower limits of the CI were computed using (1). Ideally, the t-distribution could be used in (1); however, because the number of trend testing approaches was deemed small (i.e., 13 in this case), the t-distribution would be leptokurtic, that is, with relatively more scores in the tails than the normal distribution, which was eventually adopted. For the methods whose results fall outside the CI, the evidence might not be sufficient to reject H0 at the significance level α%. For approaches whose results are within the CI, H0 was accepted.
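A sketch of this agreement check is given below. Since (1) is not reproduced verbatim here, the usual mean ± z·s form of a normal-distribution CI is assumed, with φ holding the standardized statistics from the 13 test forms:

```python
import numpy as np
from statistics import NormalDist

def method_agreement_ci(phi, alpha=0.05):
    """Normal-distribution CI over the standardized trend statistics
    from the different test forms; methods whose statistic falls
    outside the interval disagree with the consensus at level alpha."""
    phi = np.asarray(phi, dtype=float)
    z = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    mean, s = phi.mean(), phi.std(ddof=1)
    lo, hi = mean - z * s, mean + z * s
    outside = [i for i, v in enumerate(phi) if not lo <= v <= hi]
    return (lo, hi), outside
```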

3.3. Significance of Trend Slope and Data-Related Uncertainty

The magnitude of the linear trend can be obtained in terms of its slope m given by [35, 36]

m = median[(x_j − x_i)/(j − i)] for all 1 ≤ i < j ≤ n, (2)

where x_i and x_j are the ith and jth observations, respectively.
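Equation (2), the median-of-pairwise-slopes (Theil–Sen) estimator, can be sketched as:

```python
import numpy as np

def sen_slope(y):
    """Trend magnitude as the median of all pairwise slopes, i.e.,
    the Theil-Sen estimator of (2)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    slopes = [(y[j] - y[i]) / (j - i)
              for i in range(n - 1) for j in range(i + 1, n)]
    return float(np.median(slopes))
```

Taking the median rather than a mean of pairwise slopes is what makes the estimator robust to outliers, the property noted in Section 2.2.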

3.3.1. Significance of the Nonzero Magnitude of the Trend

With respect to the significance of m from (2), the null hypothesis H0 (that the increase or decrease in the variable does not depend on the chronological time of observations) can be tested. In other words, H0: m = 0 and H1: m ≠ 0. After confirming that the data under consideration have minimal or no influence from possible outliers, the following procedure based on the ordinary least squares method can be taken in the hypothesis testing:
(i) The first step is to compute the Pearson product-moment correlation coefficient r between the observations and the times of observation.
(ii) Considering that s_y denotes the standard deviation of the observations and s_x the standard deviation of the times of observation, the standard error of the slope estimate can be computed using SE_m = (s_y/s_x) × sqrt((1 − r²)/(n − 2)).
(iii) The test statistic t is given by t = m/SE_m, where m is from (2).
(iv) The probability value (p value) can be calculated based on Student's t-distribution, for example, using the MS Excel function TDIST(value, df, tails), such that the two-tailed p value is obtained with the degrees of freedom df = n − 2.
(v) At the selected significance level α (taken as 5% in this study), if the computed p value is greater than α in decimal form (i.e., α%/100), H0 is accepted; on the contrary, if the estimated p value ≤ α%/100, H0 is rejected.
Although the above procedure was adopted in the case study for fast computation, the significance of the trend slope can also be tested in a purely nonparametric way, as implemented in [47]. According to [47], based on the null distribution of the slope, a large number of bootstrapped samples, each of size n, are obtained randomly with replacement. Next, (2) is applied to each sample to obtain the trend slope estimates. The trend slopes are sorted in ascending order and ranks are assigned to them. The rank corresponding to the slope computed before the resampling procedure is obtained by interpolation.
The p value of the trend slope in the given series is estimated in terms of the rank-based probability P. Based on a one-tailed test with α = 0.05 as selected in this study, H0 is rejected if P < 0.05 (for a decreasing trend) or P > 0.95 (for a positive trend) [47].
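The nonparametric slope-significance procedure of [47] described above can be sketched as follows. The sketch assumes the Theil–Sen estimator (median of pairwise slopes) for (2), and it counts bootstrapped slopes below the observed one instead of interpolating ranks; the names `theil_sen_slope`, `slope_p_value`, and `n_boot` are illustrative, not from the paper.

```python
import random
from statistics import median

def theil_sen_slope(x):
    """Slope for (2): median of pairwise slopes (x[j]-x[i])/(j-i), j > i."""
    n = len(x)
    return median((x[j] - x[i]) / (j - i)
                  for i in range(n - 1) for j in range(i + 1, n))

def slope_p_value(x, n_boot=1000, seed=1):
    """Position of the observed slope within the null distribution of
    slopes from samples resampled randomly with replacement."""
    rng = random.Random(seed)
    m_obs = theil_sen_slope(x)
    boot = [theil_sen_slope([rng.choice(x) for _ in x]) for _ in range(n_boot)]
    # fraction of null slopes below the observed slope
    return sum(1 for b in boot if b < m_obs) / n_boot
```

With a one-tailed test at α = 0.05, H0 would be rejected if the returned probability is below 0.05 (decreasing trend) or above 0.95 (positive trend).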

3.3.2. Are the Computed and Population Trend Slopes Different?

Another null hypothesis (that the estimated value of m from (2) is not different from that of the true population) can be tested in terms of a nonparametric approach. If μ_m denotes the true linear trend slope of the population, it means H0: m = μ_m and H1: m < μ_m or m > μ_m. At a stipulated significance level α% (taken as 5% in this study), the uncertainty on the estimate of the true mean of the slope was assessed using the EES [12] method by:
(i) applying (2) to compute m of the given series of size n;
(ii) excluding the jth data point from the series used in Step (i);
(iii) computing the slope m_j using the (n − 1) data points from Step (ii);
(iv) putting back the data point excluded in Step (ii) into the pool of the sample;
(v) setting j = j + 1 and repeating Steps (ii) to (iv) n times to obtain n values of m_j;
(vi) constructing the CI based on the t-distribution and the standard error of the mean using
\[ \mathrm{CI} = \bar{m} \pm t_{\alpha/2,\,n-1}\,\frac{s_{m}}{\sqrt{n}} \quad (3) \]
where m_j is the slope estimate from the jth subsample, \(\bar{m}\) denotes the mean of the n values of m_j, and s_m is their standard deviation.

To use the t-distribution in place of the normal distribution, n should be adequately large, for example, 45 and above, as in this study (see Table 1 for data periods). If m from Step (i) of the EES falls within the CI limits from (3), it means that H0: m = μ_m may be accepted at the significance level of α%. In case m falls outside the CI limits, it suggests that the evidence to accept H0: m = μ_m might be statistically insufficient at the significance level of α%; it would also imply that the magnitude of m is probably not due to natural randomness in the series. It is recommended that the α used in the EES should be similar to that adopted for testing the significance of the trend directions.
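A minimal sketch of EES Steps (i)–(vi), assuming the normal quantile as an approximation to the t quantile (reasonable for n ≥ 45, as in the text) and the standard error of the mean of the leave-one-out slopes, as in (3); `ees_confidence_interval` and `slope_fn` are illustrative names.

```python
from statistics import NormalDist, mean, stdev

def ees_confidence_interval(x, slope_fn, alpha=0.05):
    """EES: leave-one-out slopes, then a CI on their mean via the
    standard error of the mean (normal approximation to t for large n)."""
    n = len(x)
    m_full = slope_fn(x)                                     # Step (i)
    m_loo = [slope_fn(x[:j] + x[j + 1:]) for j in range(n)]  # Steps (ii)-(v)
    m_bar = mean(m_loo)
    se = stdev(m_loo) / n ** 0.5                             # SE of the mean
    z = NormalDist().inv_cdf(1 - alpha / 2)                  # ~t for n >= 45
    return m_full, (m_bar - z * se, m_bar + z * se)          # Step (vi), eq. (3)
```

H0: m = μ_m would be accepted when the full-sample slope falls inside the returned limits.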

3.4. Uncertainty in Trend Directions due to Data Limitation

Although long-term series are required for change attribution problems, some trend analysis studies in data-scarce regions have been conducted using short-term data with record lengths as low as 10 years. To demonstrate the uncertainty due to such data limitation, series from four stations (4, 10, 12, and 17) were adopted. At each of these stations, the dataset was longer than 90 years and thus deemed adequate for the analyses. Record lengths of 10, 20, 30, 40, and 50 years were chosen for the investigation. For each selected record length, subseries were extracted in a systematic way by sliding a time slice or window. For instance, taking a data record length of 50 years, the subseries were generated as follows:
(i) From the full series of annual rainfall covering the period, say, from 1901 to 1994 (at Station 17), a moving window of 50 years was used to extract the subsamples such that the first, second, …, and last subseries were taken over the subperiods 1901–1950, 1902–1951, …, and 1945–1994, respectively. Thus, using the 50-year window, a total of 45 subseries were extracted.
(ii) Step (i) was repeated for the data record lengths of 40, 30, 20, and 10 years. These yielded 55, 65, 75, and 85 subseries, respectively.
(iii) To get rid of possible bias in the subsampling, the number of subseries for each selected data record length was made equal. To achieve this, some of the subseries from Step (ii) were discarded randomly until their number reduced to 45 for each selected record length. This was to have a number of series comparable with the case in Step (i).
(iv) The CRD trend test was applied to each subseries while applying test statistic variance correction for long-term persistence where necessary. Finally, using the nonparametric bootstrapping approach, the 95% CI was constructed to characterize the uncertainty on the population estimate of the mean of the trend statistic.
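The window-based subsampling of Steps (i)–(iii) can be sketched as follows; `sliding_subseries` and `equalize` are illustrative names.

```python
import random

def sliding_subseries(series, years, window):
    """All contiguous subseries of the given window length, with their periods."""
    return [((years[i], years[i + window - 1]), series[i:i + window])
            for i in range(len(series) - window + 1)]

def equalize(subseries_by_window, target, seed=0):
    """Randomly discard subseries so each window length keeps `target` of them."""
    rng = random.Random(seed)
    return {w: (s if len(s) <= target else rng.sample(s, target))
            for w, s in subseries_by_window.items()}
```

For the 1901–1994 example, the 50-year window yields 45 subseries and the 40-year window 55, which `equalize` then reduces to 45 each.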

3.5. Temporal Evolution of Trends

To investigate the changes of trend over time, the sequential MK test [9] can be applied in both progressive and retrogressive ways. However, analogous techniques of temporal trend evolution based on the CRD [11] are adopted in this study. Using data at the same stations as in Section 3.4, three modes of moving window were applied to assess the temporal variation of subtrends as follows:
(i) Forward moving window of fixed (F-Fix) length: to characterize climate fluctuations, the window length was set to 35 years. For instance, at Station 17 with data years 1901, 1902, …, 1994, subseries were extracted from the window moved each time by one unit over the subperiods 1901–1935, 1902–1936, …, and 1960–1994.
(ii) Forward moving window of flexible (F-Fwd) length: here, one end of the window is fixed at the beginning of the series and the other stretched in an elastic way towards the end of the data. The flexible end of the window was first placed on the 10th data point from the beginning of the series. Eventually, for the same example as in (i) above, subseries were extracted from the window stretched each time by one unit to cover the periods 1901–1910, 1901–1911, …, and 1901–1994. To check for a possible change in trend direction (if any) at the start of the series, a data point at an index as low as 3 (instead of 10) may be used. Of course, the trend test statistic values for subsamples of size less than 10 might be somewhat uncertain due to the limited number of data points for the analyses.
(iii) Backward moving window of flexible (F-Bck) length: the concept of this approach is comparable with that in (ii) except that one end of the window is fixed at the end of the series and the other stretched in an elastic way backward to the beginning of the data. Again, the flexible end of the window was first placed on the 10th data point from the end of the data. The subseries extracted for the same example as in (i) covered the periods 1994–1985, 1994–1984, …, and 1994–1901. Because of the reversal of the series for the F-Bck, the sign of the trend direction is expected to be opposite to that of the F-Fwd. However, when viewed from the beginning towards the end of the series, the slopes of the trend from both F-Bck and F-Fwd are expected to agree with one another.
(iv) The statistical CRD trend test was conducted on each of the subseries obtained in Steps (i)–(iii). This was done while applying correction to the test statistic variance where necessary to account for the influence of long-term persistence. For a complete picture of the trend evolution, the values of the trend statistic were plotted against the ending year of the time slices.
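Under the conventions stated above (10-point initial window for the two flexible modes), the three window modes can be sketched as follows; the function names mirror the paper's F-Fix/F-Fwd/F-Bck labels but are otherwise illustrative.

```python
def f_fix(years, length=35):
    """Forward moving window of fixed length (F-Fix)."""
    return [(years[i], years[i + length - 1])
            for i in range(len(years) - length + 1)]

def f_fwd(years, first=10):
    """Window anchored at the start, stretched toward the end (F-Fwd)."""
    return [(years[0], years[k - 1]) for k in range(first, len(years) + 1)]

def f_bck(years, first=10):
    """Window anchored at the end, stretched backward (F-Bck)."""
    return [(years[-1], years[len(years) - k])
            for k in range(first, len(years) + 1)]
```

For data years 1901–1994 this reproduces the subperiods quoted in the text, for example, (1901, 1935) through (1960, 1994) for F-Fix.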

3.6. Graphical Support for Trend Analyses
3.6.1. Rescaled Series and Cumulative Sum of the Rank Difference

To eliminate the influence of possible outliers (if any) on the accuracy of the fitted trendline, the series can be rescaled nonparametrically to obtain equivalent normal variates z_i for 1 ≤ i ≤ n based on e_i, the number of times a data point is exceeded, and t_i, the number of times a data point appears within the given sample of size n; for details see (A.15) and (A.16). Anomalies in the form of a moving average of the rescaled series are computed using (C.1). A time series plot is then made comprising the rescaled data, the moving average, as well as the trendline fitted to the rescaled series.

For the second type of plot, the series is rescaled using (A.19) and the cumulative sum (A.20) of the rank difference is plotted against the time of the observations [11, 12]. The mean of the rescaled series is taken as the reference. If there is no trend, the scatter points do not form a distinctive curve in the CRD plot. If there is a positive/negative trend in the series, a curve of the scatter points is formed above/below the reference. The procedures of the graphical CRD approach to identify trends are presented in Section A.1.3(2) based on the illustrations in the appended Figure 7. The CRD plots for the case study were made for the data at the same stations as in Section 3.4. The uncertainty bounds on the observed changes in each CRD plot were constructed by combining resampling and the percentile confidence interval.

3.6.2. Variability in Cyclical Component of the Series

If the series is not characterized by seasonality (i.e., periodicity) and/or the trend is linear, the slope of the monotonic trend is estimated based on (2) and used to detrend the data. It is possible that the data have both seasonal and cyclical components. The seasonal component is required to be removed prior to the analyses of trend and/or variability. Here, for illustration using a monthly rainfall series, the seasonal component can be removed using the following procedure: (a) extract the data for each month, (b) compute the long-term mean of the data extracted for each month, and (c) use the long-term monthly mean pattern as the seasonal model for each year of the data. For each year, the series can be deseasonalized by subtracting the long-term mean of a given month from the corresponding original data point. Using the detrended or deseasonalized series, the lag-1 serial correlation coefficient (r1) is computed and its significance tested. If r1 is significant, the detrended series is prewhitened; otherwise, the detrended series can be considered without prewhitening. Next, the detrended or deseasonalized series is rescaled nonparametrically in a CRD-based way. If the given series is nonlinear and/or has no seasonal component, no detrending is done, and r1 is computed based on the original series for its prewhitening where necessary, followed by rescaling. Based on a suitable temporal scale, CRD-based convolution is applied to the rescaled series, and the significance of the cyclical oscillations is assessed based on a bootstrapping approach using Monte Carlo simulations. In this study, the temporal scale of 10 years was selected to assess the cyclical anomalies because of its importance in obtaining an insight into the signature of climate variability relevant for the planning and management of water resources applications. The systematic procedure on how to assess the significance of the cyclical variation in the series is provided in Appendix C.
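The long-term monthly mean deseasonalization in steps (a)–(c) above can be sketched as follows (`deseasonalize` is an illustrative name):

```python
from statistics import mean

def deseasonalize(values, months):
    """Subtract the long-term mean of each calendar month from each value."""
    # (a)-(b): long-term mean per calendar month
    clim = {m: mean(v for v, mm in zip(values, months) if mm == m)
            for m in set(months)}
    # (c): subtract the monthly mean from the corresponding data point
    return [v - clim[m] for v, m in zip(values, months)]
```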

4. Results and Discussions

4.1. Trend Directions

Table 2 shows information on the trend directions in the annual rainfall. Considering Z_i as the standardized trend statistic value of the ith method, to interpret results from Table 2 based on the significance level of 5%, notes should be taken on the following: (1) the symbols “▲” and “▼” mean that H0 (no trend) was rejected for a positive and negative trend, respectively; (2) the symbol “□” denotes that H0 (no trend) was accepted; (3) “Y” means that H0 (Z_i is not different from the other Z_i’s) was rejected; (4) “N” means that this H0 was accepted; and finally (5) the symbols “+” and “−” represent positive and negative trend directions, respectively. Trend directions were found to be negative at Stations 6-7, 9-10, 12, 14, 16-17, and 22. At all the remaining stations, trends were positive in direction. The difference among the results from the various methods is clearly evident with respect to the significance of the trends. Due to method-related uncertainty, H0 (no trend) was rejected at the 5% level at Stations 4, 8, 9, 10, and 17 by, respectively, 11, 7, 6, 4, and 6 out of the 13 methods. This is an indication that the selection of a particular method for trend detection influences the statistical results.

Figure 2 shows graphical results of trend directions for selected stations. The uncertainty on the changes from the CRD plots (Figures 2(e)–2(h)) is also provided in the form of the 95% CI. It is noticeable that at Station 4 (Figures 2(a) and 2(e)) there was an increase in the rainfall. However, at Stations 10 and 17, the annual rainfall exhibited a decrease (Figures 2(b), 2(d), 2(f), and 2(h)). The directions of the trends based on the graphical results are consistent with those already found in Table 2. Though they may not be conclusive on the significance of the change, the attractive feature of graphical means (e.g., CRD plots) of trend identification is that short-durational changes can be visually separated. For instance, at Station 12 (Figure 2(g)), although there is no trend in the series, it is evident that the cumulative effects of the variation were negative (positive) from 1960 to the early 1990s (from 1915 to 1930). At Stations 10 and 17 (Figures 2(f) and 2(h)), the variation in the series for almost the entire data period was dominated by a negative trend. However, for the annual rainfall at Station 4, the variation in the series was dominated by an increase (Figure 2(e)). Such additional information cannot be obtained if the trend analyses are done in a purely statistical way.

Figure 3 shows results of variability in the detrended and rescaled annual rainfall at selected stations. It is noticeable that for Stations 10 (Figure 3(b)) and 12 (Figure 3(c)) the decadal anomalies fluctuate within the 95% CI. However, for Stations 4 (Figure 3(a)) and 17 (Figure 3(d)), the upper limit of the 95% CI is up-crossed by the decadal anomalies in the early 1960s and mid-1930s, respectively. Therefore, H0 (natural randomness) is accepted (rejected) for the series at Stations 10 and 12 (4 and 17). Given that H0 (no trend) was already rejected at Stations 4 and 17 (see Table 2), it means that the evidence of combined effects of both long-term trend and short-durational variability in the annual rainfall at these stations cannot be rejected at the significance level of 5%. It may be possible that such results are indicative of the combined effects of, for example, anthropogenic factors as well as climate variability and change. Therefore, in analyses of trends, other forms of changes also need to be investigated in hydrometeorological attribution problems.

4.2. Uncertainty in Trend Direction and Slope

Table 3 shows the uncertainty on the mean of the trend statistic values. Because of the difference among the 13 trend testing methods, even if H0 (no trend) can be accepted using the mean of the trend statistics, it is possible that H1 (there is trend) may not be rejected based on the lower and upper limits of the 95% CI (e.g., see Stations 9-10 and 17). Results of the trend slopes and associated uncertainty due to the noise in the annual rainfall are presented in Table 4. Whereas H0 (zero trend magnitude) was rejected at Stations 4, 8-9, 17, and 19, it is noticeable that the values of the observed slope m are somewhat representative of the population since they all fall within the upper and lower limits of the 95% CI. The uncertainty on both trend directions and magnitudes, in terms of the widths of their 95% CIs, can be seen to vary from station to station. This is indicative of the spatiotemporal variation in the rainfall, which might be due to the influence of microclimate differences across the study area. Influences which can also cause such spatiotemporal differences in the rainfall statistics include regional features, for example, topography, water bodies, and land use transition. For the case study area, influences might come from a number of mountains including Mt. Kenya, Mt. Elgon, and Mt. Kilimanjaro, with the elevations of their highest points reaching up to 5,199 m, 4,321 m, and 5,895 m, respectively. Some of the large water bodies in Kenya include Lake Turkana, Lake Victoria, Lake Naivasha, and Lake Magadi.

4.3. Trend Evolution

Figure 4 shows, for selected stations, the evolution of trends based on overlapping 35-year block subseries. An increase in the rainfall was exhibited from the 1930s to the early 1980s (Station 4, Figure 4(a)), the mid-1960s to the late 1970s (Station 10, Figure 4(b)), the 1940s, 1960s, and early 1980s (Station 12, Figure 4(c)), and the 1940s and early 1980s (Station 17, Figure 4(d)). However, the rainfall exhibited a decrease from the mid-1980s to the 1990s (Station 4), the 1940s and 1950s (Station 10), the 1950s and late 1980s (Station 12), and the 1950s to 1970s (Station 17). At Station 12 (Figure 4(c)), the trend statistic is shown to cross the zero line a number of times, and H0 (no trend) was accepted at the significance level of 5% for all the subperiods. At Station 4 (Figure 4(a)), H0 (no trend) was rejected based on the 35-year subseries over the subperiods 1906–1941 and 1933–1968. Similarly, for Station 17, H0 (no trend) was rejected for positive (negative) subtrends based on the subseries over the period 1911–1946 (1930–1965). For the decrease in rainfall at Station 10 based on the subseries from 1952 to 1987, H0 (no trend) was also rejected. In summary, the rainfall subtrends were more positive than negative at Station 4. However, the series at Stations 12 and 17 were dominated by a decrease rather than an increase in the rainfall. These results are consistent with those from Figure 2.

Figure 5 shows the trend evolutions based on windows of varying lengths moved in forward (F-Fwd) and backward (F-Bck) ways. Although the F-Fwd method shows a steady increase (positive slope) in the rainfall at Station 4 (Figure 5(a)) for the entire data period, based on the F-Bck approach it is visible that this increase was from the early 1900s to the late 1950s. Again, based on the F-Bck, there was a decrease (negative slope) in the series from around 1960 to the mid-1990s (the end of the data). Generally, for both the F-Fwd and F-Bck methods, the slope of the scatter plot is negative (indicating a decrease) for the rainfall at Stations 10 and 17 (Figures 5(b) and 5(d)). For Station 17, H0 (no trend) was rejected for this decrease at the significance level of 5%, for instance, over the periods 1931–1994 and 1901–1992. For the rainfall at Station 12 (Figure 5(c)), the slope is close to zero and the values of the trend statistic fall within the variability bounds; thus, H0 (no trend) cannot be rejected for all the subtrends as well as the full-time series at the significance level of 5%.

4.4. Uncertainty due to Data Limitation

Figure 6 shows, for illustration, the variation in the trend statistic with the record length of the rainfall data at selected stations. The width of the 95% CI is shown to reduce as the data record length increases (Figures 6(a)–6(d)). In other words, the uncertainty in the trend result reduces as the series record length increases. When only short-term data are available at a station, an insight into a realistic trend estimate can be derived based on data at nearby stations with longer records in the same region (so long as the region is homogeneous) [2].

5. Conclusion

This study demonstrates the existence of both data- and method-related uncertainty in trend analyses. Long-term observed annual rainfall at 23 locations in Kenya, East Africa, was used for the case study. To assess method-related uncertainty, 13 forms of rank-based trend detection methods, including the Mann-Kendall, Spearman’s Rho, and Cumulative Rank Difference (CRD) tests and considering both short- and long-term persistence, were applied to the annual rainfall data. CRD-based graphical analyses of trends were conducted. Insights into the uncertainty in trend results due to data limitations were also provided. Apart from testing the null hypothesis H0 (no trend) in terms of the trend direction, the significance of the nonzero trend magnitude was also investigated.

It was found that, at a given level of significance, H0 (no trend) can be rejected for the trend direction while the evidence to reject H0 (zero trend magnitude) is statistically insufficient. Generally, the uncertainty due to data limitation reduces as the data record length increases. Uncertainty due to the influence of the selection of the statistical testing method on the test results is also not negligible, especially for series with persistent fluctuations. The combination of graphical and statistical approaches for trend detection was found to yield more influential and supportive information for inference than results obtained when analyses are done in a purely statistical way. These findings shed light on the need to assess uncertainty in trend analyses. However, despite such attempts as in this paper, the following are recommended for trend analyses in hydrometeorology:
(a) Graphical or data exploratory techniques should be used to maximize the understanding of the given series so as to support the statistical analysis results. In this study, illustrations are given for the CRD-based graphical diagnoses in terms of the rescaled series plot, the CRD plot, the nonparametric anomaly plot, and three trend evolution plots. With the help of these plots, it can be determined whether (1) the trend is linear or nonlinear, (2) there are persistent short-durational fluctuations, and so forth. To minimize uncertainty in trend results, graphical trend diagnoses are vital, for example, to avoid the application of a linear trend test to a full-time series which is characterized by a nonlinear trend. To obtain these CRD-based plots or implement the statistical CRD test, the trend and variability tool “CRD-NAIM”, available online at https://sites.google.com/site/conyutha/tools-to-download/, is made freely available by the author.
(b) If the series is characterized by a monotonic trend, it is vital to test the significance of the long-term linear trend slope and to assess whether the magnitude of the change is representative of that of the population. Apart from detecting a linear trend, the significance of variation in the nonlinear (e.g., cyclical) component of the series (using the detrended series) should also be investigated. Unless the nonlinear trend can be well described and analyzed by a function, for example, those assumed in [47], if the series has cyclical fluctuations and no seasonal component, H0 (natural randomness) can be tested using the original data (not detrended) in a nonparametric way.
(c) Both method- and data-related uncertainties in the trend analyses should be assessed. Although the uncertainty assessment in this study was made based on linear trends, attempts should also be made to consider nonlinear trends. In order to make use of the trend analysis uncertainty information for decision-making, expert knowledge-based inference is required.
(d) Before the statistical trend results can be used for decision-making in environmental management and adaptation strategies, they should be supported by rigorous attribution. This can be done, for instance, through checking the consistency or inconsistency of the results as well as the provision of a confidence statement [38], and through systematic investigation of the possible drivers to explain the full signal of change, for example, using the method of multiple working hypotheses [39]. In summary, the conclusiveness of trends should be cautiously premised not only on statistical grounds but also on more considered physical and/or theoretical understanding of the hydrometeorological processes.

Appendix

A. Supplementary Information on Trend Detection Methods

A.1. Trend Tests

Below are brief descriptions of the nonparametric methods, including the Mann-Kendall (MK) [6, 7], Spearman’s Rho (SMR) [8–10], and Cumulative Rank Difference (CRD) [11, 12] tests, employed for the case study.

A.1.1. Mann-Kendall Test

The MK test statistic S is given by
\[ S = \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} \operatorname{sgn}(x_j - x_i) \quad (A.1) \]
where \(x_i\) and \(x_j\) are the sequential data values in a sample of size n and
\[ \operatorname{sgn}(\theta) = \begin{cases} 1 & \text{if } \theta > 0 \\ 0 & \text{if } \theta = 0 \\ -1 & \text{if } \theta < 0. \end{cases} \quad (A.2) \]
For n ≥ 8, S is approximately normally distributed with the mean and variance given by
\[ E(S) = 0, \qquad V(S) = \frac{n(n-1)(2n+5)}{18}. \quad (A.3) \]
When ties exist in the data, V(S) becomes
\[ V(S) = \frac{1}{18}\left[ n(n-1)(2n+5) - \sum_{j=1}^{g} t_j (t_j - 1)(2t_j + 5) \right] \quad (A.4) \]
where g is the number of tied groups and \(t_j\) is the number of observations in the jth group.

The standardised MK test statistic, which follows the standard normal distribution with mean of zero and variance equal to one, is given by
\[ Z = \begin{cases} \dfrac{S-1}{\sqrt{V(S)}} & \text{if } S > 0 \\ 0 & \text{if } S = 0 \\ \dfrac{S+1}{\sqrt{V(S)}} & \text{if } S < 0. \end{cases} \quad (A.5) \]
Positive and negative values of S indicate increasing and decreasing trends, respectively. The p value (probability value) for Z is computed from the cumulative distribution function (A.6) of the normal distribution for the standard case with mean μ = 0 and standard deviation σ = 1:
\[ \Phi(z) = \frac{1}{2}\left[ 1 + \operatorname{erf}\!\left( \frac{z}{\sqrt{2}} \right) \right]. \quad (A.6) \]
Considering the p value obtained from (A.6) and the significance level α, H0 (no trend) is rejected if p < α; otherwise, the evidence to accept H1 (there is trend) may be regarded as insufficient where necessary. To correct V(S) for the influence of data autocorrelation, the modified variance (i.e., V(S) adjusted using the effective sample size) which can be used to calculate Z is given by [30]
\[ V^{*}(S) = V(S)\left[ 1 + \frac{2}{n(n-1)(n-2)} \sum_{k=1}^{n-1} (n-k)(n-k-1)(n-k-2)\, r_k \right] \quad (A.7) \]
where \(r_k\) is the lag-k serial correlation coefficient of the given series and can be computed using [48]
\[ r_k = \frac{\sum_{i=1}^{n-k} (x_i - \bar{x})(x_{i+k} - \bar{x})}{\sum_{i=1}^{n} (x_i - \bar{x})^2} \quad (A.8) \]
where \(\bar{x}\) is the mean of the series.
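A compact sketch of the MK computation with the tie-corrected variance and the standardized statistic; the two-sided p value uses the standard normal CDF via `math.erf`, and `mann_kendall` is an illustrative name.

```python
from math import sqrt, erf

def sgn(a):
    """sgn(theta) as in the MK test: 1, 0, or -1."""
    return (a > 0) - (a < 0)

def mann_kendall(x):
    """MK statistic S, tie-corrected variance V(S), standardized Z, two-sided p."""
    n = len(x)
    s = sum(sgn(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    # tie correction: subtract t(t-1)(2t+5) over tied groups
    counts = {v: x.count(v) for v in set(x)}
    ties = sum(t * (t - 1) * (2 * t + 5) for t in counts.values())
    var = (n * (n - 1) * (2 * n + 5) - ties) / 18
    z = 0.0 if s == 0 else (s - sgn(s)) / sqrt(var)
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal p value
    return s, var, z, p
```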

A.1.2. Spearman’s Rho Test

Considering \(v_i\) and \(w_i\) as the ranks of the time and data series, respectively, the SMR statistic ρ is given by [9]
\[ \rho = 1 - \frac{6 \sum_{i=1}^{n} (v_i - w_i)^2}{n(n^2 - 1)}. \quad (A.9) \]
When ties exist in the data, the Pearson product-moment correlation coefficient (A.10) of the ranks can be used:
\[ \rho = \frac{\sum_{i=1}^{n} (v_i - \bar{v})(w_i - \bar{w})}{\sqrt{\sum_{i=1}^{n} (v_i - \bar{v})^2 \sum_{i=1}^{n} (w_i - \bar{w})^2}} \quad (A.10) \]
where \(\bar{v}\) and \(\bar{w}\) are the mean values of \(v_i\) and \(w_i\), respectively.

The distribution of ρ was shown to be asymptotically normal with the mean and variance given by (A.11) [8, 9]. As mentioned in [49], even when there are ties in the data, the nice property of ρ lies in the fact that its variance conditional on the set of ranks is still 1/(n − 1) (see [50], p. 477):
\[ E(\rho) = 0, \qquad V(\rho) = \frac{1}{n-1}. \quad (A.11) \]
The standardised SMR test statistic, which follows the standard normal distribution with mean and variance of zero and one, respectively, can be given by [8, 9]
\[ Z = \rho \sqrt{n - 1}. \quad (A.12) \]
Positive and negative Z values indicate upward and downward trends, respectively. The null hypothesis of no trend can be tested based on the standard normal distribution (A.6) by taking Z as the test statistic; H0 (no trend) is rejected if p < α; otherwise, the evidence to accept H1 (there is trend) may be regarded as insufficient where necessary. The variance of ρ for autocorrelated data is given by [18] in terms of the cross-correlations among the ranks of the data and of the time for a given dataset, considering the series to be the equivalent normal variates with Gaussian dependence.
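A sketch of the SMR computation for series without ties, using ρ from (A.9) and Z = ρ√(n − 1); `spearman_trend` is an illustrative name, and with ties the Pearson correlation of ranks (A.10) would replace the ranking shown here.

```python
from math import sqrt, erf

def spearman_trend(x):
    """SMR rho of the data against time, standardized Z, two-sided p.
    Assumes no tied values (eq. (A.9))."""
    n = len(x)
    w = [sorted(x).index(v) + 1 for v in x]   # ranks of the data (no ties)
    v = list(range(1, n + 1))                 # ranks of the time
    rho = 1 - 6 * sum((vi - wi) ** 2 for vi, wi in zip(v, w)) / (n * (n ** 2 - 1))
    z = rho * sqrt(n - 1)
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal p value
    return rho, z, p
```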

A.1.3. Cumulative Rank Difference (CRD) Test

Consider \(e_i\) as the number of times a data point is exceeded and \(t_i\) as the number of times a data point appears within the given sample of size n. The series is nonparametrically rescaled in terms of the difference (A.19) between the exceedance and nonexceedance counts of the data points by [11]
\[ d_i = e_i - \tilde{e}_i \quad (A.19) \]
where \(\tilde{e}_i\) is the replica of \(e_i\) (the nonexceedance count) such that
\[ \tilde{e}_i = n - t_i - e_i. \]
The cumulative sum (c) of the difference (d) in the ranks is obtained using
\[ c_j = \sum_{i=1}^{j} d_i. \quad (A.20) \]
For any hypothetical series, the values of \(e_i\), \(\tilde{e}_i\), \(d_i\), and \(c_i\) can be tabulated at each time step i; note that \(c_n = 0\) for the full sample because the exceedance and nonexceedance counts balance over the whole series. The value of c from (A.20) can be applied to detect trend in both graphical and statistical ways as briefly described next.
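The rescaling and cumulative sum can be sketched as follows, assuming the convention d_i = e_i − ẽ_i with ẽ_i = n − t_i − e_i (chosen here so that a positive trend yields a CRD curve above the reference, consistent with Section 3.6.1); function names are illustrative.

```python
def crd_rescale(x):
    """d_i = e_i - e~_i, where e_i counts exceedances of x_i and
    e~_i = n - t_i - e_i counts nonexceedances (t_i counts appearances)."""
    n = len(x)
    d = []
    for xi in x:
        e = sum(1 for xj in x if xj > xi)    # exceedance count e_i
        t = sum(1 for xj in x if xj == xi)   # appearances t_i (ties)
        d.append(e - (n - t - e))            # e_i - e~_i
    return d

def crd_cumsum(d):
    """Cumulative sum c_j of the rank differences, as in (A.20)."""
    out, c = [], 0
    for di in d:
        c += di
        out.append(c)
    return out
```

For a monotonically increasing series the cumulative sum stays at or above zero and returns to zero at the last point, matching the curve-above-reference behaviour described for positive trends.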

(1) Statistical CRD Test. Statistically, the CRD test statistic T is computed from the c values using (A.21) [11]. Positive and negative trends are shown by T > 0 and T < 0, respectively. The distribution of T is approximately normal with mean zero and variance V(T) for series without ties [12]. The expression for V(T) involves a hyperharmonic series or k-series (i.e., a generalization of the harmonic series) for any positive real number k. Based on the integral test, the k-series diverges for k ≤ 1 and converges for k > 1. In fact, for k > 1, the sum of the k-series evaluated at k becomes the Riemann zeta function. The choice of k for the summation of the k-series tends to be case-specific. For instance, the sum of the k-series is referred to as the Basel problem for k = 2, Apéry’s constant for k = 3, and so forth. Although [12] set the upper limit of k to 4 to obtain a sufficient V(T) estimate for large n, the exact expression can in fact be obtained (see Appendix D). By considering ties in the data, V(T) becomes (A.22), where the measure of ties in the data can be computed using (A.23) and \(t_i\) is as defined in (A.16).

The standardized CRD test statistic, which follows the standard normal distribution with mean and variance of zero and one, respectively, is given by (A.24), where the variance used can be that corrected for the influence of either short- or long-term persistence. To verify H0 (no trend) in terms of Z, the p value is computed using (A.6) with Z taken from (A.24). H0 is rejected if p < α; otherwise, the evidence to accept H1 (there is trend) could be statistically insufficient where necessary.

(a) Correction of Variance Inflation due to Short-Term Persistence. To correct V(T) for the influence of serial correlation, firstly, the slope m of the linear trend in the serially correlated original data \(x_i\) is computed using the method of [35, 36]. Next, the original series is detrended to obtain \(y_i\) using \(y_i = x_i - m \times i\) for 1 ≤ i ≤ n. By using \(y_i\) instead of \(x_i\), the lag-k serial correlation coefficient \(r_k\) is computed in a similar way as in (A.8) with the lag k varying from 1 to (n − 1). At a significance level α similar to that used to conduct the CRD test, the significant values of \(r_k\) are used to compute the CRD test statistic variance correction factor using [12] (A.25). Finally, the corrected variance is obtained as the product of the correction factor and V(T), where V(T) is computed using (A.22) based on the original data \(x_i\).

(b) Correction of Variance Inflation due to Long-Term Persistence. The following steps can be followed based on the detailed information presented in Appendix B:
(i) Using the original data \(x_i\), estimate the linear trend slope m using the method of [35, 36].
(ii) Detrend the original series to obtain \(y_i\) using \(y_i = x_i - m \times i\), where 1 ≤ i ≤ n.
(iii) The Hurst exponent H can be approximated using the detrended series in terms of the generalized Hurst exponent based on the scaling of renormalized q-moments of the distribution. The details of this procedure can be obtained from [51–53]. Alternatively, H can be estimated based on the autocorrelation function for the fractional Gaussian noise model given by [54]
\[ \rho_k = \frac{1}{2}\left( |k+1|^{2H} - 2|k|^{2H} + |k-1|^{2H} \right) \quad (A.26) \]
where \(\rho_k\) is estimated by \(r_k\) as defined in (A.8), with the computation of \(r_k\) based on the detrended series from Step (ii). For instance, when k = 1, (A.26) reduces to \(\rho_1 = 2^{2H-1} - 1\), so that H can be given by
\[ H = \frac{1}{2}\left[ 1 + \log_2 (1 + r_1) \right]. \quad (A.27) \]
However, it is vital to note that the ordinary least squares estimates of H are negatively biased [55], and the bias-corrected H can be computed as given by [56].
(iv) Compute the test statistic variance corrected for the influence of long-term persistence using (A.28). Although (A.28) is valid for H between 0 and 1, for series without significant long-term persistence in this study, the variance was taken as V(T) from (A.22).
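Step (iii) at k = 1 can be sketched by inverting ρ_1 = 2^(2H−1) − 1; the bias correction of [56] is not reproduced here, and `hurst_from_r1` is an illustrative name.

```python
from math import log2

def hurst_from_r1(r1):
    """Solve rho_1 = 2**(2H - 1) - 1 (fGn autocorrelation at lag 1) for H."""
    return 0.5 * (1 + log2(1 + r1))
```

White noise (r1 = 0) gives H = 0.5, while positive lag-1 correlation pushes H above 0.5, signalling long-term persistence.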

(2) Graphical CRD Approach to Identify Changes. The CRD plot (i.e., the cumulative sum c from (A.20) versus the time of the data points) can be used to identify changes in the series over unknown periods as illustrated in Figure 7. The c = 0 line is taken as the reference. Values above or below this reference characterize the subtrends. If there is no trend in the series (Figure 7(a)), the CRD curve crosses (or gets close to) the reference a number of times. However, if the series has an increase in one half and a decrease in the other (Figure 7(d)), two CRD curves will be formed; the first/second curve will be above/below the reference. Furthermore, when the series is dominated by an increase/decrease (Figures 7(b) and 7(c)) in the variable, most of the (if not the entire) CRD curve will be above/below the reference. For further information on how to use the CRD plot to identify other forms of changes, for example, a step jump in the mean of the variable, the reader is referred to [11, 12].

The uncertainty bounds (as illustrated in Figure 3) on the observed changes in the CRD are constructed by employing the techniques of resampling and the percentile confidence interval in a combined way using the following steps:
(i) Equation (A.19) is applied to the original series.
(ii) The jth observation of the original series from Step (i) is selected.
(iii) One at a time, the selected data point from Step (ii) is replaced by the other values from the sample.
(iv) Equation (A.19) is again applied to each of the new series from Step (iii) to generate sets of the d values.
(v) The jth value of d is extracted from each set.
(vi) The extracted values of d are ranked from the highest to the lowest.
(vii) The upper and lower (100 − α)% CI limits for the jth data point from Step (ii) are obtained from the ranked values at the positions corresponding to α/2 and (1 − α/2), respectively [57].
(viii) Using the next, or (j + 1)th, observation, Steps (ii) to (vii) are repeated.
(ix) Finally, Step (viii) is repeated until the (100 − α)% CI limits on all the values of d from Step (i) are obtained.
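The steps above can be sketched as follows, assuming simple index-based percentile limits in Step (vii) (the exact rank formula of [57] is not reproduced) and the rescaling convention d_i = e_i − ẽ_i restated locally for self-containment; function names are illustrative.

```python
def crd_d(x):
    """Rescaling (A.19) assuming d_i = e_i - (n - t_i - e_i) = 2*e_i + t_i - n."""
    n = len(x)
    return [2 * sum(1 for xj in x if xj > xi)
            + sum(1 for xj in x if xj == xi) - n for xi in x]

def crd_pointwise_ci(x, alpha=0.05):
    """Percentile limits on each d_j obtained by replacing the jth point,
    one at a time, with every sample value (Steps (i)-(ix))."""
    n = len(x)
    limits = []
    for j in range(n):
        vals = []
        for v in x:                   # Step (iii): substitute other sample values
            y = list(x)
            y[j] = v
            vals.append(crd_d(y)[j])  # Steps (iv)-(v)
        vals.sort()                   # Step (vi)
        lo = vals[int(alpha / 2 * (n - 1))]
        hi = vals[int((1 - alpha / 2) * (n - 1))]
        limits.append((lo, hi))       # Step (vii)
    return limits
```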

A.2. Comparative Performance of the Trend Tests and Motivation for CRD Approach

Under the condition of a certain limited data structure, the basis of comparability of the performance of the rank-based tests can be briefly given. If the given series does not have tied values, every data point will have \(t_i = 1\) in (A.16), and thus, based on a cursory look at (A.23), the tie measure vanishes. Therefore, so long as the time of the untied observations is in chronological order, V(T) from (A.22) equals the V(ρ) of (A.11). This analogy results from the expectation that T from (A.21) should be equal to ρ of (A.9) for data without ties; therefore, in the same vein, the performance of the SMR and CRD can be comparable. On the other hand, as shown in (A.22) and (A.4), the skewness of T may be comparable to that of S because both the MK and CRD tests apply the correction for ties in the computation of the test statistic variance rather than in the statistic itself, as done for the SMR. Of course, some differences among these nonparametric methods are expected when ties and the influence of persistence on trend analyses are considered.

Given the above background on the comparability of the rank-based trend tests, the question is: why the CRD test? The CRD procedure quantifies the influence of ties in the series in a lumped way and is computationally faster and appreciably simpler than considering groups of tied data points as done for the MK test. This simplicity with respect to ties follows from the fact that the CRD statistic is based on the exceedance and nonexceedance data counts rather than on sequential comparisons of data points as required for the MK test. Moreover, when ties exist in groups, the effect of the interactions among the various groups makes it almost impossible to derive an exact modified variance (e.g., one that accounts for persistence), especially for large data sets with a significant number of tied points [15]. On a different note, the available trend detection methods, including the MK and SMR tests, tend to be applied in a purely statistical way. Graphical diagnostic procedures that make use of the pattern of the partial terms required to compute the overall trend test statistic would enhance comprehension of the nature of the changes within the series. In the same vein, the CRD makes use of several plots for graphical trend diagnoses. The plots include the following:
(a) Rescaled Series Plot (RSP): the data are rescaled as described in Section 3.6.1. The rescaling eliminates the influence of possible outliers on the graphical analysis. Next, temporal convolution is applied to the rescaled data using (C.1) based on a suitable time scale. In the RSP, the rescaled series as well as the aggregates are plotted against the time of observation. A trendline is also fitted to the rescaled data. The RSP can be used to identify whether the trend is generally linear or nonlinear.
Based on the illustration using a synthetic series in Figure 8(a), it is noticeable that the trendline fitted to the entire data set is inadequate to describe the trend because the trend direction is positive up to observation 75 and negative thereafter up to observation 200. A linear trend can only describe the data adequately if the parts with positive and negative trends are analysed separately.
(b) CRD plot: here, the CRD makes use of a plot equivalent to the well-known Brownian bridge to diagnose the temporal changes in the series upon which the test statistic can be computed. Previously, in a number of studies (see, e.g., the Pettitt test [58] and the distribution-free CUSUM test [59]), diagrams based on the concept of the Brownian bridge (i.e., characterization of Brownian motion as a limiting Markov or memoryless process of finite-variance random walks with short-range correlation) were suggested for change-point detection. Instead of considering the sum of the differences in ranks between successive data points as in [58], the CRD accumulates the difference between the exceedance and nonexceedance counts of the data points. Consequently, for the CRD, the mean of the nonparametrically rescaled series equals zero; compared to other applications of the Brownian bridge, this makes it possible to directly separate the periods over which the cumulative variation of events contributes positively or negatively to the trend direction in the series. Further details of this approach can be found in Section A.1.3(2). As shown in Figure 8(b), the CRD plot indicates that the series (from Figure 8(a)) comprises both negative and positive subtrends. Comparatively, the subperiod with the negative subtrend is longer than that with the positive subtrend.
(c) Trend Evolution Plots (TEPs): the CRD test incorporates four ways of subtrend identification through moving windows slid in backward (F-Bck) and forward (F-Fwd) ways.
Again, the equivalent retrograde and progressive analyses of the CRD are not new concepts, given their analogy to the sequential MK test [9]. The main setback in the use of the sequential MK test [9] is that its accuracy is lower than expected because it does not cater for the influence of persistence on the subtrend analyses. For the CRD test, each subseries is considered independently, and the correction of the test-statistic variance for the influence of persistence is applied where necessary. Additionally, the CRD incorporates a moving window of fixed (F-Fix) time slice which can be slid in an overlapping way. Based on the chosen window length, subtrends analysed over various time slices can be indicative of the influence of climate variability on the hydrometeorological variable. As opposed to the use of retrograde and progressive analyses to detect a change point as suggested by [9], the F-Fix, F-Fwd, and F-Bck techniques, as already described in Section 3.5, can be used to rigorously test the null hypothesis that all the subtrends are due to natural randomness. The test-statistic values as well as the thresholds for the trend directions are plotted against the subperiods. The null hypothesis is accepted if the values for F-Fix, F-Fwd, and F-Bck fall within the CI limits. By setting the minimum initial block length for F-Fwd and F-Bck to, say, 3, the TEPs can also be used to supplement the results from the RSP. For the synthetic series shown in Figure 8(a), the null hypothesis (all the subtrends are due to natural randomness) is accepted for the TEPs of Figures 8(e) and 8(f) and rejected in Figure 8(d). Additionally, though not implemented in this study, the fourth procedure for CRD trend evolution entails applying F-Bck and F-Fwd in a combined way by fixing the initial window for the subseries around the centre of the data.
(d) Nonparametric Anomaly Plot (NAP): here, the anomalies computed using the procedure presented in Appendix C are plotted against the time of observation.
If, based on the RSP or TEPs, the overall trend in the series is determined to be nonlinear (e.g., with cyclical variation), the NAP is made using the rescaled series without detrending. If the original series has a generally linear trend (without large changes in subtrend direction), the trend slope is first calculated and used to detrend the data. The detrended series is rescaled as described in Section 3.6.1. Finally, the NAP is obtained using the detrended and rescaled series. As illustrated for the synthetic series in Figure 8(a), the NAP in Figure 8(c), based on a 30-year time slice, shows that the cyclical variation in the series is not due to natural randomness. Such a combined statistical and graphical analysis of the nonlinear (e.g., cyclical) component of the series is not considered by well-known methods such as the MK and SMR tests.
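For illustration, the Brownian-bridge-like accumulation underlying the CRD plot in (b) can be sketched as follows. The exceedance/nonexceedance counting here is an assumed reading of the description above (ties contribute zero, and the sign convention is illustrative), not the paper's exact formulation.

```python
import numpy as np

def exceedance_difference(x):
    """For each observation, the number of sample values it exceeds
    minus the number that exceed it; tied values contribute zero."""
    x = np.asarray(x, dtype=float)
    d = np.empty(len(x))
    for i, v in enumerate(x):
        d[i] = np.sum(v > x) - np.sum(v < x)
    return d

def crd_bridge(x):
    """Cumulative sum of the exceedance/nonexceedance differences:
    a Brownian-bridge-like sequence that always returns to zero,
    since each unequal pair contributes +1 to one count and -1 to
    the other."""
    return np.cumsum(exceedance_difference(x))
```

Periods where the bridge departs from zero in one direction correspond to subperiods whose cumulative variation contributes to one trend direction, which is what the CRD plot exploits for subtrend diagnosis.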

B. Influence of Long-Term Persistence on the Variance of CRD Test Statistic

To assess the effect of long-term persistence on the variance of the CRD test statistic, Monte Carlo simulations were performed. For each selected value of n ranging from 10 to 600 and with the Hurst exponent H varied from 0.5 to 1, a total of 5,000 synthetic series characterized by fractional Gaussian noise were generated. Many algorithms to generate fractional Gaussian noise exist, some of which are given by [60]. Values of H from 0.5 to 1 were considered because below this range the autocorrelation function becomes negative at all lags, a case that is unusual in hydrometeorological practice [60]. For each n and selected H, the distribution of the CRD test statistic T was assessed.
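Many fractional Gaussian noise generators exist [60]; a minimal sketch via Cholesky factorization of the fGn autocovariance matrix (a generic textbook construction, not necessarily the one used in the study; practical only for modest n, and assuming 0.5 <= H < 1) is:

```python
import numpy as np

def fgn_autocov(n, H):
    """Autocovariance of fractional Gaussian noise at lags 0..n-1:
    gamma(k) = 0.5 * (|k+1|^(2H) - 2|k|^(2H) + |k-1|^(2H))."""
    k = np.arange(n, dtype=float)
    return 0.5 * (np.abs(k + 1) ** (2 * H)
                  - 2 * k ** (2 * H)
                  + np.abs(k - 1) ** (2 * H))

def generate_fgn(n, H, rng):
    """One fGn series of length n: build the Toeplitz autocovariance
    matrix, take its lower Cholesky factor, and colour white noise.
    O(n^3), so suitable only for small n."""
    gamma = fgn_autocov(n, H)
    lags = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    L = np.linalg.cholesky(gamma[lags])
    return L @ rng.standard_normal(n)
```

At H = 0.5 the autocovariance vanishes at all nonzero lags, recovering white noise, which is consistent with the original (uncorrected) variance of the test statistic.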

Figure 9 shows that the original variance of the CRD test coincides with that for H = 0.5. As n becomes large for a given H greater than 0.5, the variance inflation factor is expected to increase asymptotically. In the same vein, the variance is inflated when H > 0.5. This influences the rate at which the null hypothesis (no trend) can be rejected at a selected significance level.

Consider the corrected variance of T, which takes into account the influence of long-term persistence on the trend result. A power function was found to adequately fit the relation between this corrected variance and n. The coefficients of the power function were in turn found to follow a polynomial in H (see (A.28)). The variation of the variance of T as a function of both n and H is also presented in Figure 10.
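The actual coefficients are given by (A.28); purely as an illustration of how such a power function V = a * n**b can be fitted, a log-log least-squares fit may be used (the data below are hypothetical, not the study's simulation results):

```python
import numpy as np

def fit_power_law(n_values, variances):
    """Least-squares fit of variance = a * n**b: taking logs gives
    log V = log a + b log n, a straight line fitted with polyfit.
    Returns the coefficients (a, b)."""
    log_n = np.log(np.asarray(n_values, dtype=float))
    log_v = np.log(np.asarray(variances, dtype=float))
    b, log_a = np.polyfit(log_n, log_v, 1)
    return np.exp(log_a), b
```

Repeating such a fit for each simulated H, and then fitting a polynomial in H to the resulting (a, b) pairs, mirrors the two-stage construction described above.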

Figure 11 shows the type I error of the CRD test before and after the variance correction for the influence of long-term persistence. It can be seen that, for a given n, the rejection rate increases considerably when H > 0.5 (Figure 11(a)). For each selected value of H, the rejection rate notably increases as n becomes large. When the variance correction (VC) is applied (see Figure 11(b)), the rejection rates for the various values of n and H approach 5%, that is, the significance level selected for the trend analyses. This demonstrates the acceptability of the VC procedure for trend analyses of data characterized by long-term persistence.
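The effect of the variance correction on the rejection rate can be mimicked in a simple synthetic way: when the raw statistics have an inflated variance but are standardized with the uncorrected variance, the two-sided rejection rate exceeds the nominal level, and standardizing with the inflated variance restores it. The numbers below are illustrative, not the paper's results.

```python
import numpy as np

Z_975 = 1.959963984540054  # 97.5% standard-normal quantile

def rejection_rate(t_values, variance, z=Z_975):
    """Fraction of two-sided rejections at the 5% level when each raw
    statistic is standardized by the supplied variance."""
    t = np.asarray(t_values, dtype=float)
    return float(np.mean(np.abs(t) / np.sqrt(variance) > z))
```

Standardizing with variance 1 a sample whose true variance is 4 roughly triples the type I error relative to the nominal 5%, which is the qualitative pattern of Figure 11(a) versus 11(b).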

Figure 12 shows the distribution of the CRD test statistic before and after the variance correction. In the presence of some trend in the series, the influence of long-term persistence would normally lead to a positive bias of the test statistic when H > 0.5. However, since no linear trend was superimposed on the synthetic series characterized by fractional Gaussian noise, the mean of the test statistic is noticeably zero. As H increases, the spread in the tails of the distribution also increases (Figures 12(a) and 12(b)). The variance correction is expected to increase the peakedness and reduce the spread of the tails of the distribution for all values of H > 0.5. This expectation is demonstrated in Figures 12(c) and 12(d).

C. Anomalies in the Cyclical Component of the Series

To assess the variation in the cyclical component of the series, the null hypothesis (the variability in the nonlinear component of the series is due to natural randomness) can be tested using the following steps, which can be taken as an extension of the nonparametric anomaly indicator method of [57]:
(i) Using the original data, estimate the linear trend slope using the method of [35, 36].
(ii) If the series has a nonlinear trend or is generally characterized by cyclical fluctuations, take the working series equal to the original series (i.e., no detrending). However, if the data have a monotonic trend, detrend the original series by removing the linear trend component so that the linear trend and cyclical components of the series can be analysed independently. If the series is influenced by seasonality or periodicity, deseasonalize the data using the procedure illustrated in Section 3.6.2.
(iii) Assuming that the AR(1) is the correct correlation model for the data, compute the lag-1 serial correlation coefficient of the detrended series using (A.8). If the coefficient is significant, prewhiten the detrended series; otherwise, use the detrended series as it is.
(iv) Rescale the prewhitened series to obtain the equivalent normal variates using the procedure presented in Section 3.6.1.
(v) Select a suitable temporal scale and extract the nonparametric anomalies of the rescaled series over consecutive time slices using (C.1), where each anomaly corresponds to the kth time slice. As mentioned before, in this study the temporal scale was set to 10 years for the analysis of decadal anomalies. The extracted anomalies are plotted against the time of observation. The horizontal zero line is considered the reference above and below which the anomalies characterize the temporal variability. To construct the variability bounds of the anomalies in the form of a CI, the detrended and rescaled series is reshuffled several times (in this study, 10,000 times). Equation (C.1) is applied to each reshuffled series so as to obtain the corresponding sets of anomalies.
Each set of anomalies is sorted in descending order such that the upper and lower limits of the CI are defined by the corresponding upper- and lower-ranked anomalies, respectively. If the anomalies from the original series (before the reshuffling procedure) fall within (outside) the CI, the null hypothesis (natural randomness) is accepted (rejected).
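The slice-anomaly extraction and reshuffling bounds can be sketched as follows. The slice anomaly is assumed here to be the mean of the rescaled values within each time slice, which is a stand-in for (C.1), not its exact form.

```python
import numpy as np

def slice_anomalies(y, block=10):
    """Mean of the rescaled series over consecutive time slices of
    `block` observations (an assumed stand-in for (C.1)); trailing
    observations that do not fill a slice are dropped."""
    y = np.asarray(y, dtype=float)
    n_full = (len(y) // block) * block
    return y[:n_full].reshape(-1, block).mean(axis=1)

def reshuffle_ci(y, block=10, n_shuffles=2000, alpha=0.05, seed=0):
    """Percentile CI bounds on the slice anomalies under the null
    hypothesis of natural randomness, obtained by reshuffling the
    detrended and rescaled series many times."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    sims = np.array([slice_anomalies(rng.permutation(y), block)
                     for _ in range(n_shuffles)])
    lower = np.quantile(sims, alpha / 2, axis=0)
    upper = np.quantile(sims, 1 - alpha / 2, axis=0)
    return lower, upper
```

Anomalies of the original (unshuffled) series falling outside these bounds indicate variability beyond natural randomness, mirroring the acceptance/rejection rule above.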

Though not done in this study, it is recommended that the correlation model be investigated and correctly specified before the prewhitening in Step (iii). This is because the procedure based on the assumed AR(1) may yield inaccurate rejection rates for the variations of the cyclical component of the series if the detrended data follow other correlation models. Generally, for the various correlation models, the prewhitened series can be obtained by [61] as d^(-1)(x - mu*1), where 1 is a vector of ones of size n, mu is the estimated mean of the original series x, and d is an n x n lower triangular matrix corresponding to the particular correlation model. For instance, how to obtain d for the fractional Gaussian noise model can be found in [62]. Apart from the AR(1) and the fractional Gaussian noise model, some of the other correlation models include the autoregressive moving average model, the fractional autoregressive moving average model, and the fractionally integrated autoregressive moving average model.
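A generic sketch of this matrix-based prewhitening, assuming the correlation matrix of the chosen model is available and taking d as its lower Cholesky factor (one common way to obtain such a lower triangular matrix), is:

```python
import numpy as np

def prewhiten(x, corr_matrix):
    """Prewhitening in the d^{-1}(x - mu*1) form: d is the lower
    triangular Cholesky factor of the model correlation matrix
    (AR(1), fGn, ARMA, ...); solving d w = x - mean avoids an
    explicit matrix inverse."""
    x = np.asarray(x, dtype=float)
    d = np.linalg.cholesky(corr_matrix)
    return np.linalg.solve(d, x - x.mean())

def ar1_correlation(n, r1):
    """AR(1) correlation matrix: rho(k) = r1**|k| at lag k."""
    k = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    return r1 ** k
```

With the identity correlation matrix (white noise) the transform reduces to mean subtraction, and swapping in the fGn or ARMA correlation matrix handles those models with no other change to the code.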

D. Variance of CRD Test Statistic

Proof. To show the required variance identity (which holds because of the closed-form sum given by [12]), two methods can be used.
Method 1.
Method 2. Let S be the geometric series, that is, the series with a constant ratio x between successive terms. As the number of terms tends to infinity, the series converges if the absolute value of x is less than 1, in which case S equals 1/(1 - x). Replacing x by the appropriate quantity and simplifying then yields the required result.
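The geometric-series identity invoked in Method 2 is the standard result

```latex
S_n \;=\; \sum_{k=0}^{n} x^k \;=\; \frac{1 - x^{n+1}}{1 - x},
\qquad
\lim_{n \to \infty} S_n \;=\; \frac{1}{1 - x}
\quad \text{for } |x| < 1,
```

after which the substitution for x indicated in the text gives the variance of the CRD test statistic.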

Competing Interests

The author declares no conflict of interest and no competing financial interests.

Acknowledgments

The author acknowledges that the data used in this paper were obtained from the Food and Agriculture Organization [44].