Abstract

Structured spatial point patterns appear in many applications within the natural sciences. The points often record the location of key features, called landmarks, on continuous object boundaries, such as anatomical features on a human face. In other situations, the points may simply be arbitrarily spaced marks along a smooth curve, such as on handwritten numbers. This paper proposes novel exploratory methods for the identification of structure within point datasets. In particular, points are linked together to form curves which estimate the original shape from which the points are the only recorded information. Nonparametric regression methods are applied to polar coordinate variables obtained from the point locations and periodic modelling allows closed curves to be fitted even when data are available on only part of the boundary. Further, the model allows discontinuities to be identified to describe rapid changes in the curves. These generalizations are particularly important when the points represent shapes which are occluded or are intersecting. A range of real-data examples is used to motivate the modelling and to illustrate the flexibility of the approach. The method successfully identifies the underlying structure and its output could also be used as the basis for further analysis.

1. Introduction

Many scientific investigations involve the recording of spatially located data. These data might summarize objects within an image as digitized versions of continuous curves. Once the data are collected, the original context is often lost, and the aim of the analysis is to identify which points are associated with each other and to link the points to reconstruct the original shape. The linked points can then be seen as estimates of continuous curves and object outlines. If the original scene contains multiple structures, then the analysis must also divide the points into groups, with separate curves used to describe the points in each group. It is important to note that this is likely to form only the first part of an analysis and hence can be seen as exploratory data analysis.

This paper looks at the use of smoothing splines to identify and describe geometric patterns in sets of points. It is assumed that the points lie on smooth curves but that a dataset may contain multiple intersecting curves. It is vital that this be done in a nonparametric way so that the widest possible range of patterns can be highlighted. In general, these are closed, or nearly closed, curves and so a transformation to polar coordinates is used to simplify the analysis. Intersecting curves are described by allowing discontinuities in the fitted curves. These procedures are illustrated using simulated data and varied real datasets describing human faces, gorilla skulls, handwritten number 3’s, and an archaeological site. These provide a wide variety of point patterns and reinforce the general usefulness of the proposed methods. For detailed mathematical descriptions and applications of shape-based analysis of points, refer to, for example, Batschelet [1], Bookstein [2], Dryden and Mardia [3], and Lele and Richtsmeier [4].

To allow for this wide variety of possible curves a nonparametric fitting approach, such as splines, can be used (see, e.g., [5, 6]). The flexibility is helpful in the exploratory statistical analysis of a dataset, and the results can be used to suggest parametric equations for later analysis. Nonparametric regression is the general name for a range of curve fitting techniques which make few a priori assumptions about the true shape. In nonparametric regression, several different families of basis functions can be used to describe curves; one of the most common choices for smooth curves is the spline. Splines are generally defined as piecewise polynomials in which curve, or line, segments are joined together to form a continuous function. The spline smoothing approach to nonparametric regression is discussed, for example, by Silverman [7] and extended by Silverman and Wood [8] to deal with branching curves through the definition of a suitable roughness penalty. For an introduction to natural cubic splines see Green and Silverman [9]. For further reviews of spline methods in statistics see Wegman and Wright [10], Silverman [11], Silverman [7], Nychka [12], and Wahba [13].

It is important to note that there are many existing general frameworks for performing spline-based regression. Examples include multivariate adaptive regression splines (MARS) [14] and its more robust generalizations, RMARS [15] and RCMARS [16], with a good overview and comparison given in [17]. These follow the general approach of generalized additive modelling [18] and give a formal framework for fitting and model selection.

A brief introduction to splines, along with the extension to circular data, is given in Section 2. The main results of this paper are given in Section 3, which considers modelling for single curves with occlusions and for multiple intersecting curves. Although simulated examples are used for illustration throughout, the main real-data examples are given in Section 4. A general discussion appears in Section 5.

2. Nonparametric Curve Estimation and Periodic Splines

A smoothing spline is a nonparametric curve estimator that is defined as the solution to a minimization problem. It provides a flexible smooth function for situations in which a simple polynomial or nonlinear regression model is not suitable. For a set of observations $(x_i, y_i)$, $i = 1, \dots, n$, consider a regression problem where the observations are assumed to satisfy
$$y_i = g(x_i) + \varepsilon_i, \qquad i = 1, \dots, n,$$
where the errors $\varepsilon_i$ are uncorrelated with zero mean and constant variance, $\sigma^2$. Then the spline smoothing method uses the data to construct a curve by minimizing the objective function
$$S(g) = \sum_{i=1}^{n} \{y_i - g(x_i)\}^2 + \lambda \int \{g^{(m)}(x)\}^2 \, dx,$$
where $g^{(m)}$ represents the $m$th derivative of $g$, with $m$ being a positive integer, and $\lambda$ is a smoothing parameter. For more details of smoothing splines see, for example, Eubank [19], Eubank [6], and Cantoni and Hastie [20]. An alternative definition of the level of smoothing is in terms of an equivalent degrees of freedom, Df, which describes the amount of information in the data needed to estimate the residuals. The R function smooth.spline [21] allows either $\lambda$ or Df to be specified, but the degrees of freedom have been used in what follows as this gives a more intuitive interpretation.
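As a minimal sketch, fitting a smoothing spline with a specified equivalent degrees of freedom can be carried out in R with smooth.spline; the simulated data and the particular Df values below are purely illustrative:

    ## Illustrative only: smoothing splines with two different equivalent
    ## degrees of freedom (Df) fitted to simulated sine-plus-noise data.
    set.seed(1)
    x <- sort(runif(100, 0, 2 * pi))
    y <- sin(x) + rnorm(100, sd = 0.3)

    fit_smooth <- smooth.spline(x, y, df = 4)    # small Df: smoother curve
    fit_rough  <- smooth.spline(x, y, df = 20)   # large Df: rougher, closer fit

    grid <- seq(0, 2 * pi, length.out = 200)
    plot(x, y, pch = 16, col = "grey")
    lines(predict(fit_smooth, grid), lty = 2)
    lines(predict(fit_rough, grid), lty = 1)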

The above objective function consists of two parts: the first measures the agreement of the function and the data and the second is a roughness penalty reflecting the total curvature—this can also be interpreted in a Bayesian setting as the likelihood and prior. Hence, for given Df, the estimate of $g$ is given by
$$\hat{g}_{\mathrm{Df}} = \arg\min_{g} \left[ \sum_{i=1}^{n} \{y_i - g(x_i)\}^2 + \lambda_{\mathrm{Df}} \int \{g^{(m)}(x)\}^2 \, dx \right].$$
If Df is large then the function is rough but closely fits the data, whereas when Df is small then the function is smooth but may not fit the data well. Here the choice of Df is made automatically using standard leave-one-out cross-validation [22]; that is,
$$\mathrm{CV}(\mathrm{Df}) = \frac{1}{n} \sum_{i=1}^{n} \left\{y_i - \hat{g}_{\mathrm{Df}}^{(-i)}(x_i)\right\}^2,$$
where $\hat{g}_{\mathrm{Df}}^{(-i)}$ is the fitted spline curve, for given parameter Df, with the $i$th data point, $(x_i, y_i)$, being removed. Then $\hat{g}_{\widehat{\mathrm{Df}}}$ is the fitted curve using the cross-validation estimate of the degrees of freedom.
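In R, this leave-one-out cross-validation choice of smoothing is available directly through the cv argument of smooth.spline; a brief sketch, continuing the simulated example above, is:

    ## Automatic choice of smoothing by ordinary leave-one-out cross-validation.
    fit_cv <- smooth.spline(x, y, cv = TRUE)
    fit_cv$df        # equivalent degrees of freedom chosen by cross-validation
    fit_cv$cv.crit   # minimised cross-validation criterion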

Figure 1 shows fitted curves using splines with different degrees of freedom, Df. The true curve is a sine function with added noise. In (a) Df is about half the value found using cross-validation, which is used in (b), with (c) using double the cross-validation degrees of freedom. The small degrees-of-freedom value gives a smoother fitted curve that ignores many of the points in the data, whereas a large value produces a rougher fit which more closely follows the data. The automatic choice gives a very good fit to the data, reproducing the sine curve well.

For this dataset, the periodic nature of the sine function has, so far, been ignored, and it is clear that the extreme left and right do not match exactly. For such datasets, made up of angles or directions, ignoring the periodic nature of the measurements when smoothing may produce unacceptable edge effects. A simple approach for dealing with this issue will now be considered.

Suppose that the dataset is made up of paired angles and distances which will be denoted as $(\theta_i, r_i)$, $i = 1, \dots, n$, for a sample of size $n$. A simple approach for periodic data measured on the interval $(0, 2\pi]$, say, is to repeat the data. That is, for each angle $\theta_i$, the corresponding new angular values are $\theta_i + 2\pi k$, where $k = -K, \dots, K$, and similarly the corresponding radial distances are repeated as $r_i$. This produces a dataset, $\{(\theta_i + 2\pi k, r_i)\}$, with $(2K+1)n$ data values, and even for small $K$ this gives a very good approximation to the full periodic spline. Cogburn and Davis [23] present the theory of periodic smoothing splines with application to the estimation of periodic functions, and the R function periodicSpline from the package splines might provide an alternative computational approach.
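A sketch of this data-repetition device, written as an R function (the name periodic_smooth and the default K = 2 are illustrative choices rather than part of the original description), is:

    ## Periodic smoothing by data repetition: each angle is copied K times
    ## either side (theta + 2*pi*k) with the radii repeated, and an ordinary
    ## smoothing spline is fitted to the enlarged dataset of (2K + 1)n points.
    periodic_smooth <- function(theta, r, K = 2, ...) {
      shift     <- rep(-K:K, each = length(theta))
      theta_rep <- rep(theta, times = 2 * K + 1) + 2 * pi * shift
      r_rep     <- rep(r, times = 2 * K + 1)
      smooth.spline(theta_rep, r_rep, ...)
    }

    ## The fitted curve is then evaluated on the original interval only.
    # fit   <- periodic_smooth(theta, r, cv = TRUE)
    # grid  <- seq(0, 2 * pi, length.out = 200)
    # r_hat <- predict(fit, grid)$y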

As illustration consider Figure 2, which shows fitted curves equivalent to those in Figure 1 but using the repeated data. The solid circles are the original data with open circles representing the copied data points. Similarly, the solid line is the spline fitted curve over the original interval with the dashed line showing the fitted curve over the copied data points. In all cases the fit is better than in Figure 1, with the periodic nature reproduced well, and as before the cross-validation choice of smoothing has produced an excellent reconstruction of the true curve.

Once fitted, a residual sum of squares, RSS, calculated on the original data values can be used as a measure of goodness-of-fit. Here this will be calculated using the radial distances, with definition $\mathrm{RSS} = \sum_{i=1}^{n} \{r_i - \hat{g}(\theta_i)\}^2$, but other versions could be used, for example, the Euclidean distance between fitted and observed points.
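A corresponding sketch of this goodness-of-fit measure, evaluating the fitted curve only at the original (unrepeated) angles, is:

    ## Residual sum of squares on the radial distances at the original angles.
    rss_radial <- function(fit, theta, r) {
      sum((r - predict(fit, theta)$y)^2)
    }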

Of course, the approach could lead to a poor fit if the data are not periodic, but to prevent this it is possible to allow for a discontinuity in the relationship. Here the approach of Gu [24], who considered discontinuities in cubic splines with a jump at a known location, will be extended to the periodic case with an unknown discontinuity location.

Suppose that the points are partitioned into two groups, with the first, $G_1$, containing all the points with angles up to and including the change point and the second, $G_2$, those with angles above it. Assuming that the points are ordered in increasing value of the angle, so that $\theta_1 \le \theta_2 \le \dots \le \theta_n$, then let $(\theta_i, r_i)$, $i = 1, \dots, c$, be the data before the change point and $(\theta_i, r_i)$, $i = c+1, \dots, n$, the remaining data. For change point at $c$ two curves are fitted to the data such that
$$\hat{g}(\theta) = \begin{cases} \hat{g}_1(\theta), & \theta \le \theta_c, \\ \hat{g}_2(\theta), & \theta > \theta_c, \end{cases}$$
where cross-validation is used separately on the two parts leading to two degrees of freedom, $\mathrm{Df}_1$ and $\mathrm{Df}_2$. The significance of the change point could be assessed through a chi-squared test, but here a change point influence graph is considered based on the goodness-of-fit.
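A sketch of the two-part fit and of the search over candidate change point locations (the function name and the minimum group size of 8 are illustrative choices) is:

    ## Two-part fit at a candidate change point c: separate splines, each with
    ## its own cross-validated Df, are fitted before and after the change point
    ## and the combined residual sum of squares is returned.
    changepoint_rss <- function(theta, r, c) {
      ord   <- order(theta)
      theta <- theta[ord]; r <- r[ord]
      idx1  <- seq_len(c); idx2 <- (c + 1):length(theta)
      fit1  <- smooth.spline(theta[idx1], r[idx1], cv = TRUE)
      fit2  <- smooth.spline(theta[idx2], r[idx2], cv = TRUE)
      sum((r[idx1] - predict(fit1, theta[idx1])$y)^2) +
        sum((r[idx2] - predict(fit2, theta[idx2])$y)^2)
    }

    ## Change point influence graph: RSS over the admissible locations.
    # cand  <- 8:(length(theta) - 8)
    # rss   <- sapply(cand, function(c) changepoint_rss(theta, r, c))
    # c_hat <- cand[which.min(rss)]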

Consider the data shown in Figure 3(a), which have an introduced change point. The curve in Figure 3(a) is fitted using a smoothing spline ignoring the change point, while the curves in Figure 3(b) are fitted using smoothing splines with a change point at the estimated location. The degrees of freedom are chosen automatically, for the single curve in (a) and separately for each part of the two-part curve in (b). Figure 3(c) shows the residual sum of squares, RSS, for each possible change point location, with a very clear minimum. The RSS for the two-part fit in (b) is substantially smaller than that for the single curve in (a), providing a much better description of the data. Hence this gives an intuitive approach to finding change points in data automatically.

3. A Model for Multiple Overlapping Curves

3.1. Motivation

To motivate the modelling, consider an unobserved true scene containing a few objects of various shapes and sizes, with possible overlap. However, instead of the scene being recorded faithfully, only partial information is taken and, in particular, only points along the edges of the objects are recorded. These points might be chosen to identify features with special significance or they might simply be at equal or random locations along the edge. Further, due to overlaps, points from the full edge may not be in the dataset. Once collected, there is no record of which points are from which object, and no record is kept of possible object shapes nor even the number of objects. Hence, let the dataset consist of a collection of points, $(x_i, y_i)$, $i = 1, \dots, n$, recorded within some small region in 2D.

Figure 4 shows example datasets which will be analysed later. Panel (a) shows a human face profile with the forehead, eyes, nose, mouth, and chin clearly identifiable on the left—the points on the right locate the back of the neck and the hairline. Panel (b) shows points located along a handwritten number 3 at approximately equally spaced intervals.

3.2. Modelling a Single Curve with Occlusion

Before the periodic smoothing spline approach can be applied, the data must first be transformed to polar coordinates. First define a centre, $(x_0, y_0)$, which can be estimated using the data centroid, and then use the one-to-one transformation
$$r_i = \sqrt{(x_i - x_0)^2 + (y_i - y_0)^2}, \qquad \theta_i = \operatorname{atan2}(y_i - y_0,\, x_i - x_0), \qquad i = 1, \dots, n.$$
This gives rise to an alternative data representation via the centre and polar coordinates $(\theta_i, r_i)$, $i = 1, \dots, n$. Note that although this representation contains $2n + 2$ pieces of information, by construction, the polar coordinate variables are not independent. Of course, other estimates of the centre could be considered, such as the point which minimizes the variance of the radii. In particular, this measure should be more robust to the presence of occlusions.
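A sketch of the transformation, together with the alternative variance-of-radii centre found numerically (the use of atan2 and of optim here are implementation choices rather than part of the original description), is:

    ## Polar coordinates about an estimated centre (data centroid by default).
    to_polar <- function(x, y, centre = c(mean(x), mean(y))) {
      dx <- x - centre[1]; dy <- y - centre[2]
      list(theta  = atan2(dy, dx) %% (2 * pi),  # angles mapped to [0, 2*pi)
           r      = sqrt(dx^2 + dy^2),
           centre = centre)
    }

    ## Alternative centre: the point minimising the variance of the radii.
    var_radii <- function(centre, x, y) {
      var(sqrt((x - centre[1])^2 + (y - centre[2])^2))
    }
    # centre_hat <- optim(c(mean(x), mean(y)), var_radii, x = x, y = y)$par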

To illustrate the transformation and the subsequent spline smoothing, consider the simulated data in Figure 5. Panel (a) shows the given points along with the sample centre marked with a “+”; the points in (b) are the corresponding polar coordinates relative to this centre. Also shown in (b) are the nonperiodic smoothing spline (continuous black line) and the periodic smoothing spline (dashed red line). These are closely aligned except at the extreme angles. Once transformed back into Cartesian coordinates, as shown in panel (c), the slight discrepancies between the fitted splines are more clearly visible. At the far right of the plot, the periodic spline curve is closed and more naturally represents a possible object, whereas the nonperiodic spline is not closed, making it difficult to interpret if this were part of the edge of a real object.

Figure 6 shows a second elliptical dataset but where part of the ellipse is missing. The Cartesian data are shown in (a) and (c), with the polar transformed data in (b). Panel (b) shows the nonperiodic spline and the periodic spline with dramatic differences, which are even more obvious when the fitted curves are transformed back into Cartesian coordinates, as shown in panel (c). The periodic smoothing spline has done a very good job of interpolating the missing part of the curve and the results can easily be relied upon in further analysis. In particular, slight changes in the position of a few critical points will lead to very different shapes for the nonperiodic spline.

To summarize, application of smoothing splines to periodic point data has proved very successful. The modification of the duplicated data is a simple, yet effective way to create closed curves and to interpolate where data are missing. The approach has provided a robust and informative reconstruction of the unknown curve from the data.

3.3. Modelling Multiple Intersecting Curves

To allow for intersecting and overlapping curves the points are partitioned into $K$ groups, $G_1, \dots, G_K$, where $K \ge 1$. That is, $\{1, \dots, n\} = G_1 \cup \dots \cup G_K$ with $G_j \cap G_l = \emptyset$ when $j \ne l$. To record group membership a matrix $Z = \{z_{ik}\}$ is defined, where $z_{ik} = 1$ if point $i$ belongs to group $k$ and $z_{ik} = 0$ otherwise. Then $\sum_{k=1}^{K} z_{ik} = 1$ and $\sum_{i=1}^{n} z_{ik} = n_k$, where $n_k$ is the number of points in the $k$th group; that is, $\sum_{k=1}^{K} n_k = n$. For each group, working in polar coordinates, there is a centre, $(x_0^{(k)}, y_0^{(k)})$, and coordinates relative to the centre, $(\theta_i^{(k)}, r_i^{(k)})$, with the full set of parameters denoted as $\Theta$. The corresponding Cartesian coordinates can be written as $(x_i, y_i)$, with
$$x_i = \sum_{k=1}^{K} z_{ik}\left\{x_0^{(k)} + r_i^{(k)} \cos\theta_i^{(k)}\right\}, \qquad y_i = \sum_{k=1}^{K} z_{ik}\left\{y_0^{(k)} + r_i^{(k)} \sin\theta_i^{(k)}\right\},$$
and the full collection of data as $\{(x_i, y_i) : i = 1, \dots, n\}$. Further, it is assumed that the point locations are recorded with error giving observed measurements
$$X_i = x_i + \epsilon_i, \qquad Y_i = y_i + \eta_i,$$
where $\epsilon_i$ and $\eta_i$ are independent Gaussian random variables with zero mean and constant variance $\sigma^2$.
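As an illustrative sketch, data can be simulated from this measurement model; the two radius functions, centres, and noise level below are arbitrary choices made only to show the structure of the model:

    ## Simulation from the measurement model: two groups, each with a centre,
    ## a radius function g_k(theta), and independent Gaussian errors on the
    ## observed Cartesian coordinates.
    set.seed(2)
    n1 <- 40; n2 <- 40; sigma <- 0.05
    theta1 <- sort(runif(n1, 0, 2 * pi))
    theta2 <- sort(runif(n2, 0, 2 * pi))
    r1 <- 1 + 0.2 * cos(2 * theta1)          # g_1: slightly elliptical outline
    r2 <- rep(0.6, n2)                       # g_2: circular outline
    c1 <- c(0, 0); c2 <- c(1.2, 0.4)

    X <- c(c1[1] + r1 * cos(theta1), c2[1] + r2 * cos(theta2)) + rnorm(n1 + n2, sd = sigma)
    Y <- c(c1[2] + r1 * sin(theta1), c2[2] + r2 * sin(theta2)) + rnorm(n1 + n2, sd = sigma)
    Z <- cbind(rep(c(1, 0), c(n1, n2)), rep(c(0, 1), c(n1, n2)))  # membership matrix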

In what follows the full dataset will, without further explanation, be referred to using either the Cartesian coordinates together with the group membership or, equivalently but without explicit reference to the group membership, the polar coordinates and group centres, as is most convenient and intuitive.

3.4. Estimation with Multiple Intersecting Curves

Now consider estimation of the model unknowns from observed data. Start by supposing that a dataset is available and that the group membership information is intact; then the group centres could be estimated as
$$\hat{x}_0^{(k)} = \frac{1}{n_k} \sum_{i=1}^{n} z_{ik} x_i, \qquad \hat{y}_0^{(k)} = \frac{1}{n_k} \sum_{i=1}^{n} z_{ik} y_i, \qquad k = 1, \dots, K,$$
and, although some of these are unimportant, the corresponding polar coordinate representation of point $i$ relative to group centre $k$ is
$$r_{ik} = \sqrt{(x_i - \hat{x}_0^{(k)})^2 + (y_i - \hat{y}_0^{(k)})^2}, \qquad \theta_{ik} = \operatorname{atan2}(y_i - \hat{y}_0^{(k)},\, x_i - \hat{x}_0^{(k)}),$$
where $i = 1, \dots, n$ and $k = 1, \dots, K$. The overall residual sum of squares is then the sum of the separate components
$$\mathrm{RSS} = \sum_{k=1}^{K} \mathrm{RSS}_k, \qquad \mathrm{RSS}_k = \sum_{i=1}^{n} z_{ik} \left\{r_{ik} - \hat{g}_k(\theta_{ik})\right\}^2,$$
where $\hat{g}_k$ is the periodic smoothing spline fitted to the points in group $k$.
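A sketch of this calculation, reusing the to_polar, periodic_smooth, and rss_radial sketches given earlier, is:

    ## Overall RSS for a given membership matrix Z: each group gets its own
    ## centroid centre, polar transformation, and periodic smoothing spline,
    ## and the group residual sums of squares are added.
    overall_rss <- function(x, y, Z) {
      total <- 0
      for (k in seq_len(ncol(Z))) {
        in_k  <- Z[, k] == 1
        pol   <- to_polar(x[in_k], y[in_k])
        fit_k <- periodic_smooth(pol$theta, pol$r, cv = TRUE)
        total <- total + rss_radial(fit_k, pol$theta, pol$r)
      }
      total
    }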

Now consider the case when the group membership is unknown and must be inferred from the data. The aim is to find linked points by fitting curves. Some datasets contain more than one curve and some contain intersecting curves, and so classifying the points into groups may help to fit curves which correctly represent the data.

In general, this can be thought of as a change point problem, as already discussed, to address the lack of stationarity in the data. A change point occurs at some point in the data if all of the values up to and including it share a common curve while all those after the change point share another. This is exactly the same situation as the discussion in Section 2, and hence the same method of solution is applied.
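A sketch of this search, assuming the points are supplied in order along the curve and using an illustrative minimum group size, is:

    ## Change point search for an unknown two-group membership: every
    ## contiguous split of the ordered points is scored by the overall RSS of
    ## the two resulting groups, and the minimiser gives the partition.
    partition_by_rss <- function(x, y, min_pts = 8) {
      n    <- length(x)
      cand <- min_pts:(n - min_pts)
      rss  <- sapply(cand, function(c) {
        Z <- cbind(rep(c(1, 0), c(c, n - c)), rep(c(0, 1), c(c, n - c)))
        overall_rss(x, y, Z)
      })
      list(change = cand[which.min(rss)], rss = rss)
    }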

4. Application to Real Data

4.1. General

The previous sections have illustrated the proposed exploratory data analysis tools on simulated examples, whereas in this section the success of the approach is demonstrated on a varied range of real datasets. The aim is not to construct formal equations to define the shapes but rather to stimulate further analyses.

4.2. Example  1: Face Data

The first experiment is conducted on data extracted from the human face [25] in a study looking at changes in shape due to growth in children. Figure 7(a) shows the data with lines joining the points; then (b) shows the points transformed to polar coordinates along with fitted spline curves. Figure 7(c) shows the dataset with the back-transformed fitted values, where the solid curve shows those from the standard spline and the dotted curve shows those from the periodic spline. It is clear from the fitted curves that there is not much difference between the periodic and the standard smoothing splines. Both produce well-fitted curves for the face. It is worth noting that the fitted curve can be evaluated at arbitrarily close locations, not only at the data points, and hence a smoothly interpolated curve can be drawn.

4.3. Example  2: Gorilla Skulls

This dataset, taken from Dryden and Mardia [3], is composed of 8 anatomical landmarks from the skulls of 29 male and 30 female gorillas. A landmark is defined as a point of correspondence on each object that matches between and within populations [3]. Figure 8(a) shows a schematic diagram of a typical skull with the landmarks indicated.

Figure 8(c) shows the landmarks for one of the male gorillas, with Figure 8(b) showing the corresponding points in polar coordinates along with the fitted spline curve and Figure 8(c) the curve after back-transforming. For both, the fits are good but at the expense of low smoothing in the spline. This fitting procedure was repeated for the other gorilla skulls and, surprisingly, the smoothed curves give good summaries, allowing the skulls to be easily categorised into four main groups: two covering mainly male skulls, which are rather elongated, and two covering mainly female skulls, which appear more rounded. The males lead to generally larger values of the degrees of freedom than the females. In fact, the automatic choice of the degrees-of-freedom parameter can be used as a simple discrimination variable, giving only 8 out of 59 incorrectly classified skulls. It is important to note that this was not a preconceived discriminator but was identified by the exploratory analysis. This has highlighted the usefulness of simple and flexible tools as a preliminary step in a more wide-ranging investigation.

4.4. Example  3: The Number 3

Another dataset, again taken from Dryden and Mardia [3], is made up of 13 landmarks from 30 handwritten number 3’s; see Figure 9(a). Suppose the data are divided into two subsets. The best partition is made according to the minimum value of the overall residual sum of squares, RSS, which is displayed in panel (c). Each subset is transformed to polar coordinates using the different centres marked “+” in panel (a). Each subset is indicated by different plotting marks, along with the fitted spline curves, in panel (b), with the back-transformed fitted curves in panel (a). Clearly, this has described the two-part curves very well. Again, this demonstrates the flexibility of the procedure.

4.5. Example  4: Archaeological Site Data

The data in Figure 10(a) shows part of a typical image dataset (supplied by Alistair Marshall of the Guiting Power Amenity Trust; see Aykroyd et al. [26] for details) from a magnetic survey of an archaeological site. As well as linear features, which represent ditches, there are also several drifts of pits, but blurring and noise tend to camouflage the exact locations. Panel (b) shows the locations of some of these pits, appearing as small circles, and panel (c) shows the corresponding polar coordinates relative to the two data centres (marked “+” in (b)). According to the minimum value of the residual sum of squares, RSS, the observations can be classified automatically into two groups.

The data centres are calculated for each subset; the small circles are the data in the first subset and a second plotting symbol marks the data in the second subset, with the fitted curves plotted in Figure 10(c). Then the fitted curves are back-transformed into Cartesian coordinates as shown in panel (b). The solid curve is for the first subset while the dotted curve is for the second subset. The aim of the analysis is to identify which points are associated with each other and to fit curves to the points, and this has been achieved well. The resulting linked points might then form part of further analysis or aid physical excavation.

5. Discussion

Making sense of clouds of points, apparently randomly placed across a 2D region, is a key task in many statistical investigations. When the points are recorded without additional information, the first task is to infer structure by linking points using a data-driven approach. This paper has proposed and investigated a simple, yet effective method based on change point identification and nonparametric spline smoothing. It provides an intuitive exploratory tool to identify patterns in the point locations. When it is assumed that the structures form lines and curves, the change points divide the data into subsets, with the splines providing a flexible method to infer the shape of the structures. The method has easily dealt with occlusions and intersections in scenes with multiple curves. Similar results might be achieved by applying more general modelling approaches, such as MARS, RMARS, and RCMARS (for details see, e.g., [17]), but we believe that a more straightforward and intuitive approach can have equal impact by bringing a range of easy-to-use tools to a wider audience. Further, for all users the methods considered here can be used to suggest further analyses based on more sophisticated approaches.

There is scope for extending the approach to include larger numbers of curves where it is not possible to divide the curves with a single change point. The nature of the problem is closely related to classification where the group membership is missing. This strongly suggests that a probabilistic approach might be considered based on statistical distribution models. This would then fit into the general framework where the EM algorithm has proven very useful. Also, there is a need to extend the approach to deal with unordered points and with shapes which are not star-shaped. These are areas of possible future work. Further, it is of interest to develop a similar procedure which would allow more formal modelling and model selection, perhaps following the approach of generalized additive modelling [18].

The applications are many and varied, with illustrative examples of the method where the data points are anatomical landmarks defined by geometrical features, equally spaced but blindly placed points along smooth curves, and extreme intensity points in grey-scale images. Further, the results of the analysis have provided new variables which could be the starting point for other analyses. Hence there is potential for this to be a valuable exploratory data analysis method in the tool-kit of applied statisticians and applied scientists.

Competing Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.