Journal of Optimization

Volume 2013, Article ID 345287, 15 pages

http://dx.doi.org/10.1155/2013/345287

## PRO: A Novel Approach to Precision and Reliability Optimization Based Dominant Point Detection

Dilip K. Prasad

School of Computing, National University of Singapore, Singapore 117417

Received 5 June 2013; Revised 21 July 2013; Accepted 2 August 2013

Academic Editor: Manuel Lozano

Copyright © 2013 Dilip K. Prasad. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

A novel method that uses both the local and the global nature of fit for dominant point detection is proposed. Most other methods use only the local nature of fit to detect dominant points. The proposed method uses simple metrics, namely, precision (local nature of fit) and reliability (global nature of fit), as the optimization goals for detecting the dominant points. Depending on the desired level of fitting (very fine or crude), the thresholds for precision and reliability can be chosen in a very simple manner. Various line fitting algorithms are extensively compared using metrics such as precision, reliability, figure of merit, integral square error, and dimensionality reduction, benchmarked on publicly available and widely used datasets (Caltech 101, Caltech 256, and Pascal (2007, 2008, 2009, 2010) datasets) comprising 102628 images. Such work is especially useful for segmentation, shape representation, activity recognition, and robust edge feature extraction in object detection and recognition problems.

#### 1. Introduction

In many applications, boundaries are represented using polygonal approximation [1–8]. The problem of dominant point detection is to determine, from a digital curve alone, the points that define such a representation. This representation reduces the memory and computational complexity of storing and processing digital curves and helps in the determination of geometrical properties like inflexion points, perimeter, and tangents. It is useful for topological representation, character recognition, segmentation, and contour feature extraction in computer vision applications. Further, it reduces the problems of digitization and related noise issues.

The problem of fitting lines on curves (including dominant point detection) is quite old. The method of Teh and Chin [9] relies primarily on the accurate determination of the support region, based on chord length and the perpendicular distance of the pixels from the chords, to determine the dominant points. Ansari and Huang [10] proposed a method in which a support region is assigned to each boundary point based on its local properties; a combination of Gaussian filtering and a significance measure is then used at each pixel to identify the dominant points. Cronin’s [11] method locally determines the support region for every pixel using a non-uniform significance measure criterion. B. K. Ray and K. S. Ray [12] proposed a k-cosine-transform based method to determine the support region. Sarkar [13] proposed a purely chain-code-manipulation based method for determining the dominant points, for which the chain code is sufficient and the exact coordinates of the pixels are not necessary.

This problem remains relevant in the current era as well. Some of the recent dominant point detection methods are Masood [14] and its modification [15], Carmona-Poyato et al. [16], and Nguyen and Debled-Rennesson [17]. These methods have already shown considerable improvements over earlier dominant point detection or polygonal approximation methods. However, all of these methods, except that of Carmona-Poyato et al. [16], use local properties of fit, like the maximum distance (deviation) of the pixels on the digital curve from the fitted polygon. Carmona-Poyato et al. [16], on the other hand, use a ratio that incorporates the quality of the global fit instead of the local fit.

We highlight the reasons for the continued relevance of this problem in the current era. Even though many algorithms have been developed, the measures or metrics used to compare and benchmark the various algorithms were not effective, as shown in [18, 19]. Researchers tried absolute measures such as compression ratio, integral square error, figure of merit, zero norm, and infinity norm; many studies proved that such metrics fail to represent the quality of fit in one manner or another, and the reason for this was not fully understood. Recently, using very simple metrics, namely, precision and reliability measures, it was shown that there is a perennial conflict between the quality of fit at the local scale (precision at the level of a few pixels) and at the global scale (reliability at the level of the complete curve) [20–22]. This duality of precision/reliability haunts most fitting algorithms: the reliability of the fitting decreases as the precision of fitting increases and vice versa [20, 21, 23]. For this reason, most absolute measures fail to quantify the quality of fit properly. Given the conflicting global and local natures of fit, it is better to optimize both of them simultaneously; the present work is, to our knowledge, the first attempt towards this aim. In this paper, we present three contributions.

(i) The proposed algorithm provides the user with good flexibility regarding the nature of fit. For example, the user may choose a very close fit, which closely follows the digital curve and retains all the small perturbations in the curve, or a curvature-following fit, which removes the local effects (due to noise, etc.) and retains the large-scale features of the digital curve.

(ii) The precision and reliability measures are extended to digital curves. In [20], these measures were defined only for straight line segments; they are now extended as measures for fitting line segments on digital curves.

(iii) An important contribution of the proposed work is the numerical experiments. A total of 102628 images from 10 datasets are used for analysis [24–32]. These datasets are chosen because they are benchmark datasets in the field of computer vision, and the research and applications using them often employ dominant point detection as a fundamental preprocessing step.

We highlight that the focus of this paper is to present a method that gives the user the flexibility to choose the nature of fit very easily using an intuitive and simple control parameter. References [33, 34] are examples of practical problems that need such flexibility. Thus the aim of this paper is quite different from [35–37], where the aim was to develop a framework (not a method) to make standard methods nonparametric and stable within the digitization errors.

#### 2. The Basic Framework of Line Fitting

The basic framework is similar to the dominant point detection methods proposed by Lowe [38] and Ramer-Douglas-Peucker [39, 40] (referred to as the RDP method). We highlight that the methods of Lowe [38] and Ramer-Douglas-Peucker [39, 40] are not exactly the same; however, both are based on splitting at the point of maximum deviation. Lowe [38] has an additional merge stage and uses a curvature parameter, which makes it relatively less dependent on the control parameter. Here, only the main concept and the splitting stage are of relevance.

Let us consider a digital curve $e = \{P_1, P_2, \ldots, P_N\}$, where $P_i = (x_i, y_i)$ is the $i$th edge pixel in the digital curve $e$. The line passing through the pair of pixels $P_1$ and $P_N$ is given by

$$x(y_1 - y_N) - y(x_1 - x_N) + (x_1 y_N - x_N y_1) = 0. \tag{1}$$

Then the deviation $d_i$ of a pixel $P_i \in e$ from the line passing through the pair $(P_1, P_N)$ is given by

$$d_i = \frac{\left| x_i(y_1 - y_N) - y_i(x_1 - x_N) + (x_1 y_N - x_N y_1) \right|}{\sqrt{(y_1 - y_N)^2 + (x_1 - x_N)^2}}. \tag{2}$$

Accordingly, the pixel with the maximum deviation can be found; let it be denoted by $P_{\max}$. Then, considering the pairs $(P_1, P_{\max})$ and $(P_{\max}, P_N)$, we find two new pixels of maximum deviation from $e$ using the concept expressed in (1) and (2). It is evident that the maximum deviation decreases as one chooses newer pixels of maximum deviation between a pair. This process can be repeated till a certain condition (depending upon the method) is satisfied by all the line segments. This condition will be referred to as the optimization goal for ease of reference. The condition used by RDP [38–40] is that, for each line segment, the maximum deviation of the pixels contained in its corresponding edge segment is less than a certain tolerance value:

$$\max_i d_i \leq d_{\mathrm{tol}}, \tag{3}$$

where $d_{\mathrm{tol}}$ is the chosen threshold and is typically a few pixels.
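The splitting stage described above can be sketched in Python. This is a minimal illustration of the recursive split at the point of maximum deviation; the function and variable names (`deviation`, `rdp`, `dtol`) are ours, not the paper's:

```python
import math

def deviation(p, a, b):
    """Deviation of pixel p from the line through pixels a and b, per eqs. (1)-(2)."""
    (x, y), (x1, y1), (x2, y2) = p, a, b
    num = abs(x * (y1 - y2) - y * (x1 - x2) + (x1 * y2 - x2 * y1))
    den = math.hypot(y1 - y2, x1 - x2)
    return num / den if den else math.hypot(x - x1, y - y1)

def rdp(points, dtol):
    """Split at the pixel of maximum deviation until every segment satisfies
    the RDP optimization goal max_i d_i <= dtol (eq. (3))."""
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        d = deviation(points[i], points[0], points[-1])
        if d > dmax:
            dmax, idx = d, i
    if dmax > dtol:
        # Recurse on the two sub-curves sharing the point of maximum deviation;
        # the shared split pixel is emitted only once.
        return rdp(points[: idx + 1], dtol)[:-1] + rdp(points[idx:], dtol)
    return [points[0], points[-1]]
```

Applied to a digital curve, the returned points are the detected dominant points; pixels whose deviation never exceeds the tolerance are discarded.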

#### 3. Precision- and Reliability-Based Optimization (PRO)

The precision and reliability measures were defined for a line segment in [20]. Suppose that, for a sequence of connected pixels $P_1$ to $P_N$, a line fits the pixels perfectly. Then the coefficients of the line, $a$ and $b$, must satisfy

$$\mathbf{A} \begin{bmatrix} a \\ b \end{bmatrix} = \mathbf{1}, \tag{4}$$

where $\mathbf{A} = [\mathbf{x} \;\; \mathbf{y}]$, $\mathbf{x} = [x_1 \cdots x_N]^{\mathrm{T}}$, $\mathbf{y} = [y_1 \cdots y_N]^{\mathrm{T}}$, the superscript $\mathrm{T}$ denotes the transpose operation, and $\mathbf{1}$ is a column matrix containing $N$ rows, whose every element is 1. Then, using (4) as the ideal case, the precision of fitting can be modeled using the normalized residue

$$p = \frac{\left\| \mathbf{A} [a \;\; b]^{\mathrm{T}} - \mathbf{1} \right\|}{\left\| \mathbf{1} \right\|}, \tag{5}$$

where $\left\| \cdot \right\|$ represents the Euclidean norm of vectors. Since it considers the residue for each pixel, it is characteristic of the local nature of the fitting alone. We highlight that the precision metric is closely related to the conventionally used integral square error (see (16)). However, the present form in (5) makes it comparable with the reliability metric presented next, such that the same control parameter can be used for both the precision and the reliability metrics.

On the other hand, for the global characteristics of fit, another measure, called the reliability measure, is needed. Generally, reliability of a fit refers to how well the fit is expected to satisfy at least two conditions:

(i) the fit should be valid for a sufficiently large region (or, in this case, a long curve);

(ii) it should not be sensitive to occasional spurious large deviations in the edge.

A combination of both these properties can be sought by defining the reliability measure as

$$r = \frac{\sum_i \left| d_i \right|}{s_{\max}}, \tag{6}$$

where $d_i$ is the deviation of the $i$th pixel from the fitted line and $s_{\max}$ is the maximum Euclidean distance between any pair of pixels [20]. Here, $\left| \cdot \right|$ represents the magnitude or the absolute value.

In RDP, when a point on the curve is selected as a dominant point, the curve is segmented into two smaller curves, and two line segments intersecting at the selected dominant point are used to represent the original curve. The two line segments are associated directly with the two smaller curves. Thus, for each pair of line segment and its associated curve, one can compute the precision and reliability measures $p$ and $r$ using (5) and (6). Then, for each segmented curve, if the precision and reliability measures are not below a certain threshold $P_{\mathrm{tol}}$, that is, if

$$\max(p, r) > P_{\mathrm{tol}}, \tag{7}$$

then the segmented curve is further segmented into two curves at the point of maximum deviation. Two independent lines are then fitted on these two subcurves using the least squares method, and the whole procedure is performed recursively until the precision and reliability measures for each line fit on each subcurve are less than the chosen threshold. The chosen tolerance value $P_{\mathrm{tol}}$ is typically less than 1. The pseudocode for the PRO algorithm is provided in Algorithm 1.
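As a concrete sketch of the PRO optimization goal, the following Python illustration replaces the maximum-deviation test with joint precision and reliability tests. The total-least-squares line fit, the exact normalizations, and all names (`fit_line`, `chord_deviation`, `pro`, `tol`) are our assumptions for illustration and may differ from the paper's equations (5)-(7) and Algorithm 1:

```python
import math

def fit_line(pts):
    """Total-least-squares line fit; returns unit normal (a, b) and offset c
    of the line a*x + b*y = c (an assumed fitting method, for illustration)."""
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    sxx = sum((x - mx) ** 2 for x, _ in pts)
    syy = sum((y - my) ** 2 for _, y in pts)
    sxy = sum((x - mx) * (y - my) for x, y in pts)
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)  # direction of principal axis
    a, b = -math.sin(theta), math.cos(theta)      # unit normal to that axis
    return a, b, a * mx + b * my

def chord_deviation(p, a, b):
    """Deviation of pixel p from the chord joining pixels a and b (cf. eq. (2))."""
    (x, y), (x1, y1), (x2, y2) = p, a, b
    den = math.hypot(y1 - y2, x1 - x2)
    num = abs(x * (y1 - y2) - y * (x1 - x2) + (x1 * y2 - x2 * y1))
    return num / den if den else 0.0

def pro(pts, tol):
    """Recursively split the curve until every fitted segment meets BOTH the
    precision and the reliability goals (the optimization goal of eq. (7))."""
    if len(pts) <= 2:
        return list(pts)
    a, b, c = fit_line(pts)
    d = [abs(a * x + b * y - c) for x, y in pts]           # residue per pixel
    precision = math.sqrt(sum(t * t for t in d) / len(d))   # local nature of fit
    smax = max(math.dist(p, q) for p in pts for q in pts)   # longest pixel pair
    reliability = sum(d) / smax if smax else 0.0            # global nature of fit
    if precision <= tol and reliability <= tol:
        return [pts[0], pts[-1]]
    # Split at the pixel of maximum deviation from the chord, as in RDP.
    i = max(range(1, len(pts) - 1),
            key=lambda k: chord_deviation(pts[k], pts[0], pts[-1]))
    return pro(pts[: i + 1], tol)[:-1] + pro(pts[i:], tol)
```

With a small tolerance (e.g., 0.2) the fit hugs the curve; with a tolerance near 1, small deviations are smoothed over while large-scale curvature changes are retained.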

In the proposed modification, the same algorithm as RDP is used to fit the curves, but the optimization goal is not based on the maximum deviation (the use of (3) as the optimization goal). Instead, it is based on the precision and reliability measures (the use of (7) as the optimization goal), which gives the user greater freedom in determining the nature of fit. For example, using a tolerance close to 0 (like 0.1 or 0.2) results in a very close fit on the curve, where the lines follow even small deviations in the curvature. This may be used to achieve a high-fidelity fit, where one needs to retain even the smallest deviations in the data or where one studies the nature of noise in the curve itself. Using a tolerance close to 1 (like 0.9 or 1.0) results in a line fit that smooths over small spurious deviations and retains all significant curvature changes.

#### 4. Precision and Reliability Measures for Fitting Line Segments on Digital Curves

This section extends the precision and reliability measures expressed in (5) and (6) to the case of digital curves. Suppose $M$ line segments are fitted upon a digital curve. Then the net precision measure for the digital curve is defined as in (8) [21] by combining the precision measures of all the segments, where $p_j$ is the precision measure of the $j$th line segment, defined using (5). The net reliability measure of the digital curve is defined as in (9) [21]:

$$R = \frac{\sum_{j=1}^{M} \sum_i \left| d_{ij} \right|}{\sum_{j=1}^{M} s_{\max, j}}, \tag{9}$$

where $d_{ij}$ is the deviation of the $i$th pixel of the $j$th segment and $s_{\max, j}$ corresponds to $s_{\max}$, defined after (6), for the $j$th line segment.

The previous definitions in the context of a digital curve are consistent with the concepts of precision and reliability presented in [20]. Since precision is a local property of fit, it can be defined only for the individual line segments, and therefore the net precision accounts for the precision of every line segment. On the other hand, reliability is a characteristic of the complete digital curve. Thus, the net reliability measure does not consider the reliability measures of the individual segments; instead, the individual constituents of the reliability measure (the numerator and the denominator in (9)) are computed using all the segments.
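The aggregation described above can be sketched as follows. The per-segment input format and the averaging used for the net precision are illustrative assumptions (the paper's eqs. (8) and (9) give the exact forms):

```python
import math

def net_measures(segments):
    """Aggregate per-segment fit measures into net measures for one digital curve.

    `segments` is a list of (residues, smax) pairs, one per fitted line segment:
    residues - absolute deviations of the segment's pixels from its fitted line,
    smax     - maximum Euclidean distance between any two pixels of the segment.
    """
    # Net precision: combine the per-segment normalized residues.
    # (Averaging is an illustrative assumption; the paper's eq. (8) defines
    # the exact combination.)
    per_seg = [math.sqrt(sum(d * d for d in res) / len(res))
               for res, _ in segments]
    net_precision = sum(per_seg) / len(per_seg)

    # Net reliability (eq. (9)): the numerator (total absolute residue) and the
    # denominator (total segment extent) are each summed over ALL segments first.
    num = sum(sum(res) for res, _ in segments)
    den = sum(smax for _, smax in segments)
    net_reliability = num / den if den else 0.0
    return net_precision, net_reliability
```

Note how the reliability aggregation pools numerators and denominators across segments rather than averaging per-segment ratios, matching the global character of the measure.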

#### 5. Numerical Experiments

In this section, three numerical experiments are presented. The first experiment (Section 5.3) uses 17 very small images, each with one digital curve, so that the performance of the various methods can be seen close up for various curvature conditions of the digital curves. In the second experiment (Section 5.4), the traditional examples used in the research on dominant point detection methods [13, 14, 16, 17, 41–43] are used for benchmarking the proposed method against Masood [14], Carmona-Poyato et al. [16], Marji and Siy [41], Teh and Chin [9], Ansari and Huang [10], B. K. Ray and K. S. Ray [12], B. K. Ray and K. S. Ray [42], Arcelli and Ramella [43], Sarkar [13], and Cronin [11]. In the third experiment (Section 5.5), 10 datasets with a total of 102628 images are considered to study the performance of the various methods on large datasets. The algorithms used for comparison are described in Section 5.1, and the performance parameters used to quantitatively compare the methods are described in Section 5.2.

##### 5.1. Algorithms for Comparison

For ease of reference, the following nomenclature is used for the line fitting methods. The method proposed by RDP [39, 40], as discussed in Section 2, is denoted by RDP followed by its tolerance value; for example, RDP2 refers to the use of a tolerance value of 2 pixels for the maximum deviation in the RDP algorithm. The proposed method is similarly denoted by PRO followed by its tolerance value; for example, PRO0.2 refers to the use of a tolerance of 0.2 in the PRO method. The method proposed by Carmona-Poyato et al. [16] is referred to as Carmona and uses 0.4 as the value of the control parameter used in [16]. The method proposed by Masood [14] is referred to as Masood and uses 0.9 as the value of its control parameter, the maximum tolerable deviation of the pixels from the fitted lines. Carmona and Masood are chosen for comparison for two reasons. First, they are quite recent and represent the state of the art for the dominant point detection problem. Second, while Masood uses the local nature of fit as the main criterion for dominant point detection, Carmona uses the global nature of fit as the main criterion. Thus, it is interesting to compare the performance of these methods against PRO, which considers both the local and the global natures of fit. A total of 8 algorithms, namely, PRO0.2, PRO0.6, PRO1.0, RDP1, RDP2, RDP3, Carmona, and Masood, are considered for detailed comparison in all the experiments.

##### 5.2. Performance Metrics

The following eight performance metrics are considered, where $K$ is the number of images in a dataset, $n_k$ is the number of digital curves in the $k$th image, and $M$ is the number of line segments fitted on a curve.

(1) Maximum precision measure per line segment (MPLS): the maximum of the precision measures given by (5) over all the line segments fitted for the digital curves in an image is computed. This quantity is then averaged over all the images in the dataset, as given in (10).

(2) Maximum reliability measure per line segment (MRLS): the maximum of the reliability measures given by (6) over all the line segments fitted for the digital curves in an image is computed. This quantity is then averaged over all the images in the dataset, as given in (11).

(3) Maximum precision measure per digital curve (MPDC): the maximum of the net precision measures given by (8) over all the digital curves in an image is computed. This quantity is then averaged over all the images in the dataset, as given in (12).

(4) Maximum reliability measure per digital curve (MRDC): the maximum of the net reliability measures given by (9) over all the digital curves in an image is computed. This quantity is then averaged over all the images in the dataset, as given in (13).

(5) Average dimensionality reduction (ADR): the average of the dimensionality reduction measures $\mathrm{DR} = M/N$ of all the digital curves in an image is computed, where $N$ is the number of pixels in a digital curve. This quantity is then averaged over all the images in the dataset, as given in (14). DR is the reciprocal of the commonly known compression ratio.

(6) Maximum deviation ($d_{\max}$): for a digital curve segment represented by a single line connecting two adjacent dominant points, the maximum deviation (distance) of the pixels on the digital curve segment from the line segment is found [18, 19, 44]. For a complete digital curve, the maximum of the maximum deviations over all the curve segments is found. This quantity is averaged over all the curves in an image and then over all the images in the dataset, as given in (15), where $\left\| \cdot \right\|_{\infty}$ denotes the infinity norm (i.e., the maximum norm).

(7) Integral square error (ISE): for a digital curve segment represented by a single line connecting two adjacent dominant points, the sum of squares of the deviations (distances) of the pixels on the digital curve segment from the line segment is found [18, 19, 44]. For a complete digital curve, the sums of squares of the deviations of all the curve segments are added. This quantity is averaged over all the curves in an image and then over all the images in the dataset, as given in (16).

(8) Figure of merit (FOM): for a digital curve, the figure of merit is computed as the reciprocal of the product of the integral square error (ISE) and the dimensionality reduction ratio (DR) [18, 19, 44], that is, $\mathrm{FOM} = 1/(\mathrm{ISE} \times \mathrm{DR})$. This quantity is averaged over all the curves in an image and then over all the images in the dataset, as given in (17).

For all the previous parameters except FOM, it is desirable that their values be as close to zero as possible. In practice, they are all positive and may not be bounded above. On the other hand, a high value of FOM is desirable, though it is notable that FOM is biased towards very close fitting [18, 19].
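The deviation-based metrics (DR, maximum deviation, ISE, FOM) for a single digital curve can be computed as in this sketch; `dominant_idx` (the indices of the detected dominant points along the curve) and the function names are our illustrative assumptions:

```python
import math

def seg_deviation(p, a, b):
    """Distance of pixel p from the line through the dominant points a and b."""
    (x, y), (x1, y1), (x2, y2) = p, a, b
    den = math.hypot(y1 - y2, x1 - x2)
    num = abs(x * (y1 - y2) - y * (x1 - x2) + (x1 * y2 - x2 * y1))
    return num / den if den else 0.0

def curve_metrics(curve, dominant_idx):
    """Maximum deviation, ISE, DR, and FOM for one digital curve approximated by
    the polygon through the dominant points (cf. eqs. (14)-(17))."""
    dmax, ise = 0.0, 0.0
    for s, e in zip(dominant_idx, dominant_idx[1:]):
        a, b = curve[s], curve[e]
        for p in curve[s : e + 1]:
            d = seg_deviation(p, a, b)
            dmax = max(dmax, d)       # max deviation over all curve segments
            ise += d * d              # integral square error, summed segment-wise
    dr = len(dominant_idx) / len(curve)         # reciprocal of compression ratio
    fom = 1.0 / (ise * dr) if ise > 0 else math.inf  # FOM = 1 / (ISE * DR)
    return dmax, ise, dr, fom
```

Averaging these per-curve values over the curves of an image and then over the images of a dataset yields the dataset-level figures reported in the tables.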

##### 5.3. Experiment 1

In this experiment, we consider the 17 examples used in [20]. Each example is a snippet of dimensions 20 × 20 pixels containing only one digital curve, which may be similar to a digital line or may be a more complicated curve. The actual lines fitted by the various algorithms are shown in Figure 1, and the performance parameters are listed in Table 1.

It can be seen in the row PRO0.2 of Figure 1 that PRO0.2 tends to follow the digital curves very closely. As a consequence, it is very sensitive to the effect of digitization and generates numerous small line segments to represent the curve, as is strongly evident in columns (e)–(h) of Figure 1. Though definitely very reliable and precise, as evident from the MPLS, MRLS, MPDC, and MRDC values in Table 1, PRO0.2 fits the curves so closely (note the low values of maximum deviation and ISE in Table 1) that it performs poorly in dimensionality reduction (see the ADR values in Table 1). The FOM goes to infinity due to the exact fitting for snippets (a)–(d); hence, the value of FOM for this experiment has been excluded for all the algorithms. Next, we see that PRO0.6 tends to follow the curvature of the digital curve better than PRO0.2. We highlight the results in column (m) of Figure 1: while PRO0.2 generated many line segments for the right side of the curve, PRO0.6 is more selective and fits the line segments focusing on the locations of changes in curvature, rather than following every small-scale feature of the curve. This is also significantly evident in the results in columns (i), (j), and (q) of Figure 1. PRO1.0, instead of focusing on the small features in the digital curve, tends to follow the general characteristics of the digital curve on a relatively larger scale; see columns (m) and (n) of Figure 1. As a consequence of this characteristic, PRO1.0 has significantly better dimensionality reduction than the other PRO variants (see ADR in Table 1).

The PRO algorithms demonstrate at least two well-defined patterns in terms of performance and the nature of fit. The first pattern, corresponding to tolerance values close to zero, provides very close fits that are highly reliable and precise but have poor dimensionality reduction. The second pattern, corresponding to tolerance values close to 1, performs between the RDP1 and RDP2 techniques and tends to smooth out small variations while retaining large-scale curvature changes.

Next, the performance of the RDP algorithms is considered. The RDP1 algorithm gives a performance comparable to that of PRO0.6, both qualitatively (note especially columns (i), (m), and (o) of Figure 1) and quantitatively (see Table 1). RDP2 and RDP3 perform worse than RDP1 for all the parameters except ADR.

Now, the methods of Masood [14] and Carmona-Poyato et al. [16] are considered. Although [16] reported that Carmona may sacrifice the maximum deviation to some extent in order to maintain the general characteristics of the curve, the results in Figure 1 contradict this. The results of Masood, however, are in general consistent with the trends observed in [14]. The anomaly in Carmona's results is explained as follows. Carmona's method attempts to fit the line segments using a relative measure such that the allowable maximum deviation increases with the length of the digital curve. This results in overfitting for small digital curves and underfitting for large digital curves. These trends are more clearly visible in Sections 5.4 and 5.5.

##### 5.4. Experiment 2

In this experiment, we consider the digital curves used in [13, 14, 16, 17, 41–43], namely, chromosome, leaf, semicircle, infinity, dog, maple leaf, and Africa (for dog, maple leaf, and Africa, the images from [14] were scanned at a resolution of 300 dpi, blurred using Adobe Photoshop with a blur radius of 2 pixels, and then thresholded to obtain the binary images). The algorithms were executed on a notebook with an Intel Core i7 CPU (M620@2.67 GHz), 4 GB RAM, and 64-bit Windows 7, using Matlab 2010. The results obtained are plotted in Figure 2. The qualitative results show that PRO0.6, PRO1.0, and RDP1 are effective in representing all the digital curves. On the other hand, PRO0.2 is very ineffective in representing the digital curves; at best, it is close to the break points discussed in [14].

On the other hand, RDP2 and RDP3 are ineffective due to underfitting, especially for small curves like chromosome and infinity. The results of Masood [14] show that it is effective in representing the curvature well in most cases. However, in some cases, especially for larger curves, Masood tends to overfit the curves (as notable from the results for dog, maple leaf, and Africa in Figure 2 and Table 2), resulting in very small values of maximum deviation and ISE and a high value of ADR. The method of Carmona-Poyato et al. [16] represents the large digital curves well, as noted from the low value of ADR and reasonable values of MPDC and MRDC, but it has a tendency to underfit (crude fitting) smaller curves like chromosome and infinity. The numerical results for chromosome, leaf, and semicircle for the methods of Marji and Siy [41], Teh and Chin [9], Ansari and Huang [10], B. K. Ray and K. S. Ray [12], B. K. Ray and K. S. Ray [42], Arcelli and Ramella [43], Sarkar [13], and Cronin [11] have been calculated from the dominant points reported in their respective publications.

The quantitative comparison of all the algorithms considered in this paper is listed in Table 2 and gives more interesting insights. The values of MPLS, MRLS, and maximum deviation are exactly the same for all the examples for PRO0.2. Interestingly, for these examples, PRO0.2 fits line segments with zero maximum deviation, except for the case shown in Figure 3. This is because, for these examples (Figure 2), for any other pixel sequence over which PRO0.2 tries to fit a single line segment, either the precision measure or the reliability measure exceeds the threshold value.

Another interesting observation is that PRO0.2 always gives the best (highest) FOM. This is because the figure of merit is inversely proportional to the ISE, and for PRO0.2 the ISE is very small. Indeed, FOM is not the best metric for quantifying the quality of a line fit, as concluded in [18, 19]. It is interesting to observe that the values of MPLS, MRLS, MPDC, and MRDC are directly associated with the quality of fitting in Figure 2. We note that, among all the algorithms considered, Masood [14] (being highly iterative by design) takes the maximum time, and the time taken by Masood increases exponentially with the length of the digital curves. All the remaining algorithms have similar computation times.

Methods having a similar quality of fit have similar values of these parameters, though their values of ISE and FOM may vary greatly; for example, for the maple leaf, compare the values of PRO1.0 and RDP2, or those of Carmona-Poyato et al. [16] and RDP3. This indicates that the proposed metrics MPLS, MRLS, MPDC, and MRDC are good at representing the quality of fit across the various methods.

##### 5.5. Experiment 3

In this experiment, various datasets used in diverse image processing applications, like segmentation, object detection, object recognition, object categorization, shape- and contour-based analysis, virtual reality, and gaming, are considered. The datasets are: the Afreight dataset [24] (920 images); the Google dataset (5000 images; this dataset was formed by selecting images of various sizes from random Google searches, such that the minimum dimension (length or breadth) of the images ranges from 25 to 2500 pixels, with 50 images selected for each size value; thus, the smallest image is 25 pixels in length or breadth or both, while the largest is 2500 pixels); the Berkeley dataset [27] (300 images); the Cars_INRIA dataset [28] (150 images); the Caltech 101 dataset [25] (9149 images); the Caltech 256 dataset [26] (30608 images); the PASCAL 2007 dataset [29] (9963 images); the PASCAL 2008 dataset [30] (10057 images); the PASCAL 2009 dataset [31] (14743 images); and the PASCAL 2010 dataset [32] (21738 images). The binary edge map of each image, obtained using the Canny edge operator, is passed to the various algorithms. For all these datasets, the performance parameters are listed in Table 3.

RDP3, Masood [14], and Carmona-Poyato et al. [16] have high values of MPLS and MRLS, whereas RDP3 performs the worst in terms of MPDC and MRDC. PRO0.2 always has the lowest values of MPLS, MRLS, MPDC, MRDC, maximum deviation, and ISE, and the highest values of ADR and FOM. This indicates that PRO0.2 has a consistent characteristic of very close fitting. RDP3 has very high values of MPLS and MRLS but reasonably low values of MPDC and MRDC; it performs the best in terms of ADR but the poorest of all in terms of FOM (equaled by Carmona-Poyato et al. [16]). Since Masood [14] primarily focuses on the local nature of fit, whereas Carmona concentrates on the global nature of fit, Masood has significantly lower values of maximum deviation, a lower value of ISE, and a higher value of FOM; qualitatively, however, the difference may not be significant. The performance of Masood and Carmona is similar in terms of MPLS, MRLS, MPDC, and MRDC, because the precision and reliability metrics are designed to balance the global and local natures of fit. Masood takes a significantly longer time than any other method, and its computation time increases with the size of the image (curves), as seen from the Google dataset results in Table 3(i). Finally, the quantitative data for PRO1.0 indicate that it avoids both extremes for all the performance parameters and provides a good balance among the conflicting requirements of dominant point detection methods.

#### 6. Conclusion

The paper has demonstrated that the PRO method, which uses precision and reliability measures in its optimization goal, can be easily tailored by changing the tolerance value. The fit is very close for small tolerance values, and the closeness of the fit decreases as the tolerance approaches 1. It is shown through extensive comparison that PRO0.2 chooses the dominant points such that every change in the curvature is retained, though the number of points needed is large. On the other hand, PRO1.0 avoids the problems of underfitting as well as overfitting and provides a good balance for all the performance parameters that indicate the local nature of fit, the global nature of fit, dimensionality reduction, and the computation time.

As a final note, the results of dominant point detection methods for 10 practical datasets used in high-end computer vision applications are shown. It is demonstrated that PRO1.0 provides good results over all the datasets and across images of various types and sizes. Such a study is important for the research community that uses dominant point detection as one of the fundamental preprocessing steps in high-end applications [45, 46]. The proposed precision and reliability measures are effective in representing the quality of fit across various methods.

#### References

- R. Yang and Z. Zhang, “Eye gaze correction with stereovision for video-teleconferencing,” *IEEE Transactions on Pattern Analysis and Machine Intelligence*, vol. 26, no. 7, pp. 956–960, 2004.
- A. Kolesnikov and P. Fränti, “Data reduction of large vector graphics,” *Pattern Recognition*, vol. 38, no. 3, pp. 381–394, 2005.
- D. Brunner and P. Soille, “Iterative area filtering of multichannel images,” *Image and Vision Computing*, vol. 25, no. 8, pp. 1352–1364, 2007.
- S. Ozen, A. Bouganis, and M. Shanahan, “A fast evaluation criterion for the recognition of occluded shapes,” *Robotics and Autonomous Systems*, vol. 55, no. 9, pp. 741–749, 2007.
- A. Orzan, A. Bousseau, H. Winnemöller, P. Barla, J. Thollot, and D. Salesin, “Diffusion curves: a vector representation for smooth-shaded images,” *ACM Transactions on Graphics*, vol. 27, no. 3, article 92, 2008.
- J. L. G. Balboa and F. J. A. López, “Sinuosity pattern recognition of road features for segmentation purposes in cartographic generalization,” *Pattern Recognition*, vol. 42, no. 9, pp. 2150–2159, 2009.
- G. Erus and N. Loménie, “How to involve structural modeling for cartographic object recognition tasks in high-resolution satellite images?” *Pattern Recognition Letters*, vol. 31, no. 10, pp. 1109–1119, 2010.
- A. Faure, L. Buzer, and F. Feschet, “Tangential cover for thick digital curves,” *Pattern Recognition*, vol. 42, no. 10, pp. 2279–2287, 2009.
- C.-H. Teh and R. T. Chin, “On the detection of dominant points on digital curves,” *IEEE Transactions on Pattern Analysis and Machine Intelligence*, vol. 11, no. 8, pp. 859–872, 1989.
- N. Ansari and K. W. Huang, “Non-parametric dominant point detection,” *Pattern Recognition*, vol. 24, no. 9, pp. 849–862, 1991.
- T. M. Cronin, “A boundary concavity code to support dominant point detection,” *Pattern Recognition Letters*, vol. 20, no. 6, pp. 617–634, 1999.
- B. K. Ray and K. S. Ray, “Detection of significant points and polygonal approximation of digitized curves,” *Pattern Recognition Letters*, vol. 13, no. 6, pp. 443–452, 1992.
- D. Sarkar, “A simple algorithm for detection of significant vertices for polygonal approximation of chain-coded curves,” *Pattern Recognition Letters*, vol. 14, no. 12, pp. 959–964, 1993.
- A. Masood, “Dominant point detection by reverse polygonization of digital curves,” *Image and Vision Computing*, vol. 26, no. 5, pp. 702–715, 2008.
- D. K. Prasad, C. Quek, and M. K. H. Leung, “A non-heuristic dominant point detection based on suppression of break points,” in *Image Analysis and Recognition*, A. Campilho and M. Kamel, Eds., vol. 7324, pp. 269–276, Springer, Berlin, Germany, 2012.
- A. Carmona-Poyato, F. J. Madrid-Cuevas, R. Medina-Carnicer, and R. Muñoz-Salinas, “Polygonal approximation of digital planar curves through break point suppression,” *Pattern Recognition*, vol. 43, no. 1, pp. 14–25, 2010.
- T. P. Nguyen and I. Debled-Rennesson, “A discrete geometry approach for dominant point detection,”
*Pattern Recognition*, vol. 44, no. 1, pp. 32–44, 2011. View at Publisher · View at Google Scholar · View at Scopus - P. L. Rosin, “Techniques for assessing polygonal approximations of curves,”
*IEEE Transactions on Pattern Analysis and Machine Intelligence*, vol. 19, no. 6, pp. 659–666, 1997. View at Publisher · View at Google Scholar · View at Scopus - A. Carmona-Poyato, R. Medina-Carnicer, F. J. Madrid-Cuevas, R. Muoz-Salinas, and N. L. Fernndez-Garca, “A new measurement for assessing polygonal approximation of curves,”
*Pattern Recognition*, vol. 44, no. 1, pp. 45–54, 2011. View at Publisher · View at Google Scholar · View at Scopus - D. K. Prasad and M. K. H. Leung, “Reliability/precision uncertainity in shape fitting problems,” in
*Proceedings of the 17th IEEE International Conference on Image Processing (ICIP '10)*, pp. 4277–4280, Hong Kong, China, September 2010. View at Publisher · View at Google Scholar · View at Scopus - D. K. Prasad and M. K. H. Leung, “Polygonal representation of digital curves,” in
*Digital Image Processing*, S. G. Stanciu, Ed., pp. 71–90, InTech, Rijeka, Croatia, 2012. View at Google Scholar - D. K. Prasad,
*Geometric primitive feature extraction-concepts, algorithms, and applications [Ph.D. thesis]*, School of Computer Engineering, Nanyang Technological University, Singapore, 2012. - O. Strauss, “Reducing the precision/uncertainty duality in the Hough transform,” in
*Proceedings of the IEEE International Conference on Image Processing (ICIP '96)*, pp. 967–970, Lausanne, Switzerland, September 1996. View at Publisher · View at Google Scholar · View at Scopus - G. McCarter and A. Storkey,
*Air Freight Image Sequences*, 2003. - L. Fei-Fei, R. Fergus, and P. Perona, “Learning generative visual models from few training examples: an incremental Bayesian approach tested on 101 object categories,”
*Computer Vision and Image Understanding*, vol. 106, no. 1, pp. 59–70, 2007. View at Publisher · View at Google Scholar · View at Scopus - California Institute of Technology, http://authors.library.caltech.edu/7694.
- D. Martin, C. Fowlkes, D. Tal, and J. Malik, “A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics,” in
*Proceedings of the 8th International Conference on Computer Vision*, pp. 416–423, July 2001. View at Scopus - P. Carbonetto, G. Dorkó, C. Schmid, H. Kück, and N. de Freitas, “Learning to recognize objects with little supervision,”
*International Journal of Computer Vision*, vol. 77, no. 1–3, pp. 219–237, 2008. View at Publisher · View at Google Scholar · View at Scopus - M. Everingham, L. V. Gool, C. K. I. Williams, J. Winn, and A. Zisserman,
*The PASCAL Visual Object Classes Challenge 2007 (VOC2007)*, 2007. - M. Everingham, L. V. Gool, C. K. I. Williams, J. Winn, and A. Zisserman,
*The PASCAL Visual Object Classes Challenge 2008 (VOC2008)*, 2008. - M. Everingham, L. V. Gool, C. K. I. Williams, J. Winn, and A. Zisserman,
*The PASCAL Visual Object Classes Challenge 2009 (VOC2009)*, 2009. - M. Everingham, L. V. Gool, C. K. I. Williams, J. Winn, and A. Zisserman,
*The PASCAL Visual Object Classes Challenge 2010 (VOC2010)*, 2010. - D. K. Prasad , “Fabrication imperfection analysis and statistics generation using precision and reliability optimization method,”
*Optics Express*, vol. 21, pp. 17602–17614, 2013. View at Google Scholar - D. K. Prasad and M. S. Brown, “Online tracking of deformable objects under occlusion using dominant points,”
*Journal of the Optical Society of America*, vol. 30, pp. 1484–1491, 2013. View at Google Scholar - D. K. Prasad, M. K. H. Leung, C. Quek, and S.-Y. Cho, “A novel framework for making dominant point detection methods non-parametric,”
*Image and Vision Computing*, vol. 30, pp. 843–859, 2012. View at Google Scholar - D. K. Prasad, “Assessing error bound for dominant point detection,”
*International Journal of Image Processing*, vol. 6, pp. 326–333, 2012. View at Google Scholar - D. K. Prasad, C. Quek, M. K. H. Leung, and S.-Y. Cho, “A parameter independent line fitting method,” in
*Proceedings of the Asian Conference on Pattern Recognition (ACPR '11)*, pp. 441–445, 2011. - D. G. Lowe, “Three-dimensional object recognition from single two-dimensional images,”
*Artificial Intelligence*, vol. 31, no. 3, pp. 355–395, 1987. View at Google Scholar · View at Scopus - D. H. Douglas and T. K. Peucker, “Algorithms for the reduction of the number of points required to represent a digitized line or its caricature,”
*Cartographica*, vol. 10, pp. 112–122, 1973. View at Google Scholar - U. Ramer, “An iterative procedure for the polygonal approximation of plane curves,”
*Computer Graphics and Image Processing*, vol. 1, no. 3, pp. 244–256, 1972. View at Google Scholar · View at Scopus - M. Marji and P. Siy, “Polygonal representation of digital planar curves through dominant point detection—a nonparametric algorithm,”
*Pattern Recognition*, vol. 37, no. 11, pp. 2113–2130, 2004. View at Publisher · View at Google Scholar · View at Scopus - B. K. Ray and K. S. Ray, “An algorithm for detection of dominant points and polygonal approximation of digitized curves,”
*Pattern Recognition Letters*, vol. 13, no. 12, pp. 849–856, 1992. View at Google Scholar · View at Scopus - C. Arcelli and G. Ramella, “Finding contour-based abstractions of planar patterns,”
*Pattern Recognition*, vol. 26, no. 10, pp. 1563–1577, 1993. View at Publisher · View at Google Scholar · View at Scopus - P. L. Rosin, “Assessing the behaviour of polygonal approximation algorithms,”
*Pattern Recognition*, vol. 36, no. 2, pp. 505–518, 2003. View at Publisher · View at Google Scholar · View at Scopus - D. K. Prasad and M. K. H. Leung, “A hybrid approach for ellipse detection in real images,” in
*2nd International Conference on Digital Image Processing*, vol. 7546 of*Proceedings of SPIE*, p. 75460I, Singapore, February 2010. View at Publisher · View at Google Scholar · View at Scopus - D. K. Prasad, “Adaptive traffic signal control system with cloud computing based online learning,” in
*Proceedings of the 8th International Conference on Information, Communications and Signal Processing (ICICS '11)*, pp. 1–5, Singapore, December 2011. View at Publisher · View at Google Scholar · View at Scopus