Abstract

This paper presents an algorithm for the automatic detection of circular shapes in complicated and noisy images without relying on the conventional Hough transform principles. The proposed algorithm is based on a newly developed evolutionary algorithm called the Adaptive Population with Reduced Evaluations (APRE). APRE reduces the number of function evaluations through two mechanisms: (1) dynamically adapting the size of the population and (2) incorporating a fitness calculation strategy that decides whether newly generated individuals should be evaluated with the true objective function or merely estimated. As a result, the approach substantially reduces the number of function evaluations while preserving the good search capabilities of an evolutionary approach. Experimental results over several synthetic and natural images, with a varying range of complexity, validate the efficiency of the proposed technique with regard to accuracy, speed, and robustness.

1. Introduction

The problem of detecting circular features holds paramount importance in several engineering applications such as automatic inspection of manufactured products and components, aided vectorization of drawings, and target detection [1, 2]. Circle detection in digital images has commonly been solved through the Circular Hough Transform (CHT) [3]. Unfortunately, this approach requires a large storage space, which increases the computational complexity and yields a low processing speed. In order to overcome this problem, several approaches that modify the original CHT have been proposed. One well-known example is the Randomized Hough Transform (RHT) [4].

As an alternative to Hough transform-based techniques, the problem of shape recognition has also been handled through evolutionary algorithms (EAs). In general, they have been shown to deliver better results than HT-based methods in terms of accuracy and robustness [5]. Evolutionary methods approach the detection task as an optimization problem whose solution involves the computationally expensive evaluation of objective functions. This fact strongly restricts their use in several image processing applications; despite this, EA methods have produced several robust circle detectors based on different evolutionary algorithms such as Genetic Algorithms (GA) [5], Harmony Search (HSA) [6], Electromagnetism-Like Optimization (EMO) [7], Differential Evolution (DE) [8], and Bacterial Foraging Optimization (BFOA) [9].

However, one particular difficulty in applying any EA to real-world problems, such as image processing, is its demand for a large number of fitness evaluations before reaching a satisfactory result. Fitness evaluations are not always straightforward, since either an explicit fitness function does not exist or its evaluation is computationally expensive.

The problem of excessively long fitness function calculations has already been faced in the field of evolutionary algorithms and is better known as evolution control or fitness estimation [10]. Under such an approach, the costly objective function evaluation is replaced, for some individuals, by an approximate model of the fitness landscape. The individuals to be evaluated and those to be estimated are determined following criteria that depend on the specific properties of the approximate model [11]. The models involved in the estimation can be built dynamically during the actual EA execution, since EAs repeatedly sample the search space at different points [12]. Several alternative models have been used in combination with popular EAs; examples include polynomial functions [13], kriging schemes [14], multilayer perceptrons [15], and radial-basis-function networks [16]. In practice, the construction of models that can globally deal with high dimensionality, ill-distributed samples, and a limited number of training points is very difficult. Experimental studies [17] have demonstrated that if an alternative model is used for fitness evaluation, the evolutionary algorithm is very likely to converge to a false optimum, that is, an optimum of the alternative model which does not coincide with the optimum of the original fitness function. Under such conditions, the use of alternative fitness models degrades the search effectiveness of the original EAs, frequently producing inaccurate solutions [18].

In an EA, the population size has a direct influence on the solution quality and on the computational cost [19]. Traditionally, the population size is set in advance to a prespecified value and remains fixed throughout the entire execution of the algorithm. If the population size is too small, the EA may converge too quickly, severely affecting the solution quality [20]. On the other hand, if it is too large, the EA may present a prohibitive computational cost [19]. Therefore, an appropriate population size maintains a trade-off between the computational cost and the effectiveness of the algorithm. In order to solve this problem, several approaches for dynamically adapting the population size have been proposed. These methods are grouped into three categories [21]: (i) methods that increment or decrement the number of individuals according to a fixed function; (ii) methods in which the number of individuals is modified according to the behavior of the average fitness value; and (iii) algorithms based on the population diversity.

Using either a fitness estimation strategy or an adaptive population size alone is necessary but not sufficient to tackle the problem of reducing the number of function evaluations. Employing a fitness estimation strategy during the evolution process without adapting the population size to improve the population diversity leaves the algorithm defenseless against convergence to a false optimum and may result in poor exploratory characteristics [18]. On the other hand, adapting the population size while omitting the fitness estimation strategy leads to an increase in the computational cost [20]. Therefore, it seems reasonable to incorporate both approaches into a single algorithm.

Since most EAs have been designed to completely evaluate all involved individuals, techniques for reducing the number of evaluations are usually incorporated into the original EAs in order to estimate fitness values or to reduce the number of individuals being evaluated [22]. However, the use of alternative fitness models degrades the search effectiveness of the original EAs, frequently producing inaccurate solutions [23].

This paper presents an algorithm for the automatic detection of circular shapes in complicated and noisy images without considering the conventional Hough transform principles. The proposed algorithm is based on a newly developed evolutionary algorithm called the Adaptive Population with Reduced Evaluations (APRE). The proposed algorithm reduces the number of function evaluations through the use of two mechanisms: (1) dynamically adapting the size of the population and (2) incorporating a fitness calculation strategy which decides when it is feasible to actually calculate the fitness of newly generated individuals and when it is sufficient to estimate it.

The APRE method begins with an initial population, which is treated as a memory during the evolution process. To each memory element, a normalized fitness value, called the quality factor, is assigned to indicate the solution capacity provided by the element. Only a variable subset of memory elements is considered to be evolved. Like all EA-based methods, the proposed algorithm generates new individuals considering two operators: exploration and exploitation. Both operations improve the quality of the solutions by (1) searching through the unexplored solution space to identify promising areas that contain better solutions than those found so far and (2) successively refining the best solutions found. Once the new individuals are generated, the memory is updated accordingly. At this stage, the new individuals compete against the memory elements to build the final memory configuration. In order to save computational time, the approach incorporates a fitness estimation strategy that decides which individuals are to be estimated and which are actually evaluated. The proposed strategy estimates the fitness value of new individuals using memory elements located in neighboring positions, which have been visited during the evolution process. Under the strategy, new individuals located near a memory element with a high quality factor have a high probability of being evaluated with the true objective function. Similarly, new individuals lying in regions of the search space with no previous evaluations are also evaluated. The remaining individuals are only estimated, by assigning them the same fitness value as the nearest memory element. Such a fitness estimation method contributes to saving computational time, since the fitness values of only very few individuals are actually evaluated whereas the rest are merely estimated.

Unlike other approaches that use an already existing EA as a framework, the APRE method has been designed from scratch to substantially reduce the computational cost while preserving good search effectiveness.

In order to detect circular shapes, the detector encodes three edge pixels as a candidate circle over the edge image. An objective function evaluates whether such candidate circles are actually present in the edge image. Guided by the values of this objective function, the set of encoded candidate circles is evolved using the operators defined by APRE so that they fit the actual circles on the edge map of the image. Comparisons to several state-of-the-art evolutionary methods and to the Randomized Hough Transform (RHT) approach on multiple images demonstrate the better performance of the proposed method in terms of accuracy, speed, and robustness.

The paper is organized as follows. In Section 2, the APRE algorithm and its characteristics are described. Section 3 formulates the implementation of the circle detector. Section 4 shows the experimental results of applying our method to the recognition of circles under different image conditions. Finally, Section 5 discusses several conclusions.

2. The Adaptive Population with Reduced Evaluations (APRE) Algorithm

In the proposed algorithm, a population of candidate solutions to an optimization problem is evolved toward better solutions. The algorithm begins with an initial population which is used as a memory during the evolution process. To each memory element, a normalized fitness value, called the quality factor, is assigned to indicate the solution capacity provided by the element.

As a search strategy, the proposed algorithm implements two operations, “exploration” and “exploitation,” both of which are necessary in all EAs [24]. Exploration is the operation of visiting entirely new points of a search space, whilst exploitation is the process of refining those points within the neighborhood of previously visited locations in order to improve their solution quality. Pure exploration degrades the precision of the evolutionary process but increases its capacity to find new potential solutions [25]. On the other hand, pure exploitation allows refining existent solutions but adversely drives the process toward local optima [26]. Therefore, the ability of an EA to find a global optimal solution depends on its capacity to find a good trade-off between the exploitation of the elements found so far and the exploration of the search space.

The APRE algorithm is an iterative process in which several actions are executed. First, the number of memory elements to be evolved is computed; this number is automatically modified at each iteration. Then, a set of new individuals is generated by the execution of the exploration operation. For each new individual, the fitness value is estimated or evaluated according to a decision taken by a fitness estimation strategy. Afterwards, the memory is updated: the new individuals produced by the exploration operation compete against the memory elements to build the final memory configuration. Finally, a sample of the best elements contained in the final memory configuration undergoes the exploitation operation. Thus, the complete process can be divided into six phases: initialization, selection of the population to be evolved, exploration, fitness estimation, memory updating, and exploitation.

2.1. Initialization

Like other EAs, the APRE algorithm is an iterative method whose first step is to randomly initialize the population which will be used as a memory during the evolution process. The algorithm begins by initializing a memory $\mathbf{M} = \{\mathbf{m}_1, \mathbf{m}_2, \ldots, \mathbf{m}_N\}$ of $N$ elements. Each element $\mathbf{m}_i$ is an $n$-dimensional vector containing the parameter values to be optimized. Such values are randomly and uniformly distributed between the prespecified lower initial parameter bound $b_j^{low}$ and the upper initial parameter bound $b_j^{high}$, as described by the following expression:

$$m_{j,i} = b_j^{low} + \text{rand}(0,1) \cdot (b_j^{high} - b_j^{low}), \quad j = 1, 2, \ldots, n; \; i = 1, 2, \ldots, N,$$

where $j$ and $i$ are the parameter and element indexes, respectively. Hence, $m_{j,i}$ is the $j$th parameter of the $i$th element.

Each element $\mathbf{m}_i$ has two associated characteristics: a fitness value and a quality factor. The fitness value $J(\mathbf{m}_i)$ assigned to each element can be calculated with the true objective function or only estimated through the proposed fitness estimation strategy. In addition to the fitness value, a normalized fitness value called the quality factor $Q(\mathbf{m}_i)$ is also assigned to $\mathbf{m}_i$; it is computed as follows:

$$Q(\mathbf{m}_i) = \frac{J(\mathbf{m}_i) - J^{worst}}{J^{best} - J^{worst}},$$

where $J(\mathbf{m}_i)$ is the fitness value obtained by evaluation or by estimation of the memory element $\mathbf{m}_i$. The values $J^{best}$ and $J^{worst}$ are defined as follows (considering a maximization problem):

$$J^{best} = \max_{i \in \{1, \ldots, N\}} J(\mathbf{m}_i), \qquad J^{worst} = \min_{i \in \{1, \ldots, N\}} J(\mathbf{m}_i).$$

Since the mechanism by which an EA accumulates information regarding the objective function is the exact evaluation of the quality of each potential solution, all the elements of $\mathbf{M}$ are initially evaluated without considering the fitness estimation strategy proposed in this paper. Full evaluation is only allowed at this initial stage.
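As a reference, the initialization stage and the computation of the quality factors can be sketched in Python (a minimal sketch assuming numpy; the function names and the vectorized layout are ours, not part of the original formulation):

```python
import numpy as np

def init_memory(N, low, high, objective):
    """Randomly initialize the memory M with N elements inside the bounds
    [low, high] and evaluate all of them with the true objective function
    (full evaluation is only allowed at this initial stage)."""
    low, high = np.asarray(low, float), np.asarray(high, float)
    M = low + np.random.rand(N, low.size) * (high - low)  # uniform init
    J = np.array([objective(m) for m in M])               # true evaluations
    return M, J

def quality_factors(J):
    """Quality factor Q: fitness normalized to [0, 1] (maximization), so
    the best memory element receives Q = 1 and the worst receives Q = 0."""
    spread = J.max() - J.min()
    return (J - J.min()) / spread if spread > 0 else np.ones_like(J)
```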

2.2. Selecting the Population to Be Evolved

At each iteration $k$, it must be decided which and how many elements from $\mathbf{M}$ will be selected to build the population $\mathbf{P}_k$ to be evolved. The selected elements will undergo the exploration and exploitation operators in order to generate a set of new individuals. Therefore, two things need to be defined: the number of elements to be selected and the selection strategy.

2.2.1. The Number of Elements to Be Selected

One of the mechanisms used by the APRE algorithm for reducing the number of function evaluations is to dynamically modify the size of the population to be evolved. The idea is to operate with the minimal number of individuals that guarantees the correct efficiency of the algorithm. Hence, the method varies the population size adaptively during the execution of each iteration. At the beginning of the process, a predetermined number of elements is considered to build the first population; afterwards, this number is incremented or decremented depending on the algorithm’s performance. The adaptation mechanism is based on the lifetime of the individuals and on their solution quality.

In order to compute the lifetime of each individual, a counter $c_i$ is assigned to each element of $\mathbf{M}$. When the initial population is created, all counters are set to zero. Since the memory is updated at each generation, some elements survive and others are substituted by new individuals. Therefore, the counter of each surviving element is incremented by one, whereas the counter of each newly added element is set to zero.

Another important requirement for calculating the number of elements to be evolved is the solution quality provided by each individual. The idea is to identify two classes of elements: those that provide good solutions and those that can be considered bad solutions. In order to classify each element, the average fitness value produced by all the elements of $\mathbf{M}$ is calculated as

$$\bar{J} = \frac{1}{N} \sum_{i=1}^{N} J(\mathbf{m}_i),$$

where $J(\mathbf{m}_i)$ represents the fitness value corresponding to $\mathbf{m}_i$. These values are obtained either from the true objective function or from the fitness estimation strategy. Considering the average fitness value, two groups are built: the set $\mathbf{G}$, constituted by the elements of $\mathbf{M}$ whose fitness values are greater than $\bar{J}$, and the set $\mathbf{B}$, which groups the elements of $\mathbf{M}$ whose fitness values are equal to or lower than $\bar{J}$.

Therefore, the number of individuals $\Delta N_k$ by which the current population is incremented or decremented at each generation is calculated from four quantities: $n_G$ and $n_B$, the numbers of elements of $\mathbf{G}$ and $\mathbf{B}$, respectively, and $s_G$ and $s_B$, the sums of the counters corresponding to the elements of $\mathbf{G}$ and $\mathbf{B}$, respectively. The result is mapped to an integer by the floor function, which maps a real number to the previous integer. The factor $\beta$ is a term used for fine tuning: a small value of $\beta$ implies a better performance of the algorithm at the price of an increment in the computational cost, whereas a big value of $\beta$ involves a low computational cost at the price of a decrement in the performance of the algorithm. Therefore, the value of $\beta$ must reflect a compromise between performance and computational cost; in our experiments, such a compromise has been found with a fixed value of $\beta$.

Therefore, the number $N_k$ of elements that define the population to be evolved is computed according to the following model:

$$N_k = N_{k-1} + \Delta N_k.$$

Since the value of $\Delta N_k$ can be positive or negative, the size of the population may be higher or lower than $N_{k-1}$. The computational procedure that implements this method is presented in Algorithm 1, in the form of pseudocode.

(1) Input: current population $\mathbf{M}$, counters $c_1, \ldots, c_N$,
  the past number of individuals $N_{k-1}$ and the constant factor $\beta$.
(2) $\bar{J}$ ← ComputeAverageFitness($\mathbf{M}$)
(3) $\mathbf{G}$ ← FindIndividualsOverJA($\mathbf{M}$, $\bar{J}$)
(4) $\mathbf{B}$ ← FindIndividualsUnderJA($\mathbf{M}$, $\bar{J}$)
(5) $s_G$ ← FindCountersOfG($\mathbf{G}$) (where $s_G$ is the sum of the counters of $\mathbf{G}$)
(6) $s_B$ ← FindCountersOfB($\mathbf{B}$) (where $s_B$ is the sum of the counters of $\mathbf{B}$)
(7) $\Delta N_k$ ← ComputeIncrement($n_G$, $n_B$, $s_G$, $s_B$, $\beta$)
(8) $N_k$ ← $N_{k-1} + \Delta N_k$
(9) Output: the number $N_k$
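The adaptation step can be sketched in Python as follows. The partition into $\mathbf{G}$ and $\mathbf{B}$ and the counter sums follow the text directly; the exact combination that produces the increment is not reproduced above, so the formula inside this sketch is only an illustrative assumption:

```python
import numpy as np

def adapt_population_size(J, counters, N_prev, beta):
    """Sketch of Algorithm 1: split the memory into good (G) and bad (B)
    groups around the average fitness and derive the population increment
    from the group sizes (n_G, n_B) and lifetimes (s_G, s_B)."""
    G = J > J.mean()                       # good elements
    B = ~G                                 # bad elements
    n_G, n_B = int(G.sum()), int(B.sum())
    s_G, s_B = int(counters[G].sum()), int(counters[B].sum())
    # Illustrative increment (assumption, not the paper's exact model):
    # grow while the bad group is short-lived, shrink once good elements
    # dominate and age.
    delta = int(np.floor((s_B / max(n_B, 1) - s_G / max(n_G, 1)) / beta))
    return max(4, N_prev + delta)          # keep a workable minimum size
```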

2.2.2. Selection Strategy for Building

Once the number of individuals $N_k$ has been defined, the next step is to select the elements from $\mathbf{M}$ that build $\mathbf{P}_k$. A new population $\mathbf{MO}$, which contains the same elements as $\mathbf{M}$ but sorted according to their fitness values, is generated. Thus, $\mathbf{MO}$ presents in its first positions the elements whose fitness values are better than those located in the last positions. Then, $\mathbf{MO}$ is divided into two parts: $\mathbf{X}$ and $\mathbf{Y}$. The section $\mathbf{X}$ corresponds to the first elements of $\mathbf{MO}$, whereas the remaining elements constitute the part $\mathbf{Y}$. Figure 1 shows this process.

In order to promote diversity in the selection strategy, 80% of the individuals of $\mathbf{P}_k$ are taken from the first $S$ elements of $\mathbf{X}$, where $S = \lfloor 0.8 \cdot N_k \rfloor$, as shown in Figure 2. The remaining 20% of the individuals, a set of $Se$ elements where $Se = N_k - S$, are randomly selected from section $\mathbf{Y}$, considering that all elements of $\mathbf{Y}$ have the same possibility of being selected. Figure 2 shows a description of the selection strategy. The computational procedure that implements this method is presented in Algorithm 2, in the form of pseudocode.

(1) Input: current population $\mathbf{M}$ and the number of individuals $N_k$.
(2) $\mathbf{MO}$ ← SortElementsFitness($\mathbf{M}$)
(3) [$\mathbf{X}$, $\mathbf{Y}$] ← DivideMO($\mathbf{MO}$)
(4) $S$ ← floor($0.8 \cdot N_k$)
(5) $Se$ ← $N_k - S$
(6) $\mathbf{P}_a$ ← SelectElementsOfX($S$)
(7) $\mathbf{P}_b$ ← SelectRandomElementsOfY($Se$)
(8) Output: population to be evolved $\mathbf{P}_k = \mathbf{P}_a \cup \mathbf{P}_b$
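The selection strategy can be sketched in Python as follows (the 50/50 split of the sorted memory into the sections $\mathbf{X}$ and $\mathbf{Y}$ is our assumption, since the text only states that $\mathbf{X}$ holds the first, best-ranked elements of $\mathbf{MO}$):

```python
import numpy as np

def select_population(M, J, N_k):
    """Sketch of Algorithm 2: take 80% of the N_k individuals from the
    best-ranked section X and the remaining 20% uniformly at random from
    the worse-ranked section Y, to promote diversity."""
    order = np.argsort(-J)                   # sort memory, best first
    MO, JO = M[order], J[order]
    half = len(MO) // 2                      # assumed split point X | Y
    S = min(int(np.floor(0.8 * N_k)), half)  # elite part, taken from X
    idx_Y = half + np.random.choice(len(MO) - half, size=N_k - S)
    P = np.vstack([MO[:S], MO[idx_Y]])
    J_P = np.concatenate([JO[:S], JO[idx_Y]])
    return P, J_P
```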

2.3. Exploration Operation

The first main operation applied to the population $\mathbf{P}_k$ is the exploration operation. Considering $\mathbf{P}_k$ as the input population, APRE mutates $\mathbf{P}_k$ to produce a temporal population $\mathbf{T}$ of $N_k$ vectors. The exploration operation uses two different mutation models: the mutation employed by the Differential Evolution (DE) algorithm [27] and the trigonometric mutation operator [28].

2.3.1. DE Mutation Operator

In this mutation, three distinct individuals $\mathbf{p}_{r_1}$, $\mathbf{p}_{r_2}$, and $\mathbf{p}_{r_3}$ are randomly selected from the current population $\mathbf{P}_k$. Then, a new value is created considering the following model:

$$t_{j,i} = p_{j,r_1} + F \cdot (p_{j,r_2} - p_{j,r_3}),$$

where $r_1$, $r_2$, and $r_3$ are randomly selected indexes such that $r_1 \neq r_2 \neq r_3 \neq i$, with $i = 1, \ldots, N_k$ (population size) and $j = 1, \ldots, n$ (number of decision variables). Hence, $p_{j,i}$ is the $j$th parameter of the $i$th individual of $\mathbf{P}_k$. The scale factor $F \in (0, 1+)$ is a positive real number that controls the rate at which the population evolves.

2.3.2. Trigonometric Mutation Operator

The trigonometric mutation operation, as introduced in [28], is performed according to the following formulation:

$$\mathbf{t}_i = \frac{\mathbf{p}_{r_1} + \mathbf{p}_{r_2} + \mathbf{p}_{r_3}}{3} + (p_2 - p_1)(\mathbf{p}_{r_1} - \mathbf{p}_{r_2}) + (p_3 - p_2)(\mathbf{p}_{r_2} - \mathbf{p}_{r_3}) + (p_1 - p_3)(\mathbf{p}_{r_3} - \mathbf{p}_{r_1}),$$

with $p_1 = |J(\mathbf{p}_{r_1})| / p'$, $p_2 = |J(\mathbf{p}_{r_2})| / p'$, $p_3 = |J(\mathbf{p}_{r_3})| / p'$, and $p' = |J(\mathbf{p}_{r_1})| + |J(\mathbf{p}_{r_2})| + |J(\mathbf{p}_{r_3})|$, where $\mathbf{p}_{r_1}$, $\mathbf{p}_{r_2}$, and $\mathbf{p}_{r_3}$ are individuals randomly selected from the current population $\mathbf{P}_k$ and $J(\cdot)$ represents the fitness value (calculated or estimated) of its argument. Under this formulation, the individual to be perturbed is the average of the three randomly selected vectors, and the perturbation imposed over such an individual is implemented by the sum of three weighted vector differentials, with $(p_2 - p_1)$, $(p_3 - p_2)$, and $(p_1 - p_3)$ as the weights. Notice that the trigonometric mutation is a greedy operator, since it biases the new individual strongly in the direction where the best of the three individuals lies.

Computational Procedure. Considering $\mathbf{P}_k$ as the input population, all its individuals are sequentially processed in cycles, beginning with the first individual. In the cycle that processes the individual $\mathbf{p}_i$, three distinct individuals $\mathbf{p}_{r_1}$, $\mathbf{p}_{r_2}$, and $\mathbf{p}_{r_3}$ are randomly selected from the current population such that $r_1 \neq r_2 \neq r_3 \neq i$. Then, each dimension of $\mathbf{p}_i$ is processed, beginning with the first parameter until the last dimension $n$ has been reached. At each processing cycle, the parameter $p_{j,i}$, considered as a parent, creates an offspring $u_{j,i}$ in two steps. In the first step, from the selected individuals $\mathbf{p}_{r_1}$, $\mathbf{p}_{r_2}$, and $\mathbf{p}_{r_3}$, a donor value $t_{j,i}$ is created by means of one of the two mutation models. In order to select which mutation model is applied, a uniform random number is generated within the range $[0,1]$; if such a number is less than a threshold MR, the donor value is generated by the DE mutation operator; otherwise, it is produced by the trigonometric mutation operator. In the second step, the final value of the offspring $u_{j,i}$ is determined. Such a decision is stochastic; hence, a second uniform random number is generated within the range $[0,1]$. If this random number is less than a crossover threshold, the offspring takes the donor value ($u_{j,i} = t_{j,i}$); otherwise, it keeps the parent value ($u_{j,i} = p_{j,i}$). The offspring vectors $\mathbf{u}_i$ gathered in this way constitute the temporal population $\mathbf{T}$. The complete computational procedure is presented in Algorithm 3, in the form of pseudocode.

(1) Input: current population $\mathbf{P}_k$
(2) for ($i = 1$; $i \leq N_k$; $i$++) do
(3)   [$\mathbf{p}_{r_1}$, $\mathbf{p}_{r_2}$, $\mathbf{p}_{r_3}$] ← SelectElements($\mathbf{P}_k$)  % considering that $r_1 \neq r_2 \neq r_3 \neq i$
(4)   for ($j = 1$; $j \leq n$; $j$++) do
(5)     if (rand(0, 1) < MR) then
(6)       $t_{j,i}$ ← DEMutation($\mathbf{p}_{r_1}$, $\mathbf{p}_{r_2}$, $\mathbf{p}_{r_3}$)  % Section 2.3.1
(7)     else
(8)       $t_{j,i}$ ← TrigonometricMutation($\mathbf{p}_{r_1}$, $\mathbf{p}_{r_2}$, $\mathbf{p}_{r_3}$)  % Section 2.3.2
(9)     end if
(10)    if (rand(0, 1) < crossover threshold) then
(11)      $u_{j,i}$ ← $t_{j,i}$
(12)    else
(13)      $u_{j,i}$ ← $p_{j,i}$
(14)    end if
(15)  end for
(16) end for
(17) Output: temporal population $\mathbf{T}$
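The exploration operation can be sketched in Python as follows (the values of F, MR, and the crossover threshold CR are illustrative placeholders, not the settings used in the experiments):

```python
import numpy as np

def explore(P, J_P, F=0.7, MR=0.5, CR=0.8):
    """Sketch of Algorithm 3: for each individual, every dimension gets a
    donor value from either DE/rand/1 mutation or trigonometric mutation,
    and the offspring keeps either the donor or the parent value."""
    N, n = P.shape
    T = np.empty_like(P)
    for i in range(N):
        # three mutually distinct indexes, all different from i
        r1, r2, r3 = np.random.choice(
            [r for r in range(N) if r != i], size=3, replace=False)
        de = P[r1] + F * (P[r2] - P[r3])            # DE mutation
        s = abs(J_P[r1]) + abs(J_P[r2]) + abs(J_P[r3])
        s = s if s > 0 else 1.0
        p1, p2, p3 = abs(J_P[r1]) / s, abs(J_P[r2]) / s, abs(J_P[r3]) / s
        trig = ((P[r1] + P[r2] + P[r3]) / 3.0       # trigonometric mutation
                + (p2 - p1) * (P[r1] - P[r2])
                + (p3 - p2) * (P[r2] - P[r3])
                + (p1 - p3) * (P[r3] - P[r1]))
        donor = np.where(np.random.rand(n) < MR, de, trig)    # model choice
        T[i] = np.where(np.random.rand(n) < CR, donor, P[i])  # crossover
    return T
```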

2.4. Fitness Estimation Strategy

Once the population $\mathbf{T}$ has been generated by the exploration operation, the fitness value provided by each individual must be calculated. In order to reduce the number of function evaluations, a fitness estimation strategy that decides which individuals are estimated and which are actually evaluated is introduced. The idea of such a strategy is to find the global optimum of a given function considering only a very small number of function evaluations.

In this paper, we explore a local approximation scheme that estimates fitness values based on previously evaluated neighboring individuals, stored in the memory $\mathbf{M}$ during the evolution process. The strategy decides whether an individual $\mathbf{u}_i$ is evaluated or estimated based on two criteria. The first one considers the distance between $\mathbf{u}_i$ and the nearest element contained in $\mathbf{M}$, whereas the second one examines the quality factor provided by that nearest element.

In the model, individuals of $\mathbf{T}$ that are near the elements of $\mathbf{M}$ holding the best quality values have a high probability of being evaluated. Such individuals are important, since they will have a stronger influence on the evolution process than other individuals. In contrast, individuals of $\mathbf{T}$ that are near elements of $\mathbf{M}$ with bad quality values maintain a very low probability of being evaluated; most of them are only estimated, being assigned the same fitness value as the nearest element of $\mathbf{M}$. On the other hand, individuals located in regions of the search space with few previous evaluations (individuals of $\mathbf{T}$ located farther than a distance $d$ from every element of $\mathbf{M}$) are also evaluated, since their fitness values are uncertain in the absence of close reference points contained in $\mathbf{M}$.

Therefore, the fitness estimation strategy follows two rules in order to evaluate or estimate the fitness values:

(1) If the new individual $\mathbf{u}_i$ is located closer than a distance $d$ to the nearest element $\mathbf{m}^{nearest}$ stored in $\mathbf{M}$, a uniform random number is generated within the range $[0,1]$. If such a number is less than $Q(\mathbf{m}^{nearest})$, $\mathbf{u}_i$ is evaluated with the true objective function; otherwise, its fitness value is estimated by assigning it the same fitness value as $\mathbf{m}^{nearest}$. Figures 3(a) and 3(b) draw the rule procedure.

(2) If the new individual $\mathbf{u}_i$ is located farther than a distance $d$ from the nearest element stored in $\mathbf{M}$, then the fitness value of $\mathbf{u}_i$ is evaluated using the true objective function. Figure 3(c) outlines the rule procedure.

From the rules, the distance $d$ controls the trade-off between the evaluation and estimation of new individuals. Unsuitable values of $d$ result in a lower convergence rate, longer computation time, a larger number of function evaluations, convergence to a local maximum, or unreliable solutions. Therefore, the value of $d$ is computed in proportion to the size of the search space, considering the prespecified lower bound $b_j^{low}$ and upper bound $b_j^{high}$ of each parameter $j$ within the $n$-dimensional space. Both rules show that the fitness estimation strategy is simple and straightforward. Figure 3 illustrates the procedure of fitness computation for a new candidate solution considering the two rules, for a problem in which the objective function is maximized with respect to two parameters ($n = 2$). In all cases (Figures 3(a), 3(b), and 3(c)), the memory $\mathbf{M}$ contains five different elements with their corresponding fitness values and quality factors. Figures 3(a) and 3(b) show the fitness evaluation or estimation of the new individual following rule 1: Figure 3(a) represents the case in which the nearest memory element holds a good quality factor, whereas Figure 3(b) represents the case in which it maintains a bad quality factor. Finally, Figure 3(c) presents the fitness evaluation of the new individual considering the conditions of rule 2. The procedure that implements the fitness estimation strategy is presented in Algorithm 4, in the form of pseudocode.

(1) Input: population $\mathbf{T}$ and memory $\mathbf{M}$
(2) for ($i = 1$; $i \leq N_k$; $i$++) do
(3)   $\mathbf{m}^{nearest}$ ← FindNearestElementOfM($\mathbf{u}_i$)
(4)   distance ← FindTheDistance($\mathbf{u}_i$, $\mathbf{m}^{nearest}$)
(5)   if (distance < $d$) then
(6)     if (rand(0, 1) <= $Q(\mathbf{m}^{nearest})$) then
(7)       $J(\mathbf{u}_i)$ ← Evaluate($\mathbf{u}_i$)   % evaluation
(8)     else                              (Rule 1)
(9)       $J(\mathbf{u}_i)$ ← $J(\mathbf{m}^{nearest})$   % estimation
(10)    end if
(11)  else
(12)    $J(\mathbf{u}_i)$ ← Evaluate($\mathbf{u}_i$)   % evaluation (Rule 2)
(13)  end if
(14) end for
(15) Output: fitness values of $\mathbf{T}$
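The strategy can be sketched in Python as follows (a minimal sketch assuming Euclidean distances and the rule structure above):

```python
import numpy as np

def estimate_or_evaluate(T, M, J_M, Q, d, objective):
    """Sketch of Algorithm 4: a new individual is truly evaluated when it
    lies farther than d from every memory element (Rule 2) or, with
    probability equal to the quality factor of its nearest memory element,
    when it lies closer than d (Rule 1); otherwise it simply inherits the
    nearest element's fitness value."""
    J_T = np.empty(len(T))
    true_evaluations = 0
    for i, u in enumerate(T):
        dist = np.linalg.norm(M - u, axis=1)  # distance to memory elements
        near = int(dist.argmin())
        if dist[near] < d and np.random.rand() > Q[near]:
            J_T[i] = J_M[near]                # Rule 1: estimation
        else:
            J_T[i] = objective(u)             # Rule 1 (evaluation) or Rule 2
            true_evaluations += 1
    return J_T, true_evaluations
```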

2.5. Memory Updating

Once the operations of exploration and fitness estimation have been applied, the memory $\mathbf{M}$ must be updated. In the APRE algorithm, the memory is updated considering the following procedure (a Python sketch follows this list):

(1) The elements of $\mathbf{M}$ and $\mathbf{T}$ are merged into a single set $\mathbf{M}_U$ ($\mathbf{M}_U = \mathbf{M} \cup \mathbf{T}$).

(2) From the resulting elements of $\mathbf{M}_U$, the $N$ best elements according to their fitness values are selected to build the new memory $\mathbf{M}$.

(3) The counters are updated: the counter of each surviving element is incremented by 1, whereas the counter of each newly added element is set to zero.
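A sketch of this update in Python (the symbol $\mathbf{M}_U$ and the vectorized bookkeeping are ours):

```python
import numpy as np

def update_memory(M, J_M, counters, T, J_T):
    """Merge memory and offspring, keep the N best elements, increment the
    counters of the surviving memory elements, and reset the counters of
    the newly admitted individuals to zero."""
    N = len(M)
    pool = np.vstack([M, T])
    J_pool = np.concatenate([J_M, J_T])
    # old elements that survive will carry counter + 1; newcomers start at 0
    c_pool = np.concatenate([counters + 1, np.zeros(len(T), dtype=int)])
    keep = np.argsort(-J_pool)[:N]           # best N (maximization)
    return pool[keep], J_pool[keep], c_pool[keep]
```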

2.6. Exploitation Operation

The second main operation applied by the APRE algorithm is the exploitation operation. Exploitation, in the context of EAs, is the process of refining the solution quality of existing promising solutions within a small neighborhood. In order to implement such a process, a new memory $\mathbf{ME}$ is generated, which contains the same elements as $\mathbf{M}$ but sorted according to their fitness values. Thus, $\mathbf{ME}$ presents in its first positions the elements whose fitness values are better than those located in the last positions. Then, the best 10% of the $N$ individuals are taken from the first elements of $\mathbf{ME}$ to build the set $\mathbf{E}$.

To each element $\mathbf{e}_h$ of $\mathbf{E}$, a probability $Pe_h$, which expresses the likelihood of the element being exploited, is assigned according to its position in $\mathbf{E}$. Therefore, the first elements of $\mathbf{E}$ have a better probability of being exploited than the last ones. In order to decide whether the element $\mathbf{e}_h$ must be exploited, a uniform random number is generated within the range $[0,1]$. If such a number is less than $Pe_h$, the element is modified by the exploitation operation; otherwise, it remains without changes.

If the exploitation operation over $\mathbf{e}_h$ proceeds, the position of $\mathbf{e}_h$ is perturbed within a small neighborhood. The idea is to test whether the solution provided by $\mathbf{e}_h$ can be refined by slightly modifying its position. In order to improve the exploitation process, the proposed algorithm starts perturbing the original position within the interval $[-d, d]$ (where $d$ is the distance defined in Section 2.4) and then gradually reduces this interval as the process evolves, as a function of the current iteration $k$ and the total number of iterations Max of which the evolution process consists. Once the perturbed element $\mathbf{e}_h^{new}$ has been calculated, its fitness value is computed by using the true objective function. If $\mathbf{e}_h^{new}$ is better than $\mathbf{e}_h$ according to their fitness values, the value of $\mathbf{e}_h$ in the original memory $\mathbf{M}$ is updated with $\mathbf{e}_h^{new}$; otherwise, the memory remains without changes. The procedure that implements the exploitation operation is presented in Algorithm 5, in the form of pseudocode.

(1) Input: new memory $\mathbf{M}$, current iteration $k$
(2) $\mathbf{ME}$ ← SortElementsFitness($\mathbf{M}$)
(3) $N_E$ ← floor($0.1 \cdot N$)
(4) $\mathbf{E}$ ← SelectTheFirstElements($\mathbf{ME}$, $N_E$)
(5) for ($h = 1$; $h \leq N_E$; $h$++) do
(6)   $Pe_h$ ← ComputeExploitationProbability($h$)
(7)   if (rand(0, 1) <= $Pe_h$) then
(8)     for ($j = 1$; $j \leq n$; $j$++) do
(9)       $e_{j,h}^{new}$ ← Perturb($e_{j,h}$, $d$, $k$, Max)
(10)    end for
(11)    $J(\mathbf{e}_h^{new})$ ← Evaluate($\mathbf{e}_h^{new}$)
(12)    if ($J(\mathbf{e}_h^{new}) > J(\mathbf{e}_h)$) then
(13)      $\mathbf{M}$ ← MemoryIsUpdated($\mathbf{e}_h^{new}$)
(14)    end if
(15)  end if
(16) end for
(17) Output: memory $\mathbf{M}$
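The exploitation operation can be sketched in Python as follows. The linearly shrinking perturbation and the rank-based probability (which assigns 1 to a single best element, consistent with the example of Figure 4) are our assumptions about the exact schedules:

```python
import numpy as np

def exploit(M, J_M, k, max_iter, d, objective, low, high):
    """Sketch of Algorithm 5: perturb the best 10% of the memory inside a
    neighborhood that starts at [-d, d] and shrinks as iterations pass;
    improved positions replace their originals in the memory."""
    order = np.argsort(-J_M)
    n_e = max(1, int(np.floor(0.1 * len(M))))
    for rank, idx in enumerate(order[:n_e]):
        Pe = 1.0 - rank / n_e                 # best elements more likely (assumed)
        if np.random.rand() <= Pe:
            shrink = 1.0 - k / max_iter       # assumed linear schedule
            step = np.random.uniform(-d, d, M.shape[1]) * shrink
            cand = np.clip(M[idx] + step, low, high)
            J_cand = objective(cand)          # always a true evaluation
            if J_cand > J_M[idx]:             # greedy replacement
                M[idx], J_M[idx] = cand, J_cand
    return M, J_M
```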

In order to demonstrate the exploitation operation, Figure 4(a) illustrates a simple example in which a memory $\mathbf{M}$ of ten different 2-dimensional elements is assumed ($N = 10$). Figure 4(b) shows the configuration of the proposed example before the exploitation operation takes place. Since only the best 10% of the elements of $\mathbf{M}$ build the set $\mathbf{E}$, a single element $\mathbf{e}_1$ constitutes $\mathbf{E}$. Therefore, the probability assigned to $\mathbf{e}_1$ is 1. Under such circumstances, the element is perturbed, generating the new position $\mathbf{e}_1^{new}$. As $\mathbf{e}_1^{new}$ is better than $\mathbf{e}_1$ according to their fitness values, the value of $\mathbf{e}_1$ in the original memory $\mathbf{M}$ is updated with $\mathbf{e}_1^{new}$. Figure 4(c) shows the final configuration of $\mathbf{M}$ after the exploitation operation has been achieved.

2.7. Computational Procedure

The computational procedure for the proposed algorithm can be summarized in Algorithm 6.

(1) Input: $N$ and Max (where Max is the maximum number of iterations).
(2) $\mathbf{M}(0)$ ← InitializeM($N$)
(3) ClearCounters()
(4) for ($k = 1$; $k \leq$ Max; $k$++) do
(5)   $N_k$ ← Algorithm 1
(6)   $\mathbf{P}_k$ ← Algorithm 2
(7)   $\mathbf{T}$ ← Algorithm 3
(8)   $J(\mathbf{T})$ ← Algorithm 4
(9)   $\mathbf{M}(k)$ ← UpdateM($\mathbf{M}(k-1)$, $\mathbf{T}$)
(10)  UpdateCounters($\mathbf{M}(k)$)
(11)  $Q$ ← CalculateQualityFactor($\mathbf{M}(k)$)
(12)  $\mathbf{M}(k)$ ← Algorithm 5
(13) end for
(14) Solution ← FindBestElement($\mathbf{M}(k)$)
(15) Output: Solution

The APRE algorithm is an iterative process in which several actions are executed. After initialization (lines 2-3), the number of memory elements to be evolved is computed and the corresponding population is selected; such number is automatically modified at each iteration (lines 5-6). Then, a set of new individuals is generated as a consequence of the execution of the exploration operation (line 7). For each new individual, its fitness value is estimated or evaluated according to the decision taken by the fitness estimation strategy (line 8). Afterwards, the memory is updated; in this stage, the new individuals produced by the exploration operation compete against the memory elements to build the final memory configuration (lines 9–11). Finally, a sample of the best elements contained in the final memory configuration undergoes the exploitation operation (line 12). This cycle is repeated until the maximum number of iterations Max has been reached.
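Wiring the sketches of the previous subsections together gives the following skeleton of Algorithm 6 (the parameter values passed to explore and the bound handling remain the illustrative assumptions stated earlier):

```python
import numpy as np

def apre(objective, low, high, N, N0, max_iter, beta, d):
    """Skeleton of the APRE loop, reusing init_memory, quality_factors,
    adapt_population_size, select_population, explore,
    estimate_or_evaluate, update_memory and exploit sketched above."""
    M, J_M = init_memory(N, low, high, objective)   # only full evaluation
    counters = np.zeros(N, dtype=int)
    N_k = N0
    for k in range(max_iter):
        N_k = adapt_population_size(J_M, counters, N_k, beta)
        P, J_P = select_population(M, J_M, N_k)
        T = explore(P, J_P)                          # new individuals
        Q = quality_factors(J_M)
        J_T, _ = estimate_or_evaluate(T, M, J_M, Q, d, objective)
        M, J_M, counters = update_memory(M, J_M, counters, T, J_T)
        M, J_M = exploit(M, J_M, k, max_iter, d, objective, low, high)
    return M[np.argmax(J_M)], J_M.max()              # best element found
```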

3. Implementation of APRE-Based Circle Detector

3.1. Individual Representation

In order to detect circle shapes, candidate images must first be preprocessed by the well-known Canny algorithm, which yields a single-pixel edge-only image. Then, the coordinates of each edge pixel are stored inside the edge vector $P = \{p_1, p_2, \ldots, p_{N_p}\}$, with $N_p$ being the total number of edge pixels. Each candidate circle uses three edge points as an individual in the optimization algorithm. In order to construct such individuals, three indexes $i$, $j$, and $k$ are selected from vector $P$, considering the circle’s contour that connects them. Therefore, the circle $C = \{p_i, p_j, p_k\}$ that crosses over such points may be considered as a potential solution for the detection problem. Considering the configuration of the edge points shown in Figure 5, the circle center $(x_0, y_0)$ and the radius $r$ of $C$ can be computed through the classical three-point circumcircle equations:

$$x_0 = \frac{\det \begin{bmatrix} x_j^2 + y_j^2 - (x_i^2 + y_i^2) & 2(y_j - y_i) \\ x_k^2 + y_k^2 - (x_i^2 + y_i^2) & 2(y_k - y_i) \end{bmatrix}}{4\left((x_j - x_i)(y_k - y_i) - (x_k - x_i)(y_j - y_i)\right)},$$

$$y_0 = \frac{\det \begin{bmatrix} 2(x_j - x_i) & x_j^2 + y_j^2 - (x_i^2 + y_i^2) \\ 2(x_k - x_i) & x_k^2 + y_k^2 - (x_i^2 + y_i^2) \end{bmatrix}}{4\left((x_j - x_i)(y_k - y_i) - (x_k - x_i)(y_j - y_i)\right)},$$

$$r = \sqrt{(x_0 - x_d)^2 + (y_0 - y_d)^2},$$

where $\det(\cdot)$ is the determinant, $(x_i, y_i)$, $(x_j, y_j)$, and $(x_k, y_k)$ are the coordinates of $p_i$, $p_j$, and $p_k$, and $(x_d, y_d)$ is any of the three selected points. Figure 5 illustrates the parameters defined by these expressions.
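A direct implementation in Python (degenerate, collinear triplets are rejected; the function name is ours):

```python
import math

def circle_from_points(p_i, p_j, p_k):
    """Circle (x0, y0, r) passing through three edge points, obtained by
    solving the two chord equations with Cramer's rule; returns None for
    (nearly) collinear points, which cannot define a circle."""
    (xi, yi), (xj, yj), (xk, yk) = p_i, p_j, p_k
    A = 4.0 * ((xj - xi) * (yk - yi) - (xk - xi) * (yj - yi))
    if abs(A) < 1e-9:
        return None                      # collinear: no valid candidate
    bj = xj**2 + yj**2 - xi**2 - yi**2   # right-hand sides of the chords
    bk = xk**2 + yk**2 - xi**2 - yi**2
    x0 = (bj * 2.0 * (yk - yi) - bk * 2.0 * (yj - yi)) / A
    y0 = (2.0 * (xj - xi) * bk - 2.0 * (xk - xi) * bj) / A
    r = math.hypot(x0 - xi, y0 - yi)
    return x0, y0, r
```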

3.2. Objective Function

In order to calculate the matching error produced by a candidate solution $C$, a set of test points is generated as a virtual shape which, in turn, must be validated, that is, checked for existence in the edge image. The test set is represented by $A = \{a_1, a_2, \ldots, a_{N_s}\}$, where $N_s$ is the number of points over which the existence of an edge point, corresponding to $C$, should be validated. In our approach, the set $A$ is generated by the Midpoint Circle Algorithm (MCA) [29]. The MCA is a searching method that seeks the points required for drawing a circle digitally; it therefore calculates the number of test points $N_s$ necessary to draw the complete circle. Such a method is considered the fastest of its kind because it avoids computing square roots by comparing the pixel separation distances among them.

The objective function $J(C)$ represents the matching between the pixels $A$ of the circle candidate (individual) and the pixels that actually exist in the edge image, yielding

$$J(C) = \frac{\sum_{v=1}^{N_s} E(x_v, y_v)}{N_s},$$

where $E(x_v, y_v)$ is a function that verifies the pixel existence in the edge image at the test coordinates $(x_v, y_v) \in A$, with $N_s$ being the number of pixels lying on the perimeter corresponding to the circle $C$ currently under testing. Hence, the function $E(x_v, y_v)$ is defined as

$$E(x_v, y_v) = \begin{cases} 1 & \text{if } (x_v, y_v) \text{ is an edge point,} \\ 0 & \text{otherwise.} \end{cases}$$

A value of $J(C)$ near one implies a better response from the “circularity” operator. Figure 6 shows the procedure to evaluate a candidate solution with the objective function $J(C)$. Figure 6(a) shows the original edge map, while Figure 6(b) presents the virtual shape $A$ representing the candidate circle. In Figure 6(c), the virtual shape $A$ is compared to the edge image, point by point, in order to find coincidences between virtual and edge points. The candidate circle has been built from the points $p_i$, $p_j$, and $p_k$, which are shown in Figure 6(a). The virtual shape $A$, obtained by MCA, gathers 56 points ($N_s = 56$), with only 17 of them existing in both images (shown as white points in Figure 6(c)), yielding $\sum_{v=1}^{N_s} E(x_v, y_v) = 17$ and therefore $J(C) \approx 0.30$.
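The test-point generation and the objective function can be sketched as follows (a minimal MCA variant over integer pixels; edge_set is assumed to be a Python set of (x, y) edge coordinates):

```python
def midpoint_circle(x0, y0, r):
    """Integer perimeter pixels of a circle computed with the Midpoint
    Circle Algorithm: no square roots, 8-way symmetry per step."""
    cx, cy, R = int(round(x0)), int(round(y0)), int(round(r))
    x, y, err = R, 0, 1 - R
    pts = set()
    while x >= y:
        for sx, sy in ((x, y), (y, x), (-x, y), (-y, x),
                       (x, -y), (y, -x), (-x, -y), (-y, -x)):
            pts.add((cx + sx, cy + sy))
        y += 1
        if err < 0:
            err += 2 * y + 1
        else:
            x -= 1
            err += 2 * (y - x) + 1
    return pts

def circle_fitness(circle, edge_set):
    """J(C): fraction of the candidate circle's test points that coincide
    with pixels of the edge map."""
    test_points = midpoint_circle(*circle)
    hits = sum(1 for p in test_points if p in edge_set)
    return hits / len(test_points)
```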

3.3. The Multiple Circle Detection Procedure

In order to detect multiple circles, the APRE detector is applied iteratively. At each iteration, two actions are performed. In the first one, a new circle is detected as a consequence of the execution of the APRE algorithm; the detected circle corresponds to the candidate solution with the best found $J(C)$ value. In the second one, the detected circle is removed from the current edge map, and the processed edge map without the removed circle becomes the input image for the next iteration. This process is executed over the sequence of images until the best $J(C)$ value falls below a threshold that is considered the permissible minimum.
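A sketch of this iterative procedure (run_apre is an assumed wrapper that executes the APRE detector over the current edge set and returns the best circle with its J value; midpoint_circle is the sketch given above):

```python
def detect_circles(edge_set, threshold, run_apre):
    """Detect circles one at a time: accept the best circle while its
    fitness stays above the permissible threshold, erase its perimeter
    pixels from the edge map, and rerun the detector on what remains."""
    detected = []
    while edge_set:
        circle, J_best = run_apre(edge_set)
        if J_best < threshold:               # no further acceptable circle
            break
        detected.append(circle)
        edge_set = edge_set - midpoint_circle(*circle)  # remove the circle
    return detected
```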

4. Results on Multicircle Detection

In order to carry out the performance analysis, the proposed approach is compared to the GA-based algorithm [5], the BFOA detector [9], and the RHT method [4] over an image set.

The GA-based algorithm follows the proposal of Ayala-Ramirez et al. [5], which considers a population size of 70, a crossover probability of 0.55, a mutation probability of 0.10, 2 elite individuals, and 200 generations. The roulette wheel selection and the 1-point crossover operator are both applied. The parameter setup and the fitness function follow the configuration suggested in [5]. The BFOA algorithm follows the implementation from [9], using the experimental parameter values that are reported there as the best configuration set. Both the GA-based algorithm and the BFOA method use the same objective function $J(C)$ defined in Section 3.2. Likewise, the RHT method has been implemented as described in [4]. Finally, Table 1 presents the parameters for the APRE algorithm used in this work; they have been kept fixed for all test images after being experimentally defined.

Images rarely contain perfectly shaped circles. Therefore, with the purpose of testing accuracy for a single circle, the detection is challenged against a ground-truth circle which is determined from the original edge map. The parameters $(x_{true}, y_{true}, r_{true})$ of the ground-truth circle are computed, using the expressions of Section 3.1, from three circumference points over the manually drawn circle. If the centre and the radius of the detected circle are defined as $(x_D, y_D)$ and $r_D$, the Error Score (Es) can be calculated as

$$Es = \eta \cdot \left(|x_{true} - x_D| + |y_{true} - y_D|\right) + \mu \cdot |r_{true} - r_D|.$$

The central point difference $|x_{true} - x_D| + |y_{true} - y_D|$ represents the centre shift of the detected circle as compared to the benchmark circle, while the radius mismatch $|r_{true} - r_D|$ accounts for the difference between their radii. $\eta$ and $\mu$ are two weighting parameters applied to the central point difference and to the radius mismatch, respectively; in this work, they are chosen as $\eta = 0.05$ and $\mu = 0.1$. Such a choice ensures that the radius difference is weighted more strongly than the difference between the central positions of the manually detected and the machine-detected circles. Here, we assume that the algorithm succeeds if Es is found to be less than 1; otherwise, we say that it has failed to detect the edge circle. Note that $\eta = 0.05$ and $\mu = 0.1$ mean that the maximum tolerated difference of radius is 10 pixels, while the maximum mismatch in the location of the center can be 20 pixels. In order to appropriately compare the detection results, the Detection Rate (DR) is introduced as a performance index. DR is defined as the percentage of detection successes after a certain number of trials. “Success” means that the compared algorithm is able to detect all circles contained in the image, under the restriction that each circle must satisfy $Es < 1$. Therefore, if at least one circle does not fulfil the condition $Es < 1$, the complete detection procedure is considered a failure.

In order to use an error metric for multiple-circle detection, the average of the Es values produced by each circle in the image is considered. Such a criterion, defined as the Multiple Error (ME), is calculated as follows:

$$ME = \frac{1}{NC} \sum_{R=1}^{NC} Es_R,$$

where $NC$ represents the number of circles actually contained in the image according to a human expert.
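Both metrics are straightforward to compute; a minimal sketch (circles are (x, y, r) triples, and the pairing between ground-truth and detected circles is assumed to be given):

```python
def error_score(true_c, det_c, eta=0.05, mu=0.1):
    """Es: with these weights, a pure radius error of 10 px or a pure
    center error of 20 px reaches the failure bound Es = 1."""
    (xt, yt, rt), (xd, yd, rd) = true_c, det_c
    return eta * (abs(xt - xd) + abs(yt - yd)) + mu * abs(rt - rd)

def multiple_error(true_circles, detected_circles):
    """ME: the Es values of all circles in the image, averaged."""
    scores = [error_score(t, d)
              for t, d in zip(true_circles, detected_circles)]
    return sum(scores) / len(scores)
```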

Figure 7 shows three synthetic images and the resulting images after applying the GA-based algorithm [5], the BFOA method [9], and the proposed approach. Figure 8 presents the experimental results for three natural images. The performance is analyzed by considering 35 different executions of each algorithm. Table 2 shows the averaged execution time, the averaged number of function evaluations, the detection rate in percentage, and the averaged multiple error (ME) for the six test images (shown in Figures 7 and 8). Close inspection reveals that the proposed method achieves the highest success rate while keeping the smallest error and demanding less computational time and fewer function evaluations in all cases.

In order to statistically analyze the results in Table 2, a nonparametric significance test known as Wilcoxon’s rank test [30–32] has been conducted over the 35 independent samples. Such a test allows assessing the differences between the results of two related methods. The analysis is performed considering a 5% significance level over the number of function evaluations and the multiple error (ME) data. Tables 3 and 4 report the values produced by Wilcoxon’s test for the pairwise comparison of the number of function evaluations and the multiple error (ME), respectively, considering two groups gathered as APRE versus GA and APRE versus BFOA. As a null hypothesis, it is assumed that there is no difference between the values of the two algorithms; the alternative hypothesis considers an existing difference between the values of both approaches. All values reported in Tables 3 and 4 are less than 0.05 (5% significance level), which is strong evidence against the null hypothesis, indicating that the best APRE mean values are statistically significant and have not occurred by chance.

Figure 9 demonstrates the relative performance of APRE in comparison with the RHT algorithm as described in [4]. All images belonging to the test are complicated and contain different noise conditions. The performance analysis is achieved by considering 35 different executions of each algorithm over the three images. The results exhibited in Figure 9 present the median-run solution (when the runs are ranked according to their final ME value) obtained throughout the 35 runs. On the other hand, Table 5 reports the corresponding averaged execution time, detection rate (in %), and averaged multiple error (ME) for the APRE and RHT algorithms over the set of images. Table 5 shows a decrease in the performance of the RHT algorithm as noise conditions change, whereas the APRE algorithm holds its performance under the same circumstances.

5. Conclusions

In this paper, a novel evolutionary algorithm called the Adaptive Population with Reduced Evaluations (APRE) has been introduced to solve the problem of circle detection. The proposed algorithm reduces the number of function evaluations through the use of two mechanisms: (1) dynamically adapting the size of the population and (2) incorporating a fitness calculation strategy which decides when it is feasible to actually calculate the fitness of newly generated individuals and when it is sufficient to estimate it.

The algorithm begins with an initial population which is used as a memory during the evolution process. To each memory element, a normalized fitness value, called the quality factor, is assigned to indicate the solution capacity provided by the element. From the memory, only a variable subset of elements is considered to be evolved. Like other population-based methods, the proposed algorithm generates new individuals considering two operators: exploration and exploitation. Such operations are applied to improve the quality of the solutions by (1) searching the unexplored solution space to identify promising areas containing better solutions than those found so far and (2) successively refining the best solutions found. Once the new individuals are generated, the memory is updated; in this stage, the new individuals compete against the memory elements to build the final memory configuration.

In order to save computational time, the approach incorporates a fitness estimation strategy that decides which individuals are estimated and which are actually evaluated. As a result, the approach substantially reduces the number of function evaluations while preserving good search capabilities. The proposed fitness calculation strategy estimates the fitness value of new individuals using memory elements located in neighboring positions that have been visited during the evolution process. Under the strategy, new individuals close to a memory element with a high quality factor have a great probability of being evaluated with the true objective function. Similarly, new individuals lying in regions of the search space with no previous evaluations are also evaluated. The remaining search positions are estimated by assigning them the same fitness value as the nearest memory element. Through the use of such a fitness estimation method, the fitness values of only very few individuals are actually evaluated, whereas the rest are merely estimated.

Unlike other approaches that use an already existing EA as a framework, the APRE method has been designed from scratch to substantially reduce the computational cost while preserving good search effectiveness.

To detect the circular shapes, the detector is implemented by encoding three edge pixels as candidate circles over the edge image. An objective function evaluates whether such candidate circles are actually present in the edge image. Guided by the values of this objective function, the set of encoded candidate circles is evolved using the operators defined by APRE so that they fit the actual circles on the edge map of the image.

In order to test the circle detection accuracy, the error score Es is used; it objectively evaluates the mismatch between a manually detected circle and a machine-detected shape. We demonstrated that the APRE method outperforms both evolutionary methods (GA and BFOA) and Hough transform-based techniques (RHT) in terms of speed and accuracy, within a statistically significant framework (Wilcoxon test). The results show that the APRE algorithm is able to significantly reduce the computational overhead as a consequence of decrementing the number of function evaluations.

Funding

The proposed algorithm is part of the vision system used by a biped robot supported under Grant CONACYT CB 181053.