Abstract

This paper proposes an improved approach to decomposing structuring elements (SEs) of arbitrary shape. For the model, we use an improved recursive dilation-union model with a new termination criterion: the sum of the 3-by-3 matrix must be less than 5. For the search algorithm, we introduce the restarted simulated annealing particle swarm optimization (RSAPSO) method. The experiments demonstrate that our method finds better results than Park's method, Anelli's method, Shih's SGA method, and Zhang's MFSGA method. In addition, our method gives the best decomposition trees for SE shapes including “ship,” “car,” “heart,” “umbrella,” “vase,” “tree,” “cat,” “V,” “bomb,” and “cup.”

1. Introduction

Mathematical morphology (MM) is a theory and technique for the analysis and processing of geometrical structures, based on set theory, lattice theory, topology, and random functions. MM is not only commonly applied to digital images but can also be employed on graphs, surface meshes, solids, and many other spatial structures [1].

In these applications, the inherent strategy of MM is to explore the characteristics of an object by probing its microstructure with various forms, known as structuring elements (SEs). Most image processing architectures adapted to morphological operations use SEs of a limited size, so implementation becomes difficult when a large SE is employed. Hence, techniques for decomposing a large SE into a combination of small SEs are important [2].

In the last decade, several techniques have been proposed for the decomposition of SEs, but they are intricate and sometimes suffer from indecomposable cases [3]. Hashimoto and Barrera indicated that traditional algorithms have the disadvantage of being unable to decompose many simply connected decomposable SEs [4]. Shih and Wu developed a method for decomposing arbitrarily shaped binary SEs with a standard genetic algorithm (SGA); the algorithm iteratively creates new candidate decompositions that minimize a predefined fitness function [2]. Zhang and Wu proposed a recursive model and used the migration fitness scaling genetic algorithm (MFSGA) for SE decomposition [5].

The abovementioned SGA and MFSGA algorithms are time-consuming, and they are easily trapped at local optima, leading to wrong solutions. Particle swarm optimization (PSO) is an efficient algorithm compared to SGA [6–8]. Zhang and Wu proposed a restarted simulated annealing PSO (RSAPSO) algorithm [9] for further improvement. They reported that RSAPSO combines the global search ability of PSO with the local search ability of the restarted simulated annealing (RSA) algorithm, offsetting the weaknesses of both PSO and RSA. They also showed, on six benchmark functions, that RSAPSO is superior to SGA, RSA, and PSO. In this paper, we introduce the recursive model and the RSAPSO algorithm for SE decomposition.

The rest of the paper is organized as follows. Section 2 describes the methodology, including the concept of SE decomposition, the recursive dilation-union model, the encoding strategy, and the objective function; it also introduces the RSAPSO algorithm. The experiments in Section 3 compare the RSAPSO algorithm with Park's method, Anelli's method, Shih's SGA method, and Zhang's MFSGA method. Finally, Section 4 is devoted to discussion and conclusions.

2. Methodology

2.1. Concept of SE Decomposition

Suppose f denotes a binary image and B denotes a SE. If we decompose B into B1 ⊕ B2, then by the associative law of dilation (and the corresponding chain rule for erosion) the dilation and erosion become

f ⊕ B = f ⊕ (B1 ⊕ B2) = (f ⊕ B1) ⊕ B2,
f ⊖ B = f ⊖ (B1 ⊕ B2) = (f ⊖ B1) ⊖ B2.

The computational cost (CC) of a dilation/erosion operator is equal to the number of nonzero elements of its SE. Usually the CC of B is extremely large, whereas the sum of the CC of B1 and B2 is relatively small. Therefore, SE decomposition can dramatically reduce the CC.
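As a quick illustration (the paper's implementation was in MATLAB; the Python sketch below, using NumPy and SciPy, is ours, and the toy image and SEs are made up), the chain rule of dilation and the drop in CC can be checked numerically:

import numpy as np
from scipy.ndimage import binary_dilation

f = np.zeros((20, 20), dtype=bool)
f[8:12, 5:9] = True                        # arbitrary foreground blob
B = np.ones((3, 3), dtype=bool)            # original SE, CC = 9
B1 = np.ones((1, 3), dtype=bool)           # row vector, CC = 3
B2 = np.ones((3, 1), dtype=bool)           # column vector, CC = 3

direct = binary_dilation(f, structure=B)
chained = binary_dilation(binary_dilation(f, structure=B1), structure=B2)

print(np.array_equal(direct, chained))            # True: f dilated by B equals (f dilated by B1) dilated by B2
print(int(B.sum()), int(B1.sum() + B2.sum()))     # CC drops from 9 to 6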

2.1.1. Dilation Model

The dilation model decomposes an SE using only the dilation operator. Suppose B is decomposed into B1 ⊕ B2 ⊕ ⋯ ⊕ Bn; the serial computational cost (SCC) of the dilation model is equal to the sum of the CC of B1, B2, …, and Bn.

Figure 1 shows three examples. The first decomposes a 3 × 3 square SE into a 1 × 3 row vector and a 3 × 1 column vector; the SCC decreases from 9 to 6. The second decomposes a 5 × 5 square SE into two 1 × 3 row vectors and two 3 × 1 column vectors, and the SCC decreases from 25 to 12. The third decomposes a diamond SE of diameter 7 into three small SEs of size 3 × 3 (each a 3 × 3 diamond with five nonzero elements), and the SCC decreases from 25 to 15.
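The third example can be checked with the same tools; the script below is an illustrative sketch (not part of the paper's code) that builds the diameter-7 diamond from the 3 × 3 diamond and compares the costs:

from scipy.ndimage import generate_binary_structure, iterate_structure

cross = generate_binary_structure(2, 1)    # 3 x 3 diamond ("plus") SE with 5 nonzero elements
diamond7 = iterate_structure(cross, 3)     # diamond of diameter 7 = cross dilated with itself twice

print(int(diamond7.sum()), 3 * int(cross.sum()))   # SCC: 25 for the original versus 15 for three factors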

2.1.2. Dilation-Union Model

The dilation-union model decomposes an SE using both the dilation and the union operators. There are two types of computational cost for the dilation-union model: the SCC and the parallel computational cost (PCC).

Suppose B can be decomposed into the union of B1 ⊕ B2 and B3:

B = (B1 ⊕ B2) ∪ B3.

Then the SCC is equal to the sum of the CC of B1, B2, and B3, while the PCC is equal to the maximum of CC(B1) + CC(B2) and CC(B3), because the two branches of the union can be computed in parallel. Figure 2 gives an example; there, the subscript (−1, −1) denotes that the square matrix should be translated by −1 along both the x-axis and the y-axis. The SCC decreases from 14 to the sum of the CCs of the three factors, and the PCC decreases from 14 to the cost of the most expensive branch.
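A minimal sketch of the two cost measures, assuming the decomposition is stored as a list of branches whose factors are dilated serially and whose branches are united in parallel (the helper names serial_cost and parallel_cost are ours):

import numpy as np

def serial_cost(branches):
    """SCC of a dilation-union decomposition: sum of the CC of every factor."""
    return sum(int(f.sum()) for branch in branches for f in branch)

def parallel_cost(branches):
    """PCC: the branches run in parallel, so only the most expensive one counts."""
    return max(sum(int(f.sum()) for f in branch) for branch in branches)

# B = (B1 (+) B2) U B3: two factors on the first branch, one on the second.
B1 = np.ones((1, 3), dtype=bool)
B2 = np.ones((3, 1), dtype=bool)
B3 = np.array([[1, 0], [0, 1]], dtype=bool)

branches = [[B1, B2], [B3]]
print(serial_cost(branches), parallel_cost(branches))   # 8 and 6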

2.1.3. Recursive Dilation-Union Model

Let B denote a SE of size m × n. The first-level decomposition is written as

B = (X1 ⊕ P1) ∪ Q1.

Here we use the dilation-union model: X1 denotes the variable-size matrix, P1 denotes the fixed-size (3 × 3) prime component, and Q1 denotes the residue component. The residue Q1 can easily be decomposed into a union of 3 × 3 factors, because the 3 × 3 size is the gold standard for the elementary structuring component in the decomposition literature.

The residue at level k is written as

Qk = (F1)(x1, y1) ∪ (F2)(x2, y2) ∪ ⋯ ∪ (Fnk)(xnk, ynk),

where nk denotes the number of 3-by-3 residual matrices and the subscript (xi, yi) denotes that Fi should be translated by (xi, yi). The same dilation-union decomposition is then applied to the variable matrix,

Xk−1 = (Xk ⊕ Pk) ∪ Qk,

and the iteration continues until the termination criteria are satisfied. In Zhang and Wu's paper [5], the termination criterion (TC) was defined as the size of Xk being smaller than or equal to 3 × 3:

size(Xk) ≤ 3 × 3.

However, this is not perfect, because such a binary SE can sometimes be decomposed further, as shown in Figure 1(a). So our new criterion is set as

size(Xk) ≤ 3 × 3 and sum(Xk) < 5.

We improved the TC by adding the rule sum(Xk) < 5. The reason is that any 3-by-3 SE with fewer than five 1s is indecomposable.
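The improved termination criterion amounts to a one-line test; a sketch (the function name is_terminal is ours) could look like this:

import numpy as np

def is_terminal(B):
    """Improved termination criterion: stop when B fits in 3 x 3 AND has fewer
    than five nonzero entries (such a 3 x 3 SE is indecomposable)."""
    rows, cols = B.shape
    return rows <= 3 and cols <= 3 and int(B.sum()) < 5

print(is_terminal(np.ones((3, 3), dtype=bool)))   # False: 9 ones, can still be decomposed
print(is_terminal(np.eye(3, dtype=bool)))         # True: only 3 ones, indecomposable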

The flowchart of the recursive dilation-union model is depicted in Figure 3. At each iteration level k we record four variables, from which the decomposition tree can be drawn. The remaining task is to develop an effective method to solve formulas (4) and (5).

2.2. Optimization Problem

The SE decomposition problem can be transformed into an optimization problem with the help of the dilation-union model. In what follows we briefly discuss the encoding strategy and the objective function of the optimization problem.

2.2.1. Encoding Strategy

Formula (4) or formula (5) can be regarded as an optimization problem. We transform the prime component Pk into a row-vector string of chromosomes. For any SE at any iteration, Pk is a 3-by-3 binary image; therefore, the chromosome can be written as

C = (c1, c2, …, c9), ci ∈ {0, 1}.

Here C denotes the chromosome and i denotes the locus. Figure 4(a) illustrates the numbering scheme for the loci, namely, from left to right and then from top to bottom. Two examples are shown in Figures 4(b)-4(c); their 1D chromosome strings are “001001101” and “101110101,” respectively.
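The encoding and decoding of a prime component is a simple flattening; the sketch below (the helper names encode and decode are ours) reproduces the “001001101” example of Figure 4(b):

import numpy as np

def encode(P):
    """Flatten a 3 x 3 binary prime component into a 9-character chromosome string,
    reading left to right and then top to bottom."""
    return ''.join(str(int(v)) for v in P.flatten())

def decode(chromosome):
    """Rebuild the 3 x 3 prime component from its 9-character chromosome."""
    bits = np.array([int(c) for c in chromosome], dtype=bool)
    return bits.reshape(3, 3)

P = decode('001001101')
print(encode(P))        # '001001101', matching the example in Figure 4(b)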

2.2.2. Objective Function

Since the prime component Pk is encoded as a chromosome, the variable matrix Xk and the residual matrix Qk of the SE can be obtained from it through the recursive model: Xk is the erosion of Xk−1 by Pk, and Qk is the part of Xk−1 not covered by Xk ⊕ Pk. Then we can extract the two types of cost, the SCC and the PCC, as defined in Section 2.1.2; these quantities satisfy the cost equalities given there. In this paper we choose the SCC as the objective function, since the PCC requires a pipelined architecture, which is difficult to implement in practice.
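A hedged sketch of how the fitness of a candidate prime component might be evaluated under the assumptions above (the erosion/set-difference construction and the helper name level_scc are our illustration, not the paper's exact formula):

import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def level_scc(B_prev, P):
    """Fitness of a candidate 3 x 3 prime component P for one decomposition level
    (an illustrative cost, not necessarily the paper's exact one)."""
    if P.sum() == 0:
        return np.inf                       # an empty prime component is useless
    X = binary_erosion(B_prev, structure=P, border_value=0)    # variable matrix
    Q = B_prev & ~binary_dilation(X, structure=P)              # residue not covered by X dilated by P
    # serial cost: the prime component, the remaining variable matrix,
    # and the residue pixels (each ends up inside some 3 x 3 factor)
    return int(P.sum() + X.sum() + Q.sum())

B = np.zeros((7, 7), dtype=bool)
B[1:6, 1:6] = True                          # a 5 x 5 square SE
print(level_scc(B, np.ones((3, 3), dtype=bool)))   # 18 for the full 3 x 3 prime, with no residue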

2.3. RSAPSO Algorithm

Now that the encoding strategy and the objective function are established, we employ the RSAPSO algorithm [9] to search for the optimal points. PSO is a method that optimizes a problem by iteratively improving candidate solutions with regard to a given measure of quality [10]. It is commonly known as a metaheuristic, as it makes few or no assumptions about the problem being optimized and can search very large spaces of candidate solutions. PSO does not use the gradient of the problem being optimized, which means that it does not require the optimization problem to be differentiable, as is required by classic optimization methods such as gradient descent and quasi-Newton methods [11]. PSO can therefore also be used on optimization problems that are partially irregular, noisy, adaptive, and so forth [12].

Simulated annealing (SA) was chosen as the local search method. SA takes its name from annealing in metallurgy [13], a technique involving the heating and controlled cooling of a material to increase the size of its crystals and reduce their defects [14]. The heat causes the atoms to become unstuck from their initial positions (a local minimum of the internal energy) and wander randomly through states of higher energy; the slow cooling then gives them more chances of finding configurations with lower internal energy than the initial one [15]. We introduce the restarted simulated annealing (RSA) technique to improve the performance of SA.

The proposed RSAPSO algorithm combines the exploitation ability of RSA with the exploration ability of PSO. It divides the population into two halves: one half runs PSO and the other half runs RSA. In what follows, we describe these three algorithms (PSO, RSA, and RSAPSO) in depth.

2.3.1. Particle Swarm Optimization

PSO is a population-based stochastic optimization technique that simulates the social behavior of a swarm of bees, a flock of birds, or a school of fish. Initialized with random candidate solutions, PSO guides the swarm toward a global optimum [16]. This is achieved by an iterative procedure based on the processes of movement and intelligence in an evolutionary system [17]. Figure 5(a) shows the flowchart of the PSO algorithm.

In PSO, each potential solution is represented as a particle. Two properties, a position x and a velocity v, are associated with each particle. The position and velocity of the ith particle are

xi = (xi1, xi2, …, xiD),
vi = (vi1, vi2, …, viD),

where D stands for the dimension of the problem. In each iteration, a fitness function is evaluated for all the particles in the swarm [18]. The velocity of each particle is updated by keeping track of two best positions: one is the best position the particle has traversed so far, called “pbest”; the other is the best position that any neighbor of the particle has traversed so far, a neighborhood best called “nbest.” When a particle takes the whole population as its neighborhood, the neighborhood best becomes the global best and is accordingly called “gbest.” Hence, a particle's velocity and position are updated as follows:

vi(t + 1) = ω vi(t) + c1 r1 (pbesti − xi(t)) + c2 r2 (gbest − xi(t)),
xi(t + 1) = xi(t) + vi(t + 1) Δt,

where ω is the “inertia weight” that controls the impact of the particle's previous velocity on its current one. The parameters c1 and c2 are positive constants called “acceleration coefficients.” The parameters r1 and r2 are random numbers uniformly distributed in the interval [0, 1]; they are redrawn every time they occur. The parameter Δt stands for the given time step. The population of particles is then moved according to (12) and tends to cluster together from different directions. However, a maximum velocity vmax should not be exceeded by any particle, to keep the search within a meaningful solution space [19]. The PSO algorithm runs through these processes iteratively until the termination criterion is satisfied [20–22].
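A compact sketch of the update in formula (12); the parameter values (ω = 0.7, c1 = c2 = 2.0, vmax = 1.0) are illustrative defaults, not the settings of Table 1:

import numpy as np

rng = np.random.default_rng(0)

def pso_step(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0, v_max=1.0, dt=1.0):
    """One velocity/position update; x and v are arrays of shape (particles, dims)."""
    r1 = rng.random(x.shape)
    r2 = rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    v = np.clip(v, -v_max, v_max)          # keep the search inside a meaningful space
    x = x + v * dt
    return x, v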

2.3.2. Restarted Simulated Annealing

The SA algorithm is a probabilistic hill-climbing technique based on the annealing/cooling process of metals [23]. This annealing process occurs after the heat source is removed from a molten metal and its temperature starts to decrease. As the temperature decreases, the energy of the metal molecules reduces, and the metal becomes more rigid. The procedure continues until the metal temperature reaches the surrounding ambient temperature, at which stage the energy has reached its lowest value and the metal is perfectly solid [24].

The SA procedure begins by generating an initial solution at random. At each stage, a small random change is made to the current solution x; the new solution is called xnew. The size of the perturbation depends on a temperature parameter T and a scaling constant k.

The perturbation is proportional to kT and to a random value r between 0 and 1 drawn from a uniform distribution. The temperature decreases with each iteration of the algorithm, thus reducing the size of the perturbations as the search progresses. This mechanism produces large perturbations in the initial stages of the search and ensures that the resulting parameters are fine-tuned towards the end of the optimization [25].

A move is made to the new solution xnew if it has smaller energy or if the probability function has a higher value than a randomly generated number; otherwise a new solution is generated, evaluated, and compared again. The probability of accepting a new solution, known as the “Metropolis law,” is given as

P = exp(−(Enew − Eold)/T).

To avoid getting trapped at local extrema, the reduction rate of T should be slow enough. In this study the temperature is reduced geometrically,

Tn = α^n · T0,

where T0 is the initial temperature, α is the reduction constant, and n is the number of iterations. In general, most worsening moves may be accepted at the initial stages, but at the final stage only improving ones are likely to be allowed. This helps the procedure jump out of local minima.
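The Metropolis acceptance test and a geometric cooling schedule (our reading of the schedule described above) can be sketched as follows:

import math
import random

def accept(E_new, E_old, T):
    """Metropolis law: always accept an improvement; otherwise accept with
    probability exp(-(E_new - E_old) / T)."""
    if E_new <= E_old:
        return True
    return random.random() < math.exp(-(E_new - E_old) / T)

def temperature(T0, alpha, n):
    """Geometric cooling: the temperature shrinks by the reduction constant alpha
    at every iteration n (one common reading of the schedule described above)."""
    return T0 * alpha ** n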

However, sometimes it is better to move back to a former solution that was significantly better, rather than always moving on from the current state. This process is called “restarting” of SA [26]. To do this, we set the temperature to a former value and restart the annealing schedule. The decision to restart can be based on several criteria, including whether a fixed number of steps has passed, whether the current energy is too high compared with the best obtained so far, or whether a random number falls within a prescribed range (random restart). In this paper, we restart the SA when the current energy is too high compared with the best energy, because this criterion performs best among all of them [27]. The flowchart of RSA is shown in Figure 5(b).
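A possible form of the restart test, with an illustrative threshold of our own choosing:

def should_restart(E_current, E_best, threshold=0.2):
    """Restart test: return to the best-so-far solution when the current energy
    has drifted more than `threshold` (an illustrative value) above the best one."""
    return E_current > (1.0 + threshold) * E_best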

2.3.3. Pseudocodes of RSAPSO

The traditional PSO algorithm suffers from premature convergence at an early stage [28]. RSA, on the other hand, accepts worse solutions, so it can escape from local minima, resist premature convergence, and increase diversity [12]. Therefore, a hybrid strategy referred to as RSAPSO was proposed [9]; it offsets the weaknesses of both PSO and RSA. The main idea is to divide the population into two halves: one half runs PSO and the other half runs RSA. At each step, the results are combined and updated with the best result picked from the whole population. The flowchart of RSAPSO is depicted in Figure 5(c), and its pseudocode is given as follows.

Step 1 (initialization). Generate the population randomly.

Step 2 (evaluation). Evaluate the objective function value of each particle.

Step 3 (segmentation). Halve the population randomly: one half is updated by PSO according to formula (12), and the other half is updated by RSA according to formulas (13) and (14).

Step 4 (update). Update the personal best pbest and the global best gbest; update the temperature T according to formula (15).

Step 5 (repeat). Repeat Step 2 to Step 4 until the termination criterion is satisfied.

Step 6 (output). Output the final results.
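Putting Steps 1-6 together, a simplified RSAPSO loop might look like the sketch below; it reuses the pso_step, accept, temperature, and should_restart helpers sketched earlier, and all parameter values are illustrative:

import numpy as np

def rsapso(objective, dims, n_particles=20, n_iter=100):
    """Simplified RSAPSO loop (Steps 1-6): one half of the swarm moves by PSO,
    the other half by RSA, and both halves share the best solution found so far."""
    rng = np.random.default_rng(1)
    x = rng.random((n_particles, dims))            # Step 1: random initialization
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])  # Step 2: evaluation
    gbest = pbest[pbest_f.argmin()].copy()
    T0, alpha = 1.0, 0.95                          # illustrative cooling parameters

    for it in range(n_iter):                       # Step 5: repeat until termination
        half = n_particles // 2
        T = temperature(T0, alpha, it)             # cooling schedule (cf. formula (15))
        # Step 3a: the first half moves by the PSO update (cf. formula (12))
        x[:half], v[:half] = pso_step(x[:half], v[:half], pbest[:half], gbest)
        # Step 3b: the second half moves by RSA (perturb, Metropolis test, restart rule)
        for i in range(half, n_particles):
            candidate = x[i] + T * (rng.random(dims) - 0.5)
            if accept(objective(candidate), objective(x[i]), T):
                x[i] = candidate
            if should_restart(objective(x[i]), float(pbest_f.min())):
                x[i] = gbest.copy()                # jump back to the best-so-far solution
        # Step 4: update personal and global bests
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved] = x[improved]
        pbest_f[improved] = f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, float(pbest_f.min())             # Step 6: output the best solution found

# usage sketch on a toy continuous objective (the SE problem would use the SCC fitness instead)
best_x, best_f = rsapso(lambda p: float(np.sum(p ** 2)), dims=9)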

3. Experiments and Discussions

The experiments were carried out on an IBM platform with a 2.2 GHz Intel Core i3-2330M CPU and 6 GB of RAM, running the 64-bit Windows 7 operating system. The algorithm was developed in-house using the Global Optimization Toolbox of MATLAB 2013a. Readers can reproduce the experimental results on any desktop with MATLAB installed.

3.1. Parameters Setting

We compared the proposed RSAPSO method with Park's method, Anelli's method, Shih's SGA method, and Zhang's MFSGA method. Several important parameters were obtained through trial and error and are listed in Table 1, including the population size, the crossover probability, the mutation probability, the migration interval, and the migration rate.

3.2. Comparison with Park’s Method

The SE in Figure 6 is indecomposable by Park's algorithm [4]. Its original SCC is 21. We ran our RSAPSO method 20 times, and all runs obtained the optimal decomposition shown in Figure 6(a), with an SCC of 10. For comparison, we also ran SGA and MFSGA 20 times. The results are shown in Table 2. The second row lists the outcomes of all 20 runs; taking SGA as an example, 19 runs obtained an SCC of 10 and the remaining run obtained an SCC of 11. The third row, “Averaged SCC,” shows either the averaged SCC of the heuristic methods or the SCC of the deterministic methods. Table 2 shows that SGA obtained the optimal result 19 times and one suboptimal result, shown in Figure 6(b); MFSGA obtained the optimal result 19 times and one suboptimal result, shown in Figure 6(c). The SCC of both suboptimal decompositions was 11. The averaged SCCs of SGA, MFSGA, and RSAPSO were 10.05, 10.05, and 10, respectively.

3.3. Comparison with Anelli’s Method

We compared the proposed RSAPSO with Anelli's method [29], SGA, and MFSGA. We ran SGA, MFSGA, and RSAPSO 20 times each, since the distribution of their initial populations is random.

Figures 7(a) and 7(b) show two successful decomposition trees of Anelli's SE, both with an SCC of 18. Figures 7(c) and 7(d) show two failed decomposition trees, with SCCs of 22 and 23, respectively.

The comparison results are shown in Table 3. The SCC of the original SE was 41. Anelli's method reduced it to 22; the corresponding decomposition tree can be found in [29]. Among 20 runs, SGA obtained the optimal result 15 times and MFSGA 17 times, while the proposed RSAPSO method succeeded in all 20 runs. The averaged SCCs of SGA, MFSGA, and RSAPSO were 19.05, 18.6, and 18, respectively.

3.4. Comparison with SGA

Shih's paper reported that the optimal SCCs of the decomposition trees for the “ship” and “car” shapes found by SGA were 47 and 46, respectively. Searching with RSAPSO, we obtained better results (see Table 5). For the “ship” shape, the original SCC was 125; the optimal SCC found by SGA was 47, and our method found a better SCC of 44 (see the 2nd row of Table 4). For the “car” shape, the original SCC was 168; the optimal SCC found by SGA was 46, and our method achieved a better SCC of 43 (see the 3rd row of Table 4).

3.5. Comparison with MFSGA

In what follows, we compare the proposed RSAPSO with the MFSGA method. Zhang et al. reported that the optimal SCCs for “heart” and “umbrella” found by MFSGA were 32 and 44, respectively. Using RSAPSO, we obtained better results. For the “heart” SE, the original SCC was 142; MFSGA reduced it to 32, and our method obtained a better SCC of 30. For the “umbrella” SE, the original SCC was 94; MFSGA reduced it to 44, and our method obtained a better SCC of 41.

3.6. Other Examples

We also applied RSAPSO to six other benchmark SEs of different shapes: vase, tree, cat, V, bomb, and cup. The results are shown in Table 6. For the “vase” shape, the SCCs of the original and decomposed SEs were 149 and 32, respectively. For the “tree” shape, they were 73 and 26; for the “cat” shape, 102 and 30; for the letter “V,” 94 and 34; for the “bomb” shape, 91 and 26; and for the “cup” shape, 113 and 25.

4. Conclusions

In this paper, a novel decomposition method for arbitrarily shaped SEs was proposed. The SE decomposition problem was first transformed into an optimization problem by virtue of the improved recursive dilation-union model, which contains a new termination criterion. The RSAPSO method was then introduced as the search algorithm. In the experiments, we compared our method with Park's method [4], Anelli's method [29], Shih's SGA method [2], and Zhang's MFSGA method [5]. The results on several benchmark shapes showed that our method is more robust than the aforementioned algorithms and is able to find the optimal decomposition tree.

The contribution of the paper lies in the following four aspects: we proposed an improved termination criterion for the recursive dilation-union model, adding the rule that the sum of the 3-by-3 matrix must be less than 5; we used the serial computational cost (SCC) as the objective function; we introduced the RSAPSO algorithm and showed that it is superior to Park's method, Anelli's method, SGA, and MFSGA for the SE decomposition application; and we gave the best decomposition results for the “heart,” “ship,” and “car” shapes among the state-of-the-art SE decomposition methods.

Future research will focus on other optimization algorithms, such as cuckoo search [30], harmony search [31], genetic pattern search [32], tabu search [33], the firefly algorithm [34], honey-bee mating [35], and artificial bee colony [36, 37]. We will also try other termination criteria [38] and check the effectiveness of the resulting SE decomposition methods.

Conflict of Interests

The authors declare that they have no commercial or associative interest that represents a conflict of interest in connection with the submitted work.

Acknowledgments

The authors would like to express their gratitude to the three anonymous reviewers. This work was supported by the Nanjing Normal University Research Foundation for Talented Scholars (no. 2013119XGQ0061).