Research Article | Open Access

Gang Zhou, Kai Zhong, Zhongwei Li, Yusheng Shi, "Direct Least Absolute Deviation Fitting of Ellipses", Mathematical Problems in Engineering, vol. 2020, Article ID 1317349, 11 pages, 2020. https://doi.org/10.1155/2020/1317349

Direct Least Absolute Deviation Fitting of Ellipses

Academic Editor: Thomas Schuster
Received: 09 Nov 2019
Revised: 16 Apr 2020
Accepted: 04 May 2020
Published: 11 Jul 2020

Abstract

Scattered data from edge detection usually involve undesired noise which seriously affects the accuracy of ellipse fitting. In order to alleviate this kind of degradation, a method of direct least absolute deviation ellipse fitting by minimizing the algebraic distance is presented. Unlike conventional estimators, which perform well on ideal data and Gaussian noise but poorly on non-Gaussian outliers, the proposed method shows very competitive results under non-Gaussian noise. In addition, an efficient numerical algorithm based on the split Bregman iteration is developed to solve the resulting optimization problem, which significantly reduces the computational burden. Furthermore, two classes of solutions are introduced as the initial guess, and the selection of the algorithm parameters is studied in detail; thus, the method does not suffer from the convergence issues caused by poor initialization, a common drawback of iterative approaches. Numerical experiments reveal that the proposed method is superior to its least-squares counterpart and outperforms some state-of-the-art algorithms under both Gaussian and non-Gaussian artifacts.

1. Introduction

Ellipse fitting is a fundamental tool for many computer vision tasks such as object detection, recognition, camera calibration [1], and 3D reconstruction [2]. A large number of techniques have been developed for this problem and work well under the assumption that the fitting error follows a Gaussian distribution. Among them, Fitzgibbon's direct least-squares fitting [3] has attracted much attention for its accuracy, solution uniqueness, speed, and ease of use, and it has become a standard reference method for fitting. Building on it, scale normalization [4], robust data preprocessing for noise removal [5], optimization of the Sampson distance [6, 7], minimization of the algebraic distance subject to suitable quadratic constraints [8, 9], modeling of the noise as a sum of random amplitude-modulated complex exponentials [10], and iterative orthogonal transformations [11] have been proposed. Although performance has improved to some extent, the inherent disadvantage of ℓ2-norm-based methods is that the squared term amplifies the influence of outliers and leads to a rough result. To overcome this shortcoming, the ℓ1 norm and related robust norms have been adopted [12, 13]; theoretically, ℓ1-norm-based algorithms are less sensitive to outliers, but the selection of the iterative initial value remains an open issue. Given the advantages of the ℓ1 norm and the direct algebraic-distance-minimization strategy, a natural idea is to replace the ℓ2 term with an ℓ1 term in Fitzgibbon's model; this yields our model, whose efficacy we explore in the following sections.

2. The Proposed Minimization Model

2.1. Problem Modeling

A general conic is given by an implicit second-order polynomial

F(θ; x) = θ · x = a x² + b x y + c y² + d x + e y + f = 0, (1)

where θ = [a, b, c, d, e, f]ᵀ is the conic parameter vector and x = [x², xy, y², x, y, 1]ᵀ is built from the data point (x, y). For every point xᵢ = (xᵢ, yᵢ), i = 1, …, N, where N is the number of data points, |F(θ; xᵢ)| is called the algebraic distance between the data point and the conic F(θ; x) = 0; the symbol |x| stands for the absolute value of x. The sum of the algebraic distances over all N points is

E(θ) = Σᵢ₌₁ᴺ |F(θ; xᵢ)|. (2)
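As a concrete illustration of these quantities (a NumPy sketch with our own helper names, not the authors' code), the row vector built from each point and the ℓ1 sum of algebraic distances can be computed as:

```python
import numpy as np

def design_matrix(x, y):
    """One row [x^2, xy, y^2, x, y, 1] per data point (x, y)."""
    return np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])

def l1_algebraic_distance(theta, x, y):
    """Sum over all points of |F(theta; x_i)|, i.e. the l1 norm of D theta."""
    return np.abs(design_matrix(x, y) @ theta).sum()
```

For points lying exactly on the conic the sum vanishes; for example, the unit circle corresponds to θ = [1, 0, 1, 0, 0, −1].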

In order to force the conic curve into an ellipse, the parameters of the conic must satisfy b² − 4ac < 0; for convenience, we take the equality subset 4ac − b² = 1 instead [3]. Its matrix form can be rewritten as θᵀCθ = 1, with

C =
⎡ 0  0  2  0  0  0 ⎤
⎢ 0 −1  0  0  0  0 ⎥
⎢ 2  0  0  0  0  0 ⎥
⎢ 0  0  0  0  0  0 ⎥
⎢ 0  0  0  0  0  0 ⎥
⎣ 0  0  0  0  0  0 ⎦. (3)

Thus, letting D = [x₁, x₂, …, x_N]ᵀ denote the N × 6 design matrix, the matrix form of the objective minimization model is given as follows:

min_θ ‖Dθ‖₁ subject to θᵀCθ = 1. (4)
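The ellipse constraint 4ac − b² = 1 is a quadratic form in θ. A small sketch (our own helper names) that builds C and evaluates the form:

```python
import numpy as np

# theta = [a, b, c, d, e, f]; theta^T C theta = 4ac - b^2
C = np.zeros((6, 6))
C[0, 2] = C[2, 0] = 2.0   # the two symmetric cross terms contribute 2ac each
C[1, 1] = -1.0            # the -b^2 term

def ellipse_constraint(theta):
    """Evaluate 4ac - b^2 for a conic parameter vector."""
    return theta @ C @ theta
```

For the unit-circle conic θ = [1, 0, 1, 0, 0, −1] the form evaluates to 4, and dividing θ by 2 rescales it to satisfy the constraint exactly.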

By introducing the Lagrange multiplier λ, the constrained problem can be turned into an unconstrained one:

E(θ; λ) = ‖Dθ‖₁ − λ(θᵀCθ − 1). (5)

2.2. Numerical Aspects

Thus, our method seeks the optimal θ that minimizes the cost function in equation (5). The difficulty in solving equation (5) is that the ℓ1 term ‖Dθ‖₁ is nondifferentiable and inseparable. To overcome this problem, we follow the method of Zhou et al. [14]. The split Bregman method was first introduced in [15] as a very efficient tool for general ℓ1-regularized optimization problems. The basic idea is to convert the unconstrained minimization problem in equation (5) into a constrained one by introducing one auxiliary variable d = Dθ; this leads to the constrained problem:

min_{θ, d} ‖d‖₁ − λ(θᵀCθ − 1) subject to d = Dθ. (6)

We can approximate equation (6) by adding one quadratic penalty term, as done in [16], to obtain the unconstrained problem

min_{θ, d} ‖d‖₁ − λ(θᵀCθ − 1) + (μ/2)‖d − Dθ‖₂², (7)

where μ is a positive penalization parameter. Finally, we strictly enforce the constraint by applying the split Bregman iteration [17] with a Bregman variable b, which yields the subproblems

(θ^{k+1}, d^{k+1}) = argmin_{θ, d} ‖d‖₁ − λ(θᵀCθ − 1) + (μ/2)‖d − Dθ − b^k‖₂²,
b^{k+1} = b^k + Dθ^{k+1} − d^{k+1}. (8)

In order to further simplify the subproblems, we split equation (8) into two separate ones and investigate them one by one.
(1) The θ-related subproblem is

θ^{k+1} = argmin_θ −λ(θᵀCθ − 1) + (μ/2)‖d^k − Dθ − b^k‖₂².

Note that it is a least-squares problem; setting the gradient to zero gives the closed-form condition

(μDᵀD − 2λC) θ^{k+1} = μDᵀ(d^k − b^k).

(2) The d-related subproblem is

d^{k+1} = argmin_d ‖d‖₁ + (μ/2)‖d − Dθ^{k+1} − b^k‖₂².

This type of problem can be effectively solved using the standard soft-threshold formula presented in [16]:

d^{k+1} = shrink(Dθ^{k+1} + b^k, 1/μ),

where shrink(v, τ) = sign(v) · max(|v| − τ, 0) is applied component-wise.
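The shrinkage operator has a one-line implementation; a minimal sketch:

```python
import numpy as np

def shrink(v, tau):
    """Component-wise soft-thresholding: sign(v) * max(|v| - tau, 0)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)
```

Entries with magnitude below the threshold τ are set exactly to zero, which is what makes the d-update robust to large residuals.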

The complete procedure for the minimization problem in equation (5) with the split Bregman iteration is summarized in Algorithm 1, where K_max is the maximum number of iterations. To keep the procedure from drifting to an invalid solution, the iteration stops either when the algebraic distance computed at the (k + 1)-th iteration exceeds that of the k-th iteration or when the number of iterations reaches the maximum.

(1) Set k = 0; initialize θ⁰ (e.g., with the least-squares solution), d⁰ = Dθ⁰, and b⁰ = 0
(2) while the stopping criterion is not met and k < K_max do
(3)  Solve the θ-subproblem: θ^{k+1} = (μDᵀD − 2λC)⁻¹ μDᵀ(d^k − b^k)
(4)  Solve the d-subproblem: d^{k+1} = shrink(Dθ^{k+1} + b^k, 1/μ)
(5)  Update the Bregman variable: b^{k+1} = b^k + Dθ^{k+1} − d^{k+1}
(6)  Set k := k + 1
(7) end while
(8) return θ^k
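The loop can be sketched as follows. This is our own simplified reading, not the authors' implementation: the multiplier term is replaced by rescaling θ to satisfy the ellipse constraint after each least-squares update, the initial guess is the plain least-squares (SVD) solution, and a guard stops the loop if an update degenerates.

```python
import numpy as np

def shrink(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fit_ellipse_l1(x, y, mu=1.0, n_iter=300):
    """Split Bregman sketch for min |D theta|_1 s.t. 4ac - b^2 = 1.
    Simplification (ours): enforce the constraint by rescaling theta
    instead of carrying the multiplier through the theta-subproblem."""
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    Cm = np.zeros((6, 6)); Cm[0, 2] = Cm[2, 0] = 2.0; Cm[1, 1] = -1.0
    # initial guess: direct least-squares direction (smallest right singular vector)
    theta = np.linalg.svd(D)[2][-1]
    s = theta @ Cm @ theta
    if s <= 0:
        raise ValueError("least-squares initializer is not an ellipse")
    theta = theta / np.sqrt(s)
    d = D @ theta
    b = np.zeros_like(d)
    for _ in range(n_iter):
        # theta-step: least squares on |d - D theta - b|^2, then renormalize
        theta_new = np.linalg.lstsq(D, d - b, rcond=None)[0]
        s = theta_new @ Cm @ theta_new
        if s <= 1e-12:          # degenerate update: keep the current estimate
            break
        theta = theta_new / np.sqrt(s)
        Dt = D @ theta
        d = shrink(Dt + b, 1.0 / mu)   # d-step: soft threshold
        b = b + Dt - d                 # Bregman update
    return theta
```

The returned θ satisfies 4ac − b² = 1 (up to sign of the vector, which does not change the conic), and the ellipse centre can be read off by solving the 2 × 2 gradient system of the conic.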

With the rapid development of variational partial differential equation (PDE) methods in image processing, ℓ1-regularized PDE-based methods have gained more and more attention and have turned out to be promising for various applications such as denoising [18], destriping [19], deblurring [20, 21], compressive sensing [22], and object tracking [23]. Although the ℓ1-regularized model typically achieves very competitive performance, its parameters must be selected carefully, or it may fall into a local extremum and produce an undesired outcome. Almost all PDE-based methods suffer from this defect, which makes them difficult to promote in industrial environments. The classical ellipse fitting [3, 24] aims to minimize the following energy functional:

E₂(θ) = ‖Dθ‖₂² subject to θᵀCθ = 1.

Since this is a least-squares problem, setting the gradient of its Lagrangian to zero yields the generalized eigenvalue problem

DᵀDθ = λ̃Cθ, θᵀCθ = 1,

whose solution is the generalized eigenvector of the pair (DᵀD, C) associated with the appropriate eigenvalue [3].

Comparing the closed-form solution of the θ-subproblem with this least-squares solution, the two are close when the initialization satisfies d⁰ = Dθ⁰ and b⁰ = 0; the classical least-squares (or HyperLS) estimate therefore serves as the initial guess θ⁰. After this choice, only the empirical parameter μ remains undetermined. When the ellipse parameter θ = [a, b, c, d, e, f]ᵀ has been solved, it can be normalized so that 4ac − b² = 1. Then its centroid (x₀, y₀), major semiaxis A, minor semiaxis B, and orientation angle φ can be determined by [25]:

x₀ = (2cd − be)/(b² − 4ac),  y₀ = (2ae − bd)/(b² − 4ac),  φ = (1/2) arctan(b/(a − c)),

and the semi-axes follow from the eigenvalues 0 < λ₁ ≤ λ₂ of the matrix [[a, b/2], [b/2, c]] as A = sqrt(−f₀/λ₁) and B = sqrt(−f₀/λ₂), where f₀ = f + (d x₀ + e y₀)/2 is the value of the conic at the centroid.
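These standard conic-to-geometry conversions can be sketched numerically (our own helper, assuming 4ac − b² > 0 so that the conic is an ellipse):

```python
import numpy as np

def conic_to_geometric(theta):
    """(a, b, c, d, e, f) of a x^2 + b xy + c y^2 + d x + e y + f = 0
    -> centre (x0, y0), major/minor semiaxes, angle of the major axis."""
    a, b, c, d, e, f = theta
    # centre: where the gradient of the conic vanishes
    x0, y0 = np.linalg.solve([[2 * a, b], [b, 2 * c]], [-d, -e])
    f0 = f + 0.5 * (d * x0 + e * y0)             # conic value at the centre
    M = np.array([[a, b / 2], [b / 2, c]])       # quadratic part
    evals, evecs = np.linalg.eigh(M)
    axes = np.sqrt(-f0 / evals)                  # semi-axis lengths
    i_major = int(np.argmax(axes))
    phi = np.arctan2(evecs[1, i_major], evecs[0, i_major])
    return x0, y0, axes[i_major], axes[1 - i_major], phi
```

The eigen-decomposition route avoids the sign ambiguities of the closed-form arctan expression: the major axis is simply the eigenvector of the smaller eigenvalue of the quadratic part.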

For two ellipses E and F, their relative error can be defined as

error(E, F) = 1 − area(E ∩ F)/area(E ∪ F),

where area(E ∩ F) denotes the area covered by both ellipses simultaneously (the intersection of E and F) and area(E ∪ F) denotes the total area covered by E and F (their union).
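This overlap measure (one minus the Jaccard index of the two interiors) can be approximated by rasterizing both ellipses on a common grid; a sketch with our own parameterization (centre, semiaxes, angle):

```python
import numpy as np

def inside(xg, yg, ell):
    """Boolean mask of grid points inside ellipse ell = (x0, y0, A, B, phi)."""
    x0, y0, A, B, phi = ell
    X, Y = xg - x0, yg - y0
    u = np.cos(phi) * X + np.sin(phi) * Y
    v = -np.sin(phi) * X + np.cos(phi) * Y
    return (u / A) ** 2 + (v / B) ** 2 <= 1.0

def relative_error(e1, e2, n=400):
    """1 - area(intersection)/area(union), estimated on an n-by-n grid."""
    r = 1.1 * max(e1[2], e2[2])   # half-width covering both ellipses (A >= B)
    xc = np.linspace(min(e1[0], e2[0]) - r, max(e1[0], e2[0]) + r, n)
    yc = np.linspace(min(e1[1], e2[1]) - r, max(e1[1], e2[1]) + r, n)
    xg, yg = np.meshgrid(xc, yc)
    m1, m2 = inside(xg, yg, e1), inside(xg, yg, e2)
    return 1.0 - np.logical_and(m1, m2).sum() / np.logical_or(m1, m2).sum()
```

For identical ellipses the error is exactly 0; for two concentric circles of radii 1 and 2, the intersection is one quarter of the union, so the error approaches 0.75 as the grid is refined.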

3. Experimental Results

In this section, experiments are conducted to verify the robustness of the proposed method in suppressing the influence of noise in ellipse fitting, with both synthetic and natural data. Several state-of-the-art ellipse fitting algorithms are implemented for performance comparison: Fitzgibbon's ℓ2-norm-based direct least-squares fitting (DLS) [3], hyper-least-squares fitting (HyperLS) [4], guaranteed ellipse fitting with the Sampson distance (SampsonLS) [6], ElliFit [26], and the RANSAC method [2]. Unless otherwise stated, all parameters are kept the same across experiments. The scale normalization parameter of HyperLS and the penalty parameter μ of the proposed method are fixed throughout. The number of samples, the maximum number of iterations, and the outlier-elimination threshold for the RANSAC method are 5, 1,000, and 2, respectively, as the authors advise. We also measured the average running time of all tested algorithms, as shown in Table 1.


Method | DLS | HyperLS | SampsonLS | ElliFit | RANSAC | Proposed (μ₁) | Proposed (μ₂)
Average time | 9.26 | 15.96 | 153.22 | 7.34 | 25.56 | 37.61 | 27.61

3.1. Ellipse Fitting with Synthetic Data

In this section, experiments on three types of simulated ellipse data, including Gaussian noise, Laplace outliers, and ellipse fragments, are conducted to evaluate the performance of the proposed method.

(1) In the first experiment, 256 data points were sampled from an ellipse at uniformly spaced angles. The noise is additive Gaussian with zero mean and standard deviation σ, where the noise level is a percentage of the length of the ellipse minor semiaxis. For this kind of degradation, 100 of the data points are polluted. Figure 1(a) gives an example of simulated ellipse points corrupted by Gaussian noise. For 100 trials at each noise level, the mean and standard deviation of the relative error and of the estimated ellipse parameters are reported in Table 2. For each noise level, the HyperLS method obtains the highest accuracy, and the proposed method is suboptimal: it outperforms the DLS method by about 30% while losing about 4% relative to HyperLS. Figures 2(a) and 2(b) show the mean and standard deviation of the relative error of the tested approaches, respectively. Although the DLS method exhibits better stability than the proposed method, our fitting model still achieves an obvious performance improvement.

(2) In the second experiment, the ellipse parameters are unchanged, but the noise type is Laplace, with the same mean and standard deviation as in the first experiment (in this case, 64 of the 256 points are selected at random and polluted by Laplace noise; an example is shown in Figure 1(b)). To illustrate the convergence behavior of the proposed algorithm, Figure 3 plots the functional energy of equation (5) against the number of iterations. From the figure, the proposed method works well over a wide parameter space; as μ increases, convergence becomes gradually slower. Empirically, when the number of iterations approaches 300, a satisfactory output is obtained. For 100 trials at each Laplace noise level, the mean and standard deviation of the relative error and of the estimated ellipse parameters are reported in Table 3. The proposed method achieves the best performance at every noise level except one, and even outperforms the robust RANSAC method. Figures 2(c) and 2(d) show the mean and standard deviation of the relative error, from which it can be seen that RANSAC is very robust to Laplace outliers at all noise levels; meanwhile, the proposed method and RANSAC are far superior to the other algorithms in this case.

(3) In the third experiment, a point on an ellipse is characterized by its angle with respect to the canonical Cartesian coordinate system, and eleven angular groups (ellipse fragments) are formed, as shown in Figure 4. For each group, 100 ellipses were generated, and 256 points were sampled from each ellipse. Two kinds of noise are simulated, Gaussian and Laplace, both with zero mean and the same standard deviation. Figures 5(a) and 5(b) give the mean and standard deviation of the relative error for the Gaussian-polluted ellipse fragments, while Figures 5(c) and 5(d) correspond to the Laplace counterpart. From Figure 5, for the Gaussian ellipse fragments, SampsonLS and the proposed method obtain the lowest relative error, while for Laplace outliers, the proposed method and the RANSAC method achieve a significant gain in accuracy, as expected. For both Gaussian and Laplace noise, the DLS and ElliFit methods show good stability.
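The synthetic degradations above can be reproduced along these lines (a sketch with our own names; following the text, σ is a percentage of the minor semiaxis and only a subset of the points is polluted):

```python
import numpy as np

def sample_ellipse(x0, y0, A, B, phi, n=256):
    """n points sampled at uniformly spaced angles on the ellipse."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    u, v = A * np.cos(t), B * np.sin(t)
    return (x0 + np.cos(phi) * u - np.sin(phi) * v,
            y0 + np.sin(phi) * u + np.cos(phi) * v)

def corrupt(x, y, sigma, n_bad, dist="gaussian", seed=None):
    """Perturb n_bad randomly chosen points with zero-mean noise of std sigma."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(x), size=n_bad, replace=False)
    if dist == "gaussian":
        dx, dy = rng.normal(0.0, sigma, (2, n_bad))
    else:  # Laplace noise with matching std: scale = sigma / sqrt(2)
        dx, dy = rng.laplace(0.0, sigma / np.sqrt(2), (2, n_bad))
    x, y = x.copy(), y.copy()
    x[idx] += dx
    y[idx] += dy
    return x, y
```

For example, a 5% noise level on an ellipse with minor semiaxis 50 corresponds to sigma = 0.05 * 50.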


Method | σ | Relative error | x₀ | y₀ | A | B | φ (°)   (each cell: mean/standard deviation)

DLS | 0.5 | 0.23%/0.18 | 72.00/0.04 | 64.01/0.05 | 70.01/0.05 | 50.00/0.05 | 30.00/0.14
 | 5 | 2.43%/0.78 | 71.96/0.46 | 64.00/0.44 | 69.85/0.57 | 50.85/0.53 | 30.24/1.33
 | 10 | 6.53%/1.61 | 71.92/0.88 | 64.01/1.00 | 70.05/0.98 | 52.94/0.99 | 29.40/2.86
 | 15 | 12.61%/2.23 | 72.15/1.46 | 63.84/1.68 | 71.43/1.59 | 55.80/1.49 | 29.23/4.37
 | 20 | 19.61%/2.93 | 72.11/1.92 | 64.26/1.97 | 73.35/1.85 | 59.33/2.11 | 30.50/6.26
 | 25 | 27.02%/3.38 | 72.26/2.90 | 64.29/2.74 | 76.51/2.23 | 62.75/2.44 | 31.04/8.88

HyperLS | 0.5 | 0.21%/0.16 | 72.00/0.04 | 64.01/0.05 | 70.01/0.05 | 49.99/0.05 | 30.00/0.14
 | 5 | 2.03%/0.65 | 71.96/0.46 | 64.00/0.45 | 69.96/0.61 | 50.05/0.54 | 30.25/1.38
 | 10 | 4.28%/1.38 | 71.91/0.92 | 64.00/1.04 | 70.14/1.30 | 50.11/1.16 | 29.27/3.24
 | 15 | 7.49%/2.53 | 72.17/1.58 | 63.81/1.85 | 71.35/2.69 | 50.11/1.93 | 29.02/5.68
 | 20 | 11.56%/3.49 | 72.12/2.25 | 64.29/2.36 | 73.18/4.22 | 50.73/3.30 | 30.64/9.65
 | 25 | 17.69%/5.82 | 72.23/4.05 | 64.31/3.46 | 78.44/6.89 | 50.48/3.84 | 30.50/14.51

SampsonLS | 0.5 | 0.23%/0.16 | 71.99/0.04 | 64.01/0.05 | 70.01/0.05 | 49.99/0.05 | 29.99/0.14
 | 5 | 2.40%/0.78 | 71.96/0.46 | 63.99/0.45 | 69.71/0.55 | 49.37/0.56 | 30.27/1.28
 | 10 | 7.98%/2.40 | 71.89/0.98 | 64.09/1.09 | 69.12/1.31 | 47.03/1.53 | 29.67/3.02
 | 15 | 20.51%/5.62 | 71.94/2.29 | 64.02/2.12 | 69.76/3.06 | 40.79/3.50 | 30.22/6.54
 | 20 | 49.30%/12.95 | 73.19/6.58 | 64.12/4.53 | 79.35/10.19 | 26.23/10.93 | 29.27/10.09
 | 25 | 69.01%/11.07 | 72.21/27.81 | 53.12/96.88 | 106.86/100.53 | 23.59/33.39 | 30.32/14.70

ElliFit | 0.5 | 0.23%/0.14 | 72.00/0.04 | 64.01/0.05 | 70.02/0.05 | 49.99/0.05 | 29.97/0.14
 | 5 | 3.36%/0.80 | 71.96/0.47 | 64.00/0.44 | 71.33/0.61 | 49.84/0.52 | 27.10/1.25
 | 10 | 11.56%/1.77 | 71.91/1.01 | 64.00/0.96 | 76.42/1.41 | 49.25/1.06 | 20.30/2.18
 | 15 | 22.31%/2.57 | 72.28/1.83 | 63.90/1.63 | 86.60/2.96 | 48.76/1.35 | 15.35/2.88
 | 20 | 32.76%/3.60 | 72.40/3.40 | 64.17/2.02 | 100.37/6.03 | 49.30/1.94 | 12.50/3.27
 | 25 | 44.58%/4.89 | 72.30/6.97 | 64.18/2.71 | 121.54/11.85 | 50.32/2.31 | 10.65/3.53

RANSAC | 0.5 | 1.22%/0.38 | 71.99/0.27 | 63.99/0.27 | 69.97/0.36 | 49.96/0.34 | 30.08/0.73
 | 5 | 6.68%/2.35 | 72.18/1.55 | 64.32/1.45 | 70.68/2.21 | 50.21/1.82 | 30.39/4.96
 | 10 | 14.49%/4.77 | 72.04/3.58 | 64.23/3.64 | 73.12/4.84 | 50.10/4.28 | 29.10/11.67
 | 15 | 25.43%/11.05 | 71.89/7.02 | 64.29/8.06 | 77.11/11.53 | 49.59/12.49 | 30.50/17.60
 | 20 | 35.94%/15.77 | 73.63/22.75 | 66.36/11.86 | 86.19/29.44 | 51.30/17.03 | 30.57/21.58
 | 25 | 51.86%/20.75 | 58.89/97.75 | 50.42/129.44 | 95.55/73.27 | 88.45/176.15 | 33.40/22.94

Proposed (μ₁) | 0.5 | 0.26%/0.15 | 72.00/0.05 | 64.01/0.06 | 70.01/0.07 | 50.00/0.06 | 29.99/0.17
 | 5 | 2.96%/1.31 | 72.09/0.66 | 64.18/0.58 | 69.56/0.83 | 50.77/0.75 | 29.95/2.24
 | 10 | 6.63%/1.97 | 72.18/1.37 | 64.21/1.02 | 69.37/1.54 | 52.24/1.45 | 28.29/4.34
 | 15 | 10.77%/3.00 | 72.27/1.51 | 63.95/1.65 | 70.72/1.69 | 54.85/1.83 | 29.03/4.52
 | 20 | 16.82%/3.79 | 72.31/1.94 | 64.35/1.98 | 72.26/2.00 | 58.00/2.33 | 30.27/5.89
 | 25 | 23.89%/3.97 | 72.32/2.76 | 64.33/2.48 | 75.06/2.26 | 61.31/2.52 | 30.39/8.10

Proposed (μ₂) | 0.5 | 0.39%/0.25 | 72.01/0.09 | 64.01/0.09 | 70.00/0.13 | 50.00/0.12 | 30.02/0.28
 | 5 | 2.34%/0.95 | 72.05/0.55 | 64.09/0.50 | 69.81/0.67 | 49.89/0.65 | 30.05/1.79
 | 10 | 4.84%/1.77 | 72.14/1.19 | 64.15/1.06 | 69.94/1.43 | 49.65/1.25 | 28.74/3.72
 | 15 | 7.78%/2.59 | 72.27/1.59 | 63.98/1.96 | 71.15/2.62 | 49.62/2.21 | 28.93/5.65
 | 20 | 11.75%/3.80 | 72.32/2.45 | 64.47/2.28 | 72.94/4.35 | 49.95/3.68 | 30.47/9.43
 | 25 | 17.87%/6.39 | 72.63/3.61 | 64.61/3.27 | 78.39/7.08 | 48.95/4.53 | 30.40/14.06


Method | σ | Relative error | x₀ | y₀ | A | B | φ (°)   (each cell: mean/standard deviation)

DLS | 0.5 | 0.16%/0.12 | 72.00/0.02 | 64.00/0.02 | 70.00/0.03 | 50.00/0.03 | 30.01/0.06
 | 5 | 1.18%/0.47 | 71.99/0.29 | 63.98/0.24 | 69.99/0.34 | 50.23/0.29 | 29.85/0.79
 | 10 | 2.54%/0.99 | 71.91/0.54 | 63.96/0.47 | 69.99/0.67 | 50.67/0.57 | 29.92/2.14
 | 15 | 5.43%/2.35 | 72.06/1.02 | 64.01/1.19 | 70.00/1.35 | 51.80/1.38 | 30.07/4.90
 | 20 | 7.78%/3.16 | 72.00/1.50 | 64.04/1.64 | 70.35/2.14 | 52.80/1.97 | 30.94/6.14
 | 25 | 11.96%/4.21 | 71.71/2.31 | 64.25/2.51 | 70.87/2.58 | 54.54/2.15 | 32.08/10.98

HyperLS | 0.5 | 0.15%/0.12 | 72.00/0.02 | 64.00/0.02 | 70.00/0.03 | 50.00/0.03 | 30.01/0.06
 | 5 | 1.10%/0.49 | 71.99/0.29 | 63.98/0.24 | 70.03/0.35 | 50.02/0.28 | 29.85/0.80
 | 10 | 2.40%/0.99 | 71.91/0.54 | 63.96/0.48 | 70.12/0.74 | 49.87/0.55 | 29.92/2.27
 | 15 | 5.28%/3.03 | 72.05/1.06 | 63.99/1.30 | 70.25/1.97 | 49.86/1.36 | 30.10/5.65
 | 20 | 7.96%/4.48 | 71.99/1.69 | 64.02/1.83 | 70.98/4.08 | 49.42/2.32 | 31.33/7.77
 | 25 | 13.32%/9.16 | 71.80/2.74 | 64.54/4.51 | 72.98/8.74 | 48.96/3.04 | 32.77/14.60

SampsonLS | 0.5 | 0.17%/0.12 | 72.00/0.02 | 64.00/0.02 | 70.00/0.03 | 50.00/0.03 | 30.01/0.06
 | 5 | 1.05%/0.48 | 72.01/0.29 | 64.00/0.23 | 69.97/0.33 | 49.85/0.28 | 29.90/0.65
 | 10 | 3.08%/2.36 | 72.00/0.80 | 63.93/0.91 | 69.78/0.77 | 49.12/1.11 | 30.21/1.88
 | 15 | 6.69%/4.11 | 72.14/1.75 | 64.36/2.05 | 69.54/1.31 | 47.93/1.81 | 30.01/3.66
 | 20 | 10.34%/5.60 | 71.91/2.38 | 64.12/2.77 | 69.88/1.75 | 46.28/2.76 | 30.47/5.11
 | 25 | 15.09%/7.77 | 71.38/2.51 | 64.10/3.50 | 70.56/2.58 | 43.80/4.39 | 29.15/6.23

ElliFit | 0.5 | 0.13%/0.11 | 72.00/0.02 | 64.00/0.02 | 70.00/0.03 | 49.99/0.03 | 30.00/0.06
 | 5 | 1.37%/0.55 | 71.99/0.29 | 63.98/0.24 | 70.37/0.37 | 49.96/0.28 | 28.98/0.80
 | 10 | 3.72%/1.44 | 71.91/0.55 | 63.96/0.47 | 71.48/0.80 | 49.67/0.57 | 26.89/1.99
 | 15 | 8.35%/2.77 | 72.08/1.10 | 64.01/1.17 | 73.84/1.93 | 49.29/1.16 | 23.15/3.38
 | 20 | 13.41%/5.03 | 72.05/1.73 | 64.05/1.70 | 77.93/5.07 | 48.62/1.73 | 20.59/4.49
 | 25 | 20.12%/7.31 | 71.64/3.02 | 64.27/2.45 | 84.23/8.01 | 48.32/2.06 | 18.25/8.54

RANSAC | 0.5 | 0.66%/0.51 | 71.99/0.22 | 64.01/0.20 | 69.98/0.26 | 49.98/0.23 | 30.00/0.52
 | 5 | 1.39%/0.63 | 72.04/0.44 | 64.04/0.36 | 70.02/0.51 | 50.02/0.44 | 29.95/0.97
 | 10 | 1.59%/0.70 | 72.03/0.47 | 64.05/0.42 | 69.99/0.50 | 49.97/0.48 | 30.05/1.16
 | 15 | 1.57%/0.66 | 72.07/0.47 | 63.98/0.41 | 69.99/0.47 | 49.99/0.51 | 29.88/1.09
 | 20 | 1.61%/0.78 | 72.00/0.48 | 64.05/0.45 | 70.05/0.52 | 49.97/0.53 | 29.95/1.00
 | 25 | 1.56%/0.78 | 72.00/0.45 | 64.05/0.41 | 70.02/0.51 | 49.96/0.51 | 30.08/1.08

Proposed (μ₁) | 0.5 | 0.14%/0.12 | 72.00/0.00 | 64.00/0.00 | 70.00/0.01 | 50.00/0.00 | 30.00/0.01
 | 5 | 0.12%/0.10 | 72.00/0.00 | 64.00/0.00 | 70.00/0.01 | 50.00/0.00 | 30.00/0.01
 | 10 | 0.18%/0.12 | 72.00/0.01 | 64.00/0.00 | 70.00/0.01 | 50.00/0.00 | 30.00/0.01
 | 15 | 0.17%/0.22 | 72.00/0.01 | 63.99/0.09 | 70.00/0.01 | 50.01/0.09 | 30.00/0.09
 | 20 | 0.20%/0.39 | 72.00/0.02 | 64.02/0.13 | 69.99/0.06 | 50.03/0.18 | 29.98/0.13
 | 25 | 1.62%/4.44 | 72.07/0.81 | 64.29/1.50 | 70.04/0.95 | 50.57/1.63 | 29.50/3.73

Proposed (μ₂) | 0.5 | 0.42%/0.24 | 72.00/0.10 | 64.01/0.09 | 69.99/0.11 | 50.01/0.11 | 30.03/0.26
 | 5 | 0.30%/0.20 | 72.00/0.08 | 64.00/0.06 | 70.00/0.09 | 50.00/0.08 | 29.98/0.19
 | 10 | 0.29%/0.17 | 72.00/0.08 | 64.01/0.07 | 69.98/0.11 | 49.99/0.08 | 29.96/0.20
 | 15 | 0.38%/0.33 | 72.02/0.10 | 64.02/0.09 | 69.94/0.16 | 49.99/0.09 | 29.87/0.37
 | 20 | 0.70%/0.98 | 72.05/0.18 | 64.10/0.28 | 69.85/0.45 | 50.01/0.30 | 29.71/0.90
 | 25 | 1.44%/3.17 | 72.08/0.87 | 64.18/0.84 | 69.74/1.14 | 50.07/1.07 | 29.22/3.44

3.2. Ellipse Fitting with Natural Data

Selective laser melting is one of the crucial steps in additive manufacturing (AM), and the contour of the fusion area plays a very important role in geometric quality control for AM [27]. In our project, the level-set method [28] is used to extract the profile of the part. Figure 6(a) is a frame captured with a CCD camera in the fusion area. To highlight the visual quality, a subimage is zoomed in Figure 6(b), and its contour extraction results are shown in Figures 6(c) and 6(d). It can easily be seen from Figure 6(c) that several bumps appear in the outline; as a consequence, the ellipse center and major semiaxis estimated by every method except our fitting model show an obvious offset toward the outliers, as shown in Figure 7. It can be concluded that, compared with the ℓ2-norm-based methods, the proposed ℓ1-norm-based model may cause a slight performance degradation under Gaussian noise but is a much more robust estimator for Laplacian and Laplacian-like abrupt degradations.

4. Conclusions

In conclusion, a novel direct ℓ1-norm-based ellipse fitting model is presented in this study. It works well not only under Gaussian noise but also under Laplacian degradation. Comparison experiments suggest that the proposed method can mitigate a wide variety of artifacts regardless of the noise type, which remains a challenging issue for other state-of-the-art algorithms. Benefiting from the standard least-squares and HyperLS estimators as initial guesses and from the split Bregman iteration, its convergence behavior is also tested to verify its efficacy.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This research was supported by the National Key R&D Program of China (Grant no. SQ2018YFB110170), the Hubei Technology Innovation Project (major project) (Grant no. 2019AAA008), and the China Postdoctoral Science Foundation under Grant 2017M612453.

References

  1. K. Zhong, "Pre-calibration-free 3D shape measurement method based on fringe projection," Optics Express, vol. 24, p. 13, 2016.
  2. K. Kanatani, Y. Sugaya, and Y. Kanazawa, "Robust fitting," in Ellipse Fitting for Computer Vision: Implementation and Applications, G. Medioni and S. Dickinson, Eds., pp. 31–36, Morgan & Claypool, San Rafael, CA, USA, 2016.
  3. A. Fitzgibbon, M. Pilu, and R. B. Fisher, "Direct least square fitting of ellipses," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, no. 5, pp. 476–480, 1999.
  4. K. Kanatani and P. Rangarajan, "Hyper least squares fitting of circles and ellipses," Computational Statistics & Data Analysis, vol. 55, no. 6, pp. 2197–2208, 2011.
  5. J. Liang, M. Zhang, D. Liu et al., "Robust ellipse fitting based on sparse combination of data points," IEEE Transactions on Image Processing, vol. 22, no. 6, pp. 2207–2218, 2013.
  6. Z. L. Szpak, W. Chojnacki, and A. van den Hengel, "Guaranteed ellipse fitting with the Sampson distance," in Proceedings of the European Conference on Computer Vision (ECCV), pp. 87–100, 2012.
  7. Z. L. Szpak, W. Chojnacki, and A. van den Hengel, "Guaranteed ellipse fitting with a confidence region and an uncertainty measure for centre, axes, and orientation," Journal of Mathematical Imaging and Vision, vol. 52, no. 2, pp. 173–199, 2015.
  8. M. Kesäniemi and K. Virtanen, "Direct least square fitting of hyperellipsoids," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 1, pp. 63–76, 2017.
  9. J. Liang, Y. Wang, and X. Zeng, "Robust ellipse fitting via half-quadratic and semidefinite relaxation optimization," IEEE Transactions on Image Processing, vol. 24, no. 11, pp. 4276–4286, 2015.
  10. S. Mulleti and C. S. Seelamantula, "Ellipse fitting using the finite rate of innovation sampling principle," IEEE Transactions on Image Processing, vol. 34, p. 1, 2015.
  11. A. Reza and A. S. Sengupta, "Least square ellipsoid fitting using iterative orthogonal transformations," Applied Mathematics and Computation, vol. 314, pp. 349–359, 2017.
  12. J. Liang, "Robust ellipse fitting via alternating direction method of multipliers," Signal Processing, vol. 34, 2019.
  13. Z. Shi, H. Wang, C. S. Leung et al., "Robust real-time ellipse fitting based on Lagrange programming neural network and locally competitive algorithm," Neurocomputing, vol. 23, 2020.
  14. G. Zhou, "Robust destriping of MODIS and hyperspectral data using a hybrid unidirectional total variation model," Optik – International Journal for Light and Electron Optics, vol. 126, no. 7-8, 2015.
  15. T. Goldstein and S. Osher, "The split Bregman method for L1-regularized problems," SIAM Journal on Imaging Sciences, vol. 2, no. 2, pp. 323–343, 2009.
  16. Y. Wang, J. Yang, W. Yin, and Y. Zhang, "A new alternating minimization algorithm for total variation image reconstruction," SIAM Journal on Imaging Sciences, vol. 1, no. 3, pp. 248–272, 2008.
  17. Y. Chang, L. Yan, H. Fang, and C. Luo, "Anisotropic spectral-spatial total variation model for multispectral remote sensing image destriping," IEEE Transactions on Image Processing, vol. 24, no. 6, pp. 1852–1866, 2015.
  18. Z. Qin, D. Goldfarb, and S. Ma, "An alternating direction method for total variation denoising," Optimization Methods and Software, vol. 30, no. 3, pp. 594–615, 2015.
  19. M. Bouali and S. Ladjal, "Toward optimal destriping of MODIS data using a unidirectional variational model," IEEE Transactions on Geoscience and Remote Sensing, vol. 49, no. 8, pp. 2924–2935, 2011.
  20. L. Yan, H. Fang, and S. Zhong, "Blind image deconvolution with spatially adaptive total variation regularization," Optics Letters, vol. 37, no. 14, pp. 2778–2780, 2012.
  21. H. Fang, L. Yan, H. Liu, and Y. Chang, "Blind Poissonian images deconvolution with framelet regularization," Optics Letters, vol. 38, no. 4, pp. 389–391, 2013.
  22. J. Yang and Y. Zhang, "Alternating direction algorithms for ℓ1-problems in compressive sensing," SIAM Journal on Scientific Computing, vol. 33, no. 1, pp. 250–278, 2011.
  23. X. Mei and H. Ling, "Robust visual tracking using ℓ1 minimization," in Proceedings of the 2009 IEEE 12th International Conference on Computer Vision, IEEE, 2009.
  24. R. Halir and J. Flusser, "Numerically stable direct least squares fitting of ellipses," in Proceedings of the 6th International Conference in Central Europe on Computer Graphics and Visualization (WSCG), vol. 98, 1998.
  25. P. L. Rosin, "Further five-point fit ellipse fitting," Graphical Models and Image Processing, vol. 61, no. 5, pp. 245–259, 1999.
  26. D. K. Prasad, M. K. H. Leung, and C. Quek, "ElliFit: an unconstrained, non-iterative, least squares based geometric ellipse fitting method," Pattern Recognition, vol. 46, no. 5, pp. 1449–1465, 2013.
  27. Z. Li, X. Liu, S. Wen et al., "In situ 3D monitoring of geometric signatures in the powder-bed-fusion additive manufacturing process via vision sensing methods," Sensors, vol. 18, no. 4, p. 1180, 2018.
  28. C. Li, R. Huang, Z. Ding, J. C. Gatenby, D. N. Metaxas, and J. C. Gore, "A level set method for image segmentation in the presence of intensity inhomogeneities with application to MRI," IEEE Transactions on Image Processing, vol. 20, no. 7, pp. 2007–2016, 2011.

Copyright © 2020 Gang Zhou et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

