Abstract

The continuous monitoring of the particle size distribution in particulate processes with suspensions or emulsions requires measurement techniques that can be used as in situ devices, in contrast to ex situ or laboratory methods. In this context, the application of different numerical inversion algorithms to the evaluation of turbidimetric spectral measurements is investigated with respect to the determination of the particle size distribution of polystyrene suspensions. A modified regularization concept consisting of a Twomey-Phillips-Regularization with an integrated nonnegative constraint and a modified L-curve criterion for the selection of the regularization parameter is used. The particle size (i.e., particle diameter) of polystyrene suspensions in the range 0.03–3 µm was validated via dynamic light scattering and differential centrifugal sedimentation and compared to the particle size distribution retrieved from the inverted turbidimetry measurements.

1. Introduction

The characterization of solid and liquid (sub)micron-sized particles, for example, in suspensions or emulsions, is of interest for industry and research. The industrial usages and applications are manifold, for example, in the pharmaceutical, biotechnological, and machining industries. Hereby, the measurement technique needs to be reliable, rapid, in situ, and noninvasive [1]. One promising optical measurement technique for this application is UV-Vis spectroscopy (also known as multiwavelength spectroscopy, turbidity spectroscopy, or turbidimetry), which has been widely applied in scientific and industrial applications [2–4] for a broad range of different disperse systems [5–7]. For instance, turbidimetry proved to be capable of covering the particle size (i.e., diameter) range of 0.02–20 μm for NIST-traceable polystyrene latex standards [8].

UV-Vis spectroscopy is an indirect optical characterization method [9] which requires numerical inversion algorithms to determine the particle size (distribution). The underlying continuous Fredholm integral equation of the first kind, relating the measured turbidity to the particle size distribution function, is inherently ill-posed because no unique solution exists. Additionally, for the reconstruction of the size distribution from turbidity measurements, the continuous input signal is discretized, which leads to ill-conditioned matrices. Therefore, small measurement errors lead to large errors and unacceptable solutions, which may be remedied, for example, by an a priori restriction of the shape or type of the particle size distribution [10, 11]. Another critical problem of the turbidimetry method is caused by oscillations of the kernel function due to the dependence of the Mie-extinction on the Mie-parameter [12, 13]. The integral operators based on continuous models like the Wiener, Hammerstein, or Fredholm operator are treated with iterative approaches like the Landweber-Bialy, projected Landweber, and Chahine methods, as well as direct methods like nonnegative least squares, Active-Set, and the Twomey-Phillips-Regularization [14]. A general overview of inverse problems is provided by Hansen [15] and Ghosh Roy and Couchman [16], whereas a comparison of different conventionally applied inversion algorithms is given by Kandlikar and Ramachandran [17], Vargas-Ubera et al. [18], and Riefler and Wriedt [19].
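
The effect of this ill-conditioning can be illustrated with a small numerical sketch: discretizing a smooth kernel (here a Gaussian stand-in, not the actual Mie extinction kernel of Section 2) yields a matrix with a very large condition number, so that even tiny data noise destroys a naive least-squares inversion. All names and values below are illustrative.

```matlab
n = 50;
[s, t] = meshgrid(linspace(0, 1, n));
K = exp(-(s - t).^2 / 0.01);                              % discretized smooth (smoothing) kernel
fprintf('condition number of K: %.2e\n', cond(K));

qTrue    = exp(-((linspace(0, 1, n)' - 0.5) / 0.1).^2);   % synthetic "true" distribution
tauNoisy = K * qTrue + 1e-6 * randn(n, 1);                % forward model plus tiny noise
qNaive   = K \ tauNoisy;                                  % naive inversion amplifies the noise
fprintf('relative error of naive inversion: %.2e\n', ...
    norm(qNaive - qTrue) / norm(qTrue));
```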

Iterative methods with an implemented nonnegative constraint are characterized by a good numerical stability because they avoid the singular value decomposition used in direct schemes such as the Twomey-Phillips-Regularization (TPR) [20], but they suffer from the drawback of an increased computational effort (slow convergence of the solution). Crilly [21] also points out that iterative methods have the advantage of simplicity, because no a priori knowledge of the statistics of the measured data is needed and constraints and error boundaries are easy to implement.
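
As a minimal sketch of such an iterative scheme, the projected Landweber iteration alternates a gradient step on the least-squares residual with a projection onto the nonnegative orthant. The code below assumes A is the discretized kernel matrix (wavelengths × size classes) and tau the measured turbidity vector; it illustrates the general scheme, not the specific implementations cited later.

```matlab
function q = projected_landweber(A, tau, nIter)
% Projected Landweber iteration with a nonnegativity projection (sketch).
    omega = 1 / norm(A)^2;                     % relaxation parameter; must stay below 2/||A||_2^2
    q = zeros(size(A, 2), 1);                  % start from the zero distribution
    for k = 1:nIter
        q = q + omega * (A' * (tau - A * q));  % gradient (Landweber) step
        q = max(q, 0);                         % projection onto the nonnegative orthant
    end
end
```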

The Twomey-Phillips-Regularization (also known as Tikhonov-Regularization) treats discrete ill-posed problems and overcomes the numerical instabilities associated with the large condition number by replacing the problem with a nearby well-conditioned problem to yield an approximate solution [22]. This is done by the choice of a regularization parameter γ that controls the degree of smoothing or regularization for the applied problem [22], since a controlled smoothing can be used to suppress unwanted oscillations that are often found in numerical solutions [23, 24]. This leads to more satisfactory solutions than the ordinary least squares solution [14]. The choice of the value of the regularization parameter γ is the crucial point of the algorithm, since its selection is “a trade off between an oscillatory solution based on the minimization of the residuals and a perfectly smooth solution not describing the real system” [3]. Conventionally, the selection of the regularization parameter γ is done via the L-curve criterion [22, 25] or generalized cross validation (GCV) [10, 25]. The smoothing of the solution is enforced by the smoothing matrix, which penalizes oscillations in the particle size distribution and may be formulated as the sum of the squares of the zeroth, first, or second differences [26]; hereby, the sum of the squares of the second differences is a “less restrictive” measure of smoothness [10] than the identity matrix or the sum of the first differences [26–28] and is the most commonly applied smoothness criterion [17]. Other methods improve the TPR solution by constraining it with regard to nonnegativity as well as normalization of the particle size distribution (PSD) to unity [27, 29].
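
For a given regularization parameter γ and smoothing matrix H, the Twomey-Phillips/Tikhonov estimate follows from the regularized normal equations. The sketch below assumes A is the discretized kernel matrix and tau the measured turbidity vector; whether γ or γ² weights the penalty is a matter of convention.

```matlab
function q = tpr_solve(A, tau, H, gamma)
% Twomey-Phillips/Tikhonov solution for one regularization parameter (sketch):
% minimizes ||A*q - tau||^2 + gamma*||H*q||^2 via the regularized normal equations.
    q = (A' * A + gamma * (H' * H)) \ (A' * tau);
end
```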

Horváth et al. [30] recently introduced an optimised nonnegative Twomey-Phillips-Regularization (NNTPR) and a modified L-curve criterion (LCC) for the identification of the “corner” of the L-curve as the optimal regularization parameter, which will be compared in this contribution to conventionally applied algorithms for turbidity measurements of monodisperse polystyrene suspensions. The comparison includes the standard Tikhonov regularization [23, 24, 31] with an applied L-curve criterion [32–34] and generalized cross validation (GCV) [10, 35, 36] for the choice of the regularization parameter. Moreover, iterative approaches like the nonnegative least squares solution (NNLS) [32, 37], Active-Set (AS) [38, 39], Landweber-Bialy (LB) [40, 41], projected Landweber (PLW) [20, 42], Chahine method (CH) [9, 43, 44], and a least square fit to a monomodal log-normal distribution (LSF) [45, 46] are applied. The inversion results will be evaluated with respect to the different numerical approaches as well as the computational effort. The retrieved particle size distributions were evaluated by determining the median, modal, and Sauter mean particle sizes in comparison to the originally measured values.
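
For orientation, a generic L-curve parameter choice can be sketched as follows: solve the regularized problem on a (log-spaced) grid of γ values, record the residual norm and the solution seminorm, and select the γ of maximum curvature of the log-log curve. This only illustrates the standard criterion; the modified criterion of Horváth et al. [30] is not reproduced here, and all function and variable names are illustrative.

```matlab
function gammaOpt = lcurve_corner(A, tau, H, gammas)
% Pick the regularization parameter at the point of maximum curvature of the
% L-curve (log residual norm vs. log solution seminorm) over a gamma grid.
    rho = zeros(size(gammas));  eta = zeros(size(gammas));
    for i = 1:numel(gammas)
        q = (A' * A + gammas(i) * (H' * H)) \ (A' * tau);
        rho(i) = log(norm(A * q - tau));    % log residual norm
        eta(i) = log(norm(H * q));          % log solution seminorm
    end
    drho  = gradient(rho);   deta  = gradient(eta);   % discrete derivatives along the grid
    d2rho = gradient(drho);  d2eta = gradient(deta);
    kappa = (drho .* d2eta - d2rho .* deta) ./ (drho.^2 + deta.^2).^1.5;  % signed curvature
    [~, idx] = max(kappa);
    gammaOpt = gammas(idx);
end
```

A typical call would use a dense logarithmic grid, for example, gammas = logspace(-8, 2, 200).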

2. Materials and Methods

2.1. Materials

Polystyrene particle samples have been used (see Table 1) with particle diameters of  nm and μm from Fischer Scientific (Niederau, Germany), with μm from Hans Lach GmbH (Brunn, Austria), with μm from Sigma Aldrich (St. Louis, USA), and with μm produced in-house [56] (see Figure 1). The actual particle size distributions of the polystyrene suspensions have been validated via dynamic light scattering (DLS) (Delsa Nano C, Beckman Coulter, Brea, USA) and differential centrifugal sedimentation (DCS) (CPS Centrifuge DC 24000 UHR, CPS Instruments, Seagate Lane, USA). The suspension samples have been dispersed indirectly in a water-bath via ultrasonication with a Sonopuls HD 3200 equipped with an apex KE76 (all Bandelin, Berlin, Germany) for 15 minutes at an intensity of 50% prior to the measurement.

2.2. Spectroscopy Measurements

The UV-Vis-NIR transmission spectra were measured with a miniaturized spectrometer HR2000+ ES, a light source DH2000-BAL UV/Vis (all Ocean Optics, Dunedin, USA), a quartz glass cuvette with a path length of  cm, and deionized water as continuous phase. The spectra were recorded in the wavelength range of  nm (wavelength in vacuum) at 1454 different wavelengths; thus, the spatial wavelength resolution of the single-beam spectrometer is  nm. All measurements were carried out at room temperature, 60 minutes after the warm-up of the bulbs. Particle concentrations of the suspensions were adjusted to measure a slight turbidity in order to avoid multiple scattering, which led to lower number concentrations for the larger particles. The dark noise (i.e., the electronic background noise of the sensor/measurement equipment) was measured prior to the reference spectrum (pure water in the cuvette) and both spectra were stored to subtract them from the measurements. To improve the signal-to-noise ratio, the spectra were averaged over 100 measurements [2].
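
The preprocessing described above can be summarized in a few lines, assuming the raw sample, reference (pure water), and dark spectra are available as matrices with one column per repeat measurement; the placeholder data, the path length, and all variable names are illustrative only.

```matlab
nLambda = 1454;  nRep = 100;  pathLength = 1;          % path length in cm (placeholder value)
rawSample    = 0.2 + 0.5 * rand(nLambda, nRep);        % placeholder raw spectra
rawReference = 0.5 + 0.5 * rand(nLambda, nRep);
darkSpectrum = 0.01 * ones(nLambda, 1);

Isample = mean(rawSample, 2)    - darkSpectrum;        % average 100 repeats, subtract dark noise
Iref    = mean(rawReference, 2) - darkSpectrum;
absorbance = log10(Iref ./ Isample);                   % A = log10(I0/I), decadic
turbidity  = log(Iref ./ Isample) / pathLength;        % tau = (1/L)*ln(I0/I), natural logarithm
```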

The turbidity τ is defined by

τ(λ0) = (1/L) · ln(I0/I),    (1)

with the wavelength in the medium (i.e., the continuous phase water)

λ = λ0/n_m,    (2)

where L is the path length, n_m is the refractive index of the medium, and I0 and I are the intensities of the incident and the attenuated beam, respectively. The term log10(I0/I) is called absorbance, extinction, or optical density. The absorbance A is proportional to the concentration (Beer’s Law) and proportional to the optical path length (Bouguer’s Law), which is summarized by

A = log10(I0/I) = ε · c · L,    (3)

where c corresponds to the concentration of the dispersed material in mol/L and ε is the molar absorptivity when no stray light, emission, scattering, or reflection occurs. The suspensions have been homogenized before each measurement, and the measurement duration was shorter than 100 ms. Due to this short time span, settling during the measurements can be neglected; it would in any case only influence the particle concentration.

2.3. Scattering Calculation

The calculations of the Mie scattering coefficient were carried out with the MATLAB code from Mätzler [47], which is based on the work of Bohren and Huffman [48]. The description of the optical dispersion of the polystyrene particles was taken from the recent work of Gaigalas et al. [49] under the assumption of spherical particles, which seems to be valid as shown in Figure 1. The spectral dispersion of the continuous aqueous phase was taken from the work of Thormählen et al. [50] for standard conditions.

The Mie-theory is used to represent the scattering behaviour of spherical particles. For a diluted dispersion, the turbidity τ at the wavelength of the light in vacuum λ0 is related to the normalized particle size distribution by [51]

τ(λ0) = (π/4) · N_p · ∫ x² · Q_ext(x, λ0, m) · q(x) dx,    (4)

where x is the size of the particle, q(x) is the particle size distribution, N_p is the particle number concentration, and Q_ext is the Mie extinction coefficient. It is mentioned that the retrieved particle size distribution is “equivalent to volume-based particle size distribution obtained from DLS” [52]. The complex relative refractive index m is given by

m(λ0) = (n(λ0) + i·κ(λ0)) / n_m(λ0),    (5)

where n corresponds to the real component and κ to the imaginary part of the refractive index of the dispersed phase and n_m to the refractive index of the continuous phase.

2.4. Numerical Inversion

By defining the kernel

K(λ, x) = (π/4) · x² · Q_ext(x, λ, m),

(4) can be identified as a Fredholm integral equation of the first kind [26], which is a classical ill-posed problem, since different size distributions can fit the turbidity data within the same level of accuracy [53]. Discretization of (4) requires an appropriate linear discrete model [10]; here, the midpoint approximation rule was applied. For N size intervals of width Δx, the integral can be approximated, for a wavelength λi, as the sum

τ(λi) ≈ Σ_{j=1}^{N} K(λi, xj) · q(xj) · Δx,    (6)

where the approximation leads to a “finite error in representing an integral as a discrete sum” [17]. The discretized kernel corresponds to an ill-conditioned matrix A which produces a smoothing effect on the size distribution values q(xj) (the index j relates to the size of the particle) in the direct problem of minimizing the residual and, therefore, amplifies high-frequency components of the data τ(λi) (the index i relates to the measured turbidity) in the indirect problem. Thus, small measurement errors in the measured turbidity spectra lead to large errors in the retrieved size frequency [26]. The kernel matrix coefficient A_ij contains the scattering information at the wavelength λi and the particle size xj.
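
Assembling the discretized kernel matrix with the midpoint rule can be sketched as below. The helper mie_qext(...) stands for any routine returning the Mie extinction efficiency (e.g., the Mätzler/Bohren-Huffman code mentioned above) and is hypothetical here, as are the size grid, the wavelength grid, and the constant relative refractive index.

```matlab
lambda0 = linspace(240e-9, 900e-9, 1454)';         % vacuum wavelengths [m] (placeholder grid)
N       = 51;                                      % number of size classes
edges   = linspace(10e-9, 5e-6, N + 1);            % assumed particle size range [m]
x       = 0.5 * (edges(1:end-1) + edges(2:end));   % class midpoints (midpoint rule)
dx      = diff(edges);                             % class widths

m = 1.59 / 1.33;                                   % placeholder relative refractive index
A = zeros(numel(lambda0), N);                      % kernel matrix (wavelengths x size classes)
for i = 1:numel(lambda0)
    for j = 1:N
        Qext    = mie_qext(x(j), lambda0(i), m);    % hypothetical Mie extinction helper
        A(i, j) = (pi / 4) * x(j)^2 * Qext * dx(j); % kernel value times quadrature weight
    end
end
% The forward model of (6) is then simply tau ≈ A * q for a discrete distribution q.
```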

The proposed solution for the vector q (the discretized particle size distribution) may be obtained via numerical inversion algorithms. Hereby, the MATLAB implementation of the Active-Set (AS) method was taken from Roque [54], which is based on the derivation in [38], and the Chahine (CH) and Landweber-Bialy (LB) methods were taken from Riefler and Wriedt [19]; the nonnegative least squares (NNLS) method uses the standard MATLAB integrated “lsqnonneg” routine; the projected Landweber (PLW) method was taken from Rennoch [42] with a fixed relaxation parameter related to the squared 2-norm of the kernel matrix A (convergence requires values below 2 divided by the squared 2-norm). Eliçabe and García-Rubio [10] have introduced the factor β in the elements (1,1) and (N,N) of the smoothing matrix, forcing the size distribution to tend to zero for the minimal and maximal droplet size in the case of the Twomey-Phillips-Regularization (TPR). This factor corresponds to the a priori restriction mentioned in Section 1. A fixed value of β was used in the case of the standard L-curve criterion (LCC), taken from Hansen [55], and the generalized cross validation (GCV), whereas a different fixed value was applied to the modified L-curve criterion (Mod. LCC), based on the work of Horváth et al. [30], for the selection of the regularization parameter γ. The three applied smoothing matrices are the 0th, 1st, and 2nd order difference operators: H0 = I (the identity matrix), H1 with rows of the form (−1, 1) (first differences), and H2 with rows of the form (1, −2, 1) (second differences). These matrices are applied to all noniterative, direct inversion algorithms based on the TPR scheme to investigate the influence of higher degrees of smoothing, starting from H0 up to H2.
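
A hedged sketch of the smoothing matrices and of a nonnegativity-constrained Tikhonov step is given below. The corner modification with β follows the idea of Eliçabe and García-Rubio [10]; the stacked lsqnonneg formulation is one generic way to enforce nonnegativity and is not necessarily identical to the two-step NNTPR scheme of Horváth et al. [30]. A, tau, gamma, and beta are assumed inputs.

```matlab
N  = size(A, 2);
H0 = eye(N);                    % 0th order: identity
H1 = diff(eye(N), 1);           % 1st differences, size (N-1) x N, rows (-1, 1)
H2 = diff(eye(N), 2);           % 2nd differences, size (N-2) x N, rows (1, -2, 1)

HtH = H2' * H2;                 % smoothing operator (2nd order chosen here)
HtH(1, 1) = HtH(1, 1) + beta;   % force the distribution towards zero at the smallest size
HtH(N, N) = HtH(N, N) + beta;   % ... and at the largest size (Elicabe/Garcia-Rubio idea)

% Nonnegative Tikhonov step: min ||A*q - tau||^2 + gamma*||R*q||^2 subject to q >= 0,
% written as one stacked nonnegative least-squares problem for MATLAB's lsqnonneg.
R   = chol(HtH + 1e-12 * eye(N));                          % R'*R approximates HtH
qNN = lsqnonneg([A; sqrt(gamma) * R], [tau; zeros(N, 1)]);
```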

The retrieved particle size distributions were discretized into N = 13, 26, 51, 101, 250, and 1000 linearly distributed size classes between  nm and  nm, resulting in a spatial resolution of  nm to  nm. Thus, the scattering matrix A is characterized by dimensions ranging from 1454 × 13 to 1454 × 1000.

3. Results and Discussion

The measured turbidity spectra for the polystyrene suspensions with particle sizes of 0.03, 0.497, and 3.02 μm in the wavelength range of 240 nm to 900 nm are illustrated in Figure 2. The absorbance in the visible light range and the oscillations increase with increasing particle size for the investigated system.

An issue to be addressed is the discretization of the retrieved particle size distribution. A decreased width of the size classes leads to an increased spatial resolution, which may cause oscillations in the resulting particle size distribution [19]. This is illustrated in Figure 3 for the modal value of the size distribution of the NNLS algorithm. It can be seen that 26–51 size classes led to a sufficient determination of the particle size; a further increase of the resolution led to no significant improvement, but to a strong increase in computational effort.

Table 2 gives an overview of the computational demand of the applied numerical inversion algorithms, normalized with respect to the NNLS method, for a discretization into 1000 size classes, 1454 measurement points, and a fixed iteration number of 50000 for the (projected) Landweber (LB and PLW) and Chahine (CH) methods. Because of this fixed iteration number, the computational demand of LB, PLW, and CH is not directly comparable. Further investigations on the convergence behaviour of each method need to be done. The least square fit (LSF) and GCV methods were implemented as slower brute-force approaches.

Figure 4 illustrates the particle size distributions measured with DLS and DCS, while Figure 5 shows the retrieved particle size distribution for the case of  μm for the algorithms without regularization. The least square fit, the Chahine method, NNLS, and the Active-Set method yielded a narrow distribution close to the original value of the polystyrene suspension. The good results of the least square fit are caused by the low noise in the measured spectra, since this algorithm showed a strong dependence on the spectral noise in experiments with synthetic data. The two iterative approaches, the Landweber-Bialy and the projected Landweber method, are characterized by a broader distribution with a mode value close to the original particle size, where the projected Landweber method was less computationally demanding.

The NNTPR approach is characterized by a good inversion quality, where the retrieved distribution became slightly broader with an increasing order of the smoothing matrix (Figures 6(a)–6(c)). The regular solution of the Twomey-Phillips-Regularization with an applied L-curve criterion is shown in Figures 6(d)–6(f). Here, the mode value of the retrieved distribution is close to the original size distribution, but with some numerical noise in the range of  μm (negative values are not shown). Even this result surpasses the regular approach for the selection of the corner of the L-curve for this experimental case (see Figures 6(g)–6(i)). However, the GCV (Figures 6(j)–6(l)) yields worse results due to the flat shape of the objective function that has to be minimized. As reported, the objective function may form a flat global minimum that can be difficult to detect [22].

Figure 7 illustrates the benefit of the second step of the NNTPR approach compared to the regular Twomey-Phillips-Regularization with the modified L-curve criterion for  μm and the applied smoothing matrix. The numerical noise caused by the oscillating character of the Twomey-Phillips solution is reduced by the nonnegative constraint, which suppresses the nonphysical (negative) parts of the solution.

Tables 3–6 give an overview of the retrieved median, modal, and Sauter mean particle sizes for the discretization of the retrieved particle size distribution into  size classes and of the particle size for  size classes. The application of the Landweber-Bialy and projected Landweber methods did not lead to a reasonable prediction of the modal particle size for the largest particle size of 3 μm, while for  μm these two methods led to comparable results. It should be noted that the improved resolution of the retrieved particle size distribution did not lead to an improved determination of the particle size. It was possible to determine the median, modal, and Sauter mean particle sizes within a range of ±10% of the original particle size for 0.497–3 μm with the Chahine method, Active-Set, least square fit, NNLS, and NNTPR, whereas the polystyrene particles with a nominal size of 30 nm were slightly overestimated by these methods. It is assumed that the reason for the overestimation is the small difference between the turbidity spectra of different particle sizes in the Rayleigh scattering regime. However, simulations show that the extinction spectra have slightly different slopes and, therefore, a distinction between different particle sizes is in principle possible.
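
For reference, the characteristic sizes reported in the tables can be read from a retrieved discrete distribution as sketched below; x holds the size class midpoints, q the class frequencies, and both the placeholder arrays and the weighting assumptions are illustrative only.

```matlab
x = linspace(0.01, 5, 51)';                    % size class midpoints in micrometres (placeholder)
q = exp(-((x - 0.5) / 0.05).^2);               % placeholder retrieved distribution
q = q / sum(q);                                % normalize to unity

Q = cumsum(q);                                 % cumulative distribution
x50  = x(find(Q >= 0.5, 1));                   % median diameter (50% quantile)
[~, iMode] = max(q);
xmod = x(iMode);                               % modal diameter (class of largest frequency)
x32_number = sum(q .* x.^3) / sum(q .* x.^2);  % Sauter mean diameter if q is number-weighted
x32_volume = 1 / sum(q ./ x);                  % Sauter mean diameter if q is volume-weighted
```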

The conventional Twomey-Phillips-Regularization is not able to predict the Sauter mean diameter because of the oscillating behaviour and the numerical noise in the particle size distribution; however, the regular L-curve criterion led to comparable results for the median and modal particle sizes for  nm, and the modified L-curve criterion for the whole range from 30 nm to 3 μm. The generalized cross validation is able to predict the modal value for  nm.

4. Summary and Conclusion

The characterization of polystyrene latex particle suspensions with particle sizes in the range of 0.03–3 μm via turbidimetric measurements was demonstrated. Hereby, different conventionally applied numerical methods have been evaluated and compared to the recently introduced approach of Horváth et al. [30], consisting of a Twomey-Phillips-Regularization with an integrated nonnegative constraint (NNTPR) and a modified L-curve criterion for the selection of the regularization parameter (no other a priori information is introduced).

The comparison showed that the Chahine method, Active-Set, least square fit, and nonnegative least squares (NNLS) were able to determine the characteristic median, modal, and Sauter mean particle sizes within a range of ±10% of the original particle size for 0.497–3 μm but overestimated the size of the nanoscale polystyrene system with a nominal size of 30 nm. The Landweber-Bialy and projected Landweber methods were not able to sufficiently retrieve the modal particle size for the largest particle size of 3 μm. The NNTPR approach obtained comparably good results, surpassing the results of the standard Twomey-Phillips-Regularization with an applied L-curve criterion and generalized cross validation. It could be shown that the applied nonnegative constraint can compensate for a poorly chosen regularization parameter.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The authors acknowledge the support of this project by the Deutsche Forschungsgemeinschaft (DFG) within the Brazilian German Collaborative Research Initiative in Manufacturing Technology (BRAGECRIM) in the project “Emulsion Process Monitor in Metalworking Processes (EPM).”