Abstract

Incoherent light, the most common light source in everyday life, avoids problems such as the scattering noise introduced by optical components in coherent imaging, and with a suitably designed optical path it can still produce interference and thus holographic images of objects, allowing holography to be used in a wider range of fields. As the technology has developed, various techniques have emerged for recording holograms with incoherent light sources. One such recording method exploits the correlation between the object wave information and a Fresnel zone pattern to achieve incoherent hologram recording. A spatial light modulator (SLM) loaded with a phase mask that multiplexes two lens functions phase-modulates the incident wavefield, splitting it by diffraction and applying phase shifts. Holograms with different phase shifts can then be combined using phase-shifting techniques to eliminate the twin-image disturbance that in-line holography introduces during reconstruction. Based on the study of this incoherent holographic imaging system, the influence of the main system components and their parameters on the resolution of the recorded and reconstructed holograms is investigated, and optimization methods are given from both theoretical and experimental studies. An empirical analysis of the FINCH imaging system is carried out: the observation optical path is designed, the method of producing the phase mask loaded on the spatial light modulator is presented, and the effect of the mask focal length and the recording distance on the resolution of the system is investigated by both computer simulation and experiment.

1. Introduction

With the continuous progress of science and technology, the demands placed on optical imaging technology and imaging systems have increased, and improving imaging resolution has become an essential issue in modern scientific research. The development of holography was greatly facilitated by the advent of the laser in 1960: the high coherence of laser light produced excellent interference effects, which significantly improved the quality of holograms. The laser thus improved the quality of holography, but its requirement for highly coherent illumination also limited the range of applications.

On the other hand, it has been found that holograms can also be recorded under incoherent illumination conditions. The emergence of incoherent holography has kept pace with the development of holographic imaging, expanding its field of application and freeing it from the requirement of a highly coherent light source. A hologram results from the interference of two light beams; the resulting interference fringes record both the amplitude and the phase information of the object. By the principle of reversibility of the optical path, reconstruction can be regarded as a plane wave illuminating the hologram at normal incidence, and the resulting diffraction recovers the wavefront information of the object's light field. Because the interference produced by an object under incoherent illumination is weaker than that produced under coherent illumination, the quality of the reconstructed image suffers. Improving the resolution of holographic imaging under incoherent light has therefore become an important research topic.

2.1. Holographic Display

In conventional imaging techniques, based on the principle of geometrical optics, the image detector receives only the light intensity (i.e., the amplitude) of the object, and the object's three-dimensional intensity distribution is superimposed onto a flat surface. Holographic imaging introduces a reference beam that interferes with the light waves reflected or emitted by the object and records interference fringes that encode both the amplitude and the phase of the object. The reconstruction process uses diffraction to recover the wavefront information of the object; the recorded interference fringe pattern is known as a 'hologram.' Holography is thus a two-step imaging technique and a true three-dimensional imaging technique.

The Fresnel incoherent correlation holography (FINCH) technique was first proposed in 2007 [1, 2]. FINCH uses a spatial light modulator to split, by diffraction, and phase-shift the incoherent light emitted from an object. Joseph Rosen et al. demonstrated the feasibility and non-scanning nature of this technique.

In addition to Joseph Rosen's group, Lavlesh et al. studied the FINCH system's point spread function and resolution [3]. They found that adjusting the optical path to increase the overlap area of the two beams on the surface of the image detector improves performance [4-6], and that loading the spatial light modulator with a vortex phase mask improves the contrast at the edges of the object [7]. By modifying a Michelson interferometric optical path, Rajput's group used two concave mirrors with different curvatures to replace the diffractive beam splitting of the SLM, so that reconstruction is no longer limited by the resolution of the SLM [8-11].

2.2. Noncoherent Optical Digital Holographic Imaging Technology

Wang Pan used synthetic-aperture imaging technology to improve the resolution of incoherent digital holography [12]. Teruyoshi and Horisaki's group at Jinan University used light-emitting diodes (LEDs) as the illumination source, studied the effect of source size and diffraction distance on the resolution of hologram reconstruction [13], and optimized the quality of phase reconstruction [14]. Teruyoshi et al. from Huazhong University of Science and Technology improved the quality of incoherent-light imaging by investigating the imaging system's signal-to-noise ratio and edge contrast [15]. Tatsuki et al. from Zhengzhou University used the FINCH system to perform color holographic imaging of dice, verifying the feasibility of recording color holograms with an incoherent-light imaging system [16]. Ying et al. investigated the imaging characteristics of the FINCH microscope imaging system by building a reflective incoherent digital holographic microscope [17]. Changwon's group at the Xi'an Institute of Optics and Precision Mechanics studied digital holographic microscopy under LED illumination and partially coherent holography based on point-diffraction interferometry [18]. In 2013, Yuhong et al. from South China Normal University carried out simulation analysis and experimental validation of the recording and reconstruction of incoherent digital holograms under white-light illumination [19, 20]. In 2007, Yun's group at Beijing University of Technology summarized the characteristics and research progress of incoherent optical holographic imaging [21]. Further improving the resolution of incoherent holographic imaging remains an important aspect of advancing this technology [22], and holographic imaging under incoherent illumination will continue to have an important role and research value in practical applications.

Although some progress has been made in FINCH imaging technology through the work of Liu Yingchen's group, existing analyses of incoherent optical imaging systems are mostly limited to the influence of a single parameter on imaging quality and lack a comprehensive analysis method. A thorough analysis of the parameters of an incoherent optical imaging system is therefore of great importance for optimizing the optical path and improving the system's resolution.

3. Materials and Methods

3.1. Model Design

This experiment uses LED white light with a wide spectral range as the light source to build a Fresnel incoherent correlation imaging system to experimentally investigate the effect of different recording parameters on the resolution. The experimental optical path is shown in Figure 1.

In Figure 1, S is the LED white-light source with a central wavelength of about 455 nm and a spectral linewidth of 30 nm; F is the filter with a central wavelength of 450 nm and a spectral linewidth of 20 nm; P is the parallel light tube; O is the target object, a steel ruler; D is the polarizer (its transmission axis aligned with the SLM's sensitive polarization direction); I is the diaphragm; L is the collimating lens ( mm); and BS is the beam splitter. The liquid-crystal spatial light modulator used in the experiment is a phase-only reflective HED-450 manufactured by Holy, with an image plane size of , a resolution of , and an image element size of 6.4 μm. The CCD camera model is MVCII-1M, with an image element size of 5.4 μm and a resolution of , of which pixels are actually used. When MATLAB is used to produce the multiplexed Fresnel lens loaded on the SLM, the two-dimensional grayscale matrix to be generated contains the phase modulation corresponding to two lenses with different focal lengths. The mask is therefore generated so that the two focal-length values each occupy half of the pixels and are evenly distributed, in five main steps (a sketch of this procedure is given below):
(1) Call the rands function to generate a two-dimensional random matrix.
(2) Randomly assign the two focal-length values to the two-dimensional matrix.
(3) Calculate the phase modulation corresponding to each point of the matrix from the expression for the lens phase distribution.
(4) Convert the phase modulation values into the corresponding gray values and generate a gray-level matrix.
(5) Display the gray-level matrix as a grayscale image, i.e., the phase mask.
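
The following is a minimal sketch of this mask-generation procedure (written here in Python/NumPy rather than MATLAB; the pixel count, pixel pitch, wavelength, and focal lengths are illustrative assumptions, not the exact experimental parameters):

```python
import numpy as np

def multiplexed_fresnel_mask(n_pixels=1080, pitch=6.4e-6, wavelength=450e-9,
                             f1=0.15, f2=np.inf, levels=256, seed=0):
    """Grayscale mask that randomly multiplexes two Fresnel lens phase
    profiles (focal lengths f1 and f2) over the SLM pixels.
    f2 = np.inf corresponds to loading a plane wave on half of the pixels."""
    rng = np.random.default_rng(seed)

    # Pixel coordinates centred on the optical axis
    x = (np.arange(n_pixels) - n_pixels / 2) * pitch
    X, Y = np.meshgrid(x, x)
    r2 = X**2 + Y**2

    # Steps 1-2: random matrix assigning each pixel to focal length f1 or f2
    use_f1 = rng.random((n_pixels, n_pixels)) < 0.5

    # Step 3: quadratic (Fresnel lens) phase for each focal length
    k = 2 * np.pi / wavelength
    phase_f1 = np.mod(-k * r2 / (2 * f1), 2 * np.pi)
    phase_f2 = np.zeros_like(r2) if np.isinf(f2) else np.mod(-k * r2 / (2 * f2), 2 * np.pi)
    phase = np.where(use_f1, phase_f1, phase_f2)

    # Step 4: map phase in [0, 2*pi) to gray levels [0, levels-1]
    gray = np.round(phase / (2 * np.pi) * (levels - 1)).astype(np.uint8)
    return gray

mask = multiplexed_fresnel_mask()
# Step 5: save/display `mask` as a grayscale image and load it onto the SLM.
```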

As shown in Figure 2(a), the gray value of each pixel represents the amount of phase modulation. When the generated grayscale image is loaded onto the SLM, its gray values control the voltage across the SLM, deflecting the liquid-crystal molecules and thus changing the refractive index. The resulting phase modulation of the light waves is equivalent to superimposing two Fresnel lenses with different focal lengths, as shown in Figure 2(b).

The mean-square error (MSE) is used to evaluate the image quality. For an original image $f(i,j)$ and a reconstructed image $\hat{f}(i,j)$ of size $M \times N$, it is calculated as
$$\mathrm{MSE} = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl[f(i,j)-\hat{f}(i,j)\bigr]^{2}. \tag{1}$$

The mean-square error is the mean of the squared pixel-by-pixel differences between the original image and the distorted image, and the amount of distortion is judged by its magnitude: the smaller the mean-square error, the smaller the distortion and the closer the reconstructed image is to the original. A comparison of the mean-square error between the reconstructed image and the simulated target at different recording distances is shown in Table 1.
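
A minimal sketch of this evaluation step (the test arrays below are placeholders; in practice the target and reconstruction would come from the simulation):

```python
import numpy as np

def mse(original: np.ndarray, reconstructed: np.ndarray) -> float:
    """Mean-square error between the original and the reconstructed image."""
    original = original.astype(np.float64)
    reconstructed = reconstructed.astype(np.float64)
    return float(np.mean((original - reconstructed) ** 2))

# Toy example: compare a simulated target with a stand-in reconstruction
target = np.zeros((512, 512))
target[200:312, 200:312] = 1.0                      # simple test object
recon = target + 0.05 * np.random.randn(512, 512)   # noisy reconstruction
print(f"MSE = {mse(target, recon):.4e}")
```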

As the recording distance increases, the mean square error value decreases, representing a better image quality, i.e., an increase in resolution. The results are consistent with the results of the subjective evaluation method, which justifies the mean square error method for assessing image quality.

Next, with the other parameters held constant, the effect of the mask focal length on the imaging resolution of the system is investigated. The mask focal length is set to , , and , respectively, and the reconstructed image at the corresponding reconstruction position is shown in Figure 3.

As shown in Figures 3(a)-3(c), the resolution of the reconstructed image decreases as the mask focal length increases, in line with the theoretical analysis that a longer focal length leads to a smaller numerical aperture and a smaller ratio. A comparison of the mean-square error between the reconstructed image and the simulated target for different mask focal lengths is shown in Table 2.

As can be seen from Table 2, the mean-square error of the reconstructed image becomes larger as the focal length of the mask increases, in line with the theoretical analysis.

The experiments also simulated the effect of two different SLM loading modes on the imaging quality. In the first mode, the SLM is loaded with a phase mask multiplexing a plane wave and a spherical wave with a focal length of ; in the second mode, the SLM is loaded with a phase mask multiplexing two spherical waves with focal lengths of and . The recording distance for both loading modes is set to , and the reconstructed images after phase shifting for the two modes are given in Figure 4.

As shown in Figures 4(a) and 4(b), when the SLM is loaded with the plane-wave and spherical-wave phase mask, background information disturbs the reconstructed image, and the quality of the reconstruction is not as good as when the SLM is loaded with the two-spherical-wave phase mask. In essence, when the SLM is loaded with a mask containing only one focal length, only half of the spatial-light-modulator pixels phase-modulate the light. Moreover, because the fill factor is less than 100%, light incident between the effective pixels is reflected without modulation. The proportion of this unmodulated reflected light, acting as reference light, is then greater than the proportion of signal light phase-modulated by the SLM, so part of the reference light does not participate in the interference. For a mask with two focal lengths, the ratio of object wave to reference wave is close to 1:1, so the contrast of the recorded interference fringes is higher than for a mask with only one focal length, and the resolution of the reconstructed image is correspondingly better. The mean-square errors comparing the reconstructed image with the simulated target for the two diffraction modes are shown in Table 3.
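
The dependence of fringe contrast on the object-to-reference intensity ratio follows the standard two-beam interference relation $V = 2\sqrt{I_1 I_2}/(I_1 + I_2)$; the short sketch below illustrates this for a balanced and an unbalanced ratio (the specific ratios are illustrative, not measured values):

```python
import numpy as np

def fringe_visibility(i_obj: float, i_ref: float) -> float:
    """Two-beam interference fringe visibility for intensities i_obj and i_ref."""
    return 2.0 * np.sqrt(i_obj * i_ref) / (i_obj + i_ref)

print(fringe_visibility(1.0, 1.0))   # balanced 1:1 beams -> visibility 1.0
print(fringe_visibility(1.0, 4.0))   # unbalanced 1:4 beams -> visibility 0.8
```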

As can be seen from Table 3, the mean-square error when the SLM is loaded with the two-spherical-wave phase mask is smaller than when it is loaded with the plane-wave and spherical-wave phase mask, in agreement with the theoretical analysis. Therefore, in incoherent correlation imaging, loading two spherical-wave phase factors gives better imaging quality than loading a plane-wave and a spherical-wave phase factor.

Photodetectors cannot directly record the phase information of light waves emitted from an object. They can only sense the light intensity and need to encode the phase information in the intensity information map received by the detector and then decode the object light field information through diffraction phenomena. The reproduction process is equivalent to irradiating a hologram vertically with a monochromatic plane wave. The diffraction pattern differs from position to position because of the different shapes of the interference fringes at each place, thus allowing the recording and reproduction of the original object light field distribution. This is analogous to Morse code, where the 26 letters of the alphabet correspond to the length and order of the different electrical pulse response times. The electrical signals are decoded through a previously agreed translation.

The recording process of coherent-light digital holographic imaging is shown in Figure 5. The light waves transmitted or emitted by the object carry its phase and amplitude information and propagate a certain distance to the CCD recording plane; at the same time, a beam of light coherent with the object wave illuminates the recording plane. The two beams interfere, and the CCD records the intensity of the resulting fringe pattern.

In Figure 5, $O(x, y)$ and $R(x, y)$ denote the complex amplitudes of the object wave and the reference wave, respectively; the hologram recorded by the CCD can then be represented as
$$I_H(x, y) = |O + R|^{2} = |O|^{2} + |R|^{2} + O R^{*} + O^{*} R. \tag{2}$$

In Equation (2), the first term is the intensity distribution of the object light field. The second term is the intensity distribution of the reference wave; since a monochromatic plane wave is generally used as the reference for good interference, this term can be taken as a constant. The third and fourth terms encode the complex amplitude and phase information of the object wave and can be regarded as the distribution function of the interference fringes.
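
A minimal numerical illustration of Equation (2) (the object and reference fields here are arbitrary toy fields, not the experimental ones):

```python
import numpy as np

# Toy complex fields on a small grid
n = 256
x = np.linspace(-1, 1, n)
X, Y = np.meshgrid(x, x)

O = 0.5 * np.exp(1j * 2 * np.pi * (X**2 + Y**2))   # object wave (arbitrary phase)
R = 1.0 * np.exp(1j * 2 * np.pi * 5 * X)            # tilted plane reference wave

# Hologram intensity and its four constituent terms
I_H = np.abs(O + R) ** 2
four_terms = np.abs(O) ** 2 + np.abs(R) ** 2 + O * np.conj(R) + np.conj(O) * R

print(np.allclose(I_H, four_terms.real))  # True: |O+R|^2 equals the four-term sum
```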

In digital holography, the holographic dry plate, a continuously distributed recording medium, is replaced by an image detector, whose target surface is not continuous. In the case of a CCD, for example, the target surface is a combination of many discretely distributed pixel units, so the recorded hologram is a discrete intensity distribution.

In formula (3), the recorded hologram is expressed as the discrete distribution $I_H(m\,\Delta x,\; n\,\Delta y)$, where $m$ and $n$ are integers, $\Delta x$ and $\Delta y$ are the horizontal and vertical pixel-cell sizes of the CCD, and $M$ and $N$ are the numbers of horizontal and vertical pixels, respectively; the width of the CCD detection surface can then be expressed as $M\Delta x$ and its height as $N\Delta y$.

Consider the integration effect of the CCD’s pixel cells during sampling.

The symbol "$\otimes$" indicates a convolution operation. The discrete intensity distribution is stored in the computer as a numerical matrix, and reconstruction is carried out by processing this numerical matrix.

3.2. Recording of Incoherent Digital Holograms

Incoherent digital holography uses an incoherent light source, which makes it very different from coherent digital holographic imaging. Because the spectrum of an incoherent source is broad and contains many wavelengths, it is difficult to produce interference using a separate reference beam that is coherent with the object wave. To obtain a hologram of an object under incoherent illumination, the problem of how to produce interference must first be solved. In incoherent digital holography, the object is regarded as a collection of many independent point sources; the light waves emitted by different point sources are mutually incoherent, but the light emitted from the same point is coherent with itself. Based on this property, interference can be produced and incoherent holographic recording achieved.

Using a beam-splitting technique, the light from each point source is split into two beams, and the optical path is designed so that the two beams recombine and interfere to form a point-source hologram. In other words, each point source is spatially self-coherent, and incoherent correlation imaging takes advantage of this property. A point-source hologram records the amplitude and phase information of that point. By superimposing the holograms of all the independent point sources incoherently, all the amplitude and phase information of the object can be recorded. As shown in Figure 6, the recording process of an incoherent digital hologram can be expressed simply as follows.

Suppose there is a point source at a particular position of an object in space. The light wave from this point reaches the beam-splitting plane, is divided into two waves by the splitting element, and arrives at the CCD plane with complex amplitude distributions $u_1(x, y) = A_1 \exp(i\varphi_1)$ and $u_2(x, y) = A_2 \exp(i\varphi_2)$. Because $u_1$ and $u_2$ are spatially coherent with each other, the point-source hologram formed on the CCD recording plane can be written as
$$I_p(x, y) = |u_1 + u_2|^{2} = A_1^{2} + A_2^{2} + 2 A_1 A_2 \cos(\varphi_1 - \varphi_2), \tag{5}$$

where $A_1$ and $A_2$ are the amplitudes of $u_1$ and $u_2$, representing the intensity information of the point source, and $\varphi_1$ and $\varphi_2$ are their phases, carrying the three-dimensional position information of the point source. From the hologram expression, when the phases of the two complex amplitudes are not equal, that is, $\varphi_1 \neq \varphi_2$, the phase term is not constant; the intensity distribution is therefore associated with the spatial location of the point, and its amplitude and phase information are recorded in full. An object can be regarded as a combination of many point sources. Assuming that the object has an intensity distribution $I_o(x_s, y_s)$, its incoherent hologram is the incoherent superposition of the holograms formed by the interference of the light from each object point on the CCD:
$$I_H(x, y) = \iint I_o(x_s, y_s)\, I_p(x, y;\, x_s, y_s)\, \mathrm{d}x_s\, \mathrm{d}y_s. \tag{6}$$

In contrast to a coherently recorded hologram, which is the intensity distribution obtained after superimposing the complex amplitudes of all point sources, complex-amplitude superposition is not satisfied under incoherent light; the hologram is instead the convolution of the intensity distribution of the independent point sources with the point spread function, i.e., it satisfies intensity superposition. In the study of incoherent digital holography, it is therefore necessary to start from the point spread function, which gives a more intuitive view of the imaging system's response to the input light field.
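
As an illustration of this self-interference point spread function, the sketch below simulates the hologram of a single on-axis point source as the interference of an (approximately) plane wave with a spherical, quadratic-phase wave on the CCD plane; the wavelength, radius of curvature, and grid parameters are illustrative assumptions, not the experimental values:

```python
import numpy as np

# Illustrative parameters (not the experimental values)
wavelength = 450e-9          # m
z_r = 0.15                   # effective radius of curvature of the modulated beam, m
pitch = 5.4e-6               # CCD pixel pitch, m
n = 1024                     # pixels per side

x = (np.arange(n) - n / 2) * pitch
X, Y = np.meshgrid(x, x)
r2 = X**2 + Y**2
k = 2 * np.pi / wavelength

# Self-interference of the two waves derived from the same point source
u_plane = np.ones((n, n), dtype=complex)
u_sphere = np.exp(1j * k * r2 / (2 * z_r))

psf_hologram = np.abs(u_plane + u_sphere) ** 2   # Fresnel-zone-like fringe pattern
print(psf_hologram.shape, psf_hologram.max(), psf_hologram.min())
```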

Assuming a CCD detection plane of size $L_x \times L_y$, expressed as a rect function, the hologram after discretization can be written as
$$I_D(x, y) = \left[I_H(x, y)\,\mathrm{rect}\!\left(\frac{x}{L_x}, \frac{y}{L_y}\right)\right]\mathrm{comb}\!\left(\frac{x}{\Delta x}, \frac{y}{\Delta y}\right), \tag{7}$$

where $\Delta x$ ($\Delta y$) is the interval between adjacent pixels of the CCD, and the comb function can be expressed as
$$\mathrm{comb}\!\left(\frac{x}{\Delta x}, \frac{y}{\Delta y}\right) = \sum_{m}\sum_{n}\delta(x - m\Delta x,\; y - n\Delta y). \tag{8}$$

In Equation (8), $M$ and $N$ indicate the numbers of pixels of the CCD, and $m$ and $n$ take integer values in $[-M/2, M/2]$ and $[-N/2, N/2]$. The product of $I_H$ and the rect function in expression (7) represents the restriction of the hologram to the CCD size, and multiplying by the comb function indicates the discrete sampling of the hologram received by the CCD.
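
A minimal sketch of this pixel-sampling model (here the continuous hologram is represented on a fine grid, averaged over the pixel aperture to mimic the integration effect mentioned above, and then sampled at the pixel pitch; all parameters are illustrative assumptions):

```python
import numpy as np

def ccd_sample(hologram_fine: np.ndarray, oversample: int) -> np.ndarray:
    """Mimic CCD recording: average the finely sampled hologram over each
    pixel aperture (integration effect), then sample at the pixel pitch."""
    h, w = hologram_fine.shape
    h_pix, w_pix = h // oversample, w // oversample
    blocks = hologram_fine[:h_pix * oversample, :w_pix * oversample]
    blocks = blocks.reshape(h_pix, oversample, w_pix, oversample)
    return blocks.mean(axis=(1, 3))

# Toy example: a finely sampled fringe pattern and its CCD-sampled version
x = np.arange(2048) * 0.01
fine = 1.0 + np.cos(x)[None, :] * np.ones((2048, 1))
recorded = ccd_sample(fine, oversample=4)
print(fine.shape, "->", recorded.shape)   # (2048, 2048) -> (512, 512)
```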

Figure 7 simulates a comparison between the reconstructed image of the hologram obtained without phase shifting and that obtained after a three-step phase shift, under identical optical path conditions.

Figure 7(a) shows the diffraction template used in the simulation. When the hologram is not phase-shifted, the reconstructed images in Figures 7(b) and 7(c) are disturbed by the zero-order and conjugate (twin) images.
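
A minimal sketch of how three phase-shifted holograms can be combined into a complex hologram that suppresses the zero-order and twin-image terms (equally spaced phase shifts of 0, 2π/3, and 4π/3 are assumed here; reconstruction would then proceed by numerical Fresnel propagation of `h_complex`):

```python
import numpy as np

def complex_hologram(i1: np.ndarray, i2: np.ndarray, i3: np.ndarray) -> np.ndarray:
    """Combine three holograms recorded with phase shifts 0, 2*pi/3, 4*pi/3.
    The weighted sum cancels the constant (zero-order) term and the conjugate
    (twin-image) term, leaving a complex-valued hologram."""
    thetas = [0.0, 2 * np.pi / 3, 4 * np.pi / 3]
    return sum(i * np.exp(1j * t) for i, t in zip((i1, i2, i3), thetas))

# Toy check with a single-fringe hologram I_k = A + B*cos(phi - theta_k)
phi = np.linspace(0, 4 * np.pi, 256)
A, B = 2.0, 0.7
holos = [A + B * np.cos(phi - t) for t in (0.0, 2 * np.pi / 3, 4 * np.pi / 3)]
h_complex = complex_hologram(*holos)
print(np.allclose(h_complex, 1.5 * B * np.exp(1j * phi)))  # True
```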

4. Results

4.1. Effect of Nonmonochromatic Light Sources on Interference

From a spectroscopic point of view, any light source has a finite spectral linewidth, and the linewidth of an incoherent source is greater than that of a coherent source. Compared with a quasi-monochromatic source, the visibility of the interference fringes captured by the CCD is therefore significantly reduced in interference experiments. An object illuminated by incoherent light emits many wave trains of finite length. Consider a point in space through which many wave trains pass during a single observation time, with no fixed phase relationship between them. In the Fresnel incoherent correlation imaging system studied in this paper, a wave train emitted from any object point is modulated by the spatial light modulator into two twin wave trains with different radii of curvature but equal lengths. For a point on the CCD, when the difference between the optical paths traveled by the two twin wave trains is greater than the coherence length, the two wave trains arriving at that point at the same moment come from different original wave trains; that is, the twin wave trains do not overlap. The contributions of the many passing wave trains to the interference then cancel each other out within one observation time, and no stable interference fringes can be observed. When the optical path difference of the two twin wave trains tends to zero, they can be considered to arrive at the point simultaneously; they then superimpose and interfere, and stable interference fringes can be observed. This is the self-coherence situation.

Let the spectral density of the source be $g(k)$, where $k = 2\pi/\lambda$ is the wavenumber; the total intensity is the integral of the spectral density over the spectral linewidth:
$$I_0 = \int_{k_0 - \Delta k/2}^{k_0 + \Delta k/2} g(k)\, \mathrm{d}k, \tag{9}$$
where $k_0$ is the central wavenumber and $\Delta k$ the spectral width. The intensities corresponding to different wavelengths vary with the optical path difference $\Delta$, and the incoherent superposition of the intensities at all wavelengths can be expressed as
$$I(\Delta) = \int_{k_0 - \Delta k/2}^{k_0 + \Delta k/2} g(k)\,\bigl[1 + \cos(k\Delta)\bigr]\, \mathrm{d}k. \tag{10}$$

The first term of Equation (10) is a constant, and the second term depends on the optical path difference $\Delta$. For the sake of discussion, $g(k)$ is taken to be a constant $g_0$ within the linewidth $\Delta k$ and zero elsewhere, so that Equation (10) simplifies to
$$I(\Delta) = g_0\,\Delta k \left[1 + \mathrm{sinc}\!\left(\frac{\Delta k\, \Delta}{2}\right)\cos(k_0 \Delta)\right], \qquad \mathrm{sinc}(x) = \frac{\sin x}{x}. \tag{11}$$

The visibility (contrast) of the interference fringes can then be derived as
$$V = \left|\mathrm{sinc}\!\left(\frac{\Delta k\,\Delta}{2}\right)\right|. \tag{12}$$

The above equation shows that the optical path difference at which the fringe visibility first falls to zero is the maximum optical path difference for which coherence is maintained, and the corresponding coherence length can be expressed as
$$L_c = \frac{2\pi}{\Delta k} = \frac{\lambda^{2}}{\Delta\lambda}. \tag{13}$$

In a Fresnel incoherent correlation digital holographic imaging system, if the optical path difference between the two twin wave trains produced by the spatial light modulator is greater than the coherence length, the quality of the interference fringes is reduced, or interference fails altogether. Only when the optical path difference is smaller than the coherence length can clear interference fringes be recorded on the CCD and a high-quality image be obtained on reconstruction. For a nonmonochromatic source, the greater the coherence length, the more wave trains interfere with each other at a given observation point and the clearer the interference fringes. Therefore, when recording digital holograms with incoherent light, a source with a large coherence length should be chosen to improve the image quality.

The light source used in this paper is the GCI-060411 model produced by Daheng Optoelectronics, a 3 W white LED. The added filter has a central wavelength of 450 nm and a spectral bandwidth of about 20 nm, which lies in the visible band; this relatively broad spectral range is closer to practical application conditions. The incoherent light source and its spectrum are shown in Figures 8(a)-8(d).

As can be seen from Figure 8, the central wavelength of the LED white-light source is about 455 nm and its spectral linewidth is 30 nm. According to Equation (13), the coherence length of the source is about 6.9 μm; after filtering, the coherence length is approximately 10.1 μm.
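
These estimates follow directly from Equation (13), $L_c = \lambda^2/\Delta\lambda$; a small numerical check:

```python
def coherence_length_um(center_nm: float, linewidth_nm: float) -> float:
    """Coherence length L_c = lambda^2 / delta_lambda, returned in micrometres."""
    return (center_nm ** 2) / linewidth_nm / 1000.0  # nm -> um

print(coherence_length_um(455, 30))  # LED source: ~6.9 um
print(coherence_length_um(450, 20))  # after the 450 nm / 20 nm filter: ~10.1 um
```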

To obtain good interference fringes, the maximum optical path difference in the imaging system must therefore be less than 10.1 μm. In the FINCH imaging system, the optical path difference depends on the phase mask loaded on the spatial light modulator and on the CCD recording position. Once the light source is fixed, the resolution of the imaging system can be improved by adjusting the focal length of the phase mask and the recording distance so that the coherence condition is met.

The relationship between the recording distance and the optical path difference of the imaging system is first investigated for a light source with a coherence length of μm, assuming , , respectively; the resulting curves of optical path difference versus recording distance are given.

Figure 9 shows the relationship between the recording distance and the optical path difference when the SLM is loaded with three different sets of plane-wave and spherical-wave phase masks. The horizontal line marks the coherence length of the light source. For the part of each curve below the horizontal line, the optical path difference is less than the coherence length and the coherence condition is satisfied, while for the part above the horizontal line the optical path difference is greater than the coherence length and the coherence condition is not satisfied.

As can be seen from Figure 9, the maximum optical path difference of the system increases in proportion to the recording distance. With the focal length of the phase mask loaded on the SLM unchanged, the optical path difference of the imaging system increases with the distance between the CCD and the SLM. If the CCD recording distance continues to grow, the CCD will record regions toward the edge of the hologram where the two beams no longer overlap and therefore do not interfere; the effective hologram radius is reduced, and the quality of the reconstructed image deteriorates. If the recording distance is kept constant, i.e., the CCD is not moved, and the focal length of the mask loaded on the SLM is increased, the curve of the optical path difference changes so that the optical path difference at this position is reduced, and the coherence condition is more easily satisfied. The CCD sampling interval limits the minimum recording distance. When the SLM is loaded with a phase mask of focal length , , the optical path difference is less than the coherence length, but a CCD placed at this position samples the information incompletely, degrading the imaging quality. According to the calculations, only when the SLM is loaded with a mask focal length that meets the coherence condition can the CCD record complete information. Therefore, the recording distance is limited in two ways: the CCD must satisfy the sampling interval, and the optical path difference must not exceed the coherence length.
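
The qualitative behaviour described above can be reproduced with a simple paraxial estimate. Assuming the SLM multiplexes a plane wave and a spherical wave of focal length $f_d$ over an illuminated radius $R$, the maximum optical path difference at the edge of the beam-overlap region on the CCD is approximately $\Delta_{\max} \approx R_h^{2}/[2(z_h - f_d)]$ with $R_h \approx R\,|z_h - f_d|/f_d$, i.e., $\Delta_{\max} \approx R^{2}(z_h - f_d)/(2 f_d^{2})$. This is only an illustrative model consistent with the trends reported here, not the exact expression used in the paper; the numerical values below are likewise assumptions:

```python
import numpy as np

def max_opd(z_h: float, f_d: float, radius: float) -> float:
    """Paraxial estimate of the maximum optical path difference (in metres)
    between the plane-wave and spherical-wave beams at recording distance z_h,
    for a mask focal length f_d and modulated beam radius `radius`."""
    return radius**2 * (z_h - f_d) / (2.0 * f_d**2)

radius = 3e-3                      # assumed modulated spot radius, m
z_h = np.linspace(0.20, 0.40, 5)   # assumed recording distances, m

for f_d in (0.13, 0.15, 0.17):     # assumed mask focal lengths, m
    opd_um = max_opd(z_h, f_d, radius) * 1e6
    print(f"f_d = {f_d * 1e3:.0f} mm:", np.round(opd_um, 2), "um")
```

Within this model the maximum optical path difference grows linearly with the recording distance and decreases as the mask focal length increases, in agreement with the trends discussed above.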

From the above analysis, the maximum optical path difference of the system is related to the recording position of the CCD, the focal length of the phase mask loaded on the SLM, and the radius of the modulated spot. With the spot radius fixed, to study the relationship between the focal length of the mask loaded on the SLM and the maximum optical path difference, the recording distance was set to 130 mm, 150 mm, and 170 mm, respectively. Figure 10 gives the variation of the maximum optical path difference with the focal length of the loaded mask.

Figure 10 shows that the optical path difference of the imaging system decreases as the focal length of the mask loaded on the SLM increases. With the recording position of the CCD unchanged, the coherence condition is satisfied at that recording position only when the mask focal length is greater than a certain value. Conversely, for a fixed mask focal length, the recording distance should be chosen to satisfy the coherence condition while not falling below the minimum recording distance set by the sampling interval.

5. Evaluation

In summary, when the SLM is loaded with plane-wave and spherical-wave masks, the recording distance of the CCD must satisfy both the minimum distance set by the sampling interval and the requirement that the optical path difference at that position is less than the coherence length. According to the Rayleigh criterion, the resolution of the system increases with the recording distance and reaches its maximum when the recording distance equals twice the mask focal length. As the recording distance increases, the optical path difference also increases, its maximum usable value being reached when the optical path difference at that position is exactly equal to the coherence length. With the light source unchanged, i.e., with the coherence length fixed, satisfying the coherence condition at a larger recording distance requires increasing the focal length of the mask loaded on the SLM, and the improvement in imaging resolution from the change in the ratio is then no longer noticeable.

When the recording distance is held constant, the resolution is determined directly by the numerical aperture of the imaging system: the larger the numerical aperture, the higher the imaging quality, and in this diffraction mode it can be expressed in terms of the mask focal length. Figure 11 gives the variation of the numerical aperture with the mask focal length.

As can be seen from Figure 11, the numerical aperture of the system decreases as the mask focal length increases, leading to a rapid decrease in resolution and a decrease in the ratio. At this point, the coherence length of the light source becomes the main limiting factor. Assuming that the coherence length of the light source is increased to μm and the mask focal lengths are set to 180 mm, 230 mm, and 280 mm, respectively, the relationship between the optical path difference and the recording distance is shown in Figure 12.

As shown in Figure 12, the lower horizontal line corresponds to a coherence length of μm and the upper horizontal line to μm. For example, when the coherence condition is met, the maximum value for a coherence length of μm is about 0.74; it is approximately 1.46 when the coherence length is increased to μm, which corresponds to an increase in resolution of roughly a factor of two.

Therefore, the resolution can be improved in three ways: (1) when the focal length of the mask loaded on the SLM is constant, a larger recording distance is better, provided the maximum optical path difference at that position does not exceed 10.1 μm; (2) when the recording distance of the CCD is fixed, a smaller mask focal length gives a larger ratio, provided the optical path difference at that position does not exceed 10.1 μm; (3) under incoherent illumination, the light source can be changed or a suitable filter chosen to reduce the spectral linewidth and thus increase the coherence length. Raising the horizontal line in Figure 12 means that, for a constant mask focal length, a larger recording distance can still satisfy the coherence condition, and, for a constant recording distance, a smaller mask focal length can also satisfy it; the resulting increase in the ratio improves the resolution of the imaging system.

6. Conclusion

Holography was first proposed in 1948, and from the beginning the coherence of the light source limited the imaging resolution. The advent of lasers brought a highly coherent light source to holography but also introduced coherent noise into the optical system. As the technology developed, digital holography emerged, with a simple recording and reconstruction process, but it still required a laser as the illumination source. Incoherent digital holography frees holography from this limitation of the light source: based on the principle of the spatial self-coherence of point sources, objects illuminated by incoherent light can produce interference and thus holograms, allowing holography to be applied to a broader range of fields. Incoherent light imaging based on a spatial light modulator solves the two significant problems of dependence on light-source coherence and twin-image overlap; the optical path is simple, and recording and reconstruction are carried out digitally by computer. This paper investigates the recording parameters of the FINCH imaging system and validates the theoretical analysis through experiments, providing a reference for improving the resolution of incoherent correlation imaging.

Data Availability

Data are available upon reasonable request from the corresponding author.

Conflicts of Interest

Author Guoliang Yang has received research support from Xi’an Technological University. The authors declare that no funds, grants, or other support were received during the preparation of this manuscript.

Authors’ Contributions

The first draft of the manuscript was written by Guoliang Yang, and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.

Acknowledgments

This work is supported by the Shaanxi science and technology plan project-key R & D plan: “development of narrow energy spectrum ion beam emission source” (No.: k20180076).