Abstract

When performing digital image processing, the most critical technology affecting its practical effect is autofocus. With the advancement of science and the development of computer technology, autofocus technology has become more and more widely used in various fields; it is a key technology in robot vision and digital video systems. In order for digital image processing technology to better serve people, the focus evaluation function algorithm needs further improvement. This article focuses on the imaging principle of defocused images, using different evaluation functions to analyze and process the experimental images and observe the changes in image clarity. From the introduction and analysis of existing evaluation functions, it is clear that the focus evaluation function directly affects the quality of digital image processing. Therefore, an evaluation function with unimodality, unbiasedness, low noise sensitivity, wide coverage, and a small amount of calculation is preferable, and the Laplacian gradient function is an ideal choice. However, because current digital image processing technology is not yet mature, the focus function is still prone to multiple-extremum problems when the image is severely defocused and the high-frequency components of the image are missing; the balance between image processing speed and focusing accuracy also still needs improvement. Therefore, this paper studies an autocontrol microscope focusing algorithm based on digital image processing, analyzes the principle of visual image formation, and makes some improvements to the microscope focusing algorithm. Experiments show that the real-time score of the original Laplace function on a target with obvious edges is 76.9 and reaches 77.6 after improvement. The improved algorithm better maintains a single-peak state during the focusing process, which improves image processing efficiency while ensuring measurement accuracy.

1. Introduction

With the rapid development of science and technology, the autofocus problem in digital imaging systems has attracted more and more general attention. Nonprofessional users in particular are prone to fatigue if they must adjust the focus and aperture for a scene over a long period. Images are an important way for people to obtain information intuitively, and with the development of modern technology, reading image information has long ceased to rely on the naked eye alone. A microscope is an optical instrument composed of lenses that can magnify a target up to 1500 times its original size, opening a new path for people to understand the world. In a digital imaging system, a focusing process is necessary to obtain a clear image [1]. When the target image places particularly high demands on focusing, it is difficult to focus manually; this is where autofocus shows its value. For the microscope to better realize its value, it is very important to improve the focus evaluation function algorithm and to develop smarter autofocus technology. Today, autofocus makes focusing effortless with just one tap. There are many ways to realize automatic focusing, among which intelligent focusing is the most important, with the advantages of fast speed and high precision.

Foreign research on autofocus and focus evaluation functions has a long history. As early as the 16th century, an instrument with a magnifying effect was made that is actually very close to the most basic microscope. The clarity evaluation function for out-of-focus blurred images is key to realizing automatic focusing with digital image processing technology, and the accuracy and effectiveness of the evaluation function need to be improved continuously. Arakawa et al. combined ultrasound microscopy and optical microscopy for the multiparameter characterization of single cells; they believe that this fusion has further enhanced the understanding of cell biomechanics [2]. Schultheiss and Denil analyzed the history of microscopy and the development of microsurgery in their research, calling it a revolution in reproductive tract surgery [3]. Szmaja advanced research in the field of low vision around digital image processing systems; the upgrade of autofocus technology allows scholars to obtain higher-quality images in experiments and perform detailed analysis [4]. Wigianto et al. used digital image processing to perform a three-dimensional inspection of the bone structure around hydroxyapatite implants and noted that constructing 3D models without high-quality 2D images is unimaginable, which is also very useful for biomechanical research [5]. When analyzing the application of a multidimensional privacy-aware evaluation function in automatic feature selection, Jafer et al. emphasized the importance of the evaluation function in data analysis; they believe that this kind of automated information processing can not only ensure the privacy of data sets but also improve experimental efficiency and accuracy [6, 7]. Stepanov comparatively analyzed the spectral characteristics of the Laplacian operator and the Tachibana operator on compact Riemannian manifolds, found bounds on their spectra, and evaluated their multiplicity [8]. As one of the key technologies of digital imaging systems, automatic focusing technology has developed rapidly, has been widely used in the camera field, and has been continuously updated and improved.

With the continuous advancement of domestic science and technology, research on digital image processing and focus evaluation functions has become more and more abundant [9]. Chen et al. published research on the application of autofocus technology in a mesh membrane measurement system; using an automatic system in place of manual operation for mesh membrane and noncoplanar mesh membrane arrays can not only reduce costs but also save a lot of time [10]. Li et al. discussed the important role of autofocus in a computer-vision-based tool presetting and measuring machine; they used a second-order discrete difference prediction model to predict the direction of lens movement, which is conducive to achieving faster and more accurate autofocus [11]. When studying data preprocessing and fault diagnosis based on an information contribution evaluation function, Ji and Wen noted that it is difficult for the traditional gradient method and the least squares method to automatically correct all parameters, so the evaluation function needs to be improved to ensure the model's performance [12]. When analyzing boundary value problems of fractional differential equations with p-Laplacian operators, Tian and Li used the fixed-point theorem on convex cones to establish the existence and multiplicity of positive solutions [13].

Digital image processing technology is being applied to more and more high-end fields, and the study of the autofocus function is a key factor driving its progress. Combined with the basic requirements of the performance parameter test of the optical system of a light-weapon optical sight, an automatic focusing system based on image processing technology was designed. In order to expose the existing problems of the evaluation function in current digital image processing technology, this paper explores it in depth. The article takes advantage of both the fast speed of the defocus depth method and the high precision of the depth-of-focus method to improve the effect of automatic focusing. Since common algorithms are still prone to multipeak conditions, this article makes some improvements to the original algorithm, which not only helps to improve the real-time performance of autofocus but also ensures the accuracy of image analysis.

2. Autocontrol Microscope Focusing Algorithm Based on Digital Image Processing

2.1. Basic Principles of Microscope Imaging

The basic principle of a microscope is to use a convex lens to form an enlarged image. Suppose we image a point A, where $u$ is the object distance, $v$ is the image distance, and $f$ is the focal length of the lens; then they satisfy the following imaging formula:

$$\frac{1}{u} + \frac{1}{v} = \frac{1}{f}.$$

The imaging principle of the microscope is shown in Figure 1.

Traditional focusing technology is based mainly on a person's subjective judgment while manually adjusting the focus. This method can achieve very high accuracy when operated by professionals, but not in situations where a momentary picture must be captured or pictures are switched frequently. In the imaging process, if the quantities satisfy the imaging formula above, the state is called quasifocus; at this time, point A forms a clear image A′ on the image plane. If the imaging detector is displaced, point A instead forms a spot A″ on the new plane, but the object distance and image distance no longer satisfy the imaging formula, and the image looks blurred. This phenomenon is called defocusing. The greater the deviation of the detector from its original position, the more serious the defocus and the more blurred the image. To get a clear image, the object distance or image distance can be adjusted flexibly; this process is focusing [14, 15].

Assuming that $D$ is the clear aperture, $v$ is the in-focus image distance, $s$ is the actual detector distance, and $r$ is the radius of the diffuse spot, the following relation is satisfied during the displacement of the detector:

$$r = \frac{D}{2} \cdot \frac{s - v}{v}.$$

When $s - v$ is less than zero, the radius of the diffuse spot is negative and the image acquisition device is located in front of the focus plane; similarly, when the radius of the diffuse spot is positive, the image acquisition device is located behind the focus plane.
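As a rough numerical illustration of the two relations above (not part of the original experiments; the lens parameters and units are arbitrary), the thin-lens formula and the signed blur-spot radius can be evaluated as follows:

```python
# Minimal sketch of the thin-lens relation and the signed diffuse-spot radius.
# Symbols follow this section: u = object distance, f = focal length,
# D = clear aperture, s = actual detector distance. Values are illustrative.

def image_distance(u_mm: float, f_mm: float) -> float:
    """Solve 1/u + 1/v = 1/f for the in-focus image distance v."""
    return 1.0 / (1.0 / f_mm - 1.0 / u_mm)

def blur_radius(u_mm: float, f_mm: float, D_mm: float, s_mm: float) -> float:
    """Signed radius of the diffuse spot when the detector sits at s instead of v.

    Negative radius -> detector in front of the in-focus plane,
    positive radius -> detector behind it (matching the sign convention above).
    """
    v = image_distance(u_mm, f_mm)
    return (D_mm / 2.0) * (s_mm - v) / v

if __name__ == "__main__":
    u, f, D = 50.0, 10.0, 5.0          # object distance, focal length, aperture (mm)
    v = image_distance(u, f)           # in-focus image distance (12.5 mm here)
    for s in (v - 0.5, v, v + 0.5):    # detector displaced around the focus plane
        print(f"s = {s:6.3f} mm  ->  blur radius r = {blur_radius(u, f, D, s):+.4f} mm")
```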

2.2. Point Spread Function

The point spread function (PSF) is the impulse response function of an optical system. When the optical system is focused, a point light source is mapped onto the image surface as an ideal spot [16, 17]. If an out-of-focus condition exists, a relatively blurred spot forms on the plane instead. The distribution of this diffuse spot is called the point spread function $h(x, y)$, where $(x, y)$ represents the spatial coordinates in the image plane. In the ideal, in-focus case the point spread function reduces to a unit impulse:

$$h(x, y) = \delta(x, y).$$

Assuming that $f(x, y)$ is the spatial distribution of the target when imaged at the ideal focus, and $g(x, y)$ is the corresponding distribution of plane light intensity actually recorded, they satisfy the following formula:

$$g(x, y) = f(x, y) * h(x, y),$$

where $*$ denotes two-dimensional convolution.

The point spread function of the lens system acts like a low-pass filter: the smaller the defocus, the higher the cutoff frequency; the two are inversely related. Therefore, when the degree of defocus is extremely high, the cutoff frequency becomes too low, the high-frequency components of the image are attenuated or lost, and the final result is focus failure and a blurred image.
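The convolution model above can be illustrated with a small simulation; the uniform disk PSF is an assumed geometric-optics stand-in for the defocus blur, not the system's measured PSF:

```python
# Sketch of the imaging model g = f * h from this section: a sharp scene f(x, y)
# convolved with a point spread function h(x, y). A uniform disk PSF is used here
# as a simple geometric-optics stand-in for the defocus blur.
import numpy as np
from scipy.ndimage import convolve

def disk_psf(radius_px: int) -> np.ndarray:
    """Normalized uniform disk PSF; integrates to 1, like an ideal impulse response."""
    y, x = np.mgrid[-radius_px:radius_px + 1, -radius_px:radius_px + 1]
    h = (x**2 + y**2 <= radius_px**2).astype(float)
    return h / h.sum()

def defocus(image: np.ndarray, radius_px: int) -> np.ndarray:
    """Simulate a defocused observation g = f * h."""
    return convolve(image.astype(float), disk_psf(radius_px), mode="nearest")

if __name__ == "__main__":
    f_sharp = np.zeros((64, 64))
    f_sharp[28:36, 28:36] = 1.0                 # a bright square as the in-focus scene
    g_blurred = defocus(f_sharp, radius_px=5)   # larger radius -> stronger defocus
    print("max of sharp image  :", f_sharp.max())
    print("max of blurred image:", g_blurred.max())   # energy spreads, peak drops
```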

2.3. Depth of Field of Imaging System

Focusing is required before imaging. After focusing, a relatively clear image can be presented within a distance before and after the focus [18]. This distance is called the depth of field, and it will be affected by the distance from the focal plane to the subject.

2.3.1. Geometric Depth of Field

For each specific image plane, there is only one conjugate object plane in object space. The diffuse spot is the light intensity distribution of the diffraction image formed on the image surface when a point light source passes through the optical system. In actual optical imaging, apart from the ideal imaging point, the other points on the object form diffuse spots [19, 20]. If the diffuse spot is small enough that the image still appears sharp to the human eye, the corresponding axial range is called the geometric depth of field. Setting $n$ equal to the spatial refractive index, $e$ as the limit resolution distance, $M$ as the magnification of the microscope system, and $NA$ as the numerical aperture of the objective lens, and letting the detector element size be $\epsilon$ so that $e = \epsilon / M$, the geometric depth of field is calculated as

$$d_g = \frac{n e}{NA} = \frac{n \epsilon}{M \cdot NA}.$$

2.3.2. Physical Depth of Field

In point-source diffraction imaging, the spot formed at the focal point is called the Airy disk. When the object point moves along the optical axis, the energy of the diffracted bright spot on the image surface also changes [21]. On the premise that the diffraction spot remains basically unchanged, the energy change of the Airy disk is at most 20%, and the corresponding movement of the object point defines the physical depth of field. Setting $n$ equal to the spatial refractive index, $\lambda$ as the wavelength, and $NA$ as the numerical aperture of the objective lens, the physical depth of field is calculated as

$$d_p = \frac{n \lambda}{NA^2}.$$

The total depth of field is the geometric depth of field plus the physical depth of field, that is,

$$d = d_g + d_p = \frac{n \epsilon}{M \cdot NA} + \frac{n \lambda}{NA^2}.$$
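A small numerical sketch of these depth-of-field expressions follows; the Berek-type forms are assumed to match the variable definitions above, and the lens parameters are illustrative only:

```python
# Depth-of-field sketch using the variable definitions above: n = refractive index,
# lam = wavelength, NA = numerical aperture, M = system magnification,
# eps = detector element size. Berek-type expressions are assumed here because
# the original equations were not reproduced in the text.

def geometric_dof(n: float, eps_um: float, M: float, NA: float) -> float:
    """Geometric depth of field: n * eps / (M * NA)."""
    return n * eps_um / (M * NA)

def physical_dof(n: float, lam_um: float, NA: float) -> float:
    """Physical (diffraction-limited) depth of field: n * lam / NA**2."""
    return n * lam_um / NA**2

if __name__ == "__main__":
    n, lam, M, NA, eps = 1.0, 0.55, 20.0, 0.40, 6.5   # air, green light, 20x/0.40, 6.5 um pixel
    d_geo = geometric_dof(n, eps, M, NA)
    d_phy = physical_dof(n, lam, NA)
    print(f"geometric DOF = {d_geo:.3f} um")
    print(f"physical  DOF = {d_phy:.3f} um")
    print(f"total     DOF = {d_geo + d_phy:.3f} um")
```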

2.4. Depth of Focus of the Imaging System

Depth of focus means that when the optical imaging system is focused on an object point, not only the points on the nominal image plane can be seen clearly but also points within a certain range in front of and behind that plane; the extent of this range is the depth of focus [22]. The Rayleigh criterion holds that, in order to obtain a clear image, the wave aberration in the optical system must be less than 1/4 of the wavelength. Setting $F$ as the relative aperture (the focal length divided by the aperture diameter) and $\lambda$ as the wavelength, the focal depth formula is

$$\delta = \pm 2 \lambda F^2.$$
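Under the same assumption (expressing the aperture through the relative aperture $F = f/D$), the depth of focus can be evaluated numerically as follows:

```python
# Rayleigh-criterion depth of focus, expressed through the relative aperture
# F = f / D (an assumed but standard form; the original formula is not shown).

def depth_of_focus(lam_um: float, f_mm: float, D_mm: float) -> float:
    """Half-range of acceptable image-plane displacement: 2 * lam * (f / D)**2."""
    F = f_mm / D_mm
    return 2.0 * lam_um * F**2

if __name__ == "__main__":
    # Example: an f/2 lens at 0.55 um gives roughly +/- 4.4 um of focal depth.
    print(f"depth of focus = +/- {depth_of_focus(0.55, 50.0, 25.0):.2f} um")
```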

2.5. Why Choose the Focus Area

Figure 2 shows the curves of the ideal focus evaluation function and the actual focus evaluation function. The focus area is selected for two reasons: (1) it shortens the autofocus time; (2) it reduces or even avoids the “double peak” or “multipeak” phenomenon caused by the focal depth and depth of field of the optical system. “Double peak” and “multipeak” refer to the phenomenon in which the focus evaluation function has multiple peaks: during image quality evaluation, two images collected within the allowable depth of field may yield the same evaluation value. With this automatic focusing method, if the selected controller has a good control strategy, it can not only meet the requirement of fast focusing but also achieve high focusing accuracy.

2.6. Autofocus Method Classification
2.6.1. Ranging Method

The working principle of the distance measurement method is to calculate the corresponding image distance by measuring the object distance and combining formula (1) and then adjusting the position of the image plane to achieve focusing. Common ranging methods include ultrasonic ranging, infrared ranging, and triangulation. Triangulation involves solid-state triangulation, image migration, VAF component correlation, and PSD ranging.

(1) Ultrasonic Ranging Method. The imaging system is equipped with ultrasonic transmitting and receiving devices. After the imaging system emits ultrasonic waves, they are reflected back and recorded by the imaging system. By measuring the round-trip time of the ultrasound between the imaging system and the target, the distance can be obtained. Supposing $d$ is the object distance, $v$ is the ultrasonic propagation speed, and $t$ is the round-trip time of the ultrasonic pulse, they satisfy the following formula (both ranging relations in this subsection are also illustrated in the numerical sketch at its end):

$$d = \frac{v t}{2}.$$

(2) Infrared Ranging Method. The imaging system is equipped with an infrared transmitter. After the imaging system emits infrared rays, they are reflected by the target back to the lens and recorded by the imaging system. By recording the round-trip time of the infrared rays between the imaging system and the target, the distance can be computed in the same way.

(3) Triangular Forecast Method. The rich image information in the video signal is not fully utilized when the focus is adjusted purely by ranging, which means that this kind of method is still some distance from intelligent focusing, because much of the information contained in the image could serve as an important reference during focusing. In the triangulation arrangement, there is a reflector on each side of the imaging system: the left reflector is fixed in position and its surface is coated, while the right reflector can be rotated by a certain amount to adjust the distance measurement. During ranging, the image formed by the left mirror is the reference image, and the image formed by the light reflected from the right mirror is the comparison image. The position error between the reference image and the comparison image in the image plane reflects the distance of the target. Setting the adjustable angle of the right mirror to $\alpha/2$ (so that the reflected beam is deflected by $\alpha$), $b$ as the length of the baseline, and $d$ as the object distance, then when the reference image and the comparison image coincide, the following formula is satisfied between $\alpha$ and $d$:

$$d = \frac{b}{\tan \alpha}.$$

In the actual measurement, once the distance between the target point and the lens is known, this formula can be used to move the image plane and complete the focusing process.
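As a rough numerical illustration (an assumption-laden sketch, not from the original experiments), the following snippet evaluates the two ranging relations above; the triangulation form $d = b / \tan\alpha$ follows from the geometry described, and all numerical values are illustrative:

```python
# Combined sketch of the two ranging relations in this subsection:
# ultrasonic round trip d = v * t / 2, and triangulation d = b / tan(alpha),
# where alpha is the full deflection of the beam when the right mirror turns
# by alpha / 2. The triangulation form is assumed from the geometry described.
import math

def ultrasonic_distance(v_mps: float, t_s: float) -> float:
    """Object distance from the round-trip time of the ultrasonic pulse."""
    return v_mps * t_s / 2.0

def triangulation_distance(baseline_m: float, alpha_rad: float) -> float:
    """Object distance when the two images coincide at mirror deflection alpha."""
    return baseline_m / math.tan(alpha_rad)

if __name__ == "__main__":
    print(f"ultrasonic:    d = {ultrasonic_distance(343.0, 0.006):.3f} m")   # 6 ms round trip in air
    print(f"triangulation: d = {triangulation_distance(0.05, math.radians(1.0)):.3f} m")
```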

2.6.2. Dynamic Focus Lens Method

The main tool of the dynamic focus lens method is a special lens with a built-in focusing function. The lens is designed to imitate the way the human eye achieves a clear view: a bimorph changes the shape of the lens much as the curvature of the human eye's crystalline lens changes, achieving the same focusing effect. When the applied voltage changes, the bimorph deforms the internal structure of the lens, forming lenses with different curvatures, so this kind of zoom lens can focus automatically by adjusting its focal length.

2.6.3. Time-of-Flight Method

The time-of-flight method is a two-way ranging technology; as the name suggests, its working principle is based on the time of flight. The system sends continuous light pulses to the target, the sensor records the light signal returned from the object, and the flight time of the light pulse is finally converted into concrete distance data. Each pixel must analyze the light intensity information while recording the return time of the light source. Because this method places high requirements on the light source, it is rarely used outside industry.

2.7. Automatic Focusing Method Based on Digital Image Processing

With the development of computer technology, digital image processing technology has also achieved new breakthroughs and has gradually been applied to more and more fields. Compared with traditional focusing methods, autofocusing methods based on digital image processing have unique advantages: under the premise of accurate focusing, computers are not only more operable but also more efficient in image processing [23]. It is precisely because of this that the deep integration of digital image processing technology and autofocus technology has become the top priority of research on autofocus methods in this era. The microprocessor-based video signal analysis method greatly simplifies the structure of the focusing system, the amount of original information obtained is greatly increased compared with traditional focusing methods, and the focusing effect is also greatly improved.

In actual work, whether the image is in focus can be judged from the image quality: the sharper the image, the closer the focus is to the quasifocus position; the more blurred the image, the worse the focusing effect [24, 25]. Generally speaking, automatic focusing methods based on digital image processing can be divided into two types: the depth-of-focus method and the defocus depth method.

2.7.1. Depth of Focus Method

The depth-of-focus method is a focusing method that continuously searches for the optimal focal position. During the focusing process, the computer keeps recording images of different sharpness formed at different focal positions and controls the focusing actuator through the focus evaluation function, continuously adjusting the focus until a position is found whose sharpness meets the requirements and the needed high-quality image is obtained. The depth-of-focus method has a simple principle, low cost, and a high degree of automation, so it can be applied to most imaging systems. However, if the focusing requirements are very demanding, a large number of images must be collected and evaluated, which takes a long time.

2.7.2. Defocus Depth Method

The working principle of the defocus depth method is to obtain the depth information of the imaging system from the defocused image and to achieve focusing through further analysis. Generally speaking, the defocus depth method is divided into two types: one is based on image restoration, and the other is based on estimating the size of the diffuse spot.

The defocus depth method based on image restoration needs to use prior knowledge and theory to calculate the point spread function of the experimental target in the system, so as to form the defocus degradation model of the image and, on this basis, restore a high-sharpness image through reverse calculation. In actual operation, the point spread function must be estimated through image observation, experimental testing, mathematical modeling, and other means. If there is not much valuable information in the image, it is difficult to obtain high-quality images with this method.

The defocus depth method based on estimating the size of the diffuse spot needs to acquire two or three images at different positions and then analyze them. In the analysis, the blur relationship between the images is used to estimate the size of the diffuse spot: the larger the diameter of the diffuse spot, the more blurred the image and the more serious the defocus of the imaging system. Using the principles of geometric optics to adjust the focus position can reduce the time spent on multiple acquisitions and analyses of images, but the focusing accuracy is lower than that of the depth-of-focus method.

3. Autocontrol Microscope Focusing Algorithm Experiment Based on Digital Image Processing

3.1. Research Background

In this era of informatization and digitization, various imaging systems have gradually penetrated into people’s lives and have irreplaceable practical effects. Because of this, people have increasingly higher requirements on the key technology of the imaging system, the autofocus technology. This paper conducts experimental research on the autofocus evaluation function, finds out some of the problems and improves them, and hopes to help promote the further development of digital imaging technology.

3.2. Flow Design of Microscope Autofocusing Structure

The autofocus structure is shown in Figure 3.

With the rapid development of electronic technology and signal processing technology and the emergence of a new generation of imaging devices, the information available for automatic focusing is becoming more and more abundant. The working process of the entire microscope autofocusing system is as follows: first, the CCD camera collects the image; after the image is transmitted to the PC, the PC analyzes and processes it to obtain the corresponding control instructions and then sends the control instructions to the MCU of the lower computer through the serial port. The single-chip microcomputer drives the stepping motor to make the corresponding movements according to the instructions and, through the transmission mechanism, moves the focusing axis of the microscope to achieve focusing.
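For illustration only, the PC-side loop described above might be organized as in the following sketch; the serial command format, the port name, and the capture and scoring helpers are hypothetical stand-ins for the real CCD driver and MCU protocol:

```python
# Illustrative skeleton of the PC-side focusing loop described above. The serial
# command format ("F+100\n" etc.), the port name, and the capture/score helpers
# are hypothetical stand-ins for the real CCD driver and MCU protocol.
import serial          # pyserial
import numpy as np

def capture_image() -> np.ndarray:
    """Placeholder for a CCD frame grab."""
    return np.random.rand(480, 640)

def sharpness(img: np.ndarray) -> float:
    """Placeholder focus score (variance of gray levels)."""
    return float(img.var())

def focus_once(port: str = "COM3", step: int = 100, max_iters: int = 50) -> None:
    with serial.Serial(port, baudrate=9600, timeout=1) as mcu:
        best = sharpness(capture_image())
        for _ in range(max_iters):
            mcu.write(f"F{step:+d}\n".encode())      # ask the MCU to step the motor
            score = sharpness(capture_image())
            if score < best:                         # passed the peak: reverse, halve step
                step = -step // 2
                if abs(step) == 0:
                    break
            best = max(best, score)
```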

In the design circuit, the composition of the hardware circuit power supply module is shown in Table 1.

3.3. Experimental Data Collection

In order to make the research results more scientific, the experiment collected the same series of images in different focus states. First, the focus is adjusted manually: the object distance is changed by rotating the gear, and visual inspection is performed until the target appears as the clearest image, while ensuring that the target stays at the center of the field of view. Then the focus gear is adjusted to move the lens plane back. The drive motor then moves the focus gear through a belt so that the lens plane approaches the target at a uniform speed, and an image is collected after each movement.

In the end, the experiment collected a total of 50 images at the same resolution. Comparing and observing them, it is obvious that they change from blurred to clear and then back to blurred. Taking the center of the image as the focusing window, the function curve corresponding to each autofocus evaluation function can be measured on images of different sharpness. In addition to the comparison of sample pictures and the different focusing function curves, the sensitivity of each algorithm and the time spent are also important reference information in the experiment.

3.4. Experiment Process

In order to improve the focus evaluation function in the digital imaging system, it is first necessary to compare and analyze the advantages and disadvantages of the existing evaluation functions. Autofocus evaluation functions are usually divided into periodic spectrum functions, entropy functions, and functions based on gradient information. The experimental objects in this article are commonly used functions based on gradient information.

Laplace’s equation is also called harmonic equation; it is a kind of partial differential equation independent of direction, with isotropic characteristics. Set as the gray value of the image at , then the function conforms to

Commonly used functions based on gradient information include the Laplace function, the Brenner function, the Fourier function, the Tenengrad function, and the EOG function. Autofocus evaluation functions based on the image gradient mainly calculate the image sharpness evaluation value in the spatial domain according to the richness of the edge and detail information of the image. Evaluating sharpness in the spatial domain rests on the following facts: the edges of a clear image are sharper than those of a blurred image, the contrast is higher, and the gray-level changes are more pronounced, which can be used to judge focus and defocus in the experimental images.
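To make the gradient-type measures named above concrete, the following sketch gives generic NumPy forms of four of them (Laplacian, Brenner, EOG, and gray variance); the exact window handling and constants of the paper's implementation are not given, so these are standard textbook definitions rather than the authors' code:

```python
# Generic NumPy forms of common gradient-type focus measures. These are standard
# textbook definitions, not the authors' implementation.
import numpy as np

def laplacian_measure(img: np.ndarray) -> float:
    """Sum of squared responses of the 4-neighbour Laplacian kernel."""
    f = img.astype(float)
    lap = (-4.0 * f[1:-1, 1:-1] + f[:-2, 1:-1] + f[2:, 1:-1]
           + f[1:-1, :-2] + f[1:-1, 2:])
    return float((lap ** 2).sum())

def brenner_measure(img: np.ndarray) -> float:
    """Sum of squared gray differences between pixels two columns apart."""
    f = img.astype(float)
    return float(((f[:, 2:] - f[:, :-2]) ** 2).sum())

def eog_measure(img: np.ndarray) -> float:
    """Energy of gradient: squared first differences in x and y."""
    f = img.astype(float)
    return float(((f[1:, :-1] - f[:-1, :-1]) ** 2
                  + (f[:-1, 1:] - f[:-1, :-1]) ** 2).sum())

def gray_variance_measure(img: np.ndarray) -> float:
    """Gray-level variance of the focus window."""
    return float(img.astype(float).var())
```

A sharper image yields a larger value for each of these measures, which is what allows them to serve as the ordinate of the focusing curves discussed in Section 4.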

4. Research and Analysis of Autocontrol Microscope Focus Algorithm Based on Digital Image Processing

4.1. Experimental Analysis of the Influence of Different Focusing Functions on Image Sharpness

In order to understand the influence of different focus evaluation functions on the autofocus function of the digital imaging system, this paper compares and analyzes several commonly used focus functions. After controlling the factors that may cause experimental errors, the distance between the experimental object and the objective lens is continuously adjusted, and the objective lens is moved longitudinally at regular intervals. In order to test the response speed of different focus evaluation functions to light, the researcher created different test environments by changing the lighting conditions to highlight the difference of the image.

During the experiment, the researchers collected two sets of photos under different lighting conditions, 25 images per set, for a total of 50 images at the same resolution. The images were sorted and observed according to the defocus distance and show a transition from blurred to clear and back to blurred. It can be seen that, depending on the defocus distance, both far focus and near focus cause the image to become blurred.

In the image sharpness evaluation experiment, it is difficult to make a unified comparison without an exact unit of sharpness measurement. Therefore, the results of the different algorithms are normalized in this paper. The two final sets of data are plotted in Figures 4 and 5. The abscissa in each figure is the number of the image in the acquisition sequence, and the ordinate is the sharpness value after normalization.
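As one possible realization of the uniform processing mentioned above (the paper does not specify its normalization), each function's curve can be min-max scaled to the interval [0, 1] before plotting:

```python
# Min-max normalization of a focus-measure curve so that curves from different
# evaluation functions can be compared on a common [0, 1] axis. The exact
# normalization used in the paper is not specified; this is one common choice.
import numpy as np

def normalize_curve(values: np.ndarray) -> np.ndarray:
    v = np.asarray(values, dtype=float)
    span = v.max() - v.min()
    return (v - v.min()) / span if span > 0 else np.zeros_like(v)
```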

4.1.1. Time Performance Analysis of Different Focusing Functions

Through research, people have found technical methods for processing image signals that greatly improve the ability to distinguish accurately focused image signals from out-of-focus image signals; the result is automatic focusing technology based on video signal analysis. When using a focusing function for autofocus, efficiency is a feature that people value very much: an ideal imaging system needs to form high-quality images in the shortest possible time. Table 2 shows a comparison of several different autofocus functions in terms of time performance.

As can be seen from Table 2, the sensitivity of each algorithm has its own characteristics. The focusing range and sensitivity of the Fourier operator are in the middle, but its amount of calculation is larger than that of the other functions. The time performance of the gray variance operator is relatively ordinary, making it suitable for simple focusing over a wide range. The Brenner operator performs relatively well in sensitivity. The improved Laplace operator uses more edge information, so its sensitivity is higher than before; as a spatial-domain focusing function, it also shows a certain improvement in real-time performance compared with the original.

4.1.2. Clarity Analysis of Different Focusing Functions

(1) The Sharpness Comparison of Multiple Focusing Functions when the Edge Features of the Target Are Obvious. Figure 4 is a scatter plot of the function curves of the five focusing algorithms when the edge features are obvious. According to Figure 4, the unbiasedness of the focusing functions in the experiment is good: when the abscissa is at the center position, the real-time value of each function reaches its peak. The improved Laplace function not only has a narrower peak width but also a higher sensitivity.

(2) Comparison of Sharpness of Different Focusing Functions when the Edge Features of the Target Are Not Obvious. Figure 5 is a graph of the function curves of the five focusing algorithms when the edge features are not obvious. According to Figure 5, when the edge features are not obvious, that is, when the contrast is low, the functions still show good unbiasedness. Owing to the change in the lighting environment, the sensitivity of several focusing functions is slightly reduced. The focusing range of the Brenner algorithm is small, and its sensitivity is relatively poor compared with the case of targets with obvious edge features. When the lens deviates from the in-focus position, the image is blurred, the high-frequency component of the video signal is small, and the low-frequency component is large.

In the experimental analysis of the effect of different focusing functions on image clarity, the improved Laplace operator can be compared against the other evaluation functions, highlighting its competitive advantage. The improved Laplace operator not only has higher sensitivity under changing lighting conditions but also has a larger focusing range than the other algorithms. This high real-time performance is very valuable for spatial-domain computation.

4.2. Autofocus Experiment Analysis and Data Summary

In the comparison experiment, the 10x, 20x, and 50x objective lenses were selected to evaluate the condenser parameters. The comparison experiment parameters are shown in Table 3.

According to the comparative experimental parameters set in Table 3, the experimental data were obtained on the same upright transmission microscope. In this paper, the collected image is processed by means of single-column average gray values: the average gray value of each column of pixels is calculated for the 1024 columns of the image. The results are shown in Table 4.
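As a sketch of the column-averaging step described above (the array shape and gray-level range are assumptions), the per-column mean can be computed directly in NumPy:

```python
# Column-wise average gray value for a 1024-column image, as described above.
# `img` is assumed to be a 2-D gray-level array with 1024 columns.
import numpy as np

def column_means(img: np.ndarray) -> np.ndarray:
    """Mean gray value of each pixel column (length equals the number of columns)."""
    return img.astype(float).mean(axis=0)

if __name__ == "__main__":
    img = np.random.randint(0, 256, size=(768, 1024))
    print(column_means(img).shape)    # (1024,)
```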

The source of data deviation under different conditions mainly comes from the influence of the image data collected from the sample and the incident angle under different objective lenses.

While comparing the several evaluation functions, the window size of the focus area is also selected, mainly because the amount of calculation in evaluating image sharpness is proportional to the number of pixels; therefore, in order to increase the computation speed and efficiency, the number of pixels involved in the calculation must be reduced. The following experiments use images of onion epidermal cells collected with 10x, 20x, and 50x objective lenses. Focused images of onion epidermal cells under different magnifications are shown in Figure 6.

When different window sizes are selected, the corresponding normalized curves of the evaluation functions are shown in Figures 7, 8, and 9. The evaluation functions used in this experiment are the Roberts function, the Laplace function, the sum of absolute gray differences, the gradient square function, and the Brenner function.

It can be seen from Figures 7, 8, and 9 that as the objective magnification increases, the working range of the system becomes narrower and the image sharpness evaluation curves become steeper and sharper. This is because the depth of field differs at different magnifications: as the magnification increases, the depth of field becomes smaller and smaller. Judging from the focus evaluation curves, all evaluation functions are relatively smooth at low magnification and show no obvious local peaks, while at high magnification more local peaks appear.

4.2.1. Autofocus Experiment Return Gap Measurement

The autofocus mechanism in the system adjusts the object distance through the transmission. During focusing, the stepping motor moves both forward and in reverse, and there is a pause in between, which corresponds to the return gap (backlash) of the operating mechanism. That is, once the direction of the motor's movement is reversed, there will be a gap between the number of steps used by the motor when it starts and when it returns, causing the return position of the motor to deviate from expectation. If the return gap is not compensated and the motor position deviates from the ideal position during the return, then in subsequent operation, as the number of direction changes of the motor increases, the position error will become larger and larger. As the lens approaches focus, the video signal amplitude increases and the focusing effect improves; at this time, the change in video signal amplitude and the change in position are in opposite phase, and the microprocessor sends a control signal to drive the focusing motor to move the lens focus forward.

In this paper, the distance of the return gap is measured in the experiment and automatically compensated through a software setting. After five tests, the average of the test data is taken as the return gap; see Figure 10 for the complete data. The data analysis shows that the return gap of the motor is 18, that is, 900 pulses. This value is within the expected error, so it can be corrected automatically by the reversal routine in the software. In actual operation, if the motor exhibits backlash caused by a change of the moving direction, the system runs the reversal routine and adds 900 pulses to the original number of running steps, thereby improving the efficiency and accuracy of focusing.
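The software compensation described above can be sketched as follows; the motor-driver call is a placeholder, and only the 900-pulse figure comes from the experiment:

```python
# Sketch of the software backlash compensation described above: whenever the
# commanded direction of the stepping motor reverses, the measured return-gap
# pulses (900 in this experiment) are added to the move. The hardware call is
# a placeholder.
RETURN_GAP_PULSES = 900

class FocusMotor:
    def __init__(self):
        self._last_direction = 0          # +1, -1, or 0 before the first move

    def move(self, pulses: int) -> int:
        """Return the pulse count actually sent, including backlash compensation."""
        direction = 1 if pulses > 0 else -1
        compensated = abs(pulses)
        if self._last_direction != 0 and direction != self._last_direction:
            compensated += RETURN_GAP_PULSES          # fill the return gap
        self._last_direction = direction
        # send_pulses(direction, compensated)         # hardware call (placeholder)
        return direction * compensated

if __name__ == "__main__":
    m = FocusMotor()
    print(m.move(+2000))   # +2000 (no reversal yet)
    print(m.move(-500))    # -1400 (500 pulses plus 900 compensation)
```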

4.2.2. Research on Autofocus Search Strategy

The focus search technology is a very important part of the autofocus system. Simply put, focus search is the process of locating the peak of the autofocus evaluation function curve. An ideal focus search technique must be able to confirm the focus quickly and accurately to complete the autofocus task. In order to use the hill-climbing method to improve the focusing efficiency during image processing, the focus evaluation function must be unimodal. In actual operation, however, many interference factors distort the simple single-peak shape of the function curve, causing multiple local peaks to appear and thereby reducing the focusing efficiency.
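A minimal coarse-to-fine hill-climbing search of the kind discussed here might look as follows; `score_at` stands in for moving the motor to a position and scoring the captured image, and the step sizes and stopping threshold are illustrative:

```python
# Coarse-to-fine hill-climbing focus search: climb while the focus measure keeps
# rising, then reverse with a smaller step once it falls. This is a generic
# sketch, not the paper's exact search strategy.
from typing import Callable

def hill_climb_focus(score_at: Callable[[int], float],
                     start: int, step: int = 800, min_step: int = 25) -> int:
    position = start
    best_pos, best_score = position, score_at(position)
    while abs(step) >= min_step:
        candidate = position + step
        score = score_at(candidate)
        if score > best_score:                    # still climbing: accept the move
            position, best_pos, best_score = candidate, candidate, score
        else:                                     # passed the peak: turn back, refine
            step = -step // 2
    return best_pos

if __name__ == "__main__":
    # Synthetic single-peak focus curve with its peak at position 3000.
    peak = 3000
    curve = lambda p: -abs(p - peak)
    print(hill_climb_focus(curve, start=0))       # should land at or near 3000
```

A unimodal evaluation curve guarantees that this climb converges to the true focus; the local peaks discussed above are exactly what can trap such a search, which motivates both the focus-window selection and the improved operator.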

This article compares and analyzes the three models of BPIC, ODFM, and ROL, and tests the fit between them and the defocus of the optical system. Figure 11 shows the accuracy of the autofocus test of the three models in several different scenarios.

According to the data analysis in Figure 11, the BPIC model is greatly affected by the environment. It has a very high accuracy rate in scene 1 and scene 3, but the accuracy rate in scene 2 is only 0.25. The accuracy of the ROL model is not much different in each scene, but the overall accuracy is low, and the highest is only 0.825. In contrast, the ODFM model has very good performance, with 3 out of 4 test scenarios achieving 100% accuracy. It can be seen that under normal circumstances, choosing the ODFM model for autofocus can have better results.

5. Conclusions

An excellent autofocus algorithm is an important guarantee for an imaging system to obtain high-quality images. The research in this paper centers on an autocontrol microscope focusing algorithm based on digital image processing. After reviewing the research history of the focus evaluation function at home and abroad, the working principle of the microscope, the depth of field and depth of focus of the imaging system, the various autofocus methods, and the depth-of-focus and defocus depth methods of digital image processing are introduced. Combined with experiments, this article analyzes the different focus evaluation functions in digital image processing technology and examines their sensitivity and clarity in actual operation. Comparing the improved Laplace operator with the original algorithm shows that it has higher sensitivity and stronger noise resistance, and its focusing range is enlarged to a certain extent. Even in the presence of interference, the improved Laplace function maintains good unimodality, and the hill-climbing method can obtain high-quality clear images in the shortest time.

During the automatic focusing process, the reversal of the motor's direction of movement causes its actual moving distance to differ from the theoretical one, thus affecting the accuracy of focusing. This paper calibrates the return gap of the autofocus system, calculates the return gap and the corresponding motor pulses from the data collected in the experiment, and sets up a reversal routine in the software to compensate, which improves the accuracy of focusing. This paper also studies an autofocus search strategy based on digital image processing, combined with the principle of the focusing curve, and investigates the focusing effects of the three models BPIC, ODFM, and ROL under different conditions. Autofocus search technology must ensure that the imaging system can effectively find the focus for imaging. The experimental data show that ODFM is the best performer among the three methods, with very high accuracy in every scenario. This kind of autofocus search strategy based on the optical defocus gradient model can achieve precise focusing while ensuring real-time performance.

This article fully grasps the background and significance of the research subject and analyzes the current situation and applications of autofocus technology. Although the research has reached some valuable conclusions, it is constrained by objective conditions and still has certain limitations. The following issues need further consideration in future work. First, different autofocus functions have different advantages and disadvantages; in practical operation, how to select the most suitable focusing algorithm according to different imaging requirements is an urgent problem to solve. The system designed here has the advantages of fast speed, high precision, and small size; it solves the problem of fast and accurate automatic focusing and meets the requirements for accurate testing of the performance parameters of the optical sights of light weapons. Second, the anti-interference capability of the autofocus function should be continuously improved through research, so that the imaging system can still focus smoothly in complex environments. Last, the intelligent selection of imaging targets by the imaging system should be optimized: many mobile phone cameras now have face recognition functions, but quickly confirming the target subject against a complex background still requires further technical improvement. The pace of human exploration of the world has never stopped; to better face the unknown challenges of the future, continuous research on microscope focusing algorithms is very necessary.

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that there is no conflict of interest with any financial organizations regarding the material reported in this manuscript.

Acknowledgments

This work was supported by the Jilin Province Higher Education Association Project (JGJX2020D516).