Abstract

This paper studies the processing of digital media images using a diffusion equation to increase image contrast by stretching the distribution of the image's luminance data, so that clearer information can be obtained from digital media images. A nonlinear diffusion filtering image enhancement algorithm is used, in which a coupled denoising model adds a velocity term to the diffusion function, making the diffusion of the original model smooth; the interferogram is solved numerically with the help of numerical simulation to verify the denoising effect before and after the model correction. To meet real-time requirements in the field of video surveillance, this paper focuses on optimizing the algorithm program, including software pipeline optimization, balanced allocation of arithmetic units, single instruction multiple data (SIMD) optimization, arithmetic operation optimization, and on-chip storage optimization. These optimizations enable the nonlinear diffusion filter-based image enhancement algorithm to achieve high processing efficiency on the C674x DSP, with a processing speed of 25 frames per second for video images. Finally, the saliency mean value of superpixel blocks is calculated in superpixel units, and the image is segmented into objects and background by combining this with the Otsu threshold segmentation algorithm. The proposed algorithm is tested on several sets of remote sensing images of different resolutions, with the Markov random field model and the fully convolutional network (FCN) algorithm used as comparison algorithms. Qualitative and quantitative comparison of the experimental results shows that the proposed algorithm has an obvious practical effect on contrast enhancement of digital media images and has a certain practicality and superiority.

1. Introduction

In recent years, with the development of information science and technology, images have become one of the key means by which people obtain, express, and transmit information in daily life, and their varied forms of expression account for a large share of information exchange; the importance of images has thus gradually been recognized, and research on image processing has been launched accordingly [1]. Digital image processing uses computers to process the information contained in an image so that the processed image information can better meet people's subjective and objective needs for access to the target image information. The essence of a digital image is a digital code, which can be displayed by a computer and output as an image. Image processing generally includes image transformation, enhancement, restoration, segmentation, coding, and morphological processing. Image restoration includes image denoising, restoration, and deblurring, and this paper focuses on one part of image restoration, i.e., image denoising [2].

Digital images usually contain noise in practical applications; we call these noisy images. The noise in an image stems from two major factors: defects of the imaging equipment itself and the uncontrollable external environment during image generation [3]. People are often disturbed by noise when observing images to extract image information, which affects their correct understanding and judgment of the real information. In some cases, the useful information in the image is completely submerged by noise, so that subsequent related work is hindered. Observing the data of the various indicators, it can be found that although the Perona-Malik model can effectively smooth image noise and background when the parameter value is small, it does not effectively improve image contrast or the average gray value; on the contrary, the average gray value decreases, so darker images are not enhanced well. If noisy images are used directly for image processing tasks such as image recognition, the results are usually unsatisfactory. Therefore, to ensure the reliability and validity of subsequent image processing, as well as the storage and analysis of image information, denoising of contaminated images is an essential part of image processing. Conventional image denoising methods include wavelet denoising, median denoising, mean denoising, and Wiener denoising [4]; a sketch of these filters follows below.
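As a point of reference, the conventional filters named above can be sketched in a few lines; the noise level and window sizes below are illustrative assumptions, not values from this paper.

```python
# Hedged sketch: mean, median, and Wiener denoising with SciPy on a
# synthetic noisy image. Window sizes (3) and noise level (0.1) are
# illustrative assumptions only.
import numpy as np
from scipy import ndimage, signal

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 1.0, 128), (128, 1))   # synthetic ramp image
noisy = clean + 0.1 * rng.standard_normal(clean.shape)  # additive Gaussian noise

mean_denoised   = ndimage.uniform_filter(noisy, size=3)  # mean (box) filtering
median_denoised = ndimage.median_filter(noisy, size=3)   # median filtering
wiener_denoised = signal.wiener(noisy, mysize=3)         # Wiener filtering
```

Wavelet denoising additionally requires a transform library and is omitted here.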

Image enhancement algorithms aim to increase useful image information and reduce invalid information such as burr noise through pixel-level processing, ultimately providing more effective image features for the human eye or for subsequent computer processing. In the evolution of image enhancement algorithms, the anisotropic diffusion model based on partial differential equations inherited and optimized the local-processing enhancement algorithms; the traditional Perona-Malik model, combined with gradient calculation, can improve image contrast, increase image details, and reduce noise [5]. However, since this method smooths the detail regions during enhancement, causing more loss of detail information, processing for retaining details must be added to the algorithm. At the same time, the model has great limitations in single-pass processing: the enhancement effect in the regions determined by gradient change is not obvious after a single pass, so in practice, when using the Perona-Malik model to enhance edge and non-edge regions, the algorithm is usually iterated several times to achieve the ideal enhancement of detail and contrast, which makes the real-time requirement harder to meet. The Perona-Malik model is improved here in three aspects. First, edge information is preserved through improvement of the model parameters. Second, by inserting algorithms before the model, image contrast and brightness are improved. Third, under the premise of ensuring image quality, the iterative calculations in the processing are accelerated as much as possible. A sketch of the basic Perona-Malik iteration follows.
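For concreteness, here is a minimal explicit-scheme sketch of the Perona-Malik iteration described above; the conductance parameter K, the time step, and the iteration count are assumptions for illustration, not the paper's settings.

```python
# Hedged sketch of explicit Perona-Malik diffusion on a grayscale image.
import numpy as np

def perona_malik(u, n_iter=20, K=0.1, dt=0.2):
    u = u.astype(float).copy()
    for _ in range(n_iter):
        # differences toward the four neighbours
        dN = np.roll(u, -1, axis=0) - u
        dS = np.roll(u,  1, axis=0) - u
        dE = np.roll(u, -1, axis=1) - u
        dW = np.roll(u,  1, axis=1) - u
        # edge-stopping conductance g(s) = 1 / (1 + (s/K)^2)
        g = lambda d: 1.0 / (1.0 + (d / K) ** 2)
        # explicit update: diffuse strongly in flat areas, weakly across edges
        u += dt * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
    return u
```

Iterating this several times, as the text notes, is what makes a naive software implementation hard to run in real time.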

It is gratifying that the method can also be extended to the color domain, which provides a very important basis and reference for the theoretical development of image enhancement [6]. Grayscale transformation is a relatively simple enhancement method that changes the pixel values of an image point by point according to a pixel gray-value transformation function chosen for a given target; it is simple to implement and fast, but because the same global function is applied to every pixel, local detail enhancement is insufficient. In practice, the most widely used class of image enhancement methods is based on the histogram [7]. Histogram enhancement, or histogram equalization, changes the histogram distribution of an image by some operation so that the pixel gray values are spread over the entire gray range, thereby increasing image contrast and information. Histogram enhancement is simple to implement and readily produces good results, but it does not enhance the image data selectively and may therefore amplify image noise; moreover, since in practice it can only approximate a uniform histogram, it cannot equalize the image pixels over the whole gray range, which weakens the image information to some extent. A minimal implementation follows.
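A minimal histogram equalization routine for an 8-bit grayscale image, matching the global mapping described above, might look as follows (a sketch, not the paper's implementation):

```python
# Hedged sketch: global histogram equalization via the cumulative histogram.
import numpy as np

def hist_equalize(img):                       # img: 2-D uint8 array
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                 # first occupied gray level
    # map each gray level through the normalized cumulative distribution
    lut = np.round((cdf - cdf_min) / float(img.size - cdf_min) * 255.0)
    return np.clip(lut, 0, 255).astype(np.uint8)[img]
```

Because the same lookup table is applied to every pixel, noise is stretched along with the signal, which is exactly the non-selectivity drawback noted above.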

Scholars have done much research on the homogenization of single images. The uneven brightness of underwater visual images caused by point light source illumination has been effectively corrected by element-by-element gray-scale stretching [8]. The traditional Perona-Malik model combines gradient calculation to improve image contrast, increase image details, and reduce noise; however, because this method smooths details during enhancement, causing a greater loss of detail, detail-preserving processing must be added to the algorithm. A light-field correction filter based on the difference-of-Gaussians filter has been constructed, with optimized filter parameters, and gives good quality and speed for uneven illumination; an adaptive brightness equalization method has been proposed for uneven brightness in text images and achieves an effective result; adaptive segmentation and adjustment of the image to be processed using the mean and variance achieve a good experimental effect. A color-leveling method based on normal-intercept linear stretching and Gaussian band-stop filtering has been proposed, which can better eliminate surface creases and enhance brightness and works well for the color leveling of scanned historical topographic maps. A light-compensation method has been proposed to eliminate uneven illumination [9], and wavelet-based transforms have been used for homogenization with good results. These processing algorithms can be divided into two categories according to the model adopted: additive models and multiplicative models. The additive model treats the image to be processed as the sum of a brightness-balanced target image and a noise (background) image and removes the noise image by subtraction to eliminate the uneven illumination. This type of algorithm focuses on determining the noise image of the unevenly illuminated remote sensing image, and the noise image largely determines the quality of the light homogenization result; a sketch follows.
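A minimal sketch of the additive model is given below; estimating the background with a large Gaussian blur is an assumption of this illustration, since the cited works each use their own background estimators.

```python
# Hedged sketch of additive illumination correction: estimate the noise
# (background) image, subtract it, and restore the mean brightness.
import numpy as np
from scipy import ndimage

def additive_light_correction(img, sigma=50.0):
    img = img.astype(float)
    background = ndimage.gaussian_filter(img, sigma)   # assumed background estimator
    corrected = img - background + background.mean()   # additive-model subtraction
    return np.clip(corrected, 0, 255)
```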

In 2017, a Field Programmable Gate Array (FPGA) architecture based on bilateral filtering was designed to accelerate single-image deblurring and is capable of processing video in real time. In the same year, an FPGA implementation of a fast rank-order median filtering algorithm was designed, which fully exploits the parallelism of FPGAs. In the last two years, the number of FPGA implementations of such algorithms has increased, and, building on previous research, an enhancement algorithm based on statistical and logarithmic image processing has been proposed [10]. The proposed method fuses multiple computed luminance channels with the statistical information of the color channels obtained from the input color image to perform adaptive color enhancement. To address the excessive resources occupied by the filter module, a hardware structure with a maximum template based on a tree structure has been proposed, which largely solves the problem of excessive resource occupation. A tone-mapping algorithm with a halo-reduction filter has been implemented on FPGA, accomplishing functions such as adaptive parameter estimation and Gaussian-based halo-reduction filtering [11]. For infrared images, an FPGA implementation of an enhancement algorithm combining bilateral filtering and histogram equalization has been proposed; by partitioning the image, repetitive processing of the adaptive histogram equalization algorithm within one image is avoided, and the pipelining and parallelism of FPGA image processing are fully utilized. Among the many accelerated implementations, FPGA high-speed implementations of image enhancement algorithms based on anisotropic diffusion models are rare, so research in this area is valuable. Today, major companies have introduced numerous custom hardware-based solutions that developers can select directly for common operations such as contrast adjustment and edge detection; at the same time, there is a move toward high-level design, as seen in the development of HLS heterogeneous design and MATLAB heterogeneous design in recent years [12].

3. Image Contrast Enhancement Algorithm Based on Diffusion Equation

3.1. Image Enhancement Algorithm

Nonlinear diffusion is a class of nonlinear filtering methods based on partial differential equations that mimic the physical process of impurity particle motion. Filtering based on the nonlinear diffusion equation has two advantages: one is selective smoothing, i.e., the image can be selectively blurred to protect some feature regions while smoothing others; the other is that the iterative evolution of the nonlinear diffusion equation readily produces “piecewise constant” images. Therefore, nonlinear diffusion filtering can be used for the image contrast enhancement algorithm in this paper. An objective analysis method is used for the image results [13]. The statistical model based on the Bayesian framework uses the maximum a posteriori criterion when estimating the probability of the original image; it rests on a certain probability distribution model, but this makes it difficult to derive a prior probability model of the image. Since subjective judgments depend mainly on observing image detail and quality and are influenced by the observer's state, they are not used here. The objective criteria compared are image contrast, image standard deviation, and image mean gradient; the processing time on an i5 processor with a base frequency of 2.6 GHz is added for reference, and this reference time also includes the time taken to read the image cache at the start of the program. The image contrast is calculated as shown below.
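The contrast equation itself is not reproduced in this copy; a commonly used four-neighbour definition, which we assume is the one intended, is

\[
C=\sum_{\delta}\delta(i,j)^{2}\,P_{\delta}(i,j),
\]

where \( \delta(i,j)=|i-j| \) is the gray-level difference between adjacent pixels and \( P_{\delta}(i,j) \) is the distribution probability of that difference over the four-neighbour pixel pairs.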

Observing the index data, it can be found that although the Perona-Malik model effectively smooths image noise and background when the parameter value is small, it does not effectively improve the contrast or mean gray value of the image; rather, the mean gray value decreases, which is unfavorable for enhancing darker images. To prevent the blurring of details caused by an increasing number of iterations, the algorithm can preserve the original detail information of the image only by leaving it unprocessed; the diffusion function in the traditional Perona-Malik model thus essentially acts as a smoothing function. If the parameter value is increased, the image diffuses more readily, requiring more iterations to retain edge information; this lowers image quality, and although contrast improves, image details continue to blur across the iterations. Although the image texture increases, as shown by the average gradient, the overall image quality decreases compared with the original image in terms of standard deviation and the representation of image detail [14]. Furthermore, according to the processing time in the table, the time for 5 iterations on a single image makes it difficult to meet real-time requirements with a software implementation, and these phenomena have greatly limited the application of the Perona-Malik model in the field of image enhancement. We treat the variables, dynamic parameters, and motion variables in the simulation of porous-media flow as continuous functions of time and determine the coordinates of the spatial position. To complete this operation, we need to assume a medium and establish a seepage simulation against this background, for example, via the interpolation function of the image grayscale. After an effective analysis of the model, the Perona-Malik model is improved in the following three aspects. First, edge information is preserved by improving the model parameters. Second, image contrast and brightness are improved by introducing algorithms before the model. Third, the iterative computation during processing is accelerated as much as possible while ensuring image quality, as shown in Figure 1.

The development of partial differential equation (PDE) methods for image processing has been considerable over the last three decades, and digital image processing, a favorite of academia, is studied by many disciplines, such as physics, chemistry, computer science, and information engineering. PDEs have been widely applied not only in mathematics but also in several other branches, such as the study of thermal lensing in optics; PDE-based image processing methods attract scholars with their unique properties, and the application of PDE methods covers almost the whole field of image processing, including image segmentation, denoising, recognition, and feature extraction. This chapter introduces the basic concepts of the variational method in image processing, the variational principle, and the Euler equation in detail and studies the principle of the image variational method through the variational problem in image restoration; that is, the problem is transformed into an extremal problem on a functional, whose minimum is then sought. Next, PDE models for image processing are studied, focusing on the total variation (TV) model, the linear model, the P-M model, and the numerical solution for interferograms. Finally, Chen's coupled PDE model, which dispenses with the prefiltering of the regularized P-M model, is studied and solved numerically.

At the same time, many experts and scholars have studied the variational methods of image processing in greater depth, and many newer and better variational image processing models have been proposed on this basis; among them, statistical models based on the Bayesian framework and variational models based on PDEs are the two main classes of models currently used in the field of image restoration. When the diffusion coefficient is positive, that is, in a flat area with a low gradient value, the noise in that area is smoothed by forward diffusion; when the diffusion coefficient is negative, backward diffusion takes place, and the backward diffusion equation enhances the edge details and other information of the image. The statistical model based on the Bayesian framework uses the maximum a posteriori criterion when estimating the probability of the original image, which follows a certain probability distribution model, but this makes it difficult to derive a prior probability model of the image. Within this framework, the Gibbs random field image restoration model and the Markov random field model have been derived. Over the years, research hotspots have gradually shifted to variational models, although the relevant research in China is still lacking. The variational image restoration model is a class of deterministic methods in which the image restoration problem is converted into an extremal problem on a functional by introducing an energy function, also known as a variational problem.

HSV color space is a color space based on human eye perception, divided into hue (H component), saturation (S component), and brightness (V component). Since the color and luminance components are decoupled in the HSV color space, the color information of the original image can be preserved while the luminance is processed. Since the nonlinear diffusion image enhancement algorithm based on the HSV domain focuses only on enhancing the contrast of the luminance component, the conversion from RGB color space to HSV color space is performed at the beginning of the processing flow; only the luminance component is processed in the core enhancement flow, while the chromaticity information is retained in the hue and saturation components; finally, the conversion from HSV color space back to RGB color space is performed at the end of the algorithm. The processing flow of the HSV domain-based nonlinear diffusion image enhancement algorithm is shown in Figure 2.
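A minimal sketch of this flow, with a placeholder for the nonlinear diffusion enhancement of the V channel (the function name and library choice are assumptions, not the paper's code):

```python
# Hedged sketch of the HSV-domain enhancement flow of Figure 2.
from skimage import color

def enhance_in_hsv(rgb, enhance_luminance):
    hsv = color.rgb2hsv(rgb)                      # RGB -> HSV at the start
    hsv[..., 2] = enhance_luminance(hsv[..., 2])  # enhance only the V channel
    return color.hsv2rgb(hsv)                     # HSV -> RGB at the end
```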

Simulations of seepage in porous media under the continuum assumption are usually performed with a fluid having continuity. In practice, however, the density of the fluid occupying the space can only be expressed as a function of some continuum. To analyze and explain percolation flow in porous media using PDE methods, we take the variables, dynamic parameters, and motion variables in the simulation as continuous functions of time and determine the coordinates of the spatial location; to accomplish this, a medium must be assumed and the percolation simulation built in this context, such as the interpolation function of the image grayscale [15]. For different single-phase Newtonian fluids, the study of flow in porous media shows that the scale factor in Darcy's law is independent of the properties of the single-phase Newtonian fluid and is related only to the structural properties of the porous medium itself. In other words, if the same porous medium carries different single-phase Newtonian fluids, the scale factor in Darcy's law, which represents the permeability, does not change. The scale factor in Darcy's law is therefore equivalent to the permeability, a constant that reflects the structural properties of the porous medium, as written below.
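For reference, Darcy's law with the scale factor written explicitly is

\[
q \;=\; -\frac{k}{\mu}\,\nabla p,
\]

where \(q\) is the volumetric flux, \(\mu\) the dynamic viscosity of the fluid, \(\nabla p\) the pressure gradient, and \(k\) the permeability, i.e., the scale factor discussed above, which depends only on the structure of the porous medium.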

3.2. Diffusion Equation Enhancement Algorithm Model Construction

Image processing based on partial differential equations has gradually become an important branch of digital image processing because of its strong underlying physical and mathematical theory. The simplest and best-studied PDE approach to image smoothing is based on the diffusion process. Diffusion is a physical process that balances concentration differences without creating or destroying mass; the classical thermal diffusion equation lays the theoretical foundation for partial differential equations in image processing. It can be expressed mathematically as
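The equation itself is not reproduced in this copy; the classical diffusion (heat) equation referred to here takes the form

\[
\frac{\partial u}{\partial t} \;=\; \operatorname{div}\bigl(c\,\nabla u\bigr),
\]

where \(u\) plays the role of the concentration (for an image, the gray value) and \(c\) is the diffusion coefficient; for constant \(c\) this reduces to the linear heat equation \(\partial u/\partial t = c\,\Delta u\).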

From the derivation of the law of conservation of matter, it is known that matter generally flows from high concentration to low concentration, with a flow rate proportional to the concentration gradient; this physical relation is known as Fick's law. The mathematical models constructed by image processing methods based on partial differential equations, usually built upon the diffusion equation of Eq. (4), can generally be divided into linear and nonlinear diffusion methods, the distinction being determined mainly by the diffusion coefficient [16]. When the diffusion coefficient is a scalar constant, Eq. (4) is a linear isotropic diffusion equation; when the diffusion coefficient is a scalar variable, it is a nonlinear isotropic diffusion equation; and when the diffusion tensor is chosen as a function of the estimated local image structure, the diffusion process leads to nonlinear anisotropic diffusion filtering.

A nonlinear diffusion model is a partial differential equation built from a diffusion function that describes the differential structure of an image. The nonlinear diffusion PDE was first proposed by Perona and Malik and is also known as the P-M model; it builds the diffusion term and diffusion direction from the gradient of the input image, with the initial intention of solving the edge blurring caused by the traditional linear diffusion equation. To address the sensitivity of the equation to noise, Catté et al. improved the Perona-Malik model by proposing the regularized Catté model. Separately, Gilboa et al. locally adjusted the diffusion coefficients of the Perona-Malik model according to image features (e.g., edges and textures) and proposed forward-and-backward diffusion algorithms for adaptive image enhancement and denoising. Later, building on these methods, scalar diffusion coefficients were replaced by diffusion tensors constructed from the texture features of the image, as in the edge-enhancing model and the coherence-enhancing diffusion model proposed by Weickert.
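For reference, the Perona-Malik equation and its two classical edge-stopping functions, standard in the literature, are

\[
\frac{\partial u}{\partial t}=\operatorname{div}\bigl(g(|\nabla u|)\,\nabla u\bigr),\qquad
g_1(s)=e^{-(s/K)^{2}},\quad g_2(s)=\frac{1}{1+(s/K)^{2}},
\]

where \(K\) is the gradient threshold separating the edges to be preserved from the noise to be smoothed.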

From Eq. (5), it can be seen that the first term on the right side of the equation resembles the diffusion function in the Perona-Malik model, i.e., forward diffusion, while the second term represents backward diffusion. When the diffusion coefficient is positive, the noise in flat regions with low gradient values is smoothed by forward diffusion; when the diffusion coefficient is negative, a backward diffusion process enhances information such as the edge details of the image. The stability of the algorithm can be maintained because most pixels in natural images have low gradients, and only singular edges invert the sign of the diffusion coefficient. However, the FABD model is not very practical for highly textured or very noisy images, as shown in Figure 3.
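Eq. (5) is not reproduced in this copy; a representative forward-and-backward diffusion coefficient of the kind proposed by Gilboa et al. (the exact parameterization below is an illustration and may differ from the paper's Eq. (5)) is

\[
c(s)=\underbrace{\frac{1}{1+(s/k_f)^{n}}}_{\text{forward, smoothing}}\;-\;\underbrace{\frac{\alpha}{1+\bigl((s-k_b)/w\bigr)^{2m}}}_{\text{backward, sharpening}},
\]

which is positive for small gradient magnitudes \(s\) (forward diffusion in flat regions) and negative near \(s\approx k_b\) (backward diffusion at edges).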

In the system, displaying the image requires caching the data, and this paper uses a three-frame cache mode for display, which is more reliable than the two-frame ping-pong cache. At the same time, the image data transfer module cannot interface directly with the MIG IP core, so a driver control module is needed to implement the above functions; the DDR3 driver control module is therefore the focus of the data storage module design in this paper. There are two ways to implement the line cache. One is to use shift registers to save data into registers; this method is easy to design, but the number of registers is limited: at most 1080 registers can be formed, and shift registers consume a large amount of register resources. The pixel data of one image line in the format used in this paper exceeds the maximum length of a shift register, so several shift registers would be needed, which makes this implementation inefficient. The designed state diagram is shown in Figure 4.

With the development of science and technology and the expansion of disciplinary fields, many physical problems needing solutions have gradually been revealed, and the corresponding theories have become increasingly complete. The PDE method in image processing has improved as the mathematical theory has matured; it has received much attention from scholars at home and abroad and has been researched extensively because of its superiority over other traditional denoising methods. To achieve a good denoising effect, corresponding nonlinear diffusion models are established, with the expectation that the detailed texture of the image can be maintained while denoising. Usually, however, such models are ill-posed. To correct this, many scholars have conducted in-depth research on the regularization of nonlinear diffusion models, and two classical regularization models have been proposed to solve the well-posedness problem, namely, the spatial domain regularization model and the time domain regularization model [17]. When observing images and extracting image information, people are often disturbed by noise, which affects their correct understanding and judgment of the real information; worse, the useful information in the image may be completely overwhelmed by noise, hindering subsequent work. These two models transform the ill-posed nonlinear model into a well-posed one, but the regularization model usually requires preprocessing the original image, and the selection of the Gaussian preprocessing parameters is determined by the amount of noise in the image to be processed; since that noise level is unknown, the setting of the preprocessing scale parameter is limited. This Gaussian prefiltering is difficult to reconcile with the real-time processing requirements of the InSAS system. Therefore, the study of multiphysics-field image processing methods is important for real-time image denoising.

4. Analysis of Results

4.1. Enhanced Algorithm Results

From the experimental results, we can see that for the independent, regular buildings in the upper left part of the S1 experimental image, both the algorithm in this paper and the Markov random field algorithm achieve good results. For the long strip-shaped buildings in the lower part of the image, which are more similar to the background in color and brightness and whose edges are irregularly jagged, both the algorithm in this paper and the Markov random field algorithm show some missed detections: the detected buildings are incomplete, with half of a building target detected and the other half classified as background. The biggest difference between the two methods lies in the upper right of the image, where several contiguous buildings have irregular shapes. The algorithm in this paper extracts the irregular buildings on the right side well because it takes into account various features such as brightness, color, edge, structural symmetry, and structural integrity, while the Markov random field model performs poorly here and misses more detections. The results also show that for single buildings, both the algorithm in this paper and the Markov random field model achieve satisfactory results; however, some gray houses in the middle of the image are not detected by the Markov random field model because of the small contrast between their grayscale and the background. Several gray buildings at the upper end of the image, indistinguishable from the background, are merged into a whole block by the superpixel segmentation and are thus extracted by the algorithm in this paper, whereas the Markov random field model omits them entirely, as shown in Table 1.

Compiler optimizations are done automatically by adding a compile optimization option or by adding optimization information to the code. The goal is to achieve software pipelining, a balanced allocation of arithmetic units, and single instruction multiple data (SIMD) optimization effects on the loop bodies in the code; a simplified illustration follows the list.
(1) Software pipelining. Software pipelining techniques can be used to orchestrate the instructions in a loop body so that instructions from multiple iteration layers are executed in parallel within each iteration layer. One factor that limits software pipelining is the dependency between adjacent iterations, i.e., a later iteration depends on the results produced by the previous one.
(2) Balanced allocation of arithmetic units. In some loop bodies, under a single iteration layer, instructions are not allocated evenly to the two arithmetic units in the CPU, so the two units consume different numbers of clock cycles. The solution is to unroll the loop body and reallocate the arithmetic units by treating multiple iteration layers as a single iteration layer.
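The following NumPy fragment is not DSP code but an analogy (an assumption of this illustration) for the two points above: iterations without dependencies map directly to SIMD/vector execution, while a loop-carried dependency forces the sequential execution that limits software pipelining.

```python
import numpy as np

x = np.arange(1024, dtype=np.float32)

# (1) no dependency between iterations: vectorizes like SIMD
y = x * 2.0 + 1.0

# (2) loop-carried dependency: z[i] needs z[i-1], so iterations
# cannot be overlapped -- the pattern that limits software pipelining
z = np.empty_like(x)
z[0] = x[0]
for i in range(1, len(x)):
    z[i] = 0.5 * z[i - 1] + x[i]
```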

Fingerprint image enhancement is an important part of the preprocessing step of the fingerprint recognition process; it aims to repair degraded fingerprint images, connect broken lines, and improve the contrast of fingerprint images. After carefully studying the anisotropic diffusion algorithm, a two-stage fingerprint image enhancement algorithm based on anisotropic diffusion and shock filtering is proposed in combination with the shock filtering algorithm [18]. The first stage of this method uses the CED method to enhance the degraded fingerprint image and afterwards sharpens the image edges by shock filtering. Processing a degraded fingerprint image this way retains the positive results of coherence-enhancing diffusion for repairing the interrupted lines of the fingerprint image while enhancing its edges and contrast. The basic principle of the discontinuous shock filter proposed in 1975 is to determine, from the sign of the Laplace operator, whether a pixel belongs to the influence region of a maximum or a minimum: if the value of the Laplace operator is negative, the pixel belongs to the influence region of a maximum, and a dilation operation is applied; if the value is positive, the pixel belongs to the influence region of a minimum, and an erosion operation is applied. This process is iterated as required until oscillation is generated at the boundary of the two influence regions, sharpening the enhanced edges and making them crisp, as shown in Figure 5.
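A minimal sketch of this Laplacian-sign shock filter is given below; the step size and iteration count are illustrative assumptions.

```python
# Hedged sketch of a shock filter: u_t = -sign(Laplacian(u)) * |grad u|,
# i.e., dilation near maxima (negative Laplacian), erosion near minima.
import numpy as np
from scipy import ndimage

def shock_filter(u, n_iter=10, dt=0.25):
    u = u.astype(float).copy()
    for _ in range(n_iter):
        gy, gx = np.gradient(u)
        grad_mag = np.hypot(gx, gy)
        lap = ndimage.laplace(u)
        u -= dt * np.sign(lap) * grad_mag   # sharpen toward piecewise-constant edges
    return u
```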

In this subsection, we use two numerical methods to implement our model, designing a numerical scheme based on the existing finite difference method and the AOS method. For the numerical discretization, we first refer to the basic discretization method; however, finite-difference discretization is time-consuming, so a fast algorithm is needed, and the AOS algorithm is used to improve efficiency. The AOS algorithm is unconditionally stable, and the accuracy required for the experiment can be met while higher experimental efficiency is obtained by adjusting the time step. The numerical scheme is then implemented in a program, the experimental effect is observed, the model is analyzed more deeply according to the experimental results, and shortcomings are found and corrected in time so that the model can be improved. The finite difference method can be used directly for the numerical discretization of the diffusion equation, but a small time step must be chosen to ensure computational stability, so at least hundreds of iterations are needed to reach the steady state; thus, finding an efficient numerical method is meaningful and necessary. If, in the above discretization, the current time layer is used for both the coefficient data and the image data, an explicit numerical scheme for tensor diffusion is obtained. If the current layer is used for part of the model data and the new layer for the other part, a semi-implicit numerical scheme can be constructed, in which case the AOS algorithm can be used for the numerical computation. The AOS update takes the form shown below.
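The standard AOS update of Weickert et al., which we assume is the form intended here, reads

\[
u^{k+1}=\frac{1}{m}\sum_{l=1}^{m}\bigl(I-m\,\tau\,A_l(u^{k})\bigr)^{-1}u^{k},
\]

where \(m\) is the number of coordinate axes, \(\tau\) the time step, and \(A_l\) the one-dimensional diffusion matrix along axis \(l\); each inverse is a tridiagonal solve (Thomas algorithm), and the scheme remains stable for arbitrarily large \(\tau\).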

This section uses two methods simultaneously to accelerate the light estimation step of the enhancement algorithm: the more efficient AOS algorithm is used for the iterative update of the diffusion equation to increase the update speed, and the whole iterative process uses a multiresolution pyramid structure that downsamples the image several times to generate low-resolution images at different scales for iteration, with more iterations assigned to the lower-resolution images [19]. Since the AOS numerical scheme has been discussed in Chapter 2, this section only describes the multiresolution acceleration scheme. Multiresolution processing is an effective way to increase processing speed because lower-resolution images contain less data while keeping the structure unchanged, making the algorithm less computationally intensive. In this paper, we simply downsample the original image by a factor of 2 to generate a multiresolution pyramid, then iteratively estimate the illumination starting from the lowest-resolution top layer, and after each level's iterations perform a 2-fold interpolation to expand the result and use it as the initial image for the next level. The number of downsampling steps should not be too large, and the size of the low-resolution image at the top of the pyramid should be no less than 200, to prevent the loss of image boundary information. Thanks to the superior selective filtering performance of nonlinear diffusion filtering, the contrast enhancement effect of the enhancement algorithm in this paper is better than that of previous similar algorithms.
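A minimal sketch of this coarse-to-fine scheme follows; `diffusion_step` stands in for one AOS update, and the per-level iteration counts are assumptions.

```python
# Hedged sketch of pyramid-accelerated illumination estimation:
# downsample by 2, iterate most at the coarsest level, expand by 2 upward.
import numpy as np
from scipy import ndimage

def estimate_light(img, diffusion_step, n_levels=3, iters_per_level=(2, 4, 8)):
    pyramid = [img.astype(float)]
    for _ in range(n_levels - 1):
        pyramid.append(pyramid[-1][::2, ::2])       # 2x downsampling
    light = pyramid[-1].copy()
    for level in range(n_levels - 1, -1, -1):
        for _ in range(iters_per_level[level]):     # more iterations when coarse
            light = diffusion_step(light)
        if level > 0:                               # 2x expansion as next init
            light = ndimage.zoom(light, 2.0, order=1)
            h, w = pyramid[level - 1].shape
            light = light[:h, :w]
    return light
```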

4.2. Image Multifeature Contrast Processing Results

The traditional Gabor filter enhancement algorithm is susceptible to the influence of the background area when processing the edges of fingerprint images and cannot accurately determine the block frequency, so the edge regions cannot be effectively enhanced; in addition, designing extractors for fingerprint scars takes up more memory and computing time. During the study of the anisotropic diffusion algorithm for enhancing fingerprint images, it was found that the method can avoid some weaknesses of traditional algorithms if the structure tensor is used as a tool to guide the evolution of the scale space and to measure a more reliable local orientation when enhancing images with flow-like structures [20]. The algorithm's diffusion acts mainly along the direction of highest coherence and becomes stronger as the coherence increases. Its disadvantages are that excessive smoothing tends to reduce the contrast between the coherent structure and the background in some fingerprint images, and that the estimated diffusion direction is inaccurate when the structure tensor is computed for low-quality fingerprints (e.g., fingerprints with ridge adhesion caused by environmental conditions or pressure). As noted above, a two-stage fingerprint image enhancement algorithm based on anisotropic diffusion and shock filtering is therefore proposed: the first stage uses the CED method to enhance the degraded fingerprint image, and afterwards the image edges are sharpened by shock filtering. Processing the degraded fingerprint image in this way retains the positive results of coherence-enhancing diffusion for repairing the intermittent lines of the fingerprint while also enhancing its edges and contrast, as shown in Figure 6.

In this paper, MATLAB simulation is used to enhance fingerprint images with the coherence-enhancing diffusion algorithm, the shock filtering algorithm, and the CED-SF algorithm designed in this paper, and the performance of the three methods in fingerprint enhancement is compared according to the experimental results. The enhanced fingerprint images are then used in simulated fingerprint recognition: fingerprint images with the same ID are selected as template fingerprints, the enhanced fingerprints are compared with the templates, and the corresponding matching scores are calculated. The experiments prove that the algorithm in this paper enhances fingerprints better than the other two methods; it can enhance the detailed features of fingerprints and improve the matching degree of degraded fingerprints [21]. Finally, the three methods are applied to other flowing-texture images for artistic creation, and their performance in art creation and other respects is analyzed. Fingerprint enhancement mainly strengthens the feature information of the texture so that the basic feature points in the fingerprint image can be accurately obtained in the subsequent extraction process. Feature points are generally extracted directly from the grayscale or binarized fingerprint image, taking minutiae directly from the ridge structure, but the feature information extracted this way is not always reliable. The most widely used and reliable method is to first thin the fingerprint image and then obtain feature points from the thinned ridges, which is simpler to compute and more accurate; this is the method used in the simulated fingerprint identification experiments in this paper. The algorithm consists of four steps: binarization, thinning, extraction of detailed feature points, and matching, as shown in Figure 7; a sketch follows.
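A minimal sketch of the first three steps (binarization, thinning, minutiae extraction) is given below; the crossing-number rule is a standard simplification, and the matching step is omitted.

```python
# Hedged sketch: binarize, thin, and extract minutiae candidates.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import skeletonize

def minutiae(img):
    binary = img < threshold_otsu(img)   # ridges assumed darker than background
    ridges = skeletonize(binary)         # thin ridges to one-pixel width
    # crossing-number style rule: endpoints have 1 neighbour, bifurcations >= 3
    nb = sum(np.roll(np.roll(ridges, dy, 0), dx, 1)
             for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0))
    endpoints = ridges & (nb == 1)
    bifurcations = ridges & (nb >= 3)
    return endpoints, bifurcations
```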

From a biological point of view, studies have shown that the brightness, color, and frequency of an image affect the contrast sensitivity of the human eye. Studies of spatial frequency have shown that the human eye behaves like a band-pass system with limited discriminative ability. The contrast sensitivity function (CSF) is used to represent the correlation between human eye sensitivity and spatial frequency; it is a band-pass function whose independent variable is spatial frequency and whose dependent variable is visual sensitivity. The response curve of the CSF shows band-pass characteristics, and contrast sensitivity is highest in the interval [0.03, 0.25]. Deeper study of visual physiology posits the existence of many band-pass filters on the retina, which decompose the image into different frequency bands; each band has a narrow bandwidth, and the widths increase multiplicatively, i.e., the bandwidths are equal on a logarithmic scale. External image information can thus be split and processed by frequency band, with the information in each band handled by a corresponding channel. The processing of information by human vision is therefore a multichannel system that operates according to image characteristics.

5. Conclusion

The model proposed in this paper can effectively protect edges when extracting nonlocal structure information while avoiding problems such as image distortion. To enhance the coherence of images with flow-like structures, two different diffusion schemes, an adaptive diffusion scheme and a filtered diffusion scheme, are designed to guide the diffusion process according to the feature structure of the image so that the diffusion adapts to the image itself. Compared with the adaptive scheme, the filtered diffusion scheme has lower computational complexity and considers image information more comprehensively, and its refined partitioning can better exploit image features to guide the diffusion. The proposed model improves the quality of flow-like structures without destroying details of the original image; it enhances while smoothing the flat regions, and the anisotropic diffusion keeps the geometric features of the restored image intact, so it is valuable in practical applications. The model is built using multiphysics field coupling theory, the interferogram is denoised, and the model is found to have good denoising and fringe-retention capabilities. A modified multiphysics field coupling model is then proposed; the interferogram is denoised using this model, and the processed interferogram is evaluated using the equivalent number of looks and the fringe retention coefficient. The results show that the modified model can effectively denoise the interferogram while retaining the fringe information better.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

This work was supported by Research Fund of National Natural Science Foundation under Grant: Research on the theory and method of reversible data hiding of large capacity images (No. 62162006); Young and Middle-Aged Teachers’ Ability Improvement Project of Guangxi: research on reversible and separable data hiding algorithm in encryption image (2020KY16021) and research on remote sensing scene classification based on feature hybrid coding (2021ky0650); Research Fund of Guangxi Key Lab of Multi-Source Information Mining & Security: Research on reversible and separable data hiding algorithm in encryption image (MIMS18-05), China ASEAN Institute of Statistics.