Abstract

Image enhancement and reconstruction are basic processing steps in many real-world vision systems. Their purpose is to improve the visual quality of images and to provide reliable information for subsequent visual decision-making. In this paper, convolutional neural networks, residual neural networks, and generative adversarial networks are studied. A generative adversarial network model for rain and fog image enhancement, including a scalable auxiliary generation network, is proposed. The objective loss function is defined, and cycle-consistency loss and cycle perceptual-consistency loss are introduced and analyzed. The core problem of image layering is discussed, and a layering solution framework with a deep unfolding structure is proposed. This method realizes multitasking through adaptive feature learning and has good theoretical guarantees. The work can not only bring a pleasant visual experience to viewers but also help improve the performance of computer vision applications. Through image enhancement technology, the quality of low-illumination images can be effectively improved, yielding better definition, richer texture details, and lower image noise.

1. Introduction

As an important multimedia information medium, the image contains rich information and has been closely integrated into modern daily life. With the rapid development of deep learning technology, computer vision applications such as image-based target classification and target recognition are constantly emerging, bringing more convenience to people [1]. Although the resolution of mobile phone cameras keeps increasing, the cameras used in this paper still have limitations. In many scenarios, the images captured by the camera cannot meet the requirements of this paper, and this situation is often caused by the influence of the environment [2]. This requires the computer to recover the original scene from the defective one. The images that need to be restored usually suffer from various problems, such as noise or missing information [3]. The main source of information for a vision system is images, but low-quality images lack the information input required by the system, thus affecting the effectiveness and accuracy of the computer vision system [4]. Therefore, it is particularly important to study enhancement algorithms for degraded images. When selecting models, multiple deep learning models are combined into an ensemble, and the average of their predictions is taken. The greater the difference between the models, the better the effect: if the models use very different network topologies and techniques, and each is independently effective, the ensemble result is more stable. Conversely, one can also run the experiments in turn: each time the network model is trained it is reinitialized, so the final weights converge to different values. Repeating this process many times generates multiple network models, and integrating their predictions helps improve the generalization ability of the algorithm.

Due to the imaging limitations of existing equipment, various weather conditions in the natural environment often have an adverse impact on image acquisition devices, resulting in low-quality natural images [5]. The imaging principle of a camera is based on the light reflected by objects. In the evening or in dark conditions, because of the lack of light, the light that the scene can reflect to the sensor is limited, and scene information is seriously lost [6]. The three scenarios analyzed in this paper all share the problem of low-quality natural images caused by environmental factors, so how to improve the quality of these images becomes the focus of this paper [7]. Deep learning depends on the training and test data of deep neural networks. At present, image recognition and other computer vision tests use high-visibility, clear images or videos with a high density of features, and computer vision tasks on such feature-rich, visually pleasing images and videos have achieved exciting results [8]. The problems of image enhancement and reconstruction cover a wide range, involving applications such as image smoothing and denoising, image deblurring, image defogging, rain and snow removal, image super-resolution, image inpainting, detail enhancement, and high-dynamic-range tone mapping [9]. When the input image is degraded, the common goal of both is to improve the visual quality of the image and obtain an improved image in some sense. In contrast, image enhancement does not seek the original appearance of the image; it selectively highlights interesting parts and useful information, suppresses or weakens uninteresting parts, and obtains enhancement results with strong expressiveness, obvious characteristics, and rich effective information.

In this paper, by analyzing the mapping relationship of deep neural networks, an illumination map estimation function is proposed, and the calculated illumination map is used to locally adjust the brightness information of RAW data, so that only a single image is needed to preserve the high dynamic range of photos in a low-illumination environment [10]. Using the medium transmittance estimated by the neural network model, this paper performs a detail enhancement operation on the enhanced image [11]. This paper also notes that the hyperparameters of detail enhancement methods have a great impact on the enhancement results [12]. In order to obtain stable and effective results, a multiobjective optimization formulation is used to solve for these hyperparameters, and a differential evolution algorithm is adopted to solve the resulting problem [13]. Finally, a lightweight version reduced from the basic model can run on mobile devices. Different from the traditional MAP approach, which studies image features by designing priors, this paper proposes a flexible way to integrate deeply learned network architectures into the priors of various image processing applications [14]. On the other hand, instead of designing the network structure purely from experience, this paper provides a method to build deep models using task-domain knowledge, resulting in better interpretability compared with purely deep-learning-based methods.

The rapid development of computers has brought us into the information age. How to maximize the amount of usable information has become an urgent research problem and has also contributed to the development of science and technology. Vision ranks highest among the human senses; in direct contact with the outside world, it is also the most important way to obtain information. However, clear images are not always available during image acquisition, and the demand for image restoration arises accordingly. From a practical point of view, the real-time requirements of outdoor monitoring systems, road monitoring systems, mineral and geological systems, and outdoor detection systems are very high; outdoor work is often affected by the environment while the demand on image quality remains high, so the need for image processing applications is greatest there. Therefore, how to obtain a clear image efficiently is often the key to dealing with image degradation.

This paper discusses the core problem of image layering and proposes a layering solution framework with a deep unfolding structure. The research is divided into five sections. Section 1 expounds that the image, as an important multimedia information carrier, contains rich information, and describes the background of image enhancement and reconstruction. Section 2 analyzes the research methods and expounds deep learning and neural networks. Section 3 presents the research on image enhancement methods, including an image enhancement method based on edge-preserving filtering and decomposition and a medium transmittance enhancement method based on neural networks. Section 4 analyzes and discusses the results. Finally, the full text is summarized. This paper proposes an image enhancement network based on deep learning, which can directly convert the original image into a color image. Compared with traditional algorithms, it solves the complex problem of low-illumination image enhancement.

2. Methodology

2.1. Deep Learning and Neural Network

Deep learning is a kind of machine learning. It learns abstract features from experience and data to solve problems and can also build more abstract and complex features from simple ones [15]. In traditional algorithms, feature extraction is mainly done by hand, which makes it difficult to extract complex features. Deep learning solves the problem that traditional algorithms struggle to extract high-level, abstract features and makes problem solving simpler and more effective [16]. Based on this basic structure, many scholars have conducted in-depth research on and improvement of convolutional neural networks, and the application directions have diversified, involving image segmentation, recognition, pose estimation, natural language processing, and optimization of low-level vision [17]. As the number of iterations increases, the effectiveness of an algorithm may degrade to a certain extent, and without sufficient data-fitting ability, its adaptability is difficult to guarantee. For deep learning methods, the network structure designed from experience and the large-scale training data used to train and improve the network are the keys that determine network performance. A new composite illumination estimation method is designed for the local similarity of illumination and edge mutations: for illumination intensity, this paper proposes an illumination intensity image estimation method based on limited dilation; for illumination edges, this paper uses an illumination edge image estimation method based on a small-window closing operation. A neuron is the most basic unit of a neural network. The neuron model is shown in Figure 1.

The output formula of the neuron is as follows:

y = f\left( \sum_{i=1}^{n} w_i x_i + b \right)

Here, x_i is the information input of the neuron, w_i is the corresponding weight, b is the bias, f is the activation function, and y is the output.
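As a minimal sketch of this formula (the sigmoid activation and the specific weights below are illustrative assumptions, not values from the paper), a single neuron's forward pass can be written as:

```python
import numpy as np

def sigmoid(z):
    """Logistic activation: maps any real input into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def neuron_forward(x, w, b, activation=sigmoid):
    """Single-neuron forward pass: y = f(sum_i w_i * x_i + b)."""
    return activation(np.dot(w, x) + b)

x = np.array([1.0, 2.0])     # inputs x_i
w = np.array([0.5, -0.25])   # weights w_i
b = 0.0                      # bias
y = neuron_forward(x, w, b)  # sigmoid(0.5*1 - 0.25*2 + 0) = sigmoid(0) = 0.5
```

Stacking many such units, each with its own weight vector, gives one layer of the networks discussed below.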

When several neurons together form a neural network, they are generally divided into three layers, as shown in Figure 2.

Among them, the hidden layer is a very important part of the neural network. It is in the middle of the hierarchical structure and consists of all intermediate nodes. The input data are processed by the hidden layer to realize the characteristic division of samples. At this time, the output of the neural network is

a_j^{(l)} = f\left( \sum_{i} w_{ji}^{(l)} a_i^{(l-1)} + b_j^{(l)} \right),

where a_j^{(l)} represents the output value of unit j of layer l.

In deep learning, a convolutional neural network is usually composed of several convolution layers, activation functions, and downsampling layers stacked alternately, and the final judgment is completed through the fully connected layer at the end of the network [18]. The convolution layer is the core operation unit in the convolutional neural network; each convolution layer is composed of several convolution kernels, and each convolution kernel acts as a filter [19]. The convolution layer filters the input features through the convolution kernels, which realizes the function of feature extraction [20]. For a two-dimensional image, the convolution process of each convolution kernel is as follows:

Y(i, j) = \sum_{m} \sum_{n} X(i + m, j + n) \, K(m, n),

where X is the two-dimensional input image, K is the convolution kernel, and Y is the output of the convolution. Since the parameters in each convolution kernel are obtained through backpropagation, the network handles complex, abstract images well [21]. The features are usually located between the downsampling layers and the upsampling layers with the same output image size; by processing features at the same level, the mapping from source image features to target image features can be realized. Since the data become smaller and smaller after repeated convolution operations, the depth of the network design is limited. In addition, in the research of this paper, it is found that during the calculation some pixels are reused repeatedly while border pixels participate in fewer calculations. The depth of a convolution kernel is consistent with the depth of the image or feature map, and the kernel slides along both the row and column directions, so as to better extract planar features. This paper uses PyTorch or TensorFlow to build the deep learning network and pads the image edges [22]. Therefore, it is necessary to introduce a nonlinear activation layer. After the processing of the nonlinear activation layer, the data can be processed and padded again, so as to compensate for positions with no calculation or fewer calculation passes [23]. There are many kinds of nonlinear operations, including Sigmoid, Tanh, ReLU, and Leaky ReLU. Figure 3 shows the operation diagram of a basic max pooling operation.
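The per-kernel filtering step described above can be sketched as a plain "valid" 2-D convolution (strictly, the cross-correlation used by deep learning frameworks; the input and kernel values are illustrative):

```python
import numpy as np

def conv2d_valid(X, K):
    """'Valid' 2-D cross-correlation: Y(i, j) = sum_m sum_n X(i+m, j+n) * K(m, n).
    Without padding, the output shrinks by (kernel size - 1) in each direction."""
    h, w = K.shape
    H, W = X.shape
    Y = np.zeros((H - h + 1, W - w + 1))
    for i in range(Y.shape[0]):
        for j in range(Y.shape[1]):
            Y[i, j] = np.sum(X[i:i + h, j:j + w] * K)
    return Y

X = np.arange(9, dtype=float).reshape(3, 3)  # 3x3 input image
K = np.ones((2, 2))                          # 2x2 summing kernel
Y = conv2d_valid(X, K)                       # output shrinks to 2x2
```

The shrinking output illustrates why the text argues for edge padding: without it, repeated convolutions limit how deep the network can be.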

Among them, ReLU is the most commonly used and best-performing nonlinear activation function in recent years. The ReLU function performs a nonnegative operation on all data, that is,

f(x) = \max(0, x)

This operation truncates all negative inputs to zero while leaving positive inputs unchanged, so the activation introduces the nonlinearity of the model only on the positive half-axis and does not affect other data or ranges. In order to solve the problem of reflection, we assume that the photos taken by the camera are the sum of the natural background and the scene reflected in the glass. The purpose of image dereflection is to separate the reflected scene from the captured content and restore a clear, clean image.
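The ReLU operation above, together with its leaky variant mentioned earlier, can be sketched elementwise (the slope 0.01 used for the leaky variant is a common default, assumed here rather than taken from the paper):

```python
import numpy as np

def relu(x):
    """ReLU: f(x) = max(0, x), applied elementwise."""
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    """Leaky ReLU: keeps a small slope alpha on the negative half-axis
    instead of zeroing it, which avoids 'dead' units."""
    return np.where(x > 0, x, alpha * x)

z = np.array([-2.0, -0.5, 0.0, 1.5])
relu(z)        # negative entries become 0, positives pass through
leaky_relu(z)  # negative entries are scaled by alpha instead
```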

3. Research on Image Enhancement Methods

3.1. Image Enhancement Method Based on Edge Filtering and Decomposition

The image enhancement method based on edge-preserving filtering and a decomposition model selectively highlights the feature areas in the image according to specific needs, without considering the image degradation process, and suppresses unimportant features, so that the enhanced image is more suitable than the original image for specific applications [24]. Color offset mostly appears in white or black areas, especially when shooting night images. If the equipment accuracy is not sufficient, color blockiness easily appears, and color offset will appear after stretching. Therefore, without absolute confidence in improving color saturation, one can work directly with a bright channel or a changed color space, without considering color factors. Generally, the fusion decomposition of the above filtering can be expressed as follows:

I_{enh} = F(I) + \lambda \left( I - F(I) \right) = B + \lambda D

Here, I represents the observed image, F(\cdot) represents the edge-preserving filtering operation that produces the base layer B = F(I), D = I - B represents the detail layer of the image, \lambda is an important parameter whose main function is to control the amplification of the detail layer, and I_{enh} represents the image after the above enhancement [25]. Low-illumination image enhancement is a complex problem. The traditional image signal processing pipeline is composed of a series of subtasks such as white balance and denoising, but its noise level is high and its colors are not vivid. Therefore, this paper proposes an image enhancement network based on deep learning, which can directly convert the original image into a color image [26]. The low-light enhancement at this point can be defined as follows:

\hat{I} = G(I_{low}; \theta)

Here, G represents the proposed network and \theta is the parameter set learned during training.
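The base/detail decomposition above can be sketched as follows. A simple box blur stands in for the edge-preserving filter F (in practice a bilateral or guided filter would be used to avoid halos), and `lam` plays the role of λ; all parameter values are illustrative:

```python
import numpy as np

def box_filter(I, r=1):
    """Box blur as a stand-in for an edge-preserving filter F.
    Edge padding keeps the output the same size as the input."""
    H, W = I.shape
    P = np.pad(I, r, mode="edge")
    B = np.zeros_like(I, dtype=float)
    for i in range(H):
        for j in range(W):
            B[i, j] = P[i:i + 2 * r + 1, j:j + 2 * r + 1].mean()
    return B

def detail_enhance(I, lam=2.0):
    """I_enh = B + lam * D, with base B = F(I) and detail D = I - B."""
    B = box_filter(I)
    D = I - B
    return B + lam * D

flat = np.full((5, 5), 0.5)
out = detail_enhance(flat)  # a constant image has zero detail, so it is unchanged
```

The constant-image case shows the role of λ cleanly: only the detail layer D is amplified, so smooth regions pass through untouched.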

Therefore, the recognition model must be trained with massive data samples in order to give full play to its effect. However, although society as a whole faces a data explosion, the phenomenon of data "islands" within enterprises has long existed [27]. This means that technicians may face the embarrassing dilemma of not being able to obtain training data for specific object instances. In order to judge whether an image conforms to human visual perception, objective image quality evaluation is a commonly used approach. To this end, this paper adopts two indicators, peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), to judge image quality. PSNR is a common measure in image processing; it determines image quality by calculating the error between corresponding pixels. The calculation is expressed as follows:

PSNR = 10 \log_{10} \frac{(2^n - 1)^2}{MSE}, \qquad MSE = \frac{1}{HW} \sum_{i=1}^{H} \sum_{j=1}^{W} \left( I(i, j) - I_{ref}(i, j) \right)^2

According to the above formula, the larger the calculated PSNR, the closer the enhanced image is to the reference image, the smaller the distortion, and the better the image quality. Here, MSE represents the mean square error computed from the pixel-wise differences between the reference image I_{ref} and the image I enhanced by the algorithm, n represents the number of bits per pixel, and H and W represent the height and width of the image. At this point, the introduced auxiliary variable is not obtained directly from the image gradient; instead, considering the influence of noise, color unevenness, and other factors, the image gradient is smoothed first, so as to avoid wrong values of the auxiliary variable. It can be found that this presmoothing operation is critical to avoiding the staircase effect. An effective iterative algorithm can strengthen the smoothing ability of the method in this chapter and reduce the staircase effect.
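The PSNR formula can be implemented directly (8-bit images are assumed by default, so the peak value is 2^8 − 1 = 255):

```python
import numpy as np

def psnr(img, ref, n_bits=8):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    peak = float(2 ** n_bits - 1)
    mse = np.mean((img.astype(float) - ref.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: zero error
    return 10.0 * np.log10(peak ** 2 / mse)

a = np.zeros((4, 4), dtype=np.uint8)
b = np.full((4, 4), 255, dtype=np.uint8)
psnr(a, b)  # maximal per-pixel error for 8-bit data -> 0 dB
psnr(a, a)  # identical images -> infinite PSNR
```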

The enhanced image is set as x and the reference image as y. This paper analyzes and processes the image in terms of contrast, structural information, and brightness through these indicators, which are combined as follows:

SSIM(x, y) = l(x, y) \cdot c(x, y) \cdot s(x, y)

The extended expressions of the three components are as follows:

l(x, y) = \frac{2 \mu_x \mu_y + C_1}{\mu_x^2 + \mu_y^2 + C_1}, \qquad c(x, y) = \frac{2 \sigma_x \sigma_y + C_2}{\sigma_x^2 + \sigma_y^2 + C_2}, \qquad s(x, y) = \frac{\sigma_{xy} + C_3}{\sigma_x \sigma_y + C_3}

Therefore, this paper obtains the SSIM and image contrast data shown in Table 1. \mu_x and \mu_y are the means of x and y, \sigma_x and \sigma_y are their standard deviations, \sigma_{xy} is the covariance of x and y, and C_1, C_2, and C_3 are small bias constants that stabilize the ratios. By construction, the SSIM function takes values in the interval [0, 1] for natural images, and the larger the result, the better the quality of the image.
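A single-window (whole-image) SSIM can be sketched as below, using the common simplification C3 = C2/2, which merges the contrast and structure terms into one factor. The constants follow the usual defaults for images scaled to [0, 1]; note that the standard measure additionally averages over local Gaussian windows, which is omitted here:

```python
import numpy as np

def ssim_global(x, y, C1=0.01**2, C2=0.03**2):
    """Whole-image SSIM for images in [0, 1], with C3 = C2/2 so that
    c(x,y) * s(x,y) collapses to (2*cov + C2) / (var_x + var_y + C2)."""
    x = x.astype(float)
    y = y.astype(float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()  # covariance sigma_xy
    return ((2 * mx * my + C1) * (2 * cxy + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))

x = np.linspace(0, 1, 16).reshape(4, 4)
ssim_global(x, x)  # identical images score exactly 1
```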

3.2. Medium Transmittance Enhancement Method Based on Neural Network

Because a U-shaped network can better retain image details, this paper adopts a convolutional neural network with a U-shaped structure in order to learn the mapping between low-light images and normal-light images. In the raw data preprocessing layer, inspired by multiple exposures, the image is multiplied by different magnification factors as input. An attention mechanism is added after sampling to extract information more useful for low-light enhancement; it contains two attention modules, channel attention and spatial attention, which helps eliminate the color artifacts caused by the magnification. In image classification tasks, data augmentation is a method most practitioners adopt, because deep learning places certain requirements on the size of the data set: if the original data set is relatively small, it cannot support the training of the network model well, which affects the performance of the model. Augmentation processes the original images to expand the data set and can improve model performance to a certain extent. Therefore, in this paper, the medium transmittance estimated by a neural network is used to enhance the image, so it is necessary to design a detail enhancement scheme that transitions smoothly with the defogging enhancement result. Generally, a sharpening coefficient is chosen for detail enhancement, which is generally expressed as follows:

Among them, D is a function determined by the airlight, and V is a function determined by the degree of visibility. Since both functions are curvilinear (S-shaped), they can be expressed in logistic form:

D(t) = \frac{1}{1 + e^{-k_1 (t - \bar{t})}}, \qquad V(t) = \frac{1}{1 + e^{-k_2 (t - \bar{t})}},

where k_1 and k_2 are the slope factors of the functions and \bar{t} is the mean Weber light contrast. Figure 4 is a schematic diagram of a general model for deep-learning image enhancement.
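Under this logistic reading, a hypothetical sharpening coefficient combining the two curves might look like the following sketch; the product combination, the midpoint 0.5, and the slope values are illustrative assumptions, not the paper's exact definition:

```python
import numpy as np

def logistic(t, k, t0=0.5):
    """Generic S-shaped curve with slope factor k, centered at t0."""
    return 1.0 / (1.0 + np.exp(-k * (t - t0)))

def sharpen_gain(airlight, visibility, k1=8.0, k2=8.0):
    """Illustrative sharpening coefficient: product of the airlight curve D
    and the visibility curve V (the exact combination is assumed)."""
    return logistic(airlight, k1) * logistic(visibility, k2)

g = sharpen_gain(0.5, 0.5)  # both curves at their midpoint: 0.5 * 0.5 = 0.25
```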

In a low-light environment, the number of photons captured by the camera is very small, and everyday RGB images are generally in 8-bit format. Compared with 14-bit or higher-bit RAW data, RGB images lose a lot of information, so using RAW data directly gives better results than using RGB data. The multigranularity cooperative neural network is a network with bidirectional information flow, performing cooperative forward information transfer and reverse information transfer, respectively. A multigranularity neural network is composed of multiple single-granularity neural networks, and a single-granularity neural network can be designed in any form. The result of a nonlocal operation is the weighted sum of the features at all locations, which aims to enhance the feature representation ability of the network. Therefore, using nonlocal operations, the information at different locations in the feature map can be aggregated together, so that the network has a global receptive field.
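The nonlocal operation described above can be sketched without learned projections: each position's output is an affinity-weighted sum of the features at all positions (dot-product affinity with softmax normalization is assumed here; real nonlocal blocks add learned embeddings and a residual connection):

```python
import numpy as np

def softmax(z, axis=-1):
    """Row-wise softmax with max-subtraction for numerical stability."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def non_local(X):
    """Nonlocal aggregation: output at each position is a weighted sum of
    the features at ALL positions, so every output has a global view."""
    affinity = softmax(X @ X.T)  # (N, N) pairwise attention weights
    return affinity @ X          # (N, d) aggregated features

X = np.random.default_rng(0).normal(size=(6, 4))  # 6 positions, 4-dim features
Y = non_local(X)  # same shape; each row now mixes information from all rows
```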

4. Result Analysis and Discussion

Commonly used deep-learning image content is selected. Through simulation experiments, the actual data are analyzed and calculated, and their practical significance and effect are tested. In this paper, the proposed method and its calculation are introduced and explained in detail, and an image enhancement model with high feasibility and good effect is established. Therefore, this paper experimentally analyzes several important reference coefficients, such as the quantization rate of the loss function, the enhancement rate of the network structure before and after optimization, the average error feedback rate, and the actual image enhancement rate. All reported data are derived from the analysis of these experiments.

As shown in Figures 5 and 6, the loss function quantization rate and network structure enhancement rate of test sets A and B are analyzed.

From the above experiments, it can be concluded that the loss function stays at a reasonable level across the quantization axis of the whole test set, which lays the foundation for the whole image enhancement, because the loss function represents the distortion of the image. It is therefore necessary to watch for the phenomenon that the image enhancement effect becomes extremely weak when the loss function reaches an extreme value. For this reason, this paper standardizes the algorithm to keep the loss basically within a reasonable range, which improves the experimental effect. Due to the characteristics of deep learning in the network structure, the results tend to be stable as a whole. This is also because the convolutional and recurrent networks process the data well, as can be seen in the interval 0–4: the increases and decreases basically keep the same proportion, so for different data sets the effect of image enhancement remains basically unchanged without strong outside influence, which lets the enhancement effect be well reflected. Figures 7 and 8 show the average error feedback rate and the actual image enhancement rate on test sets X1 and X2.

Notably, in both figures the overall trends are close. As the number of test sets grows, the lines on the whole quantization axis gradually approach one trend. This is because, in terms of average error, when the error rate is reduced, the enhancement efficiency of the whole image increases greatly, promoting the accuracy of the feedback information. As a result, a closed loop is formed and a virtuous circle is achieved, which facilitates subsequent image enhancement, simplifies the calculation process and time, and reduces the error rate of actual operation by 78.5%. As for the enhancement rate of the actual image, since it lies between 2 and 3, the curve shows great volatility, which may be caused by processing errors or useless information introduced by the interference of the algorithm. Because the algorithm is built on a deep learning network model, there is a certain distortion in the loop.

Therefore, this paper proposes an image enhancement network based on deep learning, which can directly convert the original image into a color image. Compared with traditional algorithms, it solves the complex problem of low-illumination image enhancement. Compared with the pipeline composed of a series of subtasks such as white balance and denoising, it overcomes the high noise level and makes the colors more vivid. This paper improves the robustness and reliability of the system, reduces volatility, realizes all-weather operation of the system, denoises during enhancement, and enhances the detailed information of the image. The improved algorithm applies homomorphic filtering to histogram equalization: a linear combination of the homomorphic filter, low-pass filter, and high-pass filter outputs with the globally histogram-equalized image is obtained. Experiments show that the algorithm can effectively preserve image details and optimize the display effect. Addressing the loss of image detail in the traditional Retinex algorithm, and based on the principle of the local histogram equalization algorithm, the algorithm can better preserve image details.

5. Conclusions

With the rapid development of image processing technology and smart machines, image enhancement technology has gradually been integrated into daily applications. The development of deep learning has rapidly promoted research on image recognition. In order to better enhance a given image, deep learning continuously increases model complexity to strengthen learning ability and constrains the generated neural network parameters through the objective function. The core of low-illumination image enhancement is the adjustment of contrast, so that the image content can be easily seen and the feature information is amplified, allowing a machine to recognize it more accurately. A multigranularity image enhancement method based on deep learning, a multigranularity cooperative image enhancement method, and a low-light image enhancement method based on local light map adjustment are proposed. This method uses a hybrid attention mechanism combining spatial attention and channel attention to extract features, which improves the efficiency of the network and yields high-contrast, noise-free color images. In the experimental analysis, a closed loop is formed to achieve a virtuous circle, which facilitates subsequent image enhancement, simplifies the calculation process and time, and reduces the error rate of actual operation by 78.5%. However, some problems remain to be further analyzed. For example, it is necessary to correct the deterioration and distortion of images that occur during formation, transmission, storage, recording, and display. Image restoration must first establish the image degradation model and then restore the image by reversing the degradation process. These aspects need to be further supplemented in future research.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The author declares that there are no conflicts of interest.