Abstract

Traditional image aesthetic evaluation methods are only effective for specific image sets or specific styles of images and are not suitable for all types of images. Building on the partial differential equation image filtering method, this paper extracts global aesthetic depth features through parallel supervised learning of aesthetic attribute labels, adopts the partial differential equation to realize the constant evolution of the contour $C$, and constructs a convolutional neural network; the convolution kernels learned with the parallel network structure achieve better classification performance. In the aesthetic evaluation experiments, the overall test accuracy is improved by 0.58%, and the average accuracy of the fusion of color harmony and composition features is 75.17%, which is higher than that of either single feature. The classification accuracy of the NP-DP-DCNN structure is about 1% higher than that of other methods and 1.77% higher than that of NP-DCNN, and it achieves better test accuracy than previous methods in those of the seven subclasses with clear discrimination between high aesthetic and low aesthetic images. On the public CUHK database, the proposed method achieves better classification performance than traditional feature extraction methods and has good aesthetic reference value.

1. Introduction

With the development of science and technology and the popularization of digital products, the number of digital images on the network is growing explosively. How to select, from this vast collection, images that match users' aesthetic sense and psychological needs has become an urgent problem. With the development of computer vision, image processing, and pattern recognition, people are gradually quantifying the standards of image aesthetics, and in this context computational aesthetics came into being. An aesthetic standard is a relatively fixed scale for measuring and evaluating the aesthetic value of an object. The aesthetics of an image is determined by its composition, color distribution, theme, and other factors [1]. Therefore, for a long time, the beauty of images has mostly been measured by their symmetry, color, combination, and contrast. From the perspective of photography, an image with high aesthetic appeal has an obvious theme, accurate exposure, and clear focus. Therefore, a highly aesthetic image should clearly express the subject and weaken the expression of the other elements of the image [2].

Traditional image aesthetic evaluation methods design features based on the original image data and input the extracted features into a classifier to train a classification model. Although these methods have achieved good classification performance, their features are designed for specific databases, and designing features that are widely applicable across databases is difficult. This highlights the advantage of deep learning, which learns features directly from the original data [3]. Therefore, this paper uses deep learning to learn features automatically and applies it to the image aesthetic evaluation system. At present, the partial differential equation (PDE) is widely used in image analysis and processing and in computer vision, with good results in image restoration, denoising, image segmentation, and edge detection. There are still few partial differential equation methods for sonar image denoising; most work uses traditional filtering methods and wavelet methods [4]. Although there is a certain research foundation in the field of computable image aesthetics, some deficiencies remain: (1) existing color harmony feature extraction methods do not consider that the model itself is only suitable for relatively simple color combinations and ignore the difference in the number of color types between the main area and the background area of the image, so the final effect is not good, and (2) most current methods evaluate only a certain type of image well and are not suitable for all types of images [5]. To solve these problems, this paper proposes an improved color harmony algorithm and a method of integrating composition features, which improves the accuracy of image aesthetic classification and is robust to all kinds of images.

Based on experimental research into traditional denoising methods, this paper introduces the directional diffusion method from partial differential equation image filtering and combines it with the lifting wavelet transform, diffusing each low-frequency and high-frequency subimage after the transform; a new image is then obtained by the inverse transform. The experimental results show that, compared with the threshold-based wavelet denoising method, the peak signal-to-noise ratio and edge preservation are greatly improved.

2. Related Work

With the development of the field of image aesthetic evaluation, researchers gradually apply the results of image aesthetic evaluation to practical scenarios. For example, in the field of image retrieval, after completing an aesthetic analysis of images, Obrador et al. applied an image aesthetic evaluation algorithm to an image query system. The algorithm first segmented the image, then extracted the subject area from the segmented image, and finally calculated the image score from the contrast, color concentration, and sharpness of the subject area. With this image aesthetic evaluation algorithm, the retrieval system can screen aesthetic images that better meet the needs of users, making it possible to search for high-quality images by combining image search with image aesthetic quality [6]. The results of image aesthetic evaluation are also used in the field of image restoration. Datta et al. [7] extracted color, texture, and other features for linear regression to obtain the image aesthetic grade, based on photographic experience rules, common intuition, and observation of rating trends. By extracting 56-dimensional features such as color, saturation, brightness, and depth of field, they used the classification and regression tree algorithm to select the most effective 15-dimensional features and used a support vector machine to classify the images. Although the accuracy is not high, this work laid a foundation for subsequent study of image aesthetics. Based on the research of Datta et al., Wu et al. [8] used "good," "bad," and "ugly" to describe and judge image aesthetics. Ke et al. [9, 10] distinguished professional photos from snapshots by extracting a series of high-level semantic features such as spatial distribution, color distribution, hue, and edge blur. Marchesotti et al. [11] proposed using generic image descriptors to realize the image classification task. Wong et al. [5] focused on the salient region of the image, proposed a saliency enhancement algorithm based on a visual attention model, extracted the salient region as the main region, and then distinguished professional photos from snapshots by using the characteristics of the main region, global features, and the relationship between subject and background. Khan and Vogel [12] extracted seven features such as color, spatial composition, clarity, depth of field, and contrast according to the relationship between the face and the background area to evaluate the beauty of photographic portraits. Aydin et al. [13] proposed an aesthetic evaluation system for automatic calibration of photographic images according to five image attributes, such as tone, clarity, and depth. Similarly, low-level and high-level image features have been applied to the quality evaluation of artworks. Dong and Tian [14] considered the importance of the main area, adopted subject-background separation, and extracted the color histogram of the main area and the ratio of the main area to the size of the whole image to distinguish picture quality. Building on these studies, Wang et al. [15] extracted features such as color, texture, depth of field, complexity, and color harmony and used them to build an aesthetic classification model, achieving good results.

Among image aesthetic features, color harmony has always been considered one of the factors with the greatest impact on the aesthetic feeling of an image. In the early stage of image aesthetics research, the color histogram of the image was mainly used as an important standard for measuring aesthetic quality. Luo and Tang [16] measured color harmony on the color histogram, but the final effect was not good. In recent years, color harmony theory has been applied to image aesthetic evaluation. Chamaret and Urban [17] used the 8 color harmony templates defined by Matsuda [18] to obtain a harmony distance map and used a perception mask map and a contrast mask map to simulate the human visual system's perception of color harmony, with a good final effect. Tang et al. [19] also used the 8 color harmony templates defined by Matsuda [18] to measure the color harmony of the whole image.

The image processing method based on partial differential equations comes from constrained optimization, energy minimization, and the variational method. The basic idea is to reduce the problem to a functional extremum problem with or without constraints; one or a group of partial differential equations, sometimes requiring initial or boundary conditions, are then derived by the variational method; finally, the equations are solved numerically to obtain the desired solution, which may be a segmented image and its boundary or a preprocessed or restored image. There is not much research on image classification based on partial differential equations; most work focuses on image denoising, segmentation, amplification, and restoration, and research on remote sensing images is still in preliminary exploration. Existing research includes a level-set evolution model for supervised image classification, which achieves the best classification by setting a level-set function for each type of region and letting them interact during the dynamic evolution [20]. In addition, a group of wavelet packets has been used to analyze various textures and extract the texture energy distribution of each category in each subband as the constraint condition for classification, also with good results [21].

3. Image Aesthetic Evaluation System Based on Partial Differential Equation Image Classification

3.1. Hue Feature Extraction

First, the hue value of the main color is subtracted from the hue value of each pixel in the image subblock to obtain the hue difference $H_S$. The main tone refers to the overall tendency of the picture color in a painting, which is a large-scale color effect. Commonly used theme color extraction algorithms include the minimum difference method, the median segmentation method, the octree algorithm, clustering, and color modeling; clustering and color modeling methods need parameter adjustment and regression calculation for the extracted functions, samples, and characteristic variables. Then, according to the Moon-Spencer color harmony theory, whether the absolute value of $H_S$ is harmonious is determined; the sign function SGN used to judge whether the hue is harmonious is

$$\mathrm{SGN}(H_S) = \begin{cases} 1, & |H_S| \in \Omega, \\ 0, & \text{otherwise}, \end{cases}$$

where $\Omega$ denotes the set of harmonious hue-difference intervals given by the Moon-Spencer theory.

Here, 1 represents harmony and 0 represents disharmony; SGN is only an indicator. When $|H_S|$ satisfies the harmony condition, the hue value of this pixel is harmonious with the hue value of the main color; otherwise, it is considered disharmonious. Finally, the number of pixels harmonious with the main tone in the subblock is counted, and the ratio of the number of harmonious pixels to the total number of pixels in the current subblock is taken as the hue characteristic value of this subblock [22]:

$$F_i = \frac{n_i}{N_i}, \quad i = 1, 2, \ldots, M,$$

where $n_i$ is the number of pixels with harmonious hue in the $i$-th image subblock, $N_i$ is the total number of pixels in the subblock, and $M$ is the total number of blocks in the main area or background area.
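
To make the block-wise computation concrete, the following Python sketch shows one way the hue characteristic value of a subblock could be computed. It is a minimal sketch, not the paper's implementation: the main-color extraction (a simple histogram mode) and the harmonious intervals in `HARMONIOUS_HS` are illustrative assumptions standing in for the Moon-Spencer templates.

```python
import numpy as np

# Hypothetical harmonious intervals (in degrees) for the hue difference |HS|;
# the true Moon-Spencer intervals are assumed, not reproduced, here.
HARMONIOUS_HS = [(0, 8), (28, 45), (155, 180)]

def main_hue(hue_block):
    """Estimate the main (dominant) hue of a subblock via the histogram mode."""
    hist, edges = np.histogram(hue_block, bins=36, range=(0, 360))
    k = np.argmax(hist)
    return 0.5 * (edges[k] + edges[k + 1])

def sgn(hs):
    """Return 1 if |HS| falls in a harmonious interval, else 0."""
    hs = abs(hs)
    hs = min(hs, 360 - hs)  # hue differences are circular
    return int(any(lo <= hs <= hi for lo, hi in HARMONIOUS_HS))

def hue_feature(hue_block):
    """F_i = n_i / N_i: ratio of pixels harmonious with the main hue."""
    m = main_hue(hue_block)
    harmonious = sum(sgn(h - m) for h in hue_block.ravel())
    return harmonious / hue_block.size

block = np.random.uniform(0, 360, size=(16, 16))  # toy hue subblock
print(hue_feature(block))
```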

To judge whether the lightness and chroma of the subblock are harmonious, the following criterion is used:

$$\mathrm{SGN}(C_j, V_j) = \begin{cases} 1, & |C_j - \bar{C}| \text{ and } |V_j - \bar{V}| \text{ fall in harmonious intervals}, \\ 0, & \text{otherwise}, \end{cases}$$

where $\bar{C}$ and $\bar{V}$ are, respectively, the chromaticity average value and the lightness average value of the main color area in the subblock, and $C_j$ and $V_j$ are, respectively, the chromaticity and lightness values of the other pixels in the subblock, excluding the main color area. The lightness and chromaticity characteristic value of each subblock is then calculated, analogously to the hue feature, as the ratio of the number of harmonious pixels to the total number of pixels in the subblock.

Then, the hue, lightness, and chromaticity eigenvalues are integrated into the corresponding eigenvectors [23]. After the hue characteristic values and the lightness and chroma characteristic values of all subblocks are calculated, the obtained values are normalized to the interval $[0, 1]$, and the interval is divided into $T$ equal subintervals. According to the number of hue characteristic values and of lightness-chroma characteristic values falling in each subinterval, the hue feature vector and the lightness-chroma feature vector are obtained:

$$F_H = (h_1, h_2, \ldots, h_T), \qquad F_{CV} = (c_1, c_2, \ldots, c_T),$$

where $F_H$ is the feature vector of hue, $F_{CV}$ is the feature vector of lightness and chroma, $h_t$ and $c_t$ are, respectively, the number of hue values and of lightness-chroma values falling in the $t$-th subinterval, and $T$ is taken as 10.
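
A minimal sketch of how the per-subblock values could be pooled into the $T = 10$-bin feature vectors described above; the bin count and the $[0, 1]$ normalization follow the text, while the input values here are synthetic.

```python
import numpy as np

def feature_vector(values, T=10):
    """Normalize characteristic values to [0, 1] and count how many
    fall into each of T equal subintervals."""
    v = np.asarray(values, dtype=float)
    v = (v - v.min()) / (v.max() - v.min() + 1e-12)  # normalize to [0, 1]
    hist, _ = np.histogram(v, bins=T, range=(0.0, 1.0))
    return hist

hue_values = np.random.rand(64)    # toy hue characteristic values, one per subblock
print(feature_vector(hue_values))  # 10-dimensional hue feature vector F_H
```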

This paper adopts the composition criterion of the rule of thirds. Trisection composition divides the picture into three parts horizontally and vertically, and the main form can be placed at each subcenter; this composition is suitable for subjects with multiple forms and parallel focus. The criterion is the smallest normalized Euclidean distance from the center point of the main area of the image to the four intersections of the dividing lines; the closer the central point of the main area is to an intersection, the more the image conforms to people's aesthetic habits. The nearest distance from the center point $(x_0, y_0)$ to the four intersections $(x_k, y_k)$, $k = 1, 2, 3, 4$, is calculated as

$$d = \min_{1 \le k \le 4} \sqrt{(x_0 - x_k)^2 + (y_0 - y_k)^2}.$$
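
The trisection criterion can be sketched as follows; normalizing the distance by the image width and height is an assumption, since the text only states that a normalized Euclidean distance is used.

```python
import numpy as np

def thirds_distance(cx, cy, width, height):
    """Smallest normalized Euclidean distance from the subject center
    (cx, cy) to the four rule-of-thirds intersections."""
    xs = (width / 3.0, 2.0 * width / 3.0)
    ys = (height / 3.0, 2.0 * height / 3.0)
    return min(np.hypot((cx - x) / width, (cy - y) / height)
               for x in xs for y in ys)

# A subject centered near an intersection yields a small distance.
print(thirds_distance(300, 190, 900, 600))
```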

3.2. Treatment of Partial Differential Equations

The variational method is a method for studying and solving functional extrema (maxima or minima). It is a mathematical field dealing with functionals [24], which corresponds to ordinary calculus dealing with functions of numbers. Such a functional can be constructed from an integral of an unknown function and its derivative, so the variational problem is to find the extreme value of the functional. First, the definition of a functional is given. Let $U$ be a set of functions. If every function $u$ in this set has a real number corresponding to it, recorded as $J(u)$, then $J$ is a functional, $u$ is an admissible function, and $u$ is the argument of the functional. In short, a functional is a function whose argument is itself a function.

The classical variational problem is to find the function $u$ in the domain that makes the functional $J(u)$ reach a minimum while the boundary conditions are satisfied. A complete inner product space is called a Hilbert space; the Hilbert space has many properties similar to Euclidean space. In the application field of image processing, we only consider functionals in the Hilbert space, mainly to find the extremum of the energy functional in this space. As discussed above, the mathematical tool used here is the variational method, whose core is to find the extreme-value function of the functional and the corresponding extreme value. Next, we discuss several variational problems within the scope of image processing, that is, finding the extreme values of functionals in the Hilbert space $H$. First, the linear functional on the Hilbert space is given: a mapping $T$ from the Hilbert space $H$ to the real number field $\mathbb{R}$ is linear if it satisfies $T(\alpha u + \beta v) = \alpha T(u) + \beta T(v)$ for any points $u$ and $v$ in $H$ and numbers $\alpha$ and $\beta$ in $\mathbb{R}$. If there is a constant $c$ such that $|T(u)| \le c \|u\|$ for all $u$, then we call the linear functional bounded. If a function $g \in H$ is given and $T$ satisfies $T(u) = \langle u, g \rangle$, it is obvious that $T$ is a bounded linear functional. Conversely, by the Riesz representation theorem, if a functional $T$ on the Hilbert space $H$ is a bounded linear functional, then there exists $g \in H$ such that $T(u) = \langle u, g \rangle$ for all $u \in H$. Now consider the following definition: let $J$ be a functional from the Hilbert space $H$ to the real number field $\mathbb{R}$, and let $u, v \in H$; define

$$\delta J(u; v) = \lim_{\epsilon \to 0} \frac{J(u + \epsilon v) - J(u)}{\epsilon}. \tag{9}$$

If the limit in formula (9) exists, we can regard it as the directional derivative of the functional $J$ in the direction of $v$ at $u$. If this limit exists and is a bounded linear functional with respect to $v$, then $J$ is differentiable at $u$; by the bounded-linear-functional theorem in the Hilbert space above, since $\delta J(u; \cdot)$ is a bounded linear functional on $H$, there is a function $g$ in the Hilbert space such that $\delta J(u; v) = \langle g, v \rangle$, and we call $g$ the first-order variation of $J$ at $u$ and record it as $\delta J(u)$. If $u^*$ is the minimum of the functional $J$, then $\delta J(u^*; v) = 0$ for all functions $v$ in the Hilbert space, and in particular for the direction $v = \delta J(u^*)$.
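
As a concrete check of formula (9), the toy Python sketch below (not from the paper) approximates the directional derivative of the simple functional $J(u) = \int_0^1 u^2\, dx$ by the limit definition and compares it with the analytic first-order variation $\langle 2u, v \rangle$; the discretization is an illustrative assumption.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 1001)
dx = x[1] - x[0]
u = np.sin(np.pi * x)   # point at which the variation is taken
v = x * (1.0 - x)       # direction, vanishing on the boundary

def J(f):
    """Toy functional J(f) = integral of f^2 over [0, 1]."""
    return float(np.sum(f * f) * dx)

eps = 1e-6
numeric = (J(u + eps * v) - J(u)) / eps     # limit definition, formula (9)
analytic = float(np.sum(2.0 * u * v) * dx)  # <2u, v>, the first-order variation
print(numeric, analytic)                    # the two values agree closely
```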

The above is a brief introduction to the variational method in the Hilbert space, which is the basis of the image processing used here. The image processing method based on PDEs has better accuracy than traditional linear processing methods and can directly operate on image features such as gradient and geometric curvature, which makes it convenient to establish various digital models and express them flexibly. The following introduces a kind of functional and its first-order variation that is common in PDE image processing. Consider the energy functional

$$E(u) = \int_{\Omega} L(x, u, \nabla u)\, dx,$$

in which $L$ is a continuously differentiable function of its three variables $x$, $u$, and $\nabla u$, where $x$ is an $n$-dimensional vector and $u = u(x)$. Then there is the following theorem: the derivative of the functional can be expressed as

$$\delta E(u) = \frac{\partial L}{\partial u} - \nabla \cdot \frac{\partial L}{\partial (\nabla u)}.$$

For any direction $v$ vanishing on the boundary of $\Omega$, the following formula holds:

$$\delta E(u; v) = \int_{\Omega} \left( \frac{\partial L}{\partial u} - \nabla \cdot \frac{\partial L}{\partial (\nabla u)} \right) v \, dx = \langle \delta E(u), v \rangle.$$

Using the properties of the function space and integration by parts, the above formula is easy to prove, since for such $v$,

$$\int_{\Omega} \frac{\partial L}{\partial (\nabla u)} \cdot \nabla v \, dx = -\int_{\Omega} \left( \nabla \cdot \frac{\partial L}{\partial (\nabla u)} \right) v \, dx.$$

In general, we can express the directional derivative of the functional $E$ along the direction $v$ at $u$ as $\delta E(u; v) = \langle \delta E(u), v \rangle$, where $\delta E(u)$ is the Gateaux derivative of $E$. By the Cauchy-Schwarz inequality, this directional derivative attains its extreme value when the direction $v$ is proportional to $\delta E(u)$; taking the velocity function equal to $-\delta E(u)$ gives the direction in which $E$ decreases most rapidly. Therefore, the resulting equation is called the gradient descent flow or steepest descent flow equation. For the energy functional $E(u)$, the corresponding steepest descent flow equation is

$$\frac{\partial u}{\partial t} = -\delta E(u).$$
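
As an illustration, for the Dirichlet energy $E(u) = \frac{1}{2}\int |\nabla u|^2\, dx$ the steepest descent flow is the heat equation $\partial u / \partial t = \Delta u$. The sketch below runs this flow on a noisy 1D signal with an explicit scheme; the step sizes are illustrative assumptions, not values from the paper.

```python
import numpy as np

def descent_flow(u, steps=200, dt=0.2):
    """Explicit steepest descent flow u_t = u_xx for the Dirichlet energy;
    dt <= 0.5 keeps the scheme stable on a unit grid spacing."""
    u = u.copy()
    for _ in range(steps):
        lap = np.zeros_like(u)
        lap[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]  # discrete Laplacian
        u += dt * lap                               # step against the gradient of E
    return u

x = np.linspace(0, 2 * np.pi, 256)
noisy = np.sin(x) + 0.3 * np.random.randn(x.size)
smoothed = descent_flow(noisy)
print(float(np.abs(smoothed - np.sin(x)).mean()))   # well below the noise level
```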

3.3. Curve Evolution Theory

In differential geometry, to describe the geometric characteristics of curves, we must start with the curvature $\kappa$ and the unit normal vector $\vec{N}$. Generally, these are the two important parameters describing the geometry of a curve: the unit normal vector describes the direction of the curve, and the curvature describes its degree of bending. The theory of curve evolution studies the deformation (evolution) of a curve using only such geometric parameters. In the theory of curve geometric evolution studied next, we mainly follow the quantities $\kappa$ and $\vec{N}$ and see the law of curve evolution from the law of their change with time. The theoretical description is as follows. Assume that the contour curve to be evolved on the plane is represented by $C(p, t)$, where $p$ is an arbitrary parameter along the curve and $t$ represents the evolution time; the curve has unit normal vector $\vec{N}$ and curvature $\kappa$. The curve evolution process is that the curve evolves in the normal direction at a speed given by the force $F$: when $F$ is positive, the curve evolves inward, and when it is negative, it evolves outward. The evolution of the curve along the unit normal vector can be expressed by the partial differential equation [25]

$$\frac{\partial C}{\partial t} = F \vec{N}. \tag{14}$$

In the formula for this evolution process, we can see that the deformation speed of each point on the curve is controlled by the speed function $F$, as illustrated in Figure 1. In this figure, the overall outline shows the evolving curve $C$, and the unit normal vector is represented by a line segment with an arrow. Analyzing this figure, evolution in the tangent direction does not change the curve; the essential change occurs in the direction perpendicular to the tangent.

Constant evolution and curvature evolution are hot issues in curve evolution theory in recent years, and more and more scholars pay attention to these two evolution problems. Curvature evolution can be described by the partial differential equation

$$\frac{\partial C}{\partial t} = \alpha \kappa \vec{N}, \tag{15}$$

where $\kappa$ is the curvature of the curve and $\alpha$ is a positive constant. This model has the following advantage: a closed curve will gradually shrink and deform smoothly. This shrinking process changes not only the shape of the curve but also its length, and the same behavior is obtained for general curves of various shapes. The evolution of the contour at constant speed can be described by the partial differential equation

$$\frac{\partial C}{\partial t} = V_0 \vec{N}, \tag{16}$$

where $V_0$ is a constant. From formulas (14)–(16), we can see that the deformation speed of each point on the curve is controlled. Under constant evolution, the curve is not smoothed; instead, sharp corners can be generated. Moreover, this equation does not preserve the curve region, whose size changes with the evolution. Therefore, curvature evolution and constant evolution are two evolution processes with opposite characteristics: constant evolution makes the curve produce corners, while curvature evolution removes the corners and makes the curve smooth.
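
A minimal level-set sketch of the two flows discussed above: the constant term $V_0$ moves the contour at uniform normal speed, while the curvature term $\alpha \kappa$ smooths it. The level-set formulation, grid, and coefficients are illustrative assumptions; the paper's own discussion is in terms of the parameterized curve $C$.

```python
import numpy as np

def evolve(phi, steps=100, dt=0.1, v0=0.5, alpha=0.2):
    """Level-set evolution phi_t = (v0 + alpha * kappa) * |grad phi|,
    combining constant evolution (v0) and curvature evolution (alpha)."""
    for _ in range(steps):
        gy, gx = np.gradient(phi)
        norm = np.sqrt(gx**2 + gy**2) + 1e-8
        # curvature = divergence of the unit normal grad(phi) / |grad(phi)|
        ky, _ = np.gradient(gy / norm)
        _, kx = np.gradient(gx / norm)
        kappa = kx + ky
        phi = phi + dt * (v0 + alpha * kappa) * norm
    return phi

# Signed distance to a circle of radius 20 in a 128 x 128 grid.
yy, xx = np.mgrid[0:128, 0:128]
phi = np.sqrt((xx - 64.0)**2 + (yy - 64.0)**2) - 20.0
phi = evolve(phi)
print(int((phi < 0).sum()))   # enclosed area shrinks under the constant flow
```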

Using their respective aesthetic attribute tags, parallel supervised learning is used to extract the global aesthetic depth features and jointly classify the aesthetic levels. In addition, by modelling the predicted image aesthetic evaluation distribution as a Gaussian and using curve evolution based on the partial differential equation to realize the predicted distribution, the work is creatively extended from single-score aesthetic evaluation to predicting the image aesthetic evaluation distribution. The objective function is

$$D(p \,\|\, q) = \sum_{i} p_i \log \frac{p_i}{q_i},$$

where $p$ represents the potential (ground-truth) distribution, calculated by fitting the image aesthetic grade histogram, and $q$ represents the predicted distribution. If and only if $p$ and $q$ are completely equal, the divergence of $q$ from $p$, and hence the value of the objective function, is 0. The convolutional neural network model is shown in Figure 2.
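
Under the assumption that the objective takes the Kullback-Leibler divergence form reconstructed above, the following sketch (with a synthetic ten-grade histogram) shows the computation and its key property of vanishing only when the two distributions coincide.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """D(p || q): zero if and only if the two distributions are equal."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

p = np.array([1, 3, 8, 15, 25, 22, 14, 7, 4, 1])   # toy aesthetic grade histogram
q = np.array([2, 4, 7, 16, 23, 21, 15, 7, 4, 1])   # toy predicted distribution
print(kl_divergence(p, q))   # small positive value
print(kl_divergence(p, p))   # ~0 when the distributions coincide
```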

3.4. Image Aesthetic Evaluation

Compared with early work focusing mainly on technical image quality, later image aesthetic evaluation adds the evaluation of the subjective sensory beauty of images. The common framework of sensory-based image aesthetic evaluation is shown in Figure 3. The key to obtaining a good image aesthetic classification model is how to effectively describe and measure aesthetics and to design effective aesthetic features for training the classification model according to these theoretical measurement methods.

As early as 1933, the American mathematician Birkhoff used mathematical models to describe aesthetics in his works. His aesthetic measurement formula is $M = O / C$, where $M$ refers to the aesthetic measure, $O$ refers to order, and $C$ refers to complexity. This formula shows that the measurement of aesthetics is related to the internal order of the image and to its complexity. Machado et al. believe that whether an image is beautiful is related to the complexity of the image itself (IC) and to the complexity of human brain processing of the image information (PC). The measurement formula proposed by Machado et al. is $M = IC / PC$, indicating that the aesthetics of the image is directly proportional to the IC and inversely proportional to the PC. When the IC of an image is high, that is, the internal repeatability of the image is high, although the image looks complex, the recognition and processing by the human visual system are relatively simple; that is, the PC is low. At this time, the aesthetic measurement value of the image is high, so people still feel its "beauty." In recent years, Rigau et al. proposed an aesthetic quality measurement method combining information theory and Kolmogorov complexity; their method defines the aesthetic measure by calculating the Shannon entropy and Kolmogorov complexity of the image and carries out digital aesthetic analysis.
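
To make these measures concrete, the sketch below computes one hedged instantiation of Birkhoff's $M = O/C$ in the spirit of Rigau et al., using the Shannon entropy of the gray-level histogram and a zlib compression ratio as a crude stand-in for Kolmogorov complexity; the specific choices of "order" and "complexity" terms are illustrative assumptions, not the cited formulations.

```python
import zlib
import numpy as np

def shannon_entropy(img):
    """Shannon entropy (bits per pixel) of the gray-level histogram."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def compression_ratio(img):
    """zlib compression ratio as a crude stand-in for Kolmogorov complexity."""
    raw = img.astype(np.uint8).tobytes()
    return len(zlib.compress(raw)) / len(raw)

img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # toy image
H = shannon_entropy(img)        # palette complexity, in [0, 8] bits
K = compression_ratio(img)      # higher means harder to describe
order = 8.0 - H                 # histogram redundancy as a toy "order" term
print(H, K, order / (8.0 * K))  # one possible M = O / C value
```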

4. Experiment and Analysis

4.1. Experiment on Characteristic Parameters of Color Harmony

The CUHK database consists of high aesthetic and low aesthetic images selected from the image competition website (http://www.dpchallenge.com), taken by photographers. The database is drawn from 60000 images rated by many users of the website: the 10% of images with the highest scores and the 10% with the lowest scores, i.e., 6000 images each, are taken as the high aesthetic images and the low aesthetic images, respectively, forming the CUHK database. Before the experiment, this paper randomly selects half of the high aesthetic images and half of the low aesthetic images from the database as the training set, and the remaining images form the test set. The experimental method is similar to that of the photoquality dataset database. The image database contains not only images with subject objects but also natural landscape images without subject objects.

The experiment is divided into a color harmony feature parameter experiment and a feature fusion experiment. In this paper, a support vector machine with an RBF kernel is used to build the image aesthetic classification model, in which the best parameters $C$ and $\gamma$ are selected by 5-fold cross-validation. High aesthetic images are defined as positive samples and low aesthetic images as negative samples. This paper randomly selects 750 high aesthetic images and 644 low aesthetic images from the Datta database, for a total of 1394 images for testing and training; half of the high aesthetic images and half of the low aesthetic images are randomly chosen as the training set, and the rest form the test set. The block size is varied in the experiment. Figure 4 shows the experimental results.
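
The classification setup described above can be sketched with scikit-learn: an RBF-kernel SVM whose parameters $C$ and $\gamma$ are chosen by 5-fold cross-validated grid search. The feature matrix below is synthetic and the parameter grid is an assumption, so the printed accuracy only demonstrates the pipeline.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(1394, 20))          # stand-in for color/composition features
y = np.r_[np.ones(750), np.zeros(644)]   # 750 high-, 644 low-aesthetic samples

# Half of each class for training, the rest for testing, as in the paper.
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.5, stratify=y,
                                      random_state=0)

grid = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid={"C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1, 1]},
    cv=5,   # 5-fold cross-validation
)
grid.fit(Xtr, ytr)
print(grid.best_params_, grid.score(Xte, yte))
```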

It can be seen from Figure 4 that, at the optimal block size, the accuracy is the highest (78.01%), indicating that the color within a block of this size is relatively uniform, which suits the Moon-Spencer color harmony model. Below this block size, the accuracy increases as the block size increases; above it, the accuracy decreases as the block size increases. The whole image is divided into several subblocks, and the color histogram of each subblock is counted separately, so variation across the whole image can be reflected through the differences between the small subblocks. The more the blocks, the higher the accuracy of the measurement, but the amount of calculation increases accordingly.

This paper not only tests each type of feature separately but also tests the fused features. As shown in Figure 5, accuracy refers to the ratio of the number of correctly classified images to the number of all test images, that is, the average accuracy.

As can be seen from Figure 5, the average accuracy of the color harmony features is 60.253%, the average accuracy of the composition features is 63.621%, and the average accuracy of their fusion is 67.425%, which is higher than that of either single feature. For high aesthetic images, the accuracy of the two fused features is higher than that of a single feature, and likewise for low aesthetic images. Manually designed aesthetic features are often inspired by photography or psychology, and they have known limitations: owing to the fuzziness of some photographic or psychological rules and the difficulty of implementing them computationally, these features are usually only approximations of the rules, so their effectiveness is hard to guarantee. Generic features, such as SIFT and the Fisher vector, capture general properties of natural images rather than specifically describing image aesthetics, so they also have great limitations.

4.2. Accuracy of Image Aesthetic Evaluation

In this paper, the photoquality dataset database is used for training and testing, and the salient features GC and HC and the edge feature Sobel of the pictures in the database are extracted and used for data parallelism. The test accuracy of the seven subclasses in the database is shown in Table 1. It can be seen from Table 1 that the classification accuracy obtained by the NP-DP-DCNN method, which combines data parallelism and structure parallelism, is the best globally, being 0.6% higher than the previous best 2-NP-DCNN and 3-NP-DCNN methods. In the four subcategories of animals, architecture, scenery, and people, the classification accuracy obtained by the NP-DP-DCNN structure is higher than that of the other methods, about 1% above the previous best. This is mainly because there are obvious differences between the high aesthetic and low aesthetic images in these categories: the theme area of a high aesthetic image is obvious, while the subject area of a low aesthetic image is not, and its composition is messy. For these four categories, the GC, HC, and Sobel features can distinguish high aesthetic images from low aesthetic images well. Therefore, after the GC, HC, and Sobel features are input into the parallel network structure in parallel with the RGB original data, more effective information is provided for training the model as a whole; the overall test accuracy is improved by 0.6%, and better test accuracy is obtained in the four subcategories with clear discrimination between high aesthetic and low aesthetic images.

In the plant subclass, the NP-DP-DCNN method is about 1.77% higher than the NP-DCNN method because, as can be seen from the images in the database, there are obvious differences between high aesthetic and low aesthetic images in the subject area. However, NP-DP-DCNN is 0.14% lower than the best method, that of Luo and Tang [16], whose complexity-combined feature has a good classification effect for the plant subclass, reaching 89.72% classification accuracy. In the static and night scene subclasses, the classification accuracy of the NP-DP-DCNN method is roughly the same as that of the NP-DCNN method, which only performs structural parallelism; the GC, HC, and Sobel features added by NP-DP-DCNN do not yield the theoretically expected improvement in classification accuracy here. The classification accuracy on the static subclass is basically the same as that of 2-NP-DCNN, and the classification accuracy on the night scene subclass is 0.1% lower than that of 2-NP-DCNN.

4.3. Overcome Overfitting and Underfitting Problems

At the same time, this paper also applies the NP-DP-DCNN structure to the CUHK database, inputs the salient features GC and HC and the edge feature Sobel of the extracted images into the network structure as parallel data for training and testing, and obtains the test accuracy of the seven subcategories, as shown in Figure 6.

According to the results in Figure 6, the classification accuracy of NP-DP-DCNN applied to the CUHK database is higher than that of the other methods: it is 0.5% higher than that of the best 3-NP-DCNN method proposed in this paper, and its accuracy of 88% is 2.4% higher than that of the best traditional feature extraction method.

In order to prove that the NP-DCNN network structure proposed in this paper can prevent underfitting and overfitting, this paper gives the classification error rate curves, which describe how the classification error rate changes as the number of iterations increases. The number of parallel networks in NP-DCNN is set to 2, 3, and 4, as shown in Figure 7. It can be seen from Figure 7 that NP-DCNN rarely suffers from underfitting or overfitting.

The larger the standard deviation of a learned convolution kernel, the more representative the image features it can extract and the better the kernel has been learned. Therefore, this paper averages the standard deviations of the multiple convolution kernels in the first convolution layer of the convolutional neural network structure to obtain a statistic that represents how well the convolution kernels are learned. Table 2 shows the convolution kernel standard deviations of the first convolution layer in Arch2, ParaArch1, and ParaArch5, respectively. It can be seen from Table 2 that the average standard deviation of ParaArch5 is the largest, which shows that the convolution kernels learned by ParaArch5 are the best. Although the standard deviation of conv1a in ParaArch5 is a little smaller than that of Arch1, in general, the convolution kernel standard deviations learned by ParaArch5 are the largest and most differentiated. Therefore, this paper can draw the conclusion that the convolution kernels learned by the parallel network structure have better classification performance than those learned by the nonparallel structure and that, among the parallel network structures, the convolution kernels learned by the structure with three parallel convolutional neural networks are the best.
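
The kernel statistic used in Table 2 can be computed as in the sketch below; it assumes a PyTorch model whose first convolution layer is reachable through model.modules(), and the tiny network at the end is only a placeholder.

```python
import torch
import torch.nn as nn

def first_conv_kernel_std(model):
    """Average the per-kernel standard deviations of the first conv layer."""
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            w = m.weight.detach()                  # (out_ch, in_ch, kH, kW)
            per_kernel = w.flatten(1).std(dim=1)   # one std per learned kernel
            return per_kernel.mean().item()
    raise ValueError("model has no Conv2d layer")

toy = nn.Sequential(nn.Conv2d(3, 64, kernel_size=11, stride=4), nn.ReLU())
print(first_conv_kernel_std(toy))
```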

5. Conclusion

Experiments show that the three structures based on the deep convolutional neural network proposed in this paper achieve better classification performance than traditional feature extraction methods on the two public aesthetic evaluation databases. The average accuracy of the color harmony features is 70.85%, and the average accuracy of the composition features is 64.35%. At the same time, this paper studies the NP-DCNN structure with regard to preventing overfitting and underfitting, and the experiments show that NP-DCNN plays a certain role in preventing both. Building on this work, it is possible to further improve the aesthetic evaluation through more detailed calculation and analysis of aesthetic characteristics and the application of newly proposed aesthetic characteristics. If the correspondence between image category, image content, and aesthetic characteristics can be explored further, it will also help to improve the accuracy of aesthetic evaluation and the ability of aesthetic prediction.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

This work was supported by the Natural Science Plan Project of Xiaogan in 2020: Research and design of household waste classification system (No. XGKJ2020010038), and the higher education teaching reform project of China Textile Industry Federation in 2021: Research on the innovative development model of practical education in art experiment demonstration center under the background of "new liberal arts" (No. 2021BKJGLX699).