Abstract

At present, image mining is based mainly on local and key features, focusing on texture and statistical grayscale features and only rarely on edge and shape features. However, the contour is also an important feature for image shape recognition. In this paper, a target image contour coding algorithm was adopted, and an LCV (Local Chan-Vese) segmentation model with good image boundary acquisition capability, which can reflect the contour features of the target image, was selected to segment the contour of the original image. A detailed analysis of the features of the contour coding algorithm was carried out through experiments; the experimental results showed that the algorithm represents a significant technological breakthrough in image feature extraction and recognition.

1. Introduction

At present, image mining is based mostly on texture and statistical grayscale features. However, during actual target image recognition, the image features are not derived from texture, grayscale, pixel density, and other such features alone [1]. In fact, in specific application fields, the image contour is also a key target feature. Therefore, specific processing of the target contour can bring a new concept to the image recognition field [2–8], especially the recognition of physical images in 3D space. The mining depth in this field is still insufficient at home and abroad. Although feature extraction of the target image contour was proposed in some references [9–13], the secondary conversion of the contour features was still not analyzed in depth: mature and complete model algorithms were lacking, the retrieval of the contour boundary showed large errors when the target image contained noise, and the traditional secondary feature extraction discriminated poorly among similar images, which prevented the wide application of target contour features in the image recognition field.

At present, most traditional image coding algorithms are based on the internal textures and pixels of the image, and such algorithms are applied mainly in image compression and background prediction. Several traditional image coding algorithms are described as follows:

Entropy coding refers to the lossless compression of a nonsemantic data stream based mainly on the statistical information of the data. The common entropy codes include the Shannon code, the Huffman code, and the arithmetic code. In video coding, the entropy code is a compressed code stream, transformed from the element symbols representing the video sequence, for storage or transmission. The input symbols include additional information, header information, motion vectors, and transform coefficients. Several common entropy codes are described briefly as follows:

Shannon coding, also called the Shannon-Fano algorithm, was developed by Robert Fano, a professor at MIT, based on the information theory proposed by Shannon in 1948–1949; it is a kind of variable-length symbol coding. The Shannon-Fano algorithm codes from top to bottom as follows: first, sort the symbols by their probability of occurrence; second, recursively divide the symbols into two parts with approximately equal total frequencies and mark the two parts with 0 and 1, respectively; the resulting strings of 0s and 1s are the binary codes of the symbols.
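
As a minimal illustration of this top-down splitting, the following MATLAB sketch (the function and variable names are our own; the probabilities are assumed to be sorted in descending order before the first call) returns the 0/1 code strings:

    % Minimal Shannon-Fano sketch: p is a vector of symbol probabilities
    % sorted in descending order; codes is a cell array of 0/1 code strings.
    function codes = shannon_fano(p, prefix)
        if numel(p) == 1
            codes = {prefix};                    % one symbol keeps the accumulated prefix
            return;
        end
        c = cumsum(p);                           % find the most balanced split point
        [~, k] = min(abs(2*c(1:end-1) - c(end)));
        codes = [shannon_fano(p(1:k), [prefix '0']), ...
                 shannon_fano(p(k+1:end), [prefix '1'])];
    end

For example, shannon_fano([0.4 0.3 0.2 0.1], '') yields the codes {'0', '10', '110', '111'}.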

Huffman coding is a coding method proposed by David Albert Huffman in the early 1950s. It is built bottom-up and is statistically optimal: the shortest code is assigned to the most frequent symbol, and so on. The coding steps are as follows: the first step is to arrange the leaf nodes of the symbols in probability order; the second step is to connect the two nodes with the lowest probabilities to obtain their parent node and mark the lines to the left and right child nodes with 0 and 1; the third step is to repeat the second step until the root node is obtained, yielding a binary tree; the fourth step is to read off the binary code of each symbol, i.e., the 0/1 string along the path from the root node to its leaf node. Note that these steps do not specify a unique code: the probability ordering can run from right to left or from left to right, because the construction depends only on the probabilities, and it makes no difference whether the left or the right branch is marked with 0 or 1. The resulting codes therefore differ, but their average length is the same.
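
A minimal bottom-up sketch in the same spirit (names are our own) repeatedly merges the two least probable nodes until only the root remains:

    % Minimal Huffman sketch: merge the two least probable nodes and prepend
    % the branch bit to every symbol under each merged node.
    function codes = huffman_code(p)
        n = numel(p);
        codes = repmat({''}, 1, n);
        groups = num2cell(1:n);                  % symbols currently under each node
        while numel(groups) > 1
            [~, idx] = sort(p);                  % the two smallest probabilities
            a = idx(1); b = idx(2);
            for s = groups{a}, codes{s} = ['0' codes{s}]; end
            for s = groups{b}, codes{s} = ['1' codes{s}]; end
            groups{a} = [groups{a} groups{b}];   % merge b into a as the parent node
            p(a) = p(a) + p(b);
            groups(b) = []; p(b) = [];
        end
    end

For p = [0.4 0.3 0.2 0.1], huffman_code returns codes of lengths 1, 2, 3, and 3 bits, giving the optimal average length of 1.9 bits per symbol.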

The prototype of arithmetic coding was proposed by P. Elias in 1960; the algorithm was pioneered by J. Rissanen and R. Pasco in 1976, systematized and implemented by G. G. Langdon and Rissanen in 1979, and corrected by Rissanen into a lossless compression algorithm in 1984. In terms of information theory, this coding method, like Huffman coding, is a variable-length code of the optimal type. Its advantage is that it is no longer limited to the integer bit lengths of Huffman coding: for example, if a symbol carries only 0.1 bits of information, Huffman coding must still represent it with 1 bit, which wastes storage space. The treatment of symbols in arithmetic coding differs from the above two coding methods. Each symbol is represented by a real subinterval of [0,1) with width equal to its occurrence probability, so that all the symbols in the symbol table exactly fill the interval [0,1), and the input symbol string (data stream) is mapped to a real value in [0,1).
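
The interval narrowing described above can be sketched in a few lines of MATLAB (illustrative; symbols are assumed to be given as indices into the probability table):

    % Arithmetic-coding sketch: each input symbol shrinks the current interval
    % [lo, hi) in proportion to its probability; any number in the final
    % interval identifies the whole input string.
    function [lo, hi] = arith_interval(symbols, p)
        c = [0, cumsum(p(:)')];                  % cumulative slot boundaries
        lo = 0; hi = 1;
        for s = symbols
            w = hi - lo;
            hi = lo + w * c(s + 1);              % upper edge of symbol s's slot
            lo = lo + w * c(s);                  % lower edge of symbol s's slot
        end
    end

For instance, with p = [0.5 0.5], the input string [1 2 1] maps to the interval [0.25, 0.375).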

Predictive coding exploits the correlation among discrete signals to predict unknown signal values and then codes the prediction errors. Obviously, the coding accuracy is closely related to the prediction error: if the predicted value is accurate, the error will be small. When the accuracy requirements permit, this method can also be used to compress the data. Its algorithm is simple and fast and is easy to implement in hardware, although prediction errors tend to diffuse through the decoded signal.

Since image data and sound data are sampled signals in which the differences between adjacent values are small, the predictive coding method suits them well and does not require many bits. The standard predictive coding diagram is shown in Figure 1, and the corresponding coding steps are as follows (writing the input signal, its prediction, the prediction error, the quantized error, and the reconstructed signal as x(n), x̂(n), e(n), e′(n), and x′(n), respectively):

Step 1: calculate the difference between x(n) and the prediction x̂(n) generated by the predictor at the transmitting end to obtain the prediction error e(n) = x(n) − x̂(n).

Step 2: e(n) is quantized by the quantizer to e′(n), which introduces a quantization error.

Step 3: e′(n) is encoded by the encoder into a code word for sending, and e′(n) is added to x̂(n) to restore the input signal as x′(n) = x̂(n) + e′(n). Because of the quantization error, x′(n) is not equal to x(n), but they are very close. The local decoder at the sending end is the predictor and its loop.

Step 4: the memory in the predictor at the sending end stores x′(n) for the prediction of the next pixel; then input the next pixel and repeat the above operations.
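
A minimal DPCM sketch of these four steps (illustrative; the previous-pixel predictor and the uniform quantizer with step q are our own simplifying assumptions):

    % DPCM sketch: x is the input signal, q the quantizer step size; code holds
    % the quantized prediction errors e'(n), rec the locally decoded x'(n).
    function [code, rec] = dpcm_encode(x, q)
        code = zeros(size(x));
        rec  = zeros(size(x));
        pred = 0;                                % predictor memory, initially empty
        for n = 1:numel(x)
            e = x(n) - pred;                     % Step 1: prediction error e(n)
            code(n) = q * round(e / q);          % Step 2: quantize e(n) to e'(n)
            rec(n)  = pred + code(n);            % Step 3: restore x'(n) = x^(n) + e'(n)
            pred    = rec(n);                    % Step 4: store x'(n) for the next pixel
        end
    end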

In 1968, transform coding was first proposed by Pratt using the Fourier transform, followed by the slant transform, the Walsh transform, the discrete cosine transform (DCT), and the Karhunen–Loève (K-L) transform. The essence of transform coding is to reduce the spatial correlation of the image signals at the frequency domain level, and its bit rate reduction effect is similar to that of predictive coding. Since the 1980s, a hybrid coding scheme combining transform coding with motion compensation emerged gradually and promoted great progress in digital video coding technologies. In the early 1990s, the well-known video coding recommendation for videoconferencing applications based on the hybrid coding scheme was first proposed by the ITU. Afterwards, with the continuous improvement of video coding standards and recommendations, hybrid coding technology slowly matured and became the digital video coding technology with the highest application frequency. Transform coding is an indirect coding method generated by an orthogonal transformation of the image information. Because the orthogonal transformation removes the correlations among the signals, the redundancy of the signals is reduced and the coding becomes relatively easy. In this way, the coding of the image is transformed into the coding of transform coefficients; since the amount of data is small and the coefficients are uncorrelated, a higher compression ratio can be obtained. The Haar transform, the Walsh–Hadamard transform, the discrete Fourier transform, and the discrete cosine transform are all quasioptimal transforms, of which the last is used most frequently.
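
As a small illustration of this energy-compaction idea (a sketch, assuming the Image Processing Toolbox and its cameraman.tif test image; the 5% threshold is arbitrary):

    % Transform-coding sketch: the 2D DCT concentrates the image energy in a few
    % coefficients, so discarding the small ones compresses with little loss.
    A = im2double(imread('cameraman.tif'));
    C = dct2(A);                                 % forward transform
    C(abs(C) < 0.05 * max(abs(C(:)))) = 0;       % keep only significant coefficients
    B = idct2(C);                                % reconstruction from the kept ones
    kept = nnz(C) / numel(C);                    % fraction of coefficients retained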

Thus, a target image contour coding algorithm was adopted, and the feasibility analysis and related experimental analysis of the algorithm were carried out in this paper. The target image contour coding algorithm is a model algorithm that applies a coding transformation to the contour feature information after extracting the contour features of the target image. Compared with other traditional coding algorithms, the adopted algorithm plays a unique role in the secondary conversion of image contour features and can extract the image boundary contour information from multiple perspectives. A detailed analysis of the features of the contour coding algorithm was carried out through experiments; the experimental results showed that the algorithm is a significant technological breakthrough in image feature extraction and recognition.

2. Target Image Contour Coding Algorithm

From the above traditional coding algorithms, it can be seen that traditional image coding is used mainly for the compression and storage of images. In a certain sense, the image attribute features are not mined, that is, the image contour attribute features are not extracted; the data quantity of the digital image is merely reduced effectively under the premise of losing as little of the stored image information as possible. Therefore, a transformation model based on secondary feature coding of the target contour was adopted in this paper, and the characteristics and stability of the algorithm were analyzed through verification experiments.

The target image contour coding algorithm is used mainly to obtain the target image boundary coordinate information, perform a certain image conversion, and then extract the target image contour information matrix. The model differs from the traditional coding algorithms in coding role, coding speed, and coding principle; it plays a unique role in the secondary conversion of extracted image contour features and can be used to extract the image boundary contour information from multiple aspects. The coding algorithm steps are as follows (a code sketch follows the list):

Step 1: obtain the target image boundary information from the image analysis model, represented as (x, y) coordinates.

Step 2: normalize the coordinates of the contour boundary obtained in Step 1 into the interval [1,255], respectively, to eliminate the effect of differences in the absolute pixel positions of the image.

Step 3: convert the contour matrix data obtained in Step 2 into uint8 format for subsequent calculation and processing.

Step 4: carry out grayscale processing on the contour matrix obtained in Step 3 and convert it to obtain the image boundary coding matrix.

Step 5: binarize the image boundary coding matrix obtained in Step 4 by setting a corresponding threshold.

Step 6: in order to combine the image boundary coding matrix with other model algorithms, arrange and encode the binary coding matrix obtained in Step 5 in the row, column, and oblique directions, and finally draw the corresponding coding chains. The coding in the row direction converts each row of the binarized coding matrix into one row in order; the coding in the column direction converts each column of the coding matrix into one row in order; and the coding in the oblique direction converts each diagonal of the coding matrix, from upper right to lower left, into one row in order.
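
The following MATLAB sketch strings the six steps together (a sketch under stated assumptions: the (x, y) boundary coordinates are taken to come from the segmentation model, the contour is rasterized onto a 255×255 canvas before resizing, and the 20×50 matrix size and 0.9 threshold anticipate the settings used in Section 4):

    % Contour coding sketch: from boundary coordinates to the three coding chains.
    function [rowChain, colChain, diagChain] = contour_code(x, y)
        % Step 2: normalize both coordinates into [1, 255]
        x = 1 + 254 * (x - min(x)) / (max(x) - min(x));
        y = 1 + 254 * (y - min(y)) / (max(y) - min(y));
        % Steps 3-4: rasterize into a grayscale boundary matrix in uint8 format
        A = zeros(255, 255, 'uint8');
        A(sub2ind(size(A), round(y), round(x))) = 255;
        A = imresize(A, [20 50]);                % uniform coding matrix size
        % Step 5: binarize with the chosen threshold coefficient
        B = im2bw(A, 0.9);
        % Step 6: arrange the binary matrix in row, column, and oblique directions
        rowChain = reshape(B.', 1, []);          % rows joined end to end
        colChain = reshape(B, 1, []);            % columns joined end to end
        diagChain = [];
        for k = size(B, 2) - 1 : -1 : -(size(B, 1) - 1)
            diagChain = [diagChain, diag(B, k).'];   % diagonals, upper right first
        end
    end

Each chain is a 1D 0/1 sequence, which is what allows the direct integration with other models noted in Section 3.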

3. Feasibility Analysis of Target Image Contour Coding Algorithm

From the principle of the above coding model algorithm, it can be seen that the most critical feature for the target image contour coding algorithm adopted in this paper is the image contour boundary; that is, as long as the target image has a definite contour boundary that can be obtained with some image processing method, the target features can be extracted to obtain the contour feature information of the image. It is worth noting that an actual object is 3D: when it is converted into an image, only its projection along one viewing direction is obtained, so its full contour cannot be correctly reflected in a 2D image. Therefore, images with 3D features were not studied, and only typical images with 2D features were studied and analyzed in this paper.

At present, for most images, whether they are obtained from static or dynamic objects, specific contours can be generated on the images. As long as an effective target object boundary contour is obtained from a reasonable model, the model algorithm adopted in this paper can be executed. At the same time, because the contour is transformed into a 1D 0/1 coding sequence in the subsequent steps of the model algorithm, this algorithm can be integrated directly with other models, and there is no integration problem in the subsequent calculation process. Therefore, the model is practically applicable.

4. Experimental Analysis of Target Image Contour Coding Algorithm

The target image contour coding algorithm adopted in this paper can be used to extract the secondary features of the target image contour in a targeted manner and convert them into key image information. Coding experiments on the contour features extracted with the level set model Local Chan-Vese (LCV) [1] segmentation were carried out to analyze the features of the target image contour coding algorithm as follows.

All experiments in this paper were completed under the experimental conditions of an Intel i7-4712HQ 2.30 GHz CPU, an NVIDIA 610M graphics card, 8 GB of memory, and MATLAB R2011b.

Because the LCV model has better image boundary acquisition capability under uneven image grayscale and brightness conditions, and because the extracted target image contour boundary is smooth and reflects the detailed contour features of the target, the target image contour coding algorithm adopted in this paper was based on the image contour boundary extracted by the LCV model. The algorithm was analyzed with both actual images and artificial images in the experiments.

4.1. Analysis of the Image Contour Coding Algorithm in Different Directions

The ultimate objective of the image coding adopted in this paper was to extract the features of the image. For different types of pictures, different coding chains can be obtained with different coding methods. The codes in different directions were analyzed: coding in the row direction, coding in the column direction, and coding in the oblique direction. During the experiments, the uniform size of the contour coding matrix was set to 20×50, which can be modified with the statement A1 = imresize(A1,[20 50]), and the binarization threshold coefficient was 0.9, which can be modified with the statement A1 = im2bw(A1,0.9). Coding experiments were carried out both with the global pixels of the image and with the contour boundary of the image obtained from the level set model; the experimental results are shown in Figure 2.
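
For reference, a consolidated form of the two settings above (a usage note; A1 denotes the contour matrix already obtained from the segmentation step):

    A1 = imresize(A1, [20 50]);   % uniform 20x50 coding matrix size
    A1 = im2bw(A1, 0.9);          % binarization threshold coefficient 0.9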

From the coding results in Figure 2, the following conclusions were obtained.

In Figure 2(c), the global pixels of the image were coded directly. In the resulting coding chain there were many white value segments, which were small, thin, and unevenly distributed. Such a coding chain interprets the image features only weakly, because the small, thin white value segments disappear when part of the image changes slightly, leaving little memory of the image in the interpretation.

In Figure 2(d), the image boundary contour was extracted from the level set model and encoded in the row direction. The coding results showed that there were fewer white value segments in the coding chain, and they were longer and more obvious. Therefore, the coding result interprets the image features to a high degree. When the image changes locally, the relative positions and lengths of the segments in the coding chain still do not change significantly. Therefore, the coding in the row direction has higher anti-interference ability in image recognition.

In Figure 2(e), the coding chain of the contour boundary in the column direction showed that the white value segments were narrower, thinner, and more uniformly distributed; in other words, the coding has good anti-interference ability, because only some white value segments disappear from the coding chain when the image changes locally, which has only a small impact on the image features. Therefore, the coding in the column direction also retains anti-interference ability in image recognition.

In Figure 2(f), the coding chain of the contour boundary in the oblique direction showed that the white value segments were likewise narrower, thinner, and more uniformly distributed, and the coding also has a certain anti-interference ability. Therefore, the coding in the oblique direction can largely reflect the overall image features and still has anti-interference ability in image recognition.

The experimental results with the above coding directions can be summarized as follows. In the extraction of global pixels, the image is only compressed while the original image information is kept unchanged; when arranged in the row direction, the coding chain obtained from the global pixel features contains few, narrow, and unevenly distributed white value segments. These white value segments are therefore greatly affected by local changes of the image and are easily lost in the presence of other interference factors, so the coding chain is not suitable for characterizing the image features. In the coding in the row direction, the white value segments were wider and fewer; it therefore reflects single-phase image features clearly, has higher anti-interference ability, and is suitable for single-phase image recognition. In the coding in the column and oblique directions, the white value segments were thinner, more numerous, and evenly distributed; these codings are therefore suitable for multiphase image recognition.

4.2. Analysis of Coding Algorithms for Contour Attributes with Different Parameters

From the coding algorithm flow, it can be seen that the core parameters affecting the image contour boundary coding are the binarization threshold coefficient and the size of the image coding matrix. Different image features can therefore be extracted with different core parameters. In actual applications, in order to extract a coding chain that reflects the image features well, relatively good coding chain parameters should be provided.

4.2.1. Effect of Threshold Coefficient

The coding experiments on the image contour boundary extracted from the level set model in different directions were repeated with the binarization threshold coefficient set to 0.7. The coding results are shown in Figure 3.
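
Relative to Section 4.1, only the threshold argument of im2bw changes (a usage note; A1 is the resized contour matrix as before):

    B = im2bw(A1, 0.7);           % 0.7 instead of the 0.9 used in Section 4.1;
                                  % a lower level maps more gray levels to white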

From the coding results in the different directions in Figure 3, it can be seen that the number and width of the white value segments of the image contour boundary coding increased in all directions when the binarization threshold coefficient was decreased during the coding process.

In the coding result in the row direction, the white value segments were wider. Compared with the coding result with the binarization threshold coefficient of 0.9, the white value segments of the original coding did not disappear; only the width and number of segments in the coding chain increased relative to the original ones. Therefore, for special images whose coding is not significant, the image contour coding chain can be enhanced by decreasing the binarization threshold coefficient.

In the coding in the column and oblique directions, when the binarization threshold coefficient was decreased during the coding process, the number of white value segments in the coding chain also increased, but the positions of the original white value segments did not change. Therefore, the degree of discrimination of the image coding chain can also be enhanced by decreasing the binarization threshold coefficient; that is, the similarity between the coding chains of two different images can be reduced and the difference between image features enhanced.

4.2.2. Effect of Coding Matrix Size

The coding experiments on the image contour boundary extracted from the level set model in different directions were repeated with uniform image coding matrix sizes of 10×50 and 20×50 and a threshold coefficient of 0.9. The coding results in the row and column directions are shown in Figure 4.
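
Correspondingly, only the target size passed to imresize changes (a usage note; A1 as before):

    B10 = im2bw(imresize(A1, [10 50]), 0.9);   % 10x50 coding matrix
    B20 = im2bw(imresize(A1, [20 50]), 0.9);   % 20x50 coding matrix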

From the above experimental results, it can be seen that when the coding matrix was reduced, the number and width of the white value segments of the image contour boundary coding decreased in all directions, and the relative positions of the white value segments changed. In the coding results in the row and column directions, relatively clear white value segments could still be obtained, but their number was only half of that with the 20×50 coding matrix. Therefore, when there are many types of images in image recognition, the matrix size of the coding process can be reduced appropriately, so that the coding chain can accommodate more sample codes for discrimination.

4.3. Experimental Analysis of Facial Expression Recognition with Target Contour Coding Based on Active Appearance Models (AAM) [14]

In this paper, the contours of the eyebrows, eyes, and mouth, which are closely related to facial expressions, were extracted by AAM to obtain the contours of each part (N1–N6), and the adopted algorithm was then used to convert and encode the contours to obtain the feature values of each part. The coding matrix size was 20×50, the threshold coefficient was 0.8, and the coding was performed in the row direction. The obtained facial feature parts and the coding results in the row direction are shown in Figure 5. Finally, the comprehensive judgment of the facial expression was given by the specific expression characteristics of each part of the face, and the expression recognition rate on the JAFFE face database reached 98.78%.

5. Conclusion

The contour is important information for image recognition. Relevant research has shown that, for any object under specific conditions, the contour is a main factor distinguishing different objects; even for extremely similar objects, the local differences in the target contour are quite significant. For example, although the shapes of the leaves on one tree are very similar, if all the leaves are normalized to a uniform scale, their contours still show obvious differences. Therefore, compared with other commonly used image recognition algorithms, such as the K-L recognition algorithm, texture-based recognition algorithms, model-based recognition algorithms, and geometric feature-based recognition algorithms, the image contour-based recognition algorithm requires only the image boundary shape information, and different target images have different contours. Therefore, a transformation model that performs secondary feature coding of the image boundary contour information from multiple aspects, the target image contour coding algorithm, was adopted in this paper.

From the above coding experiments with the global pixels of the image and the coding experiments with the image boundary contours in all directions based on the level set energy, it can be seen that the coding features obtained from the global pixels of the image were weak, could not fully reflect the features of the image, and might be lost under external interference; the anti-interference ability was therefore poor. Through the experiments with actual images and artificial images, it could be seen that, for coding with image boundary features, the coding methods in the row, column, and oblique directions could better reflect the image features. This is undoubtedly an effective attempt in image feature extraction and recognition.

Data Availability

The data that support the findings of this study are available from the corresponding author.

Conflicts of Interest

The author declares no conflicts of interest.

Acknowledgments

The research was supported by Guangdong General Colleges and Universities (Natural Science) Special Innovation Projects of China (no. 2018GKTSCX060).