Abstract

Video-based moving vehicle detection and tracking is an important prerequisite for vehicle counting in complex transportation environments. However, in complex natural scenes, the conventional optical flow method cannot accurately detect the boundary of a moving vehicle because of the shadow it casts. In addition, vehicles tracked by traditional algorithms are often occluded by trees, buildings, etc., and particle filters are also susceptible to particle degradation. To solve these problems, this paper proposes a moving vehicle detection and tracking method based on the optical flow method and an immune particle filter algorithm. The proposed method first uses the optical flow method to roughly detect the moving vehicle, then uses a shadow detection algorithm based on the HSV color space to mark the shadow position after threshold segmentation, and further combines a region-labeling algorithm to remove the shadow and accurately detect the moving vehicle. An improved affinity calculation and antibody mutation function are proposed to give the particle filter algorithm a certain adaptivity and robustness to scene interference. Experiments are carried out in complex traffic scenes with shadow and occlusion interference. The experimental results show that the proposed algorithm can well handle the interference of shadow and occlusion and realize accurate detection and robust tracking of moving vehicles in complex transportation environments, and it has the potential to be deployed on a cloud computing platform.

1. Introduction

In recent years, the popularization of vehicles has caused severe traffic accidents and traffic congestion [1], and it has become necessary to relieve traffic pressure during peak periods. Therefore, in a complex transportation environment, detecting and counting moving vehicles can be used to reasonably regulate the traffic flow on a certain section of road or to quickly dispatch traffic police to deal with traffic problems. In many crowded public places [2], such as bus stations, railway stations, and airports, a terrorist incident would have unimaginable consequences. To prevent dangerous events, such as vehicle intrusion, sudden acceleration, and overspeed, it is necessary to detect and track vehicles with abnormal behavior in these public areas and transfer the data to a cloud platform for storage and further analysis via the IoT [3–5].

In general, there are mainly four kinds of methods to detect moving vehicles, i.e., interframe difference method, background subtraction method, loop detectors, and optical flow method [6].

For the interframe difference method, Ma et al. used the interframe difference method for vehicle detection [7]. The key to this method is to select an appropriate threshold for binarizing the difference image. If the threshold is not chosen properly, the detection results suffer: if it is too large, the moving vehicle cannot be detected, and if it is too small, noise in the image cannot be effectively suppressed. This sensitivity to noise is one of the disadvantages of the interframe difference method. In addition, in the process of moving vehicle detection, the interframe difference method also causes a “cavity” phenomenon. The so-called “cavity” arises when the speed of the moving vehicle is too slow or the detected vehicle is stationary.

Although the computational complexity of the interframe difference method is low and it is insensitive to illumination changes, it responds only to the moving parts of targets. Therefore, it is often combined with other methods, such as the background subtraction method, to achieve the desired detection effect. The background subtraction method takes a still image as a reference image and then performs a difference operation between the reference image and each subsequently captured frame to obtain a difference image, thereby detecting the moving vehicle. Gao et al. used the background subtraction method to detect moving vehicles [8]. Background modeling and updating are the keys to the background subtraction method [9, 10]. The background subtraction method is simple to implement, and the threshold can be set according to actual conditions. In addition, the detection results of the method can well reflect various properties of the moving target, such as orientation and size. However, this method is sensitive to the external environment and is susceptible to illumination, wind, background motion, and camera shake, which result in inaccurate detection. In addition, the threshold in the difference operation requires manual setting, and there is no uniform method for setting it in every specific scene. This is also one of the disadvantages of the background subtraction method.

For loop detectors [11], the detection of true loop closure in visual simultaneous localization and mapping (vSLAM) can help in relocalization and map registration algorithms, which improves the accuracy of the map and obtains more accurate and consistent results. However, loop closure detection [12–14] may be affected by many factors, including illumination conditions, seasons, different viewpoints, and mobile objects.

The so-called optical flow refers to the moving speed of pixels of moving objects in a grayscale image. Assume that in the image sequence, every pixel of the previous frame moves to a corresponding position in the current frame with a certain velocity vector; that is, every pixel of every frame has a velocity vector. Ideally, the background is constant, so the optical flow of its pixels is zero, and the regions with nonzero optical flow are the moving targets to be detected. The optical flow method has high detection accuracy and can accurately analyze moving targets [15, 16]. At the same time, the optical flow method can also detect moving targets in the presence of background motion. Compared with the three methods described above, the optical flow method can obtain more information about moving targets. However, despite its high detection accuracy, the optical flow method cannot obtain an accurate contour of a moving target because of the shadow it generates.

Although the research on moving vehicle detection has aroused great interest at home and abroad, existing methods still cannot fully solve the problem of accurately detecting moving vehicles. The major reason is that the real-world environment is complicated, with factors such as lighting change, roadside railings, and shaking trees, which severely affect the detection results. In addition, in an open environment, because of complex targets and backgrounds, most algorithms cannot meet the requirements of real-time performance and accuracy.

For vehicle tracking, many researchers have conducted in-depth research and proposed many target tracking algorithms, such as the traditional Kalman filter algorithm [17], deep learning [18], the Meanshift algorithm [19], clustering algorithms [20], the Camshift algorithm [21], aggregation algorithms [22], and the particle filter algorithm [23]. In recent years, as deep learning has made remarkable progress in the computer vision community, and especially since GPUs became widespread around 2015, deep learning algorithms have also been introduced into target tracking. Although many vehicle tracking algorithms have achieved good results, there are still various challenges concerning occlusion and changes of vehicle scale and illumination in complicated traffic environments.

Cloud computing [24, 25] is a type of distributed computing, which refers to decomposing huge data-processing programs into countless small programs through the network [26] (the “cloud”), then processing and analyzing these small programs through a system of multiple servers [27]. With this technology, we can complete the processing of tens of thousands of data items in a short time, thereby achieving powerful network services [28, 29]. Cloud computing is not only a kind of distributed computing but also provides the facilities of utility computing, load balancing, parallel computing, collaborative filtering [30], network storage, hot backup redundancy, and virtualization [31].

To address the above challenges, we propose a new moving vehicle detection and tracking method based on optical flow and immune particle filtering for complex transportation environments. First, the moving vehicle is roughly detected by the optical flow method, and then the shadow detection algorithm based on the HSV color space is used to detect the shadow position by threshold segmentation. Then, according to the detected shadow area, accurate moving vehicle detection is realized by removing the shadow area. Finally, immune particle filtering is used as the algorithm framework [32] to realize adaptive moving vehicle tracking [33]. Based on the proposed method, we can perform big data analysis [34–37] for vehicle monitoring and transportation management. The algorithm flowchart is shown in Figure 1.

The main contributions of this work are summarized as follows:
(i) This paper proposes a moving vehicle detection method based on the optical flow method with shadow removal, which improves the accuracy of moving vehicle detection.
(ii) A new vehicle tracking framework based on the immune particle filter is proposed, which improves the reliability and robustness of vehicle tracking by improving the particle resampling process.

This paper is organized as follows. Section 2 presents moving vehicle detection based on the optical flow method. Section 3 explains the proposed moving vehicle detection based on shadow removal. Section 4 discusses the improved immune particle filter for vehicle tracking. Finally, Sections 5 and 6 present the experimental results and the conclusion.

2. Moving Vehicle Detection Based on the Optical Flow Method

There are many ways to calculate the optical flow. According to the theory of the optical flow field and the mathematical calculation methods, Barron et al. divided them into four types: gradient-based methods, matching-based methods, energy-based methods, and phase-based methods [38]. Considering that the proposed algorithm is based on the classical Horn–Schunck (HS) algorithm, the HS algorithm is introduced first. The flowchart of the HS algorithm is shown in Figure 2.

The HS method has four assumptions: (i) the grayscale value of the image is always constant; (ii) the change of the optical flow satisfies certain constraints; (iii) the motion of pixels in the image is slow or unchanged, that is, the optical flow varies smoothly across the image; and (iv) the overlap of objects in the image is ignored.

Based on the assumptions above, the optical flow constraint equation can be obtained as follows:

I_x u + I_y v + I_t = 0, (1)

where u = dx/dt, v = dy/dt, I_x = ∂I/∂x, I_y = ∂I/∂y, and I_t = ∂I/∂t.

Next, introduce the velocity smoothness condition, as shown in the following equation:

min ∬ (u_x^2 + u_y^2 + v_x^2 + v_y^2) dx dy, (2)

where u_x, u_y, v_x, and v_y denote the partial derivatives of the velocity components u and v with respect to x and y.

Based on the optical flow constraint equation (1) and the velocity smoothing condition (2), the minimization equation is established:

min E = ∬ [(I_x u + I_y v + I_t)^2 + α^2 (u_x^2 + u_y^2 + v_x^2 + v_y^2)] dx dy, (3)

where α represents the smoothing weight coefficient [39], which controls the proportion of the velocity smoothness term.

Then, using the variational calculation [40], the Laplacian operator can be approximated by the difference between the velocity of a pixel and the average velocity of its surrounding pixels, i.e., ∇^2 u ≈ ū − u and ∇^2 v ≈ v̄ − v. Thus, we can use the iterative method to solve the optical flow equation (1):

u^(n+1) = ū^(n) − I_x (I_x ū^(n) + I_y v̄^(n) + I_t)/(α^2 + I_x^2 + I_y^2),
v^(n+1) = v̄^(n) − I_y (I_x ū^(n) + I_y v̄^(n) + I_t)/(α^2 + I_x^2 + I_y^2), (4)

where n represents the number of iterations.

But before solving equation (4), we need to determine two quantities, the neighborhood averages ū and v̄, which can be calculated by the nine-point difference algorithm [41]. Then, calculate the grayscale gradient values I_x, I_y, and I_t by finite differences over adjacent pixels of the current-frame and previous-frame images.

After determining all the parameters, we can solve the velocity field based on the current-frame and previous-frame grayscale images.
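As an illustrative sketch (in Python rather than the MATLAB used for the experiments), the HS iteration above can be written as follows. For brevity, the nine-point difference scheme is simplified to forward differences for the gradients and a four-neighbor average for ū and v̄, so this is an approximation of the scheme described in the paper:

```python
def hs_optical_flow(prev, curr, alpha=1.0, n_iters=200):
    """Simplified Horn-Schunck sketch on small 2-D lists (row-major)."""
    h, w = len(prev), len(prev[0])
    u = [[0.0] * w for _ in range(h)]
    v = [[0.0] * w for _ in range(h)]
    # Gradients by forward differences (the paper uses a nine-point
    # difference scheme; this is a simplification).
    Ix = [[0.0] * w for _ in range(h)]
    Iy = [[0.0] * w for _ in range(h)]
    It = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            Ix[i][j] = prev[i][min(j + 1, w - 1)] - prev[i][j]
            Iy[i][j] = prev[min(i + 1, h - 1)][j] - prev[i][j]
            It[i][j] = curr[i][j] - prev[i][j]

    def avg(f, i, j):  # 4-neighbor mean approximating the local average
        s, n = 0.0, 0
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ii, jj = i + di, j + dj
            if 0 <= ii < h and 0 <= jj < w:
                s += f[ii][jj]
                n += 1
        return s / n

    for _ in range(n_iters):
        for i in range(h):
            for j in range(w):
                ub, vb = avg(u, i, j), avg(v, i, j)
                t = (Ix[i][j] * ub + Iy[i][j] * vb + It[i][j]) / \
                    (alpha ** 2 + Ix[i][j] ** 2 + Iy[i][j] ** 2)
                u[i][j] = ub - Ix[i][j] * t
                v[i][j] = vb - Iy[i][j] * t
    return u, v
```

On a horizontal ramp image shifted right by one pixel, the estimated u converges toward 1 in the interior while v remains 0, matching the expected flow.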

3. Moving Vehicle Detection Based on Shadow Removal

Changes outside the vehicle in the complex transportation environment can affect the detection of moving vehicles; for example, wind-blown leaves cause changes in the background. Although the optical flow method is suitable for detecting all moving objects in an image sequence, it is sensitive to noise and light sources, such as the shadows around the vehicle, which adversely affect accurate detection of moving vehicles. Therefore, how to effectively remove the shadow is an important factor for optical flow detection. A large number of experiments have shown that shadows can be well detected in the HSV color space [42]. Therefore, this paper combines shadow removal with the optical flow method to eliminate the interference of shadows and make the moving vehicle detection more accurate. The detailed steps are as follows:
(i) Read the video for moving vehicle detection, determine the shadow areas after converting each frame of the video into the HSV color space, and then convert the obtained image into a grayscale image.
(ii) Calculate the optical flow field vector and add it to the video frame.
(iii) Calculate the average amplitude of the optical flow vectors to obtain the speed threshold; then extract the moving object according to the speed threshold; finally, remove the noise with a median filter.
(iv) Remove the shadow areas on the moving vehicle according to the areas detected by the shadow detection algorithm.
(v) Remove the road area by the morphological erosion algorithm and then fill the “cavity” areas of the vehicle by the morphological close operation. Repeat for the next frame until all frames are processed.

The flowchart of the improved algorithm is shown in Figure 3.

3.1. Shadow Detection Based on HSV Color Space

In order to obtain accurate moving vehicle detection, shadows need to be removed. Commonly, there are two kinds of shadow detection methods: one based on shadow features and the other based on geometric models. This paper uses the color features of shadows in the HSV color space to detect shadows.

The pipeline of shadow detection in the HSV color space is as follows: first, input a frame and convert the RGB image into a gray image by color space transformation; second, use the Otsu threshold detection method to obtain the threshold of the image and binarize it; third, remove noise by filtering; fourth, detect the shadow area. The flowchart is shown in Figure 4.

3.1.1. HSV Color Model

The HSV color model is similar to an inverted hexagonal pyramid model [43], where V, H, and S represent the brightness (value), hue, and saturation, respectively. The transformation from RGB to HSV is shown in equations (5)–(7):

V = max(R, G, B), (5)

S = (V − min(R, G, B))/V if V ≠ 0, and S = 0 otherwise, (6)

H = 60 × (G − B)/(V − min(R, G, B)) if V = R,
H = 60 × [(B − R)/(V − min(R, G, B)) + 2] if V = G,
H = 60 × [(R − G)/(V − min(R, G, B)) + 4] if V = B, (7)

with 360 added to H if it is negative.
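As a sketch, the hexcone transformation referenced by equations (5)–(7) can be transcribed directly (RGB components in [0, 1], hue in degrees):

```python
def rgb_to_hsv(r, g, b):
    """Standard hexcone RGB -> (H in degrees, S, V) transformation."""
    v = max(r, g, b)
    c = v - min(r, g, b)          # chroma
    s = 0.0 if v == 0 else c / v  # saturation undefined at black
    if c == 0:
        h = 0.0                   # achromatic: hue is conventionally 0
    elif v == r:
        h = 60.0 * (((g - b) / c) % 6)
    elif v == g:
        h = 60.0 * ((b - r) / c + 2)
    else:
        h = 60.0 * ((r - g) / c + 4)
    return h, s, v
```

For example, pure red maps to (0, 1, 1), pure green to hue 120, and pure blue to hue 240.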

3.1.2. Shadow Detection

The HSV color space is closer to human vision and can accurately reflect the information of the target. The shadow of a moving vehicle is detected by comparing the HSV components of each pixel with those of the background. In the shadow, the V component becomes smaller and changes considerably relative to the background area, so it is an important parameter for discriminating the shadow. The S component of the shadow has a lower value, and its difference from the background is negative. The H component usually changes little [39]. Based on these features, the shadow can be detected. The specific algorithm is as follows:

SP(x, y) = 1 if α ≤ I_V(x, y)/B_V(x, y) ≤ β and I_S(x, y) − B_S(x, y) ≤ τ_S and |I_H(x, y) − B_H(x, y)| ≤ τ_H, and SP(x, y) = 0 otherwise, (8)

where I and B denote the current frame and the background, respectively; α and β represent the threshold values of luminance; τ_S and τ_H represent the threshold values of saturation and hue, respectively; and 0 < α < β ≤ 1. τ_S, τ_H, α, and β are determined by many experiments, and in the paper their values are 15, 23, 0.15, and 0.47, respectively.
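The per-pixel decision rule can be sketched as follows. Note that the parameter names and the assignment of the experimentally determined values to the four thresholds are assumptions made here for illustration (luminance ratio band, saturation difference, and hue difference, following the common HSV shadow test):

```python
def is_shadow_pixel(Iv, Bv, Is, Bs, Ih, Bh,
                    alpha=0.15, beta=0.47, tau_s=15, tau_h=23):
    """Classify a pixel as shadow by comparing frame (I) and background (B)
    HSV components. Threshold mapping is an illustrative assumption."""
    return (alpha <= Iv / Bv <= beta and   # darker, but within a band
            (Is - Bs) <= tau_s and         # saturation not increased much
            abs(Ih - Bh) <= tau_h)         # hue nearly unchanged
```

For instance, a pixel at 30% of the background brightness with little hue change is classified as shadow, while a pixel near full background brightness is not.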

3.1.3. Automatic Threshold Determination Based on Otsu

The Otsu algorithm [43] is used in the HSV color space for shadow detection [44]; it is a self-adaptive threshold determination method. The larger the variance between the foreground and the background, the greater the difference between them. Assuming T is the candidate threshold and the gray level of the image is L, traverse T from 0 to L − 1; the T that maximizes the between-class variance is taken as the required threshold, where the variance is calculated as follows:

σ^2(T) = w_0 (μ_0 − μ)^2 + w_1 (μ_1 − μ)^2, (9)

where μ = w_0 μ_0 + w_1 μ_1, w_0 is the proportion of the moving target pixels to the total image pixels, μ_0 is the average grayscale value of the moving target pixels, w_1 is the proportion of the background pixels to the total image pixels, μ_1 is the average grayscale value of the background pixels, and μ is the total average grayscale value of the image.
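The exhaustive search over T that maximizes the between-class variance can be implemented as follows (using the equivalent form w_0 w_1 (μ_0 − μ_1)^2, which attains its maximum at the same threshold):

```python
def otsu_threshold(gray_values, levels=256):
    """Return the threshold maximizing the between-class variance."""
    n = len(gray_values)
    hist = [0] * levels
    for g in gray_values:
        hist[g] += 1
    total_sum = sum(i * hist[i] for i in range(levels))
    best_t, best_var = 0, -1.0
    w0, sum0 = 0, 0.0
    for t in range(levels):
        w0 += hist[t]          # pixels at or below t
        sum0 += t * hist[t]
        w1 = n - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0 = sum0 / w0
        mu1 = (total_sum - sum0) / w1
        var = (w0 / n) * (w1 / n) * (mu0 - mu1) ** 2
        if var > best_var:     # keep first t attaining the maximum
            best_var, best_t = var, t
    return best_t
```

On a perfectly bimodal input with values 10 and 200, any threshold between the two modes is optimal, and the search returns the first one.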

3.2. Shadow Removal

Remove the shadow of the moving vehicle based on the detected shadow areas to obtain accurate vehicle detection. Then, use a morphology operation [45] to remove the road area. Finally, obtain the complete and accurate detection area of the moving vehicle by filling the “cavities” via the close operation [46].
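A minimal sketch of the binary morphology used here (erosion, dilation, and the close operation), assuming for simplicity a cross-shaped structuring element clamped at the image border:

```python
def _apply(img, op):
    # Apply op (min or max) over the 4-neighborhood plus the center pixel.
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [img[i][j]]
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < h and 0 <= jj < w:
                    vals.append(img[ii][jj])
            out[i][j] = op(vals)
    return out

def dilate(img):
    return _apply(img, max)

def erode(img):
    return _apply(img, min)

def close_op(img):
    # Morphological closing: dilation followed by erosion fills small cavities.
    return erode(dilate(img))
```

Closing fills a one-pixel “cavity” inside a solid region, while erosion alone removes an isolated foreground pixel.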

4. Improved Immune Particle Filter for Vehicle Tracking

This study tracks vehicle targets based on the particle filter framework under the complex transportation environments. To reduce the occurrence of particle degradation and maintain the diversity of particles as much as possible, inspired by the literature [32], a vehicle tracking framework based on immune particle filter is proposed, which improves the reliability and robustness of vehicle tracking by improving the particle resampling process.

During vehicle tracking, the target is a rigid object. Therefore, the algorithm uses a rectangular box with a constant width-to-height ratio to indicate the target state, i.e., x = (c_x, c_y, w, h, s), where c_x, c_y, w, h, and s represent the center point abscissa, center point ordinate, rectangle width, rectangle height, and scale factor of the rectangular box, respectively.

4.1. Particle Filter

Particle filtering is a computational method for discretely approximating the Bayesian posterior probability density [33]. It estimates the current state of a target from previously determined states: the current posterior probability of the target is calculated based on the observed data and the posterior probability of the previous moment. Particle filter tracking first spreads the particles by resampling from their distribution, then redetects the target, and finally updates the state estimate by weight normalization [33]. The advantages of particle filter tracking are that it runs fast and can handle partial occlusion. Therefore, we use the particle filter to track the vehicle in this paper.

According to the Monte Carlo method [47], the posterior expectation can be approximated by a weighted sum of samples, and this expectation is used as the state estimate. During the calculation, the weights of the particles are generally obtained from an importance density that approximates the posterior probability density. According to the Bayes formula, the posterior probability density p(x_t | z_{1:t}) is obtained recursively; by sampling particles x_t^i from the importance density q(x_t | x_{t−1}^i, z_t) and weighting them by w_t^i ∝ w_{t−1}^i p(z_t | x_t^i) p(x_t^i | x_{t−1}^i)/q(x_t^i | x_{t−1}^i, z_t), the particle sample set can be obtained.
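The weight-proportional selection used in resampling can be sketched with systematic resampling, a common low-variance choice (the paper does not specify the exact resampling variant, so this is an assumption):

```python
import random

def resample(particles, weights):
    """Systematic resampling: draw N new particles with probability
    proportional to their (normalized) weights, in O(N)."""
    n = len(particles)
    u0 = random.random() / n            # single random offset
    cumulative, acc = [], 0.0
    for w in weights:
        acc += w
        cumulative.append(acc)
    out, i = [], 0
    for k in range(n):
        target = u0 + k / n             # evenly spaced pointers
        while i < n - 1 and cumulative[i] < target:
            i += 1
        out.append(particles[i])
    return out
```

A particle carrying all the weight is duplicated N times, while zero-weight particles are discarded.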

4.2. Color Distribution Model

The observation model is based on the color feature during the particle filtering process. The dimension of the HSV color histogram for statistical calculation is set as m = m_H × m_S × m_V [48], where m_H, m_S, and m_V represent the dimensions of the H component, S component, and V component, respectively. Compared with methods based on other color spaces, the method based on the HSV color histogram can reduce the impact of lighting change on tracking accuracy to some extent. On the other hand, because occlusion occurs frequently during vehicle tracking, the HSV color histogram is calculated in a blockwise way in the algorithm, which, together with the likelihood calculation method, can effectively reduce the probability of target loss in this case. The blockwise color histogram can be defined as

q_{ij}^(u) = n_{ij}^(u)/n_B, u = 1, 2, …, m, (10)

where u indexes the m bins of the HSV histogram of every block; the target rectangular box R is equally divided into k × k identical blocks; n_{ij}^(u) represents the number of pixels in the block of the ith row and jth column that fall within the color interval corresponding to bin u; and n_B represents the total number of pixels in every block. Therefore, the proposed blockwise histogram can be regarded as a series of normalized subhistograms.
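Assuming the per-pixel HSV bin indices have already been computed, the blockwise normalized histograms can be sketched as:

```python
def blockwise_histograms(labels, k, m):
    """labels: 2-D list of per-pixel bin indices in [0, m); the box is
    split into k x k equal blocks, each with its own normalized
    m-bin histogram (box size assumed divisible by k)."""
    h, w = len(labels), len(labels[0])
    bh, bw = h // k, w // k
    hists = []
    for bi in range(k):
        row = []
        for bj in range(k):
            counts = [0] * m
            for i in range(bi * bh, (bi + 1) * bh):
                for j in range(bj * bw, (bj + 1) * bw):
                    counts[labels[i][j]] += 1
            n_block = bh * bw
            row.append([c / n_block for c in counts])  # normalize per block
        hists.append(row)
    return hists
```

Each of the k × k subhistograms sums to 1, so occluded blocks can later be handled independently in the likelihood calculation.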

4.3. Artificial Immune Algorithm

Although the classical immune resampling algorithm optimizes the overall particle filter framework by maintaining particle diversity to a certain extent, it does not consider scene changes and the interference of similar objects in practical applications. Our algorithm uses an anti-interference affinity calculation and an improved antibody mutation function to give the particle filter algorithm a certain adaptivity and robustness to scene interference during vehicle tracking.

4.3.1. Affinity Calculation

Affinity is used to measure the probability that the region determined by a particle at time t coincides with the target, defined as

aff(x_t^i) = exp(−D^2(x_t^i)/(2σ^2)), (11)

where σ^2 represents the variance of the Gaussian distribution and D represents the distance function. To reduce the tracking error caused by partial occlusion, the study further improves the distance function D, which is updated as follows:

D(x_t^i) = (k^2 D_0(x_t^i) − n_o d̄_o)/(k^2 − n_o) if n_o exceeds the given threshold, and D(x_t^i) = D_0(x_t^i) otherwise, (12)

where n_o represents the number of blocks in which the target is occluded; D_0 is the distance averaged over all k^2 blocks; and d̄_o is the mean value of the distances between the occluded blocks and the corresponding target templates, d̄_o = (1/n_o) Σ d_k, in which d_k is the Bhattacharyya distance between the histogram of every occluded block and the corresponding target template [49].
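The Bhattacharyya distance between normalized histograms and the Gaussian-shaped affinity (one common form consistent with equation (11); the value of σ here is illustrative) can be computed as:

```python
import math

def bhattacharyya_distance(p, q):
    """Distance between two normalized histograms p and q."""
    bc = sum(math.sqrt(a * b) for a, b in zip(p, q))  # Bhattacharyya coeff.
    return math.sqrt(max(0.0, 1.0 - bc))

def affinity(distance, sigma=0.2):
    # Gaussian-shaped affinity: 1 for a perfect match, decaying with distance.
    return math.exp(-distance ** 2 / (2 * sigma ** 2))
```

Identical histograms give distance 0 and affinity 1; disjoint histograms give distance 1 and a near-zero affinity.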

4.3.2. Antibody Mutation Function Improvement

During vehicle tracking, there is no need to perform the antibody mutation operation for high-weight particles; the mutation operation is performed only for low-weight particles, using noise drawn from the uniform distribution U(−u, u). Here, we adopt a new antibody mutation function m(·), which is defined as follows:

η = (aff_T − aff(x_t^i))/aff_T, (13)

m(x_t^i) = x_t^i if w_t^i ≥ w_T, and m(x_t^i) = x_t^i + η · U(−u, u) if w_t^i < w_T, (14)

where aff(·) represents the affinity function, aff_T represents the critical affinity meeting the weight threshold condition, η represents the dynamic adjustment factor, x_t^i represents the particle state, w_t^i indicates the particle weight, and w_T is a set particle weight threshold whose value in the paper is determined by many experiments.
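A sketch of the mutation step; the exact form of the dynamic adjustment factor η and all parameter values here are assumptions for illustration (η is assumed to grow as affinity falls, so weaker antibodies mutate more strongly):

```python
import random

def mutation_factor(aff, aff_t):
    # Assumed form eta = (aff_t - aff)/aff_t: larger for lower affinity.
    return (aff_t - aff) / aff_t

def mutate(state, weight, aff, aff_t=0.8, w_threshold=0.02, u=5.0):
    """Apply U(-u, u) mutation scaled by eta only to low-weight particles;
    high-weight particles pass through unchanged."""
    if weight >= w_threshold:
        return list(state)
    eta = mutation_factor(aff, aff_t)
    return [s + eta * random.uniform(-u, u) for s in state]
```

A high-weight particle is returned unchanged, while a low-weight particle is perturbed within ±η·u of each state component.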

4.4. Algorithm Process

(i) Initialization: calculate the color histogram of the target template by equation (10). Taking the center of the target as the origin, select N particles by random sampling from a Gaussian distribution as antibodies and assign an affinity value of 1/N to each antibody.
(ii) Selection: N samples are randomly selected from the N input samples according to the weights of the samples. Particles with high weights are selected with high probability, and particles with small weights are selected with small probability.
(iii) Cloning: the affinity of each antibody is calculated by equation (11). The aim of cloning is to promote antibodies with high affinity and inhibit antibodies with low affinity. Therefore, the proposed algorithm can quickly converge to the global optimal solution through the cloning operation.
(iv) Variation: the mutation calculation is performed for all antibodies according to equation (14), where η is calculated by equation (13).
(v) Maturity: calculate the affinity of all antibodies after the mutation operation. After ranking by affinity, reselect the top N antibodies as the new antibody memory set for further optimization.
(vi) Estimation: calculate the weighted sum of the N antibodies in the memory set to obtain the updated target state. Then, judge whether the termination condition is satisfied; if so, the algorithm ends; otherwise, go to step (ii).

5. Experiment

Experiments are implemented on a computer with an Intel(R) Core(TM) i5-2410M CPU at 2.30 GHz and 4.00 GB memory. The image size is 320 × 240 pixels, and the software is implemented in MATLAB 2013. The experimental results are as follows.

5.1. Shadow Detection Results

The video of moving vehicles is captured by a camera fixed on an overpass. The video is in AVI format with a frame rate of 15 frames/s. The number of moving vehicles is sufficient for the experiment. The experimental results of shadow detection are shown in Figure 5, where the left column is the original images and the right column is the detection results of the shadow area.

In the right column, the white area indicates the shadow area detected by the shadow detection algorithm based on the HSV color space. In order to obtain the complete shadow area, the morphological close operation [46] is performed after image binarization based on threshold segmentation.

5.2. Comparison of the Proposed Method and Traditional Optical Flow Method

In order to verify the effect of the proposed method, we compare the proposed method with the traditional optical flow method. Figure 6 shows the detection results based on traditional optical flow method. Figure 6(a) shows the original image, Figure 6(b) shows the grayscale image after gray processing, Figure 6(c) shows the optical flow vector after optical flow calculation, Figure 6(d) gives the image through morphological filtering, and Figure 6(e) shows the detection result of moving vehicle.

Figures 7 and 8 show the detection results based on the proposed method, where Figure 7 shows the results of frame #40 and Figure 8 shows the results of frame #100. Figures 7(a) and 8(a) show the original images, Figures 7(b) and 8(b) show the grayscale images after gray processing, Figures 7(c) and 8(c) show the optical flow vectors after optical flow calculation, Figures 7(d) and 8(d) show the image through morphological filtering, and Figures 7(e) and 8(e) show the detection results of moving vehicle.

As shown in Figure 6, the shadow area is included into the detection box of moving vehicle detection based on the traditional optical flow method. However, from Figures 7 and 8, the shadow area is removed from the detection box of moving vehicle detection based on the proposed method. Therefore, the proposed method can more accurately detect moving vehicle than the traditional optical flow method.

We conduct an extensive experiment on a video with a total of 30 vehicles using the proposed method and the traditional optical flow method. The proposed method with shadow removal accurately detects the moving vehicles and gives the correct number of moving vehicles. However, the traditional method cannot correctly count the moving vehicles in the video. The reason is that a black car in the left lane of frame #70 is not correctly detected, as shown in Figure 9: the car is removed as a shadow because the illumination is poor and the color of the car itself is black. Therefore, the proposed method is more robust to shadows and shadow-like objects than the traditional optical flow method.

Table 1 shows the performance comparison of the proposed method, background subtraction method, interframe difference method, and traditional optical flow method in terms of average computational time and accuracy. As shown in Table 1, although the background subtraction method, interframe difference method, and traditional optical flow method have fast detection speed, they have lower accuracy than the proposed method.

5.3. Tracking Results

Based on the proposed tracking algorithm, we perform the vehicle tracking experiments. Experimental results are shown in Figures 10–12, which show the tracking results in the 2nd, 10th, and 20th frames of the captured video, respectively. The red points in Figures 10–12 indicate the detection result based on the optical flow method, and the blue points in Figures 11 and 12 indicate the trajectory of the tracked moving vehicle.

The videos of the tracking experiments include the normal traveling condition of vehicles on the highway and abnormal conditions such as illumination change, similar neighboring objects, occlusion, and scale changes. To verify the effectiveness of the proposed tracking algorithm, experiments were performed under the same conditions, and the tracking results were compared with two typical target tracking algorithms, i.e., Kalman filter algorithm and Camshift algorithm.

In terms of parameter setting, the number of particles of the particle filter is set to 100, and other corresponding parameters, such as system noise and observation noise, are kept as consistent as possible across the three algorithms. Since the tracking results of the test videos have been manually labeled, we refer to them as the ground truth and define two evaluation criteria to compare the tracking results, namely, the centroid error and the target domain coverage accuracy. The centroid error and target domain coverage accuracy of a single-frame image are expressed by Err and Acc, as shown in equations (15) and (16), respectively:

Err = sqrt((x − x_G)^2 + (y − y_G)^2), (15)

Acc = S(R ∩ R_G)/S(R ∪ R_G), (16)

where x and y indicate the horizontal and vertical coordinates of the center of the tracked target vehicle, respectively; x_G and y_G indicate the horizontal and vertical coordinates of the ground truth; and S(R ∩ R_G) expresses the overlap area between the area R determined by the proposed algorithm and the area R_G determined by the ground truth.
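The two criteria can be computed as follows for axis-aligned boxes given as (x, y, w, h); the intersection-over-union form of Acc is one common reading of the overlap criterion in equation (16):

```python
def centroid_error(x, y, xg, yg):
    # Err: Euclidean distance between tracked and ground-truth centers.
    return ((x - xg) ** 2 + (y - yg) ** 2) ** 0.5

def coverage_accuracy(r, rg):
    """Acc: overlap area of boxes (x, y, w, h) over the union area."""
    ox = max(0, min(r[0] + r[2], rg[0] + rg[2]) - max(r[0], rg[0]))
    oy = max(0, min(r[1] + r[3], rg[1] + rg[3]) - max(r[1], rg[1]))
    overlap = ox * oy
    union = r[2] * r[3] + rg[2] * rg[3] - overlap
    return overlap / union
```

Identical boxes yield Acc = 1, and disjoint boxes yield Acc = 0.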

Based on the two evaluation criteria, we select the tracking results of 200 frames of continuous vehicle images to evaluate the performance of the proposed method. First, the tracking experiments are performed on a sunny day with good visibility. The average centroid error of the proposed method is 2.1, and the average target domain coverage accuracy is 93.6%. In contrast, the average centroid errors of the Camshift algorithm and the Kalman filter algorithm are 4.2 and 3.8, respectively, and their average target domain coverage accuracies are 69.1% and 71.3%, respectively. The experimental results show that in simple scenarios, the proposed algorithm can well adapt to vehicle changes and has higher tracking accuracy than the other two algorithms.

In addition to the experiments in good traffic scenarios, we also perform vehicle tracking experiments in complex transportation environments with poor visibility and target-like interference around the target vehicle on a rainy day. The average centroid error of the proposed method is 3.5, and the average target domain coverage accuracy is 75.3%. In contrast, the average centroid errors of the Camshift algorithm and the Kalman filter algorithm are 7.4 and 5.9, respectively, and their average target domain coverage accuracies are 60.8% and 67.1%, respectively. Experimental results are shown in Figures 13 and 14. At the 30th frame, an obvious turn of the tracked vehicle leads to tracking failure for the Camshift algorithm. Furthermore, at the 76th frame, the Kalman filter algorithm fails to accurately track the object vehicle because of a sudden acceleration of the tracked vehicle along with poor illumination. Besides, at the 3rd frame of the video, because the object vehicle is small and a black sedan similar to the object vehicle in color appears on its right side, the Camshift algorithm fails to track the object vehicle. However, thanks to the anti-interference affinity calculation and the improved antibody mutation function, the proposed method gives the particle filter algorithm a certain adaptivity and robustness to scene interference. Therefore, despite poor illumination, sudden acceleration of the tracked vehicle, and similar-color interference, the proposed method can still reliably track the object vehicle.

6. Conclusion

This study proposed a new moving vehicle detection and tracking method based on optical flow and an immune particle filter for complex transportation environments. The proposed method first uses the optical flow method to roughly detect the moving vehicle, then uses the shadow detection algorithm based on the HSV color space to accurately detect the moving vehicle, and finally robustly tracks the moving vehicle with the proposed immune particle filter algorithm. The experiments in complex traffic scenes with shadow interference demonstrate that the proposed method can well suppress the impact of shadow interference on moving vehicle detection and realize accurate detection as well as robust tracking of a moving vehicle. By adopting shadow removal and the improved immune particle algorithm, the proposed method achieves higher accuracy of vehicle detection and tracking than the existing Camshift and Kalman filter algorithms. However, the proposed method is limited to daytime conditions with good or poor illumination; for nighttime conditions, further research will consider employing infrared images of moving vehicles.

In the future, the experimental results can be transferred to a cloud computing platform through a wireless sensor network and can be further analyzed and processed by policymakers to enhance the vehicle management level [25, 27, 51].

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This study was supported in part by the National Natural Science Foundation of China (nos. 61304205 and 61502240), Natural Science Foundation of Jiangsu Province (BK20191401), and Innovation and Entrepreneurship Training Project of College Students (201910300050Z and 201910300222).