Special Issue: Internet of Things in Multimedia Communication Systems
Research on Video Quality Diagnosis Technology Based on Artificial Intelligence and Internet of Things
The remote video diagnosis system described here is based on the Internet of Things and integrates advanced intelligent technology. To help promote a harmonious society, the construction of video surveillance systems is accelerating in China, and many enterprises and government agencies have invested heavily in them. The quality of the video image is an important index for evaluating a video surveillance system. However, as the number of cameras continues to grow and monitoring time continues to extend, it is not realistic to rely solely on human eyes to diagnose video quality across so many cameras. In addition, because human vision is subjective, diagnosis by eye introduces some deviation. These factors bring new challenges to system maintenance. Therefore, drawing on artificial intelligence technology and digital image processing technology, and exploiting the computer’s efficient mathematical operation ability, an intelligent diagnosis system for monitoring video quality has emerged. Based on artificial intelligence, this paper studies video quality diagnosis technology and establishes a video quality diagnosis system for video definition detection and noise detection. Compared with the conventional algorithms, the improved video quality diagnosis algorithm shows a clear improvement and can complete video quality inspection work well: the accuracy of the improved definition evaluation function for the definition detection and noise detection of surveillance video is as high as 95.56%.
1. Introduction
Video monitoring systems are being constructed across China to promote the development of a harmonious society, and many enterprises and government agencies have invested heavily in them. The quality of the video image is an important index for evaluating a video monitoring system. However, with the increasing number of cameras and ever-longer monitoring time, it is not realistic to rely on human eyes to diagnose video quality, and the subjectivity of human vision introduces some deviation into such diagnosis. These factors all bring new challenges to system maintenance. Therefore, an intelligent diagnosis system for monitoring video quality, built on the computer’s efficient mathematical operation ability, relies on artificial intelligence technology and digital image processing technology. This paper focuses on video quality diagnosis technology based on artificial intelligence and establishes a video quality diagnosis system covering video definition detection, video color deviation detection, and video noise detection. The artificial intelligence algorithm adopted in this paper has a pronounced effect on video quality diagnosis: the improved video quality diagnosis algorithm improves significantly on the conventional algorithm and can better complete video quality inspection work.
A video monitoring system’s video image quality is the most critical performance index of the monitoring system [1, 2]. Good-quality monitoring video brings great convenience to users and improves their work efficiency. However, when monitoring equipment fails and is not maintained in time, the video image quality is seriously degraded, which is a devastating problem for the monitoring system. Timely detection and maintenance of monitoring equipment are therefore essential. However, with the increasing number of surveillance cameras and ever-longer monitoring time, manual camera inspection involves a huge workload, low efficiency, and high labor cost, so traditional video image quality diagnosis that relies on human eyes is no longer realistic. Therefore, intelligent video quality diagnosis algorithms have emerged, built on computer vision technology and artificial intelligence technology and backed by the computer’s powerful processing ability. The intelligent video quality diagnosis algorithm is a branch of artificial intelligence research. It establishes relationships between images through computer vision, extracts valuable information from video images, and thus analyzes and processes them. Using a video quality diagnosis algorithm, we can analyze thousands of video devices simultaneously and screen out the places of interest. It can therefore significantly improve the efficiency of maintenance personnel, greatly reduce labor costs, and improve the ability to handle emergencies.
Intelligent video quality diagnosis technology originated in the 1990s as an essential branch of intelligent video analysis technology. An intelligent video quality diagnosis algorithm is an integral part of a video monitoring system and an essential tool for its maintenance personnel. The popularization of modern video monitoring networks, the construction of safe cities at home and abroad, the rapid expansion of cities, frequent accidents in some security fields, and military use all promote the accelerated development of video monitoring. They also make people realize the importance of the video quality diagnosis function to the video monitoring industry.
The video quality diagnosis system is an intelligent video fault analysis and early warning system. It is mainly used in the control center of a large-scale monitoring system. By controlling the video switching output of the monitoring center’s matrix host, or by connecting to the digital video streaming media management server, it obtains the video signal of all the front-end cameras. Standard camera failures such as snowflake noise, rolling screen, blur, color deviation, picture freezing, and gain imbalance, as well as malicious blocking or damaging of the monitoring equipment, are judged accurately, and alarm information is sent out. The video quality diagnosis system uses video image analysis to detect the monitoring system’s common video faults. Given the current common faults, detection is mainly carried out in the following aspects: camera interference detection, video definition detection, video interference detection, video signal missing detection, video brightness anomaly detection, and video color deviation detection.
Camera interference detection [4, 5]: automatically detects when the camera lens is moved or blocked, whether deliberately or because of some unexpected event, so that the camera deviates from the monitoring area. This function can diagnose whether the camera position has changed or whether the camera lens is blocked.
Video definition detection: automatically detects blurring of the central part of the field of vision caused by improper focus or lens damage; this function evaluates the clarity and information content of the real-time video picture.
Video interference detection: automatically detects noise phenomena such as video image distortion, snowflakes, jitter, or rolling screen; the main monitoring objects are dot, spike, and strip interference on the video screen caused by line aging, transmission failure, poor contact, or electromagnetic interference.
Video brightness detection: automatically detects overly dark, overly bright, or black-screen pictures caused by camera failure, gain control disorder, or abnormal lighting conditions; this function diagnoses the brightness and darkness of the video. Because the diagnostic plan and monitoring threshold can be changed for different periods, brightness anomaly detection works both day and night.
Video color deviation detection: automatically detects screen color deviation caused by poor line contact, external interference, or camera failure, mainly including whole-screen deviation toward a single color or mixed colors. The function analyzes the color information of the video; its distinguishing feature is that, even when the video is rich in color, it can tell natural scenes apart from camera faults, which makes camera color detection practical.
Video freeze detection [10, 11]: automatically detects freezing of video pictures caused by failure of the video transmission and dispatching system, which helps avoid missing real-time video images.
Video signal missing detection: automatically detects intermittent or continuous video loss caused by abnormal operation, damage, malicious human action, or video transmission link failure of the front-end platform and camera.
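The fault categories above can be organized as a dispatch table that runs each enabled detector over a frame. Below is a minimal Python sketch; the function and category names are illustrative assumptions, since the paper does not prescribe an interface:

```python
import numpy as np
from typing import Callable, Dict, List

def placeholder_detector(frame: np.ndarray) -> bool:
    """Stands in for a real detector (definition, noise, brightness, ...)."""
    return False

# Map each diagnosis item to its detector; names are illustrative only.
DETECTORS: Dict[str, Callable[[np.ndarray], bool]] = {
    "camera_interference": placeholder_detector,
    "definition": placeholder_detector,
    "video_interference": placeholder_detector,
    "brightness_anomaly": placeholder_detector,
    "color_deviation": placeholder_detector,
    "signal_missing": placeholder_detector,
}

def diagnose_frame(frame: np.ndarray) -> List[str]:
    """Return the names of all diagnosis items that flag this frame."""
    return [name for name, detect in DETECTORS.items() if detect(frame)]
```

In a deployed system each entry would point at a concrete algorithm such as those described in the following sections, and any triggered names would be turned into alarm messages.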
This paper focuses on video definition detection and video noise detection within video quality detection.
2. Video Definition Detection
When a video plays at more than 24 frames/s, by the principle of visual persistence the human eye cannot distinguish each individual static picture, so the video appears as a smooth, continuous visual effect. Objects in a video are usually divided into static and moving categories. Objects that remain static across consecutive frames can be regarded as the static background, while objects whose positions change across consecutive frames can be regarded as the moving foreground. Therefore, every frame of a real-time video image can be divided into two areas: the static background and the moving foreground. Detecting the sharpness of real-time video images is challenging because unpredictable changes in the moving foreground cause random changes in pixel gradient values. This paper’s algorithm therefore uses the static background area of the real-time video image to detect the clarity of the video sequence; that is, it consists of background extraction followed by definition detection.
This algorithm is roughly divided into three steps:
(1) Intercept a real-time video image to obtain the initial background image. The background image is calculated as

B(x, y) = \frac{1}{N} \sum_{i=1}^{N} f_i(x, y)    (1)

where N is the number of frames captured from the video sequence and f_i(x, y) is the gray value of image pixel (x, y) in the i-th frame.
(2) The current real-time video image is used to update the initial background, yielding the background image to be tested.
(3) Calculate the sum of the edge gradient values of the background image with the Sobel operator, judge the background image’s sharpness against a threshold value, and obtain the sharpness evaluation value of the real-time video image. The Sobel operator can be expressed by formula (2):

G = \sqrt{G_x^2 + G_y^2}    (2)

where G is the gradient of the image and G_x and G_y are obtained by convolution with the horizontal and vertical templates.
The implementation steps of the algorithm are as follows:
A 1-minute segment was intercepted from the real-time video image and sampled every 5 seconds, giving a total of 12 frames. The 12 sampled frames are converted from RGB space to gray space to reduce the amount of calculation. The gray value of each pixel is then accumulated and averaged to obtain the initial background image of the real-time video image:

B_0(x, y) = \frac{1}{12} \sum_{i=1}^{12} f_i(x, y)
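The background-initialization step above can be sketched as follows. `to_gray` uses the standard ITU-R BT.601 luma weights for the RGB-to-gray conversion, and the frame list stands in for the 12 sampled frames; the function names are illustrative:

```python
import numpy as np

def to_gray(rgb: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 RGB frame to gray with ITU-R BT.601 weights."""
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def initial_background(gray_frames) -> np.ndarray:
    """Average the sampled gray frames pixel by pixel to get the initial
    background (the paper samples 12 frames, one every 5 s of a 1-min clip)."""
    stack = np.stack([f.astype(np.float64) for f in gray_frames])
    return stack.mean(axis=0)
```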
To achieve real-time monitoring, the background image must be constantly updated with the current frame of the real-time video. The background update is calculated as the running average

B'(x, y) = (1 - \alpha) B(x, y) + \alpha f(x, y)

where f(x, y) is the latest frame in the live video image, B(x, y) and B'(x, y) are the background image before and after updating, respectively, and \alpha is the update rate.
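Under a running-average reading of the background update, a sketch follows; the update rate `alpha` is an assumed parameter, as the paper does not state its value:

```python
import numpy as np

def update_background(background: np.ndarray, frame: np.ndarray,
                      alpha: float = 0.05) -> np.ndarray:
    """Blend the newest gray frame f into the background:
    B' = (1 - alpha) * B + alpha * f.
    alpha is an assumed update rate; larger values adapt faster but let
    moving foreground leak into the background."""
    return (1.0 - alpha) * background + alpha * frame.astype(np.float64)
```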
According to the gray values of the pixels in the background image, the Sobel operator is used to detect the image’s edges and complete the sharpness detection of the real-time video image. The detection process is as follows:
(1) According to the edge detection theory of the Sobel operator and the four directional templates shown in Figure 1, a neighborhood convolution is computed for each pixel in the image to extract the edge components of the pixel in four directions:
(a) 0° gradient direction template
(b) 45° gradient direction template
(c) 90° gradient direction template
(d) 135° gradient direction template
The gradient value of each pixel in the image is

G(x, y) = |G_0| + |G_{45}| + |G_{90}| + |G_{135}|

(2) If G(x, y) \geq T, the pixel is an edge point; if G(x, y) < T, the pixel is a nonedge point, where T is the edge threshold. The edge gradient energy value E, taken as the evaluation value of image sharpness detection, is obtained by accumulating the gradient values over the image edges:

E = \frac{1}{n} \sum_{(x, y) \in \text{edges}} G(x, y)

where n is the number of edge points. E is compared with the sharpness detection range value D of a clear real-time video image background. If E \geq D, the live video image is clear; if E < D, the live video image is blurry.
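The four-direction Sobel evaluation in steps (1)–(2) can be sketched as below. The kernel coefficients are the standard Sobel masks rotated to 0°, 45°, 90°, and 135°, and `edge_thresh` stands in for the threshold T, which the paper leaves unspecified:

```python
import numpy as np

# Standard 3x3 Sobel masks in the four gradient directions of Figure 1.
KERNELS = [
    np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]),   # 0 degrees
    np.array([[-2, -1, 0], [-1, 0, 1], [0, 1, 2]]),   # 45 degrees
    np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]]),   # 90 degrees
    np.array([[0, -1, -2], [1, 0, -1], [2, 1, 0]]),   # 135 degrees
]

def sharpness_score(gray: np.ndarray, edge_thresh: float = 100.0) -> float:
    """Sum the absolute responses of the four directional templates at each
    interior pixel, keep pixels at or above edge_thresh as edge points, and
    return the mean gradient over those edge points (0.0 if none)."""
    g = gray.astype(np.float64)
    h, w = g.shape
    grad = np.zeros((h - 2, w - 2))
    for k in KERNELS:
        resp = np.zeros_like(grad)
        for i in range(3):                      # correlate with the 3x3 mask
            for j in range(3):
                resp += k[i, j] * g[i:i + h - 2, j:j + w - 2]
        grad += np.abs(resp)
    edges = grad[grad >= edge_thresh]
    return float(edges.mean()) if edges.size else 0.0
```

A blurred frame spreads intensity transitions over more pixels, which lowers the per-pixel gradient and hence the score, so a clear frame of the same scene scores higher than its blurred version.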
To compare the effect of the Sobel operator with other algorithms in clarity detection, this article selected 6 real-time video images: the first three are clear images and the latter three are blurred images. The effects of the Sobel operator and the squared gradient method were then analyzed. The comparison results are shown in Table 1.
As shown in Table 1, the trend of the clarity evaluation values of both Sobel operator detection and the squared gradient algorithm is consistent with what the naked eye observes: the more blurred the video sequence image, the smaller the clarity evaluation value. Further comparative analysis found that the accuracy of Sobel operator clarity detection was higher than that of the squared gradient method. In addition, the Sobel operator detects faster than the squared gradient method.
3. Video Noise Detection
Traditional noise detection algorithms include the grayscale interpolation method and the spatial neighborhood method. To better detect image noise and effectively eliminate the false positives generated when nonnoise points in the picture are flagged, this paper proposes an “image smoothing region” module to describe the noise in the video image.
The “image smoothing area” is a particular area of the video picture with the following two characteristics: first, the values of all pixels in the smooth area change only slightly; second, all pixels in the smooth area are highly similar to the pixels in their surrounding neighborhood. A video image area that satisfies both points is called an “image smoothing area.”
The algorithm flow is as follows:
(1) The user requests video diagnosis, and the video diagnosis platform finds an idle algorithm analysis unit and sends a message to the VAM requesting establishment of the association between the algorithm analysis unit and the camera.
(2) The user sets the diagnostic rules, including the items to be detected and their thresholds.
(3) The user requests “start diagnosis” through the video quality diagnosis platform. After receiving the request, the VAM sends the diagnosis items, diagnosis thresholds, and related information to the corresponding algorithm analysis unit according to the diagnosis rules.
(4) After receiving the diagnosis request, the video quality algorithm analysis unit selects the corresponding diagnosis algorithm according to the diagnosis item. For video noise detection, the noise detection algorithm is selected and its threshold values are set.
(5) The video quality algorithm analysis unit obtains the ID of the camera to be detected from the associated camera list, and then obtains the video code stream to be detected through the interface provided by the video monitoring platform according to the camera ID.
(6) OpenCV computer vision functions extract the current video frame data and judge whether it is YUV data; if not, the current picture is gray-processed so as to obtain the Y, U, and V data.
(7) The current frame data is first passed to the video image analysis module and then, according to the currently selected diagnostic items, to the video image noise anomaly analysis algorithm unit, whose algorithm analyzes the image data as described in steps (8) to (12) below.
(8) A template of the “Image Smooth Area” is designed to scan the pixels of the current video picture. As shown in Figure 2, the template of the “Image Smooth Area” designed in this paper is composed of four templates with different vector directions.
(9) According to this template, each pixel of the current video frame is scanned from left to right and from top to bottom; that is, the value of the current pixel is multiplied by each element value in the four templates to obtain the four “scan” values of the current pixel: Value1, Value2, Value3, and Value4, with MinValue = min{Value1, Value2, Value3, Value4}.
(10) Calculate the MinValue of each pixel in the current video image according to step (9). When the MinValue of a pixel is 0, the pixel is judged to lie in the smooth area of the image and is called a “smooth pixel”; otherwise, the pixel is treated as a noise point.
(11) Using step (10), count the total number of “smooth pixels” in the current video picture and compare it with the total number of pixels in the picture. When this ratio is less than the noise threshold value of 0.9, the current video frame is judged to have a noise abnormality.
(12) Count the total number of frames with abnormal noise over a certain period of time and compute the ratio of noisy frames to the total number of frames in that period. When this ratio is greater than 0.4, the video is judged to have abnormal noise.
(13) After processing by the video noise diagnosis algorithm, if the current video is found to be noisy, alarm information is sent to the video quality analysis module immediately and uploaded to the network management system.
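The smooth-area test in steps (9)–(12) can be sketched as follows. Since the paper does not print the template coefficients, this sketch assumes the four directional templates are simple difference masks toward the 0°, 45°, 90°, and 135° neighbours; the 0.9 and 0.4 thresholds come from the text:

```python
import numpy as np

def frame_is_noisy(gray: np.ndarray, smooth_ratio_thresh: float = 0.9) -> bool:
    """A pixel whose minimum absolute difference to its four directional
    neighbours is 0 counts as a 'smooth pixel'; the frame is flagged noisy
    when smooth pixels make up less than smooth_ratio_thresh of the interior.
    The directional-difference templates are an assumption of this sketch."""
    g = gray.astype(np.int64)
    c = g[1:-1, 1:-1]                       # interior pixels
    diffs = np.stack([
        np.abs(c - g[1:-1, 2:]),            # 0 deg neighbour
        np.abs(c - g[0:-2, 2:]),            # 45 deg neighbour
        np.abs(c - g[0:-2, 1:-1]),          # 90 deg neighbour
        np.abs(c - g[0:-2, 0:-2]),          # 135 deg neighbour
    ])
    min_value = diffs.min(axis=0)
    smooth_ratio = np.count_nonzero(min_value == 0) / min_value.size
    return smooth_ratio < smooth_ratio_thresh

def video_is_noisy(frames, frame_ratio_thresh: float = 0.4) -> bool:
    """A video is judged noisy when more than 40% of its frames are noisy."""
    flags = [frame_is_noisy(f) for f in frames]
    return sum(flags) / len(flags) > frame_ratio_thresh
```

A uniform frame has MinValue 0 everywhere and is never flagged, while random per-pixel noise rarely matches any neighbour exactly, so its smooth-pixel ratio collapses and the frame is flagged.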
4. Video Quality Diagnostic System Testing
4.1. Construction of Test Environment
The software environment tested by this algorithm is shown in Table 2.
4.2. Testing Process
The testing process of the quality diagnosis algorithm in this paper is as follows.
Firstly, the video code stream to be analyzed is obtained through the video monitoring platform’s video media interface. Secondly, the received video code stream is connected to the video diagnosis and analysis management device, which simultaneously manages the multiple video analysis algorithm units (VAs). Thirdly, each video analysis algorithm unit VA analyzes the quality of the specified video quality diagnosis items; each VA can independently complete the diagnostic analysis of all the videos. Finally, the VQCC video quality diagnosis client is used to browse the alarm results obtained from each VA’s specified video quality diagnosis algorithms. The test topology for this article is shown in Figure 3.
4.3. Analysis of Test Results
To verify the accuracy of the improved diagnostic algorithm, this paper uses the above test topology to test and compare the commonly used diagnostic algorithms with the improved diagnostic algorithm.
Table 3 shows the results of video detection using different definition evaluation algorithms and noise detection algorithms. The detection rate results show that the accuracy of the improved definition evaluation function for surveillance-video definition detection and noise detection is as high as 95.56%, much higher than the detection rates of the other definition evaluation functions, so it can judge image clarity more accurately. The false-positive rate results show that the improved evaluation function’s false-positive rate for surveillance-video definition detection and noise detection is 4.44%, much lower than that of the other evaluation functions. The improved algorithm is therefore a reliable evaluation function for video definition and video noise.
Data Availability
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Conflicts of Interest
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
References
[1] L. Xi, “Analysis of current situation and development trend of network video surveillance,” China Computer & Communication, vol. 14, 2017.
[2] F. Zhu, H. Y. Tian, Z. Y. Zhou, and Y. Q. Gong, “The full digitalization and development trend of video monitoring,” Science and Technology of West China, vol. 2, 2011.
[3] L. Q. Guo, “The application of the video surveillance technology in the mid-route of the south-to-north water,” Applied Mechanics & Materials, vol. 170-173, pp. 2037–2042, 2012.
[4] Z. Buchta, P. Jedlicka, M. Matejka et al., “White-light interference fringe detection using color CCD camera,” in AFRICON 2009, Nairobi, 2009.
[5] A. Schwarz, Y. Sanhedrai, and Z. Zalevsky, “Digital camera detection and image disruption using controlled intentional electromagnetic interference,” IEEE Transactions on Electromagnetic Compatibility, vol. 54, no. 5, pp. 1048–1054, 2012.
[6] Y. G. Sui, “Method for optimally controlling crossing vehicle signal under high definition video detection condition: CN, CN101976510 A,” 2011.
[7] D. Alonso-Caneiro, D. R. Iskander, and M. J. Collins, “Computationally efficient interference detection in videokeratoscopy images,” in 2008 IEEE 10th Workshop on Multimedia Signal Processing, Cairns, QLD, Australia, 2008.
[8] Z. Rui, S. Zhang, and S. Yu, “Moving objects detection method based on brightness distortion and chromaticity distortion,” IEEE Transactions on Consumer Electronics, vol. 53, no. 3, pp. 1177–1185, 2007.
[9] L. V. Alphen and J. G. Lourens, “Detection of colour biases in video images,” IEEE Transactions on Broadcasting, vol. 37, no. 2, pp. 69–74, 2002.
[10] V. Zlokolica, V. Pekovic, N. Teslic, T. Tekcan, and M. Temerinac, “Video freezing detection system for end-user devices,” in 2011 IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, USA, 2011.
[11] R. Grbi, D. Stefanovi, M. Vranje, and M. Herceg, “Real-time video freezing detection for 4K UHD videos,” Journal of Real-Time Image Processing, vol. 17, no. 5, pp. 1211–1225, 2020.