Advances in Mechanical Engineering
Volume 2013 (2013), Article ID 921402, 5 pages
Research on Wireless Video Monitoring System
1School of Computer and Communication, Lanzhou University of Technology, Lanzhou 730050, China
2Research Institute of Petroleum Exploration and Development Northwest, PetroChina, Lanzhou 730050, China
Received 8 July 2013; Revised 13 September 2013; Accepted 23 September 2013
Academic Editor: Wuhong Wang
Copyright © 2013 Zhao Hong et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Wired video monitoring systems suffer from fixed monitoring sites, inflexible wiring, and the high network bandwidth needed to transmit video data streams. To address this, the H.264 video compression standard is adopted, WiFi is introduced as the wireless transmission network, and video coding and compression are performed on an embedded S3C6410-based platform, so that the compressed data stream is transmitted over WiFi. On this basis, a wireless video monitoring system is built. Experiments show that the number of monitoring sites can be increased or decreased on demand and that the sites can be moved arbitrarily. The monitored image is clear and stable, and the system can be applied in practice to petroleum exploitation, geological exploration, and similar fields.
1. Introduction

A wireless network offers many advantages, such as no installation wiring, good flexibility, and convenient communication [1, 2], and has therefore attracted growing attention and wider application. However, because of its low bandwidth, a wireless network cannot support the transmission of immense real-time data streams, so an efficient video compression coding algorithm must be introduced when it is applied to video monitoring. For example, a camera producing true-color video at 24 bits per pixel and 25 frames per second generates, without video coding, a data flow far larger than a wireless network can transmit in real time. To reduce the volume of data effectively, the video data must be encoded and compressed before transmission and storage. The common Moving Picture Experts Group 1 (MPEG-1) and MPEG-2 standards were developed for high-rate media; their poor interactivity, low flexibility, and low compression ratio make it difficult to ensure real-time transmission over a wireless fidelity (WiFi) network.
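As a rough illustration of the uncompressed data volume, the raw bit rate can be computed directly; the 640×480 resolution below is an assumption for illustration, since the concrete figures depend on the camera:

```python
# Raw data rate of uncompressed true-color video.
# Resolution 640x480 is assumed for illustration only.
width, height = 640, 480      # pixels per frame (assumed)
bits_per_pixel = 24           # true color
fps = 25                      # frames per second

bits_per_second = width * height * bits_per_pixel * fps
mb_per_second = bits_per_second / 8 / 1024 / 1024
print(f"{mb_per_second:.1f} MB/s")   # about 22.0 MB/s
```

Even at this modest resolution the stream is on the order of tens of megabytes per second, well beyond what a WiFi link can sustain in real time, which motivates the compression step.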
The Joint Video Team (JVT), formed by the International Telecommunication Union-Telecommunication Standardization Sector (ITU-T) Video Coding Experts Group and the International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) Moving Picture Experts Group, released the high-compression digital video codec standard H.264 in 2003, which was also adopted as Part 10 of MPEG-4. The greatest advantage of H.264 is that it maintains high-quality, fluent images even at high compression ratios. At the same image quality, the compression ratio of H.264 is about twice that of MPEG-2; combined with better picture quality, improved network adaptability, a hybrid coding structure, and a powerful error-recovery function, this makes it well suited to a wireless monitoring system.
After the camera is configured, the system uses an embedded processor and the H.264 standard to encode and compress the video information collected by the camera. For long-distance monitoring [5, 6], the compressed video stream is transmitted to the back end over WiFi.
2. The Composition of Wireless Video Monitoring System
The wireless video monitoring system is composed of three parts: a mobile-robot terminal that captures and processes video data, the wireless network, and the monitoring platform, as shown in Figure 1.
The mobile robot is connected to the wireless router through a USB wireless network card, and real-time video images are transferred from the mobile robot to the PC monitoring platform through the wireless router.
3. System Design
3.1. Hardware Platform
The hardware platform of the mobile robot's video capture terminal is an embedded development board built around an ARM11 chip (Samsung S3C6410) as the core processor. The processor is based on the ARM1176JZF-S core, runs at clock frequencies up to 667 MHz, and integrates a powerful multiformat codec (MFC) that supports encoding and decoding of multiple video formats, such as MPEG-4, H.263, and H.264.
Hardware architecture of the system is shown in Figure 2.
3.2. Algorithm Design for Video Coding and Compression
In this design, two independent layers are used, following the H.264 architecture: the video coding layer (VCL) and the network abstraction layer (NAL). Video data is first encoded in the VCL; the encoded data is then mapped into and encapsulated by the NAL, which delivers it to the network for transmission in an appropriate format.
The H.264 standard specifies two entropy coding schemes. One combines context-based adaptive variable-length coding (CAVLC) with universal variable-length coding (UVLC); the other is context-based adaptive binary arithmetic coding (CABAC) [8, 9]. The current block is coded according to the state of its neighboring blocks in order to achieve better coding efficiency. The compression efficiency of CABAC is higher than that of CAVLC: the former saves about 20% of the bandwidth at the same quality.
3.2.1. Principle of H.264
H.264 is a video compression coding standard. Compared with previous video compression standards, it has many outstanding advantages: it increases coding efficiency through better prediction of image content, improves image quality, and strengthens error correction and adaptability to various network transmissions by adopting variable block sizes, quarter-pixel sampling precision, and weighted prediction. Redundancy is eliminated by interframe and intraframe prediction in the spatial and temporal domains and by transformation and quantization in the frequency domain. At the same image quality, the compression efficiency of H.264 is much higher than that of its predecessors, but at the cost of more complicated coding, which consumes more system resources. Nevertheless, with the continuing improvement of ARM processors and DSPs, processing speed is no longer the main concern. Figure 3 shows the block diagram of the H.264 encoding process. Most video coding algorithms currently in use are based on motion estimation and time-frequency transformation, with blocks as the basic partition units. The commonly used motion estimation criteria are as follows.
The mean square error (MSE) is shown in formula (1):

$$\mathrm{MSE}(i,j) = \frac{1}{MN}\sum_{m=1}^{M}\sum_{n=1}^{N}\bigl[f_{k}(m,n) - f_{k-1}(m+i,\, n+j)\bigr]^{2} \tag{1}$$

where $(i,j)$ is the displacement vector, $f_{k}$ and $f_{k-1}$ are the gray values of the current frame and the reference frame, and $M \times N$ is the block size. The block in the reference frame that minimizes $\mathrm{MSE}(i,j)$ is the optimum matching block.
The mean absolute difference (MAD) is shown in formula (2):

$$\mathrm{MAD}(i,j) = \frac{1}{MN}\sum_{m=1}^{M}\sum_{n=1}^{N}\bigl|f_{k}(m,n) - f_{k-1}(m+i,\, n+j)\bigr| \tag{2}$$

where the symbols have the same meaning as in formula (1). The block in the reference frame that minimizes $\mathrm{MAD}(i,j)$ is the optimum matching block.
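The block-matching search that formulas (1) and (2) describe can be sketched in Python with NumPy, here using the MAD criterion; the frame contents, block position, and search radius below are made up for illustration:

```python
import numpy as np

def mad(block, candidate):
    # Mean absolute difference between two equal-sized blocks, as in formula (2)
    return np.mean(np.abs(block.astype(int) - candidate.astype(int)))

def full_search(cur_block, ref_frame, top, left, radius):
    # Exhaustively test every displacement (i, j) within the search radius
    n = cur_block.shape[0]
    best_cost, best_mv = float("inf"), (0, 0)
    for i in range(-radius, radius + 1):
        for j in range(-radius, radius + 1):
            y, x = top + i, left + j
            if 0 <= y <= ref_frame.shape[0] - n and 0 <= x <= ref_frame.shape[1] - n:
                cost = mad(cur_block, ref_frame[y:y + n, x:x + n])
                if cost < best_cost:
                    best_cost, best_mv = cost, (i, j)
    return best_mv, best_cost

# Toy frames: a bright 4x4 patch that has moved by (1, 2) between frames.
ref = np.zeros((16, 16), dtype=np.uint8)
ref[5:9, 6:10] = 200                # patch position in the reference frame
cur_block = ref[5:9, 6:10].copy()   # the block at (top, left) = (4, 4) in the current frame
mv, cost = full_search(cur_block, ref, top=4, left=4, radius=3)
print(mv, cost)                     # best displacement (1, 2) with cost 0.0
```

Exhaustive search like this is the simplest motion estimation strategy; the fast search algorithms cited in the text reduce the number of candidate displacements tested.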
To concentrate energy, data blocks are transformed from the spatial domain to the frequency domain. The forward and inverse transforms are shown in formulas (3) and (4):

$$F(u,v) = c(u)\,c(v)\sum_{x=0}^{N-1}\sum_{y=0}^{N-1} f(x,y)\cos\frac{(2x+1)u\pi}{2N}\cos\frac{(2y+1)v\pi}{2N} \tag{3}$$

$$f(x,y) = \sum_{u=0}^{N-1}\sum_{v=0}^{N-1} c(u)\,c(v)\,F(u,v)\cos\frac{(2x+1)u\pi}{2N}\cos\frac{(2y+1)v\pi}{2N} \tag{4}$$

where $c(0)=\sqrt{1/N}$ and $c(u)=\sqrt{2/N}$ for $u>0$. The parameters obtained from the temporal-domain and spatial-domain modules are sent to the entropy encoder, which removes the statistical redundancy for the final compression. In the discrete cosine transform (DCT), the coefficient $F(0,0)$ obtained when $u$ and $v$ are both zero is proportional to the mean value of the whole image block and is called the direct-current (DC) coefficient; the remaining 63 coefficients of an 8 × 8 block are called alternating-current (AC) coefficients. The farther an AC coefficient lies from the DC coefficient, the higher the spatial frequency it represents.
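As a check on formula (3), a small NumPy sketch of the 2-D DCT shows that for a flat 8 × 8 block all the energy concentrates in the DC coefficient, which equals the block mean scaled by $N$:

```python
import numpy as np

def dct2(block):
    # 2-D DCT per formula (3): F = (c c^T) * (B f B^T), with
    # basis B[u, x] = cos((2x + 1) u pi / (2N)) and scale c(u).
    n = block.shape[0]
    c = np.array([np.sqrt(1.0 / n)] + [np.sqrt(2.0 / n)] * (n - 1))
    x = np.arange(n)
    basis = np.cos((2 * x[None, :] + 1) * np.arange(n)[:, None] * np.pi / (2 * n))
    return np.outer(c, c) * (basis @ block @ basis.T)

block = np.full((8, 8), 100.0)   # a perfectly flat image block
coeffs = dct2(block)
print(coeffs[0, 0])              # DC coefficient = 8 * mean = 800.0
```

All 63 AC coefficients of this flat block are (numerically) zero, which is exactly the energy-concentration property the entropy encoder exploits.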
Adjacent frames are strongly correlated. Each image is divided into blocks, and motion estimation search algorithms find the most similar block in an adjacent frame; the relative displacement between the two blocks is the motion vector. Processing the parts shared by adjacent frames through motion estimation reduces the temporal redundancy between frames, and the time-frequency transform, commonly the DCT, makes the energy of the data blocks more concentrated. During encoding, the motion vectors and the prediction residuals are encoded.
3.2.2. The Algorithm Implementation of Video Coding and Compression
The API provided by Samsung can be called to perform hardware encoding and decoding. As shown in Figure 4, three video encoding formats are supported; the H.264 encoding format is used in this paper.
Calling the MFC API involves three functions: the initialization function mfc_encoder_init, the execution function mfc_encoder_exe, and the handle-releasing function mfc_encoder_free.
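The three-phase calling sequence can be sketched as follows. The real mfc_encoder_* routines are C functions from Samsung's SDK whose exact signatures are not given in the text, so the Python stubs below model only the init → execute → free control flow, with invented bodies for illustration:

```python
# Sketch of the MFC three-phase lifecycle (init -> exe per frame -> free).
# All method bodies are hypothetical stand-ins, not the real SDK calls.
class MfcEncoder:
    def init(self, width, height, fps):
        # Mirrors mfc_encoder_init: set parameters and acquire the codec handle
        self.frame_size = width * height * 3 // 2   # YUV420 input frame (assumed)
        self.frames_encoded = 0
        return self

    def exe(self, yuv_frame):
        # Mirrors mfc_encoder_exe: encode one frame, return an H.264 NAL unit
        assert len(yuv_frame) == self.frame_size
        self.frames_encoded += 1
        return b"\x00\x00\x00\x01" + bytes(8)       # placeholder bitstream

    def free(self):
        # Mirrors mfc_encoder_free: release the codec handle
        self.frames_encoded = None

enc = MfcEncoder().init(640, 480, 25)
nal = enc.exe(bytes(640 * 480 * 3 // 2))
enc.free()
print(len(nal))   # 12
```

The point of the pattern is that the hardware codec handle is allocated once, reused for every frame, and released exactly once when capture stops.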
The H.264 encoding flow chart is shown in Figure 5.
The H.264 encoding flow can be divided into the following main stages.
System initialization. The system and environmental parameters are set as required. The size of each frame can be set by modifying the frame-size parameter, and a buffer is generally reserved for storing each frame of video data before the program runs.
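For example, assuming a 640 × 480 frame in YUV420 format (12 bits per pixel; the text does not state the pixel format, so this is an assumption), the reserved per-frame buffer would be:

```python
# Per-frame buffer size for YUV420 (assumed format): 1 byte of luma per
# pixel plus two quarter-resolution chroma planes = 1.5 bytes per pixel.
width, height = 640, 480
buffer_bytes = width * height * 3 // 2
print(buffer_bytes)   # 460800
```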
Image information acquisition. The TVideo class in the source file main.cpp implements this function; it opens the camera and grabs images.
Encoding. The images captured by the camera are encoded with the TH264Encoder class, and the video data is compressed into H.264-format files by calling the API provided by the multimedia codec.
Post-encoding cleanup. The mappings are released by calling SsbSipH264EncodeDeInit(handle) inside the mfc_encoder_free(void *handle) function.
Transmission. The compressed video data is packed and passed to the network transmission module, which sends it over WiFi to the monitoring terminal. The monitoring terminal decodes and plays the video image information in real time.
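The text does not specify the packet format used on the wire, but a minimal length-prefixed packing of each compressed frame, sufficient for the receiver to delimit and reorder frames, might look like this sketch:

```python
import struct

# Hypothetical packet layout: 4-byte sequence number + 4-byte payload
# length (both big-endian), followed by the compressed frame bytes.
def pack_frame(seq, payload):
    return struct.pack("!II", seq, len(payload)) + payload

def unpack_frame(packet):
    seq, length = struct.unpack("!II", packet[:8])
    return seq, packet[8:8 + length]

nal = b"\x00\x00\x00\x01\x65" + bytes(100)   # dummy H.264 NAL unit
packet = pack_frame(7, nal)
seq, payload = unpack_frame(packet)
print(seq, payload == nal)   # 7 True
```

In a real deployment the packed frames would then be written to a TCP or UDP socket toward the monitoring terminal; the framing above is only one simple possibility.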
4. System Testing
Because a more advanced compression algorithm is used, video compression efficiency is improved significantly. At the same encoding quality, the coding performance of H.264 is significantly better than that of MPEG-4. Table 1 compares the parameters obtained by coding the same video file with the H.264 and MPEG-4 compression standards.
In this paper, an embedded development platform based on the S3C6410 processor, together with the MFC module integrated in the S3C6410, is adopted to encode video data. A monitoring system based on the H.264 video compression standard has been implemented. It achieves higher coding efficiency while ensuring the quality of the compressed video, overcomes the poor network adaptability of earlier compression standards, and suppresses image distortion and background smearing. Motion estimation and block partitioning are combined to improve compression efficiency, and the compressed video data is then transferred over the wireless network. A modular design separates video coding from network data transmission and takes full advantage of the hardware's processing capacity. In future work, we plan to study the hardware decoding characteristics of the H.264 standard in depth and optimize the algorithm to improve performance; we will also consider adding an audio function to the system so as to broaden its market prospects.
This work is supported by the National Natural Science Foundation of China under Grant no. 61262016, the University Foundation of Gansu Province under Grant no. 14-0220, the Natural Science Foundation of Gansu under Grant no. 1208RJZA239, and the Technology Project of Lanzhou (2012-2-64).
- Y. Li, C. Ai, Z. Cai, and R. Beyah, “Sensor scheduling for p-percent coverage in wireless sensor networks,” Cluster Computing, vol. 14, no. 1, pp. 27–40, 2011.
- N. Neji, M. Jridi, A. Alfalou, and N. Masmoudi, “Evaluation and implementation of simultaneous binary arithmetic coding and encryption for HD H264/AVC codec,” in Proceedings of the International Multi-Conference on Systems, Signals and Devices (SSD '13), pp. 18–21, Hammamet, Tunisia, 2013.
- A. Puri, X. Chen, and A. Luthra, “Video coding using the H.264/MPEG-4 AVC compression standard,” Signal Processing, vol. 19, no. 9, pp. 793–849, 2004.
- X. Su, L. Ji, and X. Li, “A fast and low complexity approach for H.264/AVC intra mode decision,” Multimedia Tools and Applications, vol. 52, no. 1, pp. 65–76, 2011.
- H. Zhao, L. Yin, J. Cao, and C. Shen, “Design and implementation of embedded multimedia terminal based on ARM9 platform,” Journal of Beijing Institute of Technology, vol. 19, no. 2, pp. 50–54, 2010.
- E. Soyak, S. A. Tsaftaris, and A. K. Katsaggelos, “Low-complexity tracking-aware H.264 video compression for transportation surveillance,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 21, no. 10, pp. 1378–1389, 2011.
- Y. Kang, W. Xie, and B. Hu, “A scale adaptive mean-shift tracking algorithm for robot vision,” Advances in Mechanical Engineering, vol. 2013, Article ID 601612, 11 pages, 2013.
- J. Heo and Y.-S. Ho, “Efficient differential pixel value coding in CABAC for H.264/AVC lossless video compression,” Circuits, Systems, and Signal Processing, vol. 31, no. 2, pp. 813–825, 2012.
- W. Li, F. Yang, and G. Ren, “High-speed rate estimation based on parallel processing for H.264/AVC CABAC encoder,” IEEE Transactions on Consumer Electronics, vol. 59, no. 1, pp. 237–243, 2013.
- P. Wang, H. Huang, and Z. Tan, “Fast feature-based mode decision for 4 × 4 intra prediction in H.264/AVC,” Science China Information Sciences, vol. 54, no. 11, pp. 2386–2399, 2011.
- W. Wang, F. Hou, H. Tan, and H. Bubb, “A framework for function allocations in intelligent driver interface design for comfort and safety,” International Journal of Computational Intelligence Systems, vol. 3, no. 5, pp. 531–541, 2010.
- F. Pan, X. Lin, S. Rahardja et al., “Fast mode decision algorithm for intraprediction in H.264/AVC video coding,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 15, no. 7, pp. 813–822, 2005.
- N. Bahri, I. Werda, A. Samet, and M. A. B. Ayed, “Fast intra mode decision algorithm for H264/AVC HD baseline profile encoder,” International Journal of Computer Applications, vol. 37, no. 6, pp. 8–13, 2012.
- G. Wei, L. Wu, S. Wang, and C. Qi, “Fast mode selection for H.264 video coding standard based on motion region classification,” Multimedia Tools and Applications, vol. 58, pp. 453–466, 2012.
- G. Pastuszak and M. Jakubowski, “Adaptive computationally scalable motion estimation for the hardware H.264/AVC encoder,” IEEE Transactions on Circuits and System for Video Technology, vol. 23, no. 5, pp. 802–812, 2013.
- S.-C. Hsia and Y.-C. Hung, “Fast multi-frame motion estimation for H264/AVC system,” Signal, Image and Video Processing, vol. 4, no. 2, pp. 167–175, 2010.