Scientific Programming
Volume 2017, Article ID 1431574, 15 pages
Research Article

A Highly Parallel and Scalable Motion Estimation Algorithm with GPU for HEVC

School of Computer, National University of Defense Technology, Changsha 410073, China

Correspondence should be addressed to Yun-gang Xue; xueyungangyun@163.com

Received 13 March 2017; Revised 13 August 2017; Accepted 10 September 2017; Published 12 October 2017

Academic Editor: Christoph Kessler

Copyright © 2017 Yun-gang Xue et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


We propose a highly parallel and scalable motion estimation algorithm, named multilevel resolution motion estimation (MLRME for short), that combines the advantages of local full search and downsampling. Subsampling a video frame saves a large amount of computation, while the local full-search method exposes massive parallelism and can fully exploit powerful modern many-core accelerators such as GPUs and the Intel Xeon Phi. We integrated the proposed MLRME into HM12.0, and the experimental results showed that its encoding quality is close to that of the fast motion estimation in HEVC, declining by less than 1.5%. We also implemented MLRME with CUDA, obtaining a 30-60x speed-up over the serial algorithm on a single CPU. Specifically, the parallel implementation of MLRME on a GTX 460 GPU meets the real-time coding requirement with about 25 fps for the video format, while for the performance is more than 100 fps.