Advances in Artificial Intelligence
Volume 2016, Article ID 6361237, 34 pages
http://dx.doi.org/10.1155/2016/6361237
Research Article

Automatic Representation and Segmentation of Video Sequences via a Novel Framework Based on the nD-EVM and Kohonen Networks

José-Yovany Luis-García and Ricardo Pérez-Aguila

Group of Multidisciplinary Research Applied to Education and Engineering (GIMAEI), The Technological University of the Mixteca (UTM), Carretera Huajuapan-Acatlima Km 2.5, 69004 Huajuapan de León, OAX, Mexico

Received 28 September 2015; Revised 19 January 2016; Accepted 20 January 2016

Academic Editor: Francesco Buccafurri

Copyright © 2016 José-Yovany Luis-García and Ricardo Pérez-Aguila. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Video segmentation is currently a subject of interest in the Computer Vision field, since almost every video application based on scene content relies on it. Such applications include indexing, surveillance, medical imaging, event analysis, and computer-guided surgery, to name a few. To achieve their goals, these applications need meaningful information about a video sequence in order to understand the events in the corresponding scene. This semantic information can be obtained from the objects of interest present in the scene, and recognizing those objects requires computing features that help identify similarities and dissimilarities, among other characteristics. For this reason, segmentation is one of the most important tasks in video and image processing. The segmentation process consists of separating data into groups that share similar features. Based on this, in this work we propose a novel framework for video representation and segmentation. The main workflow of the framework processes an input frame sequence in order to obtain, as output, a segmented version of it. For video representation we use the Extreme Vertices Model in the n-Dimensional Space (nD-EVM), while we use the Discrete Compactness descriptor as the feature and Kohonen Self-Organizing Maps for segmentation purposes.
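To illustrate the segmentation stage summarized in the abstract, the following minimal Python sketch (not taken from the paper; the feature values, number of units, and learning schedule are hypothetical) trains a small one-dimensional Kohonen Self-Organizing Map on scalar Discrete Compactness values and groups them into clusters, mirroring the idea of separating data that share similar features.

```python
# Minimal sketch (hypothetical values): a 1-D Kohonen Self-Organizing Map
# that groups per-region Discrete Compactness values into clusters.
import numpy as np

def train_som(features, n_units=4, epochs=200, lr0=0.5, sigma0=2.0, seed=0):
    """Train a 1-D SOM on scalar features (e.g., Discrete Compactness)."""
    rng = np.random.default_rng(seed)
    # Initialize unit weights by sampling uniformly from the input range.
    weights = rng.uniform(features.min(), features.max(), size=n_units)
    positions = np.arange(n_units)  # unit indices on the 1-D lattice
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)        # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)  # decaying neighborhood radius
        for x in rng.permutation(features):
            bmu = int(np.argmin(np.abs(weights - x)))  # best-matching unit
            # Gaussian neighborhood around the BMU on the lattice.
            h = np.exp(-((positions - bmu) ** 2) / (2.0 * sigma ** 2))
            weights += lr * h * (x - weights)          # pull units toward the sample
    return weights

def assign_clusters(features, weights):
    """Label each feature with the index of its best-matching unit."""
    return np.argmin(np.abs(features[:, None] - weights[None, :]), axis=1)

if __name__ == "__main__":
    # Hypothetical Discrete Compactness values for regions of a frame sequence.
    dc = np.array([0.12, 0.15, 0.13, 0.48, 0.52, 0.50, 0.91, 0.89])
    w = train_som(dc)
    print("unit weights:", np.round(w, 3))
    print("cluster labels:", assign_clusters(dc, w))
```

In this toy run, regions with similar Discrete Compactness values are mapped to the same SOM unit, which is the role the Kohonen network plays in the proposed framework; the actual method operates on nD-EVM representations of the frame sequence rather than on hand-written scalars.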