Computational intelligence (CI) has emerged as a powerful tool for information processing, decision-making, and knowledge management. CI approaches are, in general, useful for designing advanced computerized systems that mimic human behaviors and capabilities in solving complex tasks, such as learning, adaptation, and evolution. Popular CI models include fuzzy systems, artificial neural networks, evolutionary algorithms, multiagent systems, decision trees, rough set theory, knowledge-based systems, and hybrids of these models.

Images, on the other hand, have always played an essential role in human life: they have been, and will continue to be, one of our most important information carriers. Recent advances in digital imaging and computer hardware have led to an explosion in the use of digital images across scientific and engineering applications. As a result, each new approach developed by engineers, mathematicians, and computer scientists is quickly identified, understood, and assimilated for application to image processing problems.

Classical image processing methods often face great difficulties when dealing with images containing noise and distortions. Under such conditions, computational intelligence approaches have recently been extended to address challenging real-world image processing problems. Interest in the subject among researchers and developers continues to grow, as evidenced by the large volume of research published in leading international journals and conference proceedings.

The main objective of this special issue is to bridge the gap between computational intelligence techniques and challenging image processing applications. Since the idea was first conceived, the goal has been to expose readers to cutting-edge research and applications across the domain of image processing, particularly those in which contemporary computational intelligence techniques can be or have been successfully employed.

The special issue received many high-quality submissions from countries all over the world. All submissions underwent the same standard of peer review, by at least three independent reviewers, that is applied to regular submissions to Mathematical Problems in Engineering. Owing to limited space, only a small number of papers could be included. The primary guideline has been to demonstrate the wide scope of computational intelligence algorithms and their applications to image processing problems.

The paper authored by T. Wu and L. Zhang presents an uncertainty algorithm based on the cloud model for the generation of image-guided Voronoi aesthetic patterns. As a computational intelligence tool, the cloud model handles uncertainty more completely and more flexibly than approaches that treat it as randomness compensated by fuzziness, fuzziness compensated by randomness, second-order fuzziness, or second-order randomness. The authors conduct seven groups of experiments to establish default parameters for the proposed method, and then demonstrate its efficacy in two further groups of experiments using both visual and quantitative comparisons. The experimental results show that, compared with related methods, the new technique successfully generates Voronoi-based aesthetic patterns with soft borders.
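
For readers unfamiliar with the cloud model, the forward normal cloud generator at its core can be sketched in a few lines. This is a minimal illustration of the general tool, not the authors' pattern-generation pipeline; the parameters Ex (expectation), En (entropy), and He (hyper-entropy) follow standard cloud-model notation, and all numeric defaults are arbitrary:

```python
import numpy as np

def normal_cloud_drops(Ex, En, He, n=1000, seed=None):
    """Forward normal cloud generator: produce n cloud drops
    (x, membership) from expectation Ex, entropy En, hyper-entropy He."""
    rng = np.random.default_rng(seed)
    # Each drop draws its own entropy En', coupling randomness and
    # fuzziness in one mechanism instead of layering one on the other.
    En_prime = np.abs(rng.normal(En, He, size=n)) + 1e-12
    x = rng.normal(Ex, En_prime, size=n)
    mu = np.exp(-(x - Ex) ** 2 / (2.0 * En_prime ** 2))
    return x, mu
```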

K. Zeng et al. introduce a ranking model that captures the complex relations between product visual and textual information in visual search systems. To capture these relations, the authors focus on graph-based paradigms that model the relations among product images, product category labels, and product names and descriptions. They develop a unified probabilistic hypergraph ranking algorithm which models the correlations among product visual and textual features and thereby substantially enriches the description of each image. The authors evaluate the proposed ranking algorithm on a dataset collected from a real e-commerce website; the comparative results demonstrate that it substantially improves retrieval performance over visual distance-based ranking.
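
As a point of reference, hypergraph-based ranking is commonly formulated as regularized propagation over a normalized hypergraph adjacency, in the style of Zhou et al. The sketch below shows that generic iteration only, not the paper's unified probabilistic variant; alpha and the iteration count are conventional choices:

```python
import numpy as np

def hypergraph_rank(H, w, y, alpha=0.9, iters=100):
    """Generic hypergraph ranking: iterate
    f <- alpha * Theta @ f + (1 - alpha) * y,
    with Theta = Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2}.
    H: (n_vertices, n_edges) incidence matrix, w: hyperedge weights,
    y: query relevance prior over vertices."""
    Dv = (H * w).sum(axis=1)              # weighted vertex degrees
    De = H.sum(axis=0)                    # hyperedge degrees
    d = 1.0 / np.sqrt(np.maximum(Dv, 1e-12))
    Theta = (d[:, None] * H * (w / De)) @ H.T * d[None, :]
    f = y.astype(float).copy()
    for _ in range(iters):
        f = alpha * Theta @ f + (1 - alpha) * y
    return f
```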

N. R. Soora and P. S. Deshpande present a novel license plate (LP) detection method that uses different clustering techniques based on the geometrical properties of the LP characters. The authors also propose a new character extraction method that recovers noisy or missed character components caused by noise between the LP characters and the LP border. Because it relies on the geometrical properties of the set of characters in the LP, the proposed method detects the LP of any type of vehicle (including vans, cars, trucks, and motorcycles) with different plate variations and under different environmental and weather conditions, and it is independent of color, rotation, size, and scale variations of the LP. The concept is tested on the standard media-lab and Application Oriented License Plate (AOLP) benchmark databases, where the proposed approach achieves LP detection success rates of 97.3% and 93.7%, respectively. The results clearly indicate that the proposed approach is comparable to previously published methods evaluated on publicly available benchmark LP databases.
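
A common geometric cue of this kind is that characters on one plate share a similar height and vertical position. A minimal sketch of grouping candidate character boxes on that basis follows; DBSCAN, the feature scaling, and the eps value are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_character_boxes(boxes, eps=0.25, min_chars=3):
    """Group candidate character bounding boxes (x, y, w, h) that share
    a similar height and vertical position -- a typical geometric cue
    for characters belonging to one licence plate."""
    boxes = np.asarray(boxes, dtype=float)
    h_mean = boxes[:, 3].mean()
    # Normalized (top edge, height) so both cues live on one scale.
    feats = np.column_stack([boxes[:, 1] / h_mean, boxes[:, 3] / h_mean])
    labels = DBSCAN(eps=eps, min_samples=min_chars).fit_predict(feats)
    return labels  # boxes sharing a nonnegative label form one candidate
```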

The paper authored by C. Nyirarugira et al. presents a recognition method derived from particle swarm movement for free-air hand gestures. The authors propose an automated process for segmenting meaningful gesture trajectories based on particle swarm movement, and they incorporate a subgesture detection and reasoning method into the recognizer to avoid premature gesture spotting. Evaluation of the proposed method shows promising recognition results on a digit gesture vocabulary: 97.6% on preisolated gestures, 94.9% on stream gestures with assistive boundary indicators, and 94.2% for blind gesture spotting. The proposed recognizer requires few computational resources and is thus a good candidate for real-time applications.
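
The canonical particle swarm update from which such movement models derive is compact enough to show here. This is the standard PSO optimizer, not the paper's gesture recognizer; all parameter values are conventional defaults:

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=None):
    """Canonical PSO: each particle is pulled toward its personal best
    and the swarm's global best with random strengths r1, r2."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest
```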

R. Al Shehhi et al. present a hierarchical graph-based segmentation method for blood vessel detection in digital retinal images. The method employs several perceptual Gestalt principles (similarity, closure, continuity, and proximity) to merge segments into coherent, connected vessel-like patterns. The integration of the Gestalt principles is based on object-based features (e.g., color, black top-hat (BTH) morphology, and context) and graph-analysis algorithms (e.g., Dijkstra paths). The segmentation framework consists of two main steps: preprocessing and multiscale graph-based segmentation. Preprocessing enhances the lighting conditions, which suffer from low illumination contrast, and constructs the features needed to enhance the vessel structure, since vessel patterns are sensitive to multiscale and orientation structure. Graph-based segmentation then reduces the computational processing required by partitioning the region of interest into its most semantic objects. The method was evaluated on three publicly available datasets. Experimental results show that the preprocessing stage achieves better results than state-of-the-art enhancement methods, while the performance of the proposed graph-based segmentation is consistent and comparable to existing methods, with improved capability in detecting small and thin vessels.
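
Of the features mentioned, the black top-hat transform is simple to state: it is the morphological closing of the image minus the image itself, which responds to dark, thin structures such as vessels on a brighter background. A minimal sketch (the structuring-element size is an arbitrary assumption):

```python
import numpy as np
from scipy import ndimage

def black_top_hat(image, size=7):
    """Black top-hat: morphological closing minus the original image.
    Dark, thin structures (vessels) on a brighter background light up."""
    closed = ndimage.grey_closing(image, size=(size, size))
    return closed - image
```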

The paper authored by G. Niu et al. proposes a multikernel-like learning algorithm based on the data probability distribution (MKDPD) for classification purposes. In this approach, the parameters of a kernel function are adjusted locally according to the data probability distribution, producing different kernel functions. These kernel functions generate different Reproducing Kernel Hilbert Spaces (RKHS), and the direct sum of the corresponding subspaces constitutes the solution space of the learning problem. Furthermore, based on the proposed MKDPD algorithm, an algorithm for labeling newly arriving data is introduced, in which the basis functions are retrained on the new data while their coefficients remain unchanged. The experimental results presented in the paper show the effectiveness of the proposed algorithms.
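
One generic way to let kernel parameters follow the data distribution is to give each point a bandwidth tied to its local density, for example its distance to the k-th nearest neighbour. The sketch below illustrates that idea only; it is not the MKDPD construction, and the symmetric bandwidth combination used here is a simplification:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def density_adapted_rbf_gram(X, k=10):
    """RBF Gram matrix with point-dependent bandwidths: sigma_i is the
    distance from x_i to its k-th nearest neighbour, so the kernel is
    narrow in dense regions and wide in sparse ones."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    dists, _ = nn.kneighbors(X)          # column 0 is the point itself
    sigma = np.maximum(dists[:, -1], 1e-12)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (sigma[:, None] * sigma[None, :]))
```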

H. Yang et al. introduce a new general TV regularizer, namely, generalized TV regularization, to study image denoising and nonblind image deblurring problems. In order to handle generalized TV image restoration with solution-driven adaptivity, the authors establish the existence and uniqueness of the solution of the associated mixed quasivariational inequality. Moreover, they prove the convergence of a modified projection algorithm for solving mixed quasivariational inequalities. The corresponding experimental results support their theoretical findings.
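
For orientation, classical (nonadaptive) TV denoising minimizes the ROF energy 0.5‖u − f‖² + λ·TV(u). A plain gradient-descent sketch on a smoothed version of this energy is given below; it is the textbook baseline that generalized TV extends, not the paper's quasivariational scheme, and the step size and smoothing parameter are illustrative:

```python
import numpy as np

def tv_denoise(f, lam=0.1, tau=0.2, iters=200, eps=1e-3):
    """Gradient descent on the smoothed ROF energy
    E(u) = 0.5 * ||u - f||^2 + lam * sum sqrt(|grad u|^2 + eps^2),
    with periodic boundaries via np.roll."""
    u = f.astype(float).copy()
    for _ in range(iters):
        ux = np.roll(u, -1, axis=1) - u          # forward differences
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux**2 + uy**2 + eps**2)
        px, py = ux / mag, uy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= tau * ((u - f) - lam * div)         # descend the energy
    return u
```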

C.-L. Cocianu and A. Stan propose a new method that combines decorrelation and shrinkage techniques with a neural network-based approach for noise removal. The images are represented as sequences of equal-sized blocks, each block being distorted by stationary, statistically correlated noise. A significant amount of the induced noise is removed in a preprocessing step that combines a decorrelation method with a standard shrinkage-based technique. For each initial image, the preprocessing step yields a sequence of blocks that are then compressed at a certain rate, and each component of the resulting sequence is supplied as input to a feed-forward neural architecture. The local memories of the neurons are generated through a supervised learning process whose inputs are the compressed blocks of the same index value and whose targets are the compressed means of the corresponding preprocessed blocks. Finally, the sequence of decompressed blocks obtained with the standard decompression technique constitutes the cleaned representation of the initial image. The performance of the proposed method is evaluated by a long series of tests, with very encouraging results compared to similar noise removal approaches.
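
The spirit of the decorrelation-plus-shrinkage preprocessing can be illustrated with a PCA basis and soft thresholding. This is a generic sketch: the threshold is a free parameter, and the paper's actual transform and learning stage are not reproduced here:

```python
import numpy as np

def decorrelate_and_shrink(blocks, threshold):
    """Decorrelate image blocks in their PCA basis, soft-threshold the
    coefficients (shrinkage), and transform back.
    blocks: array of shape (n_blocks, h, w)."""
    X = blocks.reshape(len(blocks), -1).astype(float)
    mean = X.mean(axis=0)
    Xc = X - mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # PCA directions
    coeffs = Xc @ Vt.T                                 # decorrelated
    shrunk = np.sign(coeffs) * np.maximum(np.abs(coeffs) - threshold, 0.0)
    return (shrunk @ Vt + mean).reshape(blocks.shape)
```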

The paper authored by L. Chang et al. introduces a method to address problems that the basic ViBe algorithm cannot effectively eliminate, such as the influence of background noise, follower shadows, and ghosts under complex backgrounds. Starting from the basic ViBe algorithm, the paper puts forward improvements in threshold setting, shadow elimination, and ghost suppression. First, the judgment threshold is adjusted as the background changes. Second, a fast ghost-elimination algorithm based on an adaptive threshold is introduced. Finally, follower shadows are detected and suppressed effectively through gray-level properties and texture characteristics. Experiments show that the proposed algorithm works well in complex environments without affecting computing speed and has stronger robustness and better adaptability than the basic algorithm; ghosts and follower shadows are absorbed quickly, and the accuracy of target detection is effectively improved.
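
For context, the basic ViBe test that these improvements build on keeps a small set of background samples per pixel and declares a pixel foreground when too few samples lie close to it. A minimal grayscale sketch follows, without ViBe's spatial-propagation update and with illustrative parameter values:

```python
import numpy as np

def vibe_step(frame, samples, radius=20, min_matches=2,
              subsample=16, seed=None):
    """Basic ViBe test on a grayscale frame.
    samples: (N, H, W) background samples per pixel, updated in place."""
    rng = np.random.default_rng(seed)
    diffs = np.abs(samples.astype(int) - frame.astype(int)[None])
    fg = (diffs < radius).sum(axis=0) < min_matches   # too few matches
    # Conservative random-in-time update: each background pixel
    # overwrites one of its stored samples with probability 1/subsample.
    update = (~fg) & (rng.integers(0, subsample, frame.shape) == 0)
    ys, xs = np.nonzero(update)
    idx = rng.integers(0, samples.shape[0], ys.shape)
    samples[idx, ys, xs] = frame[ys, xs]
    return fg
```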

L. Zeng et al. propose an image enhancement algorithm to address well-known problems in detection methods for the 3D nondestructive testing of printed circuit boards (PCBs). Tailored to the characteristics of 3D CT images of PCBs, the proposed algorithm uses a gray-level and distance double-weighting strategy to reshape the original histogram distribution: it suppresses the grayscale of the nonmetallic substrate and expands the grayscale of wires and other metals, thereby enhancing the gray-level difference between substrate and metal and highlighting metallic materials. The flexibility and advantages of the proposed algorithm are confirmed by analyses and experimental results.
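
Histogram reshaping of this general kind can be phrased as histogram equalization with re-weighted bins. The sketch below shows that generic mechanism only; the choice of weights standing in for the paper's gray/distance double weighting is hypothetical:

```python
import numpy as np

def weighted_hist_equalize(img, weights=None, levels=256):
    """Histogram equalization of an 8-bit image in which each gray-level
    bin may be re-weighted before the cumulative mapping is built;
    up-weighting metal gray levels and down-weighting substrate levels
    stretches the metal contrast."""
    hist = np.bincount(img.ravel(), minlength=levels).astype(float)
    if weights is not None:
        hist *= weights                    # hypothetical bin weights
    cdf = np.cumsum(hist)
    mapping = (levels - 1) * cdf / cdf[-1]
    return mapping[img].astype(np.uint8)
```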

The paper authored by H. Xiang et al. presents a hybrid pixel-value-ordering (PVO) algorithm for prediction-error-based reversible data hiding in images. The proposed method predicts pixels in both positive and negative orientations and, assisted by an expansion-bin selection technique, yields an optimized prediction-error expansion strategy that includes bin 0. Furthermore, a novel field-biased context pixel selection is developed that exploits the detailed correlations among neighboring pixels better than a uniform scheme. Experimental results show that the proposed approach improves embedding capacity and enhances marked-image fidelity, outperforming other state-of-the-art reversible data hiding methods, especially for moderate and large payloads.
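
The classic max-side PVO embedding step that such hybrid schemes extend is easy to state: within a block, the largest pixel is predicted by the second largest, a prediction error of 1 carries a payload bit, and larger errors are shifted to keep the mapping invertible. A sketch of that baseline (ignoring saturation handling at gray-level 255, and not including the paper's bin-0 and bidirectional extensions):

```python
import numpy as np

def pvo_embed_max(block, bits, bit_idx):
    """Classic max-side PVO step: the largest pixel in a block is
    predicted by the second largest; a prediction error of 1 carries one
    payload bit, larger errors are shifted by 1 so decoding stays
    invertible."""
    flat = block.ravel().astype(int)
    order = np.argsort(flat, kind="stable")
    mx, second = order[-1], order[-2]
    e = flat[mx] - flat[second]
    if e == 1 and bit_idx < len(bits):
        flat[mx] += bits[bit_idx]          # embed one bit (0 or 1)
        bit_idx += 1
    elif e > 1:
        flat[mx] += 1                      # shift only, carries no data
    return flat.reshape(block.shape), bit_idx
```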

J. Jia et al. introduce a novel normal inverse Gaussian (NIG) model-based method that uses a Bayesian estimator to carry out image denoising in the nonsubsampled contourlet transform (NSCT) domain. In the proposed method, the model is first used to describe the distributions of the transform coefficients of each subband in the NSCT domain. The corresponding threshold function is then derived from the model using Bayesian maximum a posteriori probability estimation theory. Finally, an optimal linear interpolation thresholding algorithm (OLI-Shrink) is employed to achieve a gentler thresholding effect. Comparative experiments indicate that the denoising performance of the proposed method, in terms of peak signal-to-noise ratio, is superior to that of several state-of-the-art methods, including BLS-GSM, K-SVD, BivShrink, and BM3D, while the method achieves structural similarity (SSIM) index values comparable to those of the block-matching 3D (BM3D) method.
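
The overall pipeline (model the subband coefficients, derive a threshold, shrink) is shared with classical transform-domain Bayesian shrinkage. A self-contained sketch using a separable wavelet and the BayesShrink threshold is shown below; the paper's NSCT transform and NIG prior are substituted here by these simpler stand-ins:

```python
import numpy as np
import pywt

def bayes_shrink_denoise(img, wavelet="db8", levels=3):
    """Transform-domain Bayesian shrinkage: estimate the noise level
    from the finest diagonal subband (robust median rule), derive the
    BayesShrink threshold sigma_n^2 / sigma_x per subband, and
    soft-threshold the detail coefficients."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=levels)
    sigma_n = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    out = [coeffs[0]]
    for detail in coeffs[1:]:
        shrunk = []
        for band in detail:
            sigma_x = np.sqrt(max(band.var() - sigma_n**2, 1e-12))
            shrunk.append(pywt.threshold(band, sigma_n**2 / sigma_x,
                                         mode="soft"))
        out.append(tuple(shrunk))
    return pywt.waverec2(out, wavelet)
```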

The paper authored by B. Li et al. develops a new approach to single image superresolution based on a generic patch prior: a randomly selected patch in the underlying high-resolution (HR) image should visually resemble, as much as possible, some patch extracted from the input low-resolution (LR) image. Based on this prior, the approach formulates a cost function and applies an iterative scheme to estimate the optimal HR image. To solve the cost function, the authors train a Gaussian mixture model (GMM) on a sampled dataset to approximate the joint probability density function (PDF) of the input image at different scales. Through extensive comparative experiments, the paper demonstrates that the visual fidelity of the results is often superior to that of other state-of-the-art algorithms, as determined by both perceptual judgment and quantitative measures.
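
Fitting a GMM prior to image patches is itself routine; a minimal sketch using scikit-learn is given below, where the patch size, component count, and sampling scheme are illustrative choices rather than the paper's:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_patch_gmm(img, patch=7, n_components=20, n_samples=5000, seed=0):
    """Fit a GMM to randomly sampled image patches, giving a simple
    parametric approximation of the patch PDF."""
    rng = np.random.default_rng(seed)
    H, W = img.shape
    ys = rng.integers(0, H - patch + 1, n_samples)
    xs = rng.integers(0, W - patch + 1, n_samples)
    P = np.stack([img[y:y + patch, x:x + patch].ravel()
                  for y, x in zip(ys, xs)]).astype(float)
    return GaussianMixture(n_components=n_components,
                           covariance_type="full").fit(P)
```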

Acknowledgments

Finally, we would like to express our gratitude to all of the authors for their contributions and to the reviewers for their valuable comments and feedback. We hope this special issue offers a comprehensive and timely view of the applications of computational intelligence in image processing and that it will stimulate further research.

Erik Cuevas
Daniel Zaldívar
Gonzalo Pajares
Marco Perez-Cisneros
Raúl Rojas