Mathematical Problems in Engineering

Volume 2016 (2016), Article ID 5481602, 8 pages

http://dx.doi.org/10.1155/2016/5481602

## Glowworm Swarm Optimization and Its Application to Blind Signal Separation

^{1}College of Computer Science, Communication University of China, Beijing 100024, China

^{2}Business College, Beijing Union University, Beijing 100025, China

Received 31 December 2015; Revised 27 March 2016; Accepted 7 April 2016

Academic Editor: Marco Mussetta

Copyright © 2016 Zhucheng Li and Xianglin Huang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Traditional optimization algorithms for blind signal separation (BSS) are mainly gradient-based, which requires the objective function to be continuous and differentiable, so their applicability is very limited. Moreover, these algorithms suffer from slow convergence and limited accuracy. To overcome these drawbacks, this paper presents a modified glowworm swarm optimization (MGSO) algorithm based on a novel step adjustment rule and then applies MGSO to BSS. Taking the kurtosis of the mixed signals as the objective function of BSS, MGSO-BSS succeeds in separating the mixed signals in the Matlab environment. The simulation results show that, compared with particle swarm optimization (PSO) and standard GSO, MGSO is more effective in capturing the global optimum of the BSS objective function and achieves faster convergence and higher accuracy.

#### 1. Introduction

The “cocktail party” problem is a classic example of blind signal separation: imagine being at a friend’s party where, during your conversation with your friend, even though the sounds that reach your ears are a complicated mix of music, other people talking, wine glasses tinkling, and so on, you are able to understand your friend and enjoy the music at the same time. The task of blind signal separation (BSS) is to recover unknown, mutually independent source signals from the mixtures observed at the sensors. BSS technology has received considerable attention in recent years because of its significant potential applications, such as sonar and radar signal processing [1, 2], wireless communication [3], geophysical exploration [4, 5], biomedical signal processing [6, 7], speech and image processing [8, 9], and machine fault diagnosis [10, 11]. BSS has two important components: the objective function and the optimization algorithm. The objective function measures the statistical independence of the separated signals, and the optimization algorithm ensures that the objective function value reaches its peak over successive updates. The convergence speed and accuracy of BSS mainly depend on the latter. Therefore, choosing an appropriate optimization algorithm is the crucial challenge of BSS.

Conventional optimization algorithms for BSS are based on gradient techniques, yet these methods yield a “poor” solution unless suitable initial parameters are given. However, selecting these parameters is very difficult because of the blind hypothesis. In particular, these algorithms cannot be used when the objective function is discontinuous or nondifferentiable. To solve these problems, swarm-based algorithms have gradually been applied to BSS in the past few years, such as genetic algorithms (GA) [12, 13], particle swarm optimization (PSO) [12, 14], and artificial bee colony (ABC) [15]. Swarm-based algorithms belong to a family of nature-inspired, population-based optimization methods; the behavior of their agents is inspired by biological swarms such as ants, fish, bees, frogs, and bacteria, which interact according to certain behavioral laws to cooperatively accomplish necessary tasks. Compared with conventional gradient-based approaches, these techniques for BSS offer higher accuracy, efficiency, and robustness. However, there is still room to improve these optimization algorithms with respect to their tendency to fall into local optima, their convergence rate, and their computational accuracy.

The GSO acronym can stand for two different swarm-based optimization methods: Genetical Swarm Optimization [16, 17] and Glowworm Swarm Optimization. Genetical Swarm Optimization is a hybrid evolutionary algorithm that combines the well-known PSO and the genetic algorithm (GA). Glowworm Swarm Optimization [18–23], proposed by Krishnanand and Ghose, imitates the behavior of glowworms, each of which carries a luminescent quantity called luciferin to exchange information with its companions. In this paper, GSO refers exclusively to Glowworm Swarm Optimization.

Because its decision radius changes adaptively, GSO can effectively avoid missing the optimal solution and is very competent in capturing the global optimum of the objective function in a finite-dimensional vector space. At present, GSO has been successfully applied in various fields, such as the vehicle routing problem [24], the dock scheduling problem [25], and wireless sensor networks [26]. Despite these advantages, the standard GSO suffers from a trade-off between convergence speed and accuracy because of its fixed step-size (the suggested step-size is 0.03).

In this paper, we present a modified GSO (MGSO) algorithm to overcome the above defects and then apply MGSO to BSS; finally, experiments demonstrate the effectiveness of MGSO-BSS. The remainder of this paper is organized as follows: the next section presents the basic GSO and describes the proposed modifications; Section 3 introduces the working principle of BSS; Section 4 describes the seeking mode of the new BSS algorithm based on MGSO; in Section 5, we carry out experiments to evaluate MGSO-BSS and analyze the simulation results; the last section contains the concluding remarks on this work.

#### 2. Basic Glowworm Swarm Optimization

##### 2.1. Algorithm Representation

In GSO, a swarm of glowworms is initially deployed randomly in the solution space. Each glowworm represents a solution of the objective function in the search space and carries a certain quantity of luciferin. The luciferin level is associated with the fitness of the agent’s current position: a brighter individual occupies a better position, that is, represents a better solution. Using a probabilistic mechanism, each agent can only be attracted by a neighbor within its local-decision domain whose luciferin intensity is higher than its own, and it then moves towards that neighbor. The density of a glowworm’s neighbors affects its decision radius and determines the size of its local-decision domain: when the neighbor density is low, the local-decision domain enlarges in order to find more neighbors; otherwise, it shrinks to allow the swarm to split into smaller groups.

The above process is repeated until the algorithm satisfies the termination condition; at this point, the majority of individuals have gathered around the brighter glowworms. Briefly, GSO involves five main phases: the luciferin-update phase, the neighborhood-selection phase, the movement-probability computation phase, the movement phase, and the decision radius update phase.

##### 2.2. Basic GSO Algorithm

###### 2.2.1. Luciferin-Update Phase

The luciferin update depends on the fitness value at the glowworm’s current position and on its previous luciferin value; the rule [18–23] is given by

$$\ell_i(t) = (1-\rho)\,\ell_i(t-1) + \gamma\, J\bigl(x_i(t)\bigr). \tag{1}$$

Here, $\ell_i(t)$ denotes the luciferin value of glowworm $i$ at time $t$, $\rho \in (0,1)$ is the luciferin decay constant, and $\gamma$ is the luciferin enhancement constant; $x_i(t)$ is the location of glowworm $i$ at time $t$, and $J(x_i(t))$ represents the value of the fitness function at glowworm $i$’s location at time $t$.
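As a minimal illustrative sketch (not the authors’ implementation), the luciferin-update rule can be written in Python/NumPy; the defaults $\rho = 0.4$ and $\gamma = 0.6$ are the values commonly suggested in the GSO literature:

```python
import numpy as np

def update_luciferin(luciferin, fitness, rho=0.4, gamma=0.6):
    """Luciferin update: l_i(t) = (1 - rho) * l_i(t-1) + gamma * J(x_i(t)).

    rho is the luciferin decay constant, gamma the enhancement constant.
    Works elementwise on a whole swarm at once.
    """
    return (1.0 - rho) * luciferin + gamma * fitness

# Two glowworms with previous luciferin 5.0 and fitness values 1.0 and 2.0
l_new = update_luciferin(np.array([5.0, 5.0]), np.array([1.0, 2.0]))
```

The decay term makes old luciferin fade geometrically, so a glowworm’s brightness tracks its recent fitness rather than its entire history.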

###### 2.2.2. Neighborhood-Selection Phase

The neighbors [18–23] of glowworm $i$ at time $t$ consist of the brighter glowworms within its decision radius and can be written as

$$N_i(t) = \bigl\{\, j : d_{ij}(t) < r_d^i(t);\ \ell_i(t) < \ell_j(t) \,\bigr\}. \tag{2}$$

Here, $d_{ij}(t)$ represents the Euclidean distance between glowworms $i$ and $j$ at time $t$, and $r_d^i(t)$ represents the decision radius of glowworm $i$ at time $t$.
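A small sketch of this neighborhood test, assuming positions are stored as rows of a NumPy array (an illustration, not the paper’s code):

```python
import numpy as np

def neighbors(i, positions, luciferin, radius):
    """Indices j with d_ij(t) < r_d^i(t) and l_j(t) > l_i(t)."""
    dists = np.linalg.norm(positions - positions[i], axis=1)
    mask = (dists < radius) & (luciferin > luciferin[i])
    return np.flatnonzero(mask)

pos = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 0.0]])
luc = np.array([1.0, 2.0, 3.0])
# Glowworm 0 sees glowworm 1 (close and brighter) but not glowworm 2
# (brighter, yet outside the decision radius of 1.0)
nbrs = neighbors(0, pos, luc, radius=1.0)
```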

###### 2.2.3. Movement-Probability Computation Phase

A glowworm uses a probabilistic rule to move towards neighbors with a higher luciferin level. The probability [18–23] of glowworm $i$ moving towards its neighbor $j$ is

$$p_{ij}(t) = \frac{\ell_j(t) - \ell_i(t)}{\displaystyle\sum_{k \in N_i(t)} \bigl(\ell_k(t) - \ell_i(t)\bigr)}. \tag{3}$$
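The probabilities are simply the luciferin surpluses of the neighbors, normalized to sum to one, so brighter neighbors are proportionally more likely to be chosen. A minimal sketch (illustrative only):

```python
import numpy as np

def move_probabilities(i, nbrs, luciferin):
    """p_ij = (l_j - l_i) / sum over k in N_i of (l_k - l_i)."""
    surplus = luciferin[nbrs] - luciferin[i]  # positive by construction
    return surplus / surplus.sum()

luc = np.array([1.0, 2.0, 4.0])
# Neighbors of glowworm 0 are glowworms 1 and 2; surpluses are 1 and 3
p = move_probabilities(0, np.array([1, 2]), luc)
```

In a full implementation, a neighbor would then be drawn with `np.random.choice(nbrs, p=p)`.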

###### 2.2.4. Movement Phase

Suppose glowworm $i$ selects glowworm $j \in N_i(t)$ with probability $p_{ij}(t)$; the discrete-time model of the movement of glowworm $i$ is given by (4) as in [18–23]:

$$x_i(t+1) = x_i(t) + s\,\frac{x_j(t) - x_i(t)}{\bigl\lVert x_j(t) - x_i(t) \bigr\rVert}. \tag{4}$$

Here, $\lVert \cdot \rVert$ represents the Euclidean norm operator, and $s\,(>0)$ is the step-size.
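The movement rule takes a step of fixed length $s$ along the unit vector pointing at the chosen neighbor; this fixed length is exactly the source of the speed/accuracy trade-off the paper addresses. A sketch, using the suggested step-size 0.03 as the default:

```python
import numpy as np

def step_towards(x_i, x_j, s=0.03):
    """x_i(t+1) = x_i(t) + s * (x_j - x_i) / ||x_j - x_i||."""
    direction = x_j - x_i
    return x_i + s * direction / np.linalg.norm(direction)

# Moving from the origin towards (1, 0) covers exactly s = 0.03 units
x_next = step_towards(np.array([0.0, 0.0]), np.array([1.0, 0.0]))
```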

###### 2.2.5. Decision Radius Update Phase

In each iteration, the decision radius of glowworm $i$ is updated as follows:

$$r_d^i(t+1) = \min\Bigl\{ r_s,\ \max\bigl\{ 0,\ r_d^i(t) + \beta\bigl(n_t - |N_i(t)|\bigr) \bigr\} \Bigr\}. \tag{5}$$

Here, $\beta$ is a constant, $r_s$ denotes the sensory radius of glowworm $i$, and $n_t$ is a parameter used to control the number of neighbors. Figure 1 shows the sensory radius and decision radius of glowworm $i$.
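The radius grows when a glowworm has fewer than $n_t$ neighbors and shrinks when it has more, clamped between 0 and the sensory radius $r_s$. A sketch with the defaults $\beta = 0.08$ and $n_t = 5$ often suggested in the GSO literature ($r_s = 3.0$ is an arbitrary illustrative value):

```python
def update_radius(r_d, n_neighbors, r_s=3.0, beta=0.08, n_t=5):
    """r_d(t+1) = min(r_s, max(0, r_d + beta * (n_t - |N_i(t)|)))."""
    return min(r_s, max(0.0, r_d + beta * (n_t - n_neighbors)))

# Too few neighbors (2 < 5): the radius expands from 1.0 to 1.24
r_sparse = update_radius(1.0, 2)
# Too many neighbors (10 > 5): the radius contracts from 1.0 to 0.6
r_crowded = update_radius(1.0, 10)
```

This adaptive domain is what lets the swarm split into subgroups around multiple local peaks instead of collapsing onto a single optimum.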