Advances in Artificial Intelligence
Volume 2010, Article ID 405073, 21 pages
Research Article

Computing with Biologically Inspired Neural Oscillators: Application to Colour Image Segmentation

Intelligent Systems Research Centre, School of Computing and Intelligent Systems, Faculty of Computing and Engineering, University of Ulster, Magee Campus, Northland Road, Northern Ireland, Derry BT48 7JL, UK

Received 2 December 2009; Accepted 26 February 2010

Academic Editor: Abbes Amira

Copyright © 2010 Ammar Belatreche et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


This paper investigates the computing capabilities and potential applications of neural oscillators, a biologically inspired neural model, to grey scale and colour image segmentation, an important task in image understanding and object recognition. A proposed neural system that exploits the synergy between neural oscillators and Kohonen self-organising maps (SOMs) is presented. It consists of a two-dimensional grid of neural oscillators which are locally connected through excitatory connections and globally connected to a common inhibitor. Each neuron is mapped to a pixel of the input image and existing objects, represented by homogeneous areas, are temporally segmented through synchronisation of the activity of neural oscillators that are mapped to pixels of the same object. Self-organising maps form the basis of a colour reduction system whose output is fed to a 2D grid of neural oscillators for temporal correlation-based object segmentation. Both chromatic and local spatial features are used. The system is simulated in Matlab and its demonstration on real world colour images shows promising results and the emergence of a new bioinspired approach for colour image segmentation. The paper concludes with a discussion of the performance of the proposed system and its comparison with traditional image segmentation approaches.

1. Introduction

Image segmentation is a crucial problem in machine vision, as there is no generic method that can be applied to all types of images and the choice of a particular method is rather problem specific. The segmentation process consists of partitioning an image into its homogeneous regions whose pixels share similar features. It plays an important role in computer-aided diagnosis schemes, image understanding, and object recognition systems. Although the human brain performs the task of image segmentation efficiently and with apparent ease, it is still a major challenge for computer vision systems and remains an open research field which encompasses a variety of applications such as pattern recognition, medical diagnosis, remote sensing, robotics, content-based information retrieval, and search engines, to name but a few. Over the last few decades of research in machine vision, many techniques have been produced, usually based on pixel classification, edge detection, or region growing [1–3]; yet modelling the segmentation output in a network of neurons remains a challenging task. There is a large body of research work on image segmentation in different application fields and the number of publications is still growing every year. It is beyond the scope of this work to provide a full survey of the various techniques developed to date in different application fields; the reader is referred to the following references for a comprehensive review of the plethora of algorithmic and machine learning techniques used for scene segmentation [4–7]. However, the main focus of this research is on using biologically plausible and biologically inspired techniques which mimic the way animal and human brains represent and process sensory information.

This paper investigates the computing capabilities and potential applications of biologically plausible neural oscillators to unsupervised colour image segmentation. A recent hypothesis in neuroscience is that segmentation of different objects in a visual scene is based on the temporal correlation of neural activity [8–10]. Accordingly, a population of neurons which fire in synchrony would signal attributes of the same object. Also, neurons with asynchronous activity would participate in the formation of different objects [11]. The paper presents a neural system based on the synergy between networks of neural oscillators and Kohonen self-organising maps. A network of locally excitatory and globally inhibitory neural oscillators is first applied to grey scale images and then extended to colour images. The HSV (Hue, Saturation, Value) colour space is used and a Kohonen self-organising map-based colour reduction method is developed. Both chromatic components and local spatial characteristics of image pixels are used. The proposed system is applied to segmentation of real world scenes and segmentation is achieved through synchronisation and desynchronisation of the activity of populations of neurons. The paper highlights the key benefits of the proposed approach and its main differences from existing approaches.

Section 2 presents a brief review of neural oscillators and discusses their biological relevance and the current related research. Section 3 introduces the mathematical models that underlie the dynamics of neural oscillators and presents simulations of both single and coupled neural oscillators and their phase plane analysis. Section 4 discusses the feature binding problem, the synchronisation of neural oscillators’ activity, and its relevance to computer vision. Section 5 presents the application of networks of neural oscillators to segmentation of grey scale images. Section 6 describes the proposed colour image segmentation system. Finally, Sections 7 and 8 discuss the findings of this research and conclude the paper.

2. Biologically Inspired Neural Oscillators

The model of neural oscillators used in this work represents an instance of a general model called "relaxation oscillators" which represents a large class of dynamic systems that emerge from many physical systems (e.g., engineering, mechanical, and biological systems) [12, 13]. The first observation of relaxation oscillators dates back to 1926 when van der Pol [14] studied the characteristics of a triode circuit which exhibits self-sustained oscillations. In his study, van der Pol realised that the observed oscillations were almost sinusoidal for a certain range of parameters and that abrupt changes were exhibited for a different range of parameters. The name of the model reflects the system time constants, also called relaxation times. A relaxation oscillator is characterised by a slow time scale needed to charge the capacitor and a fast time scale for its quick discharge. The period of oscillation is proportional to the relaxation time needed to charge the capacitor, hence the name of the model.

Relaxation oscillators can be observed in a variety of biological phenomena such as heartbeat and neural activity. Some scientists, such as the physiologist Hill [15], went even further by stating that all physiological periodic phenomena are governed by this type of oscillations, that is, relaxation oscillations. However, despite the existence of such models in a number of domains, the real motivation of exploring them in this work comes from their relevance to neurobiology; hence the focus of this study is oriented towards models that represent neural activity in the brain and the investigation of emerging behaviours from networks of such neural models (oscillators) and their potential application in solving real world engineering problems.

It has been recognised that relaxation oscillators are similar to their biological counterparts, and they were therefore used to model biological phenomena. They were used in 1928 by van der Pol and van der Mark to describe the heartbeat and to present an electrical model of the heart [16]. Using a system of differential equations based on four dynamic variables, Hodgkin and Huxley mathematically described the dynamics of the neuron membrane potential, ionic exchange, and electric transmission of impulses along nerves [17]. Their famous system, for which they were awarded the Nobel Prize in Physiology or Medicine in 1963, was later reduced to a system of two dynamic variables called the FitzHugh-Nagumo model [18, 19], which represents a relaxation oscillator. A relaxation oscillator description of the burst-generating mechanism in the cardiac ganglion cells of the lobster was later developed by Mayeri [20]. Oscillations in cortical neurons were also modelled by the Wilson-Cowan polynomial equations, which model interactions between excitatory and inhibitory neurons [21–24]. Their model consists of a two-variable system of differential equations with a number of parameters which offer a variety of dynamics when carefully adjusted [25]. Another model, presented by Morris and Lecar, consists of a two-variable relaxation oscillator which describes the voltage oscillations in the barnacle giant muscle fibre [26].

Since coherent oscillations were discovered in the visual cortex and other areas of the brain [27–29] and due to the neurobiological relevance of relaxation oscillators, they have attracted more research work where they have been studied as mathematical models representing the behaviour of neurons [30–36]. These research findings concluded that the observed oscillations are stimulus-dependent as they are triggered by appropriate sensory stimulus, that synchronisation of these oscillations emerges when the sensory stimuli appear to belong to a coherent object, and that no such synchrony is observed otherwise, that is, in the case where the stimuli are not connected by similarities or a common object. These findings are consistent with the principles of the theory of temporal correlation [37, 38] which explains how the perception of a coherent object is brought about by the brain through temporal correlation of the firing activity of various neurons which detect the features of the perceived object. This is also related to the binding problem which is explained later in Section 4.

Synchronisation of neural activity was observed in different areas of the brain [39, 40]. It has been observed in mammals, amphibians, and insects [29, 41, 42]. It has been measured in the monkey motor cortex [43] and across different hemispheres of the brain in the visual cortices of cats [44]. One of the main concerns of this research work is to understand how synchrony is brought about in locally interacting groups of neural oscillators and how the emerging-synchronised activity of populations of neural oscillators can be applied to image perception, particularly the important problem of image segmentation which is a crucial task in image understanding and object recognition. This motivation is driven by the fact that the exhibition of synchronised neural activity in a wide range of brain areas and by a diversity of organisms indicates that neural synchronisation could be fundamental to information processing.

3. Modelling Dynamics of Neural Oscillators

3.1. Dynamics of Single Neural Oscillators

The neural oscillator model used in this work is similar to the relaxation oscillator defined by Terman and Wang [36], which is based on the neural oscillator model developed by Morris and Lecar [26]. However, the chosen model is considerably simpler and has been shown to achieve synchrony more rapidly than other neural oscillator models [45]. According to this model, a neural oscillator consists of a feedback loop between two units, one excitatory (called x) and the other inhibitory (called y), as illustrated in Figure 1.

Figure 1: A single neural oscillator consisting of two coupled units, x (excitatory) and y (inhibitory). Unit x sends excitation (indicated by a triangle) to unit y; unit y sends inhibition (indicated by a small circle) to unit x. I and S represent an external input and a possible coupling with other neural oscillators, respectively.

The dynamics of the temporal activity of the coupled units x and y are governed by the following system of first-order differential equations:

dx/dt = 3x − x^3 + 2 − y + I + S + ρ,
dy/dt = ε(γ(1 + tanh(x/β)) − y),          (1)

where I represents an external input (e.g., pixel features) and S defines the total coupling from other neurons (in the case of a single oscillator, the coupling term does not exist, i.e., S = 0). The parameter ρ represents the amplitude of a Gaussian noise added to the total oscillator input in order to test its robustness to noise, and also to contribute to the desynchronisation process when a neural oscillator is coupled with other neurons. The parameter ε is a small positive number whose value affects the time scales (which characterise relaxation oscillators) of the activities of units x and y. The parameter β controls the steepness of the y-nullcline, which will be explained below when the dynamics of the neural oscillator are analysed in the phase plane. The parameter γ controls the amount of time spent in either phase (a larger value leads to a shorter time spent in the active phase). In order to better understand the dynamics of a neural oscillator defined by the system in (1), the phase plane analysis approach is used, where the oscillator's nullclines, limit cycles, and motion directions are examined. Phase plane analysis is a graphical method which helps understand the behaviour of a dynamic system over time. The nullclines and limit cycle of the oscillator defined in (1) are examined first. The corresponding figures were generated using a Matlab implementation of the dynamics described above; the 4th-order Runge-Kutta method was used to numerically solve the system of first-order differential equations in (1).
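The integration scheme described above can be sketched as follows. This is a minimal Python/NumPy translation of the kind of Matlab simulation the paper describes; the parameter values (ε = 0.02, γ = 6, β = 0.1), the starting point, and the step size are illustrative assumptions, since the exact values used in the paper are not reproduced here, and the noise term ρ is set to zero.

```python
import numpy as np

def oscillator_deriv(state, I, eps=0.02, gamma=6.0, beta=0.1):
    """Right-hand side of the single oscillator in (1), with S = 0 and no noise."""
    x, y = state
    dx = 3*x - x**3 + 2 - y + I                        # excitatory unit x
    dy = eps * (gamma * (1 + np.tanh(x / beta)) - y)   # inhibitory unit y
    return np.array([dx, dy])

def rk4_step(f, state, dt):
    """One 4th-order Runge-Kutta step."""
    k1 = f(state)
    k2 = f(state + 0.5*dt*k1)
    k3 = f(state + 0.5*dt*k2)
    k4 = f(state + dt*k3)
    return state + (dt/6.0)*(k1 + 2*k2 + 2*k3 + k4)

def simulate(I, steps=6000, dt=0.05, start=(-2.0, 0.0)):
    """Integrate the oscillator and return the (x, y) trajectory."""
    f = lambda s: oscillator_deriv(s, I)
    state = np.array(start)
    traj = np.empty((steps, 2))
    for t in range(steps):
        state = rk4_step(f, state, dt)
        traj[t] = state
    return traj
```

With a positive input the x-trace alternates between an active (high) and a silent (low) phase, while a negative input drives the trajectory to a fixed point, matching the stimulated/unstimulated behaviour described in the text.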

The x-nullcline is represented by the curve along which the derivative of the x unit is nil (i.e., dx/dt = 0). It is given by the function y = 3x − x^3 + 2 + I and results in a cubic curve that is characterised by a left branch, denoted by LB (which extends from the local minimum towards −∞), and a right branch, denoted by RB (which extends from the local maximum towards +∞), as illustrated in Figure 2. This cubic curve has two crucial values (related to the dynamics of a neural oscillator) which are defined by the function values (i.e., the y-values) at the two local extrema and referred to as the left knee (LK) and right knee (RK) (see Figure 2). On the other hand, the y-nullcline is determined by the curve along which the derivative of the y unit is nil (i.e., dy/dt = 0). It is given by the function y = γ(1 + tanh(x/β)), a hyperbolic tangent function which results in a sigmoidal curve whose steepness is controlled by the parameter β, such that a small value of β makes the sigmoidal curve close to a step function. Based on these nullclines, the dynamics of a neural oscillator depend on the external input. In the presence of a positive input (i.e., the neuron is stimulated), the neuron's dynamics converge to a periodic solution, called a limit cycle, as shown in Figure 3. However, the neuron's dynamics converge to a stable fixed point (i.e., no oscillation is produced) in the presence of a negative input (i.e., the neuron is unstimulated), as shown in Figures 4 and 5.
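The knees of the cubic can be located analytically: dy/dx = 3 − 3x^2 vanishes at x = ±1, so the left knee sits at (−1, I) and the right knee at (1, 4 + I). The short sketch below (the helper names are hypothetical, and the γ and β defaults are the same illustrative assumptions as before) verifies this numerically.

```python
import numpy as np

def x_nullcline(x, I):
    """y = 3x - x^3 + 2 + I : the curve where dx/dt = 0 for a single, noise-free oscillator."""
    return 3*x - x**3 + 2 + I

def y_nullcline(x, gamma=6.0, beta=0.1):
    """y = gamma * (1 + tanh(x / beta)) : the curve where dy/dt = 0."""
    return gamma * (1 + np.tanh(x / beta))

def knees(I):
    """Local extrema of the cubic: dy/dx = 3 - 3x^2 = 0  =>  x = -1 (LK), x = +1 (RK)."""
    lk = (-1.0, x_nullcline(-1.0, I))   # left knee  (local minimum of the cubic)
    rk = ( 1.0, x_nullcline( 1.0, I))   # right knee (local maximum of the cubic)
    return lk, rk
```

Note how the external input I simply shifts the cubic vertically, which is exactly why a stimulated oscillator (positive I) and an unstimulated one (negative I) intersect the sigmoid on different branches.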

Figure 2: x- and y-nullclines with a periodic solution of the system in (1) (limit cycle) represented in the phase plane. The oscillator trajectory jumps periodically between the left and the right branches of the limit cycle. LC, LB, and LK indicate the upper left corner, left branch, and left knee, respectively. RC, RB, and RK indicate the lower right corner, right branch, and right knee, respectively. The starting point for this simulation and the values of the other parameters are chosen from the range of values found in the literature.
Figure 3: Dynamics of a single neural oscillator and its convergence to a periodic limit cycle. ((a), (c), and (e)) Temporal activity of the excitatory unit x, which represents the output of the neural oscillator (a). Temporal activity of the inhibitory unit y (c). Trajectory of the neural oscillator for a first starting point (e). ((b), (d), and (f)) Temporal activity of the excitatory unit x (b). Temporal activity of the inhibitory unit y (d). Trajectory of the neural oscillator for a different starting point (f). It can be seen how the trajectory of the neural oscillator converges to the same stable periodic limit cycle. The same parameters as in Figure 2 are used for this simulation.
Figure 4: Activity of an unstimulated neural oscillator. (a) Temporal activity of units x and y. (b) Convergence of the oscillator trajectory to a fixed point (instead of a periodic limit cycle).
Figure 5: Another example of the same unstimulated neural oscillator of Figure 4 with a different starting point.
3.2. Dynamics of Coupled Neurons

In this section, the behaviour of two coupled neural oscillators is examined with particular focus on the emerging synchronisation and desynchronisation of their temporal activities, which depends, amongst other parameters, on the coupling strength linking them together. The same model equations as in the previous section are used for each neural oscillator with the addition, this time, of the coupling term S (which was discarded in the case of a single oscillator). The connection between two neural oscillators is illustrated in Figure 6, which clearly shows that it is the x unit of the neural oscillator that sends (receives) output (input) to (from) another oscillator, and that the role of unit y is only "internal" and consists of sending an inhibitory signal to the x unit of the same oscillator, as explained in the previous section. As a result, the y unit has no interaction with the units of other oscillators.

Figure 6: Example of two coupled neural oscillators. Besides the external input received and fed to each neural oscillator, both oscillators send/receive an output/input to/from each other.

The mathematical equations defining Oscillator 1 and Oscillator 2 are given, respectively, by (2) and (3) as follows:

dx1/dt = 3x1 − x1^3 + 2 − y1 + I1 + S1 + ρ,
dy1/dt = ε(γ(1 + tanh(x1/β)) − y1),          (2)

dx2/dt = 3x2 − x2^3 + 2 − y2 + I2 + S2 + ρ,
dy2/dt = ε(γ(1 + tanh(x2/β)) − y2),          (3)

where S1 represents the coupling term received by Oscillator 1 from Oscillator 2 and S2 represents the coupling term received by Oscillator 2 from Oscillator 1. The general form of this coupling term (which applies to the case where a neural oscillator is connected to several oscillators) is given by (4):

S_i = Σ_{k ∈ N(i)} w_ik H(x_k − θ_x),          (4)

where w_ik are positive synaptic weights (coupling strengths) connecting neurons i and k, N(i) represents the neighbourhood of neuron i, H is a Heaviside function, and θ_x is a threshold that is applied to the received input from neighbouring neurons.

From (4), it can be seen that the application of the Heaviside function to the received input allows an oscillator to receive its neighbour's input only if the latter is above a certain threshold θ_x. Other transfer functions, such as a sigmoidal function, can also be used instead of the Heaviside function. Only positive weights are assumed for the coupling term, which mimics excitatory synapses. For the applications which will be discussed in the following sections, neural oscillators will be connected locally in an excitatory way while a global common inhibitor is used to allow competition between groups of neurons.
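The coupling computation in (4) amounts to a thresholded, weighted sum over the neighbourhood. A literal Python sketch (the dictionary-based weight and neighbourhood containers are an illustrative choice, not the paper's data structures):

```python
def heaviside(v):
    """H(v) = 1 if v > 0, else 0."""
    return 1.0 if v > 0 else 0.0

def coupling(i, x, weights, neighbours, theta_x):
    """S_i = sum over k in N(i) of w_ik * H(x_k - theta_x):
    only neighbours whose x-activity exceeds theta_x contribute."""
    return sum(weights[(i, k)] * heaviside(x[k] - theta_x)
               for k in neighbours[i])
```

Because the Heaviside gate is all-or-nothing, an active neighbour injects its full weight w_ik into S_i while a silent neighbour contributes nothing, which is what lets activity propagate abruptly, in relaxation-oscillator fashion, rather than gradually.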

The x- and y-nullclines of both neural oscillators and the convergence of their trajectories to a limit cycle are represented in the phase plane as illustrated in Figure 7. The threshold θ_x is represented by a vertical line with a circle on the x-axis. Three nullclines are obtained for this simulation: the same y-nullcline is common to both oscillators, but two different x-nullclines are obtained because the coupling term only affects the equation of unit x while the unit y equations remain identical (this difference is caused by the use of different initial conditions and by the continuous interaction defined by the coupling term). The upper cubic is referred to as the "excited x-nullcline/cubic" because it represents the excited neural oscillator (i.e., the neuron whose coupling term is above the threshold). Likewise, the lower cubic is referred to as the "unexcited cubic" as it represents the neural oscillator without excitation (i.e., with a nil coupling term, S = 0).

Figure 7: Nullclines of two coupled oscillators and the convergence of their temporal activity to a synchronised limit cycle, which is represented by a thick solid line. The x-nullclines (cubic curves) of the system in (2) and (3) are represented by dashed lines and the y-nullcline (sigmoid curve) is represented by a dotted line. The excitation threshold θ_x is set between the left and the right branches of the limit cycle (vertical line with a circled end on the x-axis). ULB (upper left branch), URB (upper right branch), LLB (lower left branch), LRB (lower right branch), ULK (upper left knee), LLK (lower left knee), URK (upper right knee), and LRK (lower right knee).

Depending on the coupling strength, the emerging behaviour is either synchronisation or desynchronisation of the neural oscillators' temporal activities. Increasing the coupling strength leads to an elevation of the x-nullcline (the cubic); conversely, a decrease in the coupling strength shifts the cubic downwards. The example shown in Figure 8 illustrates the dynamics of strongly coupled oscillators, which leads to synchronised activities (and to the neural oscillators' trajectories converging to a limit cycle as shown in Figure 7).
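The effect of the coupling strength can be reproduced with a small simulation of two oscillators of the form (2)-(3), each receiving w·H(x_other − θ_x) from the other. All parameter values below (w, θ_x, ε, γ, β, the starting points, and the step size) are illustrative assumptions, and the noise term is again omitted.

```python
import numpy as np

def pair_deriv(state, I, w, theta_x=-0.5, eps=0.02, gamma=6.0, beta=0.1):
    """Two reciprocally coupled oscillators; coupling strength w, threshold theta_x."""
    x1, y1, x2, y2 = state
    s1 = w if x2 > theta_x else 0.0   # excitation received from the other oscillator
    s2 = w if x1 > theta_x else 0.0
    dx1 = 3*x1 - x1**3 + 2 - y1 + I + s1
    dy1 = eps * (gamma * (1 + np.tanh(x1 / beta)) - y1)
    dx2 = 3*x2 - x2**3 + 2 - y2 + I + s2
    dy2 = eps * (gamma * (1 + np.tanh(x2 / beta)) - y2)
    return np.array([dx1, dy1, dx2, dy2])

def simulate_pair(w, I=0.8, steps=12000, dt=0.05):
    """RK4 integration; returns the x-activities of both oscillators over time."""
    state = np.array([-2.0, 0.0, 2.0, 4.0])   # deliberately different phases
    f = lambda s: pair_deriv(s, I, w)
    xs = np.empty((steps, 2))
    for t in range(steps):
        k1 = f(state); k2 = f(state + 0.5*dt*k1)
        k3 = f(state + 0.5*dt*k2); k4 = f(state + dt*k3)
        state = state + (dt/6.0)*(k1 + 2*k2 + 2*k3 + k4)
        xs[t] = state[0], state[2]
    return xs
```

With a strong coupling (e.g., w = 2.0) the two x-traces lock together within a couple of cycles; with the coupling removed (w = 0) the initial phase difference persists indefinitely.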

Figure 8: Synchronisation in a pair of oscillators. When the coupling between two neural oscillators is strong enough, synchronisation between their temporal activities is obtained. The same parameters as in Figure 7 are used, with different starting points for the two oscillators.

The example shown in Figure 9 illustrates the dynamics of weakly coupled oscillators, which leads to desynchronised activities.

Figure 9: Desynchronisation in a pair of oscillators. A weak coupling between two neural oscillators is applied. Other parameters of the two neurons remain similar to those used in Figure 8.

The interactions between coupled oscillators, which result in either a synchronisation or a desynchronisation of their temporal activities, will form the basis of the following sections which describe the application of networks of neural oscillators to grey scale and colour image segmentation developed in this work. However, before discussing the developed system in Sections 5 and 6, it is important to introduce the problem of feature binding and highlight its relation to the temporal synchronisation in neural oscillators and its relevance to object segmentation and image analysis tasks. The following two sections will briefly address these issues.

4. Relation to Digital Image Segmentation

There are two main competing hypotheses to explain how the brain binds together the distributed features of a perceived object (i.e., the binding problem [46–52]). The first hypothesis suggests that information about a stimulus is conveyed by the average firing rate of a single neuron and that neurons at higher brain areas become more selective [53–56] so that a single neuron may represent each single object (the grandmother-cell representation). Consequently, multiple objects in a visual scene would be represented by the coactivation of multiple units at some level of the nervous system. This hypothesis faces, however, major theoretical and neurobiological criticism which is rather in favour of another hypothesis which postulates that it is the temporal correlation of the firing activities of distributed neurons that is used for binding the features of an object. This means that an object is coded by a population of many neurons whose firing activity is synchronised. According to this hypothesis, different objects can be encoded by different populations of neurons each of which has different firing times. Evidence of this approach has been established through several neurophysiological experiments (a review of these experiments can be found in [49, 57]).

Based on the second hypothesis and its theoretical and experimental supporting evidence, the work in this paper attempts to understand and exploit the emerging synchronisation in networks of locally connected neural oscillators and apply such self-organising behaviour to colour image segmentation. Although the focus of this work is on visual scenes (images) only, the idea of temporal correlation-based feature binding can be exploited in the processing of other sensory modalities such as speech signals. The image features (pixel attributes) are distributed over a network of neural oscillators where each feature is represented by a single neural oscillator. Therefore, the synchronisation of a group of oscillators encodes the object they represent. On the other hand, the desynchronisation of other neural oscillators indicates that they do not belong to the same object and might rather represent other objects of the scene. Figure 10 illustrates the idea of temporal scene segmentation where existing objects are segmented and emerge through time as the temporal activities of their corresponding neural oscillators synchronise with each other and desynchronise with those of other neural oscillators which represent other objects.

Figure 10: A colour image and its temporal segmentation using a network of neural oscillators. Neural oscillators which belong to the same object get their temporal activities synchronised with each other and desynchronised with those of neurons from other objects. This synchronisation and desynchronisation signal the detection of an object.

We referred to the features of an image as "pixel attributes" instead of pixel values because they depend on the type of the processed image. That is, in a black and white (binary) or grey scale image the pixel values could be directly used as attributes/features, although the use of other features based on the values of neighbouring pixels is also possible and sometimes might be unavoidable (such as in texture images). In a colour image, on the other hand, the situation is different and the colour of a pixel cannot be directly represented by a single oscillator as it is encoded by a triplet of integer values. Therefore the feature value should be determined by some mechanism before being eventually fed to a neural oscillator. In this work we use Kohonen self-organising maps (SOMs) as a feature extraction phase where the pixel values (triplets) of the original colour image are mapped to a new reduced space of colours where each pixel is assigned one value instead of a triplet. The creation of a new space and the mapping of original colours are based on the self-organising characteristic of Kohonen maps. The implementation details of this process are presented in Section 6. The following section discusses the application of neural oscillators to image segmentation, starting with the segmentation of grey scale images.
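The colour-reduction idea can be illustrated with a minimal one-dimensional Kohonen map: each colour triplet is mapped to the index of its best-matching prototype, so a single scalar per pixel can be fed to an oscillator. This is only a sketch of the principle; the prototype count, learning-rate schedule, and neighbourhood function below are illustrative assumptions, not the settings of the actual system described in Section 6.

```python
import numpy as np

def som_colour_reduce(pixels, n_codes=16, iters=2000, seed=0):
    """Train a 1-D Kohonen chain of n_codes colour prototypes on an (N, 3) array
    of colour triplets, then label each pixel with its best-matching unit."""
    rng = np.random.default_rng(seed)
    codes = rng.random((n_codes, 3))                      # prototype colours
    for t in range(iters):
        p = pixels[rng.integers(len(pixels))]             # random training sample
        bmu = np.argmin(np.sum((codes - p)**2, axis=1))   # best-matching unit
        lr = 0.5 * (1 - t / iters)                        # decaying learning rate
        radius = max(1.0, (n_codes / 2) * (1 - t / iters))  # shrinking neighbourhood
        d = np.abs(np.arange(n_codes) - bmu)              # distance along the chain
        h = np.exp(-(d**2) / (2 * radius**2))             # neighbourhood function
        codes += lr * h[:, None] * (p - codes)            # pull prototypes towards p
    labels = np.argmin(((pixels[:, None, :] - codes[None])**2).sum(-1), axis=1)
    return labels, codes
```

After training, `labels` is the reduced one-value-per-pixel representation and `codes[labels]` is the quantised image; because the map is topology-preserving, similar colours end up with nearby indices along the chain.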

5. Application of Neural Oscillators to Grey Scale Image Segmentation

It is hypothesised in neuroscience that segmentation of different objects in a visual scene is based on the temporal correlation of neural activity. Accordingly, a population of neurons which fire in synchrony, or in a highly correlated way, would signal attributes of the same object. Also, neurons with asynchronous activity would participate in the formation of different objects. Based on these principles and the dynamics of neural oscillators introduced in previous sections, this work uses a network (grid) of neural oscillators that are locally connected through excitatory synapses and globally inhibited by a common inhibitor as defined in [36]. Such a network is first applied to grey scale images, where the intensities of pixels are directly assigned to neural oscillators, and then extended to colour images.

5.1. Network Architecture

The network consists of a grid (a 2D array) of neural oscillators which are locally connected through excitatory synapses represented by positive weights and a global neuron which receives excitation from each neuron and sends inhibition to all neurons in the grid (i.e., it is negatively connected to all neurons). Each neuron input is assigned a feature of the image pixel as illustrated in Figure 11. The local connections between each two neurons are reciprocal and each neuron is connected with its nearest four neighbouring neurons as shown in Figure 12. While a 4-connectedness is shown in this figure, it is still possible to use alternative forms such as 8-connectedness where each neuron is reciprocally connected to its 8 nearest neighbours. The motivation behind local excitatory connections is that, besides their consistency with lateral synaptic connections in several areas of the brain, it is hoped that such connections will contribute to object segmentation by inducing synchronised activities amongst neurons which belong to the same region.

Figure 11: A grid (network) of competitive self-organising locally excitatory globally inhibitory neural oscillators. Each neural oscillator is connected to its four neighbours through excitatory synapses and receives inhibition from a global inhibitor. The latter helps create competition between neurons that belong to a single object (synchronised) and other neurons which belong to different objects (desynchronised). Only the coupling between two neighbouring oscillators has been highlighted; the connections between each neuron and its other neighbours are not shown for simplicity.
Figure 12: Connections between a neural oscillator, its four neighbours, and the global inhibitor z. Every neural oscillator in the network is connected to the global inhibitor (these connections are not shown here for simplicity). Each neuron is assigned to a pixel of the image whose features form the external input of the neural oscillator. Only the x unit of each oscillator is shown.

The building block of the network is based on the model described in (1) and (2) with a slight modification of the external input term and the integration of the global inhibition in the coupling term S_i. The resulting equations are given by

dx_i/dt = 3x_i − x_i^3 + 2 − y_i + I_i H(p_i − θ) + S_i + ρ,
dy_i/dt = ε(γ(1 + tanh(x_i/β)) − y_i),          (5)

where H is the Heaviside function and p_i is the lateral potential of oscillator i, explained below. The coupling from neighbouring neurons is represented by S_i and defined by

S_i = Σ_{k ∈ N(i)} w_ik H(x_k − θ_x) − W_z H(z − θ_z).          (6)

Similarly to the coupling term defined in Section 3 in relation to coupled oscillators, excitatory connections are used and they contribute to the propagation of a neuron's activity to its neighbours. Also, an oscillator can receive its neighbour's input only if the latter is above a certain threshold θ_x (set between LK and RC). The connection weights w_ik of a neural oscillator are calculated from the value of its input (pixel feature) and those of its neighbours, such that oscillators with similar pixel values are assigned strong weights while oscillators belonging to different regions are given weaker connections. A new term is added in this case to account for the global inhibitor, whose inhibition weight is denoted by W_z. The activity of the global inhibitor is represented by the dynamic variable z defined as follows:

dz/dt = φ(σ_∞ − z),          (7)

where σ_∞ = 1 if at least one oscillator is in the active phase and σ_∞ = 0 otherwise. When an oscillator is triggered, σ_∞ = 1, and φ is a parameter which determines the rate at which the inhibitor reacts to the stimulation from an active oscillator. According to (6), the inhibitor's activity is taken into account only when z is above a certain threshold θ_z (due to the Heaviside function), in which case the term W_z is subtracted from the total excitation from neighbouring neurons. The global inhibitor, which receives excitatory input from all the neurons and sends back inhibitory output to all neurons, contributes to the desynchronisation of oscillators which do not belong to the object being segmented. It cannot affect synchronised oscillators, as the sum of the inputs from synchronised neighbours is greater than W_z.
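In discrete time, the inhibitor update and the coupling-with-inhibition term reduce to a few lines. The Python sketch below uses a simple forward-Euler step; the parameter values (φ, W_z, θ_z, θ_x) and the container types are illustrative assumptions.

```python
import numpy as np

def inhibitor_step(z, x, theta_x=-0.5, phi=3.0, dt=0.05):
    """Forward-Euler step of dz/dt = phi*(sigma_inf - z), where sigma_inf = 1
    if at least one oscillator in the grid is in the active phase, else 0."""
    sigma_inf = 1.0 if np.any(np.asarray(x) > theta_x) else 0.0
    return z + dt * phi * (sigma_inf - z)

def coupling_with_inhibition(i, x, z, weights, neighbours,
                             theta_x=-0.5, W_z=1.5, theta_z=0.1):
    """S_i: excitation from active neighbours minus the global inhibition term
    W_z * H(z - theta_z)."""
    s = sum(weights[(i, k)] * (1.0 if x[k] > theta_x else 0.0)
            for k in neighbours[i])
    return s - (W_z if z > theta_z else 0.0)
```

The competition mechanism is visible directly in the second function: a neuron surrounded by synchronised, similar neighbours accumulates enough positive drive to overcome the subtracted W_z, while a neuron in a different region does not.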

In addition, the function p_i, called the lateral potential, is introduced in (5) in order to distinguish small sparse noisy fragments from coherent regions, so that oscillations of noisy fragments are suppressed. A major coherent region contains at least one oscillator (called a leader) which receives large enough lateral excitation from its neighbours, whereas no such leader oscillator exists in a noisy fragmented region. The function p_i therefore determines whether or not an oscillator is a leader and plays a role in "ignoring" the noise in the input image. It is given by (8):

dp_i/dt = λ(1 − p_i) H(Σ_{k ∈ N(i)} w_ik H(x_k − θ_x) − θ_p) − μ p_i,          (8)

where λ is a constant. If the weighted sum of active neighbours (each of which must exceed θ_x to make the inner Heaviside evaluate to 1) is above a certain threshold θ_p, then the outer Heaviside is activated (becomes 1) and p_i approaches one; otherwise p_i relaxes to zero on a time scale determined by a small value μ. Thus, only oscillators which are surrounded by a large number of active oscillators can maintain a high lateral potential p_i.
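The leader-selection behaviour of the lateral potential can be sketched as a discrete update; the constants λ, μ, θ_x, and θ_p below are illustrative assumptions rather than the paper's values.

```python
def lateral_potential_step(p, i, x, weights, neighbours, dt=0.05,
                           lam=0.2, mu=0.05, theta_x=-0.5, theta_p=1.2):
    """Forward-Euler step of (8):
    dp_i/dt = lam*(1 - p_i)*H(sum_k w_ik*H(x_k - theta_x) - theta_p) - mu*p_i."""
    drive = sum(weights[(i, k)] * (1.0 if x[k] > theta_x else 0.0)
                for k in neighbours[i])
    gate = 1.0 if drive > theta_p else 0.0   # outer Heaviside
    return p + dt * (lam * (1.0 - p) * gate - mu * p)
```

When the gate stays open, p_i settles at λ/(λ+μ), close to one for λ much larger than μ; with the gate shut it decays exponentially at rate μ, which is exactly how isolated noisy pixels lose the ability to oscillate.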

5.2. Numerical Simulations

In order to illustrate the separation of the objects within a grey scale image through the synchronisation of neural oscillators belonging to the same object and the desynchronisation of those belonging to different objects, tests on a small synthetic image and a larger real scene are presented below. While discrete integration is used for the first test, an algorithmic approach which captures the essence of the neural oscillator dynamics is used for the second, owing to the computationally intensive nature of discretely integrating a large number of oscillators.

5.2.1. Step-by-Step Equation Integration for Small Scale Images

A network of 10-by-10 neural oscillators, based on the models and grid architecture described above, was numerically simulated using Matlab. The differential equations were first integrated using the 4th-order Runge-Kutta method to ensure better accuracy; the forward Euler method was also used and produced the same results. A 10-by-10 synthetic grey scale image, shown in Figure 13, is fed to the network of 100 neural oscillators. It consists of four separate rectangles, laid upon a black background, each of which has a different size and a different grey level. Each pixel value is fed as an external input to the corresponding neural oscillator in the grid. The objects are labelled Obj 1, Obj 2, Obj 3, and Obj 4.
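The exact equations integrated are not shown in this extract. As an illustration, the following Python snippet integrates a single relaxation oscillator of the Terman–Wang form assumed throughout (a cubic x-nullcline and a sigmoidal y-nullcline) with the forward Euler method; the parameter values are typical textbook choices, not the ones used in the paper. A stimulated oscillator (I > 0) produces sustained relaxation oscillations between the left and right branches.

```python
import numpy as np

def simulate(I=0.8, eps=0.02, gamma=6.0, beta=0.1, dt=0.01, steps=30000):
    """Forward-Euler integration of one relaxation oscillator:
        dx/dt = 3x - x^3 + 2 - y + I                       (fast unit)
        dy/dt = eps * (gamma * (1 + tanh(x / beta)) - y)   (slow unit)
    A positive external input I yields a stable limit cycle."""
    x, y = -2.0, 0.0
    xs = np.empty(steps)
    for t in range(steps):
        dx = 3.0 * x - x ** 3 + 2.0 - y + I
        dy = eps * (gamma * (1.0 + np.tanh(x / beta)) - y)
        x, y = x + dt * dx, y + dt * dy
        xs[t] = x
    return xs

xs = simulate()
```

The trace alternates between the silent (left) and active (right) branches; with a sufficiently small step, forward Euler and 4th-order Runge-Kutta trajectories agree closely, consistent with the observation above.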

Figure 13: A synthetic grey scale image; object number 3 is manually highlighted with a dashed line to contrast it with the black background.

Figure 14 shows the temporal activity of all neural oscillators in the grid. Only the first 1800 integration steps are shown; the oscillations carry on indefinitely unless the input image is withdrawn. Neurons are arranged in a one-dimensional array which represents a linear (line by line) mapping of the 2D grid. The figure clearly shows the synchronisation and desynchronisation of the temporal activities of the neural oscillators in the network: while some neurons are active and their temporal activity is synchronised, other neurons are inactive and their temporal activity is therefore desynchronised with respect to the active, synchronised ones. At a given time, the active and synchronised neurons belong to the same object, while the inactive ones belong to the other objects. The 3D graph provides a better illustration of the network activity.

Figure 14: Temporal activity of the neural oscillators in the 10 by 10 grid. The vertical axis represents the value of the excitatory unit of each neural oscillator (an oscillator comprises two units, as described by the underlying differential equations). The horizontal axis represents the integration steps (only 1800 steps are shown here), and the depth axis represents the neuron index (i.e., from neuron 1, which is associated with the top left corner pixel, to neuron 100, which is associated with the bottom right corner pixel).

In order to further emphasise the segmentation of different objects based on the synchronisation and desynchronisation of different neurons in the two-dimensional network, the neurons which belong to the same object are grouped together and their temporal activity is displayed on the same axes, as shown in Figure 15. The indexes of the neurons which belong to the same object are first determined, and then the corresponding waveforms of those neurons are plotted on the same axes one after the other. Each neuron's waveform is separated from the previous one by an offset to avoid overlap between two consecutive waveforms. Ordering the waveforms according to the objects they belong to highlights which set of neurons is active at a given time, and hence which object is detected at that time. For example, at the first marked time (see Figure 15) the neurons that belong to object 4 are active while all other neurons are inactive. At the second marked time the neurons that belong to object 1 are active and the remaining neurons (which belong to other objects and the background) are inactive. Similarly, at the third and fourth marked times the neurons belonging to object 2 and object 3 are active, respectively.

Figure 15: Temporal activity of neural oscillators ordered according to the objects they belong to. The horizontal axis represents integration steps and the vertical axis represents neuron indexes. For example, the first six neurons belong to object 1, the next four neurons belong to object 2, the following six neurons belong to object 3, and the following nine neurons represent object 4. Finally, the remaining neurons represent the neurons mapped to the background pixels (there are 75 background neurons in total; for simplicity only 10 of them are shown).

Using a 2D visualisation of the temporal activity of the neural oscillators of the network at selected points in time, Figure 16 further illustrates the segmented objects at the times shown in Figure 15. The first time corresponds to object 4 (see Figure 13 for the initial object labelling): the neural oscillators associated with the pixels of this object are active and synchronised. The excitatory unit values of all neural oscillators in the grid are mapped to a grey scale image. The obtained 2D visualisation clearly shows the emergence of object 4 at that time, while the neurons representing the remaining objects are inactive (small values) and the corresponding objects therefore fade away. Likewise, at the second time it is the neurons belonging to object 1 that are active and synchronised while the other neurons are inactive. Similarly, at the third and fourth times object 2 and object 3 are temporally segmented, respectively.

Figure 16: The value of the excitatory unit of each oscillator is shown in a 2D array where each element corresponds to the neural oscillator associated with the pixel of the image located at the same coordinates. The values of the neural oscillators are sampled at the four points in time shown in Figure 15. The synchronisation of active neurons indicates which object is being segmented.
5.2.2. Algorithmic Approach for Large Scale Images

For larger images, the activity of the neural oscillators is calculated using an algorithmic approach, as solving hundreds or thousands of systems of ordinary differential equations (ODEs), one per image pixel, is prohibitively costly in terms of computing time and may not be feasible unless their discrete integration is implemented on dedicated hardware platforms such as reconfigurable FPGAs (Field Programmable Gate Arrays) or GPUs (Graphics Processing Units). The algorithmic approach is based on the neural oscillator dynamics on the cubic branches and at the jumping points (LK, RK) between the left and right branches (LB, RB). It reproduces the essential behaviours of coupled neural oscillators, namely the synchronisation and desynchronisation of their temporal activity as well as the presence of two time scales (i.e., the slow and fast motions on the left and right branches, respectively). The use of an algorithmic approach considerably reduces the computation time. Only the excitatory unit value of an oscillator is used, and the algorithm is described in what follows.
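Algorithm 1 itself is not reproduced in this extract. Its essence, however — a leader oscillator jumps to the active phase and recruits exactly those neighbours whose coupling exceeds the global inhibition, after which the inhibitor resets and the next leader fires — amounts to iterated region growing, which the following Python sketch captures; the similarity threshold and four-neighbour connectivity are assumptions.

```python
from collections import deque

def segment(image, tol=10):
    """Label connected regions of similar grey levels, mimicking the
    order in which groups of synchronised oscillators pop up one by one.
    image: 2D list of grey levels; tol: maximum difference for a
    neighbour to be recruited (plays the role of coupling vs. inhibition)."""
    h, w = len(image), len(image[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for i in range(h):
        for j in range(w):
            if labels[i][j]:
                continue
            current += 1                       # a new leader jumps up
            labels[i][j] = current
            queue = deque([(i, j)])
            while queue:                       # recruit similar neighbours
                a, b = queue.popleft()
                for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    na, nb = a + da, b + db
                    if (0 <= na < h and 0 <= nb < w and not labels[na][nb]
                            and abs(image[na][nb] - image[a][b]) <= tol):
                        labels[na][nb] = current
                        queue.append((na, nb))
    return labels

img = [[0, 0, 200],
       [0, 0, 200],
       [0, 0, 200]]
labels = segment(img)
```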

Algorithm 1 is demonstrated on a 110-by-90-pixel image which contains five chess pieces, as shown in Figure 17. A network of 110 by 90 oscillators is built, and each neuron is stimulated with its corresponding pixel value. Each neural oscillator is connected to its four nearest neighbours, and the connection weights are formed from the image pixel values. Setting the coupling weights in this way results in oscillators with similar pixel values being assigned large weights, while oscillators belonging to different regions are assigned weaker connection weights. The addition of the term 1 in the denominator avoids a division by zero when both oscillators are assigned the same pixel value, in which case the coupling term is assigned the maximum weight value. This ensures that the leaders propagate their activity to the oscillators in the same group representing the same object. As the network activity evolves, the different objects (chess pieces) emerge one by one, as shown in Figure 17. The emergence of each object is achieved through the synchronisation of the activities of the neurons belonging to that object (images (a) to (e)). Image (f) represents the image background.
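The coupling-weight expression itself is elided above, but the description (maximum weight for identical pixel values, the constant 1 in the denominator guarding against division by zero) is consistent with the form W_ij = W_max / (1 + |I_i − I_j|). The following snippet, with an assumed W_max, illustrates it:

```python
def coupling_weight(p_i, p_j, w_max=2.0):
    """Weight between two oscillators driven by pixel values p_i and p_j.
    Identical pixels get the maximum weight w_max; the '+ 1' in the
    denominator prevents division by zero in that case."""
    return w_max / (1.0 + abs(p_i - p_j))
```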

Algorithm 1

Figure 17: Segmented objects from image containing five chess pieces. Left: original image. (a) to (e) Segmented objects. (f) The image background.

6. Application of Neural Oscillators to Colour Image Segmentation

A similar network of oscillators is applied to colour images, where the HSV (hue, saturation, value) colour space is used and a self-organising map-based colour reduction method is employed. The colour reduction method extends the applicability of networks of oscillators to colour image segmentation. As each neuron in the grid of oscillators receives a one-dimensional input value, a mapping method is needed between the RGB triplets of an input colour image and corresponding one-dimensional feature values in which colour similarity is taken into account. To the best of our knowledge, this is the first time such neural models are applied to colour image segmentation. The HSV colour space, a nonlinear transform of the RGB (red, green, blue) space [3], offers the advantage of specifying colours in an extremely intuitive manner, as the luminance component is separated from the chrominance (i.e., colour) information [2, 58]. The latter feature makes this colour space more suitable for digital image processing applications, as it is easy to select a desired colour (hue component) and then modify it slightly by adjusting its saturation and intensity [58]. In addition, the components of the HSV colour space are normalised, which makes it suitable for the colour reduction method and the colour image segmentation system developed in this work.

The colour reduction method exploits both the chromatic features of pixels and their spatial characteristics, that is, how the colour of each pixel relates to the colours of its neighbouring pixels. It is this newly obtained reduced space of colours that is eventually used by a network of locally connected neural oscillators to segment the existing objects.

6.1. Colour Reduction Using Kohonen Self-Organising Maps

Given an RGB input image with a large number of colours (up to 16 million colours at 24-bit resolution), the final reduced set of colours is automatically selected using a Kohonen self-organising map (SOM), an unsupervised neural network devised by Kohonen [59]. It is based on competitive learning and is a topology preserving map [59, 60], a feature that is highly desirable for the visualisation and reduction of high dimensional data.

In this work, the chromatic features (i.e., the HSV components) of each pixel are combined with additional spatial features extracted from neighbouring pixels. The mean and standard deviation are used here; however, there is no restriction on the type of spatial features that could be used. The goal is to map the HSV colour space into a one-dimensional space of colour indexes that can be used by the network of oscillators. Once the set of feature vectors of a given image is constructed, a SOM is built and trained according to Kohonen's competitive learning rule [59]. The main characteristic of this mapping is that it takes into account the chromatic and spatial similarities between neighbouring pixels. Eventually, the input image, with its larger set of colours, is transformed into another image with a limited number of colours while the spatial characteristics are maintained.

6.2. RGB to HSV Colour Space Transform

The RGB colour space is mapped into the HSV colour space, in which the input image is then represented. The RGB to HSV transform is performed using the Matlab rgb2hsv function. A new representation is obtained where each image pixel is represented by a triplet (H, S, V) which determines its chromatic features. The rest of the system assumes that the input image is in HSV format; this is not a restriction, however, as it is always possible to map an image from one colour space to another.
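Outside Matlab, an equivalent transform is available in, for example, Python's standard colorsys module (shown here purely as an illustration); both return H, S, and V normalised to [0, 1].

```python
import colorsys

# Pure red maps to hue 0, full saturation, full value.
h, s, v = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)
```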

6.3. Chromatic and Spatial Feature Extraction

The final feature vector which describes each image pixel consists of its chromatic features combined with its local spatial features calculated for each chromatic plane. The local spatial characteristics chosen in this work are represented by the mean and the standard deviation of the pixel values within a window centred at the pixel being considered. They are calculated as follows.

6.3.1. Central Location Measurements (Mean Values)

This spatial feature is a statistical measurement of the central location of the sample points selected from a given window, namely the mean value:

mean_ch(x, y) = (1/N) Σ_{(i,j) ∈ W(x,y)} P_ch(i, j),

where ch denotes the chromatic plane being considered, (x, y) are the coordinates of the pixel for which the spatial feature is being calculated, W(x, y) is the neighbourhood of the pixel (i.e., a window centred at it), and N is the total number of pixels within that window in the input image (see illustration in Figure 18).

Figure 18: Extraction of local spatial features. The mean and standard deviation are calculated for each pixel in the three chromatic planes. Hence, each pixel in the image results in six different spatial features. These spatial features are calculated over a 3 by 3 window centred at the target pixel.
6.3.2. Dispersion Measurements (Standard Deviation Values)

Likewise, the standard deviation is computed for each chromatic component as

std_ch(x, y) = sqrt( (1/N) Σ_{(i,j) ∈ W(x,y)} ( P_ch(i, j) − mean_ch(x, y) )² ).

In this work, a window of size 3 by 3 is used and the image is padded with zeros before the spatial characteristics are computed.
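Under the definitions above (a 3-by-3 window, zero padding at the borders), the two spatial features can be computed per chromatic plane as in the following Python sketch; function and variable names are illustrative.

```python
import numpy as np

def spatial_features(plane, win=3):
    """Per-pixel mean and standard deviation over a win-by-win window
    centred at each pixel, with the plane zero-padded at the borders."""
    arr = np.asarray(plane, dtype=float)
    r = win // 2
    padded = np.pad(arr, r)            # zero padding, as in the text
    h, w = arr.shape
    mean = np.empty((h, w))
    std = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            window = padded[i:i + win, j:j + win]
            mean[i, j] = window.mean()
            std[i, j] = window.std()
    return mean, std

plane = np.arange(9, dtype=float).reshape(3, 3)   # toy chromatic plane
mean, std = spatial_features(plane)
```

Applying this to each of the H, S, and V planes yields the six spatial features per pixel described in Figure 18.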

As a result, each pixel is described by nine values representing the chromatic and spatial features as illustrated in Figure 19. It is the size of the feature vector that will dictate the number of inputs of the Kohonen self-organising network. A set of these feature vectors will be created and used to build and train a Kohonen self-organising map as explained in Section 6.4.

Figure 19: Feature vector for each pixel, using both spatial and chromatic characteristics.
6.4. Building and Training of Kohonen Self-Organising Network

A Kohonen SOM is designed whose inputs represent the chromatic and local spatial features of a pixel, as calculated in the previous section. While the number of inputs is dictated by the dimensionality of the feature space, the size of the output layer represents the number of representative colours in the reduced colour space (see Figure 20). A subset of feature vectors is randomly selected to form the training set; the SOM network is then trained and its final weights are adjusted using the Kohonen competitive learning rule. It is important to note that for each new image a SOM is created anew and its weights are trained and adapted to that image. Hence, the mapping between the original colour space and the reduced one is updated and adapted to the content of the image to be segmented.
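As a hedged illustration of the training stage, the following minimal one-dimensional Kohonen SOM learns representative colours from a set of feature vectors; the random seed, learning-rate and neighbourhood schedules, and the toy two-cluster data are all assumptions, not the paper's settings.

```python
import numpy as np

def train_som(data, n_nodes=4, epochs=200, seed=0):
    """Train a 1-D Kohonen SOM by competitive learning: for each input,
    the best-matching unit (BMU) and its map neighbours move towards the
    input, with learning rate and neighbourhood width decaying over epochs."""
    rng = np.random.default_rng(seed)
    weights = rng.random((n_nodes, data.shape[1]))
    idx = np.arange(n_nodes)
    for epoch in range(epochs):
        frac = epoch / (epochs - 1)
        lr = 0.5 * (0.01 / 0.5) ** frac        # decays 0.5 -> 0.01
        sigma = 2.0 * (0.1 / 2.0) ** frac      # decays 2.0 -> 0.1
        for x in data:
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
            h = np.exp(-((idx - bmu) ** 2) / (2.0 * sigma ** 2))
            weights += lr * h[:, None] * (x - weights)
    return weights

rng = np.random.default_rng(1)
# Two tight colour clusters in a toy 2-D feature space.
data = np.vstack([rng.normal(0.0, 0.02, (20, 2)),
                  rng.normal(1.0, 0.02, (20, 2))])
weights = train_som(data)
```

After training, at least one output node sits near each cluster centre, so the final weights act as the reduced palette of representative colours.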

Figure 20: Kohonen’s SOM used for colour reduction and image transformation. The chromatic components and spatial features are used as inputs.
6.5. Mapping

After training, the new reduced set of colours is represented by the final weights of the output neurons, which are then used to map an input image into the newly created colour space. The new image contains only the representative colours learned by the Kohonen self-organising map: each pixel is assigned the most similar colour, as defined by the trained SOM weights.
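The mapping step reduces to a nearest-neighbour search against the trained weights. A minimal sketch, assuming weights is the trained SOM weight matrix and each pixel is described by a feature vector:

```python
import numpy as np

def map_to_palette(features, weights):
    """Assign each feature vector the index of its closest SOM weight
    (i.e., its most similar representative colour)."""
    # Squared Euclidean distance from every pixel to every weight.
    d = ((features[:, None, :] - weights[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

palette = np.array([[0.0, 0.0], [1.0, 1.0]])          # two learned colours
pixels = np.array([[0.1, 0.2], [0.9, 0.8], [0.0, 0.1]])
indices = map_to_palette(pixels, palette)
```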

6.6. A Neurosystem for Colour Image Segmentation

The combination of the dynamics of networks of neural oscillators with the self-organising map-based colour reduction approach described in the previous section results in a neurosystem for colour image segmentation. The original image is first converted into the HSV space; the representative colours of the image are then automatically selected using the unsupervised Kohonen self-organising map, which is trained anew for each input image. The resulting image, with a reduced number of colours, is then segmented using the temporal correlation of neural oscillators. The proposed framework, based on the combination of networks of neural oscillators and the SOM-based colour reduction method, is illustrated in Figure 21.

Figure 21: Neuro-inspired framework for colour image segmentation. The output from a self-organising map-based colour reduction stage is fed to a network of locally excitatory, globally inhibitory neural oscillators, and existing objects are segmented through the temporal synchronisation of neurons belonging to the same coherent areas.
6.7. MATLAB Simulation Results

The proposed approach has been implemented in Matlab and demonstrated on 24-bit bitmap colour images (16 million colours in the original space) representing real world scenes. The first image represents a car and contains 130 by 80 pixels (see Figure 22, bottom right corner), the second image represents a hand and has 100 by 129 pixels (see Figure 23), and the third image represents a cup and has 128 by 128 pixels (see Figure 24). The input images are originally represented in the RGB space. A transformation to the HSV space is carried out using the Matlab rgb2hsv function, which comes with the Image Processing Toolbox. A Kohonen self-organising network is then created, trained, and used for colour reduction, where its final weights are used to transform an input image into a new image whose pixel values are mapped to the newly obtained reduced space. Finally, a network of neural oscillators (with a size equal to the input image size) is created, where the pixel values of the transformed image are used as external inputs to the neural oscillators. The coupling weights are also determined from the newly obtained pixel values. The segmentation results are illustrated in Figures 22–24, where the main segmented objects are shown separately along with the original image.

Figure 22: Segmentation of a sample colour image (130 by 80 pixels) into different regions.
Figure 23: Segmentation of another sample colour image (100 by 129 pixels) into different regions. The fingers are separated owing to the surrounding rings (the big blue ring and the small golden one). This is in accordance with the definition of an object given in earlier sections, that is, a coherent region of similar pixels.
Figure 24: Segmentation of a third sample colour image (128 by 128 pixels) into different regions.

7. Discussion

A subjective evaluation is carried out to assess the segmentation quality of the developed system. It is based on human visual assessment, the most common practice for evaluating the effectiveness of a segmentation system. Figures 22–24 show that the segmented images are of an acceptable quality for use in computer-aided diagnosis, image understanding, or object recognition schemes. Although the evaluation remains subjective, this satisfactory assessment is based on the clear separation of the existing objects within the input image. It can be seen from Figure 17 how all existing objects (chess pieces and the background) are accurately segmented. Likewise, the segmentation of the synthetic image using the step-by-step integration of neural oscillators resulted in a perfect separation of the existing rectangles. The problem is trivial in this case, yet it helps demonstrate and validate the underlying principle of object segmentation using neural oscillators and to assess its quality objectively; the assessment is objective in this example since the synthetic image serves as the ground truth segmentation. The segmentation of the colour images in Figures 22–24 clearly shows that the segmentation of the coherent areas (i.e., objects) in the image (car frame, windows/glass, tree, grass, hand, ring, cup, etc.) is fairly satisfactory. The evaluation of segmentation quality remains an open problem in the field of image segmentation: the absence of accurate, automatic, and objective evaluation methods makes it difficult to assess whether one algorithm produces better segmentation quality than another. The objective of this paper is to demonstrate the applicability of bioinspired neural oscillators to colour image segmentation, and the visual quality of the obtained results demonstrates that the principle is effective. A comparison with all existing algorithmic and machine learning methods is beyond the scope of this paper.
However, the approach presented in this paper can still be compared to other approaches in terms of its underlying computing principles, advantages, and disadvantages. First of all, the computing paradigm of this system can be likened to a region growing approach, as more pixels are grouped within a given region (grown from a chosen seed) when their features satisfy some specified criteria. However, the system in this work remains different owing to the inherent dynamics of synchronisation and desynchronisation of the neurons' temporal activity, and also to the colour reduction method, which allows the automatic selection of a reduced number of the most relevant representative colours of the input image. Another distinguishing characteristic of this approach is that the reduced set of representative colours is created anew for each input image; that is, it adapts to each new input image, as it takes into account both the chromatic and local spatial features specific to the image being handled. A further advantage of this system is that it offers a biologically inspired approach (whose modules are not necessarily all biologically plausible) which builds on the strengths of neural architectures and the computing principles of the brain, arguably the only segmentation system that works perfectly. Another advantage resides in the parallel and distributed computing of this approach: the states of all oscillators are computed locally and in parallel, which makes it more suitable for dedicated hardware implementation (such as VLSI, FPGAs, and/or GPUs) in order to reduce the computation time and achieve real-time performance. This is a very desirable feature for segmenting large scale image databases.
Moreover, this method is not specific to particular types of objects and requires less human intervention than existing segmentation techniques in which the user must initially specify the number of objects to be segmented. The approach can also be extended to more specific domains, where available a priori knowledge can be exploited to calculate the most relevant features.

8. Conclusion and Future Work

A biologically inspired neural model, namely neural oscillators, has been investigated and its potential for solving a real world problem, colour image segmentation, has been explored, and a new system for colour image segmentation was presented. The biological relevance of neural oscillators has been highlighted and their dynamics examined through simulations and analysis of single and coupled neurons as well as networks of neurons. The behaviour of this type of neuron was analysed in the time domain as well as in the phase plane, and its application to the segmentation of grey scale and colour images was investigated. The feature binding problem was discussed and its connection with neural oscillators highlighted, along with the established link between the binding problem and computer vision and the role of temporal correlation in feature binding. The main contribution of this work is the exploitation of computing and learning with neural oscillators for solving real world engineering problems. To the best of the authors' knowledge, this is the first attempt to apply networks of neural oscillators to the segmentation of colour images. The segmentation quality was visually assessed, and the proposed approach was compared with classical image segmentation counterparts. The segmentation results demonstrate the emergence of neural oscillators, combined with self-organising maps, as an efficient alternative approach for colour image segmentation; it offers the desirable feature of parallel computing, which can be further exploited for hardware implementations. Mapping the system onto dedicated hardware platforms will be a desirable extension of this work, as it will allow real-time segmentation of large scale colour images and datasets.


  1. N. R. Pal and S. K. Pal, “A review on image segmentation techniques,” Pattern Recognition, vol. 26, no. 9, pp. 1277–1294, 1993. View at Publisher · View at Google Scholar
  2. R. Jain, R. Kasturi, and B. G. Schunck, Machine Vision, McGraw-Hill, New York, NY, USA, 1995.
  3. R. C. Gonzalez and E. W. Richard, Digital Image Processing, Prentice Hall, Upper Saddle River, NJ, USA, 2nd edition, 2002.
  4. R. M. Haralick and L. G. Shapiro, “Image segmentation techniques,” Computer Vision, Graphics, & Image Processing, vol. 29, no. 1, pp. 100–132, 1985. View at Publisher · View at Google Scholar
  5. N. R. Pal and S. K. Pal, “A review on image segmentation techniques,” Pattern Recognition, vol. 26, no. 9, pp. 1277–1294, 1993. View at Publisher · View at Google Scholar
  6. W. Skarbek and A. Koschan, “Colour image segmentation: a survey,” Tech. Rep., Institute for Technical Informatics, Technical University of Berlin, Berlin, Germany, October 1994. View at Google Scholar
  7. V. Zharkova, S. Ipson, J. Aboudarham, and B. Bentley, “Survey of image processing techniques, EGSO internal deliverable,” Tech. Rep. EGSO-5-D1_F03-20021029, 2002. View at Google Scholar
  8. C. von der Malsburg and W. Schneider, “A neural cocktail-party processor,” Biological Cybernetics, vol. 54, no. 1, pp. 29–40, 1986. View at Google Scholar
  9. C. von der Malsburg and J. Buhmann, “Sensory segmentation with coupled neural oscillators,” Biological Cybernetics, vol. 67, no. 3, pp. 233–242, 1992. View at Publisher · View at Google Scholar
  10. W. Singer and C. M. Gray, “Visual feature integration and the temporal correlation hypothesis,” Annual Review of Neuroscience, vol. 18, pp. 555–586, 1995. View at Google Scholar
  11. M. Ursino and G.-E. La Cara, “Modeling segmentation of a visual scene via neural oscillators: fragmentation, discovery of details and attention,” Network: Computation in Neural Systems, vol. 15, no. 2, pp. 69–89, 2004. View at Publisher · View at Google Scholar
  12. J. Grasman, Asymptotic Methods for Relaxation Oscillations and Applications, Springer, New York, NY, USA, 1987.
  13. F. Verhulst, Nonlinear Differential Equations and Dynamical Systems, Springer, Berlin, Germany, 1996.
  14. B. van der Pol, “On relaxation oscillations,” Philosophical Magazine, vol. 2, no. 11, pp. 978–992, 1926. View at Google Scholar
  15. A. V. Hill, “Wave transmission as the basis of nerve activity,” Cold Spring Harbor Symposia on Quantitative Biology, vol. 1, pp. 146–151, 1933. View at Google Scholar
  16. B. van der Pol and J. van der Mark, “The heartbeat considered as a relaxation oscillation, and an electrical model of the heart,” Philosophical Magazine, vol. 6, pp. 763–775, 1928. View at Google Scholar
  17. A. L. Hodgkin and A. F. Huxley, “A quantitative description of membrane current and its application to conduction and excitation in nerve,” The Journal of Physiology, vol. 117, no. 4, pp. 500–544, 1952. View at Google Scholar
  18. R. FitzHugh, “Impulses and physiological states in models of nerve membrane,” Biophysical Journal, vol. 1, pp. 445–466, 1961. View at Google Scholar
  19. J. S. Nagumo, S. Arimoto, and S. Yoshizawa, “An active pulse transmission line simulating nerve axon,” Proceedings of the IRE, vol. 50, pp. 2061–2070, 1962. View at Google Scholar
  20. E. Mayeri, “A relaxation oscillator description of the burst generating mechanism in the cardiac ganglion of the lobster, Homarus americanus,” Journal of General Physiology, vol. 62, no. 4, pp. 473–488, 1973. View at Google Scholar
  21. H. R. Wilson and J. D. Cowan, “Excitatory and inhibitory interactions in localized populations of model neurons,” Biophysical Journal, vol. 12, no. 1, pp. 1–24, 1972. View at Google Scholar
  22. H. R. Wilson and J. D. Cowan, “A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue,” Kybernetik, vol. 13, no. 2, pp. 55–80, 1973. View at Google Scholar
  23. D. J. Pinto, J. C. Brumberg, D. J. Simons, and G. B. Ermentrout, “A quantitative population model of whisker barrels: re-examining the Wilson-Cowan equations,” Journal of Computational Neuroscience, vol. 3, no. 3, pp. 247–264, 1996. View at Google Scholar
  24. H. R. Wilson, “Simplified dynamics of human and mammalian neocortical neurons,” Journal of Theoretical Biology, vol. 200, no. 4, pp. 375–388, 1999. View at Publisher · View at Google Scholar · View at PubMed
  25. S. Campbell and D. Wang, “Synchronization and desynchronization in a network of locally coupled Wilson-Cowan oscillators,” IEEE Transactions on Neural Networks, vol. 7, no. 3, pp. 541–554, 1996. View at Publisher · View at Google Scholar · View at PubMed
  26. C. Morris and H. Lecar, “Voltage oscillations in the barnacle giant muscle fiber,” Biophysical Journal, vol. 35, no. 1, pp. 193–213, 1981. View at Google Scholar
  27. C. M. Gray, P. König, A. K. Engel, and W. Singer, “Oscillatory responses in cat visual cortex exhibit inter-columnar synchronization which reflects global stimulus properties,” Nature, vol. 338, no. 6213, pp. 334–337, 1989. View at Google Scholar
  28. R. Eckhorn, R. Bauer, W. Jordan et al., “Coherent oscillations: a mechanism of feature linking in the visual cortex?” Biological Cybernetics, vol. 60, no. 2, pp. 121–130, 1988. View at Publisher · View at Google Scholar
  29. W. Singer and C. M. Gray, “Visual feature integration and the temporal correlation hypothesis,” Annual Review of Neuroscience, vol. 18, pp. 555–586, 1995. View at Google Scholar
  30. D. G. Aronson, G. B. Ermentrout, and N. Kopell, “Amplitude response of coupled oscillators,” Physica D, vol. 41, no. 3, pp. 403–449, 1990. View at Google Scholar
  31. D. Cairns, R. Baddeley, and L. Smith, “Constraints on synchronizing oscillator networks,” Neural Computation, vol. 5, no. 2, pp. 260–266, 1993. View at Google Scholar
  32. J. Grasman and M. J. W. Jansen, “Mutually synchronized relaxation oscillators as prototypes of oscillating systems in biology,” Journal of Mathematical Biology, vol. 7, no. 2, pp. 171–197, 1979. View at Google Scholar
  33. R. E. Plant, “A fitzhugh differential-difference equations modeling recurrent neural feedback,” SIAM Journal on Applied Mathematics, vol. 40, no. 1, pp. 150–162, 1981. View at Google Scholar
  34. S. Grossberg and D. Somers, “Synchronized oscillations during cooperative feature linking in a cortical model of visual perception,” Neural Networks, vol. 4, no. 4, pp. 453–466, 1991.
  35. D. Somers and N. Kopell, “Waves and synchrony in networks of oscillators of relaxation and non-relaxation type,” Physica D, vol. 89, no. 1-2, pp. 169–183, 1995.
  36. D. Terman and D. Wang, “Global competition and local cooperation in a network of neural oscillators,” Physica D, vol. 81, no. 1-2, pp. 148–176, 1995.
  37. C. von der Malsburg, “The correlation theory of brain function,” Internal Rep. 81-2, Max-Planck-Institute for Biophysical Chemistry, 1981.
  38. C. von der Malsburg and W. Schneider, “A neural cocktail-party processor,” Biological Cybernetics, vol. 54, no. 1, pp. 29–40, 1986.
  39. W. A. Phillips and W. Singer, “In search of common foundations for cortical computation,” Behavioral and Brain Sciences, vol. 20, no. 4, pp. 657–722, 1997.
  40. A. Keil, M. M. Müller, W. J. Ray, T. Gruber, and T. Elbert, “Human gamma band activity and perception of a Gestalt,” Journal of Neuroscience, vol. 19, no. 16, pp. 7152–7161, 1999.
  41. J. C. Prechtl, “Visual motion induces synchronous oscillations in turtle visual cortex,” Proceedings of the National Academy of Sciences of the United States of America, vol. 91, no. 26, pp. 12467–12471, 1994.
  42. K. MacLeod and G. Laurent, “Distinct mechanisms for synchronization and temporal patterning of odor-encoding neural assemblies,” Science, vol. 274, no. 5289, pp. 976–979, 1996.
  43. V. N. Murthy and E. E. Fetz, “Coherent 25- to 35-Hz oscillations in the sensorimotor cortex of awake behaving monkeys,” Proceedings of the National Academy of Sciences of the United States of America, vol. 89, no. 12, pp. 5670–5674, 1992.
  44. A. K. Engel, P. König, A. K. Kreiter, and W. Singer, “Interhemispheric synchronization of oscillatory neuronal responses in cat visual cortex,” Science, vol. 252, no. 5010, pp. 1177–1179, 1991.
  45. D. Somers and N. Kopell, “Rapid synchronization through fast threshold modulation,” Biological Cybernetics, vol. 68, no. 5, pp. 393–407, 1993.
  46. J. Ontrup, H. Wersing, and H. Ritter, “A computational feature binding model of human texture perception,” Cognitive Processing, vol. 5, pp. 32–44, 2004.
  47. A. L. Roskies, “The binding problem,” Neuron, vol. 24, no. 1, pp. 7–9, 1999.
  48. C. M. Gray, “The temporal correlation hypothesis of visual feature integration: still alive and well,” Neuron, vol. 24, no. 1, pp. 31–47, 1999.
  49. W. Singer, “Neuronal synchrony: a versatile code for the definition of relations?” Neuron, vol. 24, no. 1, pp. 49–65, 1999.
  50. G. Calvert, C. Spence, and B. E. Stein, The Handbook of Multisensory Processes, MIT Press, Cambridge, Mass, USA, 2004.
  51. A. Revonsuo and J. Newman, “Binding and consciousness,” Consciousness and Cognition, vol. 8, no. 2, pp. 123–127, 1999.
  52. J. M. Wolfe and K. R. Cave, “The psychophysical evidence for a binding problem in human vision,” Neuron, vol. 24, no. 1, pp. 11–17, 1999.
  53. H. B. Barlow, “Single units and sensation: a neuron doctrine for perceptual psychology?” Perception, vol. 1, no. 4, pp. 371–394, 1972.
  54. R. Desimone, T. D. Albright, C. G. Gross, and C. Bruce, “Stimulus-selective properties of inferior temporal neurons in the macaque,” Journal of Neuroscience, vol. 4, no. 8, pp. 2051–2062, 1984.
  55. D. I. Perrett, E. T. Rolls, and W. Caan, “Visual neurones responsive to faces in the monkey temporal cortex,” Experimental Brain Research, vol. 47, no. 3, pp. 329–342, 1982.
  56. S. Zeki, A Vision of the Brain, Blackwell Scientific, Oxford, UK, 1993.
  57. W. M. Usrey and R. C. Reid, “Synchronous activity in the visual system,” Annual Review of Physiology, vol. 61, pp. 435–456, 1999.
  58. A. Ford and A. Roberts, Colour Space Conversions, Westminster University, London, UK, 1998.
  59. T. Kohonen, Self-Organizing Maps, Springer, Berlin, Germany, 1995.
  60. S. Haykin, Neural Networks and Learning Machines, Prentice Hall, Upper Saddle River, NJ, USA, 3rd edition, 2008.