Neural Plasticity
Volume 2009 (2009), Article ID 482696, 15 pages
http://dx.doi.org/10.1155/2009/482696
Research Article

A Plastic Temporal Brain Code for Conscious State Generation

Birgitta Dresp-Langley1 and Jean Durup2

1Centre National de la Recherche Scientifique (CNRS - UMR 5508), Université Montpellier 2, CC048 34095 Montpellier Cedex 5, France
216 rue Romain Rolland, 34200 Sète, France

Received 10 July 2008; Revised 18 February 2009; Accepted 24 May 2009

Academic Editor: Tim Schallert

Copyright © 2009 Birgitta Dresp-Langley and Jean Durup. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Consciousness is known to be limited in processing capacity and often described in terms of a unique processing stream across a single dimension: time. In this paper, we discuss a purely temporal pattern code, functionally decoupled from spatial signals, for conscious state generation in the brain. Arguments in favour of such a code include Dehaene et al.'s long-distance reverberation postulate, Ramachandran's remapping hypothesis, evidence for a temporal coherence index and coincidence detectors, and Grossberg's Adaptive Resonance Theory. A time-bin resonance model is developed, where temporal signatures of conscious states are generated on the basis of signal reverberation across large distances in highly plastic neural circuits. The temporal signatures are delivered by neural activity patterns which, beyond a certain statistical threshold, activate, maintain, and terminate a conscious brain state like a bar code would activate, maintain, or inactivate the electronic locks of a safe. Such temporal resonance would reflect a higher level of neural processing, independent from sensorial or perceptual brain mechanisms.

1. Introduction

In the last twenty years, consciousness studies have produced a considerable bulk of theoretical and experimental works concerned with trying to answer two critical questions: (1) is a scientifically operational definition of consciousness possible and (2) where and how is this phenomenon produced in the brain? Being able to answer the second question entirely depends on whether a valid answer to the first one can be given. Looking back on the various different approaches in this field (e.g., [1–9]), neuroscientists are left with the conclusion that a fully operational yet comprehensive definition of the phenomenon still poses a fundamental problem, as pointed out in one of the more recent theoretical papers by Block [10], where the author argues for an “abstract solution” to the “problem of consciousness.” Given that phenomenal consciousness by far exceeds cognitive accessibility and performance, its neural basis is not to be identified with the neural basis of any particular cognitive process taking place within consciousness, as Block [10] and others have argued. The “hard problem of consciousness” (e.g., [11–13]), or the seemingly insurmountable difficulty to explain through which mechanisms the conscious I which is experienced in terms of I do, I feel, or I am arises in the brain, has, indeed, remained an unresolved issue. In this paper, we propose an abstract solution to this problem by suggesting a biophysical model which dissociates the particular cognitive processes which may take place within consciousness, such as conscious perception, for example, from the neural mechanisms that trigger, maintain, and terminate a conscious state (I do, I feel, I am) in the brain. The functional assumptions of our model clarify how such a particular brain state may arise from purely temporal resonance of memory signals in neural circuits, and how this may happen in the absence of any stimulus input, attention, or perception associated with a specific conscious behaviour.

1.1. Conscious Perception and Behaviour: The Limiting Factor

Theories of consciousness based on cognitive performance or conscious (as opposed to non-conscious) perception do not address the “hard problem of consciousness.” The science of consciousness has thus far been reduced to looking for measures and neural correlates of conscious behaviour that reflects particular cognitive processes. These measures and correlates (see, e.g., [14], for a review) are nothing more than partial traces, found in specific behaviour like guided attention, active conscious perception and report, or conscious memory recall, of a far more complex and intricate phenomenon. Myriad studies in which a particular behaviour is investigated to understand consciousness have been reported. Dehaene et al. [9], for example, approached consciousness in terms of conscious report. These authors suggest that a human subject is phenomenally conscious when some critical event is reliably reported and argue that consciousness may, therefore, be defined in terms of “access of information to conscious report.” Interestingly, such a restriction of phenomenal consciousness to processes that enable information to access a certain level of conscious representation is grounded in Block’s earlier theory of what he called “access consciousness” [5]. However, considering conscious report of human observers as an indicator for mechanisms which give access to consciousness leads to several critical questions, which remain to be answered. Does information made accessible to conscious report correspond to ongoing or past, to real or imagined events? Does the conscious experience that is subject to conscious report occur well before, immediately before, or during the report? How long would the experience be expected to last afterwards? In short, is studying conscious perception and attention sufficient to understand the mechanisms that produce consciousness in the brain?

The logic of scientific explanation requires that the nature of the explanandum, or what is to be explained, is adequately derived from the explanans, or explanation given. Considering the case of studies focussed on conscious perception, we have to bear in mind that any specific consciously performed behaviour is no more than a particular expression of the explanandum (consciousness). It neither occurs consistently nor systematically whenever the individual is conscious. An explanans derived from such a particular form of expression is adequate only with regard to the specific perceptual process studied, not with regard to the explanandum (consciousness) as such. Specific behaviour of conscious perception and conscious report and memory recall has been correlated with specific neural activities in the occipital and the late parieto-frontal regions of the brain (see [15, 16] or [9] for extensive reviews and discussions). Along the same line of reasoning, these brain activities may be interpreted adequately in terms of correlates of a particular process of conscious behaviour highlighted by the experimental data, but not in terms of the neural correlates of consciousness as such.

Studies of behaviour which reflects what appear to be transitions between nonconscious and conscious processes such as change blindness (e.g., [17]) have given rise to interpretations of conscious perception in terms of a selective process which opens access to higher levels of information processing. In change blindness, human observers are unable to detect important changes in briefly presented visual scenes disrupted by blinks, flashes, or other visual masks just before the changes occur. This phenomenon may be seen as a particular kind of preconscious perception ([1, 18]). In fact, what happens in change blindness is that observers fail to report what they actually see because they believe that what is there is what they have seen just before. Such belief blocks the selective process which would otherwise enable the new information contained in the new visual scene to access conscious perception. Change blindness has been considered to result from top-down inhibition of ongoing stimuli (cf. [18]), preventing their conscious perception. Change blindness phenomena are particular cases where the conscious state is filled with a dominant memory representation of a previously experienced event. This suggests that there is a selective brain mechanism that makes information accessible to consciousness. More importantly, such a selective mechanism may explain how conscious states are generated in the complete absence of awareness and perception, as in lucid dreaming, for example.

1.2. Lucid Dreaming and Hallucinations: Conscious Experience without Perception

As pointed out already more than a century ago by William James [11], consciousness encompasses far more than being wakeful and able to consciously perceive and remember events which occur or have occurred in the world. When we dream intensely, we are not attentive to stimuli, but we are phenomenally conscious. We may even be able to access and report these phenomenal data several hours later, when we recount our dreams over breakfast. Similarly, patients suffering from mental disorders such as schizophrenia experience hallucinatory events consciously in the absence of external stimuli which trigger the experience. How hallucinations may arise in the brain from hyper-activation of volitional signals, triggering fully conscious and often vivid visual imagery and internally generated “voices” in hallucinating patients, has been discussed extensively by Grossberg [19] on the basis of his Adaptive Resonance Theory (ART), to which we will return later.

Baars (e.g., [20]) referred to phenomenal consciousness as the “theatre of the mind,” which is reminiscent of writings from the first book (Part 4, Section 6) of the Treatise of Human Nature (1740), in which the Scottish philosopher David Hume compared phenomenal consciousness to a theatre with a scene of complex events where various different sensations make their successive appearance in the course of time:

“The mind is a kind of theatre, where several perceptions successively make their appearance; pass, repass, glide away, and mingle in an infinite variety of postures and sensations. There is properly neither simplicity in it at one time, nor identity in different, whatever natural propension we may have to imagine that simplicity and identity. The comparison of the theatre must not mislead us. They are the successive perceptions only, that constitute the mind; nor have we the most distant notion of the places where these scenes are represented, or of the materials of which it is composed.”

Hume’s phenomenal description defines consciousness in terms of successive moments in time where feelings and sensations, not necessarily related to ongoing external events or stimuli, appear and vanish from the mind.

LaBerge et al. ([21]) argue that dreaming of perceiving and doing is equivalent to perceiving and doing. Such a view is supported by evidence for a functional equivalence of psycho-physiological correlates of consciousness in active wakeful observers and during lucid dreaming, which occurs in REM sleep phases. Lucid dreaming and equivalent wakeful activities are measured in terms of relatively short EEG signal epochs indicating a specific activation level of the central nervous system (e.g., [22]). In addition, it has been shown (e.g., [23]) that the invariant patterns of change in quantitative EEG analysis during anaesthesia and wakefulness are reliable brain correlates of what we will refer to here as the conscious brain state.

1.3. The Conscious Brain State

The notion of the conscious state was discussed earlier by Tononi & Edelman [24] and Edelman [25], based on a definition proposed by von der Malsburg [26] in terms of a continuous brain process with a limited duration. Such an abstract definition of consciousness allows separating certain properties of the physiological state of the brain during a conscious experience from the subjective phenomenal contents that are being experienced. Moreover, it satisfies a major constraint to the scientific study of consciousness, the so-called law of parsimony (lex parsimoniae). The latter is both ethically and pragmatically grounded in the philosophy of science of the English cleric William of Occam (14th century: “entia non sunt multiplicanda praeter necessitatem”) and states that the explanation of a phenomenon should resort to as few “entities” (mechanisms, processes, laws) as possible.

We argue that the definition of a conscious state of the brain, in which I am, I do, or I feel, most adequately defines a scientifically operational explanandum. The latter is then to be accounted for by an explanans in terms of the fewest mechanisms needed for its generation. Conscious states are neither identical nor reducible to states of awareness or vigilance (see also [27]). Particular cognitive processes such as conscious memory recall, attention, conscious perception, and volition ([9, 19, 28–31]) may or may not be part of the expression of a conscious state at a given moment in time. A conscious state is a specific functional state of the brain, one that enables conscious experience of various subjective contents but is functionally independent from these subjective contents. In a conscious state where I feel that I am tired, for example, the brain substrate for the conscious nature of this feeling is functionally independent from the brain signals produced by my physiological and psychological states (tired) at that given moment. How the temporal signatures that generate conscious states become progressively independent from brain signals involved in particular sensorial and perceptual processes in the course of development will be discussed later.

John [32] argued that the most probable invariant level of neural activity or coherent functional interaction among brain regions that can be measured when a person is in a conscious state is the best possible approximation to what he called the “conscious ground state of the brain.” The conscious ground state of the brain results from specific activities in neural circuits with no more than two (see also [25]) general functional characteristics: (1) very limited information processing capacity (see, e.g., [33–35]) and (2) a unique representational content for a limited and relatively short duration (e.g., [8, 36–38]). The database from which a conscious state draws this representational content is steadily updated through nonconscious processes, which constitute by far the largest part of all brain activity (e.g., [39–41]). Conscious information processing involves very little of such activity. It has been argued that this functional constraint is the limiting factor to any theory of consciousness ([25, 42–44]). Conscious information processing relies on serial processing and allows for only a limited amount of information to be dealt with in the time span of a given conscious state. This is reflected, for example, by the fact that most people cannot consciously follow two ideas at the same time, or consciously execute two tasks simultaneously (e.g., [45, 46]). Thus, it seems quite clear that the conscious brain state relies entirely on working memory capacity ([47–50]).

2. Time-Bin Model for Conscious State Generation in the Brain

The model we propose here is based on the idea that a conscious brain state is generated on the basis of purely temporal coincidences of memory signals, sometimes called representations, defined by Churchland ([51, page 64]) in terms of “patterns of activity across groups of neurons which carry information.” Such patterns of neural activity are described by unique signal sequences across time. These constitute the potential temporal signatures of conscious states.

2.1. The Temporal Signatures of Conscious Brain States

Earlier models based on the functional properties of working memory have attempted to clarify how groups of neurons could produce a specific temporal signal sequence, or temporal signature, that is sufficient to activate, maintain, and terminate a conscious state in the brain. Such a temporal signature would fulfil a double function: it would enable the generation of specific conscious brain states that are well distinguished from non-conscious brain states, and it would provide ready accounts for both their selective nature and the fact that they may occur in the mind more than once. A certain class of theoretical approaches to working memory, such as the Lisman-Idiart-Jensen memory model ([52–57]), has proposed temporal mechanisms based on some of the empirical findings summarized above, postulating a working memory with a maximum processing capacity of only a few items, where each such item is represented by the firing activity of a cell assembly, the so-called coding assembly, in a well-defined temporal window. Specific numerical predictions were developed on the basis of such memory models (for details, see [56, 58]). Başar [59] and Başar et al. [60] considered cognitive transfer activities to be based on oscillations at specific temporal frequencies, combined like the letters of an alphabet to deliver a temporal code for conscious brain activity measurable through wavelet analysis of EEG or event-related potentials (ERPs). While these numerical models illustrate both the plausibility and the potential power of temporal codes in the brain, there is a major difference between such models and the one we propose here to account for conscious state generation. Our hypothesis relies on particular dynamics of temporal messages, or resonant time-bin messages, produced by a complex system (the brain) within massively distributed circuits of neurons.
It implies that these temporal dynamics are oscillatory, since all known resonance mechanisms are by their physical nature oscillatory, but does not make predictions regarding any particular frequency bands. Recent model simulations have invoked possible cortical constraints for the genesis of particular conscious perceptual events, requiring a synchronization of oscillations in visual cortical and prefrontal areas ([61]), for example. Our own model postulates a functional separation between perception-related neural activities in functionally identified cortical regions and the temporal neural activity patterns which generate conscious states and reflect a higher processing level. Synchronization of neural activities generated in the different functionally identified areas of the brain is not required to enable such processing.

Taking the general idea of temporal codes in the brain further, we argue that unique combinations of temporal sequences beyond some critical activity threshold generate unique conscious states, which may be regenerated whenever that signature is retrieved again, either by the same set of neurons or any other set capable of producing it. Such neural timing for conscious state generation would rely on simultaneous suprathreshold activation of sets of cells within dedicated neural circuits in various arbitrarily, but not necessarily randomly, determined loci of the brain. The intrinsic topology that determines which single cell of a given circuit produces which spike pattern of a given temporal signature is, therefore, independent of the topological functional organization of the brain.

This assumption that a conscious brain state is triggered by temporal signals of neural circuits that operate at a higher level and independently from other functionally specific circuitry suggests a way of thinking that is radically different from that offered by most current approaches. Such functional independence has the considerable advantage that, should subsets of coding cells be destroyed, other subsets could still deliver the temporal code for conscious states elsewhere in the brain. This hypothesis is fully justified in the light of evidence for a considerable plasticity of functional brain organization (e.g., [62]), which we will discuss later herein in greater detail.

2.2. Temporal Limits of the Conscious Brain State

Just as the temporal signal sequence or activity pattern of any single coding cell is determined by its firing activity across a certain length of time, the temporal signature of a conscious state is also linked to duration, with variations in the limited dynamic range of a few hundreds of milliseconds. These temporal limitations have led many authors to link a conscious brain state to a specific conscious experience, or “psychological moment” ([24, 63, 64]), the particular expressions of which have been investigated in neurobiological and psychophysical studies (e.g., [64–81]). Neural network simulations matching the psychophysical and neurobiological data have been proposed ([31, 82]).

Here, for important theoretical reasons stated in the introduction, we attempt to go beyond explanations which link the conscious state to any specific conscious content or experience, bearing in mind that our model is to suggest an “abstract solution” to the problem of consciousness (cf. [10]). Such an abstract solution could be, we argue, a purely temporal pattern code underlying the genesis of conscious states in the brain. To decipher such a code in neural signal patterns, the biophysical duration t of a conscious state may be divided into time bins [83], the duration of which is limited by the accuracy of neuronal timing, or the lower limit of biophysics. Each such bin is expressed through the parameter Δt, which represents the sum of standard deviations for the time delay of synaptic transmission including the duration of the refractory period. An average estimate of 6 milliseconds for Δt appears reasonable in the light of currently available data, which give estimates between 3 and 10 ms for this parameter ([84–89]). Interspike intervals and integration times of cortical neurons display a similar dynamic range [90]. Under the simple assumption that within each time bin there is either a signal or no signal, which is derived from McCulloch & Pitts’ [91] germinal work on information transmission in neural networks, the information content of a bin with a signal is 1 bit. On the basis of an average duration t of 300 milliseconds for the conscious state, and a Δt of 6 milliseconds for each bin, the information content of a conscious brain state with average duration would not exceed 300/6 = 50 bits. A similar computation of the maximum quantity of information conveyed by a duration t with a number of temporal windows identified by a given Δt was proposed by MacKay & McCulloch [92]. Other time-based models of biophysical information processing related to conscious brain states were suggested later by Thorpe et al. [93] and VanRullen et al. [94].
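The time-bin arithmetic above can be checked directly. This minimal Python sketch only restates the figures quoted in the text (a 300 ms average state duration, 6 ms bins, and the quoted 3–10 ms biophysical range per bin):

```python
# Upper bound on the information content of a conscious state under the
# time-bin argument: one bit per bin, bins of a fixed width.
def max_bits(state_duration_ms: float, bin_width_ms: float) -> float:
    """Number of time bins in the state, each carrying at most 1 bit."""
    return state_duration_ms / bin_width_ms

# Average estimates quoted in the text: t = 300 ms, bin width = 6 ms.
print(max_bits(300, 6))                      # 50.0 bits

# The quoted 3-10 ms biophysical range brackets the estimate:
print(max_bits(300, 3), max_bits(300, 10))   # 100.0 and 30.0 bits
```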
Approaches in terms of dynamic analyses of correlated oscillations in cortical areas at various frequencies (e.g., [95]) and functional interactions between gamma and theta oscillations in different structures of the brain (e.g., [96]) are consistent with biophysical time estimates given previously. How an immense variety of neural signals would be processed to generate a purely temporal code for conscious brain states becomes clearer in the light of functional properties of reverberant neural circuits in the brain, and the concept of a functional separation between spatial and temporal neural messages in the course of long-distance signal propagation.

2.3. Long-Distance Signal Propagation and Functional Segregation of Signal Contents

Reverberant circuits or loops in the brain have their own intrinsic topology (e.g., [9, 31, 97–101]). Reverberant neural activity has been found in thalamo-cortical ([102–104]) as well as in cortico-cortical pathways ([105–108]). Reverberant neural activity as such is a purely temporal process that generates feedback loops in the brain, referred to by some as “re-entrant circuits” ([24, 25, 98, 109–117]). Reverberation is an important functional property of the brain because without it, the conscious execution of focussed action would be difficult, if not impossible [108].

Dehaene et al. [9] suggested that consciousness relies on the extension of local brain activation to higher association cortices that are interconnected by long-distance connections forming reverberating neuronal circuits extending across distant perceptual areas. We believe that the major functional advantage of such long-distance reverberation could be that it allows holding information online for durations that are unrelated to the duration of a given stimulus and long enough to enable the rapid propagation of information through different brain systems. Functional imaging studies have associated conscious brain activity with the parieto-frontal pathways; others have suggested occipital correlates (see [16] for an extensive review). What both these brain regions have in common, interestingly, is that they are protected from fluctuations in sensory signals and therefore allow information sharing across a broad variety of higher cognitive processes, well beyond sensory perception.

We argue that such selective information sharing and reduction of signal variations would provide an important functional advantage to the systems in the brain which produce the conscious state code, because at such a stage of processing, such systems would be no longer required to sort out highly complex cross-talk between signals from a multitude of different channels. Thus, the major functional hypothesis of our model claims that long-distance reverberation of neural signals across long-range connections enables functional segregation of spatial and temporal message contents of reverberating signals. Such a decoupling of temporal from spatial messages clarifies how a stable and precise brain code for conscious states can be generated despite the highly plastic and largely diffuse spatial functional organization of the brain. A candidate mechanism underlying such a decoupling of neural message contents is signal decorrelation, which has become an important concept in neural network theory and systems theory in general. Decorrelation reduces cross-talk between multichannel signals in complex systems such as the brain while preserving other critical signal properties. On the basis of this general assumption, the following postulates and model properties are stated.
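Decorrelation as a cross-talk reducer can be illustrated with a standard whitening transform. The following Python sketch is our own illustration of the mathematical operation (using NumPy and ZCA whitening), not a claim about how neural circuits implement it:

```python
# Signal decorrelation: remove cross-talk (off-diagonal covariance)
# between two channels while keeping each channel's own variance structure.
import numpy as np

rng = np.random.default_rng(0)
# Two correlated "channels": the second is mostly a copy of the first.
x1 = rng.standard_normal(1000)
x2 = 0.9 * x1 + 0.1 * rng.standard_normal(1000)
X = np.stack([x1, x2])                            # shape (2, 1000)

cov = np.cov(X)
eigval, eigvec = np.linalg.eigh(cov)
W = eigvec @ np.diag(eigval ** -0.5) @ eigvec.T   # ZCA whitening matrix
Y = W @ (X - X.mean(axis=1, keepdims=True))       # decorrelated channels

# Cross-talk is removed: the covariance of Y is the identity matrix.
print(np.round(np.cov(Y), 2))
```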

(1) Only non-conscious brain processes have enough capacity to process the complex cross-talk between spatial and temporal signals originating from various simultaneously activated and functionally specific sensory areas.
(2) The temporal signatures of conscious states are generated and consolidated in reverberating interconnected neural circuits that extend across long distances, well beyond functionally specific topology.
(3) The activation of a temporal signature generating a conscious state depends on statistical temporal coincidence of neural activity patterns (memory representations).
(4) This temporal signature is independent of signal contents or messages relative to spatial brain maps.

The circuitry generating a temporal signature would have an intrinsic and essentially arbitrary but not necessarily random topology in terms of “which cell fires first.” This intrinsic topology is solely determined by temporal resonance principles. While there is no empirically based description of resonators receiving, amplifying, and transmitting time-patterned messages in the brain, a large number of physical and biophysical phenomena can be plausibly and parsimoniously explained on the basis of resonance principles or mechanisms, as the ART simulations cited here have successfully shown. Grossberg (e.g. [31]) often invokes evolutionary pressure to explain why resonant brain codes are, indeed, likely. Here, we propose to go one step further by claiming that it is likely that evolution has produced brains capable of generating conscious states on the basis of resonant dynamics of a higher and more abstract order compared with the original resonant code of ART.

2.4. Functional Characteristics of the Time-Bin Model

It is likely that biological resonators, in contrast with “ordinary” resonance devices designed by humans, would have highly sophisticated operating principles, given that hundreds of functionally different kinds of cells exist in the brain. On the other hand, there is no reason why resonators in the brain would have to function with a high level of precision, provided that they operate according to some redundancy principle and the whole group of resonating cells producing a conscious state behaves in a statistically predictable way. Our model conception of temporal signal sequences forming a specific biophysical time-bin pattern that activates, maintains, and inactivates a conscious state is certainly and inevitably a simplification of reality. Such a simplification does not affect the internal validity of the model arguments stated. Their major goal is to explain how a brain system could generate conscious states through the least costly processes, on the basis of a relatively limited amount of neural resources.

Given the known temporal properties of conscious information processing, we suppose that conscious states may generate messages corresponding to a vast number of variable contents translated in terms of bit sequences. In the simplest possible model, any of these conscious states would be identified by a unique sequence of 1's and 0's. Thus, in the same way as bar codes provide the key to an almost infinite variety of things, these temporal sequences provide the key to conscious brain states. A given temporal code would be generated spontaneously at a given moment in early brain development and then eventually be reproduced and consolidated during brain learning. Consolidation would be a result of repeated reverberation in cortical memory circuits, leading to specific resonance states which correspond to conscious states. Once a resonance circuit is formed, it is able to generate a conscious state at any given moment in time provided there is a statistically significant temporal coincidence between brain activity patterns, or memory representations. As long as this threshold of statistically significant coincidence is not attained, these memory representations in the resonant circuitry remain non-conscious or preconscious.
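The bar-code analogy can be made concrete with a toy Python sketch. The signature string, the matching rule, and the threshold value below are hypothetical illustration choices of ours, not the authors' formalism:

```python
# Toy illustration: a temporal signature is a bit string, and a conscious
# state is "unlocked" only when the observed activity pattern matches the
# stored signature above a statistical coincidence threshold.
def coincidence(observed: str, signature: str) -> float:
    """Fraction of time bins in which observed activity matches the signature."""
    assert len(observed) == len(signature)
    hits = sum(o == s for o, s in zip(observed, signature))
    return hits / len(signature)

SIGNATURE = "1011001010"   # hypothetical stored temporal signature
THRESHOLD = 0.8            # hypothetical coincidence threshold

def state_is_conscious(observed: str) -> bool:
    return coincidence(observed, SIGNATURE) >= THRESHOLD

print(state_is_conscious("1011001010"))  # True: exact match
print(state_is_conscious("1011001000"))  # True: 9/10 bins coincide
print(state_is_conscious("0100110101"))  # False: below threshold
```

Below the threshold, the pattern remains non-conscious or preconscious in this sketch, as in the text.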

Counting from a first signal or spike in biophysical time, a temporal sequence of 1's and 0's may be described as a succession of intervals between 1's. Let us imagine a network of brain cells, or resonator, with a functional architecture or connectivity described by the shapes of closed polygons (see Figure 1 for an illustration). Each apex of such a polygon would correspond to a neuron which can receive input and emit output signals from and to processors anywhere in the brain, including along particular tracks of a resonant circuit primed for a particular temporal signature during development. Here, we refer to the apices of such a network model in terms of dedicated principal resonant neurons (PRNs). PRNs would be part of intra- and intercortical networks of neurons, capable of forming long-range connections with other neurons across large distances in the brain, well beyond their nearest neighbours, as one of their major functional properties. Not all neurons in the brain would have such a capacity.

Figure 1: Genesis of resonance states in a dedicated circuit with five principal resonant neurons acting as “coincidence detectors.” Figure 1(a) illustrates a dedicated resonant circuit with five principal resonant neurons acting as coincidence detectors. Each apex of a given polygon corresponds to a principal resonant neuron which can receive input or emit output signals from and to processors anywhere in the brain, along the long-distance tracks of resonant circuitry that has been primed in the course of brain development to generate the temporal activity patterns for conscious state generation. Unidirectional priming only is shown here as one possible example, for illustration. Each edge of a polygon represents a delay path which transmits signals from a given apex to the next, with a characteristic delay that would correspond to some multiple of the elementary “bin” unit. All principal resonant neurons would have been primed throughout lifespan brain development to preferentially process input which carries statistically “strong” signals. When activated, principal resonant neurons send signals along all delay paths originating from them, and all those receiving a signal coinciding with the next input signal remain activated. The connections between principal resonant neurons of such a model would thus be potentiated as in the classic Hebbian model. Figure 1(b) shows some of the many possible excitation patterns within a dedicated resonance circuit with only five principal neurons. Such circuits would form interconnected neural networks that extend across large distances across the brain and have intrinsic, essentially arbitrary though not random, topologies in terms of “which cell fires first.” Such intrinsic topology is unrelated to functionally specific spatial cortical maps. As in the world some events are more likely than others, the same holds for brain events. 
Whether a given temporal resonance pattern will or will not generate a conscious state is determined by statistical likelihood computations in the brain. How such computations may work is simulated in ART (e.g., [82]) and in the TEMPOTRON model by Gütig and Sompolinsky [118].
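As a rough sketch of how such a threshold computation might look, in the spirit of tempotron-like classification (the simplified kernel, weights, spike times, and threshold below are illustrative assumptions of ours, not the published TEMPOTRON model):

```python
import math

def membrane_trace(spike_trains, weights, tau=15.0, tau_s=3.75, t_max=40.0, dt=1.0):
    """Voltage trace of a tempotron-style unit: a weighted sum of
    exponential-difference kernels triggered by the input spikes."""
    def kernel(t):
        # Illustrative post-synaptic potential kernel (hypothetical constants).
        return math.exp(-t / tau) - math.exp(-t / tau_s) if t > 0 else 0.0
    steps = int(t_max / dt) + 1
    return [sum(w * sum(kernel(i * dt - s) for s in train)
                for w, train in zip(weights, spike_trains))
            for i in range(steps)]

# A temporal pattern counts as "coding" if the peak potential crosses threshold.
trace = membrane_trace([[10.0, 20.0], [12.0]], weights=[1.0, 0.8])
print(max(trace) > 0.5)  # True: supra-threshold temporal coincidence
```

The decision depends only on the timing of the input spikes relative to one another, which is the sense in which such a classifier learns purely temporal signal statistics.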

Each edge of a polygon would represent a delay path which transmits signals from a given apex to the next, with a characteristic delay corresponding to some multiple of the elementary “bin” unit, as defined in the models discussed earlier. The distribution of these delays should fit the proportion of 1's and 0's in typical “time-bin” messages: if, for example, 1's are as likely to occur in a code as 0's, then the proportions of the various delays would be predictable. The delay paths as such would correspond to local neural architectures in the brain (e.g., [119–124]). Whatever the effective operational structure of such a resonance circuit, the specific temporal signatures it generates would be experience dependent and consolidated during development. The database of long-term memory representations from which these temporal signatures are drawn is updated continuously through non-conscious mechanisms. The conscious experience of an event we perceive as “new” is generated by a temporal resonance pattern that is for the first time activated above the coincidence threshold. Such a pattern is built from a new and unique combination of previously non-conscious memory representations.

All PRNs would have been primed during brain development to send signals along all delay paths originating from them, and all those receiving a signal coinciding with the next input signal would remain activated. Connections between PRNs would thereby be potentiated, as in the classical Hebbian model. Simultaneously, signals travelling from initially activated neurons to connected cells along delay paths that are too long would be cancelled. Thus, once a given polygon of the resonant network is potentiated along all of its edges, it would reverberate temporally coinciding signals while amplifying resonant connections across populations of resonant neurons within massively parallel neural networks in the brain. This model assumption is biologically plausible in the light of physiological evidence for both intra- and intercortical connectivity across large distances in the primate brain. The representational power of this distributed temporal code, after functional decoupling from all spatial signal contents relating to functionally specified cortical topology, is virtually unlimited. Such a code does not imply an identity link between the spatial patterns describing a subset of PRNs (Figure 1(b)) and the temporal firing sequence recorded at any such PRN, nor is there any reason why it should. Whether the nine resonant activity patterns shown in Figure 1(b) will trigger a given conscious state is not determined by the spatial activity distribution as such, but by the relative probabilistic weight or, expressed in computational language, the relative synaptic weights of the connections of a given subset of PRNs within large populations of such neurons. Thus, any of the nine different resonant states represented in Figure 1(b) would only generate a conscious state if the temporal sequence shown produces resonant activity above the statistical probability threshold.
The biological plausibility of a probabilistically driven temporal code relies on the fact that, in the outside world, some physical events are more likely than others. We may consistently assume that brain events would likewise be governed by probabilistic principles.
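A minimal sketch of the delay-coincidence potentiation rule just described (our toy reading of the text; the node labels, delays, initial weights, and learning rate are hypothetical):

```python
# Toy Hebbian rule for delay paths between PRNs: a signal sent at time t along
# an edge with delay d arrives at t + d; if the arrival coincides with the next
# input spike at the target, the connection is potentiated, otherwise the
# signal is cancelled (modelled here, as a crude simplification, by depression).

def potentiate(edges, input_times, lr=0.1):
    """edges: dict (src, dst) -> {'delay': d, 'w': weight};
    input_times: dict node -> list of spike times."""
    for (src, dst), e in edges.items():
        for t in input_times.get(src, []):
            arrival = t + e['delay']
            if arrival in input_times.get(dst, []):   # temporal coincidence
                e['w'] += lr                          # Hebbian potentiation
            else:
                e['w'] = max(0.0, e['w'] - lr)        # signal cancelled
    return edges

edges = {('A', 'B'): {'delay': 3, 'w': 0.5}, ('A', 'C'): {'delay': 7, 'w': 0.5}}
spikes = {'A': [0], 'B': [3], 'C': [5]}
potentiate(edges, spikes)
# A->B arrives at t = 3, coinciding with B's spike (potentiated);
# A->C arrives at t = 7, no coincidence (cancelled).
```

Repeated application of such a rule would leave only those polygons potentiated along all of their edges, which is the precondition for reverberation in the model.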

Figure 2: Developmental selection of temporal activity patterns coding for conscious state access. Figure 2 illustrates schematically how the critical temporal activity patterns for conscious states would be progressively selected through activity-dependent plasticity during lifespan brain development. At birth, a potentially infinite number of temporal activity patterns would be generated more or less randomly in the neural circuits of the brain. As brain learning progresses, repeated matches of brain events would generate resonant states in long-distance neural circuits. Such resonant states result from higher order processing in dedicated resonant circuits which function independently from sensorial and perceptual processes. Whenever the temporal firing patterns produced by these dedicated resonance circuits reach the statistical temporal coincidence threshold, they generate a conscious brain state. Such a temporal code would unlock the door to consciousness like some bar code would unlock the door of an electronically protected safe.

Probabilistic mechanisms ensure both the relative uniqueness and the seriality of conscious brain events in a competitive race of massively distributed temporal resonances where the winner takes all. How neuronal circuits learn statistical information embedded in distributed patterns of activity is shown in some of the ART simulations by Grossberg et al. (cf. [82]). Brain learning based on purely temporal signal statistics is simulated in the TEMPOTRON model [118].

2.5. From Elementary Temporal Activity Patterns to a Dynamic Resonant Code

Like time-bin resonance itself, the selection of the critical temporal firing patterns that constitute the access code for conscious states would rely on purely statistical criteria, leading to fewer and fewer consolidated patterns for increasingly complex signal coincidences as the brain learns and develops. When we are born, all brain activity is more or less arbitrary, though not necessarily random. During brain development, temporal activity patterns elicited by events in biophysical time will be linked to a variety of particular conscious experiences in a decreasingly arbitrary manner as frequently occurring codes are progressively consolidated through a process which we propose to call developmental selection. This is illustrated in Figure 2, which is our adaptation of Figure 6 from Helekar’s [88] paper. Developmental selection resolves a critical problem in Helekar’s theory, which fails to explain how a nonarbitrary linkage of the code to a variety of contents may take place.

To overcome this dilemma, Helekar daringly proposed a genetically determined linkage, which flies in the face of a large body of evidence showing that brain processes are highly plastic and experience dependent. A genetically determined linkage between the immense variety of possible subjective experiences and specific temporal brain activities leaves the question of a brain mechanism for conscious states unanswered. Helekar’s “elementary experience-coding temporal activity patterns” are conceived in terms of preprogrammed subsets of neural firing patterns belonging to the set of all possible temporal patterns that could be generated by the brain. His original hypothesis stated that only those patterns that are members of this subset would give rise to conscious experiences upon their repeated occurrence. The repeated occurrence of ordinary patterns, which Helekar calls noncoding patterns, would not produce conscious experience. The problem with such reasoning is that, once again, the subjective contents of a conscious state are identified with the functional nature of the state as such. In contrast to such a view, we claim that what is commonly called a “subjective experience” is encoded and decoded in the brain through non-conscious mechanisms only.

Also, rather than invoking a genetic programme, we prefer the far more likely hypothesis of a progressively nonarbitrary linkage of potential contents of conscious states and their temporal signatures on the basis of developmental processes and brain learning. Once a given temporal signature has been arbitrarily linked to a conscious state, it remains potentially available as a “brain hypothesis,” which is then either progressively consolidated, or not. Once consolidated, linkages between code and content may become less arbitrary, in some cases even deterministic. The progressive consolidation of linkages between code and content happens outside consciousness, through the repeated matching of working memory representations to long-term memory representations, as postulated in Grossberg’s ART (e.g., [31]).
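The progressive, frequency-based consolidation of code-content linkages might be caricatured as follows (the interval tuples and the count threshold are hypothetical assumptions of ours):

```python
from collections import Counter

def developmental_selection(observed_patterns, min_count):
    """Progressively consolidate temporal signatures: only patterns that
    recur often enough survive as candidate codes, i.e., 'brain hypotheses'
    that have been confirmed rather than discarded."""
    counts = Counter(observed_patterns)
    return {p for p, n in counts.items() if n >= min_count}

# Patterns are hypothetical interval tuples; recurring ones are consolidated,
# rare ones remain arbitrary and fade.
stream = [(3, 2, 4), (1, 5), (3, 2, 4), (3, 2, 4), (2, 2), (1, 5)]
print(sorted(developmental_selection(stream, min_count=2)))  # [(1, 5), (3, 2, 4)]
```

Raising the count threshold as development proceeds yields the "fewer and fewer consolidated patterns for increasingly complex coincidences" described in Section 2.5.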

2.6. From Temporal Resonance to Biophysical Eigenstates

As pointed out above, what distinguishes a conscious state from a non-conscious state solely depends on a statistical threshold. A brain mechanism achieving coincidence computation would lead to the activation of a specific temporal code at a given time on the basis of statistically significant coincidences. A conscious state arises from a temporarily activated temporal signature generated within reverberating neural circuits extending across long distances in the brain. What we call “experience” in common language is coded in the brain in terms of signal sequences in biophysical time. The statistical coincidence of specific temporal activity patterns triggers, maintains, and terminates conscious brain states like a bar code would activate, maintain, and inactivate the electronic locks of a safe. Given the almost infinite number of signal sequences that are possible in a temporal code, there should be a unique temporal pattern for a unique conscious state.
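The bar-code analogy can be expressed as a minimal sketch (our toy illustration; the scores and threshold are hypothetical), in which a conscious state is triggered, maintained, and terminated as a coincidence score crosses the statistical threshold:

```python
def state_trace(coincidence_scores, threshold):
    """Trigger, maintain, and terminate a conscious state as the coincidence
    score crosses a statistical threshold, like a bar code activating and
    inactivating the electronic locks of a safe."""
    trace = []
    for score in coincidence_scores:
        # Above threshold: trigger or maintain; below: terminate.
        trace.append(score >= threshold)
    return trace

scores = [0.2, 0.7, 0.9, 0.8, 0.4, 0.3]
print(state_trace(scores, threshold=0.6))
# [False, True, True, True, False, False]: the state is triggered,
# maintained while supra-threshold, then terminated.
```

Nothing but the threshold distinguishes the conscious from the non-conscious segments of the trace, which is the point of the argument above.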

In terms of a quantum physics analogy, our time-bin resonance model suggests that non-conscious states are described by temporal wavefunctions which do not have a well-defined period. While a non-conscious state may be a combination of many nonspecific eigenstates, resonant activity beyond the probabilistic coincidence threshold would produce the well-defined temporal activity pattern, or wavefunction, of a single specific eigenstate, the “conscious eigenstate.”

3. Arguments in Favour of the Temporal Code

The concept of a temporal neural code for conscious states as the most parsimonious link between brain and mind is justified in the light of several theoretical arguments. It might be useful here to recall that the term “code” initially stems from information theory and may stand for both (1) an entire system of information transmission or communication (like the brain) where symbols are assigned definite meanings and (2) a set of symbols for the content of a given message (like a temporal activity pattern) within that system. One argument in favour of a purely temporal access code for conscious brain states is its undeniable functional and adaptive advantage. Its origin would most likely be epigenetic. During brain development, our subjective experience remains largely non-conscious in the first months of our learning existence. Then, such experience eventually generates data of our phenomenal consciousness, around the age of two or three.

3.1. Plasticity of Spatial Functional Brain Organization

Sensory, somatosensory, and proprioceptive signals may be perceived instantly as data of a conscious state, eliciting what psychophysicists call spontaneous sensations. The integration of the variety of signals such sensations originate from relies on non-conscious mechanisms, which have to be sufficiently adaptable and display a certain functional plasticity to enable the continuous updating of representations in response to the changes imposed on our brains, day after day, by new situations and experiences. Clinical observations in neurological patients severely challenge the idea that any function should be fixed in specific loci. The “phantom limb” syndrome (e.g., [125, 126]) is one such example revealing the extraordinary plasticity of topological functional brain organization. The phantom limb syndrome was already mentioned in writings by Paré and Descartes, and described in greater detail by Guéniot [127]. It has been repeatedly observed in hundreds of case studies since. After arm amputation, patients often experience sensations of pain in the limb that is no longer there, and experimental data show that a third of such patients systematically refer stimulations of the face to the phantom limb, with a topographically organized map for the individual fingers of a hand. On the basis of similar evidence for massive changes in somatotopic maps after digit amputation and other experimental data showing that several years after dorsal rhizotomy in adult monkeys, a region corresponding to the hand in the cortical somatotopic map of the primate’s brain is activated by stimuli delivered to the face [128], Ramachandran and his colleagues proposed their “remapping hypothesis” (e.g., [129]). This hypothesis explains how spatial and topological representations are referred to other loci in the brain through massive cortical reorganization.
The findings reported by Ramachandran and others provide compelling evidence that, despite dramatic changes in non-conscious topology, representations remain available to the conscious state and can still be experienced as sensations of pain, cold, digging, or rubbing. We believe that this is so because the higher level temporal signatures of lower level sensory representations persist for some time in the brain.

3.2. The Temporal “Coherence Index” and Coincidence Detection

In his “neurophysics of consciousness,” John [32, 130] suggested that a conscious state may be identified with a brain state where information is represented by levels of coherence among multiple brain regions, revealed through coherent temporal firing patterns that deviate significantly from random fluctuations. This assumption is consistent with the idea of a stable and perennial temporal code for conscious state generation despite spatial remapping or cortical reorganization. Empirical support for John’s theory comes from evidence for a tight link between electroencephalographic activity in the gamma range, defined by temporal firing rates between 40 and 80 Hz (i.e., the so-called “40-Hz” or “phase-locked” gamma oscillations), and conscious states (e.g., [131]). This “coherence index,” with a characteristic phase locking at 40 Hz, was found to change with increasing sedation in anaesthesia, independent of the type of anaesthetic used [132]. Decreasing temporal frequencies were reported when doses of a given anaesthetic were increased. Moreover, the characteristic phase locking at 40 Hz displays coherence not only across brain regions during focussed arousal, but also during REM sleep when the subject is dreaming ([133]). Coherence disappears during dreamless, deep slow-wave sleep, which is consistent with findings reported on deeply anaesthetized patients. The fact that the temporal coherence index of a conscious state is produced during focussed arousal as well as during dreaming in REM sleep phases is fully consistent with LaBerge’s idea (e.g., [21]) that dreams and conscious imagination represent functionally equivalent conscious states. Phase locking at some critical temporal frequency may result from intracortical reverberation and may correlate with the brain mechanisms which establish arbitrary nonrandom departures from different loci or topological maps. Such maps may undergo functional reorganization.
The temporal code for conscious state generation, once established, would remain intact for quite a while, keep resonating and eventually reach the critical activation threshold.
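A standard way to quantify phase-locked coherence between two signals is the phase-locking value; the following is a minimal sketch with two hypothetical 40-Hz phase series (the sampling parameters and the constant lag are our assumptions, not data from the studies cited above):

```python
import cmath
import math

def phase_locking_value(phases_a, phases_b):
    """Phase-locking value between two instantaneous-phase series:
    1.0 for a perfectly constant phase relation, near 0 for random ones."""
    diffs = [cmath.exp(1j * (a - b)) for a, b in zip(phases_a, phases_b)]
    return abs(sum(diffs) / len(diffs))

# Two hypothetical 40-Hz phase series sampled at 1 kHz; a constant lag
# between regions means the pair is phase locked.
t = [i * 0.001 for i in range(200)]
region1 = [2 * math.pi * 40 * x for x in t]
region2 = [2 * math.pi * 40 * x + 0.5 for x in t]   # constant phase lag
print(round(phase_locking_value(region1, region2), 3))  # 1.0
```

A coherence index of this kind depends only on the stability of the phase relation over time, not on which cortical loci produce the signals, which is why it survives spatial remapping.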

3.3. Adaptive Resonance Theory (ART)

Originally, Adaptive Resonance Theory was conceived as a theory of brain learning to explain how the brain generates and updates representations of continuously changing physical environments ([134]). The theory was then extended to account for related phenomena such as attention, intention, volition, and the conscious perception of visual objects (e.g., [135]) or speech (e.g., [136]). Intentions and volition lead the brain to focus attention on potentially relevant internal or external events. These foci of attention lead to new representations when the brain is able to validate and integrate them into resonant states, which would include the conscious states of the brain. According to Grossberg [31], all conscious states are resonant states, triggered by external or internal events and mediated by attention or volition. ART successfully explains how the brain ensures the continuous updating of long-term memory representations through a mechanism termed top-down matching, and how repeated top-down matching can lead to resonant brain states. Since the brain is continuously confronted with all sorts of old and new events, it has to continuously generate probabilistic hypotheses to determine what all these events are most likely to be, and whether they are relevant. This involves matching working memory representations to representations stored in long-term memory. Coincidence of such bottom-up and top-down representations produces so-called matching signals, or coincidence signals which, when repeatedly generated, lead to resonant states in the brain. These are, according to Grossberg, topologically grounded in the “What” and “Where” processing streams of the brain (see [31] for an extensive review). The resonant code suggested in ART is thus tightly linked to functionally specific brain regions coding for perceptual and sensorial processes.

Here we argue, for reasons we have specified above, that the generation of conscious brain states is largely independent of these specific functions. It must therefore depend on a higher order and, as we suggest, purely temporal code based on resonant brain activities beyond sensorial or perceptual processes, most likely on the basis of long-distance propagation and reverberation leading to such higher level resonance. While perception and sensation may be particular aspects of a specific conscious experience (see above), the mechanisms underlying such experience are not to be confounded with the mechanisms underlying the conscious brain state as such. We suggest that the latter is accounted for at the level of an “abstract” brain process, as explained in the paragraphs dealing with the time bin model, through a biophysical code in which a single dimension (time) of neural processing is preserved. The temporal signatures for conscious state generation may be seen as an emergent property of such higher level resonant brain dynamics, which would be functionally disconnected from perceptual processes or sensations. When I fully experience I am, as in deep meditation, I may not perceive visually, hear sound, or experience any sensation other than total relaxation, yet, my brain is definitely in a conscious state.

4. Questions for the Time-Bin Model

Specific questions regarding some of the implications of the time-bin model include the following.

(1) How does a conscious state arise from statistical supra-threshold activation of its temporal signature?
(2) How precise would such a signature be?
(3) Would it account for the generation of different levels of the conscious state in brains with different anatomical structures (brains of animals, Martians, robots)?
(4) How does the biophysical time-bin code relate to variations in the subjectively experienced duration of a conscious state or psychological moment?
4.1. From the First Tune to a Conscious Experience in the Concert Hall

From the early days of our existence, when nothing we see, feel, or do is conscious, visual, auditory, tactile, and other sensory input from multiple sources is steadily processed and progressively integrated into more and more stable memory representations through the extraordinary capacity of non-conscious brain processes. These representations progressively fill the steadily updated database that forms our long-term memory, from the first time we see a face or hear a tune to the moment we start recognizing tunes, pieces of music, and the faces and names of performers. At some stage in this process, resonant circuitry extends across longer and longer distances in the brain, and coincidence statistics become robust enough to allow an increasingly non-arbitrary linkage between conscious states and the temporal signatures capable of triggering them. A greater and greater variety of non-consciously integrated representations then becomes available to a larger and larger variety of complex conscious experiences. When we sit in a concert hall and listen to a symphony by Brahms, we will experience successive mental events during which certain aspects of the symphony, the visual scene, or the person sitting next to us are selectively and momentarily made available to a conscious state. What is selected will depend on how many coinciding non-conscious memory representations of previous conscious states produce activities above the threshold in the dedicated long-distance resonant circuits which generate their temporal signatures. Other brain mechanisms, such as top-down amplification or volition ([9, 31]), may or may not be involved in this process.

When a conscious state is triggered, we become for a short moment able to take stock of past events and to project events into the future. This ability reflects the time-ordering function of consciousness. It allows humans to plan and to read sense into their lives. Sometimes when we are conscious, we may be under the impression that what we experience looks or feels new, although we have seen or felt the same many times before. Conversely, when we find ourselves in a new situation, a conscious experience may leave us with the feeling that we have “been there before”, or that “this has happened before.” Such impressions are readily explained by the statistical nature of the temporal code proposed here.

4.2. Apparent Novelty and “déjà vu”

Subjective impressions of novelty or “déjà vu” would result from the fact that the temporal signatures of conscious states represent a code that is based on a purely temporal statistical likelihood. In such a code, identical signatures are not linked to identical conscious experiences. A brain hypothesis for a physical event at any given moment in time cannot be more than the brain’s “best guess,” and biophysical brain events that remain identical across time do not necessarily produce identical conscious experiences across time. Conversely, conscious states with different temporal signatures may well produce a subjective experience of “déjà vu.” A brain code for conscious states does not have to be perfectly accurate, only sufficiently robust against major fluctuations and errors. This idea of an approximate brain code is consistent with the hypothesis of a multiple realizability of conscious states.

4.3. Multiple Realizability of Conscious States

Rather than assuming that there would be a unique physiological state of the brain for every unique mental state, philosophers such as Lewis (e.g., [137]) have argued that the idea of different physiological or physical life-forms being in the same mental state without being in the same physiological or physical state would be a far more plausible hypothesis. The latter has been termed the “hypothesis of a multiple realizability of mental states.” Brains with different levels of physiological development and spatial functional topology or architecture should be able to generate temporal signatures producing equivalent, though not necessarily identical, conscious states in different species. This should be possible through long-distance temporal resonance in neural networks with very different intrinsic topologies and could be based on statistical activity thresholds far less robust than those established on the grounds of brain data reflecting the amount and complexity of human lifelong development. What kind of qualitative experience or qualia such conscious states would enable remains completely uncertain. Our conscious brain somehow becomes connected with the physical world in the course of development, through a discrete process which enables it to function in a statistically reliable way. Sometimes, this process goes wrong, as in pathological brain development producing dysfunctional conscious states.

4.4. The Conscious Brain and Psychological Time

In a way similar to that of sonar systems which connect to the outside by acquiring some form of knowledge of the physical environment, conscious states appear to be encoded in our brains in terms of temporal base frequencies, as through scanning or pulsing. Although a conscious state may be experienced in any form of psychological space-time, the associated biophysical periods in the brain “scale” such experience through a completely self-sufficient code. This explains how the inner clocks of consciousness can operate independently from spatial, verbal, or any other form of cognitive or emotional experience. The brain is thus able to detach itself from the subjective nature of conscious experience, from what may seem “exciting” or “boring” to us, with time “flying by” or “standing still.” While we are in a conscious state, imprisoned by all sorts of mental events we may be experiencing, or completely freed from such experience as in fully conscious deep meditation, the brain is scaling signals related to these temporary events, in its own biophysical time (see Figure 3).

Figure 3: The conscious eigenstate as a function of biophysical and subjectively recalled time. Figure 3 illustrates how a conscious eigenstate of the brain may be conceived as part of a state vector as a function of biophysical time (T) and subjectively recalled time (τ). In our model, the duration of a conscious eigenstate would correspond to a given number of biophysical “time bins.” Biophysical time (T) is independent of the subjectively recalled duration of a given experience by a human individual and would correspond to the duration of the critical temporal activity pattern generated in dedicated long-distance resonant circuits to activate, maintain, and inactivate a conscious eigenstate. Our “time bin model” thus explains how the inner clocks of consciousness operate independently from subjective experience, where variations from “interesting” to “dull” may produce variable, subjectively recalled durations of events.

5. Conclusions

The abstract model of conscious state generation proposed here addresses the mind-body problem by suggesting that the conscious brain state is a dynamic result of progressive life-long brain development. The conscious state code emerging from such development is of a purely temporal and statistical nature.

Some time ago, Nagel [138] insisted that in order to understand the hypothesis that a mental event is a physical event, we require more than the understanding of the word “is”. His comment directly relates to identity theory (e.g., [139, 140]), a class of mind-body theories which reject dualism by considering two possibilities, or hypotheses, of identity between a mental state and a physiological state. The first is type identity, where mental states themselves would be physical states. The second is token identity, where mental states would be the direct reflection of a physiological or physical state. Our model assumptions do not support the identity claim. They address a fundamental problem recently discussed by Block [10]. Since phenomenal consciousness exceeds conscious cognitive activities such as perception or memory recall, a first step towards the abstract solution argued for by Block is to offer a theory that dissociates the brain origins of cognitive performance taking place within consciousness from the brain genesis of the conscious state as such. The idea of an abstract temporal signature for conscious states achieves this by explaining how a conscious state arises from higher-order temporal resonance dynamics that are independent of sensory processing and perception.

Thus, as Nagel [138] suggested, we go indeed beyond the word “is” when we address the mind-body problem in terms of an abstract biophysical code. Such a code is difficult to reconcile with theories invoking “type” or “token” identity between mind and brain. We nonetheless defend a rigorously monist view by suggesting that dynamic links between conscious states and physiological states form on the basis of highly plastic brain activities governed by probabilistic principles.

References

  1. J. F. Kihlstrom, “The cognitive unconscious,” Science, vol. 237, no. 4821, pp. 1445–1452, 1987.
  2. T. Natsoulas, “Concepts of consciousness,” Journal of Mind and Behavior, vol. 4, pp. 13–59, 1983.
  3. D. C. Dennett, Consciousness Explained, Little, Brown and Company, Boston, Mass, USA, 1991.
  4. M. I. Posner, “Attention: the mechanisms of consciousness,” Proceedings of the National Academy of Sciences of the United States of America, vol. 91, no. 16, pp. 7398–7403, 1994.
  5. N. Block, “On a confusion about a function of consciousness,” Behavioral and Brain Sciences, vol. 18, no. 2, pp. 227–247, 1995.
  6. A. Revonsuo, “Prospects for a scientific research program on consciousness,” in Neural Correlates of Consciousness: Empirical and Conceptual Questions, T. Metzinger, Ed., pp. 57–75, MIT Press, Cambridge, Mass, USA, 2000.
  7. A. Zeman, “Consciousness,” Brain, vol. 124, no. 7, pp. 1263–1289, 2001.
  8. A. Dietrich, “Functional neuroanatomy of altered states of consciousness: the transient hypofrontality hypothesis,” Consciousness and Cognition, vol. 12, no. 2, pp. 231–256, 2003.
  9. S. Dehaene, J.-P. Changeux, L. Naccache, J. Sackur, and C. Sergent, “Conscious, preconscious, and subliminal processing: a testable taxonomy,” Trends in Cognitive Sciences, vol. 10, no. 5, pp. 204–211, 2006.
  10. N. Block, “Consciousness, accessibility, and the mesh between psychology and neuroscience,” Behavioral and Brain Sciences, vol. 30, no. 5-6, pp. 481–499, 2007.
  11. W. James, Principles of Psychology, Holt, New York, NY, USA, 1890.
  12. D. J. Chalmers, The Conscious Mind, Oxford University Press, Oxford, UK, 1996.
  13. J. R. Searle, “How to study consciousness scientifically,” Philosophical Transactions of the Royal Society B, vol. 353, no. 1377, pp. 1935–1942, 1998. View at Publisher · View at Google Scholar
  14. G. Buzsáki, “The structure of consciousness,” Nature, vol. 446, no. 7133, p. 267, 2007. View at Publisher · View at Google Scholar
  15. J. Driver and P. Vuilleumier, “Perceptual awareness and its loss in unilateral neglect and extinction,” Cognition, vol. 79, no. 1-2, pp. 39–88, 2001. View at Publisher · View at Google Scholar
  16. G. Rees, E. Wojciulik, K. Clarke, M. Husain, C. Frith, and J. Driver, “Neural correlates of conscious and unconscious vision in parietal extinction,” Neurocase, vol. 8, no. 5, pp. 387–393, 2002. View at Publisher · View at Google Scholar
  17. M. E. Silverman and A. Mack, “Change blindness and priming: when it does and does not occur,” Consciousness & Cognition, vol. 15, no. 2, pp. 409–422, 2006. View at Publisher · View at Google Scholar
  18. S. Dehaene and J. P. Changeux, “Ongoing spontaneous activity controls access to consciousness: a neuronal model for inattentional blindness,” PLoS Biology, vol. 3, no. 5, article e141, 2005. View at Google Scholar
  19. S. Grossberg, “How hallucinations may arise from brain mechanisms of learning, attention, and volition,” Journal of the International Neuropsychological Society, vol. 6, no. 5, pp. 583–592, 2000. View at Publisher · View at Google Scholar
  20. B. J. Baars, In the Theater of Consciousness, Oxford University Press, New York, NY, USA, 1997.
  21. S. LaBerge, “Lucid dreaming: psychophysiological studies of consciousness during REM sleep,” in Sleep and Cognition, R. R. Bootzen, J. F. Kihlstrom, and D. L. Schacter, Eds., pp. 109–126, APA Press, Washington, DC, USA, 1990.
  22. S. LaBerge, L. Levitan, and W. C. Dement, “Lucid dreaming: physiological correlates of consciousness during REM sleep,” Journal of Mind and Behavior, vol. 7, pp. 251–258, 1986.
  23. D. R. Drover, H. J. Lemmens, E. T. Pierce et al., “Patient State Index (PSI): titration of delivery and recovery from propofol, alfentanil, and nitrous oxide anesthesia,” Anesthesiology, vol. 97, no. 1, pp. 82–89, 2002.
  24. G. Tononi and G. M. Edelman, “Consciousness and complexity,” Science, vol. 282, no. 5395, pp. 1846–1851, 1998.
  25. G. M. Edelman, “Naturalizing consciousness: a theoretical framework,” Proceedings of the National Academy of Sciences of the United States of America, vol. 100, no. 9, pp. 5520–5524, 2003.
  26. C. von der Malsburg, “The coherence definition of consciousness,” in Cognition, Computation and Consciousness, M. Ito, Y. Miyashita, and E. T. Rolls, Eds., pp. 193–204, Oxford University Press, Oxford, UK, 1997.
  27. T. A. Nielsen and P. Stenstrom, “What are the memory sources of dreaming?” Nature, vol. 437, no. 7063, pp. 1286–1289, 2005.
  28. N. Cowan, E. M. Elliott, S. J. Saults et al., “On the capacity of attention: its estimation and its role in working memory and cognitive aptitudes,” Cognitive Psychology, vol. 51, no. 1, pp. 42–100, 2005.
  29. A. Raz and J. Buhle, “Typologies of attentional networks,” Nature Reviews Neuroscience, vol. 7, no. 5, pp. 367–379, 2006.
  30. F. Crick and C. Koch, “The unconscious homunculus,” Neuro-Psychoanalysis, vol. 2, pp. 3–11, 2000.
  31. S. Grossberg, “The link between brain learning, attention, and consciousness,” Consciousness and Cognition, vol. 8, no. 1, pp. 1–44, 1999.
  32. E. R. John, “The neurophysics of consciousness,” Brain Research Reviews, vol. 39, no. 1, pp. 1–28, 2002.
  33. W. Schneider and R. M. Shiffrin, “Controlled and automatic human information processing: I. Detection, search, and attention,” Psychological Review, vol. 84, no. 1, pp. 1–66, 1977.
  34. R. M. Shiffrin and W. Schneider, “Controlled and automatic human information processing: II. Perceptual learning, automatic attending and a general theory,” Psychological Review, vol. 84, no. 2, pp. 127–190, 1977.
  35. R. M. Shiffrin, “Attention, automatism, and consciousness,” in Essential Sources in the Scientific Study of Consciousness, B. J. Baars, W. P. Banks, and J. B. Newman, Eds., pp. 631–642, MIT Press, Cambridge, Mass, USA, 2003.
  36. J. Duncan, “The locus of interference in the perception of simultaneous stimuli,” Psychological Review, vol. 87, no. 3, pp. 272–300, 1980.
  37. B. Mangan, “The conscious “fringe”: bringing William James up to date,” in Essential Sources in the Scientific Study of Consciousness, B. J. Baars, W. P. Banks, and J. B. Newman, Eds., pp. 741–759, MIT Press, Cambridge, Mass, USA, 2003.
  38. J. LeDoux, Synaptic Self: How Our Brains Become Who We Are, Macmillan, New York, NY, USA, 2002.
  39. M. Velmans, “Is human information processing conscious?” Behavioral and Brain Sciences, vol. 14, no. 4, pp. 651–669, 1991.
  40. J. A. Gray, “To thine own synapses be true?” Nature Neuroscience, vol. 5, p. 1115, 2002.
  41. S. Pockett, “Does consciousness cause behaviour?” Journal of Consciousness Studies, vol. 11, no. 2, pp. 23–40, 2004.
  42. M. P. A. Page and D. Norris, “The primacy model: a new model of immediate serial recall,” Psychological Review, vol. 105, no. 4, pp. 761–781, 1998.
  43. A. K. Seth and B. J. Baars, “Neural Darwinism and consciousness,” Consciousness & Cognition, vol. 14, no. 1, pp. 140–168, 2005.
  44. A. K. Seth, E. Izhikevich, G. N. Reeke, and G. M. Edelman, “Theories and measures of consciousness: an extended framework,” Proceedings of the National Academy of Sciences of the United States of America, vol. 103, no. 28, pp. 10799–10804, 2006.
  45. E. C. Cherry, “Some experiments on the recognition of speech, with one and two ears,” Journal of the Acoustical Society of America, vol. 25, pp. 975–979, 1953.
  46. B. J. Baars, “Metaphors of consciousness and attention in the brain,” Trends in Neurosciences, vol. 21, no. 2, pp. 58–62, 1998.
  47. H. S. Oberly, “A comparison of the spans of attention and memory,” American Journal of Psychology, vol. 40, pp. 295–302, 1928.
  48. G. A. Miller, “The magic number seven, plus or minus two: some limits on our capacity for processing information,” The Psychological Review, vol. 63, pp. 81–97, 1956.
  49. A. J. Parkin, “Human memory,” Current Biology, vol. 9, pp. 582–585, 1999.
  50. E. K. Vogel, G. F. Woodman, and S. J. Luck, “Storage of features, conjunctions, and objects in visual working memory,” Journal of Experimental Psychology: Human Perception and Performance, vol. 27, no. 1, pp. 92–114, 2001.
  51. P. S. Churchland, Brain-Wise. Studies in Neurophilosophy, MIT Press, Cambridge, Mass, USA, 2002.
  52. J. E. Lisman and M. A. P. Idiart, “Storage of 7±2 short-term memories in oscillatory subcycles,” Science, vol. 267, no. 5203, pp. 1512–1515, 1995.
  53. O. Jensen, M. A. P. Idiart, and J. E. Lisman, “Physiologically realistic formation of autoassociative memory in networks with theta/gamma oscillations: role of fast NMDA channels,” Learning Memory, vol. 3, no. 2-3, pp. 243–256, 1996.
  54. O. Jensen and J. E. Lisman, “Novel lists of 7 ± 2 known items can be reliably stored in an oscillatory short-term memory network: interaction with long-term memory,” Learning Memory, vol. 3, no. 2-3, pp. 257–263, 1996.
  55. J. Lisman, “What makes the brain's tickers tock?” Nature, vol. 394, no. 6689, pp. 132–133, 1998.
  56. O. Jensen and J. E. Lisman, “An oscillatory short-term memory buffer model can account for data on the Sternberg task,” Journal of Neuroscience, vol. 18, no. 24, pp. 10688–10699, 1998.
  57. O. Jensen, “Reading the hippocampal code by theta phase-locking,” Trends in Cognitive Sciences, vol. 9, no. 12, pp. 551–553, 2005.
  58. O. Jensen and J. E. Lisman, “Hippocampal sequence-encoding driven by a cortical multi-item working memory buffer,” Trends in Neurosciences, vol. 28, no. 2, pp. 67–72, 2005.
  59. E. Başar, Brain Functions and Oscillations, I. Brain Oscillations: Principles and Approaches, Springer, Berlin, Germany, 1998.
  60. E. Başar, C. Başar-Eroglu, S. Karakaş, and M. Schürmann, “Brain oscillations in perception and memory,” International Journal of Psychophysiology, vol. 35, no. 2-3, pp. 95–124, 2000.
  61. S. Grossberg and M. Versace, “Spikes, synchrony, and attentive learning by laminar thalamocortical circuits,” Brain Research, vol. 1218, pp. 278–312, 2008.
  62. J. T. Wall, J. Xu, and X. Wang, “Human brain plasticity: an emerging view of the multiple substrates and mechanisms that cause cortical changes and related sensory dysfunctions after injuries of sensory inputs from the body,” Brain Research Reviews, vol. 39, no. 2-3, pp. 181–215, 2002.
  63. E. Pöppel and N. Logothetis, “Neuronal oscillations in the human brain,” Naturwissenschaften, vol. 73, no. 5, pp. 267–268, 1986.
  64. C. von der Malsburg, “The what and why of binding: the modeler's perspective,” Neuron, vol. 24, no. 1, pp. 95–104, 1999.
  65. D. Lehmann, H. Ozaki, and I. Pal, “EEG alpha map series: brain micro-states by space-oriented adaptive segmentation,” Electroencephalography and Clinical Neurophysiology, vol. 67, no. 3, pp. 271–288, 1987.
  66. R. Lestienne and B. L. Strehler, “Differences between monkey visual cortex cells in triplet and ghost doublet informational symbols relationships,” Biological Cybernetics, vol. 59, no. 4-5, pp. 337–352, 1988.
  67. S. J. Thorpe and M. Imbert, “Biological constraints on connectionist models,” in Connectionism in Perspective, R. Pfeifer, Z. Schreter, and F. Fogelman-Soulié, Eds., pp. 63–92, Elsevier, Amsterdam, The Netherlands, 1989.
  68. F. Crick and C. Koch, “Towards a neurobiological theory of consciousness,” Seminars in Neuroscience, vol. 2, pp. 263–275, 1990.
  69. J. K. Tsotsos, “Analyzing vision at the complexity level,” Behavioral and Brain Sciences, vol. 13, no. 3, pp. 423–469, 1990.
  70. M. C. Potter, “Very short-term conceptual memory,” Memory & Cognition, vol. 21, no. 2, pp. 156–161, 1993.
  71. W. K. Strik and D. Lehmann, “Data-determined window size and space-oriented segmentation of spontaneous EEG map series,” Electroencephalography and Clinical Neurophysiology, vol. 87, no. 4, pp. 169–174, 1993.
  72. J. A. Gray, “Consciousness and its (dis)contents,” Behavioral and Brain Sciences, vol. 18, pp. 703–722, 1995.
  73. R. D. Pascual-Marqui, C. M. Michel, and D. Lehmann, “Segmentation of brain electrical activity into microstates: model estimation and validation,” IEEE Transactions on Biomedical Engineering, vol. 42, no. 7, pp. 658–665, 1995.
  74. J. G. Taylor, “A competition for consciousness?” Neurocomputing, vol. 11, no. 2–4, pp. 271–296, 1996.
  75. T. Koenig and D. Lehmann, “Microstates in language-related brain potential maps show noun-verb differences,” Brain and Language, vol. 53, no. 2, pp. 169–182, 1996.
  76. D. Lehmann, W. K. Strik, B. Henggeler, T. Koenig, and M. Koukkou, “Brain electric microstates and momentary conscious mind states as building blocks of spontaneous thinking. I. Visual imagery and abstract thoughts,” International Journal of Psychophysiology, vol. 29, no. 1, pp. 1–11, 1998.
  77. S. L. Bressler and J. A. S. Kelso, “Cortical coordination dynamics and cognition,” Trends in Cognitive Sciences, vol. 5, no. 1, pp. 26–36, 2001.
  78. M. M. Chun and R. Marois, “The dark side of visual attention,” Current Opinion in Neurobiology, vol. 12, no. 2, pp. 184–189, 2002.
  79. B. Libet, “The neural time factor in conscious and unconscious events,” in Experimental and Theoretical Studies of Consciousness, pp. 282–303, John Wiley & Sons, New York, NY, USA, 1993.
  80. B. Libet, “Timing of conscious experience: reply to the 2002 commentaries on Libet's findings,” Consciousness & Cognition, vol. 12, no. 3, pp. 321–331, 2003.
  81. B. Libet, Mind Time, Harvard University Press, Cambridge, Mass, USA, 2004.
  82. S. Grossberg, I. Boardman, and M. Cohen, “Neural dynamics of variable-rate speech categorization,” Journal of Experimental Psychology: Human Perception and Performance, vol. 23, no. 2, pp. 481–503, 1997.
  83. W. Bair, “Spike timing in the mammalian visual system,” Current Opinion in Neurobiology, vol. 9, no. 4, pp. 447–453, 1999.
  84. L. Shastri and V. Ajjanagadde, “From simple associations to systematic reasoning: a connectionist representation of rules, variables and dynamic bindings using temporal synchrony,” Behavioral and Brain Sciences, vol. 16, no. 3, pp. 417–494, 1993.
  85. F. Rieke, D. Warland, R. de Ruyter van Steveninck, and W. Bialek, Spikes: Exploring the Neural Code, MIT Press, Cambridge, Mass, USA, 1997.
  86. M. Yoshioka and M. Shiino, “Associative memory based on synchronized firing of spiking neurons with time-delayed interactions,” Physical Review E, vol. 58, no. 3, pp. 3628–3639, 1998.
  87. D. R. Moore and A. J. King, “Auditory perception: the near and far of sound localization,” Current Biology, vol. 9, no. 10, pp. R361–R363, 1999.
  88. S. A. Helekar, “On the possibility of universal neural coding of subjective experience,” Consciousness and Cognition, vol. 8, no. 4, pp. 423–446, 1999.
  89. W. Singer, “Phenomenal awareness and consciousness from a neurobiological perspective,” in Neural Correlates of Consciousness: Empirical and Conceptual Questions, T. Metzinger, Ed., pp. 121–137, MIT Press, Cambridge, Mass, USA, 2000.
  90. J. J. Eggermont, “Is there a neural code?” Neuroscience and Biobehavioral Reviews, vol. 22, no. 2, pp. 355–370, 1998.
  91. W. S. McCulloch and W. Pitts, “A logical calculus of the ideas immanent in nervous activity,” Bulletin of Mathematical Biophysics, vol. 5, pp. 115–133, 1943.
  92. D. M. MacKay and W. S. McCulloch, “The limiting information capacity of a neuronal link,” The Bulletin of Mathematical Biophysics, vol. 14, no. 2, pp. 127–135, 1952.
  93. S. Thorpe, A. Delorme, and R. Van Rullen, “Spike-based strategies for rapid processing,” Neural Networks, vol. 14, no. 6-7, pp. 715–725, 2001.
  94. R. VanRullen, R. Guyonneau, and S. J. Thorpe, “Spike times make sense,” Trends in Neurosciences, vol. 28, no. 1, pp. 1–4, 2005.
  95. D. S. Bassett, A. Meyer-Lindenberg, S. Achard, T. Duke, and E. Bullmore, “Adaptive reconfiguration of fractal small-world human brain functional networks,” Proceedings of the National Academy of Sciences of the United States of America, vol. 103, no. 51, pp. 19518–19523, 2006.
  96. N. Axmacher, F. Mormann, G. Fernandez, C. E. Elger, and J. Fell, “Memory formation by neuronal synchronization,” Brain Research Reviews, vol. 52, no. 1, pp. 170–182, 2006.
  97. M. Abeles, H. Bergman, E. Margalit, and E. Vaadia, “Spatiotemporal firing patterns in the frontal cortex of behaving monkeys,” Journal of Neurophysiology, vol. 70, no. 4, pp. 1629–1638, 1993.
  98. G. M. Edelman, “Neural Darwinism: selection and reentrant signaling in higher brain function,” Neuron, vol. 10, no. 2, pp. 115–125, 1993.
  99. F. Crick, The Astonishing Hypothesis: The Scientific Search for the Soul, Simon and Schuster, New York, NY, USA, 1994.
  100. C. Constantinidis, G. V. Williams, and P. S. Goldman-Rakic, “A role for inhibition in shaping the temporal flow of information in prefrontal cortex,” Nature Neuroscience, vol. 5, no. 2, pp. 175–180, 2002.
  101. P.-M. Lau and G.-Q. Bi, “Synaptic mechanisms of persistent reverberatory activity in neuronal networks,” Proceedings of the National Academy of Sciences of the United States of America, vol. 102, no. 29, pp. 10333–10338, 2005.
  102. R. Llinás, U. Ribary, D. Contreras, and G. Pedroarena, “The neuronal basis for consciousness,” Philosophical Transactions of the Royal Society B, vol. 353, no. 1377, pp. 1841–1849, 1998.
  103. R. Llinás and U. Ribary, “Consciousness and the brain: the thalamocortical dialogue in health and disease,” Annals of the New York Academy of Sciences, vol. 929, pp. 166–175, 2001.
  104. R. VanRullen and C. Koch, “Is perception discrete or continuous?” Trends in Cognitive Sciences, vol. 7, no. 5, pp. 207–213, 2003.
  105. M. Steriade, “Synchronized activities of coupled oscillators in the cerebral cortex and thalamus at different levels of vigilance,” Cerebral Cortex, vol. 7, no. 6, pp. 583–588, 1997.
  106. D. A. Pollen, “On the neural correlates of visual perception,” Cerebral Cortex, vol. 9, no. 1, pp. 4–19, 1999.
  107. V. A. F. Lamme, “Separate neural definitions of visual consciousness and visual attention; a case for phenomenal awareness,” Neural Networks, vol. 17, no. 5-6, pp. 861–872, 2004.
  108. V. A. F. Lamme, “Towards a true neural stance on consciousness,” Trends in Cognitive Sciences, vol. 10, no. 11, pp. 494–501, 2006.
  109. G. M. Edelman, The Remembered Present, Basic Books, New York, NY, USA, 1989.
  110. G. Tononi, O. Sporns, and G. M. Edelman, “Re-entry and the problem of integrating multiple cortical areas: simulation of dynamic integration in the visual system,” Cerebral Cortex, vol. 2, pp. 310–335, 1992.
  111. G. Tononi and G. M. Edelman, A Universe of Consciousness: How Matter Becomes Imagination, Basic Books, New York, NY, USA, 2000.
  112. J. M. Fuster, “Cortical dynamics of memory,” International Journal of Psychophysiology, vol. 35, no. 2-3, pp. 155–164, 2000.
  113. J. Prinz, “A neurofunctional theory of visual consciousness,” Consciousness & Cognition, vol. 9, no. 2, pp. 243–259, 2000.
  114. V. Di Lollo, J. T. Enns, and R. A. Rensink, “Competition for consciousness among visual events: the psychophysics of reentrant visual processes,” Journal of Experimental Psychology, vol. 129, no. 4, pp. 481–507, 2000.
  115. W. Klimesch, M. Doppelmayr, A. Yonelinas et al., “Theta synchronization during episodic retrieval: neural correlates of conscious awareness,” Cognitive Brain Research, vol. 12, no. 1, pp. 33–38, 2001.
  116. L. C. Robertson, “Binding, spatial attention and perceptual awareness,” Nature Reviews Neuroscience, vol. 4, no. 2, pp. 93–102, 2003.
  117. F. Crick and C. Koch, “A framework for consciousness,” Nature Neuroscience, vol. 6, no. 2, pp. 119–126, 2003.
  118. R. Gutig and H. Sompolinsky, “The tempotron: a neuron that learns spike timing-based decisions,” Nature Neuroscience, vol. 9, no. 3, pp. 420–428, 2006.
  119. S. B. Nelson, “Cortical microcircuits: diverse or canonical?” Neuron, vol. 36, no. 1, pp. 19–27, 2002.
  120. M. Nedergaard, B. Ransom, and S. A. Goldman, “New roles for astrocytes: redefining the functional architecture of the brain,” Trends in Neurosciences, vol. 26, no. 10, pp. 523–530, 2003.
  121. T. Fellin and G. Carmignoto, “Neurone-to-astrocyte signalling in the brain represents a distinct multifunctional unit,” Journal of Physiology, vol. 559, no. 1, pp. 3–15, 2004.
  122. T. H. Bullock, M. V. L. Bennett, D. Johnston, R. Josephson, E. Marder, and R. D. Fields, “The neuron doctrine, redux,” Science, vol. 310, no. 5749, pp. 791–793, 2005.
  123. A. Volterra and J. Meldolesi, “Astrocytes, from brain glue to communication elements: the revolution continues,” Nature Reviews Neuroscience, vol. 6, no. 8, pp. 626–640, 2005.
  124. Y. Yamazaki, Y. Hozumi, K. Kaneko et al., “Direct evidence for mutual interactions between perineuronal astrocytes and interneurons in the CA1 region of the rat hippocampus,” Neuroscience, vol. 134, no. 3, pp. 791–802, 2005.
  125. V. S. Ramachandran, D. Rogers-Ramachandran, and S. Cobb, “Touching the phantom limb,” Nature, vol. 377, no. 6549, pp. 489–490, 1995.
  126. V. S. Ramachandran, “Consciousness and body image: lessons from phantom limbs, Capgras syndrome and pain asymbolia,” Philosophical Transactions of the Royal Society B, vol. 353, no. 1377, pp. 1851–1859, 1998.
  127. T. Guéniot, “D'une hallucination du toucher (hétérotopie subjective des extrémités) particulière à certains amputés,” Journal de Physiologie de l'Homme et des Animaux, vol. 4, pp. 416–418, 1868.
  128. M. M. Merzenich, R. J. Nelson, M. P. Stryker, M. S. Cynader, A. Schoppmann, and J. M. Zook, “Somatosensory cortical map changes following digit amputation in adult monkeys,” Journal of Comparative Neurology, vol. 224, no. 4, pp. 591–605, 1984.
  129. V. S. Ramachandran, D. Rogers-Ramachandran, and M. Stewart, “Perceptual correlates of massive cortical reorganization,” Science, vol. 258, no. 5085, pp. 1159–1160, 1992.
  130. E. R. John, “A field theory of consciousness,” Consciousness and Cognition, vol. 10, no. 2, pp. 184–213, 2001.
  131. A. K. Engel, P. Konig, A. K. Kreiter, T. B. Schillen, and W. Singer, “Temporal coding in the visual cortex: new vistas on integration in the nervous system,” Trends in Neurosciences, vol. 15, no. 6, pp. 218–226, 1992.
  132. G. Stockmanns, E. Kochs, W. Nahm, C. Thornton, and C. J. Kalkmann, “Automatic analysis of auditory evoked potentials by means of wavelet analysis,” in Memory and Awareness in Anaesthesia IV, D. C. Jordan, D. J. A. Vaughan, and D. E. F. Newton, Eds., pp. 117–131, Imperial College Press, London, UK, 2000.
  133. R. Llinás and U. Ribary, “Coherent 40-Hz oscillation characterizes dream state in humans,” Proceedings of the National Academy of Sciences of the United States of America, vol. 90, no. 5, pp. 2078–2081, 1993.
  134. S. Grossberg, “A neural model of attention reinforcement and discrimination learning,” International Review of Neurobiology, vol. 18, pp. 263–327, 1975.
  135. S. Grossberg and A. Grunewald, “Cortical synchronization and perceptual framing,” Journal of Cognitive Neuroscience, vol. 9, no. 1, pp. 117–132, 1997.
  136. S. Grossberg and C. W. Myers, “The resonant dynamics of speech perception: interword integration and duration-dependent backward effects,” Psychological Review, vol. 107, no. 4, pp. 735–767, 2000.
  137. D. Lewis, “Mad pain and Martian pain,” in Philosophical Papers. Vol. I, D. Lewis, Ed., Oxford University Press, Oxford, UK, 1983.
  138. T. Nagel, “What is it like to be a bat?” The Philosophical Review, vol. 83, pp. 435–450, 1974.
  139. H. Feigl, “The “Mental” and the “Physical”,” in Concepts, Theories and the Mind-Body Problem, H. Feigl, M. Scriven, and G. Maxwell, Eds., vol. 2 of Studies in the Philosophy of Science, Minneapolis, Minn, USA, 1958.
  140. J. A. Gray, “The mind-brain identity theory as a scientific hypothesis,” Philosophical Quarterly, vol. 21, pp. 247–252, 1971.