Review Article

Structure-Function Relationships behind the Phenomenon of Cognitive Resilience in Neurology: Insights for Neuroscience and Medicine

Figure 7

Plasticity and pluripotency. Pluripotent nodes can be recruited to provide a common alternate relay between groups of nodes subserving different, independent processes (Figure 5). As the different processes compete for the pluripotent node’s bandwidth, the pluripotent node’s intrinsic subnetwork of processing units must learn to carry and output the signals from the different competing processes in a manner that leaves them ultimately unadulterated. One option is to multiplex the signals from the different processes over the subnetwork, for instance based on their frequency content (a). If two processes P1 and P2 (blue and red) generate nonoverlapping, narrow-band signals with different central frequencies f1 and f2, the subnetwork can learn to tune its output nodes’ transfer functions so that they separately filter the two frequency bands and adequately relay the signals (see the small chart on the right; an illustrative filtering sketch follows the figure). Oscillations and the emission of power in multiple frequency bands over the same brain regions are reliably observed in electrophysiological signals (see [117]). Another option is to keep the signals from the different processes separated within the subnetwork over different, potentially cross-inhibiting subpipelines. Many other options can be envisioned. In all cases, one way or another, the pluripotent node must learn, as it is recruited for renormalization of information transfer, to separate independent sources of signals (b). Blind source separation can be performed by unsupervised algorithms such as principal component analysis (PCA), for uncorrelated sources of Gaussian signals, and by independent component analysis (ICA), for statistically independent sources of signals (see [118]; a separation sketch follows the figure). In both cases, source separation eliminates cross-talk and interference between processes, which is expressed as a mutual information between the nodes of different processes that becomes null (see the small chart in (b)). These algorithms can easily be implemented in small neural networks. PCA is formally equivalent to neural network implementations based on classical learning rules from connectionism such as Hebb’s rule for unsupervised learning (see [119]). Hebb’s rule increases the synaptic weight w, that is, the local effective connectivity, between a presynaptic neuron and a postsynaptic neuron when sustained correlations are present in their activity (see [120]). In this framework, synaptic weights behave just like the loadings of principal components along directions of maximal covariance (see the Hebbian-learning sketch following the figure). ICA can be implemented with modifications and generalizations of Hebb’s rule that minimize mutual information between processes while maximizing each process’s own intrinsic quantity of information (e.g., the infomax algorithm [121]).
(a) Multiplexing over frequency bands. (b) Blind source separation.
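As an illustration of the frequency-multiplexing option in panel (a), the following minimal Python sketch sums two narrow-band signals onto a single relay channel and recovers each one with a band-pass filter. The central frequencies (10 and 40 Hz), sampling rate, and Butterworth filter design are illustrative assumptions, not values taken from the article.

```python
# Minimal sketch of panel (a): two narrow-band processes share one relay
# channel, and tuned transfer functions separate them again.
# Frequencies and filter parameters are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000.0                          # sampling rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)      # 2 s of signal

f1, f2 = 10.0, 40.0                  # nonoverlapping central frequencies
blue = np.sin(2 * np.pi * f1 * t)    # process P1 (blue)
red = np.sin(2 * np.pi * f2 * t)     # process P2 (red)
relay = blue + red                   # both processes multiplexed on one node

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

# The relay's output nodes act as tuned transfer functions:
out1 = bandpass(relay, 5, 15, fs)    # recovers the 10 Hz process
out2 = bandpass(relay, 35, 45, fs)   # recovers the 40 Hz process

print(np.corrcoef(out1, blue)[0, 1])  # ~1.0: P1 relayed unadulterated
print(np.corrcoef(out2, red)[0, 1])   # ~1.0: P2 relayed unadulterated
```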
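The blind source separation of panel (b) can likewise be sketched with an off-the-shelf ICA implementation. The sketch below, assuming an arbitrary 2x2 mixing matrix and two independent non-Gaussian sources, uses scikit-learn's FastICA to unmix them; the near-zero cross-correlation between the recovered components mirrors the null mutual information shown in the panel (b) chart.

```python
# Minimal sketch of panel (b): blind source separation with ICA.
# Sources and mixing matrix are arbitrary illustrative choices.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n = 5000
t = np.linspace(0, 10, n)

# Two independent, non-Gaussian sources (ICA requires non-Gaussianity).
s1 = np.sign(np.sin(2 * np.pi * 3 * t))   # square wave
s2 = rng.laplace(size=n)                  # heavy-tailed noise
S = np.c_[s1, s2]

A = np.array([[1.0, 0.6], [0.4, 1.0]])    # unknown mixing (cross-talk)
X = S @ A.T                               # what the relay observes

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)              # recovered sources

# Cross-correlation between recovered components is ~0: no interference,
# echoing the null mutual information of the panel (b) chart.
print(np.corrcoef(S_hat.T).round(3))
```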
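Finally, the correspondence between Hebbian learning and PCA can be sketched with Oja's rule, a standard normalized variant of Hebb's rule (plain Hebbian growth is unstable without such a decay term): a single postsynaptic unit trained on correlated inputs ends up with a weight vector aligned with the first principal component of the input covariance. The data dimensions, covariance, and learning rate below are illustrative assumptions.

```python
# Minimal sketch of the Hebb-PCA correspondence via Oja's rule.
# All numerical choices are illustrative.
import numpy as np

rng = np.random.default_rng(1)

# Correlated 2-D "presynaptic" activity with a dominant covariance direction.
cov = np.array([[3.0, 1.2], [1.2, 1.0]])
X = rng.multivariate_normal([0.0, 0.0], cov, size=20000)

w = rng.normal(size=2)          # synaptic weights onto one postsynaptic unit
eta = 0.01                      # learning rate
for x in X:
    y = w @ x                   # postsynaptic activity
    w += eta * y * (x - y * w)  # Oja's rule: Hebbian term y*x with decay y^2*w

# Compare against the leading eigenvector of the covariance matrix (PCA).
eigvals, eigvecs = np.linalg.eigh(cov)
pc1 = eigvecs[:, -1]
# The learned weights match the first principal component's loadings
# up to an arbitrary sign flip.
print(w / np.linalg.norm(w), pc1)
```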