Computational and Mathematical Methods in Medicine

Research Article | Open Access

Volume 2017 | Article ID 6102494 | 13 pages

Preventing Neurodegenerative Memory Loss in Hopfield Neuronal Networks Using Cerebral Organoids or External Microelectronics

Academic Editor: Henggui Zhang
Received: 05 Apr 2017
Accepted: 06 Jun 2017
Published: 05 Sep 2017


Developing technologies have made significant progress towards linking the brain with brain-machine interfaces (BMIs), which have the potential to aid damaged brains in performing their original motor and cognitive functions. We consider the viability of such devices for mitigating the deleterious effects of memory loss induced by neurodegenerative diseases and/or traumatic brain injury (TBI). Our computational study considers the widely used Hopfield network, an autoassociative memory model in which neurons converge to a stable state pattern after receiving an input resembling the given memory. In this study, we connect an auxiliary network of neurons, which models the BMI device, to the original Hopfield network and train it to converge to its own auxiliary memory patterns. Injuries to the original Hopfield memory network, induced through neurodegeneration, for instance, can then be analyzed with the goal of evaluating the ability of the BMI to aid in memory retrieval tasks. Dense connectivity between the auxiliary and Hopfield networks is shown to promote robustness of memory retrieval tasks for both optimal and nonoptimal memory sets. Our computations estimate damage levels and parameter ranges for which full or partial memory recovery is achievable, providing a starting point for novel therapeutic strategies.

1. Introduction

In the past few years, we have witnessed the emergence of several remarkable technologies in regenerative neurobiology, including hybrid systems of neurons and semiconductor microelectronics [1, 2], brain-machine interfaces [3, 4], and the manipulation of induced pluripotent stem cells, which develop into recognizable miniature cerebral organoids [5–9]. Such advancements are likely to play a key role in future medical and rehabilitation research due to the growing number of traumatic brain injury incidents and the overall aging of the population, which increases the risk of dementia and neurodegenerative diseases. Even the most complex and sophisticated neuronal networks, such as the ones present in the human brain, are exposed to pathological effects that may culminate in functional losses such as motor and cognitive deficits, impaired decision-making, and memory impairments. With approximately one new Alzheimer's case diagnosed every 68 seconds in the US alone, it is important to investigate in theoretical and computational settings whether minibrains, organoids, and external devices could help mitigate its most prevalent symptom: memory loss. To this end, we examine memory degradation in Hopfield neuronal networks that retrieve stored patterns from partial or noisy cues and evaluate to what extent an external auxiliary network can help restore their original functionality.

Traumatic brain injuries (TBI) and neurodegenerative diseases are among the main causes of cognitive dysfunction in humans [10–15]. Both sources of dysfunction exhibit a significant presence of focal axonal swellings [16–22]. Axonal injuries hinder the information encoded in spike trains [23–25], thus leading to potentially severe functional deficits [26–32]. Our understanding of the impact of axonal swellings is challenged by our inability to access small-scale injuries with noninvasive methods, the overall complexity of neuronal pathologies, and our limited knowledge of how networks process biological signals [33]. While it is difficult to diagnose and treat pathological effects on small spatial scales in vivo, recent developments in stem cell technologies might revolutionize current therapeutics [9]. Structures resembling whole organs, termed organoids, have been generated from stem cells for the intestine, the kidney, and, most impressively, the brain [9]. In fact, the groundbreaking work of Lancaster et al. [8] opened new routes for studying developmental diseases and degenerative conditions in minibrains [5–7]. In some cases, the tissues derived in vitro from patient cells may be used in organ replacement strategies.

Due to the complexity of the brain, however, it is not clear that the addition of a small network of external neurons or a hybrid bioelectronic system could restore the information processing capabilities required for higher cognitive functions. Moreover, there is currently a significant lack of biophysical evidence or experimental studies capable of directly addressing this issue. This motivates our development of a computational framework, using the Hopfield model for associative memory, which provides a platform to study the conditions under which an auxiliary external network may prevent and/or reverse TBI and neurodegenerative impairments. The purpose of this study is to simulate healthy and damaged memory networks during memory retrieval tasks. In our model, we can explicitly allow an auxiliary network to communicate with the original network and then delineate its effect on memory retrieval. We demonstrate that an auxiliary network can indeed mitigate the effects of memory loss due to progressive neurodegeneration of the Hopfield network.

Minibrains, or cerebral organoids, will likely play a critical role in our understanding of human brain development and in the modeling of neurodevelopmental disorders [6]. An organoid is an organized group of organ-specific cells that forms an undeveloped organ in vitro [9]. Stem cells or organ progenitors are treated with growth factors and placed in conditions that allow a 3D organoid to form. Researchers treat an embryonic stem cell medium with low levels of basic fibroblast growth factor, and once a 3D aggregate of cells forms, it is transferred to a neural induction medium [5, 8]. The resulting cerebral organoids display distinct brain regions; however, these regions are randomly organized and lack the structure of a developing human brain (Figure 1(e)) [5, 8]. In a later study, researchers formed organoids from pluripotent stem cells (PSCs) or induced PSCs (iPSCs) [9]. These organoids develop through cell sorting and spatially restricted lineage commitment, in a manner similar to how the human brain develops [9]. Although cerebral organoids are currently small, they exhibit characteristics of normal brain development and show discrete brain regions [7]. This brings hope that cerebral organoids can be useful for disease modeling, drug testing, and organ replacement. For example, Lancaster et al. used a cerebral organoid to create a more accurate model of human microcephaly [8]. Currently, cerebral organoids have some drawbacks: they can only be grown to a diameter of 4 mm due to a lack of vascularization [9], and researchers do not know to what extent electrical potential and connectivity are the same in the developing human brain and in a cerebral organoid [7].

Several strategies are enabling functional interfaces between neuronal networks and electronics at various resolutions. Some of these systems are intended to interface with individual neurons. Fromherz created a hybrid system of neurons and semiconductor microelectronics by using a chip to excite and record neuronal activity (Figure 1(b)) [1]. The neurons interfaced with silicon microstructures, allowing information transfer between ion-conducting neurons and electron-conducting silicon [2]. Researchers have also grown cells on graphene solution-gated field-effect transistors, which were able to stimulate the cells [37], and have used microscale gold electrodes to communicate with rat brain neurons (Figure 1(f)) [36]. Fine-resolution stimulation has been achieved with vertical silicon nanowire probes which, with an intracellular interface, can measure a neuron's action potential [38].

Other systems are intended to interface with an entire brain region. Nanowires were successfully integrated into hippocampal regions (Figures 1(c) and 1(d)) [35]. Microelectrode array (MEA) technology can record and stimulate at the neuronal network level by sending and retrieving information at multiple network sites with a noninvasive interface [39]. This allowed stem cells that grew into neural networks to be interfaced; digital signals fed to the network via the MEAs created distinct neural output patterns (Figure 1(a)) [34]. In a later experiment, Pizzi et al. stimulated a neural network with MEAs and read the output with an artificial neuronal network [40]. Using this interface, they were able to train the neuronal network to control a robotic arm given different inputs. Current MEA directions include creating higher density MEAs, flexible electrodes, and tactics to monitor subthreshold activity [39].

In addition to interfacing with organically grown neural networks, researchers are also creating artificial neurons [41]. These artificial neurons are made with chalcogenide-based materials that change states when a current is applied; this allows them to represent a neuron’s membrane potential while also being stochastic. Future studies aim to link these synthetic neurons into a network.

Brain-machine interfaces (BMIs) interpret motor intent from cortical signals and then stimulate muscles or the spinal cord. These devices have the potential to allow people with spinal cord injury to regain motor function [3]. Signals from the brain can be used to control more than just the limbs; researchers have trained monkeys to navigate a wheelchair using a BMI [42]. BMIs can also help reverse the effects of TBI by first processing neural intent from the brain and then generating targeted feedback that helps the brain execute a function, whether it is retrieving a memory or performing a motor task [4]. Connecting technology to the brain may be achieved with a recently developed microfabricated neural probe, which is stiff enough for insertion into the brain but later dissolves, leaving a polymeric structure that forms a suitable interface between the probe and the brain [43].

2. Materials and Methods

Hopfield networks of artificial neurons are the most widely used class of models for memory retrieval tasks [44–47]. More advanced models of memory encoding and retrieval have been developed since the pioneering work of Hopfield [48–53]. However, we focus here on a Hopfield associative memory model in order to illustrate how an auxiliary network can be trained together with the Hopfield network. Damage to axons and neuronal connections typically impairs the network's collective functionality beyond a critical threshold value [54]. In the worst-case scenarios, focal axonal swellings, axotomies, or cell death can block the information encoded in spike trains and effectively zero out weights in the connectivity matrix [23–25]. In this study, we couple a smaller auxiliary network to an injured Hopfield network to improve memory retrieval (Figures 3 and 4). Auxiliary neurons are connected to the original network in a sparse manner, to mimic experimental constraints, and trained to converge to auxiliary memory patterns. We then randomly induce injury across the original network and analyze its ability to retrieve memories with the appended external network.

2.1. Hopfield Model

In our autoassociative memory network, coupled artificial neurons respond to meaningful external cues with stable, collective activity patterns ξ^μ (where μ = 1, …, K). The memory collection {ξ^1, …, ξ^K} is composed of all of the attractors of the high-dimensional dynamical system (Figure 2). In Hopfield's original formulation [44, 47], a neuron i is either "on," with state x_i = +1, or "off," with state x_i = −1. The connectivity strength (weight) between neurons i and j is given by w_ij and stored in the matrix W. The temporal dynamical equation for the neuronal state is given by

x_i(t + 1) = g(h_i(t)),

with input potential

h_i(t) = ∑_j w_ij x_j(t)

and gain function

g(h) = +1 if h ≥ 0, and g(h) = −1 if h < 0.

At a given damage level, we randomly eliminate a percentage of the connections in W, which interferes with the network's ability to properly recover a stored memory.
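These dynamics are straightforward to simulate. Below is a minimal NumPy sketch (our illustration, not the authors' code; the random memories, pattern sizes, and damage routine are assumptions for demonstration):

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian rule: w_ij = (1/n) * sum_mu xi_i^mu * xi_j^mu, zero diagonal."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, x, steps=50):
    """Iterate x <- g(W x), with gain g(h) = +1 if h >= 0 and -1 otherwise."""
    for _ in range(steps):
        x_new = np.where(W @ x >= 0, 1, -1)
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x

def damage(W, frac, rng):
    """Model injury by zeroing a random fraction of the connections."""
    return W * (rng.random(W.shape) >= frac)

rng = np.random.default_rng(0)
memories = np.where(rng.random((3, 100)) < 0.5, 1, -1)   # three random 100-neuron memories
W = train_hopfield(memories)

cue = memories[0].copy()
cue[:10] *= -1                                 # noisy cue: 10% of the bits flipped
overlap = recall(W, cue) @ memories[0] / 100   # 1.0 means perfect retrieval
overlap_damaged = recall(damage(W, 0.3, rng), cue) @ memories[0] / 100
```

With three random 100-neuron memories the load is far below the Hopfield capacity (roughly 0.138N), so the corrupted cue settles back onto the stored pattern; how much damage retrieval tolerates is exactly what the study quantifies.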

2.2. Simulation Procedure: Hopfield Model with Auxiliary Network

To simulate a damaged brain retrieving memories, we select a memory set, encode it in a Hopfield network, induce network damage, and measure memory retrieval. Each memory is a unique binary pattern generated from a Unicode symbol, and each of its 1000 pixels represents an artificial neuron's state with value −1 (black/"off") or +1 (white/"on"). Memory sets contain 20 symbols (see Figure 2) selected from a 30-symbol dataset. As pointed out by Gerstner et al. [47], numerical digits and alphabetical characters can be highly correlated, making it difficult for a Hopfield network to retrieve them from partial or noisy cues without some degree of confusion. To circumvent this, we optimize the 20-item memory set by choosing the most orthogonal subset (i.e., the characters that are collectively least correlated with each other). We refer to this as the optimal network and compare it with a nonoptimal memory set that shares 15 of its 20 memories. This suffices for observing noticeable differences in performance.
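The text does not spell out the optimization procedure; one simple way to pick a "most orthogonal" subset is a greedy prune on pairwise correlations (a hypothetical sketch, with random ±1 patterns standing in for the 30 Unicode bitmaps):

```python
import numpy as np

def most_orthogonal_subset(patterns, k):
    """Greedily prune +/-1 patterns until k remain, removing at each step the
    pattern with the largest total absolute correlation to the others."""
    n = patterns.shape[1]
    C = np.abs(patterns @ patterns.T) / n       # pairwise |correlation|
    np.fill_diagonal(C, 0.0)
    idx = list(range(len(patterns)))
    while len(idx) > k:
        sub = C[np.ix_(idx, idx)]
        worst = idx[int(np.argmax(sub.sum(axis=1)))]
        idx.remove(worst)
    return sorted(idx)

rng = np.random.default_rng(1)
symbols = np.where(rng.random((30, 100)) < 0.5, 1, -1)   # stand-ins for 30 Unicode bitmaps
chosen = most_orthogonal_subset(symbols, 20)
```

A greedy prune is not guaranteed to find the globally least-correlated 20-subset, but for 30 candidates it is a cheap and usually adequate heuristic.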

2.3. Train Auxiliary Network

In order to incorporate the auxiliary network, we extended each activity pattern to include the activity exhibited in the auxiliary network during the retrieval of a particular memory. Figures 3 and 4 and Table 1 summarize our methodology and simulation parameters. In our new model, each activity pattern is now the old memory pattern, ξ_org^μ, augmented with an auxiliary memory pattern, ξ_aux^μ. Hence, each stored pattern becomes the concatenation ξ^μ = [ξ_org^μ, ξ_aux^μ].

Table 1: Simulation parameters.

Number of stored memories: 20 (see Figure 2)
Neurons in the original network: 1000
Neurons in the auxiliary network: 200 or 400
Density of the original-to-auxiliary interface: 5%
Density of the auxiliary-to-original interface: 0%–50% (5% increments)
Noise level: 0%–50% (5% increments)
Damage level (sparsity in the original network): 0%–80% (10% increments)

The memory collection is composed of these new activity patterns (Figure 2). The full connectivity matrix describing the connectivity weights between all neurons is block-structured, combining the original network connectivity matrix, the original-auxiliary interface, the auxiliary network connectivity matrix, and the auxiliary-original interface as submatrices.

The healthy original Hopfield network encodes all 20 stored memories (Figure 3(a)); connections between neurons in this network form the original network connectivity matrix. The original network is sparsely connected to a small set of auxiliary neurons through the original-auxiliary connectivity matrix. After calibration, the auxiliary network converges to auxiliary memory patterns (Figure 3(b)); interconnections between neurons in this network, which now stores a set of patterns, form the auxiliary network connectivity matrix. We generate the auxiliary network connectivity matrix (Figure 3(c)) with random interfacing to the original network before any damage, for prevention purposes. Finally, the auxiliary-original connectivity matrix holds the feedback weights from auxiliary to original neurons (Figure 3(d)). The connectivity degree of the interface, that is, the fraction of novel, random connections between the two networks, may vary (5%–50%).
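Assembling the combined system can be pictured as a 2×2 block matrix. The sketch below is our illustration with random placeholder weights (the block names are ours; the sizes and interface densities follow Table 1):

```python
import numpy as np

rng = np.random.default_rng(2)
n_org, n_aux = 1000, 200                              # sizes from Table 1

def sparse_random(rows, cols, density, rng):
    """Random weights with only a `density` fraction of entries nonzero."""
    w = rng.standard_normal((rows, cols)) / cols
    return w * (rng.random((rows, cols)) < density)

W_org = rng.standard_normal((n_org, n_org)) / n_org   # placeholder for trained original weights
W_aux = rng.standard_normal((n_aux, n_aux)) / n_aux   # placeholder for trained auxiliary weights
org_to_aux = sparse_random(n_aux, n_org, 0.05, rng)   # 5% read-out interface
aux_to_org = sparse_random(n_org, n_aux, 0.25, rng)   # feedback interface (0%-50% in the study)

# The state vector stacks the original pattern on top of the auxiliary pattern,
# so the input potential of the combined system uses the 2x2 block matrix:
W_full = np.block([[W_org,      aux_to_org],
                   [org_to_aux, W_aux     ]])
```

Keeping the four blocks separate makes the simulation stages easy to express: damage sparsifies only the original block, while calibration and recovery toggle the two off-diagonal interfaces.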

Just as before, the temporal dynamical equation for the neuronal state is given by

x_i(t + 1) = g(h_i(t)),

with input potential

h_i(t) = ∑_j w_ij x_j(t)

and gain function g as above, where the index j now runs over all neurons of the combined original-auxiliary system.

The connectivity matrix used to compute the input potential transitions through different states to simulate the emergence and attenuation of connections within and among the original and auxiliary networks. When the original-auxiliary submatrix is inserted into the full connectivity matrix, each original network pattern generates an auxiliary network pattern. These auxiliary patterns are recorded and then used to construct the auxiliary network connectivity submatrix via the Hopfield model.
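The calibration step described above — each original memory drives, through the sparse interface, an auxiliary response that is thresholded into a ±1 pattern and then Hebbian-trained into the auxiliary block — could be sketched as follows (our reading, with sizes from Table 1 and random placeholder weights):

```python
import numpy as np

rng = np.random.default_rng(3)
n_org, n_aux, K = 1000, 200, 20                            # sizes from Table 1

memories = np.where(rng.random((K, n_org)) < 0.5, 1, -1)   # stand-ins for the 20 stored memories
org_to_aux = (rng.standard_normal((n_aux, n_org))
              * (rng.random((n_aux, n_org)) < 0.05))       # sparse 5% interface

# Each original pattern drives an auxiliary response; thresholding gives a +/-1 pattern.
aux_patterns = np.where(org_to_aux @ memories.T >= 0, 1, -1).T   # shape (K, n_aux)

# Hebbian (Hopfield) training of the auxiliary block on the recorded patterns.
W_aux = aux_patterns.T @ aux_patterns / n_aux
np.fill_diagonal(W_aux, 0.0)
```

Because the interface is random and fixed before damage, the auxiliary patterns act as a private re-encoding of the original memories that survives later lesions to the original block.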


2.4. Execute Memory Recovery

Virtual lesions to the original network are represented by a sparsified original connectivity matrix and lead to deficits in memory retrieval tasks (Figure 4(a)). But since the auxiliary network was calibrated and trained, it still retrieves its auxiliary patterns once the sparse interface is activated (Figures 4(b)–4(c)), allowing for a significantly more robust system (Figure 4(d)). As a consequence, the failure rate, that is, the percentage of trials in which the original network fails to fully retrieve memories, decreases significantly. In what follows, we discuss the impact of different parameters on performance: noise level (0%–50%), connectivity between the auxiliary and original networks (0%–50%), percent damage in the original network (0%–80%), and size of the auxiliary network (200 or 400 neurons).
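The failure rate can be estimated by Monte Carlo: corrupt a stored memory at a given noise level, run the (possibly lesioned) dynamics, and count trials that do not settle exactly on the memory. A small-scale sketch (our illustration; the sizes here are smaller than those in Table 1):

```python
import numpy as np

rng = np.random.default_rng(4)
K, n = 10, 200
memories = np.where(rng.random((K, n)) < 0.5, 1, -1)
W = memories.T @ memories / n
np.fill_diagonal(W, 0.0)
W_damaged = W * (rng.random(W.shape) >= 0.3)    # lesion 30% of the connections

def retrieve(W, cue, steps=50):
    """Run the Hopfield dynamics from a cue until a fixed point (or step limit)."""
    x = cue.copy()
    for _ in range(steps):
        x_new = np.where(W @ x >= 0, 1, -1)
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x

def failure_rate(W, memories, noise, trials, rng):
    """Fraction of trials in which a noisy cue is not retrieved exactly."""
    fails = 0
    for _ in range(trials):
        m = memories[rng.integers(len(memories))]
        cue = m.copy()
        flip = rng.random(len(cue)) < noise     # flip each bit with probability `noise`
        cue[flip] *= -1
        fails += not np.array_equal(retrieve(W, cue), m)
    return fails / trials

healthy = failure_rate(W, memories, noise=0.1, trials=100, rng=rng)
injured = failure_rate(W_damaged, memories, noise=0.1, trials=100, rng=rng)
```

Sweeping the noise, damage, and interface-density parameters over the ranges listed above and repeating this estimate at each grid point reproduces the kind of surfaces reported in Figures 5–9.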


3. Results

3.1. Failure Rates and Network Performance

At all damage levels, the failure rate increases with noise, as expected (see Figure 5). At low damage levels, the optimal network performs well and the auxiliary network has a negligible effect on the failure rate. Once damage exceeds this regime, however, the failure rate decreases with a denser auxiliary-original interface, indicating that the auxiliary network in fact prevents memory loss in the original network. As shown in Figure 5, this trend is common to both small (200-neuron) and large (400-neuron) auxiliary networks, although at the highest damage and noise levels the larger auxiliary network outperforms the smaller one.

Performance declines with increased damage, as expected (see Figure 8); however, a denser auxiliary-original interface improves network functionality at low noise levels. The smaller auxiliary network (200 neurons) improves performance at low noise but worsens it at larger noise values; the larger auxiliary network (400 neurons) likewise improves performance at low noise and worsens it only at the highest noise levels. The shift from enhancing to undermining memory recovery may stem from the fact that, at high noise levels, the auxiliary memory itself might not be retrieved and therefore fails to feed correct information back to the original network. Finally, Figure 6 shows how performance decreases with both damage and noise levels. Again, denser interfacing between the auxiliary and original networks counteracts the detrimental effects of damage on performance, and the larger auxiliary network (400 neurons) enhances memory retrieval more than the smaller one (200 neurons).

3.2. Comparison with Nonoptimal Memory Sets

As previously mentioned, Unicode symbols may be highly correlated, making it difficult for the Hopfield model to retrieve them, without confusion, from noisy or partial cues. Figures 5, 8, and 6 refer to networks that encoded an optimal memory subset. For contrast, we discuss nonoptimal memory sets, which share only 15 out of 20 memories with the optimal set; the remaining elements are randomly chosen from the Unicode list of characters. As a consequence, the nonoptimal network has higher failure rates than the optimal network across all noise and connectivity levels (Figure 7). Still, the nonoptimal network shows trends similar to the optimal one: at low damage levels, the auxiliary network does not affect memory retrieval, while at higher damage levels the failure rate decreases with increased connectivity. The performance trends are also analogous to the optimal network's (see Figure 8), though with lower values. Again, performance decreases with both increased damage and noise (see Figure 9), with denser interfaces and larger auxiliary networks better counteracting the detrimental effects of damage.

4. Discussion

In this work, we introduce a computational framework to evaluate potential enhancements in memory retrieval within a damaged Hopfield network. Axonal damage and connectivity impairments cause memory loss while communication with an auxiliary network may help prevent the loss of functionality. In our setup, the Hopfield network can tolerate significant amounts of damage before memory loss ensues, although this may vary according to the specific parameters in simulations. The beneficial/therapeutic effects of the auxiliary network are more noticeable at higher damage levels, for both optimal and nonoptimal memory sets. As a consequence, the auxiliary networks can be expected to compensate for severe deficits but not aid healthy individuals in retrieving memories.

The density of the interface between the original and external networks is a key factor for improving performance. Since there are several biological and experimental difficulties that may constrain the number of connections between the systems, it is somewhat promising that sparse connectivity suffices to calibrate complementary memories. But even if the auxiliary network needs to read activity from only a few brain regions, it must still stimulate the damaged brain in many areas.

In our setup, we interpret the original network as the human brain or one of its components, while the auxiliary network may take different forms, such as a cerebral organoid, an artificial neuronal network, or a semiconductor chip. In order for an organoid to be useful as an auxiliary network, it must follow the same Hopfield plasticity rules that the original network follows so that connectivity will restructure during calibration. Researchers have recently grown cerebral organoids that display discrete brain regions [8, 9]. However, it is unknown whether the connectivity in these organoids can be controlled; this organic neural network must be trainable for it to be useful as an auxiliary network. Another obstacle to overcome is the size of the organoid. In our computational study, the larger auxiliary network aids in memory retrieval more than the small auxiliary network; the auxiliary networks were 40% and 20% the size of the original network, respectively. Current cerebral organoids have a maximum diameter of 4 mm, making them less than 2% of the size of a human hippocampus, the brain region associated with memory storage. An auxiliary network much smaller than the original network may not be capable of enhancing memory retrieval; cerebral organoids must grow larger in order to be useful auxiliary devices.

A semiconductor chip does not have the same connectivity and size constraints as a cerebral organoid. However, while a chip can reliably store and send information according to its initial programming, it is not adaptable like an organic neural network. A cerebral organoid is capable of changing over time as the human brain changes. This ability to adapt with the changing brain will perhaps allow it to consistently store and retrieve information despite memory changes in the brain.

Future work could simulate more specific or more biologically sophisticated neuronal network models and hybrid bioelectronic systems. Neurons display more complex activity than simply being "on" or "off," although Hopfield's formulation still provides important insight into how the brain stores and retrieves memories. Additionally, novel methodologies might allow the auxiliary network to prevent memory loss with sparser auxiliary-original connectivity. This is important since experimental constraints may allow only a few connections to be made between a device and the human brain.

5. Conclusion

Recent technologies show promising ways to engineer biological and artificial external networks and connect them to the brain [2, 8, 9, 35, 39]. This computational study investigates whether such auxiliary apparatus could theoretically ameliorate damage in a network and improve memory retrieval. A sparse interface was sufficient to generate stable auxiliary network patterns (auxiliary memories). After the original network becomes damaged, the auxiliary network can be connected back to the original network, aiding in memory retrieval. The auxiliary network’s influence is proportional to its size and connectivity to the original network, although its beneficial effects might be noticeable only at substantial injury levels.

These results imply that an auxiliary network could help a severely damaged network recover some functionality if it is calibrated before damage occurs. An auxiliary device may be able to store memories for individuals who are forecasted to suffer from Alzheimer’s disease and allow them to retrieve memories despite severe neurodegeneration. An auxiliary device may also be useful to people whose lifestyle puts them at risk for TBI. Prior to getting injured, individuals could store memories in an auxiliary network and then use their device if brain damage occurs. In conclusion, auxiliary networks might be a promising road to improve brain functionality after injury or neurodegeneration.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this article.


Acknowledgments

A significant amount of this research was completed during an internship with the Center for Sensorimotor Neural Engineering (CSNE) at the University of Washington and supported by the National Science Foundation. The authors would like to thank the CSNE internship facilitators Dr. Eric Chudler and Dr. Lise Johnson for helping make this work possible. They also thank Melanie Weber for providing valuable insight into the Hopfield model.


References

1. P. Fromherz, "Electrical interfacing of nerve cells and semiconductor chips," A European Journal of Chemical Physics and Physical Chemistry, vol. 3, no. 3, pp. 276–284, 2002.
2. P. Fromherz, "Semiconductor chips with ion channels, nerve cells and brain," Physica E: Low-Dimensional Systems and Nanostructures, vol. 16, no. 1, pp. 24–34, 2003.
3. M. Alam, W. Rodrigues, B. N. Pham, and N. V. Thakor, "Brain-machine interface facilitated neurorehabilitation via spinal stimulation after spinal cord injury: Recent progress and future perspectives," Brain Research, vol. 1646, pp. 25–33, 2016.
4. D. A. Turner, "Enhanced functional outcome from traumatic brain injury with brain–machine interface neuromodulation," in Translational Research in Traumatic Brain Injury, D. Laskowitz and G. Grant, Eds., CRC Press/Taylor and Francis Group, Boca Raton (FL), 2016.
5. M. Bershteyn and A. R. Kriegstein, "Cerebral organoids in a dish: progress and prospects," Cell, vol. 155, no. 1, pp. 19–20, 2013.
6. O. Brüstle, "Developmental neuroscience: Miniature human brains," Nature, vol. 501, no. 7467, pp. 319–320, 2013.
7. B.-I. Bae and C. A. Walsh, "What are mini-brains?" Science, vol. 342, no. 6155, pp. 200–201, 2013.
8. M. A. Lancaster, M. Renner, C.-A. Martin et al., "Cerebral organoids model human brain development and microcephaly," Nature, vol. 501, no. 7467, pp. 373–379, 2013.
9. M. A. Lancaster and J. A. Knoblich, "Organogenesis in a dish: modeling development and disease using organoid technologies," Science, vol. 345, no. 6194, Article ID 1247125, 2014.
10. C. LoBue, D. Denney, L. S. Hynan et al., "Self-reported traumatic brain injury and mild cognitive impairment: Increased risk and earlier age of diagnosis," Journal of Alzheimer's Disease, vol. 51, no. 3, pp. 727–736, 2016.
11. D. K. Menon and A. I. R. Maas, "Traumatic brain injury in 2014: Progress, failures and new approaches for TBI research," Nature Reviews Neurology, vol. 11, no. 2, pp. 71–72, 2015.
12. B. W. Patterson, D. L. Elbert, K. G. Mawuenyega et al., "Age and amyloid effects on human central nervous system amyloid-beta kinetics," Annals of Neurology, vol. 78, no. 3, pp. 439–453, 2015.
13. B. Roozenbeek, A. I. R. Maas, and D. K. Menon, "Changing patterns in the epidemiology of traumatic brain injury," Nature Reviews Neurology, vol. 9, no. 4, pp. 231–236, 2013.
14. W. Thies and L. Bleiler, "Alzheimer's disease facts and figures," Alzheimer's & Dementia, vol. 9, no. 2, pp. 208–245, 2013.
15. J. K. Yue, M. J. Vassar, H. F. Lingsma et al., "Transforming research and clinical knowledge in traumatic brain injury pilot: Multicenter implementation of the common data elements for traumatic brain injury," Journal of Neurotrauma, vol. 30, no. 22, pp. 1831–1844, 2013.
16. R. Adalbert, A. Nogradi, E. Babetto et al., "Severely dystrophic axons at amyloid plaques remain continuous and connected to viable cell bodies," Brain, vol. 132, no. 2, pp. 402–416, 2009.
17. M. Daianu, R. E. Jacobs, T. Town, and P. M. Thompson, "Axonal diameter and density estimated with 7-Tesla hybrid diffusion imaging in transgenic Alzheimer rats," in Proceedings of the Medical Imaging 2016: Image Processing, San Diego, Calif, USA, March 2016.
18. D. Krstic and I. Knuesel, "Deciphering the mechanism underlying late-onset Alzheimer disease," Nature Reviews Neurology, vol. 9, no. 1, pp. 25–34, 2013.
19. M. A. Hemphill, S. Dauth, C. J. Yu, B. E. Dabiri, and K. K. Parker, "Traumatic brain injury and the neuronal microenvironment: A potential role for neuropathological mechanotransduction," Neuron, vol. 85, no. 6, pp. 1177–1192, 2015.
20. M. H. Magdesian, F. S. Sanchez, M. Lopez et al., "Atomic force microscopy reveals important differences in axonal resistance to injury," Biophysical Journal, vol. 103, no. 3, pp. 405–414, 2012.
21. M. H. Magdesian, G. M. Lopez-Ayon, M. Mori et al., "Rapid mechanically controlled rewiring of neuronal circuits," Journal of Neuroscience, vol. 36, no. 3, pp. 979–987, 2016.
22. S. Millecamps and J.-P. Julien, "Axonal transport deficits and neurodegenerative diseases," Nature Reviews Neuroscience, vol. 14, no. 3, pp. 161–176, 2013.
23. P. D. Maia, M. A. Hemphill, B. Zehnder, C. Zhang, K. K. Parker, and J. N. Kutz, "Diagnostic tools for evaluating the impact of Focal Axonal Swellings arising in neurodegenerative diseases and/or traumatic brain injury," Journal of Neuroscience Methods, vol. 253, pp. 233–243, 2015.
24. P. D. Maia and J. N. Kutz, "Compromised axonal functionality after neurodegeneration, concussion and/or traumatic brain injury," Journal of Computational Neuroscience, vol. 37, no. 2, pp. 317–332, 2014.
25. P. D. Maia and J. N. Kutz, "Identifying critical regions for spike propagation in axon segments," Journal of Computational Neuroscience, vol. 36, no. 2, pp. 141–155, 2014.
26. B. L. Edlow, W. A. Copen, S. Izzy et al., "Longitudinal diffusion tensor imaging detects recovery of fractional anisotropy within traumatic axonal injury lesions," Neurocritical Care, vol. 24, no. 3, pp. 342–352, 2016.
27. A. Hånell, J. E. Greer, M. J. McGinn, and J. T. Povlishock, "Traumatic brain injury-induced axonal phenotypes react differently to treatment," Acta Neuropathologica, vol. 129, no. 2, pp. 317–332, 2015.
28. N. Henninger, J. Bouley, E. M. Sikoglu et al., "Attenuated traumatic axonal injury and improved functional outcome after traumatic brain injury in mice lacking Sarm1," Brain, vol. 139, no. 4, pp. 1094–1105, 2016.
29. C. S. Hill, M. P. Coleman, and D. K. Menon, "Traumatic axonal injury: Mechanisms and translational opportunities," Trends in Neurosciences, vol. 39, no. 5, pp. 311–324, 2016.
30. P. D. Maia and J. N. Kutz, "Reaction time impairments in decision-making networks as a diagnostic marker for traumatic brain injuries and neurological diseases," Journal of Computational Neuroscience, vol. 42, no. 3, pp. 323–347, 2017.
31. J. M. Kunert, P. D. Maia, J. N. Kutz, and S. Jbabdi, "Functionality and robustness of injured connectomic dynamics in C. elegans: linking behavioral deficits to neural circuit damage," PLOS Computational Biology, vol. 13, no. 1, pp. 1–22, 2017.
32. S. Rudy, P. D. Maia, and J. N. Kutz, "Cognitive and behavioral deficits arising from neurodegeneration and traumatic brain injury: a model for the underlying role of focal axonal swellings in neuronal networks with plasticity," Journal of Systems and Integrative Neuroscience, 2016.
33. D. J. Sharp, G. Scott, and R. Leech, "Network dysfunction after traumatic brain injury," Nature Reviews Neurology, vol. 10, pp. 156–166, 2014.
34. R. Pizzi, G. Cino, F. Gelain, D. Rossetti, and A. Vescovi, "Learning in human neural networks on microelectrode arrays," BioSystems, vol. 88, no. 1-2, pp. 1–15, 2007.
35. K.-Y. Lee, S. Shim, I.-S. Kim et al., "Coupling of semiconductor nanowires with neurons and their interfacial structure," Nanoscale Research Letters, vol. 5, no. 2, pp. 410–415, 2010.
36. V. Andoralov, M. Falk, D. B. Suyatin et al., "Biofuel cell based on microscale nanostructured electrodes with inductive coupling to rat brain neurons," Scientific Reports, vol. 3, article no. 3270, 2013.
37. L. H. Hess, C. Becker-Freyseng, M. S. Wismer et al., "Electrical coupling between cells and graphene transistors," Small, vol. 11, no. 14, pp. 1703–1710, 2015.
38. K.-Y. Lee, I. Kim, S.-E. Kim et al., "Vertical nanowire probes for intracellular signaling of living cells," Nanoscale Research Letters, vol. 9, no. 1, pp. 1–7, 2014.
39. R. Kim, S. Joo, H. Jung, N. Hong, and Y. Nam, "Recent trends in microelectrode array technology for in vitro neural interface platform," Biomedical Engineering Letters, vol. 4, no. 2, pp. 129–141, 2014.
  40. R. M. R. Pizzi, D. Rossetti, G. Cino, D. Marino, A.L.Vescovi, and W. Baer, “A cultured human neural network operates a robotic actuator,” BioSystems, vol. 95, no. 2, pp. 137–144, 2009. View at: Publisher Site | Google Scholar
  41. T. Tuma, A. Pantazi, M. Le Gallo, A. Sebastian, and E. Eleftheriou, “Stochastic phase-change neurons,” Nature Nanotechnology, vol. 11, no. 8, pp. 693–699, 2016. View at: Publisher Site | Google Scholar
  42. S. Rajangam, P.-H. Tseng, A. Yin et al., “Wireless cortical brain-machine interface for whole-body navigation in primates,” Scientific Reports, vol. 6, Article ID 22170, 2016. View at: Publisher Site | Google Scholar
  43. T. Sun, S. Merugu, W. M. Tsang et al., “A microfabricated neural probe with porous si-parylene hybrid structure to enable a reliable brain-machine interface,” in Proceedings of the 29th IEEE International Conference on Micro Electro Mechanical Systems, MEMS 2016, pp. 153–156, Shanghai, China, January 2016. View at: Publisher Site | Google Scholar
  44. J. J. Hopfield, “Neural networks and physical systems with emergent collective computational abilities,” Proceedings of the National Academy of Sciences of the United States of America, vol. 79, no. 8, pp. 2554–2558, 1982. View at: Publisher Site | Google Scholar | MathSciNet
  45. J. J. Hopfield, “Neurons with graded response have collective computational properties like those of two-state neurons,” Proceedings of the National Academy of Sciences of the United States of America, vol. 81, pp. 3088–3092, 1984. View at: Google Scholar
  46. S. G. Hu, Y. Liu, Z. Liu et al., “Associative memory realized by a reconfigurable memristive Hopfield neural network,” Nature Communications, vol. 6, article no. 8522, 2015. View at: Publisher Site | Google Scholar
  47. W. Gerstner, W. M. Kistler, R. Naud, and L. Paninski, Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition, Cambridge University Press, Cambridge, UK, 2014.
  48. M. Garagnani, T. Wennekers, and F. Pulvermüller, “A neuroanatomically grounded Hebbian-learning model of attention-language interactions in the human brain,” European Journal of Neuroscience, vol. 27, no. 2, pp. 492–513, 2008.
  49. M. Garagnani, T. Wennekers, and F. Pulvermüller, “Recruitment and consolidation of cell assemblies for words by way of Hebbian learning and competition in a multi-layer neural network,” Cognitive Computation, vol. 1, no. 2, pp. 160–176, 2009.
  50. T. Wennekers, M. Garagnani, and F. Pulvermüller, “Language models based on Hebbian cell assemblies,” Journal of Physiology Paris, vol. 100, no. 1-3, pp. 16–30, 2006.
  51. R. C. Anafi and J. H. T. Bates, “Balancing robustness against the dangers of multiple attractors in a Hopfield-type model of biological attractors,” PLoS ONE, vol. 5, no. 12, Article ID e14413, 2010.
  52. R. A. Menezes and L. H. A. Monteiro, “Synaptic compensation on Hopfield network: implications for memory rehabilitation,” Neural Computing and Applications, 2011.
  53. E. Ruppin and J. A. Reggia, “Patterns of functional damage in neural network models of associative memory,” Neural Computation, 1995.
  54. S. Rudy, P. D. Maia, and J. N. Kutz, “Cognitive and behavioral deficits arising from neurodegeneration and traumatic brain injury: a model for the underlying role of focal axonal swellings in neuronal networks with plasticity,” Journal of Systems and Integrative Neuroscience, 2016.

Copyright © 2017 M. Morrison et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
