Research Article | Open Access

Kun Han, Dewei Wu, Lei Lai, "A Brain-Inspired Adaptive Space Representation Model Based on Grid Cells and Place Cells", Computational Intelligence and Neuroscience, vol. 2020, Article ID 1492429, 12 pages, 2020. https://doi.org/10.1155/2020/1492429

A Brain-Inspired Adaptive Space Representation Model Based on Grid Cells and Place Cells

Academic Editor: Justin Dauwels
Received: 22 Sep 2019
Accepted: 24 Apr 2020
Published: 11 Aug 2020

Abstract

Grid cells and place cells are important neurons in the animal brain. The information transmission between them provides the basis for the spatial representation and navigation of animals and also provides a reference for research on the autonomous navigation mechanisms of intelligent agents. Grid cells are an important information source for place cells. Supervised and unsupervised learning models can be used to simulate the generation of place cells from grid cell inputs. However, the existing models preset the firing characteristics of grid cells. In this paper, we propose a united generation model of grid cells and place cells. First, visual place cells with nonuniform distribution generate visual grid cells with regional firing fields through a feedforward network. Second, the visual grid cells and self-motion information generate united grid cells whose firing fields extend to the whole space through a genetic algorithm. Finally, the visual place cells and the united grid cells generate united place cells with uniform distribution through a supervised fuzzy adaptive resonance theory (ART) network. Simulation results show that this model has stronger environmental adaptability and can provide a reference for research on spatial representation models and brain-inspired navigation mechanisms of intelligent agents under the condition of nonuniform environmental information.

1. Introduction

Environmental cognitive ability is the basis of the free movement of animals and intelligent agents. Learning from nature and the brain is an important method for studying the autonomous navigation mechanisms of intelligent agents [1]. The hippocampal structure in the brain is an important organization related to episodic memory and spatial navigation and is the core area that constitutes the neural circuit of the cognitive map. The hippocampal structure contains a variety of cells which are related to spatial representation and located in different regions, such as place cells [2], grid cells [3], head-direction cells [4], and boundary vector cells [5]. Through information transformations between these cells, spatial representation [6], cognitive map construction [7, 8], goal navigation [9, 10], episodic memory [11], and other functions can be realized.

Place cells and grid cells represent space in different ways. Place cells are mainly located in the hippocampal CA1, CA3, and dentate gyrus regions. In a familiar environment, a place cell has a single or limited number of firing fields. When an animal explores space, a certain number of place cells randomly constitute a cell population to realize space representation [12]. Changes of the environment may cause global remapping [13, 14], partial remapping [15], or firing rate remapping [16, 17] of the place cell population. Grid cells are mainly located in the entorhinal cortex, which includes the medial entorhinal cortex and the lateral entorhinal cortex and is an important information source of the hippocampus. A grid cell has a regular hexagonal firing field extending to the whole space, which is characterized by size, spacing, phase, and direction. Grid cells with similar firing field spacing and direction are clustered into cell modules, and the ratios of firing field spacing between any adjacent modules are similar [18–21]. Self-motion information is an important information source for grid cells to maintain firing field stability [22–24]. However, the firing field phase and direction may vary with changes of the environment [25–27].

Grid cells are an important information source of place cells [28–31]. Since grid cells were discovered, researchers have proposed a variety of models for generating place cells from grid cell inputs. In the unsupervised models, the place cells are generated through the weighted summation of the grid cell inputs, and the weights from grid cells to place cells are trained through a competition mechanism [32–34]. In the supervised models, visual place cells are generated from environmental information and are used as supervision to update the weights from grid cells to place cells [35–37]. Although the existing models have simulated the generation of place cells from grid cell inputs, a shortcoming remains: in these models, the grid cells are generated from self-motion information. The firing models of grid cells driven directly by self-motion information can be divided into the continuous attractor network model [38] and the oscillatory interference model [39]. The continuous attractor network model is based on preset activation-inhibition connections between grid cells, namely, local activation and long-range inhibition. The parameters of the oscillatory interference model, including the maximum firing rate, firing field spacing, firing field direction, and firing field phase, are also preset [32–34]. Therefore, the firing characteristics of grid cells and place cells cannot adapt to the environment.

During rat pups' first outbound explorations, place cells and grid cells develop simultaneously [7, 30, 40], which suggests that there may exist information transformations between place cells and grid cells. In this paper, we propose a united generation model of grid cells and place cells which has the ability to adapt to the environment. To distinguish the various kinds of grid cells and place cells, the place cells generated from environmental information are called visual place cells, the grid cells generated from visual place cell inputs are called visual grid cells, the grid cells generated from self-motion information are called self-motion grid cells, the grid cells generated from the two information sources are called united grid cells, and the place cells generated through supervised learning are called united place cells. In this model, the generation process of united grid cells and united place cells is divided into three steps. First, visual place cells generate visual grid cells along the boundary through a feedforward network. Second, visual grid cells and self-motion information generate united grid cells extending to the whole space through a genetic algorithm. Third, united grid cells generate more compact united place cells in the sparse area of visual place cells through a supervised fuzzy ART network. The model can be used for the spatial representation of intelligent agents.

2. Models

Visual place cells are driven by the external environment; they have high stability, carry absolute location information, and are generated earliest. Therefore, they are used as the supervision for the generation of united grid cells and united place cells. The generation process of united grid cells and united place cells is shown in Figure 1.

2.1. Visual Place Cells Generate Visual Grid Cells

It is assumed that the agent explores a rectangular space at a constant speed v and reaches any location with the same probability. The spatial boundary information drives the generation of visual place cells, which have a tighter distribution near the boundary. A Gaussian function is used to represent the firing field of a visual place cell:

f_i(p) = f_max · exp(−‖p − p_i‖² / (2σ_i²)),

where f_i(p) is the firing rate of the ith visual place cell at location p; f_max is the maximum firing rate of visual place cells; p_i is the place where the ith visual place cell is generated; and σ_i is the standard deviation of its firing field.

When the agent explores freely, if the firing rates of all existing visual place cells at the current location are less than a threshold f_th, a new visual place cell is generated there. The standard deviation σ_i of the new cell's firing field increases with d, the minimum distance from the exploring location to the boundary: it grows monotonically from the minimum standard deviation σ_min toward the maximum standard deviation σ_max at a rate governed by the firing field distribution constant, and no visual place cell is generated beyond the maximum distance d_max from the boundary.
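To make the generation rule concrete, here is a minimal Python sketch of one exploration step under the definitions above. The exact form of the width function σ_i(d) is not reproduced in this text, so a simple linear interpolation between σ_min and σ_max is assumed for illustration, and all numeric constants are placeholders rather than the values of Table 1.

```python
import numpy as np

# Placeholder constants; the paper's actual values are listed in Table 1.
F_MAX = 1.0        # maximum firing rate of visual place cells
F_TH = 0.2         # a new cell is generated when every existing cell fires below this
SIGMA_MIN = 1.0    # minimum standard deviation of the firing field
SIGMA_MAX = 5.0    # maximum standard deviation of the firing field
D_MAX = 20.0       # maximum boundary distance at which visual place cells are generated

def field_sigma(d):
    """Firing-field width grows with the distance d to the nearest boundary.
    Illustrative linear interpolation; the paper's exact width function is not shown here."""
    return SIGMA_MIN + (SIGMA_MAX - SIGMA_MIN) * min(d, D_MAX) / D_MAX

def firing_rate(center, sigma, loc):
    """Gaussian firing field of a visual place cell."""
    center, loc = np.asarray(center, float), np.asarray(loc, float)
    return F_MAX * np.exp(-np.sum((loc - center) ** 2) / (2.0 * sigma ** 2))

def explore_step(cells, loc, d_boundary):
    """One exploration step: add a visual place cell at `loc` if no existing cell is active enough."""
    if d_boundary <= D_MAX and all(firing_rate(c, s, loc) < F_TH for c, s in cells):
        cells.append((np.asarray(loc, float), field_sigma(d_boundary)))
    return cells

# Example: cells accumulate near the boundary (small d) with small fields.
cells = []
for loc, d in [((1.0, 1.0), 1.0), ((10.0, 1.0), 1.0), ((40.0, 40.0), 40.0)]:
    cells = explore_step(cells, loc, d)
print(len(cells))   # 2: one cell per near-boundary location; the far location exceeds D_MAX
```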

A feedforward network with place cell inputs and Hebbian learning of the weights can be used to generate a grid cell with a hexagonal firing field [41–46]. The periodic grid cell firing field is derived from a periodic weight distribution from the place cells to a single grid cell, and the input correlation driving the development of the periodic weight distribution is usually modeled as a Mexican hat. The Mexican-hat input correlation may be derived from the temporal correlation [42–44] or the spatial correlation [45, 46] of the place cell firing rates. However, the existing temporal correlation models assume Hebbian learning with nonlinear correlation plasticity [42], a Mexican-hat spiking rate adaptation function [43], or a Mexican-hat weight window function [44]. We found that, without any such presupposition, a Mexican-hat input correlation can be derived from the linear temporal correlation of the place cell firing rates alone. The firing field spacing of the grid cell generated by this model is proportional to the exploring speed of the intelligent agent. It is assumed that the weights from the place cell population to a grid cell are updated once every fixed time interval T. The Hebbian learning is implemented based on the change of the place cell firing rates before and after the interval and the real-time grid cell firing rate: the weight w_i from the ith visual place cell to the generated visual grid cell is updated at the weight update rate η, based on the real-time firing rate f_vg(t) of the visual grid cell (computed from the visual place cell inputs) and the change of the ith place cell firing rate over the interval T, together with a reduction coefficient ε of the place cell firing rate and a weight update constant C.

In order to develop weights with a periodic spatial distribution, a competitive nonlinear restriction is applied. An upper boundary and a lower boundary of the weights are set. When a weight falls below the lower boundary, it is set to the lower boundary. When any weight exceeds the upper boundary, all weights are scaled down equally through competition so that the maximum weight equals the upper boundary.
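The update rule itself is only described verbally above, so the sketch below assumes a simple differential Hebbian form (current grid cell rate times the change of the place cell rates over the interval T, with the earlier rates scaled by ε and a constant C added), followed by the bounded competitive restriction of the preceding paragraph; parameter values are placeholders, not those of Table 1, and the paper's exact rule may differ.

```python
import numpy as np

ETA = 0.01                # weight update rate
EPS = 0.9                 # reduction coefficient applied to the earlier place cell rates
C = 0.0                   # weight update constant
W_LOW, W_HIGH = 0.0, 1.0  # lower and upper weight boundaries

def update_weights(w, rates_now, rates_before):
    """One assumed Hebbian update of the weights from the visual place cells to one visual grid cell.

    `rates_now` and `rates_before` are the place cell firing rates separated by the
    update interval T. This form is illustrative only.
    """
    grid_rate = float(w @ rates_now)                       # grid cell rate from place cell inputs
    w = w + ETA * grid_rate * (rates_now - EPS * rates_before + C)

    # Competitive nonlinear restriction on the weights.
    w = np.maximum(w, W_LOW)                               # clip weights below the lower boundary
    if w.max() > W_HIGH:
        w = w * (W_HIGH / w.max())                         # rescale so the largest weight equals W_HIGH
    return w

# Example: 5 place cells, random rates before and after one interval.
rng = np.random.default_rng(0)
w = np.full(5, 0.5)
w = update_weights(w, rng.random(5), rng.random(5))
print(w)
```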

2.2. Visual Grid Cells and Self-Motion Information Generate United Grid Cells

In the existing models, either place cell inputs or self-motion information can generate grid cells independently. However, in this paper, on the one hand, since the firing field distribution of the visual place cells varies with the distance from the exploring location to the boundary, the firing field of the visual grid cell generated from the visual place cell inputs through the feedforward network cannot expand to the whole space. On the other hand, the firing field parameters of the self-motion grid cells generated from self-motion information need to be preset and cannot adapt to the environment. In view of these shortcomings, we combine the visual grid cell and the self-motion information through a genetic algorithm to generate a united grid cell whose firing field is adaptive to the environment and extends to the whole space.

The grid cell models driven directly by self-motion information mainly include the continuous attractor network model [38] and the oscillatory interference model [39]. The continuous attractor network model represents the firing pattern of a grid cell population: asymmetric intercellular connections and self-motion information make the firing pattern move as a whole. The oscillatory interference model represents the firing rate of a single grid cell: self-motion information causes a phase shift of each oscillator and thereby changes the firing rate. In this paper, the united grid cells are independent of each other and there are no interconnections between them. Therefore, the united grid cell is represented by the oscillatory interference model referring to [34], in which the hexagonal firing field is written as the interference of three cosine gratings whose wave vectors are 60° apart. The firing rate of a united grid cell at location p can be expressed as

f_ug(p) = f_max · (2/3) · [ (1/3) Σ_{j=1..3} cos(k_j · (p − c)) + 1/2 ],   ‖k_j‖ = 4π / (√3 λ),

where the wave vector k_j points in the direction θ + (j − 1)·60°; f_max is the maximum firing rate; λ is the firing field spacing; c is the firing field phase; and θ is the firing field direction.
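The following Python function evaluates this firing field. It is a direct transcription of the three-cosine interference form sketched above; the normalization to [0, f_max] is one common choice and may differ in detail from the paper's. The example call uses the first parameter row of Table 3.

```python
import numpy as np

def grid_rate(loc, f_max, spacing, phase, direction):
    """Firing rate of a united grid cell at 2-D location `loc` (sum of three cosine
    gratings whose wave vectors are 60 degrees apart, rescaled to [0, f_max])."""
    loc, phase = np.asarray(loc, float), np.asarray(phase, float)
    k = 4.0 * np.pi / (np.sqrt(3.0) * spacing)               # wave-vector magnitude for the spacing
    angles = direction + np.array([0.0, np.pi / 3.0, 2.0 * np.pi / 3.0])
    gratings = [np.cos(k * np.dot(loc - phase, [np.cos(a), np.sin(a)])) for a in angles]
    return f_max * (2.0 / 3.0) * (np.sum(gratings) / 3.0 + 0.5)

# Example: first learned parameter set of Table 3.
print(grid_rate([40.0, 40.0], f_max=0.9071, spacing=16.2268,
                phase=[12.2776, 29.3255], direction=5.7427))
```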

The visual grid cell, whose firing characteristics are adaptive to the environment, is generated from visual place cell inputs. We regard the firing field of the visual grid cell as a sample of the firing field of the united grid cell along one environmental boundary. The genetic algorithm is used to optimize the parameters f_max, λ, c, and θ of the united grid cell firing model so as to maximize the similarity between the firing characteristics of the visual grid cell and those of the united grid cell. This parameter optimization is where the environment-adaptive grid pattern comes from.

The genetic algorithm searches for an optimal solution by simulating the biological evolution process. It begins with a population of individuals representing a set of potential solutions to the problem. Starting from this initial population, evolution over generations produces better approximate solutions according to the principle of survival of the fittest. In each generation, crossover and mutation are performed by genetic operators to generate new individuals representing a new solution set, and individuals are then selected according to their fitness. This process yields a population that is better adapted to the environment than that of the previous generation. The best individual of the last generation is regarded as the approximate optimal solution. The genetic algorithm is illustrated in Figure 2.

In this paper, the update ranges of the five parameters (the maximum firing rate, the firing field spacing, the two phase components, and the firing field direction) are set in advance. The evolution process of the genetic algorithm is as follows (a minimal sketch of this loop is given after the list).

① Initialize the population randomly. Each individual contains the above five parameters, and each parameter is encoded as a fixed-length binary string.

② Generate offspring through the crossover operator and the mutation operator, with crossover probability p_c and mutation probability p_m.

③ Record the firing rate of the visual grid cell and calculate the firing rate of the united grid cell in the sampling region. When the number of records reaches the preset value, calculate the fitness of each individual, defined as the quadratic sum of the firing rate differences over all record moments, namely,

F = Σ_t [ f_vg(t) − f_ug(t) ]².

④ Select the individuals with the lowest fitness from the parents and the offspring as the next generation.

⑤ Record the optimal solution and reset the record number to zero.

⑥ Determine whether the end condition is satisfied. If so, output the optimal solution; if not, return to step ②.
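As referenced above, here is a minimal real-valued Python sketch of this loop (the paper encodes each parameter as a binary string, which is omitted here). It reuses the grid_rate function from the previous sketch; the search ranges and GA constants are placeholders, and the fitness is the quadratic sum of rate differences over the sampled locations, as in step ③.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder search ranges for (f_max, spacing, phase_x, phase_y, direction).
BOUNDS = np.array([[0.5, 1.0], [5.0, 40.0], [0.0, 80.0], [0.0, 80.0], [0.0, 2.0 * np.pi]])
POP, GENS, P_MUT = 40, 100, 0.1

def fitness(params, sample_locs, sample_rates):
    """Quadratic sum of firing-rate differences over the sampled region (lower is better)."""
    f_max, spacing, px, py, theta = params
    pred = np.array([grid_rate(x, f_max, spacing, [px, py], theta) for x in sample_locs])
    return float(np.sum((pred - sample_rates) ** 2))

def evolve(sample_locs, sample_rates):
    """Fit the united grid cell parameters to the sampled visual grid cell rates."""
    pop = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], size=(POP, 5))
    for _ in range(GENS):
        # Crossover: each child mixes the genes of two randomly chosen parents.
        parents = pop[rng.integers(0, POP, size=(POP, 2))]
        cross = rng.random((POP, 5)) < 0.5
        children = np.where(cross, parents[:, 0, :], parents[:, 1, :])
        # Mutation: re-draw a gene uniformly within its search range.
        mut = rng.random((POP, 5)) < P_MUT
        children = np.where(mut, rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], size=(POP, 5)), children)
        # Selection: keep the POP individuals with the lowest fitness from parents and children.
        union = np.vstack([pop, children])
        scores = np.array([fitness(p, sample_locs, sample_rates) for p in union])
        pop = union[np.argsort(scores)[:POP]]
    return pop[0]                                  # best parameter set of the final generation
```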

2.3. Visual Place Cells and United Grid Cells Generate United Place Cells

Influenced by the boundaries, the visual place cells have a nonuniform distribution. Grid cells, however, can generate place cells through a competitive neural network whose parameters influence the firing field characteristics of the generated place cells. Therefore, combining visual place cells and united grid cells can improve the distribution density and positioning accuracy of the place cells far away from the boundaries. In this paper, a supervised fuzzy ART network is used to realize the information transmission from the visual place cells and united grid cells to the united place cells.

The ART network is a competitive classification and clustering network with both plasticity and incremental learning: it can learn new knowledge while maintaining the memory of old knowledge, so the learning process is robust to the input order of the samples. ART networks include the ART1 network for binary inputs, the ART2 network for real-valued inputs [47], the ART3 network for multilayer processing [48], the fuzzy ART network for fuzzy processing [49], and the ARTMAP network for supervised learning [50, 51]. The fuzzy ART network structure is shown in Figure 3(a).

The competitive learning of the fuzzy ART network includes the following steps (a code sketch of one learning step is given at the end of this subsection).

① Input preprocessing: the normalized input vector a has elements in [0, 1]; M is the number of input elements; I = (a, 1 − a) is the complement-coded representation of the input vector.

② Category selecting: for the input vector I and node j in field F2, the selection function is defined as

T_j = |I ∧ w_j| / (α + |w_j|),   j = 1, 2, ..., N,

where α is a small nonnegative real number, taken as 0.001 in this paper; N is the number of nodes in field F2; w_j is the adaptive weight vector from the input vector to node j, with every weight initialized to 1; ∧ is the fuzzy AND operator defined componentwise as (x ∧ y)_i = min(x_i, y_i); and |·| is the 1-norm, |x| = Σ_i |x_i|. The node J with the largest selection function is regarded as the category. If several nodes attain the maximum at the same time, the node with the smallest index is selected. After the category selection, the vector in field F1 is calculated as x = I ∧ w_J.

③ Category matching: I and w_J match if |I ∧ w_J| / |I| ≥ ρ, where ρ is the match parameter; otherwise, the match fails. If the match fails, the selection function T_J is set to zero, and the learning returns to step ② to select and match a category again. The match process ends when a match succeeds or all nodes in field F2 have been tried.

④ Weight updating: if the input vector matches node J successfully, the weight vector is updated as

w_J(new) = β (I ∧ w_J(old)) + (1 − β) w_J(old),

where β is the learning rate; β = 1 is called fast learning and is used in this paper. If the input vector does not match any node in field F2, a new node is added as the match node, and its weight vector is initialized from the input vector.

In the existing ART network models, the supervised network is the ARTMAP network, which consists of a pair of fuzzy ART networks (ARTa and ARTb); the ARTb network provides learning supervision for the ARTa network. The process of generating united place cells is actually a classification of the firing rates of the united grid cell population. In the model of generating united place cells from visual place cells and united grid cells, we simplify the supervised ARTMAP network. According to the firing fields of the visual place cells, the whole space is divided into different types, which may overlap: the firing field of each visual place cell is one type, and the region without visual place cells is one type. The ARTb network is replaced by the visual place cell types, which supervise the ARTa network, and the input vector of the ARTa network is the firing rates of the united grid cell population. Each type is divided into a number of categories, which are defined as the united place cells. This simplification gives the fuzzy ART network supervised learning ability. The supervised fuzzy ART network structure is shown in Figure 3(b).

In Figure 3(b), the inputs are the firing rates of the united grid cell population, and the fuzzy ART network has the structure of Figure 3(a). The blue blocks represent the visual place cells that act as supervisors; their firing fields are small enough that one type contains only one category, and they are used to train the parameters of the fuzzy ART network. The red blocks represent the types that are divided into different categories; they correspond to the visual place cells whose firing fields are large enough and to the region where there is no visual place cell. The category range, namely, the firing field size of the united place cell, is determined by the trained parameters of the fuzzy ART network.
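As referenced above, here is a minimal Python sketch of one presentation to the supervised fuzzy ART layer. The supervision is implemented by letting a category resonate only when its visual place cell type matches the supplied label, which is one simple reading of the simplification described above; the constants are placeholders.

```python
import numpy as np

ALPHA = 0.001   # choice parameter of the selection function
BETA = 1.0      # learning rate (fast learning)

def complement_code(a):
    """Step 1: normalized input followed by its complement."""
    a = np.asarray(a, float)
    return np.concatenate([a, 1.0 - a])

def fuzzy_art_step(I, weights, labels, label, rho):
    """Steps 2-4 for one complement-coded input I.

    `weights` is a list of category weight vectors, `labels` holds the visual place
    cell type of each category, and `rho` is the matching parameter. Returns the
    updated (weights, labels) and the index of the resonating category.
    """
    choice = [np.sum(np.minimum(I, w)) / (ALPHA + np.sum(w)) for w in weights]
    for j in np.argsort(choice)[::-1]:                     # try categories in choice order
        if labels[j] != label:
            continue                                       # supervision: wrong type, skip
        if np.sum(np.minimum(I, weights[j])) / np.sum(I) >= rho:
            weights[j] = BETA * np.minimum(I, weights[j]) + (1.0 - BETA) * weights[j]
            return weights, labels, j                      # resonance: fast learning update
    weights.append(I.copy())                               # no resonance: recruit a new category
    labels.append(label)
    return weights, labels, len(weights) - 1

# Example: two similar inputs of the same type end up in one category if rho is low enough.
w, lab = [], []
w, lab, _ = fuzzy_art_step(complement_code([0.2, 0.8]), w, lab, label=0, rho=0.8)
w, lab, _ = fuzzy_art_step(complement_code([0.25, 0.75]), w, lab, label=0, rho=0.8)
print(len(w))   # 1
```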

3. Results

3.1. The Firing Field of Visual Grid Cell Distributes Periodically along the Boundary

The environment and boundary information drive the generation of visual place cells with different distribution densities, and the visual place cells then generate locally distributed visual grid cells through the feedforward network. The simulation parameters of the visual grid cell are shown in Table 1.


Parameter | Variable | Value

Exploring space
Exploring speed
Maximum firing rate of visual place cells
Minimum standard deviation of firing field of visual place cells
Maximum standard deviation of firing field of visual place cells
Maximum distance to generate visual place cell
Firing field distribution constant of visual place cells
Weight update rate
Reduction coefficient of firing rate of visual place cells
Weight update constant
Lower weight boundary
Upper weight boundary

The agent explores the whole space and reaches any location with the same probability. According to the generation process of visual place cells introduced in Section 2.1, after exploration, the distribution of visual place cells is shown in Figure 4.

As can be seen from Figure 4, in the region close to the boundaries, the distribution of visual place cells is denser and the firing field size is smaller, which indicates a more accurate spatial representation. Moving away from the boundaries, the firing field spacing and size increase gradually, and the spatial representation becomes fuzzier. The distribution of visual place cells conforms to the distribution characteristics of initial place cells reported in the preweaning rat experiment [30].

In the brain, mature grid cells appear later than mature place cells. Therefore, it is suggested that place cells can provide input information for the generation of grid cells. Assuming that the weight update time interval T is a positive integer, changing T will influence the weight distribution under the same exploring speed. Taking the visual place cells shown in Figure 4 as the information source of the visual grid cell and according to the weight update model introduced in Section 2.1 and the parameters in Table 1, the weights from the visual place cells to the visual grid cell learned under different weight update time intervals are shown in Figure 5.

The weights and the firing field of the visual grid cell have the same distribution. Therefore, it can be seen from Figure 5 that the firing field of the visual grid cell is influenced by the weight update time interval and by the boundary. When the time interval is small, a visual grid cell with a periodic firing field cannot be generated in the rectangular space; this is because a small time interval does not allow the weight update process of Section 2.1 to satisfy the reaction-diffusion mechanism [52]. With the increase of the time interval, visual grid cells with periodic firing fields are generated along the boundary, and the firing field spacing increases monotonically. Under the same time interval, the boundary influences the firing field distribution of the visual grid cell, and the firing field along each boundary can correspond to an independent visual grid cell. As the time interval increases further, the firing field of the generated visual grid cell gradually loses its periodicity and, with it, the ability to serve as the sample of the united grid cell.

3.2. The Firing Field of United Grid Cell Can Extend to the Whole Exploring Space

Although the firing field of the visual grid cell cannot cover the whole exploring space, it can be used as the sample of a united grid cell whose firing field can extend freely. First, the sampling region of the genetic algorithm is determined: if, at a certain exploring location, there is an activated visual place cell whose weight to any visual grid cell is greater than a threshold, the firing rates of the visual grid cell and of the united grid cell are sampled at that location. The simulation parameters of the genetic algorithm used to generate united grid cells are shown in Table 2, and the sampling region of the genetic algorithm is shown in Figure 6.
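One possible reading of this sampling rule in Python, assuming the place cell representation and rate function from the sketch in Section 2.1; the weight threshold and activity threshold are placeholders.

```python
import numpy as np

W_TH = 0.5    # weight threshold defining the sampling region (placeholder value)
F_ACT = 0.2   # a visual place cell counts as activated above this rate (placeholder)

def in_sampling_region(loc, place_cells, weights_to_grid, rate_fn):
    """True if some activated visual place cell projects to a visual grid cell
    with a weight above W_TH, so firing rates should be sampled at `loc`.

    `place_cells` is a list of (center, sigma); `weights_to_grid[i]` holds the
    weights from place cell i to the visual grid cells; `rate_fn(center, sigma, loc)`
    is the Gaussian place cell rate from the Section 2.1 sketch."""
    for i, (center, sigma) in enumerate(place_cells):
        if rate_fn(center, sigma, loc) >= F_ACT and np.max(weights_to_grid[i]) > W_TH:
            return True
    return False
```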


Parameter | Variable | Value

Weight threshold in sampling region
Population size
Parameter binary digit
Crossover probability
Mutation probability

According to the simulation in Section 3.1, the visual grid cells generated under selected weight update time intervals are used for the generation of united grid cells through the genetic algorithm. Each visual grid cell independently participates in the generation of a united grid cell, so four united grid cells are generated for each time interval. The firing rate of each visual grid cell is normalized with respect to its maximum firing rate in the sampling region. The agent explores the region near the boundaries at a constant speed. According to the evolution process of the genetic algorithm introduced in Section 2.2, the firing parameters of the united grid cells are updated. After the exploration, taking one time interval as an example, the firing fields of the four generated united grid cells are shown in Figure 7.

It can be seen from Figure 7 that the united grid cell generated through the genetic algorithm has a hexagonal firing field extending to the whole exploring space, and in the sampling region near each boundary the firing field of the visual grid cell is almost the same as that of the generated united grid cell. Therefore, the united grid cell generated through the genetic algorithm has the characteristics of free expansion and environmental adaptation and is more suitable for spatial representation than grid cells generated from a single information source.

In Figure 5, because the firing field of each visual grid cell is a one-dimensional distribution along one boundary, it does not determine a unique united grid cell with a hexagonal firing field. Therefore, in view of the above simulation results, the firing field direction is additionally treated as a preset parameter, the other parameters are taken as the learning parameters, and the space exploration and genetic algorithm learning are carried out again. After the exploration, still taking the same time interval as an example, the firing fields of another four generated united grid cells are shown in Figure 8.

As can be seen from Figures 7 and 8, for the same visual grid cell, the firing fields of the two generated united grid cells, which differ in firing field direction, both match the firing field of the visual grid cell precisely. Therefore, both are used to represent space in this paper. After learning through the above two genetic algorithm runs, 32 united grid cells are generated under 4 different weight update time intervals, and their firing parameters are shown in Table 3.


Boundary to generate visual grid cell | Weight update time interval | Spacing | Phase | Maximum firing rate | Direction

| | 16.2268 | [12.2776, 29.3255] | 0.9071 | 5.7427
| | 9.6970 | [20.1760, 26.4516] | 0.9853 | 6.2663
| | 20.9189 | [71.2414, 54.3109] | 0.8759 | 5.7427
| | 13.7634 | [27.5269, 51.0264] | 0.9932 | 6.2663
| | 22.9130 | [37.8495, 0.5865] | 0.8759 | 4.7477
| | 13.6070 | [72.5709, 11.2610] | 0.9932 | 5.2713
| | 30.0293 | [29.5601, 1.4663] | 0.7341 | 4.7477
| | 17.5171 | [73.5093, 48.7977] | 0.9932 | 5.2713

| | 14.0371 | [4.5357, 46.9795] | 0.8289 | 4.7109
| | 8.3675 | [61.3099, 37.0088] | 0.9932 | 5.2345
| | 18.4146 | [38.7097, 42.7566] | 0.7977 | 4.7109
| | 10.8700 | [19.9413, 8.4457] | 0.9853 | 5.2345
| | 22.4047 | [46.2170, 38.5924] | 0.8133 | 4.6494
| | 13.2942 | [35.3470, 44.3402] | 0.9932 | 5.1732
| | 28.4262 | [55.5230, 58.2991] | 0.7195 | 3.6852
| | 16.9306 | [42.5415, 48.7390] | 0.9775 | 4.2088

| | 15.5230 | [14.9365, 21.5836] | 0.8094 | 4.1888
| | 9.0323 | [6.8035, 52.7273] | 0.9971 | 4.7124
| | 16.3050 | [16.4223, 52.6100] | 0.8358 | 5.2022
| | 9.6188 | [31.1241, 11.2023] | 0.9971 | 5.7258
| | 23.7732 | [63.8905, 54.0762] | 0.8133 | 4.1826
| | 13.6070 | [2.9717, 42.1114] | 0.9971 | 4.7062
| | 29.4819 | [1.6422, 0.3519] | 0.6491 | 6.2770
| | 17.5171 | [20.4888, 29.4428] | 0.9384 | 0.5174

| | 15.2102 | [78.7488, 29.6774] | 0.8133 | 4.1704
| | 9.1105 | [64.3597, 22.5220] | 0.9511 | 4.6940
| | 20.8407 | [78.1232, 41.9941] | 0.8094 | 5.2759
| | 12.5122 | [52.9423, 19.5894] | 0.9932 | 5.7995
| | 21.3881 | [78.7488, 20] | 0.7820 | 3.1877
| | 13.4506 | [9.0714, 39.4135] | 0.9951 | 3.7113
| | 23.9687 | [57.5562, 29.9120] | 0.7830 | 6.2832
| | 14.7019 | [76.5591, 16.8328] | 0.9932 | 0.5236

3.3. The Distribution of United Place Cells Is Closer than That of Visual Place Cells

The united grid cells and the visual place cells generate the united place cells through the supervised fuzzy ART network. The united grid cells provide the input information, the visual place cells provide the supervision information, and the matching parameter ρ of the supervised fuzzy ART network determines the distribution density of the generated united place cells. In order to make the generated united place cells have a uniform distribution density over the whole exploring space, similar to that of the visual place cells near the boundaries, the matching parameter ρ is learned. The agent explores the space at a fixed sampling interval. For the visual place cells lying in the sampling region of the genetic algorithm in Figure 6, the fuzzy ART network is used to implement category learning and real-time adjustment of the matching parameter ρ, so that there is only one category of united place cell in each visual place cell type. The learning result of the matching parameter ρ is shown in Figure 9.

In Figure 9, each matching parameter ensures that the corresponding visual place cell type contains only one category. Different matching parameters are obtained because these visual place cells have different distances to the boundary and different firing field sizes; therefore, the matching parameters fluctuate. The mean value of all 1088 matching parameters is taken as the matching parameter of the types that are not in the sampling region of the genetic algorithm. The space that does not belong to the sampling region of the genetic algorithm is then explored successively at the same interval, and category learning is implemented for each type based on the supervised fuzzy ART network. The distribution of the generated united place cells is shown in Figure 10.
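A sketch of this matching parameter adjustment for one visual place cell type, reusing fuzzy_art_step from the Section 2.3 sketch: the vigilance is lowered step by step until all inputs recorded for that type fall into a single category. The starting value and step size are placeholders.

```python
def learn_vigilance(type_inputs, rho_start=0.999, step=0.001):
    """Lower the matching parameter until one visual place cell type yields one category."""
    rho = rho_start
    while rho > 0.0:
        weights, labels = [], []
        for I in type_inputs:                      # complement-coded grid cell population rates
            weights, labels, _ = fuzzy_art_step(I, weights, labels, label=0, rho=rho)
        if len(weights) == 1:                      # the whole type resonates with one category
            return rho
        rho -= step                                # otherwise relax the match requirement
    return 0.0
```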

As can be seen from Figure 10, the united place cells generated through the supervised fuzzy ART network not only retain the distribution density of the visual place cells near the boundary but also extend this density to the whole exploring space. Compared with the visual place cells shown in Figure 4, the united place cells are more densely distributed in the region far from the boundary, so the spatial representation accuracy of the united place cells is higher.

4. Conclusion

Neurons in the hippocampal structure, such as grid cells and place cells, are the basis of environmental cognition and free movement. Research on their firing mechanisms not only deepens the understanding of the working principles of the brain but also provides a reference for the construction of brain-inspired navigation mechanisms of intelligent agents. In this paper, we propose a united generation model of grid cells and place cells, which successively generates visual place cells, visual grid cells, united grid cells, and united place cells. The model can realize spatial representation and provides a foundation for the construction of a navigation cognitive map.

In the generation process of grid cells and place cells, we only presuppose the firing field distribution of the visual place cells; the other three cell types are all results of environmental adaptation. The visual place cells generate the visual grid cell through the feedforward network, and its firing field spacing varies with the weight update time interval. The visual grid cell and self-motion information generate the united grid cell through the genetic algorithm, and its firing field extends to the whole exploring space. The visual place cells and the united grid cells generate the united place cells through the supervised fuzzy ART network, and these are evenly distributed over the whole exploring space. Therefore, compared with the existing models, the model in this paper has stronger environmental adaptability and can adaptively represent the space under the condition of an uneven distribution of environmental information.

Based on the reaction-diffusion mechanism and Hebbian learning of the weights, a grid cell can be generated from place cell inputs. In the existing models, the network parameters are preset, so the firing field of the generated grid cell cannot adapt to the environment. In this paper, the Mexican-hat input correlation is generated spontaneously from the place cell inputs. This method is discussed in a separate paper which has been accepted.

The visual grid cell and self-motion information are combined to generate the united grid cell through the genetic algorithm. The firing field of the visual grid cell, which is regarded as the sample, determines the firing parameters of the generated united grid cell. In the brain, grid cells exist in the form of modules, and the ratio of firing field spacing between any adjacent modules is almost constant. In this paper, the firing fields of the generated united grid cells do not show this characteristic, which indicates that the generation of grid cells requires other information sources in addition to the place cell inputs. Generating grid cells from multiple information sources will be one direction of our future research.

The ARTMAP network is a supervised ART network. It assigns each input to a unique category by gradually increasing the matching parameter of the ARTa network. In this paper, we simplify the ARTMAP network so as to give the fuzzy ART network supervised learning ability. In contrast to the adjustment method of the ARTMAP matching parameter, the model in this paper gradually reduces the matching parameter, so that each visual place cell type in the sampling region of the genetic algorithm generates a unique united place cell. Meanwhile, the learned matching parameter is used for classification in the other types to generate united place cells, which extends the place cell distribution near the boundaries to the whole exploring space.

The spatial representation based on grid cells and place cells only implements positioning. The cognitive map required for intelligent navigation should contain the relative relationships between independent locations and provide accurate path information for the autonomous movement of intelligent agents. Therefore, cognitive map construction and intelligent navigation based on the cognitive map will be the main content of our next research.

Data Availability

The data used to support the findings of this study are all available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (NSFC Grant no. 61603409).

References

  1. M. J. Milford, Robot Navigation from Nature, vol. 41, Springer Tracts in Advanced Robotics, Berlin, Germany, 2007.
  2. J. O’Keefe and J. Dostrovsky, “The hippocampus as a spatial map. Preliminary evidence from unit activity in the freely-moving rat,” Brain Research, vol. 34, no. 1, pp. 171–175, 1971.
  3. T. Hafting, M. Fyhn, S. Molden, M.-B. Moser, and E. I. Moser, “Microstructure of a spatial map in the entorhinal cortex,” Nature, vol. 436, no. 7052, pp. 801–806, 2005.
  4. J. Taube, R. Muller, and J. Ranck, “Head-direction cells recorded from the postsubiculum in freely moving rats. I. Description and quantitative analysis,” The Journal of Neuroscience, vol. 10, no. 2, pp. 420–435, 1990.
  5. C. Lever, S. Burton, A. Jeewajee, J. O’Keefe, and N. Burgess, “Boundary vector cells in the subiculum of the hippocampal formation,” Journal of Neuroscience, vol. 29, no. 31, pp. 9771–9777, 2009.
  6. E. I. Moser, E. Kropff, and M.-B. Moser, “Place cells, grid cells, and the brain’s spatial representation system,” Annual Review of Neuroscience, vol. 31, no. 1, pp. 69–89, 2008.
  7. T. J. Wills, F. Cacucci, N. Burgess, and J. O’Keefe, “Development of the hippocampal cognitive map in preweanling rats,” Science, vol. 328, no. 5985, pp. 1573–1576, 2010.
  8. J. O’Keefe and L. Nadel, The Hippocampus as a Cognitive Map, Oxford University Press, Oxford, UK, 1978.
  9. A. Banino, C. Barry, B. Uria et al., “Vector-based navigation using grid-like representations in artificial agents,” Nature, vol. 557, no. 7705, pp. 429–433, 2018.
  10. S. Grossberg, “From brain synapses to systems for learning and memory: object recognition, spatial navigation, timed conditioning, and movement control,” Brain Research, vol. 1621, pp. 270–293, 2015.
  11. P. K. Pilly and S. Grossberg, “How do spatial learning and memory occur in the brain? Coordinated learning of entorhinal grid cells and hippocampal place cells,” Journal of Cognitive Neuroscience, vol. 24, no. 5, pp. 1031–1054, 2012.
  12. M. Wilson and B. McNaughton, “Dynamics of the hippocampal ensemble code for space,” Science, vol. 261, no. 5124, pp. 1055–1058, 1993.
  13. M. I. Schlesiger, B. L. Boublil, J. B. Hales, J. K. Leutgeb, and S. Leutgeb, “Hippocampal global remapping can occur without input from the medial entorhinal cortex,” Cell Reports, vol. 22, no. 12, pp. 3152–3159, 2018.
  14. M. Geva-Sagiv, S. Romani, L. Las, and N. Ulanovsky, “Hippocampal global remapping for different sensory modalities in flying bats,” Nature Neuroscience, vol. 19, no. 7, pp. 952–958, 2016.
  15. M. I. Anderson and K. J. Jeffery, “Heterogeneous modulation of place cell firing by changes in context,” The Journal of Neuroscience, vol. 23, no. 26, pp. 8827–8835, 2003.
  16. K. Allen, J. N. P. Rawlins, D. M. Bannerman, and J. Csicsvari, “Hippocampal place cells can encode multiple trial-dependent features through rate remapping,” Journal of Neuroscience, vol. 32, no. 42, pp. 14752–14766, 2012.
  17. T. Solstad, H. N. Yousif, and T. J. Sejnowski, “Place cell rate remapping by CA3 recurrent collaterals,” PLoS Computational Biology, vol. 10, no. 6, Article ID e1003648, 2014.
  18. H. Stensola, T. Stensola, T. Solstad, K. Frøland, M.-B. Moser, and E. I. Moser, “The entorhinal grid map is discretized,” Nature, vol. 492, no. 7427, pp. 72–78, 2012.
  19. C. Barry, R. Hayman, N. Burgess, and K. J. Jeffery, “Experience-dependent rescaling of entorhinal grids,” Nature Neuroscience, vol. 10, no. 6, pp. 682–684, 2007.
  20. J. Krupic, M. Bauza, S. Burton, C. Barry, and J. O’Keefe, “Grid cell symmetry is shaped by environmental geometry,” Nature, vol. 518, no. 7538, pp. 232–235, 2015.
  21. P. K. Pilly and S. Grossberg, “How does the modular organization of entorhinal grid cells develop?” Frontiers in Human Neuroscience, vol. 8, p. 337, 2014.
  22. S. S. Winter, M. L. Mehlman, B. J. Clark, and J. S. Taube, “Passive transport disrupts grid signals in the parahippocampal cortex,” Current Biology, vol. 25, no. 19, pp. 2493–2502, 2015.
  23. B. J. Kraus, M. P. Brandon, R. J. Robinson, M. A. Connerney, M. E. Hasselmo, and H. Eichenbaum, “During running in place, grid cells integrate elapsed time and distance run,” Neuron, vol. 88, no. 3, pp. 578–589, 2015.
  24. H. F. Ólafsdóttir and C. Barry, “Spatial cognition: grid cell firing depends on self-motion cues,” Current Biology, vol. 25, no. 19, pp. R827–R829, 2015.
  25. J. A. Pérez-Escobar, O. Kornienko, P. Latuske, L. Kohler, and K. Allen, “Visual landmarks sharpen grid cell metric and confer context specificity to neurons of the medial entorhinal cortex,” eLife, vol. 5, Article ID e16937, 2016.
  26. S. S. Winter, B. J. Clark, and J. S. Taube, “Disruption of the head direction cell network impairs the parahippocampal grid cell signal,” Science, vol. 347, no. 6224, pp. 870–874, 2015.
  27. F. Raudies, J. R. Hinman, and M. E. Hasselmo, “Modelling effects on grid cells of sensory input during self-motion,” The Journal of Physiology, vol. 594, no. 22, pp. 6513–6526, 2016.
  28. D. Bush, C. Barry, and N. Burgess, “What do grid cells contribute to place cell firing?” Trends in Neurosciences, vol. 37, no. 3, pp. 136–145, 2014.
  29. C. S. Mallory, K. Hardcastle, J. S. Bant, and L. M. Giocomo, “Grid scale drives the scale and long-term stability of place maps,” Nature Neuroscience, vol. 21, no. 2, pp. 270–282, 2018.
  30. L. Muessig, J. Hauser, T. J. Wills, and F. Cacucci, “A developmental switch in place cell accuracy coincides with grid cell maturation,” Neuron, vol. 86, no. 5, pp. 1167–1173, 2015.
  31. P. Latuske, O. Kornienko, L. Kohler, and K. Allen, “Hippocampal remapping and its entorhinal origin,” Frontiers in Behavioral Neuroscience, vol. 11, p. 253, 2018.
  32. F. Savelli and J. J. Knierim, “Hebbian analysis of the transformation of medial entorhinal grid-cell inputs to hippocampal place fields,” Journal of Neurophysiology, vol. 103, no. 6, pp. 3167–3183, 2010.
  33. T. Solstad, E. I. Moser, and G. T. Einevoll, “From grid cells to place cells: a mathematical model,” Hippocampus, vol. 16, no. 12, pp. 1026–1031, 2006.
  34. B. Si and A. Treves, “The role of competitive learning in the generation of DG fields from EC inputs,” Cognitive Neurodynamics, vol. 3, no. 2, pp. 1770–1787, 2009.
  35. A. Jauffret, N. Cuperlier, P. Gaussier, and P. Tarroux, “Multimodal integration of visual place cells and grid cells for navigation tasks of a real robot,” in Proceedings of the 12th International Conference on Simulation of Adaptive Behavior, pp. 136–145, Odense, Denmark, August 2012.
  36. A. Jauffret, N. Cuperlier, and P. Gaussier, “From grid cells and visual place cells to multimodal place cell: a new robotic architecture,” Frontiers in Neurorobotics, vol. 9, p. 1, 2015.
  37. P. Gaussier, J. P. Banquet, N. Cuperlier et al., “Merging information in the entorhinal cortex: what can we learn from robotics experiments and modeling?” The Journal of Experimental Biology, vol. 222, no. 1, Article ID jeb186932, 2019.
  38. Y. Burak and I. R. Fiete, “Accurate path integration in continuous attractor network models of grid cells,” PLoS Computational Biology, vol. 5, no. 2, Article ID e1000291, 2009.
  39. N. Burgess, C. Barry, and J. O’Keefe, “An oscillatory interference model of grid cell firing,” Hippocampus, vol. 17, no. 9, pp. 801–812, 2007.
  40. R. F. Langston, J. A. Ainge, J. J. Couey et al., “Development of the spatial representation system in the rat,” Science, vol. 328, no. 5985, pp. 1576–1580, 2010.
  41. L. Castro and P. Aguiar, “A feedforward model for the formation of a grid field where spatial information is provided solely from place cells,” Biological Cybernetics, vol. 108, no. 2, pp. 133–143, 2014.
  42. A. Stepanyuk, “Self-organization of grid fields under supervision of place cells in a neuron model with associative plasticity,” Biologically Inspired Cognitive Architectures, vol. 13, pp. 48–62, 2015.
  43. T. D’Albis and R. Kempter, “A single-cell spiking model for the origin of grid-cell patterns,” PLoS Computational Biology, vol. 13, no. 10, Article ID e1005782, 2017.
  44. M. M. Monsalve-Mercado and C. Leibold, “Hippocampal spike-timing correlations lead to hexagonal grid fields,” Physical Review Letters, vol. 119, no. 3, 2017.
  45. S. N. Weber and H. Sprekeler, “Learning place cells, grid cells and invariances with excitatory and inhibitory plasticity,” eLife, vol. 7, 2018.
  46. Y. Dordek, D. Soudry, R. Meir, and D. Derdikman, “Extracting grid cell characteristics from place cell inputs using non-negative principal component analysis,” eLife, vol. 5, Article ID e10094, 2016.
  47. G. A. Carpenter and S. Grossberg, “ART 2: self-organization of stable category recognition codes for analog input patterns,” Applied Optics, vol. 26, no. 23, pp. 4919–4930, 1987.
  48. G. A. Carpenter and S. Grossberg, “ART 3: hierarchical search using chemical transmitters in self-organizing pattern recognition architectures,” Neural Networks, vol. 3, no. 2, pp. 129–152, 1989.
  49. G. A. Carpenter, S. Grossberg, and D. B. Rosen, “Fuzzy ART: fast stable learning and categorization of analog patterns by an adaptive resonance system,” Neural Networks, vol. 4, no. 6, pp. 759–771, 1991.
  50. G. A. Carpenter, S. Grossberg, and J. H. Reynolds, “ARTMAP: supervised real-time learning and classification of nonstationary data by a self-organizing neural network,” Neural Networks, vol. 4, no. 5, pp. 565–588, 1991.
  51. G. A. Carpenter, S. Grossberg, N. Markuzon, J. H. Reynolds, and D. B. Rosen, “Fuzzy ARTMAP: a neural network architecture for incremental supervised learning of analog multidimensional maps,” IEEE Transactions on Neural Networks, vol. 3, no. 5, pp. 698–713, 1992.
  52. J. D. Murray, Mathematical Biology II: Spatial Models and Biomedical Applications, Springer, New York, NY, USA, 2003.

Copyright © 2020 Kun Han et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

