Modeling and Analysis of Data-Driven Systems through Computational Neuroscience
Lei Li, Tingting Zou, Tao Cai, Dejiao Niu, Yuquan Zhu, "A Fast Spatial Pool Learning Algorithm of Hierarchical Temporal Memory Based on Minicolumn's Self-Nomination", Computational Intelligence and Neuroscience, vol. 2021, Article ID 6680833, 13 pages, 2021. https://doi.org/10.1155/2021/6680833
A Fast Spatial Pool Learning Algorithm of Hierarchical Temporal Memory Based on Minicolumn's Self-Nomination
Abstract
As a new type of artificial neural network model, hierarchical temporal memory (HTM) has become a focus of current research and application. Sparse distributed representation is the basis of the HTM model, but the existing spatial pool learning algorithms have a high training time overhead and may cause the spatial pool to become unstable. To overcome these disadvantages, we propose a fast spatial pool learning algorithm of HTM based on the minicolumn's self-nomination, where the minicolumns are selected according to their load-carrying capacity and the synapses are adjusted using compressed encoding. We have implemented a prototype of the algorithm and carried out experiments on three datasets. It is verified that the training time overhead of the proposed algorithm is almost unaffected by the encoding length and that the spatial pool becomes stable after fewer iterations of training. Moreover, the training of new inputs does not affect the already trained results.
1. Introduction
Neural computing is a hot topic in the fields of artificial intelligence and machine learning. Hierarchical temporal memory (HTM) is a machine learning technology that simulates the organization and mechanisms of cerebral cortex cells as well as the information processing pipeline of the human brain. HTM is trained with a large amount of time-series data and stores a large number of pattern sequences. Through this memory, HTM can predict future inputs or detect whether the current input is abnormal according to the temporal history information.
HTM works based on sparse distributed representations (SDRs) of the inputs. The spatial pool learning algorithm (SPL) establishes the connection between the input code and the minicolumns' synapses using the Hebbian learning rule and activates a column set through the significant bits in the input code, thus realizing the SDR of the input. In this process, the spatial pool is expected to maintain a certain degree of flexibility like the cerebral cortex, so several overlapping goals determine how the spatial pool operates and learns, which include the following: (1) all columns should learn to represent something useful regardless of how many columns the spatial pool has. (2) A region needs to form a sparse representation of the input, i.e., only a small percentage of the columns in the spatial pool should be "winners" for an input. (3) All columns should represent nontrivial patterns of a certain level of complexity. (4) A column should avoid responding strongly to many distinct, unrelated input patterns, i.e., a column should represent a limited number of input patterns, sometimes only one. (5) The spatial pool should be able to self-adjust to cope with various changes, including damage to the columns and damage to or changes of the sensory organs. SPL uses the boosting rule, inhibition rule, minimal threshold, large synaptic pool, and Hebbian rule to enable the dynamic configuration of the spatial pool, thus achieving the desired effect after multiple rounds of training. In conventional training, the time overhead of the algorithm, the sparsity of the active minicolumns, the stability of the spatial pool, and the utilization rate of the minicolumns are the main factors used to evaluate HTM training. In SPL, the large synaptic pool has a great impact on the training time overhead, and the boosting and inhibition rules lead to instability of the spatial pool.
In this paper, we analyze the characteristics of SPL and propose a fast spatial pool learning algorithm of HTM based on the minicolumn's self-nomination (SPL_SN), which includes the calculation rules of the self-nomination status of the minicolumn, the selection rule of the minicolumn based on load-carrying capacity and level, and the synapse adjustment rule based on compressed encoding. A prototype of the proposed algorithm is tested on three kinds of datasets, and four criteria are used to evaluate the time overhead, the stability of the spatial pool, the sparsity of active minicolumns, and the utilization rate of all columns. The main contributions are as follows:
(1) SPL_SN reconstructs the structure of the spatial pool and adds the matched input information to each minicolumn. Based on the load-carrying capacity of the minicolumn, i.e., the utilization degree of the minicolumn, SPL_SN can make the selected minicolumns sparsely distributed in the spatial pool.
(2) Different from SPL, which uses overlapping values to select minicolumns directly, SPL_SN uses the overlapping values to calculate the self-nomination status of the minicolumn. The self-nomination status and the load-carrying capacity of the minicolumn together improve the utilization rate of minicolumns in the spatial pool.
(3) Different from SPL, which uses the Hebbian rule to adjust the link between the input and the minicolumn, SPL_SN designs a synaptic adjustment rule based on compressed encoding that reduces the training time overhead. The learning of a new input does not affect the trained information in the minicolumn, ensuring the dynamic adaptability of HTM and improving the efficiency of HTM training and the stability of the spatial pool.
(4) The prototype of SPL_SN is implemented in the HTM open-source framework, and different types of datasets are used for testing and analysis.
It is verified that the algorithm not only preserves the flexibility of the spatial pool but also improves the training efficiency and the stability of the spatial pool.
2. Related Works
Neural networks (NNs) have achieved great success in many fields, e.g., classification [1], recognition [2], prediction, and control. To further improve their performance and efficiency, NNs are also being explored in different directions, such as using dendritic neurons to build networks [3] and fusing with other neural networks [4]. HTM is a new artificial neural network model based on Jeff Hawkins' memory prediction framework [5, 6], and its main workflow is shown in Figure 1. The network space of HTM is described as a region composed of multiple columns, and different regions have hierarchical relationships. There are many cells on a minicolumn. The cells on each minicolumn share a proximal dendrite that is used to receive input stimuli. Cells on the minicolumns have many branches of distal dendrites, which can be used to build relationships with other cells [7, 8]. After the encoder, the input is transformed into a binary vector of equal length, and the number of components with value 1 in each vector is equal. Through SPL, the vector activates a few minicolumns to represent the input data. The temporal pool learning algorithm (TPL) selects some cells on the activated minicolumns to express the input location information and establishes dendrite branches on these cells to construct the correlation between inputs. A sequence is learned through SPL and TPL to form memory in the network space of HTM. When the input sequence is consistent with a memorized sequence, HTM can predict the next content based on memory [9]. The two core algorithms of HTM, SPL and TPL, are used to construct the space-invariant [10] and time-invariant [11] features of input sequences, which are the key technologies enabling HTM to distinguish similar sequences.
With its advantages in time-series data processing, HTM has been used to deal with a variety of learning tasks in the field of time-series data. For example, HTM can be used to identify traffic anomalies in computer networks [12, 13], to detect abnormal behavior of website users [14], to process biological signals, to predict position in the context of smart homes [15], to identify abnormal trajectories in constructed geospatial travel patterns [16], to detect anomalies in stock trading data [17], to detect optic nerve abnormalities in retinal images [18], to detect online sequential attacks [19], to establish IT event response systems, and so on. These applications have achieved good results.
For SPL, Ahmad and Hawkins give the characteristic description of SDRs and the expression form of information in the spatial pool [20], which provides theoretical guidance for subsequent research. Lattner uses vectors to express and study the spatial pool and proposes the formula for calculating the overlapping value as well as the learning rules [21]. Byrne describes these formulas with matrices [22], which provides the mathematical basis for the derivation of the rules. Leake et al. study and discuss the influence of initialization parameters on spatial pool calculation [23]. Mnatzaganian et al. give a comprehensive mathematical framework of the spatial pool [24], so that subsequent scholars can understand the core characteristics of SPL and improve the algorithm from a mathematical point of view. These studies quantitatively analyze the existing HTM and provide guidance for the adjustment of training parameters. If the application has a large input space, the large synaptic pool maintained by the spatial pool causes intolerable training time overhead, and the boosting and inhibition rules make the spatial pool unstable. Although the training results of the spatial pool can be stabilized by turning off learning, this operation prevents new inputs from entering the training process.
3. The Proposed Algorithm
3.1. Analysis of the Current Spatial Pool Learning Algorithm
SPL mainly uses the overlapping value to activate minicolumns and dynamically adjusts their synaptic values to strengthen the relationship between minicolumn and input. With boosting factors, less-active minicolumns can also win the competition, so each minicolumn in the spatial pool can participate in representing the input. With the inhibition radius, the activated minicolumns do not concentrate together, which ensures that the winning set of minicolumns is sparsely scattered in the spatial pool. By adjusting the minicolumns' synapses according to the Hebbian competition rule, the training results of the spatial pool tend to stabilize after several iterations.
However, the minicolumn activation rule and minicolumn synaptic adjustment rule make the training results of the spatial pool difficult to stabilize, and the learning efficiency of the algorithm is low. Next, the limitations of these two rules are analyzed in terms of the stability of the spatial pool and the efficiency of the current algorithm.
3.1.1. The Limitations of the Minicolumn Activation Rule
SPL uses the following equation to calculate the overlap between the minicolumn and the input:

$$o_i = b_i \sum_{j \in D_i} w_{ij}\, x_j$$

where $w_{ij}$ indicates whether the connection between the ith minicolumn and the jth component in the input code is connected, $x_j$ is the value of the jth component in the input code, $b_i$ is the boosting factor of the ith minicolumn, and $D_i$ is the receptive domain of the ith minicolumn.
In order to activate fewer minicolumns to participate in the input expression, the algorithm takes the overlapping value multiplied by the minicolumn's boosting factor as the competition basis. The boosting factor is calculated from the activation frequency of the minicolumn in the statistical period and the average activation frequency of all minicolumns within the inhibition radius. $\bar{a}_i(t)$ refers to the activation frequency of the ith minicolumn at time t, computed over the statistical period $T$:

$$\bar{a}_i(t) = \frac{1}{T} \sum_{t'=t-T+1}^{t} a_i(t')$$

$\langle \bar{a}_i(t) \rangle$ refers to the average activation frequency of all minicolumns in the inhibition radius at time t, where $N_i$ is the set of all minicolumns within the inhibition radius of the ith minicolumn:

$$\langle \bar{a}_i(t) \rangle = \frac{1}{|N_i|} \sum_{j \in N_i} \bar{a}_j(t)$$
Using these two mean values, the boosting factor of the minicolumn can be calculated by the following equation, where β is the adjustment factor:

$$b_i(t) = e^{-\beta \left( \bar{a}_i(t) - \langle \bar{a}_i(t) \rangle \right)}$$
The value of the boosting factor will increase greatly if the minicolumn is not activated for a long time. In this case, a minicolumn with a lower activation frequency can be activated by the following equation:

$$a_i(t) = \begin{cases} 1, & o_i \ge \operatorname{kmax}(Z_i, k) \text{ and } o_i \ge \theta \\ 0, & \text{otherwise} \end{cases}$$

where $a_i(t)$ indicates whether the ith minicolumn is activated at time $t$, $Z_i$ is the set of overlapping values of the minicolumns within the inhibition radius of the ith minicolumn, $\operatorname{kmax}(Z_i, k)$ returns the kth largest value in $Z_i$, $k$ is the number of minicolumns to be activated, and $\theta$ is the minimum threshold to activate the minicolumn.
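The activation pipeline above (overlap, boosting, k-winner competition) can be sketched in a few lines. This is a simplified illustration with global inhibition and toy values, not the actual NuPIC implementation:

```python
def spl_activate(W, x, boost, k, theta):
    """One step of the SPL activation rule (global-inhibition sketch).

    W     : list of 0/1 synapse rows, one per minicolumn
    x     : binary input code
    boost : boosting factor b_i per minicolumn
    k     : number of minicolumns to activate
    theta : minimal overlap threshold
    """
    # o_i = b_i * sum_j w_ij * x_j
    overlap = [b * sum(w * v for w, v in zip(row, x))
               for row, b in zip(W, boost)]
    # the k-th largest overlap acts as the activation cutoff
    cutoff = sorted(overlap, reverse=True)[k - 1]
    return [o >= cutoff and o >= theta for o in overlap]

# toy example: 5 minicolumns over an 8-bit input code
W = [[1, 1, 0, 0, 0, 0, 0, 0],
     [0, 0, 1, 1, 0, 0, 0, 0],
     [1, 0, 1, 0, 1, 0, 0, 0],
     [0, 0, 0, 0, 1, 1, 1, 0],
     [0, 0, 0, 0, 0, 0, 1, 1]]
x = [1, 1, 1, 0, 0, 0, 0, 0]
active = spl_activate(W, x, boost=[1.0] * 5, k=2, theta=1)
```

With uniform boosting, the two minicolumns whose receptive domains best overlap the 1-components of `x` win the competition; raising one minicolumn's boost factor can flip the outcome, which is exactly the instability discussed below.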
First, the code length determines the number of the minicolumn's synapses and affects the size of the minicolumn's receptive domain. Long encodings increase the workload of calculating the overlapping values and adjusting synaptic permanence, so the computational efficiency of the current spatial pool learning algorithm is greatly affected by the code length. Secondly, this minicolumn activation rule makes the spatial pool unstable, especially when the input training space is small and the spatial pool capacity is large. This is because the boosting strategy ensures that all minicolumns participate in expressing the input while ignoring the need for an individual input to be stably expressed. For example, if only one input $X$ is trained and the overlap between $X$'s code and $y_i$ is greater than the kth value within $N_i = \{y_1, y_2, \ldots, y_i, y_{i+1}, \ldots, y_{i+m}\}$, then $y_i$ will be activated in the early rounds of training. As the training process goes on, other minicolumns are seldom activated, and their boosting factors become very large. Then, one of them wins the competition and is activated instead. This makes the minicolumn set activated by $X$ unstable, which leads to ineffective distal synaptic connections in the temporal pool.
3.1.2. The Limitation of Minicolumn Synaptic Regulation
When a minicolumn set is activated by the input, SPL adjusts the permanence of the proximal synapses on the minicolumns according to the Hebbian rule, which can be described as follows:

$$p_{ij}(t+1) = \begin{cases} \min\left(p_{ij}(t) + p^{+},\, 1\right), & x_j = 1 \text{ and } i \in A(t) \\ \max\left(p_{ij}(t) - p^{-},\, 0\right), & x_j = 0 \text{ and } i \in A(t) \end{cases}$$

where $p^{+}$ represents the increment of permanence and $p^{-}$ represents the decrement of permanence, $p_{ij}$ represents the permanence of the connection between the ith minicolumn and the jth component in the input code, $x_j$ is the value of the jth component in the input code, and $A(t)$ is the set of active minicolumns at time t.
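The Hebbian permanence update can be sketched as follows; the increment and decrement constants are illustrative values, not the ones used in the paper:

```python
P_INC, P_DEC = 0.05, 0.03  # permanence increment / decrement (illustrative values)

def hebbian_update(P, x, active):
    """Hebbian permanence adjustment applied only to active minicolumns.

    P      : permanence rows in [0, 1], one per minicolumn
    x      : binary input code
    active : set of indices of active minicolumns A(t)
    """
    new_P = []
    for i, row in enumerate(P):
        if i not in active:
            new_P.append(list(row))  # inactive minicolumns are untouched
            continue
        # synapses aligned with a 1-component are reinforced, others weakened
        new_P.append([min(p + P_INC, 1.0) if v == 1 else max(p - P_DEC, 0.0)
                      for p, v in zip(row, x)])
    return new_P
```

Note how every update to an active minicolumn also weakens the synapses for the 0-components, which is the mechanism behind the "forgetting" effect discussed next.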
Usually, a minicolumn is involved in the expression of multiple inputs. The regulation of minicolumn synapses not only enhances the expression ability of the minicolumn for the current input but also reduces its expression ability for other inputs. If a trained input does not participate in SPL for a long time, it loses the ability to activate its minicolumns. We can call this process forgetting. Forgetting has its significance in biological systems, such as reducing the memory of uncommon concepts, reducing energy consumption, and highlighting the cognition of recently learned content. However, in the HTM model, forgetting in the spatial pool brings great cost to subsequent learning. If a forgotten input enters SPL again, it activates a new minicolumn set, which leads to an unstable expression of a given input. In this way, what the temporal pool has learned becomes meaningless or unpredictable. Therefore, the forgetting mechanism is not suitable for SPL, although it is very useful for the TPL of HTM.
The Hebbian rule adjusts the spatial pool dynamically to a stable state after iterative training. If a new input destroys much information stored in the current HTM, the spatial pool needs many iterations of training to reach a new stable state. It will affect the efficiency and effectiveness of HTM.
3.2. The SelfNominations of the Minicolumn
In SPL, the minicolumn has two states according to the input stimulus: active and inactive. In SPL_SN, however, the minicolumn is given one of three self-nomination states based on the overlapping values: same, similar, and distinct. The status indicates the willingness of the minicolumn to participate in the expression of the current input. The spatial pool then selects appropriate minicolumns in the different states to activate in order to express the input. In doing so, the spatial pool is given the decision over which minicolumns to activate. The spatial pool can combine the statistical information of the minicolumns to select minicolumns more reasonably and improve computational efficiency.
The overlapping value between the minicolumn and the input is only used to determine the self-nomination state of the minicolumn, and it can be calculated by the following equation:

$$o_i = \sum_{j \in D_i} w_{ij}\, x_j$$
According to the overlap value, the minicolumn can be given one of three different self-nomination states by the following equation:

$$s_i = \begin{cases} \text{same}, & o_i \ge \theta \\ \text{similar}, & 0 < o_i < \theta \\ \text{distinct}, & o_i = 0 \end{cases}$$

where $\theta$ represents the threshold at which the minicolumn can be activated and $s_i$ expresses the recommendation state of the ith minicolumn under the current input training environment. The value of $s_i$ can be same, similar, or distinct. If $s_i$ is same, it means that the minicolumn can be activated by the current input, and a certain input that the minicolumn has participated in is extremely close to or even the same as the current input. If $s_i$ is similar, it means that the minicolumn overlaps with the current input, but not enough to be activated; the content of a certain input that the minicolumn has participated in is similar to the current input. If $s_i$ is distinct, it means that the minicolumn overlaps little with the current input, and all inputs that the minicolumn has participated in are quite different from the current input.
According to the self-nomination state, the minicolumns can be divided into three levels, which represent the similarity between the learned content and the current input, that is, different degrees of willingness to participate in expressing the current input. Based on this, the minicolumn selection algorithm can improve the utilization of every minicolumn.
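The three-state mapping can be sketched as a small helper. The zero-overlap boundary for the distinct state is an assumption on our part, since the text explicitly defines only the activation threshold θ:

```python
def self_nomination(overlap, theta):
    """Map a minicolumn's overlap value to its self-nomination state.

    'same'     : overlap reaches the activation threshold theta
    'similar'  : nonzero overlap below theta
    'distinct' : no overlap with the current input (assumed boundary)
    """
    if overlap >= theta:
        return "same"
    if overlap > 0:
        return "similar"
    return "distinct"
```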
3.3. The Minicolumn Activation Rule
To improve the minicolumn selection for the input, we reconstruct the spatial pool's structure so that more information about each minicolumn can be used. The minicolumn not only calculates its degree of willingness to participate in the current input, i.e., the self-nomination state, but also saves the content and the number of different inputs it has participated in through the previous learning process. This information is called the load-carrying capacity of the minicolumn, which can be used as a measure of the utilization degree of the minicolumn. By fully utilizing the self-nomination status and load-carrying capacity of the minicolumns, an appropriate minicolumn set can be selected to express the current input. Suppose that $c$ minicolumns are used to express an input in the spatial pool and the minicolumn sets of adjacent inputs overlap in at most $m$ minicolumns. The idea of the fast training algorithm is described as follows:
(1) The current self-nomination status of each minicolumn is calculated according to the input.
(2) For the minicolumns in the same state, the algorithm selects the minicolumns whose load-carrying information already contains the current input to activate and counts them as $n_1$. If $n_1 = c$, it returns this set of active minicolumns for the input. Otherwise, $n_2$ minicolumns with the lowest load-carrying capacity are randomly selected from the remaining minicolumns for activation, where $n_2$ takes the maximum value that satisfies the constraint $n_1 + n_2 \le c$. Then, the algorithm updates the load-carrying information of the $n_2$ minicolumns, saves the current input on these minicolumns, and increases the number of inputs that each minicolumn participates in by 1.
(3) For the minicolumns in the similar state, $n_3$ minicolumns with the lowest load-carrying capacity are randomly selected to activate, where $n_3$ takes the maximum value that satisfies the constraints $n_3 \le m$ and $n_1 + n_2 + n_3 \le c$. Then, the algorithm updates the load-carrying information of the $n_3$ minicolumns, saves the current input on these minicolumns, and increases the number of inputs that each minicolumn participates in by 1.
(4) For the minicolumns in the distinct state, the remaining $c - (n_1 + n_2 + n_3)$ minicolumns are selected to activate. Then, the algorithm updates the load-carrying information of these minicolumns, saves the current input on these minicolumns, and increases the number of inputs that each minicolumn participates in by 1.
SPL_SN activates a predetermined number of minicolumns in the same or similar state, which ensures that the training results of the spatial pool satisfy the property that the minicolumn sets of similar inputs have a certain overlap. The random minicolumn selection strategy ensures that the set of active minicolumns is sparsely scattered in the spatial pool and improves the utilization rate of the minicolumns as much as possible. The parameter $c$ limits the number of activated minicolumns to ensure the convergence of the algorithm.
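The selection procedure above can be sketched roughly as follows. This is a simplified, hypothetical reading of the rules: the random tie-breaking, the level ordering, and the omission of the overlap bound and load-bookkeeping are our simplifications, not the authors' exact algorithm:

```python
import random

def select_minicolumns(states, load, matched, c, seed=0):
    """Sketch of SPL_SN minicolumn selection (hypothetical simplification).

    states  : self-nomination state per minicolumn ('same'/'similar'/'distinct')
    load    : load-carrying capacity (number of inputs each minicolumn expresses)
    matched : indices of 'same' minicolumns that already store the current input
    c       : target number of active minicolumns
    """
    rng = random.Random(seed)
    active = [i for i in matched if states[i] == "same"][:c]
    # fill up with the least-loaded minicolumns, preferring 'same', then
    # 'similar', then 'distinct'; shuffle first so equally loaded candidates
    # are picked randomly (sort is stable, so load order still dominates)
    for level in ("same", "similar", "distinct"):
        if len(active) >= c:
            break
        pool = [i for i in range(len(states))
                if states[i] == level and i not in active]
        rng.shuffle(pool)
        pool.sort(key=lambda i: load[i])
        active.extend(pool[:c - len(active)])
    return active
```

Because the least-loaded minicolumns are preferred at every level, repeated training spreads the inputs evenly across the spatial pool, which is the load-carrying idea behind the utilization results in Section 4.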
3.4. The Synaptic Adjustment Rule
There is a mapping between the minicolumn set activated by SPL_SN and the current input. Therefore, it is necessary to add synaptic connections to the input to maintain the mapping. How the active minicolumns increase their synaptic connections is shown in the following equation:

$$S_i \leftarrow S_i \cup \{C(\text{CurrentInput})\}, \quad i = 1, \ldots, c$$

where $c$ is the number of active minicolumns selected by the algorithm for the current input, $S_i$ represents the connected synapses of the ith minicolumn, CurrentInput is the content of the current input, and $C(\text{CurrentInput})$ is the compressed code of the current input.
SPL is based on a sparse distributed input code. The encoding rule typically expresses the input with a fixed-length binary vector in which the number of components with value 1 is small and fixed. This rule scatters the input's characteristics across the components with value 1, which we call 1-components. By encoding similar inputs with a certain number of overlapping 1-components, the influence of noise on the input can be reduced and the robustness of the system improved. In the training process of the spatial pool, the number and positions of the 1-components are the factors used to calculate the overlapping values and adjust the synapses. Therefore, we propose an effective compressed encoding method that expresses scalars with shorter vectors.
For any scalar i, the corresponding encoding rule can be given as shown in the following equation:

$$C(i) = \{p_i,\, p_i + 1,\, \ldots,\, p_i + w - 1\}, \quad p_i = (w - k)\, i$$

where $w$ represents the number of 1-components in the code, $k$ represents the number of overlapping 1-components in the codes of adjacent scalars, $p_i$ represents the starting position of the 1-components in the input coding, and $C(i)$ is the compressed code of scalar i.
In the input compressed encoding, a long binary vector is represented as the position vector of its 1-components, and the length of the uncompressed code is hidden. Therefore, no matter what the input value is, the length of the compressed code is fixed and short. This encoding rule improves the efficiency of calculating the overlapping values. When the input code is superimposed on the minicolumn, it indicates that the minicolumn has participated in the expression of the input, so that the spatial pool can express the input with a relatively fixed set of minicolumns after fewer rounds of training. Moreover, when a new input enters the training and modifies the connections of the minicolumn's synapses, the existing synaptic connections are not adjusted, so the trained results are not affected. Although the adjusting rule shown in equation (11) may result in a large overlap between the minicolumn and an uncorrelated input code, the minicolumn will not be activated because the minicolumn's load-carrying capacity does not contain that input's information. Thus, different inputs can be mapped to different minicolumn sets.
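A minimal sketch of the compressed scalar encoding and a position-based overlap computation. It assumes the common scalar-encoder layout in which the start position of the 1-components advances by (w − k) per unit; the parameter values and the exact formula used by the authors are illustrative assumptions:

```python
def compress_encode(i, w=21, k=10):
    """Compressed code of scalar i as (start, width) instead of a long
    binary vector; consecutive integers share k of their w 1-components.

    The layout (start advancing by w - k per unit) is an assumed
    reconstruction, since the paper's exact formula was not recoverable.
    """
    start = (w - k) * i
    return (start, w)

def overlap(code_a, code_b):
    """Overlap of two compressed codes, computed on positions only,
    without ever materializing the full-length binary vectors."""
    (sa, wa), (sb, wb) = code_a, code_b
    lo, hi = max(sa, sb), min(sa + wa, sb + wb)
    return max(0, hi - lo)
```

With this representation, the cost of an overlap computation depends only on the compressed code size, not on the uncompressed code length, which is why the training time overhead in Section 4 is nearly flat across code lengths.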
3.5. The Flow of SPL_SN
The flow of SPL_SN is shown in Table 1.

SPL_SN modifies the rules of SPL. It calculates the self-nomination states of the minicolumns based on the overlapping values to express the minicolumns' willingness to represent the input and activates minicolumns based on their self-nomination states and load-carrying capacity. The minicolumn selection rule is dominated by the spatial pool, so it can make full use of the minicolumns and improve the efficiency of spatial pool learning. The compressed encoding algorithm uses the positions of the 1-components to reconstruct the code, which simplifies the expression of the code and improves the efficiency of queries. When learning new inputs, the synaptic adjustment rule based on compressed encoding does not affect the trained information in the minicolumns, which improves the efficiency of training and the stability of the spatial pool.
4. Evaluation and Analysis
Built upon Numenta's open-source HTM code, a prototype system of SPL_SN is implemented. Three datasets are used to evaluate SPL and SPL_SN.
4.1. Datasets and Evaluation Metrics
In this paper, three datasets are used to evaluate SPL and SPL_SN. The first two datasets are constructed artificially using the integers between 1 and N as the input space. Relative to the spatial pool settings, N takes 10 for the smaller input space and 5000 for the larger one. The last dataset contains time-period statistics of New York taxi passengers, constructed by taking the number of passengers every 30 minutes between July 1, 2014, and October 13, 2014, for 5,000 data points in total.
We train SPL and SPL_SN with the three datasets separately and analyze the advantages and disadvantages of the algorithms through the results of each training round. The evaluation criteria are listed below:
(1) Time overhead: for the same input space and different code lengths, the computational efficiency of the algorithm is evaluated by the time overhead of one round of training.
(2) Stability of the spatial pool: this includes two meanings. The first is whether the spatial pool can express an input with a fixed set of minicolumns. The second is whether a new input affects the trained results. The mean stability (MS) is used to evaluate the stability of the spatial pool. Assume that the set of minicolumns corresponding to input i after the kth round is $C_i^{(k)}$; then, the stability of input i after the kth round, $s_i^{(k)}$, is calculated with the following equation:

$$s_i^{(k)} = \frac{\left| C_i^{(k)} \cap C_i^{(k-1)} \right|}{\left| C_i^{(k)} \right|}$$

With the stability of each input, the mean stability of the spatial pool after the kth round is calculated using the following equation, where M is the number of inputs:

$$\mathrm{MS}^{(k)} = \frac{1}{M} \sum_{i=1}^{M} s_i^{(k)}$$

(3) The sparsity of input representation: this describes whether a spatial pool can express the input with a relatively fixed number of minicolumns. After each round of training, we count the number of active minicolumns for each input and calculate their mean ($\mu^{(k)}$) and standard deviation ($\sigma^{(k)}$), which evaluate the sparsity of the input representation. These two values are calculated using the following equations, where M is the number of inputs:

$$\mu^{(k)} = \frac{1}{M} \sum_{i=1}^{M} \left| C_i^{(k)} \right|, \qquad \sigma^{(k)} = \sqrt{\frac{1}{M} \sum_{i=1}^{M} \left( \left| C_i^{(k)} \right| - \mu^{(k)} \right)^2}$$

(4) The utilization of minicolumns: this describes whether the minicolumns are activated evenly. The standard deviation of the minicolumn activation frequency is used for this indicator. After the kth round, the activation frequency of minicolumn j, $f_j^{(k)}$, is calculated by the following equation, where $a_{ij}^{(k)}$ indicates whether input i activates minicolumn j and M is the number of inputs:

$$f_j^{(k)} = \sum_{i=1}^{M} a_{ij}^{(k)}$$
The average activation frequency of all minicolumns, $\bar{f}^{(k)}$, can be calculated by the following equation, where N is the number of minicolumns:

$$\bar{f}^{(k)} = \frac{1}{N} \sum_{j=1}^{N} f_j^{(k)}$$
The standard deviation of the minicolumn activation frequency, $\sigma_f^{(k)}$, can be calculated by the following equation:

$$\sigma_f^{(k)} = \sqrt{\frac{1}{N} \sum_{j=1}^{N} \left( f_j^{(k)} - \bar{f}^{(k)} \right)^2}$$
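The stability and utilization metrics can be computed directly from the per-round minicolumn sets. A minimal sketch, where representing each input's active minicolumns as a Python set is our assumption:

```python
def mean_stability(prev_sets, curr_sets):
    """MS after round k: the average fraction of each input's minicolumn
    set that is retained from the previous round (1.0 = fully stable)."""
    per_input = [len(p & c) / len(c) for p, c in zip(prev_sets, curr_sets)]
    return sum(per_input) / len(per_input)

def utilization_std(curr_sets, num_minicolumns):
    """Standard deviation of minicolumn activation frequency; 0 means
    every minicolumn is activated equally often (even utilization)."""
    freq = [0] * num_minicolumns
    for s in curr_sets:
        for j in s:
            freq[j] += 1
    mean = sum(freq) / num_minicolumns
    var = sum((f - mean) ** 2 for f in freq) / num_minicolumns
    return var ** 0.5
```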
4.2. Testing Environment
The parameters of the spatial pool are shown in Table 2.

We have implemented the algorithm using Java on the Windows 10 operating system, and the program runs on a PC with an Intel Core i7-7500U 2.9 GHz CPU and 8 GB of memory.
4.3. Results and Analysis
4.3.1. The Artificial Dataset with Small Input Space
The input space is constructed by numbers from 1 to 10. The coding length is set to 5000, 20000, and 40000, respectively. The two algorithms train all inputs for 10 rounds with different code lengths. Then, we collect and compare the results after each round.
Figure 2 shows the training time overhead in each round with different coding lengths. It can be seen that, with the increase of code length, the time overhead of SPL increases rapidly, while the time overhead of SPL_SN is much lower and almost does not increase. This is because, in SPL, the increase of code length increases the number of the minicolumn’s synapses and also affects the size of the receptive domain, which increases the workload of calculating the overlapping value. In SPL_SN, the length of the compressed code is fixed and short, and the number of the minicolumn’s synapses which are composed of a small number of input codes is small, so the workload of calculating overlapping values is low and hardly changed. It shows that the time cost of SPL_SN is hardly affected by the code length.
Figure 3 shows the stability of the spatial pool. When SPL trains data with a small input space, the stability of the spatial pool tends to increase with more training rounds. However, the spatial pool cannot be stabilized completely because of the boosting and inhibition rules. Therefore, SPL suggests that the learning function be turned off after a certain number of training rounds, to avoid the instability of the spatial pool introducing invalid distal synapses into the temporal pool. However, turning off the learning function makes the spatial pool unable to deal with new inputs. SPL_SN maintains high stability of the spatial pool. This is because the synaptic adjustment rule based on compressed encoding enables the proximal synapses of the minicolumn to quickly establish and solidify the mapping to the input code, so the input is expressed with a relatively stable minicolumn set. Each subsequent training round can be regarded as training on a new input relative to the trained content. If training on new inputs does not affect the trained content, the training algorithm has strong anti-interference performance. SPL_SN does not change the trained content in any training round, which shows that the algorithm has high anti-interference performance.
Figure 4 shows the results on the sparsity of the input representation. SPL_SN, like SPL, is able to express the input with a sparse minicolumn set according to the parameters.
Figure 5 shows the minicolumn utilization. Because of the small input space, neither algorithm can make all minicolumns participate in the input expression. In the early training of SPL, the activated minicolumns lie in a small range of the spatial pool; with the boosting and inhibition rules, they gradually scatter across the spatial pool. SPL_SN can express the input with a more dispersed set of minicolumns from the first round of training.
4.3.2. The Artificial Dataset with Larger Input Space
The input space is constructed by numbers from 1 to 5000. The code length is set to 5000, 20000, and 40000, respectively. The two algorithms train all inputs for 10 rounds with different code lengths. Then, we collect and compare the results after each round.
Figure 6 shows the training time overhead in each round with different coding lengths. It can also be seen that, as the code length increases, the time cost of SPL increases rapidly, while the time cost of SPL_SN is lower and almost does not increase. Compared with the previous dataset, the training time overhead here is much higher because the number of inputs is much larger. This shows that the time overhead of SPL is affected by both the size of the input space and the code length, while the time overhead of SPL_SN is affected only by the size of the input space and hardly by the code length.
Figure 7 shows the test results on the stability of the spatial pool. When SPL trains input data with a larger input space, the spatial pool can be more stable after the first round of training compared to the result of a small input space. This is because all minicolumns participate in the expression of inputs and the activation frequencies of the minicolumns are similar. Therefore, the probability of minicolumn synaptic adjustment is reduced and the spatial pool is relatively more stable. However, due to the boosting and inhibition rules of SPL, the spatial pool cannot achieve a completely stable state even with more training rounds. SPL_SN has established the mapping between the input and minicolumn set in the first round of training, and the subsequent training process does not change the learned content, so SPL_SN can keep the desired stability of the spatial pool.
Figure 8 shows the sparsity of the input representation. SPL_SN, like SPL, is able to express the input with a sparse minicolumn set according to the parameters.
Figure 9 shows the minicolumn utilization. For the large input space, all minicolumns in the spatial pool participate in the expression: each minicolumn of both algorithms participates in the expression of 273 inputs on average. As the number of training rounds increases, the standard deviation of the minicolumns' participation counts under SPL decreases slowly due to the boosting and inhibition rules, indicating that the minicolumns are utilized more evenly. For SPL_SN, this standard deviation is already close to 0 in the first round of training, which means the proposed algorithm utilizes the minicolumns efficiently from the beginning.
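The participation statistics described above can be computed as follows (a minimal sketch with toy data of our own): count, for each minicolumn, how many inputs it helps express, then take the mean and standard deviation over all minicolumns. A standard deviation near 0 means the load is spread evenly, which is the state SPL_SN reaches in its first round.

```python
from collections import Counter
from statistics import mean, pstdev

def utilization(mappings, num_columns):
    """mappings: iterable of active-minicolumn index sets, one per input.
    Returns (mean participation, population std dev) over all columns."""
    counts = Counter(c for cols in mappings for c in cols)
    per_column = [counts.get(i, 0) for i in range(num_columns)]
    return mean(per_column), pstdev(per_column)

# Toy example: 4 inputs expressed over 4 minicolumns.
even = [{0, 1}, {2, 3}, {0, 2}, {1, 3}]     # every column used twice
skewed = [{0, 1}, {0, 1}, {0, 2}, {0, 1}]   # column 0 is overloaded

m1, s1 = utilization(even, 4)    # even load: std dev is 0
m2, s2 = utilization(skewed, 4)  # same mean load, but larger std dev
assert s1 == 0.0 and s2 > s1
```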
4.3.3. The New York Taxi Passenger Flow Dataset
The input space is constructed from 5000 New York taxi passenger flow records. The largest value in this dataset is 39375, so we set the code length to 40000, which is sufficient to encode the whole input space. The two algorithms train on all data for 10 rounds, and after each round we evaluate the characteristics of the spatial pool from the training results.
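The choice of code length follows directly from the encoding: with values up to 39375, a code of length 40000 can represent every value. A minimal sketch of one such scalar encoding (a simple w-bit sliding window; the encoder actually used in the paper may differ) is:

```python
def encode_scalar(value, code_length=40000, w=21):
    """Encode a non-negative integer as a sparse binary code of the given
    length: a run of w active bits starting at the value's position,
    clamped so the window stays inside the code."""
    if not 0 <= value < code_length:
        raise ValueError("value outside the encodable range")
    start = min(value, code_length - w)
    return {start + i for i in range(w)}

code = encode_scalar(39375)  # the dataset's maximum still fits
assert len(code) == 21 and max(code) < 40000
```

With this scheme, nearby values share most of their active bits, which is the usual motivation for window-style scalar encoders in sparse distributed representations.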
Figure 10 compares the training time overhead of each round. The time overhead of SPL is much larger than that of SPL_SN in every round. This is because the maximum taxi passenger flow value, 39375, requires a long code to express the input; a longer code increases the number of synapses per minicolumn and the workload of computing overlap values. In SPL_SN, the input's compressed code is not affected by the code length, so the time overhead does not increase. These results are consistent with the earlier experiment that used an encoding of the same length.
Figure 11 shows the test results on the stability of the spatial pool. The result is the same as that of the artificial dataset with 5000 data points: SPL_SN keeps the desired stability of the spatial pool, while SPL becomes more stable with additional training rounds but ultimately does not reach the desired goal.
Figure 12 shows the sparsity of the input representation. SPL_SN, like SPL, is able to express the input with a sparse minicolumn set according to the parameters.
Figure 13 shows the minicolumn utilization. The observation is the same as for the artificial dataset with 5000 data points: both algorithms make efficient use of the minicolumns of the spatial pool, and SPL_SN does so faster.
5. Conclusions and Future Work
In this paper, we propose a fast spatial pool learning algorithm called SPL_SN. The spatial pool is reconstructed so that each minicolumn carries more information and state. We propose a minicolumn selection strategy based on the load-carrying degree and a synaptic adjustment algorithm based on compressed encoding. These improvements not only preserve the flexibility of the spatial pool but also greatly improve training efficiency and stability. In future research, we intend to construct an HTM-based model that can handle data with 3D (2D spatial + 1D temporal) features. We also intend to implement a content-based fast retrieval system to improve the retrieval speed of such applications.
Data Availability
The artificial data used to support the findings of this study are included within the article. The time-period statistics of New York taxi passengers can be found at https://www1.nyc.gov/site/tlc/about/tlctriprecorddata.page.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This research work was supported in part by the National Natural Science Foundation of China with grant no. 61806086, in part by the Project of National Key R&D Program of China (2018YFB0804204), and in part by Jiangsu Provincial Doctoral Innovation Fund (CXZZ13_0689).
Copyright
Copyright © 2021 Lei Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.