Computational Intelligence and Neuroscience


Research Article | Open Access


Qianqian Ren, Xiyu Liu, Minghe Sun, "Turing Universality of Weighted Spiking Neural P Systems with Anti-spikes", Computational Intelligence and Neuroscience, vol. 2020, Article ID 8892240, 10 pages, 2020. https://doi.org/10.1155/2020/8892240

Turing Universality of Weighted Spiking Neural P Systems with Anti-spikes

Academic Editor: Daniele Bibbo
Received: 17 Apr 2020
Revised: 19 Jul 2020
Accepted: 28 Aug 2020
Published: 17 Sep 2020

Abstract

Weighted spiking neural P systems with anti-spikes (AWSN P systems) are proposed by adding anti-spikes to spiking neural P systems with weighted synapses. Anti-spikes act as inhibitory signals in the communication between neurons. Both spikes and anti-spikes are used in the rule expressions. An illustrative example is given to show the working process of the proposed AWSN P systems. The Turing universality of the proposed P systems as number generating and accepting devices is proved. Finally, a universal AWSN P system with 34 neurons is proved to work as a function computing device using standard rules, and one with 30 neurons is proved to work as a number generator.

1. Introduction

Membrane computing, introduced by Păun [1], is a branch of nature-inspired computing. It provides a rich computational framework for biomolecular computing. Models of membrane computing are inspired by the structures and functions of living cells. The resulting models are distributed and parallel computing devices, usually called P systems [2]. There are three main classes of P systems: cell-like P systems, tissue-like P systems [3], and neural-like P systems [4]. Neural-like P systems, inspired by the way information is stored and processed in the human nervous system, combine neurons and membrane computing; the most widely known among them are spiking neural P systems (SN P systems) [5]. An SN P system consists of a group of neurons located at the nodes of a directed graph, and neurons send spikes to adjacent neurons through synapses, i.e., the links in the graph. The neurons contain only one type of object, the spike.

With different biological features and mathematical motivations, many variants of SN P systems have emerged. Some of them modify the synapses between neurons, such as SN P systems with rules on synapses [6], SN P systems with multiple channels [7], and SN P systems with thresholds [8], while others modify the communication rules, such as SN P systems with communication on request [9], SN P systems with polarizations [10], and SN P systems with inhibitory rules [11]. Further new variants of SN P systems are provided in [12, 13]. Recently, some new variants of neural-like P systems inspired by SN P systems have been proposed, such as those reported in [14]. In addition, many publications in the literature address the computational power of SN P systems as function computing devices and as number generating/accepting devices. Pǎun and Pǎun [18] proved the small universality of SN P systems. Pan et al. [19] proved the small universality of SN P systems with communication on request using 14 neurons, and more details are available in [20, 21].

Since SN P systems were proposed, many scholars have explored their applications. At present, there are many applications of SN P systems, such as image skeletonization [22, 23], optimization problems [24], fault diagnosis [25–27], and working models [28].

Inspired by the inhibition of communication between neurons, a new type of SN P systems was proposed by adding anti-spikes to SN P systems, called spiking neural P systems with anti-spikes (ASN P systems) [29]. In ASN P systems, each neuron contains multiple copies of the symbolic object a (spike) or ā (anti-spike) and processes information by spiking rules and forgetting rules. The annihilating rule aā → λ exists in each neuron and is always the first to be applied, meaning that a and ā cannot coexist in any neuron. Many researchers have proposed different ASN P systems, such as ASN P systems with multiple channels [30], ASN P systems with rules on synapses [31], and asynchronous ASN P systems [32]. The computational power of ASN P systems as number generating and accepting devices, as well as function computing devices, has also been proved [33].

In [34], SN P systems with weighted synapses were proposed. The weights represent the numbers of synapses between connected neurons. Based on the above, a new variant of SN P systems, called weighted spiking neural P systems with anti-spikes (AWSN P systems), is proposed in this work. In these systems, neurons receive spikes or anti-spikes from their connected neurons, and the numbers of spikes or anti-spikes they receive are determined by the weights of the synapses. Only one type of object, i.e., either spikes or anti-spikes, exists in each neuron, with standard rules as in SN P systems. These systems use spiking rules of the form E/a^c → a^p; d (called standard rules if p = 1 and extended rules otherwise), where E is a regular expression over spikes, c and p are positive integers, and d ≥ 0 is an integer delay. The meaning of a spiking rule is that c spikes are consumed and p spikes are generated after d time periods. SN P systems also have forgetting rules of the form a^s → λ, where s is a positive integer. The meaning of a forgetting rule is that s spikes are dissolved, i.e., removed from a neuron.

The rest of this article is organized as follows. In Section 2, the basic knowledge of a register machine is given. The definition of AWSN P systems is given, and an example is presented to show their working process in Section 3. By simulating register machines, the computational power of AWSN P systems is proved as natural number generating devices and accepting devices in Section 4. In Section 5, the universality of these systems as function computing devices and number generating devices is obtained by using 34 neurons and 30 neurons, respectively. Remarks and future research directions are given in Section 6.

2. Prerequisites

The universality of the systems is proved by simulating a register machine. A register machine is a construct M = (m, H, l0, lh, I), where m is the number of registers, H is the set of instruction labels, l0 and lh are the starting and halting labels, and I is the set of instructions of the following forms:
(1) li : (ADD(r), lj, lk) (add 1 to register r and then go to instruction label lj or lk, chosen nondeterministically)
(2) li : (SUB(r), lj, lk) (if register r is not empty, then subtract 1 from it and go to lj; otherwise, go to lk)
(3) lh : HALT (the halting instruction)

A register machine has two modes: a generating mode and an accepting mode. In the generating mode, a register machine M generates a set of numbers, denoted by N(M), and works in the following way. All registers start empty, and M starts the computational process from the instruction with label l0. When M reaches lh, the computation halts with the result stored in register 1. If the computation does not stop, no number is generated. A set of numbers can also be accepted by a register machine, denoted N(M), in the accepting mode. Only the input register is nonempty at the beginning; the machine then works in a way similar to the generating mode. As register machines are universal in the accepting mode even when deterministic, the ADD instructions can be written as li : (ADD(r), lj). Register machines compute exactly the sets of Turing computable numbers, represented by NRE (see, e.g., [6]).
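The ADD/SUB/HALT semantics above can be sketched as a small interpreter. The instruction encoding, the `choose` hook, and the two-instruction demo program below are illustrative assumptions of mine, not constructs from the paper:

```python
def run_register_machine(program, start, halt, regs, choose=min):
    """program maps a label to ('ADD', r, (lj, lk)) or ('SUB', r, lj, lk).
    `choose` resolves the nondeterministic ADD branch (deterministic here)."""
    label = start
    while label != halt:
        op = program[label]
        if op[0] == 'ADD':
            _, r, branches = op
            regs[r] += 1                  # add 1 to register r
            label = choose(branches)      # go to lj or lk
        else:
            _, r, lj, lk = op
            if regs[r] > 0:               # register not empty: subtract, go to lj
                regs[r] -= 1
                label = lj
            else:                         # register empty: go to lk
                label = lk
    return regs

# Hypothetical two-instruction program: move the contents of register 2 into register 1.
prog = {
    0: ('SUB', 2, 1, 2),    # l0: (SUB(2), l1, lh)
    1: ('ADD', 1, (0, 0)),  # l1: (ADD(1), l0, l0)
}
print(run_register_machine(prog, start=0, halt=2, regs=[0, 0, 5]))  # [0, 5, 0]
```

In the generating mode one would start from empty registers and read the result off register 1 at the halting label; here a deterministic `choose` stands in for the nondeterministic branch.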

Generally, a universal register machine is used to compute Turing computable functions for the purpose of analyzing the computing power of a system. Such a universal register machine Mu was proposed by Minsky [35]: for any recursive function φ over the natural numbers, Mu can compute φ, and Mu includes 8 registers and 23 instructions. The final result of Mu is placed in register 0. Since the result is stored in register 0 and the output register cannot be involved in any SUB instruction, a register 8 is added and used to store the result without being subject to any SUB instruction, together with the instructions that move the result into it; these are the instructions that Mu itself does not have. In general, in order to analyze the universality of a system, i.e., to verify that the system is equivalent to a Turing machine, the modified universal register machine shown in Figure 1, consisting of 9 registers and 25 instructions, is simulated by the system.

3. Weighted Spiking Neural P Systems with Anti-spikes

3.1. Definition

The proposed AWSN P system of degree m ≥ 1 is a construct of the form Π = (O, σ1, …, σm, syn, in, out), where:
(1) O = {a, ā} is the alphabet, where the symbol a is a spike and ā is an anti-spike.
(2) σ1, …, σm are neurons of the form σi = (ni, Ri) for 1 ≤ i ≤ m, where ni ≥ 0 is the initial number of spikes stored in σi, and Ri is the set of rules used in σi, of the following forms:
(a) Spiking rules E/b^c → b^p; d, where E is a regular expression over a or ā, b ∈ {a, ā}, c ≥ 1, p ≥ 1, and d ≥ 0 is the number of time units of delay
(b) Forgetting rules b^s → λ, where b ∈ {a, ā} and s ≥ 1
(3) syn ⊆ {1, 2, …, m} × {1, 2, …, m} × W represents the synapses, where W is the set of weights; for any (i, j, w) ∈ syn, i ≠ j and w ≥ 1.
(4) in and out indicate the input neuron and the output neuron.

In an AWSN P system, each neuron has one or more spiking rules and some also have forgetting rules, and either spikes or anti-spikes, but never both, exist in each neuron. If there are k spikes or anti-spikes in neuron σi, b^k belongs to the language of E, and k ≥ c, the spiking rule E/b^c → b^p; d can be applied. If the language of E is exactly {b^c}, the spiking rule is called pure, and the rule can be written as b^c → b^p; d. The spiking rule is interpreted as follows: c spikes or anti-spikes are removed from neuron σi and the neuron fires; p spikes will be generated after d time periods (as usual in membrane computing, all neurons in a system work in parallel under an assumed global clock), and w·p spikes will be sent to each neuron σj with (i, j, w) ∈ syn. If a spiking rule of neuron σi with delay d is applied at time t, the neuron is closed from time t until time t + d and will not receive any spikes or anti-spikes during that interval; the neuron opens again at time t + d. If d = 0, the spikes are emitted immediately, which means the neuron keeps receiving spikes or anti-spikes from upstream neurons without delay.
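The timing semantics above, a delayed rule closes the neuron, incoming spikes are dropped while closed, and the emission happens d steps later, can be sketched for a single neuron. The class below is my own toy model; rule tuples, method names, and the regex-over-'a' encoding are assumptions, not the paper's notation:

```python
import re

class Neuron:
    """Toy model of one neuron applying rules E/a^c -> a^p; d.
    Rules are tuples (E, c, p, d) with E a regex over the letter 'a'."""
    def __init__(self, spikes, rules):
        self.spikes = spikes
        self.rules = rules
        self.closed_until = 0   # neuron is closed (refractory) before this time
        self.pending = None     # (emit_time, p) while a delayed rule is running

    def receive(self, t, n):
        if t >= self.closed_until:   # a closed neuron drops incoming spikes
            self.spikes += n

    def step(self, t):
        """Advance one tick; return the number of spikes emitted at time t."""
        emitted = 0
        if self.pending and self.pending[0] == t:
            emitted = self.pending[1]       # delayed emission happens now
            self.pending = None
        if t >= self.closed_until and self.pending is None:
            for E, c, p, d in self.rules:
                if self.spikes >= c and re.fullmatch(E, 'a' * self.spikes):
                    self.spikes -= c
                    if d == 0:
                        emitted += p                 # d = 0: emit immediately
                    else:
                        self.closed_until = t + d    # close until t + d
                        self.pending = (t + d, p)
                    break
        return emitted

n = Neuron(2, [('aa', 2, 1, 2)])
print([n.step(t) for t in range(3)])  # [0, 0, 1]: the spike appears after delay 2
```

Whether a reopening neuron can receive in the same step it emits is a convention choice; here it can, which matches "the neuron will open at time t + d".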

If the forgetting rules in the neurons are used, then the spikes or anti-spikes are removed from the neurons. Spiking rules and forgetting rules must be applied if the conditions are met, but the choice of rules is nondeterministic if the conditions of multiple rules are met in a neuron. However, the annihilating rule must be applied first in each neuron.
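The annihilation convention can be captured with a signed count per neuron, positive for spikes and negative for anti-spikes (the same convention the paper's tables use). The helper name is mine:

```python
def deliver(contents, incoming):
    """Apply the annihilating rule immediately on delivery: opposite signs
    cancel pairwise, so a neuron never holds spikes and anti-spikes at once."""
    return contents + incoming

print(deliver(2, -3))   # -1: two a/ā pairs annihilate, one anti-spike remains
print(deliver(-2, 2))   # 0: complete annihilation
```

Because annihilation has priority, any spiking or forgetting rule only ever sees the residue of this sum.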

Through these rules, transitions between configurations occur. Any sequence of transitions starting from the initial configuration is called a computation. A computation halts when it reaches a configuration in which all neurons are open and no rule can be applied. To compute a function, a natural number is introduced into the system by reading a binary sequence z from the environment. That is to say, the input neuron of Π receives a spike in a step corresponding to a 1 in z, and receives nothing in a step corresponding to a 0. The input neuron receives exactly two spikes and will not receive any more spikes after the second one. The result of the computation is encoded in the distance between two output spikes, and the computation halts with exactly two output spikes, immediately after emitting the second one. Hence, the output is a spike train of the form 0^b 1 0^(r−1) 1, for some b ≥ 0, encoding the number r. The system outputs no spike for a nonspecified number of steps from the beginning of the computation until the first output spike.
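The output convention, the result is the distance between the first two 1s in the output spike train, can be made concrete with a small decoder (the helper name is mine, the encoding is the one just described):

```python
def decode_output(train):
    """Return t2 - t1 for the first two spikes (1s) in the 0/1 train,
    or None if fewer than two spikes occur."""
    times = [t for t, bit in enumerate(train) if bit == 1]
    return times[1] - times[0] if len(times) >= 2 else None

# The train 0 0 1 0 0 0 1, i.e. 0^2 1 0^(4-1) 1, encodes the number 4.
print(decode_output([0, 0, 1, 0, 0, 0, 1]))  # 4
```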

Let Ngen(Π) and Nacc(Π) be the sets of numbers generated and accepted by Π, respectively. Let Nα AWSNP(m, n), with α ∈ {gen, acc}, denote the family of sets of numbers generated or accepted by AWSN P systems with at most m neurons and at most n rules in a neuron.

3.2. An Illustrative Example

An example, shown graphically in Figure 2, is given to explain the working process of an AWSN P system. The results of each step are shown in Table 1. A positive number in the table represents the number of spikes in the neuron, and a negative number represents the number of anti-spikes. For example, 2 means there are two spikes, and −2 means there are two anti-spikes.


Step     Neuron 1   Neuron 2   Neuron 3   Neuron 4
t            2          2          0          0
t + 1        1         −1         −3          0
t + 2        2          0         −3          2
t + 3        1         −3         −6          2
t + 4        1         −3          0          3 (fires)

The system has four neurons, as shown in Figure 2. Assume that each of the first two neurons contains two spikes and that the other two neurons are empty. Suppose that the rule in the first neuron is applied at time t, consuming one spike, generating one anti-spike, and sending three anti-spikes to each of the second and third neurons, because the weight of the synapses between these neurons is 3. In the second neuron, two anti-spikes together with the two spikes stored there disappear immediately, because the annihilating rule is applied first, leaving one anti-spike. The rule in the second neuron then generates two spikes sent to the first neuron and one spike sent to the fourth neuron, so the rule in the first neuron can be applied again. The third neuron receives six anti-spikes in total from the first neuron, which applies its rule twice, so the rule in the third neuron fires. The fourth neuron collects three spikes (two of them from the second neuron) and sends one spike to the environment.

4. Computational Models

4.1. Generating Mode

Theorem 1. AWSN P systems as number generating devices are Turing universal; that is, they generate all sets of numbers in NRE.

Proof. A register machine M = (m, H, l0, lh, I) is considered. M is simulated by an AWSN P system Π consisting of three modules, i.e., modules ADD, SUB, and OUTPUT.
In the simulation process, a register r of M corresponds to a neuron σr, and the number contained in register r is the number of spikes contained in neuron σr. An instruction li in M corresponds to a neuron σli. Furthermore, the modules require some other neurons in addition to the neurons σr and σli. The simulation of an ADD or SUB instruction begins at neuron σli. Modules ADD and SUB are simulated by sending spikes to σlj and σlk as the rules in neuron σli fire. In an ADD instruction, neuron σli sends a spike to either σlj or σlk, the choice being nondeterministic. When a spike arrives at neuron σlh, the computation in M stops, and the module OUTPUT begins to send the result stored in register 1 to the environment. At the beginning of the simulation, neuron σl0 has one spike and all other neurons are empty.

(a) Module ADD (shown in Figure 3). Assume that an ADD instruction li : (ADD(r), lj, lk) has to be simulated at time t, one spike is in neuron σli, and its rule can be applied. Neuron σli sends one spike to neuron σr and to two auxiliary neurons. The two rules in one of the auxiliary neurons are chosen nondeterministically for use at time t + 1, so there are two cases to consider depending on this choice. If the first rule is chosen, the auxiliary neuron sends a spike to neuron σlj; thus σlj will generate one spike by applying its rule. If the second rule is chosen, the auxiliary neuron sends an anti-spike to each of neurons σlj and σlk; thus σlk will fire and generate one spike by applying its rule, while the rule in neuron σlj cannot be applied because of the annihilating rule, so σlj remains empty. After one spike is added to σr, register r is increased by 1 and instruction lj or lk is activated. Therefore, the ADD instruction is simulated correctly by the module ADD.
(b) Module SUB (shown in Figure 4). Suppose that neuron σli has one spike. After its rule is enabled at time t, each of the neurons σlj and σlk receives two anti-spikes, and neuron σr receives one anti-spike. The rest of the computation divides into two cases according to the number of spikes contained in σr.
(1) Neuron σr has at least one spike. Neuron σr receives one anti-spike from neuron σli, but this anti-spike disappears immediately by annihilating one spike in σr, so register r is decreased by 1 and the firing rule in neuron σr is not applied at this time. At the same time, an auxiliary neuron opens to receive one anti-spike from σli; its rule then fires, generating one spike, which is delivered through the weighted synapses as three spikes to neuron σlj and two spikes to neuron σlk. In neuron σlj, two of the three spikes are annihilated with the two anti-spikes received from σli, and one spike is left, so σlj is activated. Simultaneously, the same happens in neuron σlk, i.e., its two spikes are annihilated immediately and no spike is left in σlk.
(2) Neuron σr has no spike. Neuron σr keeps the anti-spike received from σli, and its rule can be applied at time t + 1. Simultaneously, another auxiliary neuron gets one anti-spike from σli; hence, one spike is annihilated in the next step, and the rule in the first auxiliary neuron cannot be applied because it holds no anti-spike.
At the same time, neuron σlk receives five spikes, among which two spikes are used to annihilate the two anti-spikes received from neuron σli; thus the rule in σlk can be applied. Another neuron receives one spike that annihilates one anti-spike received from neuron σli, and then the rule in σlk is enabled to generate one spike.

Therefore, the SUB instruction can be simulated correctly by module SUB.
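The decision logic at the heart of the SUB module can be abstracted away from the neuron wiring: one anti-spike is sent to the register neuron, and whether it is annihilated or survives selects the branch. This is my abstract view, not the exact module of Figure 4:

```python
def simulate_sub(register_spikes):
    """Abstract outcome of li: (SUB(r), lj, lk): send one anti-spike to the
    register neuron; annihilation means 'not empty' (branch lj), survival
    means 'empty' (branch lk). Returns (new_register_value, next_label)."""
    if register_spikes > 0:
        return register_spikes - 1, 'lj'   # anti-spike annihilated one spike
    return 0, 'lk'                         # anti-spike survived: register empty

print(simulate_sub(3))  # (2, 'lj')
print(simulate_sub(0))  # (0, 'lk')
```

The auxiliary neurons and weighted synapses in the module exist precisely to route spikes so that σlj or σlk ends up with exactly one surviving spike according to this outcome.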

(c) Module OUTPUT (shown in Figure 5). Assume that neuron σlh of system Π accumulates one spike at time t, and neuron σ1 holds 2n spikes, where n is the number stored in register 1 of M. When the rule in σlh fires at time t, neuron σlh sends one spike to σ1. At this moment, σ1 has an odd number of spikes and its rule fires. At time t + 1, σ1 sends one spike each to the output neuron and to an auxiliary neuron. Thus, the output neuron has one spike, an odd number. At time t + 2, the output neuron fires, sending one spike to the environment. At the same time, the rules in σ1 and the auxiliary neuron are applied, and each sends one spike to the other. This repeats for n steps, until neuron σ1 has no spike left and the number of spikes in the output neuron is even. At that point, the rule in the auxiliary neuron stops being applied while the neuron still holds one spike. The output neuron receives one more spike in the next step, making its number of spikes odd again, and fires a second time. Therefore, the number computed by the AWSN P system is the difference between the two steps at which the output neuron fires, that is, n. The module OUTPUT works correctly.
4.2. Accepting Mode

Theorem 2. AWSN P systems as number accepting devices are Turing universal; that is, they accept all sets of numbers in NRE.

Proof. The proof of this theorem is similar to that of Theorem 1. A register machine M is simulated by an AWSN P system consisting of three modules, ADD, SUB, and INPUT. Module SUB is the same as that shown in Figure 4.

(1) Module ADD (shown in Figure 6). Assume that a deterministic ADD instruction li : (ADD(r), lj) has to be simulated at time t. Suppose that one spike is in neuron σli; then its rule can be applied, and neuron σli sends one spike to neurons σr and σlj. In this way, the number of spikes in σr increases by 1 and the instruction lj is activated. Hence, the ADD instruction is simulated correctly by this module.
(2) Module INPUT (shown in Figure 7). Module INPUT works as follows. Its function is to read the input spike train and compute the number n encoded as the time between receiving the two spikes. When the input neuron receives the first spike at time t, three auxiliary neurons receive one spike each, and the rules in two of them can be applied at time t + 1. At time t + 2, neuron σ1 gets one spike, and, at the same time, the two auxiliary neurons each receive one spike from the other. Therefore, in the following time periods, the rules in these two neurons continue to be applied, and during this period σ1 keeps accumulating spikes. When the input neuron receives the second spike at step t + n, each of the two auxiliary neurons receives one more spike at step t + n + 1, and both then hold two spikes. In this way, the two neurons cannot fire to send any further spikes to neuron σ1. In the whole process, neuron σ1 receives n spikes, i.e., the number n is stored in register 1.
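The INPUT loop above, two auxiliary neurons ping-pong spikes and feed register 1 once per step until the second input spike arrives, can be sketched as follows. The loop abstraction and the exact step at which loading starts are my timing assumptions, chosen so that the register ends up with the distance between the two spikes:

```python
def input_module(spike_train):
    """Read a 0/1 train with two 1s; return the register-1 contents,
    i.e. the number of steps between the two input spikes."""
    reg, loading = 0, False
    for bit in spike_train:
        if bit == 1 and not loading:
            loading = True      # first spike starts the auxiliary loop
            reg += 1
        elif bit == 1:
            break               # second spike stops both auxiliaries
        elif loading:
            reg += 1            # one spike reaches register 1 per loop step
    return reg

print(input_module([1, 0, 0, 0, 1]))  # 4 = distance between the two spikes
```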

From the descriptions of the three modules above, it is clear that the system can correctly simulate the register machine. The proof is complete.

5. A Small Universal AWSN P System

5.1. The Universality as Function Computing Devices

Theorem 3. There is a universal AWSN P system with 34 neurons that can be used as a function computing device.

Proof. A general framework of a system Π used to simulate a universal register machine is shown in Figure 8; it is a universal AWSN P system. Π consists of 8 modules: ADD, SUB, ADD-ADD, SUB-ADD-1, SUB-ADD-2, SUB-SUB, INPUT, and OUTPUT. The modules SUB, OUTPUT, and ADD are the same as those in Figures 4–6, respectively. The module INPUT is shown in Figure 9.
Module INPUT works as follows: when the input neuron gets a spike from the environment, its rule fires, one spike is sent to each of three auxiliary neurons, and two spikes are sent to a fourth. Then, the rule in one auxiliary neuron sends one spike to two of the others. At the same time, another auxiliary neuron fires and sends one spike and two spikes, respectively, to two further neurons. Up to this point, three spikes have been sent to neuron σ1. Therefore, before the input neuron receives more spikes from the environment, two auxiliary neurons receive one spike from each other in every time period, and neuron σ1 keeps receiving spikes.
When the input neuron receives the second spike, three of the auxiliary neurons get one spike each and the fourth gets two spikes. One auxiliary neuron holds four spikes at this moment, and its rule can be used to send two spikes to a neighbouring neuron, which then holds six spikes, so its rule is used to produce one spike and pass it on. In this way, two auxiliary neurons receive one spike from each other in every step before the input neuron receives the third spike from the environment, and a register neuron accumulates its spikes in the meantime. When the input neuron receives the third spike, three of the auxiliary neurons get one spike each, while the fourth receives two spikes. As a result, one neuron has an odd number of spikes and its rule cannot be applied. At this point, another neuron holds three spikes, and its rule fires, generating one spike and sending it to neuron σl0. In this way, the instruction l0 can be simulated in the next step.
As with the proofs of Theorems 1 and 2, the system uses the following numbers of neurons:
9 neurons for the 9 registers
25 neurons for the 25 labels
5 neurons for the module INPUT
1 neuron in each of the 14 SUB instructions, 14 in total
2 neurons for the module OUTPUT
Therefore, 55 neurons are used in total.
The number of neurons can be decreased by exploiting relationships between some instructions of the register machine. The following modules are given to reduce the number of neurons used in the computation process.
The SUB-ADD instruction pairs can be divided into two cases, depending on the number of spikes in register r (the register involved in the SUB instruction). Modules SUB-ADD-1 and SUB-ADD-2, shown in Figures 10 and 11, can simulate the SUB and ADD instructions sequentially. The working process of module SUB-ADD-1 is similar to that of module SUB. When the rule in neuron σli is applied and σr contains at least one spike, one branch neuron cannot fire; the other branch neuron fires after receiving one spike and then sends one spike onward. At the end of this computation, two of the involved neurons hold one spike each and a third neuron is empty. When σr is empty, those two neurons are empty instead and the third neuron contains one spike. Thus, each pair of SUB-ADD instructions in which the ADD instruction directly follows the SUB instruction can share a common neuron, and there are 6 such pairs in the simulated register machine. By using this module, 6 neurons can be saved. In the same way, the module shown in Figure 10 can simulate two further instructions, so one more neuron can be saved.
The module ADD-ADD shown in Figure 12 can simulate two consecutive ADD instructions. In this way, one more neuron can be saved.
Two SUB instructions can share a common neuron when they involve different registers, as shown in Figure 13. Assume that the simulation of one of the SUB instructions starts at time t. When neuron σli gets a spike, its rule fires and, at time t + 1, sends one anti-spike to σr and two anti-spikes each to σlj and σlk. The shared neuron receives an anti-spike at time t + 1. The remaining neurons work in the same way as those in module SUB shown in Figure 4. The shared neuron sends three spikes to one label neuron and two spikes to the other, where forgetting rules are applied. Thus, the instruction is correctly simulated by this module. The process when starting from the other SUB instruction is similar to the one described above.
Two SUB modules dealing with the same register, as shown in Figure 14, can also be proved to work correctly in a similar way. Assume that one of the SUB instructions is simulated and one spike is contained in neuron σli. The process divides into two cases according to the number of spikes in neuron σr. When σr has at least one spike, the working process of the system is similar to that of module SUB. When σr is empty, the rule in one auxiliary neuron cannot be applied; three of the involved neurons are then empty while one contains one spike. All SUB instructions can be simulated correctly by the module; therefore, all SUB modules can share a common neuron.
From the above description of the numbers of neurons saved, the system uses the following:
9 neurons for the 9 registers
17 neurons for the 17 labels
5 neurons for the module INPUT
1 neuron shared by all the 14 SUB instructions
2 neurons for the module OUTPUT
A total of 21 neurons can be saved, and the number of neurons in the system decreases from 55 to 34. The proof is complete.

5.2. The Small Universality as Number Generator

A small universal AWSN P system working as a number generator is considered. The process of simulating a universal number generator is similar to that of simulating a general function computing device; the difference between them lies in the module INPUT. The system starts by reading the input spike train from the environment and ends when the corresponding register neuron has received the encoded spikes; the system is thereby loaded with an arbitrary number. A number is also output at the same time, as an output spike train, with one value placed in register 1 and one in register 2. Since the output module is not required, register 8 is not required either, so the register machine Mu itself is simulated. If the computation in Mu halts, the computation of the system also halts.

Furthermore, modules INPUT and OUTPUT can be combined. The resulting module INPUT-OUTPUT is shown in Figure 15, and an example is used to demonstrate its feasibility. One label can also be saved because of module INPUT-OUTPUT. The string 101 is used as input in the example. The computation follows the working processes of the modules described above, and the results of each step are shown in Table 2.


Step     Neuron contents (positive entries are spikes, negative entries are anti-spikes)
t        1  0  0  0  2  0  0  0  0          0
t + 1    0  1  1  0  2  0  0  0  0          0
t + 2    1  1  1  1  2  6  1  0  2 (fires)  0
t + 3    0  2  2  1  1  8  2  1  3          0
t + 4    0  2  2  0  0  4  2  2  3          0
t + 5    0  2  2  0  −1 2  2  3  3          0
t + 6    0  2  2  0  0  1  2  4  4 (fires)  1

Assume that the input neuron has one spike at time t and that one auxiliary neuron has two spikes. At time t + 1, two neurons receive one spike each. Given the structure shown in Figure 15, two auxiliary neurons receive one spike from each other at every step until they stop firing, at which point the input neuron receives the second spike. Each of two neurons then receives one spike, one neuron receives six spikes, and another receives two spikes, so that two neurons can fire. At the next step, both of the first two neurons hold two spikes but cannot fire again. One neuron receives six spikes and two anti-spikes which, together with the four spikes it already contains, leave it with eight spikes. In addition, another neuron receives two spikes again, so it then contains three spikes. One neuron keeps only one spike, because a received anti-spike annihilates the other. At the next step, that neuron becomes empty after receiving a further anti-spike. Another neuron receives two anti-spikes, leaving four spikes, an even number, so its rule can fire. At the following step, it receives one anti-spike and fires; one neuron consumes two spikes and can still fire. Finally, two neurons receive one spike each from a common neighbour, so one of them holds four spikes, meeting the condition for firing, and the other also gets one spike.

The input string is read through the input neuron, and g(x) spikes are stored in register 1 when the reading stops. At the same time, the output number ((t + 6) − (t + 2) = 4) is the same as the number stored in register 2. Neuron σl0 is then activated and starts the simulation of the register machine through modules ADD and SUB. Therefore, the module INPUT-OUTPUT works correctly.

Therefore, this system contains the following:
8 neurons for the 8 registers
14 neurons for the 14 labels (one label is saved by module INPUT-OUTPUT; 8 neurons are saved by modules SUB-ADD and ADD-ADD)
1 neuron for the 13 SUB instructions
7 neurons in the module INPUT-OUTPUT

There is thus a universal AWSN P system with 30 neurons that can be used as a number generator.

6. Conclusions

In this work, a variant of SN P systems, called AWSN P systems, is proposed. Because of the use of anti-spikes, the proposed systems are biologically more plausible than SN P systems, modeling inhibitory spikes in the communication between neurons. An example is used to illustrate the working process of the system. The computational universality is then proved in the generating mode and in the accepting mode, respectively. Finally, the Turing universality of small AWSN P systems is proved: a function computing device can be realized with 34 neurons. Compared with the small universal SN P system with anti-spikes introduced by Song et al. [17], the AWSN P system uses 13 fewer neurons; compared with the SN P systems with weighted synapses introduced by Pan et al. [34], it uses 4 fewer neurons. The small universality of the AWSN P system as a number generator is established with 30 neurons; compared with [34], the proposed system uses 6 fewer neurons.

The computational universality has been proved for AWSN P systems with standard rules. Three types of time-dependent spiking rules and one type of forgetting rule are used. There are several future research directions. One direction is to investigate whether the computational power remains the same if only one or two types of spiking rules are used, or if the forgetting rules are not used, and whether AWSN P systems can perform as well or better when the spiking rules are not time dependent. These open problems certainly need further study. Another future research direction is the application of the proposed systems. There have been studies such as using SN P systems with a learning function for letter recognition [36]. If a learning function were introduced into AWSN P systems, they might perform better in letter recognition. Because the use of anti-spikes improves the ability of AWSN P systems to represent and process information, they may solve more practical problems, which still requires further research.

Data Availability

No datasets were used in this article.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This research was funded by the National Natural Science Foundation of China (nos. 61876101, 61802234, and 61806114), the Social Science Fund Project of Shandong (16BGLJ06 and 11CGLJ22), China Postdoctoral Science Foundation Funded Project (2017M612339 and 2018M642695), Natural Science Foundation of Shandong Province (ZR2019QF007), China Postdoctoral Special Funding Project (2019T120607), and Youth Fund for Humanities and Social Sciences, Ministry of Education (19YJCZH244).

References

  1. G. Păun, “Computing with membranes,” Journal of Computer & System Sciences, vol. 61, pp. 108–143, 2000. View at: Google Scholar
  2. G. Păun, G. Rozenberg, and A. Salomaa, The Oxford Handbook of Membrane Computing, Oxford University Press, Oxford, UK, 2010.
  3. C. Martin-Vide, J. Pazos, G. Păun, and A. Rodríguez-Patón, “Tissue P systems,” Theoretical Computer Science, vol. 296, pp. 295–326, 2003. View at: Google Scholar
  4. G. Păun, Membrane Computing: An Introduction, Springer-Verlag, Berlin, Germany, 2012.
  5. M. Ionescu, G. Păun, and T. Yokomori, “Spiking neural P systems,” Fundamenta Informaticae, vol. 71, pp. 279–308, 2006. View at: Google Scholar
  6. T. Song, L. Pan, and G. Păun, “Spiking neural P systems with rules on synapses,” Theoretical Computer Science, vol. 529, pp. 888–895, 2014. View at: Publisher Site | Google Scholar
  7. H. Peng, J. Yang, J. Wang et al., “Spiking neural P systems with multiple channels,” Neural Networks, vol. 95, pp. 66–71, 2017. View at: Publisher Site | Google Scholar
  8. X. Zeng, X. Zhang, T. Song, and L. Pan, “Spiking neural P systems with thresholds,” Neural Computation, vol. 26, no. 7, pp. 1340–1361, 2014. View at: Publisher Site | Google Scholar
  9. L. Pan, G. Păun, G. Zhang, and F. Neri, “Spiking neural P systems with communication on request,” International Journal of Neural Systems, vol. 27, pp. 1–17, 2017. View at: Publisher Site | Google Scholar
  10. T. Wu, A. Paun, Z. Zhang, and L. Pan, “Spiking neural P systems with polarizations,” IEEE Transactions on Neural Networks and Learning Systems, vol. 29, no. 8, pp. 3349–3360, 2018. View at: Google Scholar
  11. H. Peng, B. Li, J. Wang et al., “Spiking neural P systems with inhibitory rules,” Knowledge-Based Systems, vol. 188, pp. 1–10, 2020. View at: Publisher Site | Google Scholar
  12. A. Alhazov, R. Freund, S. Ivanov, M. Oswald, and S. Verlan, “Extended spiking neural P systems with white hole rules and their red-green variants,” Natural Computing, vol. 17, no. 2, pp. 297–310, 2017. View at: Publisher Site | Google Scholar
  13. T. Wu, Z. Zhang, G. Păun, and L. Pan, “Cell-like spiking neural P systems,” Theoretical Computer Science, vol. 623, pp. 180–189, 2016. View at: Publisher Site | Google Scholar
  14. H. Peng, T. Bao, X. Luo et al., “Dendrite P systems,” Neural Networks, vol. 127, pp. 110–120, 2020. View at: Publisher Site | Google Scholar
  15. H. Peng, J. Wang, M. J. Pérez-Jiménez, and A. Riscos-Núñez, “Dynamic threshold neural P systems,” Knowledge-Based Systems, vol. 163, pp. 875–884, 2019. View at: Google Scholar
  16. H. Peng and J. Wang, “Coupled neural P systems,” IEEE Transactions on Neural Networks and Learning Systems, vol. 30, no. 6, pp. 1672–1682, 2019. View at: Publisher Site | Google Scholar
  17. T. Song, Y. Jiang, X. Shi, and X. Zeng, “Small universal spiking neural P systems with anti-spikes,” Journal of Computational and Theoretical Nanoscience, vol. 10, no. 4, pp. 999–1006, 2013. View at: Publisher Site | Google Scholar
  18. A. Pǎun and G. Pǎun, “Small universal spiking neural P systems,” BioSystems, vol. 90, pp. 48–60, 2007. View at: ǎun &author=G. Pǎun&publication_year=2007" target="_blank">Google Scholar
  19. T. Pan, X. Shi, Z. Zhang, and F. Xu, “A small universal spiking neural P system with communication on request,” Neurocomputing, vol. 275, pp. 1622–1628, 2017. View at: Google Scholar
  20. X. Zhang, X. Zeng, and L. Pan, “Smaller universal spiking neural P systems,” Fundamenta Informaticae, vol. 87, pp. 117–136, 2008. View at: Google Scholar
  21. T. Wu, F. Bȋlbȋe, A. Păun, L. Pan, and F. Neri, “Simplified and yet turing universal spiking neural P systems with communication on request,” International Journal of Neural Systems, vol. 28, pp. 1–19, 2018. View at: Publisher Site | Google Scholar
  22. D. Díaz-Pernil, F. Peña-Cantillana, and M. A. Gutiérrez-Naranjo, “A parallel algorithm for skeletonizing images by using spiking neural P systems,” Neurocomputing, vol. 115, pp. 81–91, 2013. View at: Publisher Site | Google Scholar
  23. T. Song, S. Pang, S. Hao, A. Rodríguez-Patón, and P. Zheng, “A parallel image skeletonizing method using spiking neural P systems with weights,” Neural Processing Letters, vol. 115, 2018. View at: Google Scholar
  24. G. Zhang, H. Rong, F. Neri, and M. J. Pérez-Jiménez, “An optimization spiking neural P system for approximately solving combinatorial optimization problems,” International Journal of Neural Systems, vol. 24, 2014. View at: Publisher Site | Google Scholar
  25. Y. Yahya, A. Qian, and A. Yahya, “Power transformer fault diagnosis using fuzzy reasoning spiking neural P systems,” Journal of Intelligent Learning Systems and Applications, vol. 8, no. 4, pp. 77–91, 2016. View at: Publisher Site | Google Scholar
  26. T. Wang, J. Zhao, G. Zhang, Z. He, and J. Wang, “Fault diagnosis of electric power systems based on fuzzy reasoning spiking neural P systems,” IEEE Transactions on Power Systems, vol. 30, pp. 1182–1194, 2014. View at: Google Scholar
  27. T. Wang, X. Wei, J. Wang et al., “A weighted corrective fuzzy reasoning spiking neural P system for fault diagnosis in power systems with variable topologies,” Engineering Applications of Artificial Intelligence, vol. 92, 2020. View at: Publisher Site | Google Scholar
  28. T. Song, X. Zeng, P. Zheng, M. Jiang, and A. Rodríguez-Patòn, “A parallel workflow pattern modeling using spiking neural P systems with colored spikes,” IEEE Transactions on Nanobioscience, vol. 17, no. 4, pp. 474–484, 2018. View at: Publisher Site | Google Scholar
  29. L. Pan and G. Păun, “Spiking neural P systems with anti-spikes,” International Journal of Computers Communications & Control, vol. 4, no. 3, pp. 273–282, 2009. View at: Publisher Site | Google Scholar
  30. X. Song, J. Wang, H. Peng et al., “Spiking neural P systems with multiple channels and anti-spikes,” Biosystems, vol. 169-170, no. 170, pp. 13–19, 2018. View at: Publisher Site | Google Scholar
  31. T. Wu, Y. Wang, S. Jiang, Y. Su, and X. Shi, “Spiking neural P systems with rules on synapses and anti-spikes,” Theoretical Computer Science, vol. 724, pp. 13–27, 2018. View at: Publisher Site | Google Scholar
  32. T. Song, X. Liu, and X. Zeng, “Asynchronous spiking neural P systems with anti-spikes,” Kluwer Academic Publishers, vol. 42, pp. 633–647, 2015. View at: Google Scholar
  33. V. P. Metta and A. Kelemenová, “Universality of spiking neural P systems with anti-spikes,” in Proceedings of the International conference on theory & application of models and computation, pp. 352–365, New York, NY. USA, 2014. View at: Google Scholar
  34. L. Pan, X. Zeng, X. Zhang, and Y. Jiang, “Spiking neural P systems with weighted synapses,” Neural Processing Letters, vol. 35, no. 1, pp. 13–27, 2012. View at: Publisher Site | Google Scholar
  35. M. Minsky, Computation: Finite and Infinite Machines, Prentice-Hall, New Jersey, NJ, USA, 1967.
  36. T. Song, L. Pan, T. Wu, P. Zheng, M. L. D. Wong, and A. Rodriguez-Paton, “Spiking neural P systems with learning functions,” IEEE Transactions on NanoBioscience, vol. 18, no. 2, pp. 176–190, 2019. View at: Publisher Site | Google Scholar

Copyright © 2020 Qianqian Ren et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

