Research Article  Open Access
Qianqian Ren, Xiyu Liu, Minghe Sun, "Turing Universality of Weighted Spiking Neural P Systems with Antispikes", Computational Intelligence and Neuroscience, vol. 2020, Article ID 8892240, 10 pages, 2020. https://doi.org/10.1155/2020/8892240
Turing Universality of Weighted Spiking Neural P Systems with Antispikes
Abstract
Weighted spiking neural P systems with antispikes (AWSN P systems) are proposed by adding antispikes to spiking neural P systems with weighted synapses. Antispikes act as inhibitory counterparts of spikes in the communication between neurons. Both spikes and antispikes are used in the rule expressions. An illustrative example shows the working process of the proposed AWSN P systems. The Turing universality of the proposed P systems as number generating and accepting devices is proved. Finally, using standard rules, a universal AWSN P system with 34 neurons is shown to work as a function computing device, and one with 30 neurons is shown to work as a number generator.
1. Introduction
Membrane computing, introduced by Păun [1], is a branch of nature-inspired computing. It provides a rich computational framework for biomolecular computing. Models of membrane computing are inspired by the structures and functions of living cells. The resulting models are distributed and parallel computing devices, usually called P systems [2]. There are three main classes of P systems: cell-like P systems, tissue-like P systems [3], and neural-like P systems [4]. Neural-like P systems, inspired by the way information is stored and processed in the human nervous system, combine neurons with membrane computing; the most widely known among them are spiking neural P systems (SN P systems) [5]. An SN P system consists of a group of neurons located at the nodes of a directed graph; neurons send spikes to adjacent neurons through synapses, i.e., the links of the graph. The neurons contain only one type of object, the spike.
With different biological features and mathematical motivations, many variants of SN P systems have emerged. Some of them modify the synapses between neurons, such as SN P systems with rules on synapses [6], SN P systems with multiple channels [7], and SN P systems with thresholds [8], while others modify the communication rules, such as SN P systems with communication on request [9], SN P systems with polarizations [10], and SN P systems with inhibitory rules [11]. Various new variants of SN P systems are provided in [12, 13]. Recently, some new variants of neural-like P systems inspired by SN P systems have been proposed, such as those reported in [14]. In addition, many publications in the literature address the computational power of SN P systems as function computing devices and as number generating/accepting devices. Pǎun [18] proved the small universality of SN P systems. Pan [19] proved the small universality of SN P systems with communication on request using 14 neurons; more details are available in [20, 21].
Since the SN P system was proposed, many scholars have explored its applications. At present, there are many applications of SN P systems, such as image skeletonization [22, 23], optimization problems [24], fault diagnosis [25–27], and workflow modeling [28].
Inspired by inhibitory spikes in the communication between neurons, a new type of SN P system was proposed by adding antispikes to SN P systems, called spiking neural P systems with antispikes (ASN P systems) [29]. In ASN P systems, each neuron contains multiple copies of the symbolic object a (spike) or ā (antispike) and processes information by spiking rules and forgetting rules. An annihilating rule aā → λ exists in each neuron and is always the first to apply, so a and ā cannot coexist in any neuron. Many researchers have proposed different ASN P systems, such as ASN P systems with multiple channels [30], ASN P systems with rules on synapses [31], and asynchronous ASN P systems [32]. The computational power of ASN P systems as number generating and accepting devices, as well as function computing devices, has also been proved [33].
In [34], SN P systems with weighted synapses were proposed. The weights represent the numbers of synapses between connected neurons. Building on the above, a new variant of SN P systems, called weighted spiking neural P systems with antispikes (AWSN P systems), is proposed in this work. In these systems, neurons receive spikes or antispikes from their connected neurons, and the numbers of spikes or antispikes they receive are determined by the weights of the synapses. As in SN P systems with standard rules, only one type of object, spikes or antispikes, exists in each neuron. These systems use spiking rules of the form E/a^c → a^p; d (called standard rules if p = 1 and extended rules otherwise), where E is a regular expression over spikes, c and p are positive integers, and d ≥ 0 is a delay. A spiking rule means that c spikes are consumed and p spikes are generated after d time units. SN P systems also have forgetting rules of the form a^s → λ, where s is a positive integer, meaning that s spikes are dissolved, i.e., removed from a neuron.
The rest of this article is organized as follows. In Section 2, the basic knowledge of a register machine is given. The definition of AWSN P systems is given, and an example is presented to show their working process in Section 3. By simulating register machines, the computational power of AWSN P systems is proved as natural number generating devices and accepting devices in Section 4. In Section 5, the universality of these systems as function computing devices and number generating devices is obtained by using 34 neurons and 30 neurons, respectively. Remarks and future research directions are given in Section 6.
2. Prerequisites
The universality of the systems is proved by simulating a register machine M. A register machine is a construct M = (m, H, l0, lh, I), where m is the number of registers, H is the set of instruction labels, l0 and lh are the starting and halting labels, and I is the set of instructions of the following forms:
(1) li : (ADD(r), lj, lk) (add 1 to register r and then go to instruction label lj or lk with nondeterministic choice)
(2) li : (SUB(r), lj, lk) (if register r is not empty, then subtract 1 from it and go to lj; otherwise, go to lk)
(3) lh : HALT (the halting instruction)
A register machine has two modes: a generating mode and an accepting mode. In the generating mode, M works in the following way. With all registers initially empty, M starts the computational process from the instruction labeled l0. When M reaches lh, the computation ends with the result stored in register 1. If the computation does not halt, no number is generated. In the accepting mode, a set of numbers can also be accepted by a register machine. Only the input register is nonempty at the beginning; M then works in a way similar to the generating mode, and a number is accepted if the computation halts. As register machines are universal in the accepting mode even when deterministic, the ADD instructions can be written as li : (ADD(r), lj). Register machines can compute any set of Turing computable numbers, the family of such sets being denoted NRE (see, e.g., [6]).
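The behavior of such a register machine can be sketched in Python (a minimal sketch; the instruction encoding and function name are illustrative, not from the paper):

```python
import random

def run_register_machine(instructions, l0, lh, registers, max_steps=10_000):
    """Run a register machine in generating mode; the result is in register 1.

    `instructions` maps a label to one of:
      ("ADD", r, lj, lk) - add 1 to register r, then go to lj or lk (chosen freely)
      ("SUB", r, lj, lk) - if register r > 0, subtract 1 and go to lj, else go to lk
    The halting label `lh` carries no instruction.
    """
    label = l0
    for _ in range(max_steps):
        if label == lh:
            return registers[1]               # computation halts; result in register 1
        op, r, lj, lk = instructions[label]
        if op == "ADD":
            registers[r] += 1
            label = random.choice([lj, lk])   # nondeterministic choice
        else:  # "SUB"
            if registers[r] > 0:
                registers[r] -= 1
                label = lj
            else:
                label = lk
    return None                               # did not halt: no number is generated

# Example: nondeterministically keep adding to register 1 or halt,
# so any number >= 1 may be generated.
prog = {"l0": ("ADD", 1, "l0", "lh")}
result = run_register_machine(prog, "l0", "lh", {0: 0, 1: 0})
```

The nondeterministic branch of the ADD instruction is what lets a single program generate a whole set of numbers rather than one value.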
Generally, a universal register machine is used to compute Turing computable functions for the purpose of analyzing the computing power of a system. A universal register machine Mu with 8 registers and 23 instructions was given by Minsky [35]: Mu is universal in the sense that there is a recursive function g such that, for all natural numbers x and y, the x-th partial recursive function satisfies φx(y) = Mu(g(x), y). Compared with the register machine shown in Figure 1, Mu lacks two instructions and places its final result in register 0. Since the result is stored in register 0, the result register would be involved in SUB instructions; hence, a register 8 is added and used to store the result without being subject to any SUB instruction. In general, in order to analyze the universality of the system, i.e., to verify that the system is equivalent to a Turing machine, the modified universal register machine shown in Figure 1, consisting of 9 registers and 25 instructions, is simulated by the system.
3. Weighted Spiking Neural P Systems with Antispikes
3.1. Definition
The proposed AWSN P system is a construct Π = (O, σ1, ..., σm, syn, in, out), where:
(1) O = {a, ā} is the alphabet, where the symbol a is a spike and ā is an antispike.
(2) σ1, ..., σm are neurons of the form σi = (ni, Ri), 1 ≤ i ≤ m, where ni ≥ 0 is the initial number of spikes stored in σi and Ri is the set of rules used in σi, of the following forms:
(a) spiking rules E/b^c → b′^p; d, where E is a regular expression over a or ā, b, b′ ∈ {a, ā}, c ≥ p ≥ 1, and d ≥ 0 is the delay in time units;
(b) forgetting rules b^s → λ, where b ∈ {a, ā} and s ≥ 1.
(3) syn ⊆ {1, ..., m} × {1, ..., m} × W represents the synapses, where W is the set of weights; for any (i, j, w) ∈ syn, i ≠ j and w ≥ 1.
(4) in and out are the input neuron and the output neuron, respectively.
In an AWSN P system, each neuron has one or more spiking rules, some neurons also have forgetting rules, and either spikes or antispikes (never both) are present in a neuron. If neuron σi contains k spikes or antispikes with b^k ∈ L(E) and k ≥ c, the spiking rule E/b^c → b′^p; d is enabled. If L(E) = {b^c}, the spiking rule is called pure and can be written as b^c → b′^p; d. The spiking rule is interpreted as follows: c spikes or antispikes are removed from neuron σi and the neuron fires, producing p spikes after d time units (as usual in membrane computing, all neurons in a system work in parallel under an assumed global clock), and w·p spikes are sent to each neuron σj with (i, j, w) ∈ syn. If a spiking rule with delay d is used by neuron σi at time t, the neuron is closed until time t + d and does not receive any spikes or antispikes in the meantime; it opens again at time t + d. If d = 0, the spikes are emitted immediately, i.e., the connected neurons receive the spikes or antispikes without delay.
If the forgetting rules in the neurons are used, then the spikes or antispikes are removed from the neurons. Spiking rules and forgetting rules must be applied if the conditions are met, but the choice of rules is nondeterministic if the conditions of multiple rules are met in a neuron. However, the annihilating rule must be applied first in each neuron.
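The firing condition and the annihilating rule can be sketched as follows (hypothetical helper names; modeling a neuron's contents as a signed count is only an illustrative encoding, not the paper's formalism):

```python
import re

def receive(content, incoming):
    """Apply the annihilation rule implicitly: a neuron's content is a signed
    count (positive = spikes a, negative = antispikes), so adding incoming
    spikes (+) or antispikes (-) cancels opposite objects pairwise."""
    return content + incoming

def rule_enabled(content, E, c):
    """Check whether a spiking rule E/a^c -> ... is enabled in a neuron holding
    `content` spikes; E is a regular expression over the single letter 'a'."""
    return content >= c and re.fullmatch(E, "a" * content) is not None

# Two spikes plus three incoming antispikes leave one antispike:
left = receive(2, -3)                       # -> -1
# A rule guarded by E = a(aa)* fires only on an odd number of spikes:
odd_fires = rule_enabled(3, "a(aa)*", 1)    # True
even_fires = rule_enabled(4, "a(aa)*", 1)   # False
```

Regular-expression guards of this odd/even kind are exactly what the OUTPUT module in Section 4 relies on.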
Through these rules, transitions between configurations occur. Any sequence of transitions starting from the initial configuration is called a computation. A computation halts when it reaches a configuration in which all neurons are open and no rule can be applied. To compute a function on natural numbers, a number is introduced into the system by reading a binary sequence z from the environment: the input neuron of Π receives a spike in each step corresponding to a 1 in z and receives nothing in steps corresponding to 0. The input neuron receives exactly the encoded number of spikes and no further spikes after the last one. The result of a computation is encoded in the distance between two output spikes, and the computation halts with exactly two spikes as outputs, immediately after the second spike is emitted; hence the output is a spike train containing exactly two 1s. The system outputs no spike for some unspecified number of steps from the beginning of the computation until the first spike is emitted.
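The output convention, in which the result is the distance between the first two output spikes, can be sketched as follows (illustrative helper, not from the paper):

```python
def decode_output(spike_train):
    """Decode the result of a computation from the binary output sequence:
    the number is the distance between the first two output spikes (1s)."""
    ones = [t for t, bit in enumerate(spike_train) if bit == 1]
    if len(ones) < 2:
        return None          # fewer than two spikes: no number produced
    return ones[1] - ones[0]

# A train of the form 0^b 1 0^(n-1) 1: here b = 3 silent steps, then n = 4.
n = decode_output([0, 0, 0, 1, 0, 0, 0, 1])
```

Note that the silent prefix before the first spike does not affect the result; only the gap between the two spikes matters.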
Let Ngen(Π) and Nacc(Π) be the sets of numbers generated and accepted by Π, respectively. For α ∈ {gen, acc}, the corresponding family of sets of numbers generated or accepted by AWSN P systems with at most m neurons and at most n rules in a neuron is denoted accordingly.
3.2. An Illustrative Example
An example as graphically shown in Figure 2 is given to explain the working process of the AWSN P system. The results of each step are shown in Table 1. A positive number in the table represents the number of spikes in the neuron, and a negative number represents the number of antispikes. For example, 2 means there are two spikes, and −2 means there are two antispikes.

The system has four neurons as shown in Figure 2. Assume that each of neurons and has two spikes, and neurons and are empty with no spikes. Suppose that the rule in neuron can be used at time , generating one antispike and sending three antispikes to neurons and because the weight of synapses between these neurons is 3. Two antispikes together with two spikes disappear immediately because the annihilating rule is applied first, and there is one antispike left in neuron . The rule in generates two spikes to be sent to neuron and one spike to be sent to neuron . So the rule in can be applied again. Neuron receives six antispikes from by using the rule of neuron twice, so that the rule in fires. Neuron gets three spikes (two from neuron ) and sends one spike to the environment.
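The weighted delivery step used in this example, where a synapse of weight 3 carries three copies of each emitted antispike, can be sketched as follows (hypothetical neuron names; signed counts encode spikes vs. antispikes as above):

```python
def fire(p, weights):
    """Deliver a firing neuron's emitted objects along weighted synapses:
    a synapse of weight w carries w copies of the p emitted spikes (p > 0)
    or antispikes (p < 0). A sketch of the delivery step only, not the
    full AWSN P system semantics."""
    return {target: p * w for target, w in weights.items()}

# A neuron emitting one antispike (-1) over synapses of weight 3
# delivers three antispikes to each of its two targets:
delivered = fire(-1, {"sigma2": 3, "sigma3": 3})
```

Combined with the signed-count annihilation above, a target holding two spikes that receives three antispikes this way is left with one antispike, as in the example.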
4. Computational Models
4.1. Generating Mode
Theorem 1. The family of sets of numbers generated by AWSN P systems equals NRE; i.e., AWSN P systems as number generating devices are Turing universal.
Proof. A register machine M = (m, H, l0, lh, I) is considered. M is simulated by an AWSN P system Π consisting of three modules: ADD, SUB, and OUTPUT.
In the simulation process, a register r of M corresponds to a neuron σr, and the number contained in register r equals the number of spikes in σr. An instruction li in M corresponds to a neuron σli. Furthermore, the modules require some auxiliary neurons in addition to the register and label neurons. The simulation of an ADD or SUB instruction begins at neuron σli. Modules ADD and SUB are simulated by sending spikes to σlj and σlk as the rules in neuron σli fire; in the ADD module, σli sends a spike to either σlj or σlk, with the choice made nondeterministically. When a spike arrives at neuron σlh, the computation in M stops, and the module OUTPUT begins to send the result stored in register 1 to the environment. At the beginning of the simulation, neuron σl0 has one spike and all other neurons are empty.
Therefore, the SUB instruction can be simulated correctly by module SUB.
(c) Module OUTPUT (shown in Figure 5). Assume that neuron σlh of the system accumulated one spike at time t, and the register-1 neuron holds the spikes encoding the number n stored in register 1 of M. When the rule in σlh fires at time t, it sends one spike onward; the receiving neuron then has an odd number of spikes and its rule fires. At time t + 1, it sends one spike each to the output neuron and its companion neuron, so the output neuron has one spike, an odd number. At time t + 2, the output neuron fires, sending one spike to the environment; at the same time, the rules in the two companion neurons are used, and each sends one spike on. This continues for n steps, until the register-1 neuron has no spike left and the number of spikes in the checking neuron is even; its rule then stops being used, leaving one spike behind. The output neuron subsequently receives one spike, making its spike count odd again, and fires a second time. Therefore, the number computed by the AWSN P system is the distance between the two steps at which the output neuron fires, that is, n. The module OUTPUT is simulated correctly.

4.2. Accepting Mode
Theorem 2. The family of sets of numbers accepted by AWSN P systems equals NRE; i.e., AWSN P systems as number accepting devices are Turing universal.
Proof. The proof of this theorem is similar to that of Theorem 1. A register machine M is considered, and the simulating system consists of three modules: ADD, SUB, and INPUT. Module SUB is the same as the one shown in Figure 4.
(1) Module ADD (shown in Figure 6). Assume that a deterministic ADD instruction li : (ADD(r), lj) has to be simulated at time t, and one spike is in neuron σli; then its rule can be used. Neuron σli sends one spike to neurons σr and σlj. In this way, the number of spikes in σr increases by 1 and the instruction lj is activated. Hence, the ADD instruction is simulated correctly by this module.

From the descriptions of the three modules above, it is clear that the system correctly simulates the register machine M. The proof is complete.
5. A Small Universal AWSN P System
5.1. The Universality as Function Computing Devices
Theorem 3. There is a universal AWSN P system with 34 neurons that can be used as a function computing device.
Proof. A general framework of the system Π used to simulate the universal register machine is shown in Figure 8; Π is a universal AWSN P system consisting of 8 modules: ADD, SUB, ADDADD, SUBADD1, SUBADD2, SUBSUB, INPUT, and OUTPUT. The modules SUB, OUTPUT, and ADD are the same as those in Figures 4–6, respectively. The module INPUT is shown in Figure 9.
Module INPUT works as follows: when neuron gets a spike from the environment, the rule fires and one spike is sent to neurons , , and , and two spikes are sent to neuron . Then, the rule in neuron sends one spike to both and . At the same time, neuron fires and then sends one spike to and two spikes to . Up to this point, three spikes have been sent to neuron . Therefore, before neuron receives more spikes from the environment, neurons and receive one spike from each other in each time period and neuron accumulates spikes.
When receives the second spike, each of the neurons , , and can get one spike and gets two spikes. Neuron has four spikes at this moment, and its rule can be used to send two spikes to neuron . Neuron then has six spikes, so that the rule in is used to produce one spike and send it to . In this way, neurons and receive one spike from each other in each step before receives the third spike from the environment. Neuron has spikes at the end. When neuron receives the third spike, each of the neurons , , and gets one spike, while receives two spikes. As a result, neuron has an odd number of spikes and the rule cannot be applied. At present, neuron has three spikes, and the rule in neuron fires, which generates one spike and sends it to . In this way, it can simulate the instruction in the next step.
As with the proofs of Theorems 1 and 2, the system uses the following numbers of neurons:
9 neurons for the 9 registers
25 neurons for the 25 labels
5 neurons for the module INPUT
1 auxiliary neuron for each of the 14 SUB instructions, 14 in total
2 neurons for the module OUTPUT
Therefore, 55 neurons are used in total.
The numbers of neurons can be decreased by exploring some relationships between some instructions of register machine . The following modules are given to reduce the number of neurons in the computation process.
The SUBADD instruction pairs can be divided into two cases, depending on the number of spikes in the register neuron involved in the SUB instruction. Modules SUBADD1 and SUBADD2, shown in Figures 10 and 11, simulate a SUB instruction followed by an ADD instruction. The working process of module SUBADD1 is similar to that of module SUB. When the rule in the label neuron is used and the register neuron contains at least one spike, the zero-check neuron cannot fire; the other auxiliary neuron fires on receiving one spike and then passes one spike on. At the end of the computation, the next label neuron has one spike and the remaining auxiliary neurons are empty. When the register neuron is empty, the situation is reversed and the zero-check branch ends with one spike in the alternative label neuron. Thus, each pair of consecutive SUB and ADD instructions can share a common neuron, and there are 6 such pairs in the universal register machine. By using this module, 6 neurons can be saved. In the same way, the module shown in Figure 10 can simulate one further pair of instructions, saving one more neuron.
The module ADDADD shown in Figure 12 can simulate two consecutive ADD instructions. In this way, one more neuron can be saved.
The SUB instructions can share a common neuron when they act on different registers, as shown in Figure 13. Assume that the simulation of the SUB instruction starts at time t. When the label neuron gets a spike, the rule fires and sends one antispike to the register neuron and two antispikes to the two auxiliary neurons, respectively. The register neuron receives the antispike one step later. The remaining neurons work in the same way as those in module SUB shown in Figure 4. The shared neuron will send three spikes to one branch and two spikes to the other, where forgetting rules will be applied. Thus, the instruction is correctly simulated by this module. The process when starting with the other SUB instruction is similar to that described above.
Two SUB modules dealing with the same register, as shown in Figure 14, can also be proved to work correctly in a similar way. Assume that the instruction is simulated and one spike is contained in neuron . The process is divided into two cases according to the number of spikes in neuron . When has at least one spike, the working process of the system is similar to that of module SUB. When is empty, the rule in neuron cannot be used. Neurons , , and are all empty but neuron contains one spike. All SUB instructions can be simulated correctly by the module. Therefore, all SUB modules can share a common neuron.
With the savings described above, the system uses the following:
9 neurons for the 9 registers
17 neurons for the 17 labels
5 neurons for the module INPUT
1 neuron shared by all 14 SUB instructions
2 neurons for the module OUTPUT
A total of 21 neurons are saved, decreasing the number of neurons in the system from 55 to 34. The proof is complete.
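The neuron bookkeeping above can be checked directly (a restatement of the counts appearing in the proof):

```python
# Neuron counts for the simulation of the universal register machine
# before and after module sharing (figures from the proof of Theorem 3):
before = 9 + 25 + 5 + 14 + 2   # registers, labels, INPUT, per-SUB neurons, OUTPUT
after = 9 + 17 + 5 + 1 + 2     # labels merged by SUBADD/ADDADD, one shared SUB neuron
saved = before - after
```

The 21 saved neurons decompose as 8 label neurons (6 from SUBADD1 pairs, 1 from the Figure 10 pair, 1 from ADDADD) plus 13 of the 14 per-SUB auxiliary neurons.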
5.2. The Small Universality as Number Generator
A small universal AWSN P system working as a number generator is considered. The process of simulating a universal number generator is similar to that of simulating a function computing device; the difference between them lies in the module INPUT. The system starts by reading a spike train from the environment and ends when the input neuron has received the encoded spikes. The system is thus loaded with an arbitrary number, which is at the same time reflected in the output spike train, with the corresponding values placed in register 1 and register 2. Since the output module is not required, that is to say, register 8 is not required, the 8-register universal machine is simulated. If the computation of the register machine halts, the computation of the system also halts.
Furthermore, module INPUT and module OUTPUT can be combined. The module INPUTOUTPUT is shown in Figure 15, and an example is used to prove its feasibility. The label can also be saved because of module INPUTOUTPUT. The string 101 is used in module INPUTOUTPUT, where and . The computation follows the above working processes of the modules. The results of each step are shown in Table 2.

Assume that has one spike at time, and neuron has two spikes. At time, and receive one spike, respectively. From the structure shown in Figure 15, neurons and receive one spike from each other at each step until and stop firing. Then receives the second spike. Each of neurons and receives one spike, receives six spikes, and receives two spikes, so that neurons and can fire. At time, both and have two spikes, but they cannot fire again. receives six spikes from , but also receives two antispikes from , plus four spikes existing in , so that neuron has eight spikes. In addition, neuron receives two spikes again, so that there are three spikes contained. Neuron only has one spike because the received antispike annihilates one spike. At time, the neuron is empty after receiving an antispike. receives two antispikes, so that there are four spikes contained in neuron , the number of spikes is even, and its rule can fire. At the next step, receives one antispike and fires. Neuron consumes two spikes and still can fire. At time, neurons and receive one spike from , respectively. So, there are 4 spikes in , meeting the required conditions for firing. Neuron also gets one spike.
The string is read through the input neuron, and g(x) spikes are stored in register 1 when the reading stops. The output number, (t + 6) − (t + 2) = 4, is the same as the number stored in register 2. Neuron σl0 then activates and starts simulating the register machine through modules ADD and SUB. Therefore, the module INPUTOUTPUT works correctly.
Therefore, this system contains the following:
8 neurons for the 8 registers
14 neurons for the 14 labels (one label is saved by the module INPUTOUTPUT; 8 neurons are saved by modules SUBADD and ADDADD)
1 neuron shared by the 13 SUB instructions
7 neurons in the module INPUTOUTPUT
Hence, there is a universal AWSN P system with 30 neurons that can be used as a number generator.
6. Conclusions
In this work, a variant of SN P systems, called AWSN P systems, is proposed. Because of the use of antispikes, which model inhibitory spikes in the communication between neurons, the proposed systems are biologically more meaningful than SN P systems. An example illustrates the working process of the system. Computational universality is then proved in the generating mode and the accepting mode, respectively. Finally, the Turing universality of AWSN P systems is proved: a function computing device is realized using 34 neurons. Compared with the small universal SN P system with antispikes introduced by Song [17], the AWSN P system uses 13 fewer neurons; compared with the SN P systems with weighted synapses introduced by Pan [34], it uses 4 fewer neurons. The small universality of the AWSN P system as a number generator is established with 30 neurons, 6 fewer than in Pan's work [34].
The computational universality is proved for AWSN P systems with standard rules. Three types of time-dependent spiking rules and one type of forgetting rule are used. There are several future research directions. One is to investigate whether the computational power remains the same if only one or two types of spiking rules are used, or if the forgetting rules are omitted, and whether AWSN P systems perform as well when the spiking rules are not time-dependent. These open problems need further study. Another direction is the application of the proposed systems. There have been studies such as using SN P systems with a learning function for letter recognition [36]. If a learning function were introduced into AWSN P systems, they might perform better in letter recognition. Because the use of antispikes improves the ability of AWSN P systems to represent and process information, they may solve more practical problems, which requires further research.
Data Availability
No datasets were used in this article.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
This research was funded by the National Natural Science Foundation of China (nos. 61876101, 61802234, and 61806114), the Social Science Fund Project of Shandong (16BGLJ06 and 11CGLJ22), China Postdoctoral Science Foundation Funded Project (2017M612339 and 2018M642695), Natural Science Foundation of Shandong Province (ZR2019QF007), China Postdoctoral Special Funding Project (2019T120607), and Youth Fund for Humanities and Social Sciences, Ministry of Education (19YJCZH244).
References
 G. Păun, “Computing with membranes,” Journal of Computer and System Sciences, vol. 61, pp. 108–143, 2000.
 G. Păun, G. Rozenberg, and A. Salomaa, The Oxford Handbook of Membrane Computing, Oxford University Press, Oxford, UK, 2010.
 C. Martin-Vide, J. Pazos, G. Păun, and A. Rodríguez-Patón, “Tissue P systems,” Theoretical Computer Science, vol. 296, pp. 295–326, 2003.
 G. Păun, Membrane Computing: An Introduction, Springer-Verlag, Berlin, Germany, 2012.
 M. Ionescu, G. Păun, and T. Yokomori, “Spiking neural P systems,” Fundamenta Informaticae, vol. 71, pp. 279–308, 2006.
 T. Song, L. Pan, and G. Păun, “Spiking neural P systems with rules on synapses,” Theoretical Computer Science, vol. 529, pp. 888–895, 2014.
 H. Peng, J. Yang, J. Wang et al., “Spiking neural P systems with multiple channels,” Neural Networks, vol. 95, pp. 66–71, 2017.
 X. Zeng, X. Zhang, T. Song, and L. Pan, “Spiking neural P systems with thresholds,” Neural Computation, vol. 26, no. 7, pp. 1340–1361, 2014.
 L. Pan, G. Păun, G. Zhang, and F. Neri, “Spiking neural P systems with communication on request,” International Journal of Neural Systems, vol. 27, pp. 1–17, 2017.
 T. Wu, A. Păun, Z. Zhang, and L. Pan, “Spiking neural P systems with polarizations,” IEEE Transactions on Neural Networks and Learning Systems, vol. 29, no. 8, pp. 3349–3360, 2018.
 H. Peng, B. Li, J. Wang et al., “Spiking neural P systems with inhibitory rules,” Knowledge-Based Systems, vol. 188, pp. 1–10, 2020.
 A. Alhazov, R. Freund, S. Ivanov, M. Oswald, and S. Verlan, “Extended spiking neural P systems with white hole rules and their red–green variants,” Natural Computing, vol. 17, no. 2, pp. 297–310, 2017.
 T. Wu, Z. Zhang, G. Păun, and L. Pan, “Cell-like spiking neural P systems,” Theoretical Computer Science, vol. 623, pp. 180–189, 2016.
 H. Peng, T. Bao, X. Luo et al., “Dendrite P systems,” Neural Networks, vol. 127, pp. 110–120, 2020.
 H. Peng, J. Wang, M. J. Pérez-Jiménez, and A. Riscos-Núñez, “Dynamic threshold neural P systems,” Knowledge-Based Systems, vol. 163, pp. 875–884, 2019.
 H. Peng and J. Wang, “Coupled neural P systems,” IEEE Transactions on Neural Networks and Learning Systems, vol. 30, no. 6, pp. 1672–1682, 2019.
 T. Song, Y. Jiang, X. Shi, and X. Zeng, “Small universal spiking neural P systems with antispikes,” Journal of Computational and Theoretical Nanoscience, vol. 10, no. 4, pp. 999–1006, 2013.
 A. Pǎun and G. Pǎun, “Small universal spiking neural P systems,” BioSystems, vol. 90, pp. 48–60, 2007.
 T. Pan, X. Shi, Z. Zhang, and F. Xu, “A small universal spiking neural P system with communication on request,” Neurocomputing, vol. 275, pp. 1622–1628, 2017.
 X. Zhang, X. Zeng, and L. Pan, “Smaller universal spiking neural P systems,” Fundamenta Informaticae, vol. 87, pp. 117–136, 2008.
 T. Wu, F. Bȋlbȋe, A. Păun, L. Pan, and F. Neri, “Simplified and yet Turing universal spiking neural P systems with communication on request,” International Journal of Neural Systems, vol. 28, pp. 1–19, 2018.
 D. Díaz-Pernil, F. Peña-Cantillana, and M. A. Gutiérrez-Naranjo, “A parallel algorithm for skeletonizing images by using spiking neural P systems,” Neurocomputing, vol. 115, pp. 81–91, 2013.
 T. Song, S. Pang, S. Hao, A. Rodríguez-Patón, and P. Zheng, “A parallel image skeletonizing method using spiking neural P systems with weights,” Neural Processing Letters, 2018.
 G. Zhang, H. Rong, F. Neri, and M. J. Pérez-Jiménez, “An optimization spiking neural P system for approximately solving combinatorial optimization problems,” International Journal of Neural Systems, vol. 24, 2014.
 Y. Yahya, A. Qian, and A. Yahya, “Power transformer fault diagnosis using fuzzy reasoning spiking neural P systems,” Journal of Intelligent Learning Systems and Applications, vol. 8, no. 4, pp. 77–91, 2016.
 T. Wang, J. Zhao, G. Zhang, Z. He, and J. Wang, “Fault diagnosis of electric power systems based on fuzzy reasoning spiking neural P systems,” IEEE Transactions on Power Systems, vol. 30, pp. 1182–1194, 2014.
 T. Wang, X. Wei, J. Wang et al., “A weighted corrective fuzzy reasoning spiking neural P system for fault diagnosis in power systems with variable topologies,” Engineering Applications of Artificial Intelligence, vol. 92, 2020.
 T. Song, X. Zeng, P. Zheng, M. Jiang, and A. Rodríguez-Patón, “A parallel workflow pattern modeling using spiking neural P systems with colored spikes,” IEEE Transactions on Nanobioscience, vol. 17, no. 4, pp. 474–484, 2018.
 L. Pan and G. Păun, “Spiking neural P systems with antispikes,” International Journal of Computers Communications & Control, vol. 4, no. 3, pp. 273–282, 2009.
 X. Song, J. Wang, H. Peng et al., “Spiking neural P systems with multiple channels and antispikes,” Biosystems, vol. 169–170, pp. 13–19, 2018.
 T. Wu, Y. Wang, S. Jiang, Y. Su, and X. Shi, “Spiking neural P systems with rules on synapses and antispikes,” Theoretical Computer Science, vol. 724, pp. 13–27, 2018.
 T. Song, X. Liu, and X. Zeng, “Asynchronous spiking neural P systems with antispikes,” Neural Processing Letters, vol. 42, pp. 633–647, 2015.
 V. P. Metta and A. Kelemenová, “Universality of spiking neural P systems with antispikes,” in Proceedings of the International Conference on Theory and Applications of Models of Computation, pp. 352–365, New York, NY, USA, 2014.
 L. Pan, X. Zeng, X. Zhang, and Y. Jiang, “Spiking neural P systems with weighted synapses,” Neural Processing Letters, vol. 35, no. 1, pp. 13–27, 2012.
 M. Minsky, Computation: Finite and Infinite Machines, Prentice-Hall, Englewood Cliffs, NJ, USA, 1967.
 T. Song, L. Pan, T. Wu, P. Zheng, M. L. D. Wong, and A. Rodriguez-Paton, “Spiking neural P systems with learning functions,” IEEE Transactions on NanoBioscience, vol. 18, no. 2, pp. 176–190, 2019.
Copyright
Copyright © 2020 Qianqian Ren et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.