Complexity
Volume 2019, Article ID 7313414, 12 pages
https://doi.org/10.1155/2019/7313414
Research Article

Sequential Spiking Neural P Systems with Local Scheduled Synapses without Delay

1Key Laboratory of Image Information Processing and Intelligent Control of Education Ministry of China, School of Automation, Huazhong University of Science and Technology, Wuhan 430074, Hubei, China
2Government Girls Postgraduate College, Kohat 26000, Khyber Pakhtunkhwa, Pakistan
3Department of Computer Science, University of the Philippines Diliman, Quezon City, Philippines
4School of Information Science and Engineering, Xiamen University, Xiamen 361005, China

Correspondence should be addressed to Fei Xu; fei_xu@hust.edu.cn

Received 7 December 2018; Revised 21 February 2019; Accepted 20 March 2019; Published 14 April 2019

Academic Editor: Alex Alexandridis

Copyright © 2019 Alia Bibi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Spiking neural P systems with scheduled synapses are a class of distributed and parallel computational models motivated by the structural dynamism of biological synapses, incorporating ideas from nonstatic (i.e., dynamic) graphs and networks. In this work, we consider the family of spiking neural P systems with scheduled synapses working in the sequential mode: at each step, the neuron(s) with the maximum/minimum number of spikes among the neurons that can spike will fire. The computational power of spiking neural P systems with scheduled synapses working in the sequential mode is investigated. Specifically, the universality (Turing equivalence) of such systems is obtained.

1. Introduction

Spiking neural P systems (abbreviated as SN P systems) were first introduced in [1] as a class of parallel and distributed neural-like computational models, inspired by the fact that spiking neurons communicate with each other through electrical impulses. An SN P system can be viewed as a directed graph whose nodes are computing units called neurons and whose arcs represent synapses. Each neuron contains copies of a single object type called the spike, together with spiking rules that send one or more spikes from the source neuron to every neuron connected to it by a synapse, and forgetting rules that remove spikes from the neuron.

Since the computational model was introduced, many efforts have been made to investigate the theoretical and practical aspects of SN P systems. The computational power of SN P systems was investigated for different computing devices, e.g., number generating/accepting devices [1], language generators [2, 3], and function computing devices [4]. By abstracting computing ideas from the human brain and biological neurons, many variants of SN P systems have been introduced, such as axon P systems [5], SN P systems with rules on synapses [6–8], SN P systems with weights [9], SN P systems with neuron division and budding [10], SN P systems with structural plasticity [11], cell-like SN P systems [12], fuzzy reasoning SN P systems [13–16], probabilistic SN P systems [17], SN P systems with multiple channels [18], SN P systems with scheduled synapses (SSN P systems for short) [19], coupled neural P systems [20], dynamic threshold neural P systems [21], SN P systems with colored spikes [22], and SN P systems simulators [23, 24].

Although biological processes in living organisms happen in parallel, they are not synchronized by a universal clock as assumed in SN P systems. Introduced in [25], sequential SN P systems are a class of computing devices that serve as a bridge between spiking neural networks and membrane computing [26]. Two types of sequentiality are considered in sequential SN P systems: general sequentiality [25] and sequentiality induced by the number of spikes [27–29]. The first case is the classical pure-sequential model of this family: such systems are completely (purely) sequential with respect to neurons, and at each time unit one and only one neuron among all active neurons is nondeterministically chosen to fire. The second case builds on the first, with the sequentiality induced by the number of spikes (either maximum or minimum) present in the active neurons; on this basis there are two further types, maximal sequential SN P systems and minimal sequential SN P systems.

In this work, we consider SN P systems with scheduled synapses (SSN P systems), which are inspired and motivated by the structural dynamism of biological synapses, while incorporating ideas from nonstatic (i.e., dynamic) graphs and networks in mathematics. Synapses in SSN P systems are only available at a specific schedule or duration. The schedules are of two types, local and global scheduling, named after their specified reference neurons: local reference neurons and global reference neurons. In the first case, synapses are called local scheduled synapses and are scheduled locally with respect to their local reference neurons; there can be multiple reference neurons within a system, but each synapse is associated with exactly one reference neuron. In the second case, synapses are called global scheduled synapses and are scheduled globally with respect to a single shared global reference neuron. In particular, we consider SN P systems with local scheduled synapses working in the sequential mode without delay (SSSN P systems, for short), where the sequentiality is induced by the maximum/minimum (max/min, for short) number of spikes. The computational power of SSSN P systems working under the strong sequentiality or pseudosequentiality strategy is investigated. Specifically, SSSN P systems working under the max/min-sequentiality (resp., max/min-pseudosequentiality) strategy are proved to be Turing universal.

The paper is organized as follows: Section 2 provides definitions of some concepts necessary for the understanding of the paper. We introduce in Section 3 our variant of SN P systems, namely, sequential SN P systems with local scheduled synapses working in the max/min-sequential strategy without delay (SSSN P systems). We present our universality results in Section 4 and provide final remarks in Section 5.

2. Preliminaries

In this section, various notions and notations from formal language theory and computability theory that we shall use in the remainder of the paper are explained. The reader is also assumed to have a basic familiarity with membrane computing. Both the basics of membrane computing and its formal language and computability theory prerequisites are covered in sufficient detail in [30] and in the handbook [31].

We denote by Σ a nonempty finite set of symbols called an alphabet, and Σ* is the set of all strings of symbols over Σ, including λ, the empty string. The set of nonempty strings over Σ is denoted by Σ+. Each subset of Σ* is called a language over Σ. A language is called λ-free if it does not contain the empty string (thus it is a subset of Σ+). A collection of languages is called a family of languages. We define regular expressions over an alphabet Σ, and the languages they describe, inductively as follows: λ and every symbol a ∈ Σ are regular expressions; if E1 and E2 are regular expressions, then (E1)(E2), (E1) ∪ (E2), and (E1)+ are also regular expressions. We denote by L(E) the language described by the regular expression E. In particular, we have L(λ) = {λ}, L(a) = {a}, L((E1)(E2)) = L(E1)L(E2), L((E1) ∪ (E2)) = L(E1) ∪ L(E2), and L((E1)+) = (L(E1))+.

Our universality results are proved by allowing our proposed model to simulate a well-known universal computing model called the register machine [1, 32].

A register machine is a construct of the form M = (m, H, l0, lh, I), where (i) m is the number of registers, (ii) H is the set of instruction labels, (iii) l0 is the start label (which labels an ADD instruction), (iv) lh is the halt label (which is assigned to instruction HALT), and (v) I is the set of instructions.

A register machine M can accept or generate a number n. In this work, we use register machines that generate numbers. We denote the set of all numbers computed/generated by M as N(M). Since register machines compute all sets of numbers that are Turing computable, the family of sets of numbers generated by register machines coincides with the family of sets of numbers recognized by Turing machines, i.e., the recursively enumerable sets.
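To make the register machine semantics concrete, the behavior of ADD and SUB instructions can be sketched as a small interpreter. This is an illustrative aid, not part of the construction in this paper; the function name, label strings, and instruction encoding are ours.

```python
import random

def run_register_machine(instructions, l0, lh, out_reg=1, seed=None):
    """Run a generative register machine from label l0 until lh is reached.

    Each instruction li: (ADD(r), lj, lk) increments register r and jumps
    nondeterministically to lj or lk; li: (SUB(r), lj, lk) decrements r and
    jumps to lj if r > 0, otherwise jumps to lk. Returns the content of the
    output register when the machine halts.
    """
    rng = random.Random(seed)
    regs = {}                      # registers, defaulting to 0
    label = l0
    while label != lh:
        op, r, lj, lk = instructions[label]
        if op == "ADD":
            regs[r] = regs.get(r, 0) + 1
            label = rng.choice([lj, lk])   # nondeterministic branch
        else:                              # "SUB"
            if regs.get(r, 0) > 0:
                regs[r] -= 1
                label = lj
            else:
                label = lk
    return regs.get(out_reg, 0)

# A toy machine generating some n >= 1 in register 1: l0 increments register 1
# and nondeterministically either loops back to l0 or halts.
M = {"l0": ("ADD", 1, "l0", "lh")}
```

Running `run_register_machine(M, "l0", "lh")` returns a nondeterministically chosen number at least 1, which matches the generative reading used below.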

3. Sequential SN P Systems with Local Scheduled Synapses Working in Max/Min-Sequential Strategy without Delay

We give here the definition of sequential SN P systems with local scheduled synapses without delay.

Definition 1. A sequential SN P system with local scheduled synapses without delay (SSSN P system, for short) of degree m ≥ 1 is a construct of the form
Π = (O, σ1, …, σm, syn, ref, out),
where (a) O = {a} is the singleton alphabet (a is called spike); (b) σ1, …, σm are neurons of the form σi = (ni, Ri), 1 ≤ i ≤ m, where (1) ni ≥ 0 is the initial number of spikes contained in σi; (2) Ri is a finite set of rules of the form (i) E/a^c → a^p, where E is a regular expression over O and c ≥ p ≥ 1; (ii) a^s → λ for some s ≥ 1, with the restriction that a^s ∉ L(E) for any rule E/a^c → a^p of type (i) from Ri; (c) syn is the set of synapses among neurons, where each synapse carries a schedule of the form [t1, t2) (t1 < t2), no neuron has a synapse to itself, and each synapse has exactly one schedule; (d) ref is the set of synapse references: pairs associating a set of synapses (an element of the power set of syn) with a reference neuron, such that any two distinct pairs have disjoint synapse sets and distinct reference neurons; (e) out is the label of the output neuron.

The semantics of spiking rule application is as follows. Let σi be a neuron containing b spikes and having a rule E/a^c → a^p ∈ Ri. If a^b ∈ L(E) and b ≥ c, then the rule can be applied. Applying the rule, σi consumes c spikes to send p spike(s) to its adjacent neurons, so b − c spikes remain in σi. If p = 1 then the rule serves as a standard rule, and otherwise as an extended rule. In general, a neuron can have two or more spiking rules with regular expressions E1 and E2 such that L(E1) ∩ L(E2) ≠ ∅; in this case, if both regular expressions are satisfied, one of the rules is nondeterministically chosen to be applied. Additionally, our system is sequential and synchronized, so that at each step, among all the active neurons, only the one with the maximum/minimum number of spikes can apply its rule. Rule application is sequential both at the level of neurons and at the system level: at each time schedule exactly one rule can be applied by a spiking neuron, and one and only one neuron (holding the maximum/minimum number of spikes) can fire among all the active neurons. As a convention, if a rule E/a^c → a^p has L(E) = {a^c}, then we write it as a^c → a^p instead. Moreover, a synapse from the output neuron labeled with out is always assumed to be available.
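The applicability condition just described (a rule E/a^c → a^p fires in a neuron holding b spikes iff a^b ∈ L(E) and b ≥ c) can be checked mechanically. The sketch below represents E as a Python regular expression over the string a^b; the function names and the example rules are illustrative, not taken from the paper.

```python
import re

def applicable(rule, b):
    """A rule (E, c, p), read as E/a^c -> a^p, is applicable in a neuron
    holding b spikes iff a^b is in L(E) and b >= c."""
    E, c, p = rule
    return b >= c and re.fullmatch(E, "a" * b) is not None

def apply_rule(rule, b):
    """Return (spikes remaining in the neuron, spikes emitted)."""
    E, c, p = rule
    if not applicable(rule, b):
        raise ValueError("rule not applicable")
    return b - c, p

standard = ("a", 1, 1)        # a -> a: applicable only with exactly one spike
extended = ("a(aa)+", 3, 2)   # odd number of spikes >= 3: consume 3, emit 2
```

For example, `apply_rule(extended, 5)` consumes 3 of the 5 spikes and emits 2, leaving 2 in the neuron, while the same rule is not applicable with an even spike count.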

The semantics of synapse scheduling is as follows: synapses are scheduled relative to the activation of their reference neuron, and a scheduled synapse takes effect immediately upon this activation. The reference neuron is indicated with the symbol "*". If at step t a reference neuron becomes activated and a synapse is scheduled with [t1, t2), then that synapse is active only from step t + t1 to step t + t2. If at some step a rule is applied by a neuron but there is no respective scheduled synapse at that time, then the emitted spike is wasted, as no adjacent neuron receives it.
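Under this reading, a synapse scheduled with [t1, t2) is available during a window offset from the activation step of its reference neuron. A minimal sketch of this availability test follows; the function name is ours, and the relative-offset interpretation is our assumption about the stripped notation.

```python
def synapse_active(schedule, t_act, t):
    """A synapse with schedule [t1, t2), whose reference neuron fired at step
    t_act, is available exactly during the half-open window
    [t_act + t1, t_act + t2). Spikes sent outside this window are wasted."""
    t1, t2 = schedule
    return t_act + t1 <= t < t_act + t2
```

For instance, a synapse scheduled with [1, 2) whose reference neuron fires at step 0 is available only at step 1; a spike sent at step 0 or step 2 is lost.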

The semantics of maximum/minimum sequentiality is as follows: if at any step there is more than one active neuron (neurons with enabled rules are called active), then only the neuron(s) containing the maximum/minimum number of spikes among the currently active neurons will be able to fire. If there is a tie among two or more active neurons (all holding an equal number of spikes), then two different strategies are considered in SSSN P systems: strong sequentiality and pseudosequentiality. In the first strategy, one and only one active neuron is nondeterministically chosen among all the tied neurons to fire, while in the second strategy all the tied neurons fire simultaneously. For example, assume that, in the max-sequential case, at a given step there are four active neurons σ1, σ2, σ3, and σ4 storing 4, 5, 5, and 1 spikes, respectively. There is a tie between neurons σ2 and σ3, so in the strong max-sequential case either σ2 or σ3 is nondeterministically chosen to fire, but in the max-pseudosequential case both neurons fire simultaneously.
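The two tie-breaking strategies can be summarized in a few lines. The sketch below reproduces the four-neuron example from the text (spike counts 4, 5, 5, 1); the neuron names and function signature are illustrative.

```python
import random

def firing_neurons(active, strategy="pseudo", mode=max, rng=None):
    """Select which active neurons fire under max/min (pseudo)sequentiality.

    `active` maps neuron name -> spike count; `mode` is max or min. Under
    strong sequentiality exactly one tied neuron is chosen nondeterministically;
    under pseudosequentiality all tied neurons fire together.
    """
    extreme = mode(active.values())
    tied = sorted(n for n, s in active.items() if s == extreme)
    if strategy == "strong":
        return [(rng or random).choice(tied)]   # exactly one tied neuron fires
    return tied                                 # all tied neurons fire

active = {"s1": 4, "s2": 5, "s3": 5, "s4": 1}   # the example from the text
```

Here `firing_neurons(active, "pseudo")` yields both s2 and s3, while `firing_neurons(active, "strong")` yields exactly one of them; with `mode=min` only s4 fires.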

A configuration of the system at a given step is the distribution of spikes among the neurons together with the status of each neuron, whether closed or open. The initial configuration is given by the initial number of spikes ni of each neuron, with all neurons open.

Starting from the fixed initial configuration (distribution of spikes among neurons) and applying the rules in a synchronized manner (a global clock is assumed), the system evolves. Applying the rules according to the above description, transitions among configurations can be defined.

Applying the rules in this way, the system passes from one configuration to another; such a step is known as a transition. Given two configurations C1 and C2 of Π, a direct transition from C1 to C2 is denoted by C1 ⇒ C2, and the reflexive and transitive closure of ⇒ is denoted by ⇒*. A transition is sequential provided that, among all the candidate neurons, one holding the maximum/minimum number of spikes applies its rule.

A computation is a sequence of transitions starting from the initial configuration and ending in a final configuration. A computation is said to be successful, or a halting computation, if it reaches a configuration where no further rule can be applied. Moreover, all the neurons must be restored to their valid states.

There are various ways to define the result or output of a computation; in this work we use the following: we consider only the first two spikes fired by the output neuron, at steps t1 and t2. The number computed by an SSSN P system in the max-sequential case is the difference between these two steps minus one, i.e., t2 − t1 − 1, while in the min-sequential case it is this difference minus two, i.e., t2 − t1 − 2.
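The output convention can be stated as two one-line functions. The offsets follow the description above (the difference of the two output steps minus one in the max-sequential case, minus two in the min-sequential case); the function names are ours.

```python
def result_max(t1, t2):
    """Number computed in the max-sequential case: (t2 - t1) - 1."""
    return t2 - t1 - 1

def result_min(t1, t2):
    """Number computed in the min-sequential case: (t2 - t1) - 2."""
    return t2 - t1 - 2
```

For example, output spikes at steps 3 and 7 encode the number 3 in the max-sequential case and 2 in the min-sequential case.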

We denote accordingly the family of all sets of numbers generated by SN P systems working in the sequential mode with local scheduled synapses and without delay. In this notation: the generated number is encoded by the first two spikes of the output neuron; the system works in the generative mode; only local scheduled synapses are used; the max/min-sequentiality or max/min-pseudosequentiality mode of the system is indicated; and extended rules may be used. Moreover, we take into account only halting computations.

We follow the conventional practice of ignoring the number zero when comparing the power of two number-generating devices, just as the empty string is ignored in formal language and automata theory when comparing two language-generating devices.

4. Results on Computational Power

In what follows, we describe our results for SSSN P systems with local schedules and without delays by giving theorems for both sequential strategies: max/min-sequentiality and max/min-pseudosequentiality. Due to the scheduled synapses, our systems are deterministic at the system level; nondeterminism is shifted to the level of neurons in both sequential strategies, strong sequential and pseudosequential. By simulating register machines we prove universality for both strategies.

4.1. Max-Sequential SN P Systems with Local Scheduled Synapses without Delays

In this section we provide the universality results of SSSN P systems with local schedules and without delays in the max-sequential strategy. We note that in this section we make use of extended rules, which are spiking rules E/a^c → a^p with p > 1. In our results we use parameters, similar to [19] and other works on universality, specifying the maximum number of rules in each neuron, the maximum number of spikes forgotten, and the maximum number of spikes consumed, respectively.

Theorem 2. SSSN P systems working in the max-sequential strategy with local scheduled synapses and without delays are Turing universal; they generate exactly the family of Turing computable sets of numbers.

Proof. In order to prove Theorem 2, we simulate a register machine M with an SSSN P system Π. Prior to the construction of Π, we give a brief description of the computation: each register r in M is associated with a neuron σr in Π. The local reference neuron is labeled with "*". If the content of register r is the number n, then its corresponding neuron σr stores 2n spikes. If M applies some instruction performing an ADD, SUB, or HALT operation, then a corresponding neuron in Π begins to simulate that operation. We provide modules ADD, SUB, and FIN in order to simulate the three types of instructions of M.
Module ADD. In Figure 1 we give the graphical description of the module simulating an instruction of the form li : (ADD(r), lj, lk).
The module functions as follows: here the reference neuron is the neuron associated with instruction li, so it is labeled with "*". The simulation starts from this reference neuron. Once it applies its rule, it sends one spike to each of two neurons at the corresponding schedule. In step 2, the neuron holding two spikes is active; it applies its extended rule at its schedule to send two spikes each to neuron σr and to a further auxiliary neuron. Neuron σr now has two new spikes added to its previous spikes; if the number of spikes in σr is even, then σr does not apply any rule. It is worth noting that we use here an extended rule instead of a standard rule because this is the strong sequential case, which means at most one neuron can fire at any step. At the next step the auxiliary neuron nondeterministically selects which rule to apply; either neuron σlj or neuron σlk becomes activated depending on this choice. We have the following two cases depending on which rule is applied.
Case I. If the auxiliary neuron applies its first rule, then it fires only once, at its schedule. Applying this rule consumes all 3 of its spikes and sends one spike to each of two neurons; in this way, the single spike of the reference neuron is restored. At the next step only one neuron can apply a rule at its schedule; it sends a spike to neuron σlj in order to begin simulating instruction lj of M.
Case II. If the auxiliary neuron applies its other rule, then it fires twice. By firing first at step 3, it consumes one spike and sends one spike to each of two neurons; in this way, the single spike of the reference neuron is restored. At the next schedule two neurons could apply their rules, but only the one holding two spikes can do so, since the other holds only one spike. Its rule is therefore applied, consuming one spike and sending one spike to each of two neurons. At the following schedule, the neuron with the most spikes applies its forgetting rule. Finally, neuron σlk becomes active to begin simulating instruction lk of M.
The simulation of ADD is complete, as the number of spikes in neuron σr increased by two, hence increasing the content of register r by 1. Afterwards, the next instruction, either lj or lk, is chosen nondeterministically for simulation. We note that neuron σr remains inactive as long as its number of spikes remains even; hence, when simulating ADD, neuron σr does not apply any of its rules. The simulation ends by restoring all neurons to their initial configuration, so the module is ready for another simulation of ADD.
Module SUB. In Figure 2 we give the graphical description of the module SUB simulating an instruction of the form li : (SUB(r), lj, lk). The module functions as follows: at its schedule, the reference neuron is activated and fires, sending one spike to each of its adjacent neurons, among them σr. The SUB module always has two cases depending on the value n in register r: n = 0 (when register r is empty) or n ≥ 1 (when register r is nonempty). We explain both cases separately.
Case I. When n = 0 in register r (the empty case), the module proceeds as follows. Two auxiliary neurons hold 2 and 4 spikes, respectively. Due to max-sequentiality, the neuron with the most spikes is the activated one, so it fires, consuming one spike and sending a spike at its schedule. Due to this spiking, the receiving neuron now has 4 spikes; it is activated and fires one spike each to two neurons, which then hold 3 and 5 spikes, respectively. The neuron with 5 spikes is activated, so it consumes one spike and sends a spike onward; the receiving neuron then has 4 spikes and applies its forgetting rule. The next activated neuron applies its second rule, consuming one spike and passing it on, and at the following schedule a spike is sent back to the two auxiliary neurons, which recover their original 2 and 1 spikes, respectively. Lastly, neuron σlk is activated to begin simulating instruction lk of M.
Case II. When n ≥ 1 in register r, neuron σr holds 2n + 1 spikes (its 2n stored spikes plus the one just received). Neuron σr fires, sending a spike onward at its schedule and consuming 3 spikes; hence σr now has 2(n − 1) spikes, corresponding to subtracting the content of register r by 1. The neuron now holding the most spikes fires, sending a spike to each of two neurons at its schedule. At the next schedule the neuron with the most spikes fires, sending a spike onward; the receiving neuron then has 5 spikes and fires, sending a spike onward in turn. At the following schedule a neuron fires, sending one spike to each of two neurons (restoring the single spike of the reference neuron). Lastly, neuron σlj is activated to simulate the instruction lj of M.
In both scenarios the module SUB is restored to its initial configuration after simulating a SUB instruction. Since a register in M can be associated with two or more SUB instructions, we must check for interference among several SUB modules. We find that there is no interference due to the semantics of the local schedule. Let li and li′ be two SUB instructions acting on register r, and let li be the instruction to be simulated. Neuron σr can have synapses to neurons in the modules associated with both instructions; due to the local schedule of synapses, only the synapses associated with the simulated instruction li are available. Neurons not associated with li therefore do not receive any spikes from σr. In this way, no wrong simulations are performed by Π.
Module FIN: Halting the Computation. To complete the computation, the module FIN is depicted in Figure 3. Assume that register machine M has halted, i.e., its instruction lh has been applied. This means that Π simulates lh and begins to output the result. Recall that register 1 is never decremented, i.e., it is never associated with a SUB instruction; hence neuron σ1 holds 2n spikes, where n is the content of register 1. First, the reference neuron fires, sending a spike to an auxiliary neuron and to the output neuron σout. The auxiliary neuron then fires, sending a spike to σout, after which σout fires, sending the first spike to the environment and a spike to neuron σ1. At the next step neuron σ1 is activated, and over the following steps σ1 repeatedly applies its rule to consume 2 spikes per step. When σ1 has only one spike left, σout spikes, sending a spike to the environment for the second and last time; finally, two steps later, σ1 forgets its last spike. The first and second spikes of σout were sent out at steps t1 and t2, respectively. Hence the result of the computation of Π is exactly the value n of register 1 when register machine M halts. All parameters on the numbers of rules, forgotten spikes, and consumed spikes are satisfied, and an extended rule is used in the ADD module. This completes the proof.

Figure 1: Module ADD: simulating instruction li : (ADD(r), lj, lk).
Figure 2: Module SUB: simulating instruction li : (SUB(r), lj, lk).
Figure 3: Module FIN.

In what follows, we consider the max-pseudosequential strategy, a more realistic spiking approach for neurons in the system: in case of a tie among active neurons, all the tied neurons fire simultaneously.

4.2. Max-Pseudosequential SN P Systems with Local Scheduled Synapses without Delays

In this section we provide the universality results of SSSN P systems with local schedules and without delays in the max-pseudosequential strategy.

Theorem 3. SSSN P systems working in the max-pseudosequential strategy with local scheduled synapses and without delays are Turing universal; they generate exactly the family of Turing computable sets of numbers.

Proof. In order to prove Theorem 3, we simulate a register machine M with an SSSN P system Π working in the max-pseudosequential strategy. Prior to the construction of Π, we give a brief description of the computation: each register r in M is associated with a neuron σr in Π. If register r stores the number n, then neuron σr has 2n spikes. If some instruction li is applied by M, the corresponding neuron begins simulating the instruction. In contrast to Theorem 2, due to max-pseudosequentiality we do not need extended rules; standard rules alone are enough to prove universality.
Module ADD. The template module for the ADD instruction li : (ADD(r), lj, lk) is depicted in Figure 4. The local reference neuron fires at its schedule, sending one spike to each of three neurons. At the next schedule there is a tie between two of these neurons (both hold an equal number of spikes), but due to max-pseudosequentiality both fire simultaneously, and each sends one spike to neuron σr and to an auxiliary neuron. In this way neuron σr receives two spikes, corresponding to an increment of 1 in the value stored in register r. At the next step the auxiliary neuron must nondeterministically decide which rule to apply, so we have the following two cases.
Case I. When the auxiliary neuron applies its first rule, one spike is sent to the next neuron, which fires sending a spike to neuron σlj; this means that system Π will simulate the next instruction lj of M.
Case II. When the auxiliary neuron applies its other rule, it consumes one spike, so two spikes remain. The next neuron receives a spike but cannot fire, since the auxiliary neuron still has the most spikes. The auxiliary neuron then fires, sending a spike onward. Again there is a tie between two neurons, but due to max-pseudosequentiality both fire in a single step; only one of the spikes is received, while the other is wasted, as its neuron has no available synapse at that schedule. This means system Π is ready to simulate the next instruction lk of M.
The simulation of an ADD instruction is correct: the contents of neuron σr are increased by two spikes, followed by nondeterministically activating either neuron σlj or neuron σlk.
Module SUB. To simulate an instruction li : (SUB(r), lj, lk) we have the SUB module in Figure 5. At its schedule the local reference neuron fires, sending one spike to each of three neurons, which at the next step hold 1, 1, and 2 spikes, respectively. Due to max-pseudosequentiality, the two tied neurons with an equal number of spikes fire simultaneously; one neuron receives a single spike, while another receives two, one from each of the tied neurons. At this point we have the following two cases depending on the value n in register r.
Case I. When n = 0, neuron σr holds only the spikes it has just received. The neuron with 4 spikes, the most at this step, applies its forgetting rule. At the next schedule a neuron applies its rule, consuming and producing one spike and sending the spike onward. In this way the spikes in σr return to indicating n = 0 in register r. Finally, neuron σlk fires, and Π so begins to simulate the next instruction lk.
Case II. When n ≥ 1, neuron σr has 2n + 1 spikes. Neuron σr fires at its schedule, consuming 3 spikes (to simulate the decrement of r by 1) and sending a spike onward. The receiving neuron then has 5 spikes and fires, sending a spike to the next neuron. At [5,6) that neuron fires, sending one spike each to two neurons (restoring the single spike of the reference neuron). Hence, system Π will simulate the next instruction lj.
Due to the semantics of the local reference neuron, there is no interference with any other SUB module. The simulation of the SUB instruction is correct: if register r is nonempty, then the spikes in neuron σr are decreased by 2, followed by activating neuron σlj; if register r is empty, then the spikes in neuron σr are not decreased and neuron σlk is activated. It is to be noted that, with slightly higher complexity, the SUB module in Figure 2 for max-sequentiality can also be used in max-pseudosequential systems.
Module FIN: Halting the Computation. The module FIN in Figure 3 can also be used in max-pseudosequential systems to produce the output of Π.
It is clear that all three modules use at most 3 rules in each neuron, consume at most 5 spikes, and forget at most 4 spikes. The synapses of each module are synchronized with respect to their related local reference neurons. We also note that no extended rules are needed under max-pseudosequentiality. The parameters of the theorem are satisfied, thus completing the proof.

Figure 4: Module ADD: simulating instruction li : (ADD(r), lj, lk).
Figure 5: Module SUB: simulating instruction li : (SUB(r), lj, lk).
4.3. Min-Sequential SN P Systems with Local Scheduled Synapses without Delays

In this section we provide the universality results of SSSN P systems with local schedules and without delays in the min-sequential strategy. We note that, due to min-sequentiality, the ADD module does not make use of extended rules, in contrast to Theorem 2; an extended rule is used only in the FIN module. In our results we use parameters, similar to [19] and other works on universality, specifying the maximum number of rules in each neuron, the maximum number of spikes forgotten, and the maximum number of spikes consumed, respectively.

Theorem 4. SSSN P systems working in the min-sequential strategy with local scheduled synapses and without delays are Turing universal; they generate exactly the family of Turing computable sets of numbers.

Proof. In order to prove Theorem 4, we simulate a register machine M with an SSSN P system Π working in the min-sequential strategy. Prior to the construction of Π, we give a brief description of the computation: each register r in M is associated with a neuron σr in Π. The local reference neuron is labeled with "*". If the content of register r is the number n, then its corresponding neuron σr stores 2n spikes. If M applies some instruction performing an ADD, SUB, or HALT operation, a corresponding neuron in Π begins to simulate that operation. We provide modules ADD, SUB, and FIN in order to simulate the three types of instructions of M.
Module ADD. In Figure 6 we give the graphical description of the module simulating an instruction of the form li : (ADD(r), lj, lk).
The module functions as follows: the simulation starts from the reference neuron. Once it applies its rule, it sends one spike to each of three neurons, among them σr, at its schedule. It is worth noting that, in order to keep neuron σr inactive, its content must be even; adding one spike to σr means that it could now fire, but due to min-sequentiality another neuron, holding fewer spikes, fires next at step 2. That neuron applies its rule at its schedule to send a spike to each of two neurons, among them σr. Neuron σr now has one more spike added to its previous spikes; as its number of spikes is now even, it will not apply any rule. At the next step an auxiliary neuron nondeterministically selects which rule to apply; either neuron σlj or neuron σlk becomes activated depending on this choice. We have the following two cases depending on which rule is applied.
Case I. If the auxiliary neuron applies its first rule, then it fires only once, at its schedule. Applying this rule consumes two of its spikes and sends one spike onward; in this way, the single spike of the reference neuron is restored. At the next step only one neuron can apply a rule at its schedule; it sends a spike to neuron σlj in order to begin simulating instruction lj of M.
Case II. If the auxiliary neuron applies its other rule, then it fires twice. By firing first at step 3, it consumes and sends one spike to the next neuron. At the following schedule two neurons are active, but only the one holding two spikes can apply its rule, since the other holds three spikes; its rule is applied and one spike is sent onward. Next, two neurons are again active, but only the one with fewer spikes applies its rule; the spike it fires is not received by any neuron, since there is no available synapse at that schedule. A further neuron then fires, sending a spike onward, and finally neuron σlk becomes active to begin simulating instruction lk of M.
The simulation of ADD is complete, as the number of spikes in neuron σr is increased by two, hence increasing the content of register r by 1. Afterwards, the next instruction, either lj or lk, is chosen nondeterministically for simulation. Neuron σr remains inactive as long as its number of spikes remains even; hence, when simulating ADD, neuron σr does not apply any of its rules. The simulation ends by restoring all neurons to their initial configuration, so the module is ready for another simulation of ADD.
Module SUB. In Figure 7 we give the graphical description of the module SUB simulating an instruction of the form li : (SUB(r), lj, lk). The module functions as follows: at its schedule, the reference neuron is activated and fires, sending one spike to each of two neurons. Due to min-sequentiality, the neuron with fewer spikes is the activated one, so it fires and sends a spike at its schedule. Due to this spiking, two neurons now have 3 and 1 spikes, respectively; the latter is activated and fires one spike onward at its schedule.
The SUB module always has two cases depending on the value n in register r: n = 0 (when register r is empty) or n ≥ 1 (when register r is nonempty). We explain both cases separately.
Case I. When in register , neuron has spikes. Neuron has 4 spikes. Neuron , now activated with fewer spikes, applies its second rule, consuming one spike and sending one to neuron at . At neuron fires sending a spike to neuron . At neuron fires sending a spike to and . Neurons and are thus restored to their original contents of 2 and 1 spikes, respectively. Lastly, at neuron is activated to begin simulating instruction of .
Case II. When in register , neuron has spikes. Neuron has 4 spikes, the fewest at , so it applies its forgetting rule. Neuron fires sending a spike to neuron at and consuming 3 spikes. Hence, neuron now has spikes, corresponding to subtracting the content of register by 1. At neuron fires sending one spike to each of neurons (restoring the single spike of ) and . Lastly, at neuron is activated to simulate the instruction of .
In both scenarios the module SUB is restored to its initial configuration after simulating a SUB instruction. There is no interference in this module due to the semantics of local scheduling.
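The two cases of the SUB module amount to a conditional decrement with a zero test, which can be sketched as follows (an illustrative Python model of the spike bookkeeping, not the paper's formal construction; names are hypothetical):

```python
def sub_instruction(spikes, r, lj, lk):
    """Register r nonempty (>= 2 spikes, i.e. value n >= 1): remove two
    spikes (decrement by 1) and activate lj. Register r empty: leave the
    spike count unchanged and activate lk, as in Cases I and II."""
    if spikes[r] >= 2:
        spikes[r] -= 2
        return lj
    return lk

spikes = {"r": 4}                                   # register r holds 2
print(sub_instruction(spikes, "r", "lj", "lk"))     # nonempty case -> lj
print(sub_instruction({"r": 0}, "r", "lj", "lk"))   # zero test -> lk
```

The proof's schedule-by-schedule argument shows that the module realizes exactly this conditional behavior while restoring its auxiliary neurons.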
Module FIN: Halting the Computation. To complete the computation, the module FIN is depicted in Figure 8. Assume that register machine has halted; i.e., its instruction has been applied. This means that simulates and begins to output the result. At the reference neuron fires, sending a spike to the output neuron . At neuron fires, sending the first spike (hence ) out of three spikes to the environment and to neuron . At the next step neuron is activated since it has spikes. For the next steps, neuron continues to apply its extended rule, consuming and producing 2 spikes. At step, neuron has only one spike, so by applying its second rule neuron spikes, sending one spike to neuron (hence activating it for the second time with an odd number of spikes). At step, neuron spikes (hence ), sending a spike to the environment for the second and last time. The first and second spikes of neuron were sent out at steps and , respectively. Hence the result of the computation of is , exactly the value in register when register machine halts. All parameters for , , and are satisfied, and an extended rule is used in the FIN module. This completes the proof.
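Abstracting away the exact schedules, the effect of the FIN loop can be sketched as a toy model (Python; it assumes only the "register value n is encoded as 2n spikes" correspondence from the proof, and the function name and per-step consumption are illustrative):

```python
def fin_module(spikes_in_neuron1):
    """Toy model of the FIN loop: starting from 2n spikes encoding n,
    the output machinery consumes two spikes per step; the number of
    loop steps -- the gap between the first and second spikes sent to
    the environment -- equals n, the result of the computation."""
    steps = 0
    while spikes_in_neuron1 >= 2:
        spikes_in_neuron1 -= 2
        steps += 1
    return steps

print(fin_module(10))  # register 1 held 5, so the system outputs 5
```

This makes explicit why the interval between the two environment spikes reproduces the value stored in register 1 at halting.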

Figure 6: Module ADD: simulating instruction .
Figure 7: Module SUB: simulating instruction .
Figure 8: Module FIN.
4.4. Min-Pseudosequential SN P Systems with Local Scheduled Synapses without Delays

In this section we provide the universality results of SSSN P systems with local schedules and without delays in the min-pseudosequential strategy, abbreviated as systems.

Theorem 5. .

Proof. In order to prove the theorem, we simulate a register machine with an system . Prior to the construction of , we give a brief description of the computation as follows: a neuron in is associated with each register in . If register stores the number , then neuron has spikes. If some instruction is applied by , this means that neuron begins simulating the instruction.
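The simulation just described can be summarized by a minimal register-machine interpreter over the spike encoding (a Python sketch under the "2n spikes encode n" correspondence; the program format, labels, and register names are illustrative, not from the paper):

```python
import random

def run(program, start, registers):
    """Execute a register-machine program over the spike encoding:
    a register holding n is modeled as a neuron holding 2n spikes."""
    spikes = {r: 2 * v for r, v in registers.items()}
    label = start
    while label != "halt":
        op, r, lj, lk = program[label]
        if op == "ADD":
            spikes[r] += 2                 # increment: add two spikes
            label = random.choice([lj, lk])  # nondeterministic next label
        elif spikes[r] >= 2:               # SUB on a nonempty register
            spikes[r] -= 2
            label = lj
        else:                              # SUB on an empty register
            label = lk
    return spikes["r1"] // 2               # decode the result from register 1

# A deterministic toy program (lj == lk in each ADD) computing 3:
prog = {
    "l0": ("ADD", "r1", "l1", "l1"),
    "l1": ("ADD", "r1", "l2", "l2"),
    "l2": ("ADD", "r1", "halt", "halt"),
}
print(run(prog, "l0", {"r1": 0}))  # -> 3
```

The modules constructed below implement exactly these ADD and SUB behaviors neuron by neuron, with local schedules enforcing the ordering.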
Module ADD. The template module for the ADD instruction is depicted in Figure 9. The module functions as follows: the simulation starts from the reference neuron . Once applies its rule, it sends one spike to each of neurons , , and at schedule . Due to min-sequentiality, neuron will fire next at step 2. Neuron applies its rule at schedule to send a spike to each of neurons and . The simulation for neuron is now complete, with two spikes added to its content. At the next step neuron nondeterministically selects which rule to apply. Either neuron or becomes activated depending on the rule selection of neuron . We have the following two cases depending on the rule applied by neuron .
Case I. If neuron applies its rule , then neuron fires only once, at schedule . Applying this rule consumes two spikes of neuron and sends one spike to . In this way, the single spike of neuron is restored. At the next step is the only neuron that can apply a rule at schedule . A spike is sent from neuron to neuron in order to begin simulating of .
Case II. If neuron applies its rule , then neuron fires twice. By firing first at step 3, neuron consumes and sends one spike to neuron . At schedule both neurons and are active, but only neuron can apply its rule since it has two spikes while neuron has three spikes. Hence at , rule of neuron is applied and one spike is sent to neuron . At both neurons and are active with an equal number of spikes, and due to min-pseudosequentiality both fire simultaneously. The spike fired by neuron is not received by any neuron, since neuron has no synapse at schedule , while the spike fired by is received by . At neuron becomes active to begin simulating of .
The simulation of ADD is complete, as the number of spikes in neuron is increased by two, hence increasing the content of register by 1. Afterwards, the next instruction, either or , is chosen nondeterministically for simulation. Neuron remains inactive as long as its content remains even; hence, when simulating ADD, neuron does not apply any of its rules. The simulation ends by restoring all neurons to their initial configuration, so the module is ready for another simulation of ADD.
Module SUB. To simulate instruction we have the SUB module in Figure 10. At schedule , the local reference neuron fires, sending one spike to each of neurons , , , and . Due to min-pseudosequentiality, both neurons and fire at . Neuron receives two spikes, one each from neurons and . At this point we have the following two cases, depending on the value in register .
Case I. When , this means neuron has spikes. Neuron has 4 spikes, the most at , so neuron fires and sends a spike to neuron . At neuron has 5 spikes and fires sending a spike to neuron . At neuron fires sending a spike to each of neurons (returning the initial spike of ) and . Hence, system will simulate the next instruction .
Case II. When , this means that neuron has spikes. Neuron has 4 spikes, the fewest at , so it applies its forgetting rule at . Neuron fires at , consuming 3 spikes (to simulate the decrement of by 1) and sending a spike to neuron . At neuron fires sending a spike to each of neurons (returning the initial spike of ) and . Hence, system will simulate the next instruction .
Due to the local reference neuron, there is no interference between SUB modules. The simulation of the SUB instruction is correct: if register is nonempty, then the spikes in neuron are decreased by 2, followed by activating neuron ; if register is empty, then the spikes in neuron are not decreased and neuron is activated. Note that the SUB module in Figure 7 for min-sequentiality can also be used in SSN systems.
Module FIN: Halting the Computation. The module FIN in Figure 8 can also be used in SSN systems to produce the output of .
It is clear that all three modules use at most 3 rules in each neuron, consume at most 5 spikes, and forget at most 4 spikes. The synapses of each module are synchronized with respect to their related local reference neurons. We also note that no extended rules are needed in the ADD module due to min-sequentiality, but one is used in the FIN module. The parameters of the theorem are satisfied, thus completing the proof.

Figure 9: Module ADD: simulating instruction .
Figure 10: Module SUB: simulating instruction .

5. Final Remarks

In this work, the computational power of sequential SN P systems with local scheduled synapses without delay is investigated. Results show that such systems working under both the max/min-sequentiality and max/min-pseudosequentiality strategies are computationally universal, using standard rules together with extended rules only where standard rules do not suffice. In particular, we showed for both strategies that universality is achieved using at most 3 rules per neuron, consuming at most 5 spikes, and forgetting at most 4 spikes. We further note that an extended rule is used only in the ADD module under the max-sequential strategy and in the FIN module under both the min-sequential and min-pseudosequential strategies. Moreover, the strong sequential strategies have slightly higher complexity than the pseudosequential ones.

Our future work is to prove the universality of our variants with global scheduled synapses. Open problems include reducing the complexity of our systems and proving universality without forgetting rules and extended rules.

Another direction that might be interesting is to determine which classes of problems or languages this kind of SN P system variant is capable of solving or deciding, thereby comparing its capability in recognizing languages and solving problems with that of other variants with respect to several parameters and ingredients of SN P systems.

Data Availability

The paper contains only theoretical derivations; no data was used in the research.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by National Natural Science Foundation of China (61502186) and China Postdoctoral Science Foundation (2016M592335).

References

  1. M. Ionescu, G. Păun, and T. Yokomori, “Spiking neural P systems,” Fundamenta Informaticae, vol. 71, no. 2-3, pp. 279–308, 2006.
  2. H. Chen, R. Freund, M. Ionescu, G. Păun, and M. J. Pérez-Jiménez, “On string languages generated by spiking neural P systems,” Fundamenta Informaticae, vol. 75, no. 1-4, pp. 141–162, 2007.
  3. H. Chen, M. Ionescu, T.-O. Ishdorj, A. Păun, G. Păun, and M. J. Pérez-Jiménez, “Spiking neural P systems with extended rules: universality and languages,” Natural Computing, vol. 7, no. 2, pp. 147–166, 2008.
  4. A. Păun and G. Păun, “Small universal spiking neural P systems,” BioSystems, vol. 90, no. 1, pp. 48–60, 2007.
  5. X. Zhang, L. Pan, and A. Păun, “On the universality of axon P systems,” IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 11, pp. 2816–2829, 2015.
  6. T. Song, L. Pan, and G. Păun, “Spiking neural P systems with rules on synapses,” Theoretical Computer Science, vol. 529, pp. 82–95, 2014.
  7. T. Song and L. Pan, “Spiking neural P systems with rules on synapses working in maximum spikes consumption strategy,” IEEE Transactions on NanoBioscience, vol. 14, no. 1, pp. 38–44, 2015.
  8. T. Song and L. Pan, “Spiking neural P systems with rules on synapses working in maximum spiking strategy,” IEEE Transactions on NanoBioscience, vol. 14, no. 4, pp. 465–477, 2015.
  9. J. Wang, H. J. Hoogeboom, L. Pan, G. Păun, and M. J. Pérez-Jiménez, “Spiking neural P systems with weights,” Neural Computation, vol. 22, no. 10, pp. 2615–2646, 2010.
  10. L. Pan, G. Păun, and M. J. Pérez-Jiménez, “Spiking neural P systems with neuron division and budding,” Science China Information Sciences, vol. 54, no. 8, pp. 1596–1607, 2011.
  11. F. G. C. Cabarle, H. N. Adorna, M. J. Pérez-Jiménez, and T. Song, “Spiking neural P systems with structural plasticity,” Neural Computing and Applications, vol. 26, no. 8, pp. 1905–1917, 2015.
  12. T. Wu, Z. Zhang, G. Păun, and L. Pan, “Cell-like spiking neural P systems,” Theoretical Computer Science, vol. 623, pp. 180–189, 2016.
  13. G. Zhang, M. J. Pérez-Jiménez, and M. Gheorghe, Real-life Applications with Membrane Computing, Springer, Cham, Switzerland, 2017.
  14. H. Peng, J. Wang, M. J. Pérez-Jiménez, H. Wang, J. Shao, and T. Wang, “Fuzzy reasoning spiking neural P system for fault diagnosis,” Information Sciences, vol. 235, pp. 106–116, 2013.
  15. T. Wang, G. Zhang, J. Zhao, Z. He, J. Wang, and M. J. Pérez-Jiménez, “Fault diagnosis of electric power systems based on fuzzy reasoning spiking neural P systems,” IEEE Transactions on Power Systems, vol. 30, no. 3, pp. 1182–1194, 2015.
  16. J. Wang, P. Shi, H. Peng, M. J. Pérez-Jiménez, and T. Wang, “Weighted fuzzy spiking neural P systems,” IEEE Transactions on Fuzzy Systems, vol. 21, no. 2, pp. 209–220, 2013.
  17. G. Zhang, H. Rong, F. Neri, and M. J. Pérez-Jiménez, “An optimization spiking neural P system for approximately solving combinatorial optimization problems,” International Journal of Neural Systems, vol. 24, no. 5, Article ID 1440006, 2014.
  18. H. Peng, J. Yang, J. Wang et al., “Spiking neural P systems with multiple channels,” Neural Networks, vol. 95, pp. 66–71, 2017.
  19. F. G. C. Cabarle, H. N. Adorna, M. Jiang, and X. Zeng, “Spiking neural P systems with scheduled synapses,” IEEE Transactions on NanoBioscience, vol. 16, no. 8, pp. 792–801, 2017.
  20. H. Peng and J. Wang, “Coupled neural P systems,” IEEE Transactions on Neural Networks and Learning Systems, pp. 1–11, 2018.
  21. H. Peng, J. Wang, M. J. Pérez-Jiménez, and A. Riscos-Núñez, “Dynamic threshold neural P systems,” Knowledge-Based Systems, vol. 163, pp. 875–884, 2019.
  22. T. Song, A. Rodríguez-Patón, P. Zheng, and X. Zeng, “Spiking neural P systems with colored spikes,” IEEE Transactions on Cognitive and Developmental Systems, vol. 10, no. 4, pp. 1106–1115, 2018.
  23. R. Barbuti, A. Maggiolo-Schettini, P. Milazzo, and S. Tini, “Compositional semantics of spiking neural P systems,” Journal of Logic and Algebraic Programming, vol. 79, no. 6, pp. 304–316, 2010.
  24. J. P. A. Carandang, J. M. B. Villaflores, F. G. C. Cabarle, H. N. Adorna, and M. A. Martínez-del-Amor, “CuSNP: Spiking neural P systems simulators in CUDA,” Romanian Journal of Information Science and Technology, vol. 20, no. 1, pp. 57–70, 2017.
  25. O. H. Ibarra, S. Woodworth, F. Yu, and A. Păun, “On spiking neural P systems and partially blind counter machines,” Natural Computing, vol. 7, no. 1, pp. 3–19, 2008.
  26. M. García-Arnau, D. Pérez, A. Rodríguez-Patón, and P. Sosík, “Spiking neural P systems: stronger normal forms,” International Journal of Unconventional Computing, vol. 5, no. 5, pp. 411–425, 2007.
  27. O. H. Ibarra, A. Păun, and A. Rodríguez-Patón, “Sequential SNP systems based on min/max spike number,” Theoretical Computer Science, vol. 410, no. 30-32, pp. 2982–2991, 2009.
  28. K. Jiang, T. Song, and L. Pan, “Universality of sequential spiking neural P systems based on minimum spike number,” Theoretical Computer Science, vol. 499, pp. 88–97, 2013.
  29. F. G. C. Cabarle, H. N. Adorna, and M. J. Pérez-Jiménez, “Sequential spiking neural P systems with structural plasticity based on max/min spike number,” Neural Computing and Applications, vol. 27, no. 5, pp. 1337–1347, 2016.
  30. G. Păun, Membrane Computing: An Introduction, Springer-Verlag, Berlin, Germany, 2012.
  31. G. Păun, G. Rozenberg, and A. Salomaa, Eds., The Oxford Handbook of Membrane Computing, Oxford University Press, New York, NY, USA, 2010.
  32. M. Minsky, Computation: Finite and Infinite Machines, Prentice-Hall, Englewood Cliffs, NJ, USA, 1967.