Abstract

Spiking neural P systems with scheduled synapses are a class of distributed and parallel computational models motivated by the structural dynamism of biological synapses, incorporating ideas from nonstatic (i.e., dynamic) graphs and networks. In this work, we consider the family of spiking neural P systems with scheduled synapses working in the sequential mode: at each step, the neuron(s) with the maximum/minimum number of spikes among the neurons that can spike will fire. The computational power of spiking neural P systems with scheduled synapses working in the sequential mode is investigated; specifically, the universality (Turing equivalence) of such systems is obtained.

1. Introduction

Spiking neural P systems (abbreviated as SN P systems) were first introduced in [1] as a class of parallel and distributed neural-like computational models, inspired by the fact that spiking neurons communicate with each other through electrical impulses. An SN P system can be viewed as a directed graph whose nodes are computing units called neurons and whose arcs represent synapses. Each neuron contains copies of a single object type called the spike, together with a set of spiking rules, which produce and send one or more spikes from the source neuron to every neuron connected to it by a synapse, and forgetting rules, which remove spikes from the neuron.

Many efforts have been made to investigate the theoretical and practical aspects of SN P systems since the model was introduced. The computational power of SN P systems has been investigated for different types of computing devices, e.g., number generating/accepting devices [1], language generators [2, 3], and function computing devices [4]. By abstracting computing ideas from the human brain and biological neurons, many variants of SN P systems have been introduced, such as axon P systems [5], SN P systems with rules on synapses [6–8], SN P systems with weights [9], SN P systems with neuron division and budding [10], SN P systems with structural plasticity [11], cell-like SN P systems [12], fuzzy reasoning SN P systems [13–16], probabilistic SN P systems [17], SN P systems with multiple channels [18], SN P systems with scheduled synapses (SSN P systems for short) [19], coupled neural P systems [20], dynamic threshold neural P systems [21], SN P systems with colored spikes [22], and SN P systems simulators [23, 24].

Although biological processes in living organisms happen in parallel, they are not synchronized by a universal clock as assumed in SN P systems. Introduced in [25], sequential SN P systems are a class of computing devices that play the role of a bridge between spiking neural networks and membrane computing [26]. Two types of sequentiality are considered in sequential SN P systems: general sequentiality [25] and sequentiality induced by the number of spikes [27–29]. The first case is the classical purely sequential model of its family: such systems are completely (purely) sequential with respect to neurons, in that at each time unit one and only one neuron among all active neurons is nondeterministically chosen to fire. The second case builds on the first, with sequentiality induced on the basis of the number of spikes (either maximum or minimum) present in the active neurons; on this basis there are two further types: maximal sequential SN P systems and minimal sequential SN P systems.

In this work, we consider SN P systems with scheduled synapses (SSN P systems), which are inspired and motivated by the structural dynamism of biological synapses, while incorporating ideas from nonstatic (i.e., dynamic) graphs and networks from mathematics. Synapses in SSN P systems are only available at a specific schedule or duration. The schedules are of two types, local and global scheduling, named after the type of reference neuron used: local reference neurons and global reference neurons. In the first case the synapses are called local scheduled synapses and are scheduled locally with respect to their local reference neurons; there can be multiple reference neurons within a system, but each synapse is associated with exactly one reference neuron. In the second case the synapses are called global-scheduled synapses and are scheduled globally with respect to a single shared global reference neuron. In particular, we consider SN P systems with local scheduled synapses working in the sequential mode without delay (SSSN P systems, for short), where sequentiality is induced by the maximum/minimum (max/min, for short) number of spikes. The computational power of SSSN P systems working under the strong sequentiality or pseudosequentiality strategies is investigated. Specifically, SSSN P systems working under the max/min-sequentiality (resp., max/min-pseudosequentiality) strategy are proved to be Turing universal.

The paper is organized as follows: Section 2 provides definitions of the concepts necessary for understanding the paper. In Section 3 we introduce our variant of SN P systems, namely, sequential SN P systems with local scheduled synapses working in the max/min-sequential strategy without delay (SSSN P systems). We present our universality results in Section 4 and give some final remarks in Section 5.

2. Preliminaries

In this section, various notions and notations from formal language theory and computability theory that we shall use in the remainder of the paper are explained. The reader is also assumed to have a basic familiarity with membrane computing. Both the basics of membrane computing and its formal language and computability theory prerequisites are covered in sufficient detail in [30] and in the handbook [31].

We denote by $V$ a nonempty finite set of symbols called an alphabet, and $V^*$ is the set of all strings of symbols over $V$, including $\lambda$, the empty string. The set of nonempty strings over $V$ is denoted by $V^+ = V^* \setminus \{\lambda\}$. Each subset of $V^*$ is called a language over $V$. A language $L$ is called $\lambda$-free if it does not contain the empty string (thus $L$ is a subset of $V^+$). A collection of languages is called a family of languages. We define regular expressions over an alphabet $V$, and the languages they describe, inductively as follows: $\lambda$ and every symbol $a \in V$ are regular expressions; if $E_1$ and $E_2$ are regular expressions over $V$, then $(E_1)(E_2)$, $(E_1) \cup (E_2)$, and $(E_1)^+$ are also regular expressions. We denote by $L(E)$ the language described by the regular expression $E$. In particular, we have $L(\lambda) = \{\lambda\}$, $L(a) = \{a\}$ for $a \in V$, $L((E_1)(E_2)) = L(E_1)L(E_2)$, $L((E_1) \cup (E_2)) = L(E_1) \cup L(E_2)$, and $L((E_1)^+) = (L(E_1))^+$.

Our universality results are proved by showing that our proposed models can simulate a well-known universal computing model, the register machine [1, 32].

A register machine is a construct of the form $M = (m, H, l_0, l_h, I)$, where (i) $m$ is the number of registers; (ii) $H$ is the set of instruction labels; (iii) $l_0$ is the start label (which labels an ADD instruction); (iv) $l_h$ is the halt label (which is assigned to the instruction HALT); (v) $I$ is the set of instructions, of the forms $l_i : (\mathrm{ADD}(r), l_j, l_k)$ (add 1 to register $r$ and then nondeterministically go to one of the instructions with labels $l_j$, $l_k$), $l_i : (\mathrm{SUB}(r), l_j, l_k)$ (if register $r$ is nonempty, subtract 1 from it and go to the instruction with label $l_j$; otherwise go to the instruction with label $l_k$), and $l_h : \mathrm{HALT}$ (the halt instruction).

A register machine can accept or generate a number $n$. In this work, we use register machines that generate numbers; we denote the set of all numbers computed/generated by $M$ as $N(M)$. Since register machines compute all sets of numbers that are Turing computable, we have $NRM = NRE$, where $NRM$ is the family of sets of numbers generated by register machines and $NRE$ denotes the family of sets of numbers recognized by Turing machines.
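To make the simulation target concrete, here is a minimal sketch of a generative register machine in Python, assuming the standard instruction forms above; the function name, the program encoding, and the example program are our own illustrative choices, not notation from the paper.

```python
import random

def run_register_machine(program, m, l0='l0', lh='lh'):
    """Run a (nondeterministic) generative register machine; return the
    content of register 1 when the halt label is reached."""
    regs = [0] * (m + 1)              # registers 1..m (index 0 unused)
    label = l0
    while label != lh:
        op, r, lj, lk = program[label]
        if op == 'ADD':               # add 1, jump nondeterministically
            regs[r] += 1
            label = random.choice([lj, lk])
        elif regs[r] > 0:             # 'SUB' on a nonempty register
            regs[r] -= 1
            label = lj
        else:                         # 'SUB' on an empty register
            label = lk
    return regs[1]

# Example: nondeterministically generates any n >= 1 in register 1.
prog = {'l0': ('ADD', 1, 'l0', 'lh')}
print(run_register_machine(prog, m=1))
```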

3. Sequential SN P Systems with Local Scheduled Synapses Working in Max/Min-Sequential Strategy without Delay

We give here the definition of sequential SN P systems with local scheduled synapses without delay.

Definition 1. A sequential SN P system with local scheduled synapses without delay (SSSN P system, for short) of degree $m \ge 1$ is a construct of the form

$$\Pi = (O, \sigma_1, \ldots, \sigma_m, syn, ref, out),$$

where
(a) $O = \{a\}$ is the singleton alphabet ($a$ is called spike);
(b) $\sigma_1, \ldots, \sigma_m$ are neurons of the form $\sigma_i = (n_i, R_i)$, $1 \le i \le m$, where
(1) $n_i \ge 0$ is the initial number of spikes contained in $\sigma_i$;
(2) $R_i$ is a finite set of rules of the following forms:
(i) $E/a^c \rightarrow a^p$, where $E$ is a regular expression over $O$, $c \ge 1$, $p \ge 1$, and $c \ge p$;
(ii) $a^s \rightarrow \lambda$ for some $s \ge 1$, with the restriction that $a^s \notin L(E)$ for any rule $E/a^c \rightarrow a^p$ of type (i) from $R_i$;
(c) $syn \subseteq \{1, \ldots, m\} \times \{1, \ldots, m\} \times Sched$ is the set of synapses among neurons, where $Sched$ is the set of schedules of the form $[t_s, t_e)$ ($t_s < t_e$), no neuron has a synapse to itself, and each synapse has exactly one schedule;
(d) $ref \subseteq \mathcal{P}(syn) \times \{1, \ldots, m\}$ is the set of synapse references, where $\mathcal{P}(syn)$ is the power set of the set $syn$ and $\{1, \ldots, m\}$ is the set of reference neurons; for any two pairs $(S_1, i), (S_2, j) \in ref$, we have $S_1 \cap S_2 = \emptyset$ and $i \ne j$;
(e) $out \in \{1, \ldots, m\}$ is the label of the output neuron.

The semantics of spiking rule application is as follows. Let $\sigma_i$ be a neuron containing $b$ spikes and having a rule $E/a^c \rightarrow a^p \in R_i$. If $a^b \in L(E)$ and $b \ge c$, then the rule can be applied: $\sigma_i$ consumes $c$ spikes to send $p$ spike(s) to its adjacent neurons, so $b - c$ spikes remain. If $p = 1$, the rule serves as a standard rule, and otherwise as an extended rule. In general, a neuron can have two or more spiking rules with regular expressions $E_1$ and $E_2$ such that $L(E_1) \cap L(E_2) \ne \emptyset$; in this case, if both regular expressions are satisfied, one of the rules is nondeterministically chosen to be applied. Additionally, our system is sequential and synchronized: at each step, among all the active neurons, the one with the maximum/minimum number of spikes applies its rule. Rule application is sequential both at the level of neurons and at the system level; that is, at each scheduled step a spiking neuron applies exactly one rule, and one and only one neuron (holding the maximum/minimum number of spikes) fires among all the active neurons. As a convention, if a rule $E/a^c \rightarrow a^p$ has $L(E) = \{a^c\}$, then we write it as $a^c \rightarrow a^p$ instead. Moreover, the synapse from the output neuron, labeled with $out$, is always assumed available.
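As a hedged illustration of this rule semantics, the sketch below models a rule $E/a^c \rightarrow a^p$ as a triple $(E, c, p)$, with $E$ a Python regular expression over the single symbol 'a'; the names and the regex encoding are our own, not the paper's notation.

```python
import re

def applicable(rule, b):
    """A rule E/a^c -> a^p is enabled in a neuron holding b spikes
    iff a^b is in L(E) and b >= c."""
    E, c, p = rule
    return b >= c and re.fullmatch(E, 'a' * b) is not None

def apply_rule(rule, b):
    """Consume c spikes (b - c remain) and emit p spikes
    (p = 1: standard rule; p >= 2: extended rule)."""
    E, c, p = rule
    assert applicable(rule, b)
    return b - c, p

rule = (r'a(aa)+', 3, 1)      # a(aa)+/a^3 -> a, enabled on odd b >= 3
print(applicable(rule, 5))    # True
print(apply_rule(rule, 5))    # (2, 1)
```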

The semantics of synapse schedules is as follows. Synapses are scheduled according to the activation of their reference neuron, and a scheduled synapse takes effect immediately upon the activation of its reference neuron. The reference neuron is indicated with the symbol “$*$”. If at step $t$ a reference neuron becomes activated and an associated synapse is scheduled with $[t_s, t_e)$, then that synapse is active only from step $t + t_s$ up to (but not including) step $t + t_e$. If at some step a rule is applied by a neuron but the corresponding synapse is not scheduled at that time, then the spike is wasted, as no adjacent neuron receives it.
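Under the reading above (a schedule $[t_s, t_e)$ interpreted relative to the step $t$ at which the reference neuron is activated), synapse availability can be sketched as follows; the function name and the half-open window convention are our assumptions.

```python
# A synapse scheduled with [ts, te) relative to its reference neuron,
# activated at step t, is assumed open exactly on steps t+ts, ..., t+te-1.
def synapse_open(schedule, ref_step, step):
    ts, te = schedule
    return ref_step + ts <= step < ref_step + te

# Reference neuron fires at step 4; synapse scheduled with [1, 3).
print([s for s in range(4, 9) if synapse_open((1, 3), 4, s)])  # [5, 6]
```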

The semantics of maximum/minimum sequentiality is as follows: if at any step there is more than one active neuron (neurons with enabled rules are called active), then only the neuron(s) containing the maximum/minimum number of spikes among the currently active neurons will be able to fire. If there is a tie among two or more active neurons (all holding an equal number of spikes), then two different strategies are considered in SSSN P systems: strong sequentiality and pseudosequentiality. In the first strategy, one and only one active neuron is nondeterministically chosen among all the tied neurons to fire, while in the second strategy all the tied neurons fire simultaneously. For example, assume that, in the max-sequential case, at a given step there are four active neurons, $\sigma_1$, $\sigma_2$, $\sigma_3$, and $\sigma_4$, storing 4, 5, 5, and 1 spikes, respectively. There is a tie between neurons $\sigma_2$ and $\sigma_3$; in the strong max-sequential case either $\sigma_2$ or $\sigma_3$ is nondeterministically chosen to fire, but in the max-pseudosequential case both neurons fire simultaneously.
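The firing-selection step of the four strategies can be summarized by the following sketch, which reproduces the four-neuron example above; the function and neuron names are illustrative.

```python
import random

def select_firing(active, mode='max', strategy='strong'):
    """active: dict mapping each active neuron to its spike count.
    Return the neuron(s) allowed to fire at this step."""
    if not active:
        return []
    extreme = max(active.values()) if mode == 'max' else min(active.values())
    tied = [n for n, spikes in active.items() if spikes == extreme]
    if strategy == 'strong':          # strong sequentiality: one neuron fires
        return [random.choice(tied)]
    return tied                       # pseudosequentiality: all tied fire

active = {'s1': 4, 's2': 5, 's3': 5, 's4': 1}
print(select_firing(active, 'max', 'strong'))  # ['s2'] or ['s3']
print(select_firing(active, 'max', 'pseudo'))  # ['s2', 's3']
```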

A configuration of the system at a given step is the distribution of spikes among the neurons together with the status of each neuron, whether closed or open. The initial configuration is given by the initial number of spikes $n_i$ in each neuron $\sigma_i$, with all neurons open.

Starting from the fixed initial configuration (the initial distribution of spikes among neurons) and applying the rules in a synchronized manner (a global clock is assumed), the system evolves; applying the rules according to the above description, transitions among configurations can be defined.

By applying the rules in this way, the system passes from one configuration to another; such a step is known as a transition. Given two configurations $C_1$ and $C_2$ of $\Pi$, a direct transition between them is denoted by $C_1 \Rightarrow C_2$, while the reflexive and transitive closure of this relation is denoted by $\Rightarrow^*$. A transition is sequential provided that, among all candidate neurons, one holding the maximum/minimum number of spikes applies its rule.

A computation is a sequence of transitions starting from the initial configuration. A computation is said to be successful, or halting, if it reaches a configuration where no further rules can be applied; moreover, all neurons must be restored to their valid states.

There are various ways to define the result or output of a computation, but in this work we use the following one: we consider only the first two spikes fired by the output neuron, at steps $t_1$ and $t_2$. The number computed by an SSSN P system in the max-sequential case is the distance between the first two spikes minus one, i.e., $t_2 - t_1 - 1$, while in the min-sequential case it is the distance between the first two spikes minus two, i.e., $t_2 - t_1 - 2$.
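As a quick sketch of this output convention ($t_1$, $t_2$ being the steps of the first two spikes of the output neuron), with an illustrative function name:

```python
# Decode the number computed from the steps t1 < t2 of the output
# neuron's first two spikes, per the convention stated above.
def result(t1, t2, mode='max'):
    return t2 - t1 - 1 if mode == 'max' else t2 - t1 - 2

print(result(10, 14, 'max'), result(10, 14, 'min'))  # 3 2
```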

We denote by $N_2^{gen}SSNP^{\beta}_{ls}(rule_k, forg_q, cons_p)$, $\beta \in \{maxs, maxps, mins, minps\}$, the family of all sets of numbers generated by SN P systems working in the sequential mode with local scheduled synapses and without delay. Here the subscript $2$ means that the generated number is encoded by the first two spikes of the output neuron; $gen$ means that the system works in the generative mode; $ls$ means that only local scheduled synapses are used; $\beta$ indicates the max/min-sequentiality and max/min-pseudosequentiality modes of the system; and the parameter $ext$ is appended when extended rules are used. Moreover, we take into account only halting computations.

We follow the conventional practice of ignoring the number zero when comparing the power of two number generating devices, just as the empty string is ignored in formal language and automata theory when comparing two language generating devices.

4. Results on Computational Power

In what follows, we present our results for SSSN P systems with local schedules and without delays, giving theorems for both sequential strategies: max/min-sequentiality and max/min-pseudosequentiality. Due to the scheduled synapses, our systems are deterministic at the system level; nondeterminism is shifted to the level of neurons in both sequential strategies, strong sequential and pseudosequential. By simulating register machines, we prove universality for both strategies.

4.1. Max-Sequential SN P Systems with Local Scheduled Synapses without Delays

In this section we provide the universality results of SSSN P systems with local schedules and without delays in the max-sequential strategy, $SSNP^{maxs}_{ls}$ systems for short. Here the superscript $maxs$ stands for max-sequential and the subscript $ls$ means local scheduling. We note that in this section we make use of extended rules, which are spiking rules with $p \ge 2$. In our results we use parameters $rule_k$, $forg_q$, and $cons_p$, similar to [19] and other works on universality; these parameters specify at most $k$ rules in each neuron, forgetting at most $q$ spikes, and consuming at most $p$ spikes, respectively.

Theorem 2. $NRE = N_2^{gen}SSNP^{maxs}_{ls}(rule_3, forg_4, cons_5, ext)$.

Proof. In order to prove Theorem 2, we simulate a register machine $M = (m, H, l_0, l_h, I)$ with an $SSNP^{maxs}_{ls}$ system $\Pi$. Prior to the construction of $\Pi$, we give a brief description of the computation: a neuron $\sigma_r$ in $\Pi$ is associated with each register $r$ of $M$, and the local reference neuron of each module is labeled with “$*$”. If the content of register $r$ is the number $n$, then the spikes stored in its corresponding neuron $\sigma_r$ are $2n$. If $M$ applies some instruction $l_i$, performing an operation (ADD, SUB, or HALT), then neuron $\sigma_{l_i}$ begins to simulate that operation. We provide modules ADD, SUB, and FIN in order to simulate the three types of instructions of $M$.
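As a tiny sketch of this encoding (register content $n$ stored as $2n$ spikes in the associated neuron), with illustrative helper names:

```python
# Register value n <-> 2n spikes; while a register is untouched, its
# neuron's spike count stays even, which keeps the neuron inactive.
def spikes_of(n):
    return 2 * n

def value_of(spikes):
    assert spikes % 2 == 0, "a resting register neuron holds an even count"
    return spikes // 2

print(value_of(spikes_of(3) + 2))  # an ADD on the register: 4
```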
Module ADD. In Figure 1 we give the graphical description of the module simulating an instruction of the form $l_i : (\mathrm{ADD}(r), l_j, l_k)$.
The module functions are as follows. Here the reference neuron is $\sigma_{l_i}$, so it is labeled with “$*$”. The simulation starts from the reference neuron $\sigma_{l_i}$: once it applies its rule, it sends one spike to each of two auxiliary neurons of the module. In step 2, the first auxiliary neuron is active with two spikes; it applies its extended rule to send two spikes to each of neuron $\sigma_r$ and the second auxiliary neuron. Neuron $\sigma_r$ now has two new spikes added to its previous spikes; since the number of spikes in neuron $\sigma_r$ is even, it does not apply any rule. It is worth noting that we use here an extended rule instead of a standard rule because this is the strong sequential case, which means at most one neuron can fire at any step. At the next step the second auxiliary neuron nondeterministically selects which rule to apply; either neuron $\sigma_{l_j}$ or neuron $\sigma_{l_k}$ becomes activated depending on which rule is applied. We have the following two cases.
Case I. If the nondeterministic neuron applies its first rule, it fires only once. Applying this rule consumes all 3 of its spikes and sends one spike to each of its two adjacent neurons; in this way, the single spike of the reference neuron $\sigma_{l_i}$ is restored. At the next step only one neuron can apply a rule, and it sends a spike to neuron $\sigma_{l_j}$ in order to begin simulating instruction $l_j$ of $M$.
Case II. If the nondeterministic neuron applies its second rule, it fires twice. By firing first at step 3, it consumes one spike and sends one spike to each of its two adjacent neurons; in this way, the single spike of the reference neuron $\sigma_{l_i}$ is restored. At the next step two neurons could apply their rules, but only the one holding two spikes can fire, since the other holds only one spike; hence its rule is applied. The nondeterministic neuron then applies its rule again, consuming one spike and sending one spike to each of its two adjacent neurons. At the following step, the neuron with the most spikes applies its forgetting rule, after which neuron $\sigma_{l_k}$ becomes active to begin simulating instruction $l_k$ of $M$.
The simulation of ADD is complete, as the number of spikes in neuron $\sigma_r$ has increased by two, hence increasing the content of register $r$ by 1. Afterwards, the next instruction, either $l_j$ or $l_k$, is nondeterministically chosen for simulation. We note that neuron $\sigma_r$ remains inactive as long as its content remains even; hence, when simulating ADD, neuron $\sigma_r$ does not apply any of its rules. The simulation ends by restoring the spikes of all module neurons to their initial configuration, so the module is ready for another simulation of ADD.
Module SUB. In Figure 2 we give the graphical description of the module SUB simulating an instruction of the form $l_i : (\mathrm{SUB}(r), l_j, l_k)$. The module functions are as follows: first, the reference neuron $\sigma_{l_i}$ is activated and fires, sending one spike to each of neuron $\sigma_r$ and two auxiliary neurons. The SUB module always has two cases, depending on the value $n$ stored in register $r$: either $n = 0$ (register $r$ is empty) or $n \ge 1$ (register $r$ is nonempty). We explain the two cases separately.
Case I. When $n = 0$ in register $r$, neuron $\sigma_r$ holds only the single spike just received. The two auxiliary neurons hold 2 and 4 spikes, respectively. Due to max-sequentiality, the auxiliary neuron holding 4 spikes is the activated one, so it fires, consuming one spike and sending a spike onward; the receiving neuron now holds 4 spikes. That neuron is activated and fires one spike each to its two adjacent neurons, which then hold 3 and 5 spikes, respectively. The neuron holding 5 spikes is activated, so it consumes one spike and passes a spike on; the receiving neuron, now holding 4 spikes, applies its forgetting rule. The neuron holding 3 spikes is then activated and, applying its second rule, consumes one spike and passes it on; the subsequent firing restores the auxiliary neurons to their original 2 and 1 spikes, respectively. Lastly, neuron $\sigma_{l_k}$ is activated to begin simulating instruction $l_k$ of $M$.
Case II. When $n \ge 1$ in register $r$, neuron $\sigma_r$ holds $2n + 1$ spikes. Neuron $\sigma_r$ fires, consuming 3 spikes and sending a spike to an auxiliary neuron; hence, neuron $\sigma_r$ now holds $2(n - 1)$ spikes, corresponding to subtracting 1 from the content of register $r$. The auxiliary neuron now holding the most spikes fires, sending a spike to each of its two adjacent neurons. At the next step the neuron with the most spikes fires, passing a spike on; the receiving neuron, now holding 5 spikes, fires and passes a spike on in turn. The last auxiliary neuron then fires, sending one spike each to the reference neuron $\sigma_{l_i}$ (restoring its single spike) and to neuron $\sigma_{l_j}$. Lastly, neuron $\sigma_{l_j}$ is activated to simulate instruction $l_j$ of $M$.
In both scenarios the module SUB is restored to its initial configuration after simulating a SUB instruction. Since a register $r$ in $M$ can be associated with two or more SUB instructions, we must check for interference among the corresponding SUB modules. We find that there is no interference, due to the semantics of local schedules: let $l_i$ and $l_{i'}$ be SUB instructions on register $r$, and let $l_i$ be the instruction to be simulated. Neuron $\sigma_r$ has synapses to neurons in the modules associated with both instructions $l_i$ and $l_{i'}$, but due to the local schedule of synapses, only the synapses associated with the simulated instruction $l_i$ are available. The synapses of the module of $l_{i'}$ are not available; hence its neurons do not receive any spikes from $\sigma_r$. In this way, no wrong simulations are performed by $\Pi$.
Module FIN: Halting the Computation. To complete the computation, the module FIN is depicted in Figure 3. Assume that register machine $M$ has halted, i.e., its instruction $l_h : \mathrm{HALT}$ has been applied. This means that $\Pi$ simulates $l_h$ and begins to output the result. Recall that register 1 is never decremented, i.e., it is never associated with a SUB instruction; thus, if register 1 holds the number $n$, neuron $\sigma_1$ holds $2n$ spikes. First, the reference neuron $\sigma_{l_h}$ fires, sending a spike to each of an auxiliary neuron and the output neuron $\sigma_{out}$; the auxiliary neuron then fires, sending a spike to neuron $\sigma_1$. At some step $t_1$, neuron $\sigma_{out}$ fires, sending the first spike to the environment and a spike to neuron $\sigma_1$. At the next step neuron $\sigma_1$ is activated, since it now holds an odd number of spikes. For the next $n$ steps neuron $\sigma_1$ continues to apply its rule, consuming 2 spikes per step. After these $n$ steps neuron $\sigma_1$ has only one spike left, so at this step neuron $\sigma_{out}$ spikes, sending a spike to the environment for the second and last time, at step $t_2 = t_1 + n + 1$. Finally, at step $t_1 + n + 2$, neuron $\sigma_1$ forgets its last spike. The first and second spikes of neuron $\sigma_{out}$ were sent out at steps $t_1$ and $t_2$, respectively; hence the result of the computation of $\Pi$ is $t_2 - t_1 - 1 = n$, exactly the value in register 1 when register machine $M$ halts. All parameters $rule_3$, $forg_4$, and $cons_5$ are satisfied, and an extended rule is used in the ADD module. This completes the proof.

In what follows, we consider the max-pseudosequential strategy, a more realistic approach to spiking for neurons in the system: in case of a tie among active neurons, all the tied neurons fire simultaneously.

4.2. Max-Pseudosequential SN P Systems with Local Scheduled Synapses without Delays

In this section we provide the universality results of SSSN P systems with local schedules and without delays in the max-pseudosequential strategy, abbreviated as $SSNP^{maxps}_{ls}$ systems.

Theorem 3. $NRE = N_2^{gen}SSNP^{maxps}_{ls}(rule_3, forg_4, cons_5)$.

Proof. In order to prove Theorem 3, we simulate a register machine $M = (m, H, l_0, l_h, I)$ with an $SSNP^{maxps}_{ls}$ system $\Pi$. Prior to the construction of $\Pi$, we give a brief description of the computation: a neuron $\sigma_r$ in $\Pi$ is associated with each register $r$ of $M$. If register $r$ stores the number $n$, then neuron $\sigma_r$ holds $2n$ spikes. If some instruction $l_i$ is applied by $M$, then neuron $\sigma_{l_i}$ begins simulating the instruction. In contrast to Theorem 2, due to max-pseudosequentiality we do not need extended rules in Theorem 3; standard rules alone are enough to prove universality.
Module ADD. The template module for the ADD instruction $l_i : (\mathrm{ADD}(r), l_j, l_k)$ is depicted in Figure 4. The local reference neuron $\sigma_{l_i}$ fires first, sending one spike to each of three neurons of the module. At the next step there is a tie between two auxiliary neurons (both holding an equal number of spikes), but due to max-pseudosequentiality both neurons fire simultaneously, and each sends one spike to each of neuron $\sigma_r$ and a further auxiliary neuron. In this way neuron $\sigma_r$ receives two spikes, corresponding to an increment of 1 in the value stored in register $r$. At the next step the latter auxiliary neuron must nondeterministically decide which rule to apply, so we have the following two cases.
Case I. When the nondeterministic neuron applies its first rule, one spike is sent to an adjacent neuron; that neuron fires, sending a spike to neuron $\sigma_{l_j}$, which means that the system will simulate the next instruction $l_j$ of $M$.
Case II. When the nondeterministic neuron applies its second rule, it consumes one spike, so two spikes remain. An adjacent neuron receives a spike from it but cannot fire, since the nondeterministic neuron still has the most spikes; the latter then applies its rule again, firing and sending a spike onward. There is now a tie between two neurons, but due to max-pseudosequentiality both fire in a single step; only neuron $\sigma_{l_k}$ receives a spike, while the other spike is wasted, as the corresponding synapse is not scheduled at that time. This means that the system is ready to simulate the next instruction $l_k$ of $M$.
The simulation of an ADD instruction is correct: the content of neuron $\sigma_r$ is increased by two spikes, followed by nondeterministically activating either neuron $\sigma_{l_j}$ or neuron $\sigma_{l_k}$.
Module SUB. To simulate an instruction $l_i : (\mathrm{SUB}(r), l_j, l_k)$ we have the SUB module in Figure 5. The local reference neuron $\sigma_{l_i}$ fires first, sending one spike to each of neuron $\sigma_r$ and two auxiliary neurons. At the next step the receiving neurons hold 1, 1, and 2 spikes, respectively; due to max-pseudosequentiality, the two tied neurons holding an equal number of spikes fire simultaneously. Neuron $\sigma_r$ receives one spike, while another neuron of the module receives two, one from each of the tied neurons. At this point there are two cases for neuron $\sigma_r$, depending on the value $n$ of register $r$.
Case I. When $n = 0$, neuron $\sigma_r$ holds a single spike. The auxiliary neuron holding 4 spikes, the most at this step, applies its forgetting rule. At the next step neuron $\sigma_r$ applies its rule, consuming and producing one spike and sending that spike onward; in this way the spike count of $\sigma_r$ returns to 0, indicating $n = 0$ in register $r$. Finally, neuron $\sigma_{l_k}$ fires, and the system begins to simulate the next instruction $l_k$.
Case II. When $n \ge 1$, neuron $\sigma_r$ holds $2n + 1$ spikes. Neuron $\sigma_r$ fires, consuming 3 spikes (to simulate the decrement of $r$ by 1) and sending a spike to an auxiliary neuron. At the next step that neuron holds 5 spikes and fires, sending a spike onward. At schedule $[5,6)$ the receiving neuron fires, sending one spike each to the reference neuron $\sigma_{l_i}$ (restoring its single spike) and to neuron $\sigma_{l_j}$. Hence, the system will simulate the next instruction $l_j$.
Due to the semantics of the local reference neuron, there is no interference with any other SUB module. The simulation of the SUB instruction is correct: if register $r$ is nonempty, then the spikes in neuron $\sigma_r$ are decreased by 2, followed by activating neuron $\sigma_{l_j}$; if register $r$ is empty, then the spikes in neuron $\sigma_r$ are not decreased and neuron $\sigma_{l_k}$ is activated. It is to be noted that, at the cost of slightly higher complexity, the SUB module in Figure 2 for max-sequentiality can also be used in $SSNP^{maxps}_{ls}$ systems.
Module FIN: Halting the Computation. The module FIN in Figure 3 can also be used in $SSNP^{maxps}_{ls}$ systems to produce the output of $\Pi$.
It is clear that all three modules use at most 3 rules in each neuron, consume at most 5 spikes, and forget at most 4 spikes. The synapses of each module are synchronized with respect to their related local reference neurons. We also note that no extended rules are needed under max-pseudosequentiality. The parameters $rule_3$, $forg_4$, and $cons_5$ of the theorem are satisfied, thus completing the proof.

4.3. Min-Sequential SN P Systems with Local Scheduled Synapses without Delays

In this section we provide the universality results of SSSN P systems with local schedules and without delays in the min-sequential strategy, $SSNP^{mins}_{ls}$ systems for short. Here the superscript $mins$ stands for min-sequential and the subscript $ls$ means local scheduling. We note that, due to min-sequentiality, the ADD module does not make use of extended rules, in contrast to Theorem 2; an extended rule is used only in the FIN module. In our results we use parameters $rule_k$, $forg_q$, and $cons_p$, similar to [19] and other works on universality; these parameters specify at most $k$ rules in each neuron, forgetting at most $q$ spikes, and consuming at most $p$ spikes, respectively.

Theorem 4. $NRE = N_2^{gen}SSNP^{mins}_{ls}(rule_3, forg_4, cons_5, ext)$.

Proof. In order to prove Theorem 4, we simulate a register machine $M = (m, H, l_0, l_h, I)$ with an $SSNP^{mins}_{ls}$ system $\Pi$. Prior to the construction of $\Pi$, we give a brief description of the computation: a neuron $\sigma_r$ in $\Pi$ is associated with each register $r$ of $M$, and the local reference neuron of each module is labeled with “$*$”. If the content of register $r$ is the number $n$, then the spikes stored in its corresponding neuron $\sigma_r$ are $2n$. If $M$ applies some instruction $l_i$, performing an operation (ADD, SUB, or HALT), then neuron $\sigma_{l_i}$ begins to simulate that operation. We provide modules ADD, SUB, and FIN in order to simulate the three types of instructions of $M$.
Module ADD. In Figure 6 we give the graphical description of the module simulating an instruction of the form $l_i : (\mathrm{ADD}(r), l_j, l_k)$.
The module functions are as follows. The simulation starts from the reference neuron $\sigma_{l_i}$: once it applies its rule, it sends one spike to each of neuron $\sigma_r$ and two auxiliary neurons. It is worth noting that, in order to keep neuron $\sigma_r$ inactive, its content must be even; adding one spike to it makes it able to fire, but due to min-sequentiality an auxiliary neuron holding fewer spikes will fire next, at step 2. That neuron applies its rule to send a spike to each of neuron $\sigma_r$ and the other auxiliary neuron. Neuron $\sigma_r$ has now one more spike added to its previous spikes; as its number of spikes is now even, it will not apply any rule. At the next step the auxiliary neuron nondeterministically selects which rule to apply; either neuron $\sigma_{l_j}$ or neuron $\sigma_{l_k}$ becomes activated depending on which rule is applied. We have the following two cases.
Case I. If the nondeterministic neuron applies its first rule, it fires only once. Applying this rule consumes two of its spikes and sends one spike onward; in this way, the single spike of the reference neuron $\sigma_{l_i}$ is restored. At the next step only one neuron can apply a rule, and it sends a spike to neuron $\sigma_{l_j}$ in order to begin simulating instruction $l_j$ of $M$.
Case II. If the nondeterministic neuron applies its second rule, it fires twice. By firing first at step 3, it consumes and sends one spike onward. At the next step two neurons are active, but only the one holding two spikes can apply its rule, since the other holds three spikes; hence its rule is applied and one spike is sent onward. Two neurons are then active again, but only the one with fewer spikes applies its rule; the spike it fires is not received by any neuron, since the corresponding synapse is not scheduled at that time. A final firing sends a spike to neuron $\sigma_{l_k}$, which becomes active to begin simulating instruction $l_k$ of $M$.
The simulation of ADD is complete, as the number of spikes in neuron $\sigma_r$ has increased by two, hence increasing the content of register $r$ by 1. Afterwards, the next instruction, either $l_j$ or $l_k$, is nondeterministically chosen for simulation. Neuron $\sigma_r$ remains inactive as long as its content remains even; hence, when simulating ADD, neuron $\sigma_r$ does not apply any of its rules. The simulation ends by restoring the spikes of all module neurons to their initial configuration, so the module is ready for another simulation of ADD.
Module SUB. In Figure 7 we give the graphical description of the module SUB simulating an instruction of the form $l_i : (\mathrm{SUB}(r), l_j, l_k)$. The module functions are as follows. First, the reference neuron $\sigma_{l_i}$ is activated and fires, sending one spike to each of two neurons of the module. Due to min-sequentiality, the receiving neuron with fewer spikes is the activated one, so it fires and sends a spike onward; due to this spiking, two neurons of the module now hold 3 and 1 spikes, respectively. The neuron holding 1 spike is then activated and fires one spike to neuron $\sigma_r$.
The SUB module always has two cases, depending on the value $n$ stored in register $r$: either $n = 0$ (register $r$ is empty) or $n \ge 1$ (register $r$ is nonempty). We explain the two cases separately.
Case I. When $n = 0$ in register $r$, neuron $\sigma_r$ holds only the spike just received, while an auxiliary neuron holds 4 spikes. The neuron with fewer spikes is now activated and, applying its second rule, consumes one spike and passes it on. The receiving neuron then fires, sending a spike onward, and the next neuron fires, sending a spike to the two neurons of the module that thereby regain their original 2 and 1 spikes, respectively. Lastly, neuron $\sigma_{l_k}$ is activated to begin simulating instruction $l_k$ of $M$.
Case II. When $n \ge 1$ in register $r$, neuron $\sigma_r$ holds $2n$ spikes plus the one just received. The auxiliary neuron holding 4 spikes, the fewest at this step, applies its forgetting rule. Neuron $\sigma_r$ then fires, consuming 3 spikes and sending a spike onward; hence, neuron $\sigma_r$ now holds $2(n - 1)$ spikes, corresponding to subtracting 1 from the content of register $r$. The receiving neuron fires, sending one spike each to the reference neuron $\sigma_{l_i}$ (restoring its single spike) and to neuron $\sigma_{l_j}$. Lastly, neuron $\sigma_{l_j}$ is activated to simulate instruction $l_j$ of $M$.
In both scenarios the module SUB is restored to its initial configuration after simulating a SUB instruction. There is no interference among SUB modules, due to the semantics of local scheduling.
Module FIN: Halting the Computation. To complete the computation, the module FIN is depicted in Figure 8. Assume that register machine $M$ has halted, i.e., its instruction $l_h : \mathrm{HALT}$ has been applied. This means that $\Pi$ simulates $l_h$ and begins to output the result; if register 1 holds the number $n$, neuron $\sigma_1$ holds $2n$ spikes. First, the reference neuron $\sigma_{l_h}$ fires, sending a spike to the output neuron $\sigma_{out}$. At some step $t_1$, neuron $\sigma_{out}$ fires, sending the first spike to the environment and a spike to neuron $\sigma_1$. At the next step neuron $\sigma_1$ is activated, since it now holds an odd number of spikes. For the next $n$ steps neuron $\sigma_1$ continues to apply its extended rule, consuming and producing 2 spikes per step. When neuron $\sigma_1$ has only one spike left, it applies its second rule and sends one spike to neuron $\sigma_{out}$ (hence activating it for the second time, with an odd number of spikes). At step $t_2 = t_1 + n + 2$, neuron $\sigma_{out}$ spikes, sending a spike to the environment for the second and last time. The first and second spikes of neuron $\sigma_{out}$ were sent out at steps $t_1$ and $t_2$, respectively; hence the result of the computation of $\Pi$ is $t_2 - t_1 - 2 = n$, exactly the value in register 1 when register machine $M$ halts. All parameters $rule_3$, $forg_4$, and $cons_5$ are satisfied, and an extended rule is used in the FIN module. This completes the proof.

4.4. Min-Pseudosequential SN P Systems with Local Scheduled Synapses without Delays

In this section we provide the universality results of SSSN P systems with local schedules and without delays in the min-pseudosequential strategy, abbreviated as $SSNP^{minps}_{ls}$ systems.

Theorem 5. $NRE = N_2^{gen}SSNP^{minps}_{ls}(rule_3, forg_4, cons_5, ext)$.

Proof. In order to prove the theorem, we simulate a register machine $M = (m, H, l_0, l_h, I)$ with an $SSNP^{minps}_{ls}$ system $\Pi$. Prior to the construction of $\Pi$, we give a brief description of the computation: a neuron $\sigma_r$ in $\Pi$ is associated with each register $r$ of $M$. If register $r$ stores the number $n$, then neuron $\sigma_r$ holds $2n$ spikes. If some instruction $l_i$ is applied by $M$, then neuron $\sigma_{l_i}$ begins simulating the instruction.
Module ADD. The template module for the ADD instruction $l_i : (\mathrm{ADD}(r), l_j, l_k)$ is depicted in Figure 9. The module functions are as follows. The simulation starts from the reference neuron $\sigma_{l_i}$: once it applies its rule, it sends one spike to each of neuron $\sigma_r$ and two auxiliary neurons. Due to min-sequentiality, an auxiliary neuron fires next, at step 2, applying its rule to send a spike to each of neuron $\sigma_r$ and the other auxiliary neuron. The simulation on neuron $\sigma_r$ is now complete, with two spikes added to its content. At the next step the auxiliary neuron nondeterministically selects which rule to apply; either neuron $\sigma_{l_j}$ or neuron $\sigma_{l_k}$ becomes activated depending on this selection. We have the following two cases.
Case I. If the nondeterministic neuron applies its first rule, it fires only once. Applying this rule consumes two of its spikes and sends one spike onward; in this way, the single spike of the reference neuron $\sigma_{l_i}$ is restored. At the next step only one neuron can apply a rule, and it sends a spike to neuron $\sigma_{l_j}$ in order to begin simulating instruction $l_j$ of $M$.
Case II. If the nondeterministic neuron applies its second rule, it fires twice. By firing first at step 3, it consumes and sends one spike onward. At the next step two neurons are active, but only the one holding two spikes can apply its rule, since the other holds three spikes; hence its rule is applied and one spike is sent onward. Two neurons are then active with an equal number of spikes, and due to min-pseudosequentiality both fire simultaneously; the spike fired by one of them is not received by any neuron, since the corresponding synapse is not scheduled at that time, while the spike fired by the other is received by neuron $\sigma_{l_k}$. Neuron $\sigma_{l_k}$ then becomes active to begin simulating instruction $l_k$ of $M$.
The simulation of ADD is complete, as the number of spikes in neuron $\sigma_r$ has increased by two, hence increasing the content of register $r$ by 1. Afterwards, the next instruction, either $l_j$ or $l_k$, is nondeterministically chosen for simulation. Neuron $\sigma_r$ remains inactive while its content remains even; hence, when simulating ADD, neuron $\sigma_r$ does not apply any of its rules. The simulation ends by restoring the spikes of all module neurons to their initial configuration, so the module is ready for another simulation of ADD.
Module SUB. To simulate an instruction $l_i : (\mathrm{SUB}(r), l_j, l_k)$ we have the SUB module in Figure 10. The local reference neuron $\sigma_{l_i}$ fires first, sending one spike to each of neuron $\sigma_r$ and three auxiliary neurons. Due to min-pseudosequentiality, two tied neurons fire simultaneously at the next step, and one neuron of the module receives two spikes, one from each of them. At this point there are two cases for neuron $\sigma_r$, depending on the value $n$ of register $r$.
Case I. When $n = 0$, neuron $\sigma_r$ holds only the spike just received, while an auxiliary neuron holds 4 spikes, the most at this step; hence, by min-sequentiality, the neuron with fewer spikes fires and sends a spike onward. The receiving neuron, now holding 5 spikes, fires and passes a spike on; the next neuron fires, sending a spike to each of the reference neuron $\sigma_{l_i}$ (returning its initial spike) and neuron $\sigma_{l_k}$. Hence, the system will simulate the next instruction $l_k$.
Case II. When $n \ge 1$, neuron $\sigma_r$ holds $2n$ spikes plus the one just received. The auxiliary neuron holding 4 spikes, the fewest at this step, applies its forgetting rule. Neuron $\sigma_r$ then fires, consuming 3 spikes (to simulate the decrement of $r$ by 1) and sending a spike to an auxiliary neuron. That neuron fires, sending a spike to each of the reference neuron $\sigma_{l_i}$ (returning its initial spike) and neuron $\sigma_{l_j}$. Hence, the system will simulate the next instruction $l_j$.
Due to the local reference neuron, there is no interference with any other SUB module. The simulation of the SUB instruction is correct: if register $r$ is nonempty, then the spikes in neuron $\sigma_r$ are decreased by 2, followed by activating neuron $\sigma_{l_j}$; if register $r$ is empty, then the spikes in neuron $\sigma_r$ are not decreased and neuron $\sigma_{l_k}$ is activated. It is to be noted that the SUB module in Figure 7 for min-sequentiality can also be used in $SSNP^{minps}_{ls}$ systems.
Module FIN: Halting the Computation. The module FIN in Figure 8 can also be used in $SSNP^{minps}_{ls}$ systems to produce the output of $\Pi$.
It is clear that all three modules use at most 3 rules in each neuron, consume at most 5 spikes, and forget at most 4 spikes. The synapses of each module are synchronized with respect to their related local reference neurons. We also note that, due to min-sequentiality, no extended rules are needed in the ADD module, but one is used in the FIN module. The parameters $rule_3$, $forg_4$, and $cons_5$ of the theorem are satisfied, thus completing the proof.

5. Final Remarks

In this work, the computational power of sequential SN P systems with local scheduled synapses without delay has been investigated. The results show that such systems working in both the max/min-sequentiality and max/min-pseudosequentiality strategies are computationally universal, using standard rules and, only where standard rules do not suffice, extended rules. In particular, we showed for both strategies that universality is achieved using at most 3 rules per neuron, consuming at most 5 spikes, and forgetting at most 4 spikes. We further note that an extended rule is used only in the ADD module under the max-sequential strategy and in the FIN module under the min-sequential and min-pseudosequential strategies. Moreover, the strong sequential strategy has slightly higher complexity than the pseudosequential one.

Our future work is to prove the universality of our variants with global-scheduled synapses. Open problems include reducing the complexity of our systems and proving universality without forgetting rules and without extended rules.

Another direction that might be interesting is to determine which classes of problems/languages these variants of SN P systems are capable of solving/deciding, thereby comparing their capability in recognizing languages and solving problems with that of other variants, with respect to several parameters/ingredients of SN P systems.

Data Availability

The paper contains only theoretical derivations; no data was used in the research.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (61502186) and the China Postdoctoral Science Foundation (2016M592335).