Complexity

Volume 2019, Article ID 7313414, 12 pages

https://doi.org/10.1155/2019/7313414

## Sequential Spiking Neural P Systems with Local Scheduled Synapses without Delay

^{1}Key Laboratory of Image Information Processing and Intelligent Control of Education Ministry of China, School of Automation, Huazhong University of Science and Technology, Wuhan 430074, Hubei, China

^{2}Government Girls Postgraduate College, Kohat 26000, Khyber Pakhtunkhwa, Pakistan

^{3}Department of Computer Science, University of the Philippines Diliman, Quezon City, Philippines

^{4}School of Information Science and Engineering, Xiamen University, Xiamen 361005, China

Correspondence should be addressed to Fei Xu; fei_xu@hust.edu.cn

Received 7 December 2018; Revised 21 February 2019; Accepted 20 March 2019; Published 14 April 2019

Academic Editor: Alex Alexandridis

Copyright © 2019 Alia Bibi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Spiking neural P systems with scheduled synapses are a class of distributed and parallel computational models motivated by the structural dynamism of biological synapses and incorporating ideas from nonstatic (i.e., dynamic) graphs and networks. In this work, we consider the family of spiking neural P systems with scheduled synapses working in the sequential mode: at each step, among the neurons that can spike, the neuron(s) with the maximum/minimum number of spikes will fire. The computational power of spiking neural P systems with scheduled synapses working in the sequential mode is investigated. Specifically, the universality (Turing equivalence) of such systems is obtained.

#### 1. Introduction

Spiking neural P systems (abbreviated as SN P systems) were first introduced in [1] as a class of parallel and distributed neural-like computational models, inspired by the fact that spiking neurons communicate with each other through electrical impulses. An SN P system can be viewed as a directed graph whose nodes are computing units called neurons and whose arcs represent synapses. Each neuron contains copies of a single object type called the spike, together with a set of spiking rules that send one or more spikes from the source neuron to every neuron connected to it by a synapse, and forgetting rules that remove spikes from the neuron.

Many efforts have been made to investigate the theoretical and practical aspects of SN P systems, since the computational model was introduced. The computational power of SN P systems was investigated as different computing devices, e.g., number generating/accepting devices [1], language generators [2, 3], and function computing devices [4]. By abstracting computing ideas from the human brain and biological neurons, many variants of SN P systems have been introduced, such as axon P systems [5], SN P systems with rules on synapses [6–8], SN P systems with weights [9], SN P systems with neuron division and budding [10], SN P systems with structural plasticity [11], cell-like SN P systems [12], fuzzy reasoning SN P systems [13–16], probabilistic SN P systems [17], SN P systems with multiple channels [18], SN P systems with scheduled synapses (SSN P systems for short) [19], coupled neural P systems [20], dynamic threshold neural P systems [21], SN P systems with colored spikes [22], and SN P systems simulators [23, 24].

Although biological processes in living organisms happen in parallel, they are not synchronized by a universal clock as assumed in SN P systems. Introduced in [25], sequential SN P systems are a class of computing devices that play the role of a bridge between spiking neural networks and membrane computing [26]. Two types of sequentiality are considered in sequential SN P systems: general sequentiality [25] and sequentiality induced by spike number [27–29]. The first case is the classical pure-sequential model of this family: such systems are completely (purely) sequential with respect to neurons, in that at each time unit one and only one neuron among all active neurons is nondeterministically chosen to fire. The second case builds on the first, with the sequentiality induced by the number of spikes (either maximum or minimum) present in the active neurons; on this basis we have two further types: maximal sequential SN P systems and minimal sequential SN P systems.

In this work, we consider SN P systems with scheduled synapses (SSN P systems), which are inspired and motivated by the structural dynamism of biological synapses while incorporating ideas from nonstatic (i.e., dynamic) graphs and networks from mathematics. Synapses in SSN P systems are only available at a specific schedule or duration. The schedules are of two types, local and global scheduling, named after their specified reference neurons: local reference neurons and global reference neurons. In the first case the synapses are called local scheduled synapses and are scheduled locally with respect to their local reference neurons; there can be multiple reference neurons within a system, but each synapse is associated with exactly one reference neuron. In the second case the synapses are called global scheduled synapses and are scheduled globally with respect to a single shared global reference neuron. In particular, we consider SN P systems with local scheduled synapses working in the sequential mode without delay (SSSN P systems, for short), where the sequentiality is induced by the maximum/minimum (max/min, for short) spike number. The computational power of SSSN P systems working under the strong sequentiality or pseudosequentiality strategies is investigated. Specifically, SSSN P systems working under the max/min-sequentiality (resp., max/min-pseudosequentiality) strategy are proved to be Turing universal.

The paper is organized as follows: Section 2 provides definitions of concepts necessary for understanding the paper. In Section 3 we introduce our variant of SN P systems, namely, sequential SN P systems with local scheduled synapses working in the max/min-sequential strategy without delay (SSSN P systems). We present our universality results in Section 4 and conclude with final remarks in Section 5.

#### 2. Preliminaries

In this section, we explain various notions and notations from formal language theory and computability theory that we shall use in the remainder of the paper. The reader is also assumed to have a basic familiarity with membrane computing. Both the basics of membrane computing and its formal language and computability theory prerequisites are covered in sufficient detail in [30] and in the handbook [31].

We denote by V a nonempty finite set of symbols called an alphabet, and V* is the set of all strings of symbols over V, including λ, the empty string. The set of nonempty strings over V is denoted by V+. Each subset of V* is called a language over V. A language is called λ-free if it does not contain the empty string (thus it is a subset of V+). A collection of languages is called a family of languages. We define inductively the regular expressions over an alphabet V and the languages they describe as follows: λ and every symbol a ∈ V are regular expressions. If E1 and E2 are regular expressions, then (E1)(E2), (E1) ∪ (E2), and (E1)+ are also regular expressions. We denote by L(E) the language described by the regular expression E. In particular, we have L(λ) = {λ}, L(a) = {a}, L((E1)(E2)) = L(E1)L(E2), L((E1) ∪ (E2)) = L(E1) ∪ L(E2), and L((E1)+) = L(E1)+.
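Since the systems considered later use the singleton alphabet {a}, checking whether a spike count k satisfies a regular expression E reduces to matching the string a^k against E. A minimal sketch in Python (the helper name `satisfies` is ours, for illustration only):

```python
import re

# Over the singleton alphabet {a}, a^k belongs to L(E) exactly when the
# string of k copies of "a" matches the pattern E in full.
def satisfies(k: int, pattern: str) -> bool:
    return re.fullmatch(pattern, "a" * k) is not None

print(satisfies(3, "a(aa)*"))  # odd spike count -> True
print(satisfies(4, "a(aa)*"))  # even spike count -> False
print(satisfies(4, "(aa)+"))   # positive even spike count -> True
```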

Our universality results are proved by showing that our proposed model can simulate a well-known universal computing model, the register machine [1, 32].

A register machine is a construct of the form M = (m, H, l0, lh, I), where (i) m is the number of registers, (ii) H is the set of instruction labels, (iii) l0 is the start label (which labels an ADD instruction), (iv) lh is the halt label (which is assigned to instruction HALT), (v) I is the set of instructions of the forms li : (ADD(r), lj, lk) (add 1 to register r, then nondeterministically go to the instruction with label lj or lk), li : (SUB(r), lj, lk) (if register r is nonempty, subtract 1 from it and go to the instruction with label lj; otherwise go to the instruction with label lk), and lh : HALT (the halt instruction).

A register machine can accept or generate a number. In this work, we use register machines that generate numbers. We denote the set of all numbers computed/generated by M as N(M). Since register machines compute exactly the sets of numbers that are Turing computable, we have NRM = NRE, where NRM is the family of number sets generated by register machines and NRE denotes the family of recursively enumerable sets of numbers, i.e., the number sets recognized by Turing machines.
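The generative behavior of a register machine can be sketched in a few lines of Python. The encoding below is our own toy rendering of the standard definition (labels as strings, instructions as tuples), not the paper's formal notation; the result is taken from register 1 when the machine halts:

```python
import random

# Instructions map labels to tuples:
#   ("ADD", r, lj, lk): add 1 to register r, jump to lj or lk (free choice)
#   ("SUB", r, lj, lk): if register r > 0, subtract 1 and go to lj, else go to lk
# Reaching the halt label lh stops the run; the result is register 1.
def run(instructions, m, l0="l0", lh="lh", max_steps=10_000):
    regs = [0] * m
    label = l0
    for _ in range(max_steps):
        if label == lh:
            return regs[1]
        op = instructions[label]
        if op[0] == "ADD":
            regs[op[1]] += 1
            label = random.choice([op[2], op[3]])
        else:  # "SUB"
            if regs[op[1]] > 0:
                regs[op[1]] -= 1
                label = op[2]
            else:
                label = op[3]
    return None  # did not halt within the step bound

# This machine can generate any n >= 1: it keeps incrementing register 1
# and nondeterministically decides when to halt.
I = {"l0": ("ADD", 1, "l0", "lh")}
print(run(I, m=2))
```

Nondeterminism in the ADD branch is what lets a single machine generate a whole set of numbers rather than one fixed value.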

#### 3. Sequential SN P Systems with Local Scheduled Synapses Working in Max/Min-Sequential Strategy without Delay

We give here the definition of sequential SN P systems with local scheduled synapses without delay.

*Definition 1. *A sequential SN P system with local scheduled synapses without delay (SSSN P system, for short) of degree m ≥ 1 is a construct of the form

Π = (O, σ1, ..., σm, syn, ref, out),

where (a) O = {a} is the singleton alphabet (a is called spike); (b) σ1, ..., σm are neurons of the form σi = (ni, Ri), 1 ≤ i ≤ m, where (1) ni ≥ 0 is the initial number of spikes contained in σi; (2) Ri is a finite set of rules of the form (i) E/a^c → a^p, where E is a regular expression over O, c ≥ 1, p ≥ 1, and c ≥ p; (ii) a^s → λ for some s ≥ 1, with the restriction that a^s ∉ L(E) for any rule E/a^c → a^p of type (i) from Ri; (c) syn is the set of synapses among neurons, where each synapse carries a schedule of the form (t1, t2) (1 ≤ t1 ≤ t2), no neuron has a synapse to itself, and each synapse has exactly one schedule; (d) ref is the set of synapse references, formed from the power set of syn together with the set of reference neurons; for any two pairs (S1, i), (S2, j) in ref, we have S1 ∩ S2 = ∅ and i ≠ j; (e) out is the label of the output neuron.

The semantics of spiking rule application is as follows: let σi be a neuron containing k spikes and having a rule E/a^c → a^p ∈ Ri. If a^k ∈ L(E) and k ≥ c, then the rule can be applied. Applying the rule consumes c spikes to send p spike(s) to the adjacent neurons of σi, leaving k − c spikes in σi. If p = 1 the rule serves as a standard rule; otherwise it is an extended rule. In general, a neuron can have two or more spiking rules with regular expressions E1 and E2 such that L(E1) ∩ L(E2) ≠ ∅; in this case, if both regular expressions are satisfied, one of the rules is nondeterministically chosen to be applied. Additionally, our system is sequential and synchronized, so that at each step, among all the active neurons, only the one with the maximum/minimum number of spikes can apply a rule. Rule application is sequential both at the neuron level and at the system level: at each step exactly one rule can be applied by a spiking neuron, and one and only one neuron (holding the maximum/minimum number of spikes) fires among all the active neurons. Thus all the neurons apply their rules at the maximum/minimum and in sequence. As a convention, if a rule E/a^c → a^p has L(E) = {a^c}, we write it as a^c → a^p instead. Moreover, synapses from the output neuron (labeled with out) are always assumed available.
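The rule-application check above can be made concrete. The sketch below follows the stated semantics; the function names (`applicable`, `apply_rule`) are illustrative, not part of the paper's formalism:

```python
import re

# One spiking rule E/a^c -> a^p: it is enabled iff the neuron's spike
# content a^k belongs to L(E) and at least c spikes are present.
def applicable(spikes: int, E: str, c: int) -> bool:
    return spikes >= c and re.fullmatch(E, "a" * spikes) is not None

def apply_rule(spikes: int, c: int, p: int):
    # consume c spikes, emit p spikes toward the adjacent neurons;
    # returns (remaining spikes, emitted spikes)
    return spikes - c, p

k = 5
if applicable(k, "a(aa)*", 3):      # a^5 is in L(a(aa)*) and k >= c
    remaining, emitted = apply_rule(k, c=3, p=1)
    print(remaining, emitted)       # -> 2 1
```

With p = 1 this models a standard rule; p > 1 models an extended rule.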

The semantics of synapse schedules is as follows: synapses are scheduled according to the activation of their reference neuron, and a scheduled synapse takes effect immediately upon activation of its reference neuron. The reference neuron is indicated with the symbol "*" (e.g., σi*). If at step t a reference neuron becomes activated and a synapse it references is scheduled with (t1, t2), then that synapse is active only from step t + t1 to step t + t2. If at some step a rule is applied by a neuron but there is no corresponding scheduled synapse available at that time, then the emitted spike is wasted, as no adjacent neuron receives it.
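The availability window of a scheduled synapse is a simple interval check relative to the reference neuron's activation step. A minimal sketch under the semantics above (names are ours, for illustration):

```python
# A synapse with schedule (t1, t2) whose reference neuron fired at step t
# is open exactly during steps t + t1 .. t + t2; otherwise any spike sent
# over it is wasted.
def synapse_open(step: int, ref_fired_at, t1: int, t2: int) -> bool:
    if ref_fired_at is None:      # reference neuron never activated
        return False
    return ref_fired_at + t1 <= step <= ref_fired_at + t2

print(synapse_open(5, ref_fired_at=4, t1=1, t2=2))  # -> True  (open at steps 5..6)
print(synapse_open(7, ref_fired_at=4, t1=1, t2=2))  # -> False (window has passed)
```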

The semantics of sequentiality, either maximum or minimum, is as follows: if at any step there is more than one active neuron (neurons with enabled rules are called active), then only the neuron(s) containing the maximum/minimum number of spikes among the currently active neurons will be able to fire. If there is a tie among two or more active neurons (all holding an equal number of spikes), then two different strategies are considered in SSSN P systems: strong sequentiality and pseudosequentiality. In the first strategy, one and only one active neuron is nondeterministically chosen among all the tied neurons to fire, while in the second strategy all the tied neurons fire simultaneously. For example, assume that, in the max-sequential case, at a given step there are four active neurons σ1, σ2, σ3, and σ4 storing 4, 5, 5, and 1 spikes, respectively. There is a tie between neurons σ2 and σ3, so in the strong max-sequential case either σ2 or σ3 is nondeterministically chosen to fire, while in the max-pseudosequential case both neurons fire simultaneously.
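The tie-breaking example above can be sketched in code. Given the active neurons and their spike counts, strong max-sequentiality fires exactly one of the tied maxima, while max-pseudosequentiality fires all of them (the function name is ours, for illustration):

```python
import random

def firing_neurons(active: dict, strategy: str):
    # select the active neurons holding the maximum number of spikes
    top = max(active.values())
    tied = [n for n, s in active.items() if s == top]
    if strategy == "strong":
        return [random.choice(tied)]   # one tied neuron, chosen nondeterministically
    return tied                        # "pseudo": all tied neurons fire together

active = {"sigma1": 4, "sigma2": 5, "sigma3": 5, "sigma4": 1}  # the example above
print(sorted(firing_neurons(active, "pseudo")))  # -> ['sigma2', 'sigma3']
print(len(firing_neurons(active, "strong")))     # -> 1
```

The min-sequential variants are obtained by replacing `max` with `min` in the selection step.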

A configuration of the system at a given step is the distribution of spikes among the neurons together with the status of each neuron, whether closed or open. The initial configuration is given by the initial number of spikes ni in each neuron, with all neurons open.

Starting from the fixed initial configuration (the initial distribution of spikes among neurons) and applying the rules in a synchronized manner (a global clock is assumed), the system evolves; applying the rules according to the above description defines transitions among configurations.

By applying the rules in this way, the system passes from one configuration to another; such a step is known as a transition. Given two configurations C1, C2 of Π, a direct transition between them is represented by C1 ⇒ C2, while the reflexive and transitive closure of this relation is represented by ⇒*. A transition is sequential provided that, among all the candidate neurons, one holding the maximum/minimum number of spikes applies its rule.

A computation is a sequence of transitions starting from the initial configuration. A computation is said to be successful, or halting, if it reaches a configuration where no further rule can be applied; moreover, all the neurons must be restored to their valid states.

There are various ways to define the result or output of a computation, but in this work we use the following type: we consider only the first two spikes fired by the output neuron, at steps t1 and t2. The number computed by an SSSN P system in the max-sequential case is the difference between the first two spike times minus one, i.e., t2 − t1 − 1, while in the min-sequential case it is the difference minus two, i.e., t2 − t1 − 2.
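The output convention is plain arithmetic on the two spike times. A one-line sketch (the function name is ours):

```python
# With the first two output spikes at steps t1 and t2, the computed
# number is t2 - t1 - 1 (max-sequential) or t2 - t1 - 2 (min-sequential).
def result(t1: int, t2: int, mode: str = "max") -> int:
    return t2 - t1 - (1 if mode == "max" else 2)

print(result(3, 9))          # -> 5
print(result(3, 9, "min"))   # -> 4
```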

We denote by N2SSSNP_ls^s(rule_k, cons_p, forg_q) the family of all sets of numbers generated by SN P systems with local scheduled synapses working in the sequential mode without delay. Here the subscript 2 means that the generated number is encoded by the first two spikes of the output neuron; the systems work in the generative mode; ls means that only local scheduled synapses are used; the superscript s indicates the max/min-sequentiality or max/min-pseudosequentiality mode of the system; and rule_k, cons_p, forg_q indicate the use of extended rules with the parameters described in Section 4.1. Moreover, we take into account only halting computations.

We follow the conventional practice of ignoring the number zero when comparing the power of two number-generating devices, just as the empty string is ignored in formal language and automata theory when comparing two language-generating devices.

#### 4. Results on Computational Power

In what follows, we describe our results for SSSN P systems with local schedules and without delays by giving theorems for both sequential strategies: max/min-sequentiality and max/min-pseudosequentiality. Due to the scheduled synapses, our systems are deterministic at the system level; nondeterminism is shifted to the level of neurons in both sequential strategies, strong sequential and pseudosequential. By simulating register machines we prove universality for both strategies.

##### 4.1. Max-Sequential SN P Systems with Local Scheduled Synapses without Delays

In this section we provide the universality results for SSSN P systems with local schedules and without delays in the max-sequential strategy, written SSSNP_ls^max systems for short. Here the superscript max stands for max-sequential and the subscript ls means local scheduling. We note that in this section we make use of extended rules, which are spiking rules E/a^c → a^p where p is allowed to be greater than 1. In our results we use the parameters rule_k, forg_q, and cons_p, similar to [19] and other works on universality. These parameters specify, respectively, at most k rules in each neuron, forgetting at most q spikes, and consuming at most p spikes.

Theorem 2. *N2SSSNP_ls^max(rule_k, cons_p, forg_q) = NRE for suitable constants k, p, and q.*

*Proof. *In order to prove Theorem 2, we simulate a register machine M with an SSSNP_ls^max system Π. Prior to the construction of Π, we give a brief description of the computation as follows: each register r in M is associated with a neuron σ_r in Π. The local reference neuron of each module is labeled with "*". If the content of register r is the number n, then its corresponding neuron σ_r stores 2n spikes. When M applies an instruction performing some operation, i.e., ADD, SUB, or HALT, the corresponding neuron begins to simulate that operation. We provide modules ADD, SUB, and FIN in order to simulate the three types of instructions of M.*Module ADD*. In Figure 1 we give the graphical description of the module simulating an instruction of the form l_i : (ADD(r), l_j, l_k).

The module functions as follows: here the reference neuron is σ_{l_i}, so it is labeled with "*". The simulation starts from the reference neuron σ_{l_i}: once it applies its rule, it sends one spike to each of its two auxiliary neurons at the scheduled step. In step 2, the first auxiliary neuron is active with two spikes and applies its extended rule at its scheduled step to send two spikes to each of neuron σ_r and the second auxiliary neuron. Neuron σ_r now has two new spikes added to its previous spikes; if the number of spikes in σ_r is even, then σ_r does not apply any rule. It is worth noting that we use an extended rule here instead of a standard rule because this is the strong sequential case, meaning that at most one neuron can fire at any step. At the next step the second auxiliary neuron nondeterministically selects which of its two rules to apply; either neuron σ_{l_j} or σ_{l_k} becomes activated depending on which rule is applied. We have the following two cases.*Case I*. If the second auxiliary neuron applies its first rule, then it fires only once, at its scheduled step. Applying this rule consumes all 3 of its spikes and sends one spike to each of neurons σ_{l_i} and σ_{l_j}; in this way, the single spike of σ_{l_i} is restored. At the next step σ_{l_j} is the only neuron that can apply a rule, and a spike is sent from neuron σ_{l_j} in order to begin simulating instruction l_j of M.*Case II*. If the second auxiliary neuron applies its second rule, then it fires twice. By firing first at step 3, it consumes one spike and sends one spike to each of neurons σ_{l_i} and σ_{l_k}; in this way, the single spike of σ_{l_i} is restored. At the next scheduled step both σ_{l_i} and the auxiliary neuron can apply their rules, but only the auxiliary neuron fires, since it has two spikes while σ_{l_i} has only one. The auxiliary neuron then applies its rule again, consuming one spike and sending one spike to each of its adjacent neurons; the neuron holding the most spikes subsequently applies its forgetting rule, and finally neuron σ_{l_k} becomes active to begin simulating instruction l_k of M.

The simulation of ADD is complete, as the number of spikes in neuron σ_r increased by two, hence increasing the content of register r by 1. Afterwards, the next instruction, either l_j or l_k, is chosen nondeterministically for simulation. We note that neuron σ_r remains inactive as long as its content remains even; hence, when simulating ADD, neuron σ_r does not apply any of its rules. The simulation ends by restoring all neurons to their initial spike contents, so the module is ready for another simulation of ADD.*Module SUB*. In Figure 2 we give the graphical description of the module SUB simulating an instruction of the form l_i : (SUB(r), l_j, l_k). The module functions as follows: at its scheduled step, the reference neuron σ_{l_i} is activated and fires, sending one spike to each of the neurons it is connected to, among them σ_r. The SUB module always has two cases, depending on the value n in register r: either n = 0 (register r is empty) or n > 0 (register r is nonempty). We explain both cases separately.*Case I*. When n = 0 in register r, neuron σ_r holds only the single spike received from σ_{l_i} and cannot fire. The auxiliary neurons of the module then fire one after another according to max-sequentiality (at each step, the active neuron currently holding the most spikes applies its rule), passing spikes among themselves until the extra spikes are forgotten and every neuron regains its original content. Lastly, neuron σ_{l_k} is activated to begin simulating instruction l_k of M.*Case II*. When n > 0 in register r, neuron σ_r has 2n + 1 spikes after receiving the spike from σ_{l_i}. Neuron σ_r fires at its scheduled step, consuming 3 spikes and sending a spike onward; hence σ_r now has 2(n − 1) spikes, corresponding to subtracting 1 from the content of register r. The auxiliary neurons then fire in turn according to max-sequentiality, restoring the single spike of σ_{l_i}, and lastly neuron σ_{l_j} is activated to simulate instruction l_j of M.

In both scenarios the module SUB is restored to its initial configuration after simulating a SUB instruction. Since a register r in M can be associated with two or more SUB instructions, we must check for interference among several SUB modules. We find that there is no interference, due to the semantics of local schedules: let l_i and l_s be SUB instructions on register r, and let l_i be the instruction to be simulated. Neuron σ_r can have synapses to neurons in the modules associated with both l_i and l_s, but due to the local scheduling of synapses, only the synapses associated with the simulated instruction l_i are available. The neurons not associated with l_i therefore receive no spikes from σ_r, and in this way no wrong simulations are performed by Π.*Module FIN: Halting the Computation*. To complete the computation, the module FIN is depicted in Figure 3. Assume that register machine M has halted; i.e., its instruction l_h : HALT has been applied. This means that Π simulates l_h and begins to output the result. Recall that register 1 is never decremented; i.e., it is never associated with a SUB instruction. Suppose register 1 holds the number n, so that its neuron σ_1 holds 2n spikes. The reference neuron σ_{l_h} fires, sending a spike to each of neuron σ_1 and the output neuron σ_out. At its scheduled step σ_out fires, sending the first spike to the environment. At the next step neuron σ_1 is activated, since it now holds an odd number of spikes, and for the following steps σ_1 continues to apply its rule, consuming 2 spikes per step. After n steps σ_1 has only one spike left, and at this step σ_out spikes, sending a spike to the environment for the second and last time; finally, the remaining spike is forgotten. The first and second spikes of σ_out were sent out at steps t1 and t2, respectively, with t2 − t1 − 1 = n. Hence the result of the computation of Π is exactly the value in register 1 when register machine M halts. All parameters rule_k, cons_p, and forg_q are satisfied, and an extended rule is used in the ADD module. This completes the proof.