Mathematical Problems in Engineering
Volume 2017, Article ID 1575828, 14 pages
https://doi.org/10.1155/2017/1575828
Research Article

The Strict-Sense Nonblocking Multirate Switching Network

Faculty of Electronics and Telecommunications, Chair of Communication and Computer Networks, Poznań University of Technology, Ul. Polanka 3, 60-965 Poznań, Poland

Correspondence should be addressed to Remigiusz Rajewski; remigiusz.rajewski@put.poznan.pl

Received 18 May 2016; Revised 7 November 2016; Accepted 14 November 2016; Published 7 February 2017

Academic Editor: Kyandoghere Kyamakya

Copyright © 2017 Wojciech Kabaciński and Remigiusz Rajewski. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper considers the nonblocking conditions for a multirate switching network at the connection level. The necessary and sufficient conditions for the discrete bandwidth model, as well as sufficient and, in particular cases, also necessary conditions for the continuous bandwidth model, were given. The results given for the discrete bandwidth model are the same as those proposed by Hwang et al. (2005); however, in this paper, these results were extended to other values of , , and . In the continuous bandwidth model for , the results given in this paper are also the same as those by Hwang et al. (2005); however, for , it was proved that a smaller number of vertically stacked switching networks are needed.

1. Introduction

A multirate switching network is a network in which any connection is associated with weight . Such a weight represents a certain bandwidth of input and output ports and interstage links connecting these input and output ports. The capacity of the links is normalized and is usually equal to 1 for interstage links. Input and output ports’ capacity is in many cases lower than the interstage links’ capacity and is denoted by , where . Weight is also limited by range , , where .

Depending on the possible values of , two models of multirate connections are considered in the literature: the discrete bandwidth model and the continuous bandwidth model. In the discrete bandwidth model, it is assumed that there are a finite number of distinct rates and the smallest rate divides all other rates , where . Denote and . The smallest rate is often called a channel. In this paper, we assumed that each internal link has channels and each input or output link has channels, where , , and . Every new connection is associated with a positive integer , where and is the maximal number of channels that one request can demand. In the continuous bandwidth model, connections may occupy any fraction of a link’s transmission capacity from interval . Both models are considered in this paper.
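The two admission models above can be sketched as simple data types; the class names and fields below are illustrative choices, not notation from the paper. A discrete request demands an integer number of channels bounded by the per-request maximum, while a continuous request carries a weight from a closed subinterval of the normalized link capacity:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DiscreteRequest:
    """A request for m channels, with 1 <= m <= t <= f."""
    m: int  # channels demanded by this connection
    t: int  # maximum channels any single request may demand
    f: int  # channels available on one interstage link

    def __post_init__(self):
        assert 1 <= self.m <= self.t <= self.f

@dataclass(frozen=True)
class ContinuousRequest:
    """A request for a weight w from [b, B], link capacity normalized to 1."""
    w: float  # requested fraction of a link's capacity
    b: float  # smallest admissible weight
    B: float  # largest admissible weight (B <= 1)

    def __post_init__(self):
        assert 0.0 < self.b <= self.w <= self.B <= 1.0
```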

One of the best known multistage switching networks is the 3-stage Clos network [1]. Nonblocking conditions for single-rate and multicast connections were considered by many authors [2–5]. The first upper bound of nonblocking conditions in the case of the continuous bandwidth model was proposed by Melen and Turner in [6]. This upper bound was later improved by Chung and Ross in [7]. In turn, asymmetrical switch configurations were considered in [8]. More generalized 3-stage Clos switching fabrics were considered by Liotopoulos and Chalasani [9]. The results derived in those papers were limited to or . Both sufficient and necessary nonblocking conditions for any and were proved in [10, 11] in the case of symmetrical and asymmetrical 3-stage Clos switching networks, respectively. In some papers, the blocking probability at the connection level was also considered [12–14].

Another switching network considered in the literature is the vertically stacked Banyan type switching network [15–18]. Multirate switching fabrics were considered in [19, 20], where necessary and sufficient conditions were given for the discrete bandwidth model when is an integer, and a sufficient condition, as well as a necessary condition for , for the continuous bandwidth model was proved. Better upper bounds, which in some cases are also lower bounds, for switching networks were given in [21, 22]. Multirate switching fabrics with multicast connections were considered in turn in [23]. Some architectures, which may be considered as special cases of networks, like extended delta and Cantor switching fabrics, were considered earlier in [6, 7, 24].

The results presented in [19, 20] have been improved by Hwang et al. in [25]. They proved both sufficient and necessary conditions for the strict-sense nonblocking operation of networks for the discrete bandwidth model, but only where . They also proved sufficient and necessary conditions for the continuous bandwidth model when and sufficient conditions when (the conditions are also necessary in case ).

In this paper, we also consider a multirate switching network, however, only for . The results for are under study. First, we extended the results given in [25] for the discrete bandwidth model to the general case. Then, we also introduced a sufficient condition for the continuous bandwidth model when . In most cases, these sufficient conditions are also necessary.

It should be noted that nowadays multirate switching networks [6–14, 16–19, 23, 24] are getting more popular each day [26–30]. A special kind of such a multirate network is an elastic optical network [31–36] which constitutes a “hot topic” in optical networking and switching. In the future, elastic optical networks will replace the current optical networks used by network operators and will also probably be used in data centers. In the elastic optical network, an optical path may occupy a bandwidth which is a multiple of the so-called frequency slot unit. This frequency slot unit occupies 12.5 GHz of the bandwidth and adjacent frequency slot units may be assigned to one optical path. This may be described by using the discrete bandwidth model with denoting the frequency slot unit and denoting the number of such units in one connection.
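As a small worked illustration of that mapping (the 12.5 GHz slot width is from the text; the function name is a hypothetical choice), the bandwidth of an elastic optical path in the discrete model is simply the number of adjacent frequency slot units times the slot width:

```python
FREQUENCY_SLOT_GHZ = 12.5  # width of one frequency slot unit (from the text)

def optical_path_bandwidth_ghz(m: int) -> float:
    """Bandwidth of an optical path occupying m adjacent frequency slot units."""
    assert m >= 1
    return m * FREQUENCY_SLOT_GHZ
```

For example, a path spanning four adjacent slot units occupies 50 GHz.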

Multirate type of structure can be used, for example, in data center networks [37–41] or in multiprocessor systems. By using multirate structures, it is also possible to handle network traffic in telecommunication and computer networks generated by many services handled by network providers or companies. Using a new type of switching network structures in data centers or multiprocessor systems allows building more energy-efficient and cheaper architectures, where the cost could be understood as the number of cross points [3, 4] or as the number of active and passive optical switching elements [42]. The topic of energy efficiency is not considered in this paper; however, this paper could be a starting point for such an investigation.

In turn, the classification of different types of services enables the proper management of resources and appropriate performance for each of these services, which is especially important in 4G/5G networks. Each service requires different resources, very often expressed in basic bandwidth units or in the number of channels, and in 4G/5G networks bandwidth is a crucial aspect. If sufficient resources are available, each considered service can be realized in such a switching network. It was assumed that one connection represents some service and each service requires a different number of channels , where the maximal number of channels one service could demand is , and .

The motivation of this paper was to improve the best results already known for the strict-sense nonblocking multirate switching network [25]. The following sections describe in detail how the better results were achieved. The switching network is nowadays a quite interesting architecture which could be used in optical network nodes, especially for , where the physical implementation is much easier than that for , where denotes the size of one switching element.

This paper is organized as follows. In Section 2, the model used in this paper is described. In the next section, the nonblocking operation of the considered network is discussed for the discrete bandwidth model. In Section 4, strict-sense nonblocking conditions for the continuous bandwidth model are considered. In the next section, a few numerical examples for both discrete and continuous bandwidth models are presented. In the same section, the results obtained in this paper are compared with the already known upper bounds. The last section includes conclusions.

2. Model Description

The switching network was proposed in [16]. Such a network is constructed by vertically stacking copies of networks (the main idea of stacking planes is shown in Figure 1). This architecture was later extended to the switching network in [17]. Such an extended network is obtained by adding extra stages to the network and vertically stacking copies of the network. The switching network is a particular case of the switching network which consists of symmetrical switching elements. The switching network is nonblocking in the strict sense when there are a sufficient number of vertically stacked planes , so any connection between a free input and a free output can be realized regardless of the routing algorithm used.
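The strict-sense nonblocking property of the stacked architecture can be paraphrased in a few lines of code; the function below is an illustrative sketch, not part of the paper's formalism. A new connection succeeds as long as at least one of the vertically stacked planes is not blocked for it, regardless of how earlier connections were routed:

```python
from typing import List, Optional

def first_accessible_plane(blocked: List[bool]) -> Optional[int]:
    """Return the index of the first plane whose path between the given
    free input and free output is not blocked, or None if every plane
    is blocked (a state a strict-sense nonblocking network never reaches)."""
    for i, is_blocked in enumerate(blocked):
        if not is_blocked:
            return i
    return None
```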

Figure 1: An example of switching network.

The architecture of the switching network (where , , and ) is shown in Figure 2. It consists of vertically stacked networks composed of switches. These networks are called planes. At each input terminal there is a splitter and at each output terminal there is a combiner . Throughout the discussion in this paper, bipartite graphs [43–46] will be used to represent the topology of the switching network [16, 18]. In a bipartite graph, which represents the space-division network, two paths are not allowed to intersect at a node. If this happens, this means that one of these two paths is blocked.

Figure 2: An example of switching network.

Before we move on to the next part of this paper, we will determine the number of paths that may intersect with a given path in one node. Let us assume that we want to set up connection between input terminal and output terminal . This connection will be blocked in one plane if another connection is already set up through one of the nodes belonging to path . These nodes can belong to the part of the switching network that is reachable from the input terminals or to the part that is reachable from the output terminals. These two parts of the switching network will be denoted as set . These parts are equal and both of them have or stages for odd or even, respectively, where . Let us determine the following. When is odd, we have , or when is even. So, the above stage numbers can be replaced by . The number of paths reachable at the node of stage is equal to . However, one of these paths is the path which belongs to the considered connection , so the number of paths that intersect with the given path is equal to . When is even, there is also the central stage and there are additional paths that intersect with the considered path , where denotes a set of these additional paths. The number of these additional paths is equal to . According to formulas (2) and (3), it should also be noted that the number of all paths that can intersect with the considered path is denoted by and is given by the following equation:
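As a hedged sketch of this counting argument, assume a single banyan-type plane built from 2 × 2 switching elements (the element size in the paper is a parameter; 2 × 2 is an assumption made here for concreteness). At a node in stage i, counted from the input side, 2^i input-side paths converge, and one of them is the tagged path itself:

```python
def paths_intersecting_at_stage(i: int) -> int:
    """Input-side paths that can intersect the tagged path at a node of
    stage i in a banyan plane of 2x2 elements: 2**i paths converge there,
    and one of them is the tagged path itself."""
    assert i >= 1
    return 2 ** i - 1
```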

3. Discrete Bandwidth Model

In the discrete bandwidth model, the capacity of the link is divided into bandwidth units also called channels. A connection may occupy such channels, where is usually limited by , which denotes the maximum number of channels one connection may occupy, and . In the general case, denotes the number of channels at each input and output link, while is the number of such channels in any interstage link. A connection between input and output which requests channels will be denoted by . This connection can be realized in input and output when , where denotes the number of channels already occupied by the existing connections at input (output ). Similarly, an interstage link can be used by the connection when , where denotes the number of channels occupied by connections already set up through this interstage link. This means that an interstage link which has channels occupied is inaccessible for connection . We will also use the following equations to make the final formulas more transparent:
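The accessibility test just described reduces to a one-line predicate; the parameter names occupied, m, and f (link capacity in channels) are illustrative:

```python
def link_accessible(occupied: int, m: int, f: int) -> bool:
    """True if a link of f channels with `occupied` busy channels can
    still accept a new m-channel connection."""
    assert 0 <= occupied <= f and 1 <= m <= f
    return occupied + m <= f
```

Equivalently, the link becomes inaccessible for the m-channel connection once f - m + 1 or more of its channels are occupied.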

Theorem 1. The switching network is strict-sense nonblocking for the discrete bandwidth model, and if and only if

Proof. Sufficient and necessary conditions will be proved by showing the worst state in the switching network. Let a new connection be . This connection may be blocked if any node of stages to carries connections that already occupy channels. These blocking connections can be set up from inputs, and they block in nodes of stages to , or to outputs (they will block the new connection in stages to ). When is even, may also be blocked in the node of stage by connections from inputs. Since there are channels at each input or output and channels block the connection in one plane, the number of planes that may be inaccessible by is given as follows. In the set of inputs, we may still have free channels and channels at input and output . Therefore, we may have channels free in input and output ports. Connections in these channels may occupy additional planes. Similarly, connections to the set of outputs may occupy additional planes, but when is even, we also have to consider the center stage. It must only be counted once, since connections from inputs of the set of inputs have to be set up to the set of outputs, to intersect with the path of the new connection in the node of stage . So, we may have additional planes inaccessible by the new connection. In the worst-case scenario, one more plane is needed to set up connection . Thus, finally, planes are needed.
The number of planes must be maximized through all and it reaches a maximum at ; then, by putting in respective formulas ((10), (13), and (14)), we obtain conditions given in Theorem 1 (see formula (9)).
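The counting step that closes the proof has the following general shape; this is a sketch under simplified naming of my own, not the paper's exact formula. The number of planes the blocking connections can make inaccessible is the integer quotient of the total blocking capacity by the per-plane blocking threshold, and one further plane must remain for the new connection itself:

```python
def worst_case_planes(blocking_capacity: int, per_plane: int) -> int:
    """Planes needed in the worst case: floor(blocking_capacity / per_plane)
    planes can be made inaccessible by the blocking connections, plus one
    plane to carry the new connection."""
    assert blocking_capacity >= 0 and per_plane >= 1
    return blocking_capacity // per_plane + 1
```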

In the case when , we have the space-division switching network and we may simplify the previous formulas. So, this time, the interstage link is inaccessible by the considered connection if it has channels occupied, and expressions (6), (7), and (8) take the following forms, respectively. As a result, Theorem 1 in this case is as follows. This result corresponds to the results given for space-division switching networks by Lea [16].

When and , we obtain the so-called -dilated switching network presented in Figure 3. The interstage links are inaccessible by a new connection when they have channels occupied, and expressions (6), (7), and (8) take the following forms, respectively. Theorem 1 in this case is as follows:

Figure 3: The -dilated switching network.

The discrete model could be adopted in all systems where channels are used. Thus, for example, this model will work for the UMTS system, the 4G system, the 5G system, and so on. It could be assumed that each kind of channel used, for example, in UMTS/4G/5G could be divided into basic bandwidth units (or, in fact, into the smallest channels or slots). One UMTS/4G/5G channel will then use at most small channels/slots.

4. Continuous Bandwidth Model

In the continuous bandwidth model, a connection may occupy any fraction of the link capacity. This fraction is called the weight of a connection and is denoted by . A connection from input to output of weight will be denoted by . Weight is limited by interval , , and , where denotes the normalized bandwidth of input and output links. Input (output ) is accessible by the new connection if , where denotes the total weight of all connections already set up at input (output ). An interstage link is accessible by this new connection when , where denotes the total weight of all connections already set up through this interstage link. Similarly as in [3, 22], the following functions will be used:

In these equations, denotes the number of connections that can be set up in one input (output) link of the smallest weight which will block a connection of weight in one interstage link. In turn, denotes a bandwidth in an input (output) link which remains after setting up connections of weight , where , in this link. denotes the number of connections of weight , so that such connections may block one interstage link for a new connection of weight . indicates whether weight can be used by the next connection or not.
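Two of the helper quantities described above can be sketched as follows; the function names and exact expressions are assumptions modeled on standard multirate analysis rather than copied from the paper (whose symbols are not reproduced here):

```python
import math

def connections_per_input(beta: float, b: float) -> int:
    """How many connections of the smallest weight b fit in an input
    (output) link of capacity beta."""
    assert b > 0.0 and beta >= 0.0
    return math.floor(beta / b)

def connections_to_block(w: float, b: float) -> int:
    """Smallest number k of weight-b connections whose total weight k*b
    exceeds the residual capacity 1 - w of an interstage link, so that a
    new connection of weight w no longer fits there."""
    assert 0.0 < b <= w <= 1.0
    return math.floor((1.0 - w) / b) + 1
```

Note that the floor-based form assumes 1 - w is not an exact multiple of b; exact multiples need the usual floor-versus-ceiling care.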

We will also use the following equations to make our final formulas more transparent:

Theorem 2. The switching network is strict-sense nonblocking for the continuous bandwidth case, and if , where is given in the following formula:

Proof. Sufficient conditions will be proved by showing the worst state in the switching network. Suppose we want to add the new connection denoted by , where . The connection path from input terminal will be inaccessible for the new connection of weight if there is a node in the connecting path which already carries connections of the total weight greater than . In the worst-case scenario, this sum of weights should be as small as possible, say , where is close to but greater than . However, in the worst-case scenario, when , the path is inaccessible if a connection of weight is set up through at least one of its nodes. When , it is not possible that one connection will occupy weight . In this case, at least two connections must be set up. When , the total weight of the blocking connections is equal to . In other cases, it is always possible to set up one or more connections of the total weight . Therefore, three cases can be distinguished:
(i) Case 1: for ;
(ii) Case 2: for ;
(iii) Case 3: for other values of .

Case 1 (). A plane is inaccessible for a new connection if there is a node on the connecting path which already carries a connection of weight . In the worst-case scenario, each of such connections can be set up through a different plane. In one input link, connections of weight can be set up. If is not an integer, there is some free bandwidth at this input terminal, but its weight is lower than and it cannot be used to set up the next connection in this link. The connecting path of connection may be blocked by another connecting path from input or output terminals. Connection may be blocked in the node of stage , where , so there are such paths from the input terminals. Connection may also be blocked in the node of stage , where , so there are such paths to the output terminals, too. Thus, there are available inputs and outputs, and connections of weight from these inputs or to these outputs may block planes.
When is even, there is also the central stage and (when is odd, ). So, there may be additional planes blocked. In the worst-case scenario, one more plane is needed to set up connection . Therefore, finally, planes are needed.
Case 2 (). A plane is inaccessible for a new connection , if there is a node on the connecting path which already carries two connections of weight . Similar to the previous case, there may be connections of weight at each input terminal other than and connections of weight at input terminal . Thus, there are connections, and connections make a node inaccessible for the new connection . The same situation occurs at every output terminal. Therefore, planes are needed. When is even, then there is also the central stage. Connections passing through the node which belongs to this stage block planes. It should be noted that since , is always even and there will be no bandwidth of weight available for the next connection from inputs counted in . There may only be a free bandwidth of weight at inputs counted in or in input ; however, one connection of such weight will not block the new connection. So, in the worst-case scenario, one more plane is needed to set up connection . Therefore, finally, planes are needed.
Case 3 (other values of ). A plane is inaccessible for a new connection , if there is a node on its connecting path that already carries connections of the total weight greater than . When , only one connection of such weight may be set up. In the other case, at least two connections must be set up and their total weight should be greater than . At each input or output terminal, such connections may be set up and planes may be blocked for the new connection. However, there is still a free bandwidth of weight at input terminal and at output terminal , too. When , a connection of weight greater than at these terminals cannot be set up. So, there is still a free bandwidth of weight at each input and output terminal, except and . It may be used by subsequent connections, provided that . If or and , several connections of such weight, which pass through one node, may make the plane inaccessible by the new connection. The minimum number of these connections is denoted by . Therefore, the next planes will be inaccessible by the new connection. However, there can still be some free bandwidth of weight at input terminal and free bandwidth of weight at input terminals not counted in . The same situation occurs at the output terminals. When this weight is greater than , it may block additional planes. We still have a free bandwidth of weight but this bandwidth is lower than or equal to and cannot block a plane for the new connection. When , the total weight of the free bandwidth at the input terminals is equal to . Connections using this bandwidth and the free bandwidth of weight at input terminal will occupy not more than planes. The weight of all the remaining bandwidth is now . The same situation occurs at the output terminals, including . When is even, similar considerations may be used to obtain the number of planes blocked by the connections from inputs counted in (it should be noted that, in , we have no input and output , so cannot be considered). Thus, there are planes.
Nevertheless, at each input terminal counted in the central stage, there may be a free bandwidth of weight or , depending on being higher or lower than . Therefore, we may have additional planes inaccessible when or and , or there may be planes inaccessible when . It is also possible that and . In this case, is always equal to , so in the worst state there can be only connections at each input (output) terminal, and each of them has weight , or one connection of weight . Therefore, in the worst-case scenario, only connections of weight may be considered. There are such input (output) terminals. At input and output , we have a free bandwidth of weight which may be used by connections of weight . At one interstage link, there are connections of weight which make a plane inaccessible for connection . So, planes are needed. Similarly, in the central stage, there are inaccessible planes. In a particular case, and we have the same formula as that from Case . In the worst state, one more plane is needed to set up connection . So, finally, the number of planes needed for other values of is as follows. Combining all cases together and using , we can write the following. In Case , we have , but since , we have . It should be noted that, for Cases and , the given conditions are also necessary. In Case and for , our conditions are also sufficient. For , the conditions given in Theorem 2 are also necessary.
The number of planes must be maximized through all and it reaches a maximum at , so putting in the respective formulas, we obtain the conditions given in Theorem 2.
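The three-way case split of the proof can be summarized as a dispatch on how many connections are needed to exceed the residual capacity 1 − w of a node; the thresholds below are a deliberate simplification (the proof's actual case boundaries are parameter-dependent), intended only to show the structure of the argument:

```python
def blocking_case(w: float, b: float, B: float) -> int:
    """Which blocking regime applies for a new connection of weight w,
    given admissible weights in [b, B]: 1 if a single largest-weight
    connection already exceeds 1 - w, 2 if two minimum-weight connections
    do, and 3 if more connections are required."""
    assert 0.0 < b <= w <= B <= 1.0
    if B > 1.0 - w:
        return 1
    if 2.0 * b > 1.0 - w:
        return 2
    return 3
```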

5. Comparison and Results

5.1. Examples for Discrete Bandwidth Model

In the following few examples, the worst-case scenarios in switching fabrics with different parameters are shown for the discrete bandwidth model.

Example 1. Let us consider a switching network with , , , and , and let the maximum number of channels occupied by one connection be . In this example, the switching fabric requires planes to be strict-sense nonblocking. The worst state in this switching network is shown in Figure 4. The plane will be inaccessible for connection if there is a node on the connecting path which already carries connections of the total number of occupied channels greater than or equal to . In this example, there is also a central stage, so plane is inaccessible by (in Figure 4, it is plane 1). In the set of inputs, there are still free channels and free channel at input , but these free channels cannot block a plane for ; therefore, . There is also a central stage and free channel. So, plane is inaccessible by the new connection (in Figure 4, it is plane 2). Similarly, at the output terminals, there are also free channels, but it is still possible to set up connection and we need only one additional plane (in Figure 4, it is plane 3). Finally, planes are needed.

Figure 4: The worst state in the switching network with , , , , , and ; filled cells denote occupied channels; empty cells denote free channels.

Example 2. This time, let us consider a switching network with , , , and , and let the maximum number of channels occupied by one connection be . The switching network, according to Theorem 1, requires planes. The worst state in the switching network is shown in Figure 5. The plane will be inaccessible for connection if there is a node on the connecting path which already carries a connection of the total number of occupied channels greater than or equal to . We also have a central stage, so plane is needed (in Figure 5, it is plane 1). In the set of , we have free channels and free channel at input . But these channels cannot block a plane for and . In the central stage, there is free channel, but, together with channels, the 6 mentioned channels (5 + 1) cannot block a plane for connection ; therefore, . Similarly, at the output terminals, there are free channels and it is still possible to set up a new connection in the same plane with them. So, we only need one additional plane (in Figure 5, it is plane 2). Finally, planes are needed.

Figure 5: The worst state in the switching network with , , , , , and ; filled cells denote occupied channels; empty cells denote free channels.

Example 3. Let us consider a switching network with parameters , , , and , and let the maximum number of channels occupied by one connection be . In this example, the switching fabric contains 3 planes. The worst state in the switching network is shown in Figure 6. The plane will be again inaccessible for connection if there is a node on the connecting path which already carries a connection of the total number of occupied channels greater than or equal to . There is also a central stage and plane is inaccessible by (in Figure 6, it is plane 1). We have the set of inputs and there are still free channels. In this example, there are no free channels at input and output , so is also equal to and these free channels cannot block a plane for ; therefore, . In the central stage, there are free channels. So, plane is inaccessible by the new connection (in Figure 6, it is plane 2). At the input terminal, there is now only one free channel and it will not block a plane for the next connection. At the output terminals, there are free channels, but it is still possible to set up connection and we only need one additional plane (in Figure 6, it is plane 3). Finally, planes are needed.

Figure 6: The worst state in the switching network with , , , , , and ; filled cells denote occupied channels; empty cells denote free channels.
5.2. Examples for Continuous Bandwidth Model

In the following few examples, the worst-case scenarios in switching fabrics with different parameters are shown for the continuous bandwidth model.

Example 1. Let us consider a switching network with , , and , and let connection weights be between and . In this example, the nonblocking conditions are determined by Case , because and . This switching network contains 5 planes. The worst state in the switching network is shown in Figure 7. A plane will be inaccessible for connection if there is a node on the connecting path which already carries connections of the total weight higher than . Two such connections can be set up in two input terminals. In both of these input terminals, there is still a free bandwidth of weight lower than 0.4 and it cannot be used by the next connection in this link. In both input terminal and output terminal , there is no free bandwidth, because connection occupies the whole link capacity. Therefore, there are planes inaccessible by connection (in Figure 7, these are planes 1 and 2). In this case, there is also a central stage. Similarly, there are additional inaccessible planes (in Figure 7, these are planes 3 and 4). But in the worst state, one more plane is needed to set up connection (in Figure 7, this is plane 5). So, finally, we need planes.

Figure 7: The worst state in the switching network with , , , , , and .

Example 2. Let us this time consider a switching network with , , and , and let connection weights be between and . In this example, the nonblocking conditions are determined by Case , because has a different value and . The switching network, according to Theorem 2, contains planes. The worst state in the switching network is shown in Figure 8. The plane will be inaccessible for connection if there is a node on the connecting path which already carries a set of connections of the weight higher than . One such set of connections can be set up in one input terminal. In this input terminal, there is free bandwidth of weight lower than (because ) and it cannot be used by a new connection in this link. Therefore, planes will be inaccessible by connection (in Figure 8, these are planes 1, 2, and 3). The same situation occurs at the output terminals (in Figure 8, these are planes 4, 5, and 6). In this example, there is no central stage and , so there are no other unconsidered terminals, except terminals and . In both of them, it is possible to set up a connection of weight , but the plane will be still accessible for connection . Thus, one additional plane is needed (in Figure 8, this is plane 7). Finally, there are planes.

Figure 8: The worst state in the switching network with , , , , , and .

Example 3. In this example, we consider a switching network with , , and , and the connection weights are between and . The nonblocking conditions are determined by Case , because has different values and . This switching network contains 23 planes. The worst state is shown in Figure 9. The plane will be inaccessible for connection if there is a node on the connecting path which already carries a set of connections of the total weight greater than . Three such sets of connections can be set up in one input terminal, so (in Figure 9, these are planes from 1 to 9). In one of these input terminals, there is a free bandwidth of weight higher than (because ) and it can be used by the next connection in this link. However, this bandwidth from one link cannot block connection , but such connections can make a plane inaccessible to a new connection. So, there is another inaccessible plane (in Figure 9, this is plane 10). But there is free bandwidth at the input terminals not considered in . We also have some free bandwidth at input terminal , so plane will be inaccessible for connection (in Figure 9, this is plane 11). The same situation occurs at the output terminals (in Figure 9, these are planes from 12 to 22). In this example, there is no central stage, so, in the worst state, one more plane is needed (in Figure 9, this is plane 23). So, finally, there are planes.

Figure 9: The worst state in the switching network with , , , , , and .
5.3. Comparison and Results for Continuous Bandwidth Model

In Tables 1–3, we compared our results with the sufficient conditions proposed in [25], where means Theorem 2. In these tables, when grows, the number of required planes also grows, but, in the cases for , our results are better than or equal to those in [25]. For any case, except for Case when and , our results show that fewer planes are required than in [25].

Table 1: Number of planes for , , and (Case  3 for ).
Table 2: Number of planes for , , and (Case  3 for ).
Table 3: Number of planes for , , and (Case  3 for ).

In Table 4, the number of planes for different is compared when , , and . In this table, we have Cases and . As can be seen from this table, for Case for , we always obtain a better result than those given in [25]. For Case , for and for and when , we also always obtain better results. In Case , for , our results constitute an upper bound and are the same as those proposed in [25].

Table 4: Number of planes for , , and (c1: Case  1; c3a: Case  3 for ; c3b: Case  3 for or for and ; c3c: Case  3 for ).

In Tables 5 and 6, switching networks for Case are compared. As can be seen from Table 5, for Case , our results are better than those given in [25]. In Table 6, for Cases and , for and , we always obtain better results than in [25]. In the other cases in these tables (always Case for ), our results constitute an upper bound; however, they are not worse than those proposed in [25].

Table 5: Number of planes for , , and (c2: Case  2 for and or for and ; c3c: Case  3 for ).
Table 6: Number of planes for , , and (c2: Case  2; c3b: Case  3 for and ; c3c: Case  3 for ).

6. Conclusions

In this paper, we investigated the nonblocking behavior of multirate switching networks. In the discrete bandwidth model, we extended the conditions proposed in [25] and showed numerical examples for this model. In the continuous bandwidth model, we proved that, for , fewer planes are needed for the multirate switching network to be nonblocking in the strict sense than was previously known. The results we obtained are compared in Tables 1–6. We also showed a few numerical examples for the continuous bandwidth case. For , our results confirm those known earlier. Although only the sufficient conditions were considered in the proof of Theorem 2, it should be noted that, for Cases , , and with , they are also necessary conditions. This means that a blocking state can be shown in a switching network composed of fewer planes than given in Theorem 2. For Case with , our result constitutes an upper bound.

The presented multirate switching networks can be used, for example, in data centers or in 4G/5G networks, where bandwidth is a crucial aspect. The discrete model describes the behavior of a multiservice system in which each service may require a different number of basic bandwidth units (channels). When a connection may occupy any fraction of the available bandwidth, that is, when a service may use bandwidth of arbitrary size, the continuous model is used instead.
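The difference between the two models can be illustrated with a minimal admission check on a single normalized link. The helper functions and parameter names below are our illustration, not part of the paper's formalism:

```python
# Minimal sketch: can a new connection fit on one link under each model?
# In the discrete model, a link carries an integer number of channels and a
# connection demands an integer number of them; in the continuous model, the
# link capacity is normalized to 1 and a weight is any real value in [b, B].

def fits_discrete(occupied_channels: int, demanded_channels: int, k: int) -> bool:
    """Discrete model: the link offers k channels in total."""
    return occupied_channels + demanded_channels <= k

def fits_continuous(occupied_weight: float, new_weight: float) -> bool:
    """Continuous model: weights are arbitrary reals, capacity normalized to 1."""
    return occupied_weight + new_weight <= 1.0

print(fits_discrete(6, 2, 8))     # True: 6 + 2 channels fit in k = 8
print(fits_continuous(0.7, 0.4))  # False: 0.7 + 0.4 exceeds capacity 1
```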

The nonblocking operation of switching networks is currently under study.

Notations

:A new connection from input to output which occupies channels
:Total number of paths which may intersect with a considered path in a bipartite graph
:Number of paths accessible from the input side of the switching network which may intersect with a considered path in a bipartite graph
:Number of paths accessible from the output side of the switching network which may intersect with a considered path in a bipartite graph
:Capacity of input/output port
:Minimum capacity of a single connection
:Maximum capacity of a single connection
:Number of inputs/outputs in a single switching element
:Number of channels demanded by a new connection
:Maximum number of channels demanded by a new connection
:Number of channels available in the input/output link
:Number of channels available in the interstage link
:Number of stages in the switching network
:Capacity of the switching network
:Number of vertically stacked planes in a switching network
:Weight of a single connection
:Stages’ parity indicator.

Competing Interests

The authors declare that there are no competing interests regarding the publication of this article.

Acknowledgments

The work described in this paper was financed from the funds of the Ministry of Science and Higher Education for the year 2016 (Grant no. 08/82/DSMK/8218).

References

  1. C. Clos, “A study of non-blocking switching networks,” Bell System Technical Journal, vol. 32, no. 2, pp. 406–424, 1953.
  2. F. K. Hwang, The Mathematical Theory of Nonblocking Switching Networks, vol. 15 of Series on Applied Mathematics, World Scientific Publishing, River Edge, NJ, USA, 2nd edition, 2004.
  3. W. Kabacinski, Nonblocking Electronic and Photonic Switching Fabrics, Kluwer Academic, 2005.
  4. A. Pattavina, Switching Theory: Architecture and Performance in Broadband ATM Networks, John Wiley & Sons, Berlin, Germany, 1998.
  5. G. Maier and A. Pattavina, “Multicast three-stage Clos networks,” Computer Communications, vol. 33, no. 8, pp. 923–928, 2010.
  6. R. Melen and J. S. Turner, “Nonblocking multirate networks,” SIAM Journal on Computing, vol. 18, no. 2, pp. 301–313, 1989.
  7. S.-P. Chung and K. W. Ross, “On nonblocking multirate interconnection networks,” SIAM Journal on Computing, vol. 20, no. 4, pp. 726–736, 1991.
  8. M. Collier and T. Curran, “The strictly non-blocking condition for three-stage networks,” in Proceedings of the 14th International Teletraffic Congress (ITC '94), pp. 635–644, Antibes Juan-les-Pins, France, 1994.
  9. F. K. Liotopoulos and S. Chalasani, Strictly Nonblocking Operation of 3-Stage Clos Switching Networks, Performance Modeling and Evaluation of ATM Network, vol. 2, Chapman & Hall, London, UK, 1996.
  10. W. Kabacinski and F. K. Liotopoulos, “Non-blocking three-stage multirate switching networks,” in Proceedings of the 6th IFIP Workshop on Performance, Modelling and Evaluation of ATM Networks, vol. 26, pp. 1–10, Ilkley, UK, 1998.
  11. W. Kabacinski, “Non-blocking asymmetrical three-stage multirate switching networks,” in Proceedings of the International Conference on Communication Technology (ICCT '98), vol. 1, pp. S11-11/1–S11-11/5, Beijing, China, October 1998.
  12. S. C. Liew, M.-H. Ng, and C. W. Chan, “Blocking and nonblocking multirate Clos switching networks,” IEEE/ACM Transactions on Networking, vol. 6, no. 3, pp. 307–318, 1998.
  13. M. Stasiak, “Combinatorial considerations for switching systems carrying multi-channel traffic streams,” Annales des Télécommunications, vol. 51, no. 11-12, pp. 611–626, 1996.
  14. E. Valdimarsson, “Blocking in multirate interconnection networks,” IEEE Transactions on Communications, vol. 42, no. 2/3/4, pp. 2028–2035, 1994.
  15. S. Kaczmarek, “Characteristic of vertically stacked switching networks,” Telecommunications Review, vol. 56, no. 2, pp. 54–56, 1983 (in Polish).
  16. C.-T. Lea, “Multi-log2 N networks and their applications in high-speed electronic and photonic switching systems,” IEEE Transactions on Communications, vol. 38, no. 10, pp. 1740–1749, 1990.
  17. D.-J. Shyy and C.-T. Lea, “Log2 (N, m, p) strictly nonblocking networks,” IEEE Transactions on Communications, vol. 39, no. 10, pp. 1502–1510, 1991.
  18. C.-T. Lea and D.-J. Shyy, “Tradeoff of horizontal decomposition versus vertical stacking in rearrangeable nonblocking networks,” IEEE Transactions on Communications, vol. 39, no. 6, pp. 899–904, 1991.
  19. C.-T. Lea, “Multirate log2(N,e,p) networks,” in Proceedings of the IEEE Global Telecommunications Conference (GLOBECOM '94), pp. 319–323, San Francisco, Calif, USA, December 1994.
  20. C.-T. Lea, “Buffered or unbuffered: a case study based on logd(N,e,p) networks,” IEEE Transactions on Communications, vol. 44, no. 1, pp. 105–113, 1996.
  21. W. Kabaciński and G. Danilewicz, “Non-blocking operation of multi-log2(N) switching networks,” in Proceedings of the 3rd IEEE International Workshop on Broadband Switching Systems, pp. 140–144, Kingston, Ontario, Canada, 1999.
  22. W. Kabaciński and M. Żal, “Non-blocking operation of multi-log2 N switching networks,” System Science, vol. 25, no. 4, pp. 83–97, 1999.
  23. W. Kabaciński and T. Wichary, “Multi-log2(N) multirate switching networks with multicast connections,” in Proceedings of the Polish Teletraffic Symposium, pp. 297–317, Kraków, Poland, 2003.
  24. R. Melen and J. S. Turner, “Nonblocking multirate networks,” in Proceedings of the 8th Annual Joint Conference of the IEEE Computer and Communications Societies (INFOCOM '89), Ottawa, Canada, 1989.
  25. F. K. Hwang, Y. He, and Y. Wang, “Strictly nonblocking multirate logd(N, m, p) networks,” SIAM Journal on Computing, vol. 34, no. 5, pp. 1271–1278, 2005.
  26. N. Sambo, M. Secondini, F. Cugini et al., “Modeling and distributed provisioning in 10-40-100-Gb/s multirate wavelength switched optical networks,” Journal of Lightwave Technology, vol. 29, no. 9, pp. 1248–1257, 2011.
  27. M. Bertolini, O. Rocher, A. Bisson, P. Pecci, and G. Bellotti, “Multi-rate vs. OTN: comparing approaches to build scalable, cost-effective 100Gb/s networks,” in Proceedings of the European Conference and Exhibition on Optical Communication (ECEOC '12), Amsterdam, The Netherlands, September 2012.
  28. V. G. Vassilakis, I. D. Moscholios, and M. D. Logothetis, “The extended connection-dependent threshold model for call-level performance analysis of multi-rate loss systems under the bandwidth reservation policy,” International Journal of Communication Systems, vol. 25, no. 7, pp. 849–873, 2012.
  29. H. Beyranvand and J. A. Salehi, “Multirate and multi-quality-of-service passive optical network based on hybrid WDM/OCDM system,” IEEE Communications Magazine, vol. 49, no. 2, pp. S39–S44, 2011.
  30. Z. Guo and Y. Yang, “On nonblocking multirate multicast fat-tree data center networks with server redundancy,” in Proceedings of the IEEE 26th International Parallel and Distributed Processing Symposium (IPDPS '12), pp. 1034–1044, May 2012.
  31. N. Sambo, P. Castoldi, F. Cugini, G. Bottari, and P. Iovanna, “Toward high-rate and flexible optical networks,” IEEE Communications Magazine, vol. 50, no. 5, pp. 66–72, 2012.
  32. M. Jinno, H. Takara, B. Kozicki, Y. Tsukishima, Y. Sone, and S. Matsuoka, “Spectrum-efficient and scalable elastic optical path network: architecture, benefits, and enabling technologies,” IEEE Communications Magazine, vol. 47, no. 11, pp. 66–73, 2009.
  33. O. Gerstel, M. Jinno, A. Lord, and S. J. B. Yoo, “Elastic optical networking: a new dawn for the optical layer?” IEEE Communications Magazine, vol. 50, no. 2, pp. S12–S20, 2012. View at Publisher · View at Google Scholar · View at Scopus
  34. J. M. Simmons, Optical Network Design and Planning, Springer, Berlin, Germany, 2nd edition, 2014.
  35. M. Klinkowski and K. Walkowiak, “On the advantages of elastic optical networks for provisioning of cloud computing traffic,” IEEE Network, vol. 27, no. 6, pp. 44–51, 2013. View at Publisher · View at Google Scholar · View at Scopus
  36. W. Kabaciński, M. Michalski, and M. Abdulsahib, “The strict-sense nonblocking elastic optical switch,” in Proceedings of the IEEE 16th International Conference on High-Performance Switching and Routing (HPSR '15), pp. 1–6, Budapest, Hungary, July 2015. View at Publisher · View at Google Scholar
  37. C. Guo, H. Wu, K. Tan, L. Shi, Y. Zhang, and S. Lu, “DCell: a scalable and fault-tolerant network structure for data centers,” ACM SIGCOMM Computer Communication Review, vol. 38, pp. 75–86, 2008. View at Google Scholar
  38. M. Al-Fares, A. Loukissas, and A. Vahdat, “A scalable, commodity data center network architecture,” in Proceedings of the ACM SIGCOMM Conference on Data Communication (SIGCOMM '08), vol. 38, pp. 63–74, Seattle, Wash, USA, August 2008. View at Publisher · View at Google Scholar · View at Scopus
  39. D. Guo, T. Chen, D. Li, Y. Liu, X. Liu, and G. Chen, “BCN: expansible network structures for data centers using hierarchical compound graphs,” in Proceedings of the IEEE INFOCOM, pp. 61–65, Shanghai, China, April 2011. View at Publisher · View at Google Scholar · View at Scopus
  40. L. Chen, E. Hall, L. Theogarajan, and J. Bowers, “Photonic switching for data center applications,” IEEE Photonics Journal, vol. 3, no. 5, pp. 834–844, 2011. View at Publisher · View at Google Scholar · View at Scopus
  41. Y. Ohsita and M. Murata, “Data center network topologies using optical packet switches,” in Proceedings of the 32nd IEEE International Conference on Distributed Computing Systems Workshops (ICDCSW '12), pp. 57–64, June 2012. View at Publisher · View at Google Scholar · View at Scopus
  42. G. Danilewicz and R. Rajewski, “The architecture and strict-sense nonblocking conditions of a new baseline-based optical switching network composed of symmetrical and asymmetrical switching elements,” IEEE Transactions on Communications, vol. 62, no. 3, pp. 1058–1069, 2014. View at Publisher · View at Google Scholar · View at Scopus
  43. C. L. Liu, Introduction to Combinatorial Mathematics, McGraw-Hill, New York, NY, USA, 1968. View at MathSciNet
  44. D. B. West, Introduction to Graph Theory, Prentice Hall, 1996. View at MathSciNet
  45. R. J. Wilson, Introduction to Graph Theory, Prentice Hall, 5th edition, 2012.
  46. C.-T. Lea, “Bipartite graph design principle for photonic switching systems,” IEEE Transactions on Communications, vol. 38, no. 4, pp. 529–538, 1990. View at Publisher · View at Google Scholar · View at Scopus