Research Article  Open Access
Strict-Sense Nonblocking Conditions for the Multirate Switching Fabric for the Discrete Bandwidth Model
Abstract
The article discusses the strict-sense nonblocking conditions derived for the multirate switching fabric under the discrete bandwidth model at the connection level. The architecture of the switching fabric was described in a previous study; however, conditions for the multirate discrete bandwidth model, as well as a comparison with different structures, have not been published before. Both sufficient and necessary conditions are introduced and proved in this study. A few numerical examples that help to illustrate the idea of the multirate bandwidth model for switching fabrics are provided as well. Additionally, the achieved results are compared with banyan switching structures, and the costs of all structures mentioned in this study, expressed as the number of optical elements, are compared as well.
1. Introduction
The switching fabric was formally described in [1], where denotes the number of inputs/outputs of the switching fabric. It was shown that this switching network is a better solution for optical switching than typical banyan-type switching networks. The multiplane version of the mentioned switching architecture is called the multi switching fabric, and it was also described in detail in [1]. A multiplane switching network is obtained by vertically stacking copies of the structure. These planes have to be connected to the inputs and outputs of the switching fabric. The idea of a multiplane structure is shown in Figure 1. The exact number of planes used to build the multi switching fabric depends strongly on the type of nonblocking conditions. The strict-sense nonblocking (SSNB) and rearrangeable nonblocking (RNB) conditions for the space-division multi switching fabric were described in detail and proved in [1]. The multi switching network was later extended to the MBA switching fabric (where is the maximal number of inputs and/or outputs one switching element can have; for details see [2]), whereas the SSNB and RNB conditions for the MBA switching network were delivered in [2] and [3], respectively.
In [1] it was shown that the switching fabric is a very attractive solution compared with banyan [4–6], omega [5–7], and baseline [5, 6, 8] switching networks. The attractive element of this solution is the cost of one plane, as well as of the whole multiplane switching fabric, where the cost is expressed as the number of active and passive optical switching elements.
A multirate switching network is a structure in which any connection is associated with a weight representing a certain bandwidth of the input and output links as well as of the interstage links. Interstage links simply connect inputs with outputs of the switching network. In the discrete bandwidth model, it is assumed that there is a finite number of rates and that the lowest rate divides all other rates. The lowest rate is very often called a channel or a basic bandwidth unit. In this study, it is assumed that each interstage link has channels (or rates) and each input and output link has channels (or rates), where . A new connection may require an integer number of channels , where and denotes the maximal number of channels demanded by a single request. The discrete bandwidth model will be considered in this study.
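The admissibility of a single request under the discrete bandwidth model can be sketched as follows. The parameter names and values below are illustrative assumptions, not taken from the article; they only mirror the stated constraints that a request demands an integer number of channels between one and a maximum, which itself does not exceed the input/output link capacity.

```python
# Illustrative parameters; names and values are assumptions, not the article's.
F_IO = 5       # channels on each input/output link
F_INTER = 10   # channels on each interstage link (F_IO <= F_INTER)
M_MAX = 3      # maximal number of channels one request may demand (M_MAX <= F_IO)

def valid_request(m: int) -> bool:
    """A request must demand an integer number of channels between 1 and M_MAX."""
    return 1 <= m <= M_MAX

print([m for m in range(6) if valid_request(m)])  # → [1, 2, 3]
```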
Different multirate switching networks were described in [9–22]. It should be noted that this type of switching network has been considered in recent years as an interesting solution [23–27]. This type of structure can be used, for example, in Data Center Networks (DCNs) [28–32], in multiprocessor systems, and in UMTS, 4G, or 5G networks, where bandwidth is a very crucial aspect. By using multirate structures, it is also possible to handle the different kinds of network traffic in telecommunication and computer networks generated by the many services handled by network providers or companies. Such different types of traffic could be caused by, for example, video streaming, voice calls in Voice over IP (VoIP) services, data transmission, website viewing by home and company users, and so on. Using a new type of switching network structure in DCNs or multiprocessor systems allows building more energy-efficient and cheaper architectures, where the cost could be understood as the number of active and passive optical switching elements [1, 2, 33]. However, the topic of energy efficiency is not considered in this study and will be discussed in a future article. In turn, the classification of different types of services enables the proper management of resources and appropriate performance for each of these services. Each service requires different resources, expressed very often in basic bandwidth units or in the number of channels. If sufficient resources are available, each considered service can be realized in such a switching network. It was assumed that one connection represents some service and each service requires a different number of channels , where the maximal number of channels one service could demand is , and .
This is especially important for UMTS/4G/5G networks, where bandwidth is a very crucial aspect due to the many users to be served and the restrictions on the available bandwidth.
It should also be noted that a special type of multirate switching network is the Elastic Optical Network (EON), regarded as a “hot topic” in optical networking and switching [34–43]. Optical networks currently used by network operators and Internet providers will in the near future be replaced by EONs. Moreover, EONs will probably soon be widely used in DCNs as well [44–47]. In an EON, any connection is established on an optical path. Such a path can occupy a bandwidth that is a multiple of a so-called frequency slot unit (FSU) [36]. A frequency slot unit occupies 12.5 GHz of bandwidth [48], and adjacent frequency slot units may be assigned to one optical path to set up a connection. This may be modeled by the discrete bandwidth model, with a single rate denoting the frequency slot unit and denoting the number of such units in one connection. The discrete bandwidth model of the switching fabric could be used in EONs; however, the SSNB conditions will differ from those described in this study, due to the assumption that only adjacent free FSUs (not arbitrary free FSUs) are used in EONs for each connection. Therefore, a description of how to adopt the SSNB switching network in an EON is not a topic of this study and will be presented in a future article.
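The mapping of an elastic connection onto the discrete bandwidth model can be illustrated by a small calculation, a sketch assuming only that the requested bandwidth is divided by the 12.5 GHz slot width and rounded up; note that it deliberately ignores the adjacency constraint on FSUs mentioned above.

```python
import math

FSU_GHZ = 12.5  # width of one frequency slot unit, as stated in the text

def fsus_needed(bandwidth_ghz: float) -> int:
    # An optical path occupies a whole multiple of the FSU, so round up.
    # Adjacency of the assigned FSUs is NOT modeled here.
    return math.ceil(bandwidth_ghz / FSU_GHZ)

print(fsus_needed(37.5), fsus_needed(40.0))  # → 3 4
```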
The remainder of this paper is organized as follows. In Section 2, a general model representing the switching fabric is described. This model constitutes a starting point for the considerations included in the next sections of this article. In Section 3, the discrete bandwidth model is described. A relation between the number of channels required by a new connection and the number of channels available in the input (output) links as well as in the interstage links is delivered. A theorem giving the proper number of planes for the strict-sense nonblocking switching fabric is delivered, and a proof of the sufficient and necessary conditions is included in this section as well. In Section 4, a few numerical examples explaining how to find the proper number of planes for preset switching network parameters are shown. In Section 5, the achieved results and a comparison of the switching fabric with the banyan structure are presented. The last section constitutes the conclusions and directions of future work.
2. Model Description
Both one plane and multiplane switching fabrics have a capacity and are built from stages [1]. If the number of stages is even, two outer stages (the input stage and the output stage ), as well as inner stages (denoted by , where ), can be found. In turn, if the number of stages in the switching network is odd, one more stage can be identified. This additional stage is called the central stage and it is denoted by . The switching fabric stages are shown in Figure 2(a).
(a)
(b)
In the remaining part of this article, the topology of the switching network will be represented by a bipartite graph [20, 22, 49–52]. Thus, it is possible to simplify the analysis of occupied channels and planes for the discrete bandwidth model. Such a bipartite graph is built from nodes and edges. Each node corresponds to exactly one input, output, or interstage link in a switching network. In turn, each edge corresponds to exactly one crosspoint in such a switching network. An example of bipartite graph representation of one plane of the switching network from Figure 2(a) is shown in Figure 2(b).
A graph of intersecting paths will also be used [1, 20, 22, 49]. Such a graph constitutes a subgraph of the bipartite graph, which makes it possible to simplify the analysis of the relationship between the intersecting paths and the considered path in the worst-case scenario. An example of a graph of intersecting paths of the switching network from Figure 2(a) is shown in Figure 3. There are four such graphs possible. A more accurate and comprehensive discussion of these graphs can be found in [1].
(a)
(b)
(c)
(d)
In the space-division switching network represented by a bipartite graph, two paths (where each path corresponds to a different connection) are not allowed to meet at the same node of this graph. If this happens, one connection is blocked. To solve this problem, a multiplane structure could be used (see Figure 1). Then, each connection could be set up in a separate plane. The number of planes depends on the type of nonblocking or blocking conditions.
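The idea of resolving node conflicts by moving connections to separate planes can be sketched as a first-fit routine. This is a hypothetical illustration, not the article's algorithm: each path is modeled as a set of bipartite-graph nodes, and a new connection is placed in the first plane whose established paths are all node-disjoint from it.

```python
def first_free_plane(planes: list, new_path: set) -> int:
    """Return the index of the first plane whose established paths do not
    meet new_path at any bipartite-graph node; open a new plane if none fits."""
    for i, established in enumerate(planes):
        if all(new_path.isdisjoint(p) for p in established):
            established.append(new_path)
            return i
    planes.append([new_path])
    return len(planes) - 1

planes = []
first_free_plane(planes, {"a", "b"})        # goes to plane 0
idx = first_free_plane(planes, {"b", "c"})  # shares node "b" → new plane
print(idx)  # → 1
```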
To simplify further calculations, the parameter will be used, where denotes a modulo operation: if the number of stages in the switching network is odd, and for an even number of stages .
Generally, the considered path between input and output in the switching fabric is denoted by . The considered path goes through more than one node in the bipartite graph. Path could be blocked by any of the already set up paths. A total number of paths intersecting with path in the bipartite graph is denoted by . All blocking paths could be divided into two sets: and . The number of blocking connections in these sets is and , respectively, where
Set consists of all nodes of the bipartite graph carrying intersecting paths that could be achieved from the input side of the switching network. It should also be noted that the number of intersecting paths in set strongly depends on the switching fabric’s capacity. For capacities and , set has intersecting paths. For , set has intersecting paths. Generally, for greater capacities (), each intersecting path could be established from inputs numbered from to , to outputs numbered from to , where symbol denotes the smallest integer greater than or equal to . Therefore, there are intersecting paths; however, it should be mentioned that one of these paths is in fact the considered path . Thus, in set , for capacity , there is a total of intersecting paths. When putting all the above cases together, the number of intersecting paths which can intersect with the considered connection is as follows:
Set consists of all nodes of the bipartite graph carrying intersecting paths that could be achieved from the output side of the switching fabric. Like for set , the number of intersecting paths in set strongly depends on the switching fabric’s capacity. For in set there are intersecting paths. For capacities and , set consists of intersecting paths. For greater capacities () each intersecting path could be set up from inputs numbered from to , to outputs numbered from to . When the number of stages is odd, then each intersecting path from set could be set up from inputs numbered from to , to outputs numbered from to . It should also be noted that one of these paths is the considered path . Thus, using expression (1), in set there is a number of intersecting paths. When putting all the above cases together, the number of intersecting paths with the considered connection is as follows:
Sets and are shown in Figures 4(a) and 4(b) for even and odd number of stages in the switching fabric, respectively. Sets and are denoted by a blue and a red background, respectively, and the proper nodes of the intersecting paths graph, representing input and output links, are colored blue and red, respectively, as well. It could be seen that, for an odd number of stages , both sets and are equal to each other (see Figure 4(b)). When the central stage is present (the number of stages is even), it is already included in set (see Figure 4(a)).
(a)
(b)
3. Discrete Bandwidth Model
For the discrete bandwidth model, the capacity of any link is divided into basic bandwidth units, also called channels. A given connection can occupy such channels, where is limited by , and denotes the maximal number of channels that could be occupied by a single connection. Generally, the number of channels available at each input or at each output link is denoted by , and the number of channels available at each interstage link is denoted by . The relationship between the mentioned numbers of channels is .
In the remaining part of this study, a single connection established between input link and output link , which requires channels, will be denoted by . A new connection could be set up if the following condition holds, where denotes the number of channels in the input link (output link ) occupied by already set up connections. In a similar way, an interstage link could be used to establish connection when the following condition holds, where denotes the number of channels in the interstage link occupied by already set up connections. It means that an interstage link which has occupied channels is inaccessible for connection .
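The link-accessibility condition above can be sketched as a predicate (parameter names are illustrative): a link can carry a new m-channel connection only if at least m of its channels are still free, so an interstage link with f channels becomes inaccessible once more than f - m channels are occupied.

```python
def link_accessible(occupied: int, capacity: int, m: int) -> bool:
    # The link must still have at least m free channels for an
    # m-channel connection to be routed through it.
    return capacity - occupied >= m

f, m = 10, 3  # illustrative values
assert link_accessible(occupied=7, capacity=f, m=m)      # exactly m free
assert not link_accessible(occupied=8, capacity=f, m=m)  # only 2 free
print("inaccessible from", f - m + 1, "occupied channels")  # → inaccessible from 8 occupied channels
```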
Moreover, to clarify further calculations and formulas, the designation will be used. is given by the following expression, where the symbol denotes the greatest integer value lower than or equal to .
Theorem 1. The multi switching network is strict-sense nonblocking for the discrete bandwidth model and for condition if and only if the following condition holds, where
Proof. The necessary and sufficient conditions will be proved using the worst-case scenario in the multi switching fabric.
The considered connection is denoted by . This connection could be blocked in any bipartite-graph node if, in this node, except for connection , other connections are set up which altogether occupy channels in an interstage link. These blocking connections could originate in sets and . The number of blocking paths in these sets is determined by expressions (3) and (4), respectively. According to the assumptions of the discrete bandwidth model (see Section 1), in each input and output link there are available channels, and channels block the considered connection (there are no free channels required by connection ). It should be noted that channels block the considered connection in one plane. The number of planes in which connection is blocked by the intersecting paths originating in set is . The number of planes in which the considered connection is blocked by the intersecting paths originating in set is . It should also be mentioned that, in the worst-case scenario, in set there are free channels and in set there are free channels. Additionally, it should be taken into account that there could be free channels at input link , and exactly the same number of free channels could be at output link . Thus, blocking connections using the abovementioned additional free channels can occupy and additional planes from the input and output sides of the switching network, respectively.
In the worst-case scenario, besides planes , and , one more plane is needed to set up the considered connection . Therefore, planes are finally needed, where , and are given by expressions (14), (15), (18), and (19), respectively.
The number of planes must be maximized for all . The maximum will be achieved if . Thus, using in expressions (14), (15), (18), and (19), the conditions given in Theorem 1 will be obtained.
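The accounting used in the proof can be summarized in a sketch: the total plane count is the sum of the four blocking components plus one plane for the considered connection itself. The component values below are placeholders; their actual closed forms are given by expressions (14), (15), (18), and (19).

```python
def required_planes(p1: int, p2: int, p3: int, p4: int) -> int:
    """Worst case: planes blocked from the input side (p1), from the
    output side (p2), by residual free channels at the input (p3) and
    at the output (p4), plus one plane for the considered connection."""
    return p1 + p2 + p3 + p4 + 1

# In Example 2 below the two residual-free-channel components are zero,
# so only the first two components and the extra plane contribute
# (the values 2 and 2 here are placeholders, not the article's numbers).
print(required_planes(2, 2, 0, 0))  # → 5
```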
A well-known switching theory method, based on finding the worst-case scenario in switching fabrics [1, 2, 20], was used in the proof. It allowed the determination of the necessary as well as the sufficient conditions.
The discrete bandwidth model could be adopted in all systems where channels are used. Thus, for example, this model will work for the UMTS system, the 4G system, the 5G system, and so on. It could be assumed that each kind of channel used, for example, in UMTS/4G/5G could be divided into basic bandwidth units (in fact, into smallest channels or slots). One UMTS/4G/5G channel will then use at most small channels/slots.
4. Numerical Examples
Three examples will be described in order to illustrate the result proved in Section 3. These examples show how to find the proper number of planes for the strict-sense nonblocking switching network. Of course, examples with more channels available at each input and output link and more channels in the interstage links are possible as well; however, for clarity they are not presented below.
4.1. Example 1
In this example, a switching fabric built from stages is investigated. At each input and output link there are available channels, and at each interstage link there are available channels. Moreover, the maximal number of channels that can be occupied by a new connection is . Therefore, the switching fabric is nonblocking in the strict sense if it is built from a minimum of planes. The worst-case scenario is shown in Figure 5. One plane cannot be used to set up connection if, in the bipartite graph, any node in the connection path handles channels occupied by other, already set up connections.
(a)
(b)
(c)
(d)
(e)
All intersecting paths from set block plane (see Figure 5(a)). In turn, all intersecting paths from set block plane (see Figure 5(b)).
It should be noted that at the considered input there are 2 free channels, which together with the 3 unused channels in set can block an additional plane (see Figure 5(c)). A similar situation occurs at the output side of the switching fabric. In set , there are 3 free channels that were not used by the blocking connections originating in set . These 3 channels, together with 2 channels at the considered output , can block another additional plane (see Figure 5(d)).
The considered connection has to be set up in an additional plane; therefore, one more plane is needed (see Figure 5(e)). Thus, a switching network is SSNB if the total number of planes is .
4.2. Example 2
In this example, a switching fabric built from stages is investigated. At each input and output link there are available channels, and at each interstage link there are available channels. Moreover, the maximal number of channels a new connection can occupy is . Therefore, the switching fabric is nonblocking in the strict sense if it is built from a minimum of planes. The worst-case scenario is shown in Figure 6. One plane cannot be used to set up connection if, in the bipartite graph, any node in the connection path handles channels occupied by other, already set up connections.
(a)
(b)
(c)
All intersecting paths from set block plane (see Figure 6(a)). In turn, all intersecting paths from set block plane (see Figure 6(b)).
It should be noted that at the considered input there is only one free channel and there are no free channels in set . However, one channel cannot block the considered connection ; therefore, . A similar situation occurs at the output side of the switching fabric. In set there are no free channels, and there is only one free channel at the considered output . However, this one channel cannot block the considered connection ; therefore, .
The considered connection has to be set up in an additional plane; therefore, one more plane is needed (see Figure 6(c)). Thus, a switching network is SSNB if the total number of planes is .
4.3. Example 3
In this example, a switching fabric built from stages is investigated. At each input and output link there are available channels, and at each interstage link there are available channels. Moreover, the maximal number of channels a new connection can occupy is . Therefore, the switching fabric is nonblocking in the strict sense if it is built from a minimum of planes. The worst-case scenario is shown in Figures 7, 8, and 9. One plane cannot be used to set up connection if, in the bipartite graph, any node in the connection path handles channels occupied by other, already set up connections.
(a)
(b)
(a)
(b)
All intersecting paths from set block planes (see Figures 7(a) and 7(b)). In turn, all intersecting paths from set block planes (see Figures 8(a) and 8(b)).
It should be noted that at the considered input there are no free channels, while there are 3 free channels in set . However, these 3 channels cannot block the considered connection ; therefore, . A similar situation occurs at the output side of the switching fabric. In set there are 3 free channels, and there are no free channels at the considered output . However, these 3 channels cannot block the considered connection ; therefore, .
The considered connection has to be set up in an additional plane; therefore, one more plane is needed (see Figure 9). Thus, a switching network is SSNB if the total number of planes is .
5. Results and Comparison
To compare the multiplane switching fabric with a multiplane banyan switching network of the same capacity , the number of planes alone is not a sufficient comparison parameter, due to differences in the construction of one plane in each of the mentioned structures; a different measure has to be used.
The multiplane banyan network is built of vertically stacked planes; similarly to the multiplane switching fabric, more than one plane is needed to ensure the SSNB conditions. Therefore, to compare different structures with each other, the number of crosspoints in the switching network is very often used [6, 53–55]. However, for optical networks such a parameter is still not sufficient. Therefore, in this article it was assumed, similarly to [1, 2], that the cost of an optical switching network is expressed as the number of active (semiconductor optical amplifiers) and passive (optical splitters and optical combiners) optical switching elements. This results from the fact that the size and/or number of passive and active optical elements depends strongly on the capacity of the particular optical switching element used to build the switching network. What is more, sometimes more than one type of switching element is used to build a switching fabric. Therefore, counting and allows us to compare one-plane structures with multiplane structures, as well as different multiplane structures with each other.
It should also be noted that the cost of a whole switching fabric is the cost of one plane multiplied by the number of such planes plus additional passive optical elements (optical splitters at the input side and optical combiners at the output side of the switching network) used to connect these planes to inputs and outputs of the switching fabric (as it can be seen in Figure 1). A detailed description of one plane’s cost of the multiplane banyan switching fabric is given in [56] and the cost of one plane of a multiplane switching fabric is given in [1]. Therefore, taking into account the number of planes given in Theorem 1, the exact cost of a multiplane switching fabric can be calculated.
It should be noted that the total cost of a switching network is expressed as the sum of partial costs; one is the cost given as the number of passive optical elements and the second cost is given as the number of active optical elements . However, the cost of one optical splitter or optical combiner is not equal to the cost of one semiconductor optical amplifier. Generally, one active optical element is about 10 times more expensive than a passive optical element. Thus, in this article a normalized cost was introduced and it is given as
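Based on the stated roughly tenfold price ratio between active and passive elements, the normalized cost can be sketched as a weighted sum. The weighting below follows that ratio; the exact form of the article's normalized-cost formula may differ, and the element counts used in the example are placeholders.

```python
ACTIVE_WEIGHT = 10  # one active element costs about 10x a passive one (see text)

def normalized_cost(passive: int, active: int) -> int:
    # Weight the active-element count by the price ratio before summing.
    return passive + ACTIVE_WEIGHT * active

# Placeholder element counts, for illustration only.
print(normalized_cost(9600, 480))  # → 14400
```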
Tables 1, 2, and 3 show the partial cost of different switching fabrics expressed as the number of passive optical elements , the partial cost expressed as the number of active optical elements , and the normalized cost of the different switching network structures . For both partial costs ( and ) there is also an additional column with percentage values. These values show what part of the cost of the multiplane banyan switching network the cost of the multiplane switching fabric constitutes. If the percentage value is higher than , then the banyan switching network is cheaper, and if this value is lower than , then the switching fabric is built of a smaller number of optical elements. For example, for the partial cost expressed in passive optical elements, when the capacity is , then for a 10-plane banyan switching network is 9 600 and for a 13-plane fabric is 6 016, which constitutes 63% of 9 600. In both cases the number of planes (10 and 13) makes the switching fabric SSNB. A bold font for the normalized cost is used to show which structure is cheaper.
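The percentage columns can be reproduced directly from the table values; for the example quoted above, the arithmetic is simply:

```python
banyan_passive = 9600   # 10-plane banyan switching network (Table 1)
fabric_passive = 6016   # 13-plane switching fabric (Table 1)
pct = round(100 * fabric_passive / banyan_passive)
print(f"{pct}%")  # → 63%
```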



It may seem somewhat surprising that for some capacities (like, for example, in Table 1, or and in Table 3) the banyan switching fabric is a cheaper solution than the switching fabric; however, this results directly from the SSNB conditions, i.e., the number of planes needed to build a particular SSNB switching network. It should also be noted that the required number of planes depends on the parameters , , and , as can be clearly seen in Tables 1, 2, and 3. In the case when , , , and , the number of planes is 11 and 19 for the multiplane banyan and the multi switching network, respectively. Because one plane is built of passive and active optical elements, a larger number of planes can make such a solution more expensive, as is the case, for example, for , , , and for the multi switching network (see Table 1) or for , , , and for the multi switching network (see Table 3).
It can be seen that for , , and the multiplane banyan network is a cheaper solution than the multiplane switching fabric only for capacities , , and . For other capacities, the multiplane is always a cheaper solution. For , , and the multiplane banyan network is a cheaper solution for capacities . For greater capacity () the multi switching fabric is always a cheaper solution. In turn, for , , and , the multiplane banyan network is a cheaper solution only for capacities and . For other capacities, the multi switching fabric is always a cheaper solution.
Switching networks of larger capacity, with hundreds or even thousands of interfaces, are commonly used in backbone network nodes or in DCNs (for example, in line cards). In turn, switching networks of smaller size are very often used in end devices like, for example, access points or home-user routers. It should be noted that the multiplane switching fabric and the multiplane banyan switching network are intended for use in backbone networks, so the difference in cost becomes a more significant factor.
Switching structures can also be compared with each other using other parameters, like, for example, power consumption or latency. In this case, however, the results depend not only on the structure itself but also on the physical implementation, and the physical implementation of a switching fabric strongly depends on the technology in which the switching network is realized. Some optical paths can be lengthened to align latency in the switching fabric, so that the latency of each path can be the same in the whole structure. Because of that, latency as a comparison parameter is not considered in this study.
6. Conclusions
In this article, a theorem on the strict-sense nonblocking conditions for the switching fabric under the discrete bandwidth model was delivered. This type of switching network was described in detail in a few papers; however, multirate traffic has not been considered before. The multirate optical switching network could be used, for example, in DCNs, in multiprocessor systems, or in UMTS/4G/5G networks, where bandwidth is a very crucial aspect. The discrete bandwidth model allows us to describe the behavior of a multiservice system where each service could require a different number of basic bandwidth units (channels).
The theorem proposed in this article was proved with the use of a method known in switching networks, based on finding the worst-case scenario in a switching fabric. It allowed for the determination of the necessary as well as the sufficient conditions. A few numerical examples were given to illustrate how the proper number of planes is calculated for the strict-sense nonblocking switching fabric. The results obtained in this article were also discussed and compared with the banyan switching network architecture. What is more, the cost (expressed as the number of passive and active optical switching elements) of both switching fabrics discussed in this study was compared as well. From the results it is clearly visible that for optical core network nodes the multi switching fabric is a cheaper solution than the multiplane banyan-type switching network.
It should be noted that both switching networks compared in this article could be built of switching elements of different sizes. Nevertheless, such a detailed comparison is a more complex task due to the architectural differences of the compared structures, such as the banyan switching fabric built of switching elements (where ) and the MBA switching fabric [2], and it constitutes future work.
The strict-sense nonblocking conditions for the multirate switching network under the continuous bandwidth model are currently under study as well.
Notations
:  Connection path between input and output 
:  A new channel connection from input to output 
:  Number of active optical elements 
:  Number of passive optical elements 
:  Normalized cost of a switching network 
:  Total number of paths which may intersect with a considered path in a bipartite graph 
:  Number of paths accessible from the input side of the switching network which may intersect with a considered path in a bipartite graph 
:  Number of paths accessible from the output side of the switching network which may intersect with a considered path in a bipartite graph 
:  Number of channels available in the input/output link 
:  Number of channels available in the interstage link 
:  Number of channels demanded by a new connection 
:  Maximum number of channels demanded by a connection 
:  Minimal number of occupied channels which makes an interstage link inaccessible for connection 
:  Set of all nodes of the bipartite graph carrying intersecting paths that could be achieved from the input side of the switching network 
:  Maximal number of inputs and/or outputs one switching element can have in the switching fabric 
:  Number of inputs 
:  Indicator of a parity number of stages 
:  Number of stages in the switching network 
:  Capacity of the switching network 
:  Set of all nodes of the bipartite graph carrying intersecting paths that could be achieved from the output side of the switching network 
:  Number of outputs 
:  Number of vertically stacked planes in a switching network 
:  Stage in the switching network. 
DCN:  Data Center Network 
EON:  Elastic Optical Network 
FSU:  Frequency slot unit 
RNB:  Rearrangeable nonblocking 
SSNB:  Strict-sense nonblocking 
VoIP:  Voice over IP. 
Data Availability
No data were used to support this study; all relevant material is included in the article.
Conflicts of Interest
The author declares that there are no conflicts of interest regarding the publication of this article.
Acknowledgments
This research was funded by the Ministry of Science and Higher Education, Poland.
References
 G. Danilewicz, W. Kabaciński, and R. Rajewski, “The log2N−1 optical switching fabrics,” IEEE Transactions on Communications, vol. 59, no. 1, pp. 213–225, 2011. View at: Publisher Site  Google Scholar
 G. Danilewicz and R. Rajewski, “The architecture and strict-sense nonblocking conditions of a new baseline-based optical switching network composed of symmetrical and asymmetrical switching elements,” IEEE Transactions on Communications, vol. 62, no. 3, pp. 1058–1069, 2014. View at: Publisher Site  Google Scholar
 R. Rajewski, “The rearrangeable nonblocking conditions in the multi-MBA(N,e,2) switching network,” in Proceedings of the 16th International Telecommunication Network Strategy and Planning Symposium (NETWORKS), pp. 1–6, IEEE, Funchal, Madeira, Portugal, 2014. View at: Google Scholar
 L. R. Goke and G. J. Lipovski, “Banyan networks for partitioning multiprocessor systems,” ACM SIGARCH Computer Architecture News, vol. 2, no. 4, pp. 21–28, 1973. View at: Publisher Site  Google Scholar
 A. Pattavina, Switching Theory: Architecture and Performance in Broadband ATM Networks, John Wiley & Sons, 1998.
 W. Kabaciński, Nonblocking Electronic and Photonic Switching Fabrics, Springer, 2005. View at: Publisher Site
 D. H. Lawrie, “Access and alignment of data in an array processor,” IEEE Transactions on Computers, vol. C-24, no. 12, pp. 1145–1155, 1975. View at: Publisher Site  Google Scholar
 C.-L. Wu and T.-Y. Feng, “On a class of multistage interconnection networks,” IEEE Transactions on Computers, vol. 29, no. 8, pp. 694–702, 1980. View at: Publisher Site  Google Scholar  MathSciNet
 R. Melen and J. S. Turner, “Nonblocking multirate networks,” SIAM Journal on Computing, vol. 18, no. 2, pp. 301–313, 1989. View at: Publisher Site  Google Scholar  MathSciNet
 S.-P. Chung and K. W. Ross, “On nonblocking multirate interconnection networks,” SIAM Journal on Computing, vol. 20, no. 4, pp. 726–736, 1991. View at: Publisher Site  Google Scholar  MathSciNet
 M. Collier and T. Curran, “The strictly nonblocking condition for three-stage networks,” in Proceedings of the 14th International Teletraffic Congress (ITC), vol. 1, pp. 635–644, Antibes Juan-les-Pins, France, 1994. View at: Publisher Site  Google Scholar
 F. K. Liotopoulos and S. Chalasani, “Strictly nonblocking operation of 3-stage Clos switching networks,” in Performance Modelling and Evaluation of ATM Networks, vol. II, Chapman & Hall, London, 1996. View at: Google Scholar
 W. Kabaciński, “Nonblocking three-stage multirate switching networks,” in Proceedings of the 6th IFIP Workshop on Performance, Modelling and Evaluation of ATM Networks, pp. 226/1–26/10, Ilkley, UK, 1998. View at: Google Scholar
 W. Kabaciński, “Nonblocking asymmetrical three-stage multirate switching networks,” in Proceedings of the International Conference on Communication Technology (ICCT), vol. 1, pp. S1111/1–S1111/5, Beijing, China, 1998. View at: Publisher Site  Google Scholar
 S. C. Liew, M.-H. Ng, and C. W. Chan, “Blocking and nonblocking multirate Clos switching networks,” IEEE/ACM Transactions on Networking, vol. 6, no. 3, pp. 307–318, 1998. View at: Publisher Site  Google Scholar
 M. Stasiak, “Combinatorial considerations for switching systems carrying multichannel traffic streams,” Annals of Telecommunications, vol. 51, no. 11-12, pp. 611–625, 1996. View at: Google Scholar
 E. Valdimarsson, “Blocking in multirate interconnection networks,” IEEE Transactions on Communications, vol. 42, no. 2/3/4, pp. 2028–2035, 1994. View at: Publisher Site  Google Scholar
 C.-T. Lea, “Multirate log2(N,e,p) Networks,” in Proceedings of the Global Telecommunications Conference (GLOBECOM), pp. 319–323, IEEE, San Francisco, Calif, USA, 1994. View at: Google Scholar
 R. Melen and J. S. Turner, “Nonblocking multirate networks,” in Proceedings of the 8th Annual Joint Conference of the IEEE Computer and Communications Societies (INFOCOM), IEEE, Ottawa, Canada, 1989. View at: Publisher Site  Google Scholar  MathSciNet
 C.-T. Lea, “Multi-log2N networks and their applications in high-speed electronic and photonic switching systems,” IEEE Transactions on Communications, vol. 38, no. 10, pp. 1740–1749, 1990. View at: Publisher Site  Google Scholar
 D.-J. Shyy and C.-T. Lea, “Log2(N, m, p) strictly nonblocking networks,” IEEE Transactions on Communications, vol. 39, no. 10, pp. 1502–1510, 1991. View at: Publisher Site  Google Scholar
 C.-T. Lea and D.-J. Shyy, “Tradeoff of horizontal decomposition versus vertical stacking in rearrangeable nonblocking networks,” IEEE Transactions on Communications, vol. 39, no. 6, pp. 899–904, 1991. View at: Publisher Site  Google Scholar
 N. Sambo, M. Secondini, F. Cugini et al., “Modeling and distributed provisioning in 10-40-100-Gb/s multirate wavelength switched optical networks,” Journal of Lightwave Technology, vol. 29, no. 9, Article ID 5722966, pp. 1248–1257, 2011. View at: Publisher Site  Google Scholar
 M. Bertolini, O. Rocher, A. Bisson, P. Pecci, and G. Bellotti, “Multirate vs. OTN: comparing approaches to build scalable, cost-effective 100Gb/s networks,” in Proceedings of the European Conference and Exhibition on Optical Communication (ECEOC), Amsterdam, The Netherlands, 2012. View at: Publisher Site  Google Scholar
 V. G. Vassilakis, I. D. Moscholios, and M. D. Logothetis, “The extended connection-dependent threshold model for call-level performance analysis of multirate loss systems under the bandwidth reservation policy,” International Journal of Communication Systems, vol. 25, no. 7, pp. 849–873, 2012. View at: Publisher Site  Google Scholar
 H. Beyranvand and J. A. Salehi, “Multirate and multi-quality-of-service passive optical network based on hybrid WDM/OCDM system,” IEEE Communications Magazine, vol. 49, no. 2, pp. S39–S44, 2011. View at: Publisher Site  Google Scholar
 Z. Guo and Y. Yang, “On nonblocking multicast fat-tree data center networks with server redundancy,” in Proceedings of the 26th International Parallel and Distributed Processing Symposium (IPDPS), vol. 64, pp. 1034–1044, IEEE, 2012. View at: Google Scholar  MathSciNet
 C. Guo, H. Wu, K. Tan, L. Shi, Y. Zhang, and S. Lu, “DCell: a scalable and fault-tolerant network structure for data centers,” ACM SIGCOMM Computer Communication Review, vol. 38, no. 4, pp. 75–86, 2008. View at: Publisher Site  Google Scholar
 M. Al-Fares, A. Loukissas, and A. Vahdat, “A scalable, commodity data center network architecture,” in Proceedings of the ACM SIGCOMM Conference on Data Communication (SIGCOMM), vol. 38, pp. 63–74, Seattle, Wash, USA, 2008. View at: Publisher Site  Google Scholar
 D. Guo, T. Chen, D. Li, Y. Liu, X. Liu, and G. Chen, “BCN: expansible network structures for data centers using hierarchical compound graphs,” in Proceedings of the 30th International Conference on Computer Communications (INFOCOM), pp. 61–65, IEEE, Shanghai, China, 2011. View at: Publisher Site  Google Scholar
 L. Chen, E. Hall, L. Theogarajan, and J. Bowers, “Photonic switching for data center applications,” IEEE Photonics Journal, vol. 3, no. 5, pp. 834–844, 2011. View at: Publisher Site  Google Scholar
 Y. Ohsita and M. Murata, “Data center network topologies using optical packet switches,” in Proceedings of the 32nd International Conference on Distributed Computing Systems Workshops (ICDCSW), pp. 57–64, IEEE, 2012. View at: Publisher Site  Google Scholar
 B. A. Small and K. Bergman, “Optimization of multiple-stage optical interconnection networks,” IEEE Photonics Technology Letters, vol. 18, no. 1, pp. 238–240, 2006. View at: Publisher Site  Google Scholar
 N. Sambo, P. Castoldi, F. Cugini, G. Bottari, and P. Iovanna, “Toward highrate and flexible optical networks,” IEEE Communications Magazine, vol. 50, no. 5, pp. 66–70, 2012. View at: Publisher Site  Google Scholar
 M. Jinno, H. Takara, B. Kozicki, Y. Tsukishima, Y. Sone, and S. Matsuoka, “Spectrum-efficient and scalable elastic optical path network: architecture, benefits, and enabling technologies,” IEEE Communications Magazine, vol. 47, no. 11, pp. 66–73, 2009. View at: Publisher Site  Google Scholar
 O. Gerstel, M. Jinno, A. Lord, and S. J. B. Yoo, “Elastic optical networking: a new dawn for the optical layer?” IEEE Communications Magazine, vol. 50, no. 2, pp. S12–S20, 2012. View at: Publisher Site  Google Scholar
 J. M. Simmons, Optical Network Design and Planning, Springer, 2nd edition, 2014.
 M. Klinkowski and K. Walkowiak, “On the advantages of elastic optical networks for provisioning of cloud computing traffic,” IEEE Network, vol. 27, no. 6, pp. 44–51, 2013. View at: Publisher Site  Google Scholar
 W. Kabaciński, M. Michalski, and M. Abdulsahib, “The strict-sense nonblocking elastic optical switch,” in Proceedings of the 16th International Conference on High-Performance Switching and Routing (HPSR), pp. 1–4, IEEE, Budapest, Hungary, 2015. View at: Publisher Site  Google Scholar
 I. Tomkos, S. Azodolmolky, J. Solé-Pareta, D. Careglio, and E. Palkopoulou, “A tutorial on the flexible optical networking paradigm: State of the art, trends, and research challenges,” Proceedings of the IEEE, vol. 102, no. 9, pp. 1317–1337, 2014. View at: Publisher Site  Google Scholar
 V. López and L. Velasco, Elastic Optical Networks: Architectures, Technologies, and Control, Springer International Publishing, Switzerland, 2016.
 W. Kabaciński, M. Michalski, and R. Rajewski, “Strict-sense nonblocking WSW node architectures for elastic optical networks,” Journal of Lightwave Technology, vol. 34, no. 13, pp. 3155–3162, 2016. View at: Publisher Site  Google Scholar
 G. Danilewicz, W. Kabaciński, and R. Rajewski, “Strict-sense nonblocking space-wavelength-space switching fabrics for elastic optical network nodes,” Journal of Optical Communications and Networking, vol. 8, no. 10, pp. 745–756, 2016. View at: Publisher Site  Google Scholar
 A. Greenberg, J. R. Hamilton, N. Jain et al., “VL2: a scalable and flexible data center network,” in Proceedings of the ACM SIGCOMM Conference on Data Communication (SIGCOMM), pp. 1–12, Barcelona, Spain, 2009. View at: Publisher Site  Google Scholar
 S. J. B. Yoo, “Integrated photonic-electronic technologies for next generation data centers and the future internet,” in Proceedings of the International Conference on Photonics in Switching (PS), pp. 1–3, Ajaccio, France, 2012. View at: Google Scholar
 S. Hogg, “Clos Networks: What’s Old is New Again,” Network World, 2014, http://www.networkworld.com/article/2226122/ciscosubnetlosnetworks–whatsoldisnewagain/ciscosubnet/closnetworks–whatsoldisnewagain.html. View at: Google Scholar
 W. Kabaciński, M. Michalski, R. Rajewski, and M. Zal, “Optical datacenter networks with elastic optical switches,” in Proceedings of the International Conference on Communications (ICC), pp. 1–6, IEEE, Paris, France, 2017. View at: Publisher Site  Google Scholar
 ITU-T Recommendation G.694.1, “Spectral grids for WDM applications: DWDM frequency grid,” International Telecommunication Union – Telecommunication Standardization Sector (ITU-T), 2012. View at: Google Scholar
 C.-T. Lea, “Bipartite graph design principle for photonic switching systems,” IEEE Transactions on Communications, vol. 38, no. 4, pp. 529–538, 1990. View at: Publisher Site  Google Scholar
 C. L. Liu, Introduction to Combinatorial Mathematics, McGrawHill, New York, NY, USA, 1968. View at: MathSciNet
 D. B. West, Introduction to Graph Theory, PrenticeHall of India Private Limited, New Delhi, India, 1996. View at: MathSciNet
 R. J. Wilson, Introduction to Graph Theory, Prentice Hall, 5th edition, 2010. View at: MathSciNet
 C. Clos, “A study of nonblocking switching networks,” Bell System Technical Journal, vol. 32, no. 2, pp. 406–424, 1953. View at: Publisher Site  Google Scholar
 F. K. Hwang, The Mathematical Theory of Nonblocking Switching Networks, vol. 15 of Series on Applied Mathematics, World Scientific Publishing, River Edge, NJ, USA, 2nd edition, 2004. View at: Publisher Site  MathSciNet
 A. Jajszczyk and R. Wójcik, “The enumeration method for selecting optimum switching network structures,” IEEE Communications Letters, vol. 9, no. 1, pp. 64–65, 2005. View at: Publisher Site  Google Scholar
 W. Kabaciński and R. Rajewski, “The strict-sense nonblocking multirate logd(N,0,p) switching network,” Mathematical Problems in Engineering, vol. 2017, Article ID 1575828, 14 pages, 2017. View at: Publisher Site  Google Scholar  MathSciNet
Copyright
Copyright © 2019 Remigiusz Rajewski. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.