Journal of Electrical and Computer Engineering

Volume 2011, Article ID 189434, 12 pages

http://dx.doi.org/10.1155/2011/189434

## Topological Properties of Hierarchical Interconnection Networks: A Review and Comparison

Mostafa Abd-El-Barr^{1} and Turki F. Al-Somani^{2}

^{1}Department of Information Science, College for Women, Kuwait University, Safat 13060, Kuwait

^{2}Department of Computer Engineering, Faculty of Engineering, Al Baha University, P.O. Box 1988, Al Baha 65431, Saudi Arabia

Received 14 August 2010; Revised 19 November 2010; Accepted 12 February 2011

Academic Editor: Y. W. Chang

Copyright © 2011 Mostafa Abd-El-Barr and Turki F. Al-Somani. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Hierarchical interconnection networks (HINs) provide a framework for designing networks with reduced link cost by taking advantage of the locality of communication that exists in parallel applications. HINs employ multiple levels: lower-level networks provide local communication, while higher-level networks facilitate remote communication. HINs provide fault tolerance in the presence of some faulty nodes and/or links. Existing HINs can be broadly classified into two classes: those that use node and/or link replication and those that use standby interface nodes. The first class includes Hierarchical Cubic Networks, Hierarchical Completely Connected Networks, and Triple-based Hierarchical Interconnection Networks. The second class includes Modular Fault-Tolerant Hypercube Networks and the Hierarchical Fault-Tolerant Interconnection Network. This paper presents a review and comparison of the topological properties of both classes of HINs. The topological properties considered are network degree, diameter, cost, and packing density. The outcome of this study shows that, among all HINs, two networks belonging to the first class, the Root-Folded Heawood (RFH) and the Flooded Heawood (FloH), provide the best network cost, defined as the product of network diameter and degree. The study also shows that the HFCube provides the best packing density, that is, the smallest chip area required for a VLSI implementation.

#### 1. Introduction

In nonshared-memory multiprocessor systems, processors communicating via message passing can be interconnected either directly, which is costly for a large number of processors since the number of links grows as O(N²), where N is the total number of processors, or indirectly, by routing messages via intermediate processors. Three important factors should be considered at the design stage of an indirect interconnection: minimizing the message delay, reducing the cost, and maximizing the reliability. It has been shown that hypercube networks (HCNs) are highly efficient in connecting multicomputer networks [1]. However, HCNs have the limitation that increasing the number of nodes requires a change in the basic node configuration and causes an exponential increase in the number of links. This limits the applicability of the hypercube to very large systems. Also, there is some locality of communication among the nodes of the hypercube that is not exploited for performance gains. An interesting recent result in the context of fault-tolerant cycle embedding in hypercubes is the recursive embedding of a longest cycle into an n-dimensional hypercube in a way that tolerates more faulty nodes than its predecessor results allowed [2].
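To make the hypercube structure concrete: the nodes of an n-dimensional hypercube can be labeled with n-bit strings, two nodes being linked exactly when their labels differ in one bit. The following minimal Python sketch (an illustration added here, not taken from the original paper) verifies the link count and the worst-case distance:

```python
def hypercube_edges(n):
    """Edges of the n-dimensional hypercube: nodes are n-bit labels,
    adjacent iff their labels differ in exactly one bit."""
    return [(u, u ^ (1 << b))
            for u in range(1 << n)
            for b in range(n)
            if u < u ^ (1 << b)]          # count each link once

def hamming_distance(u, v):
    """Shortest-path length between two hypercube nodes."""
    return bin(u ^ v).count("1")

n = 4
edges = hypercube_edges(n)
# Each of the 2^n nodes has degree n, giving n * 2^(n-1) links in total.
assert len(edges) == n * 2 ** (n - 1)
# The diameter is n: opposite corners differ in every bit position.
assert hamming_distance(0, 2 ** n - 1) == n
```

The quadratic link growth of a directly (completely) connected network, by contrast, follows from every node needing N − 1 ports.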

Hierarchical interconnection networks (HINs) provide an opportunity for taking advantage of such factors [3]. These networks employ multiple levels in which lower-level networks are used to provide local communication and higher-level networks are used to facilitate remote communication. HINs provide fault tolerance through duplicating the higher-level network, known as the replication technique (see Figure 1), or through the use of standby spare interface nodes (see Figure 2).

In recent years, hierarchical interconnection networks have attracted increasing attention because they provide a framework for designing networks with reduced link cost. They also take advantage of the inherent locality of communication among tasks in parallel applications. At the lowest level, HINs provide clusters of individual nodes, with the nodes in each cluster connected by a network called a level-one network. At the next level, groups of clusters are connected by a level-two network. The topology at each level can be the same or different. If the topologies at all levels are the same, the network is called a *homogeneous* hierarchical interconnection network; otherwise, it is called a *heterogeneous* hierarchical interconnection network [3].

The hierarchical interconnection network was originally introduced by Dandamudi and Eager in [3] using only two levels. Wei and Levy in [4] proposed a class of general hierarchical interconnection networks in which more than one node from each cluster can act as an interface node. The construction procedure for HINs in this case is as follows. The total number of nodes is divided into clusters of equal size. Each cluster of nodes is connected to form a level-1 network, and the nodes in every cluster are ordered in the same way. A number of nodes from each cluster are then selected to act as interface nodes. To construct the level-2 networks, these interface nodes are first divided into groups, each consisting of nodes drawn from different clusters. The nodes of each group are again divided into clusters, and each such cluster is connected to form a level-2 network; some of its nodes are in turn selected as interface nodes to construct the level-3 networks, and so on. The interconnection networks used to construct clusters at different levels may have the same or different topologies (see Figure 3).

A number of hierarchical interconnection networks have been proposed in the literature [5–13, 15–17]. These can be classified into two classes: Class 1, which includes the Block-Shift Network (BSN), Extended Hypercube (EH), deBruijn with Hypercube (dBCube), Hierarchical Hypercube (HHC), Hierarchical Cubic Network (HCN), Hierarchical Completely Connected Networks (HCC), HIN with Folded Hypercubes (HFCube), Folded Heawood (FolH), Root-Folded Heawood (RFH), Recursively Expanded Heawood (REH), Flooded Heawood (FloH), Triple-based Hierarchical Interconnection Networks (THIN), and Rectangular Twisted Torus Meshes (RTTM); and Class 2, which includes the Modular Fault-Tolerant Hypercube Networks (MFTHN) and the Hierarchical Fault-Tolerant Interconnection Network (HFTIN). This paper presents a study and comparison of the topological properties of these hierarchical interconnection networks. The topological properties included in this study are defined below.

*Definition 1.* *Network degree* is the maximum number of ports per node over all nodes in the network.

*Definition 2.* *Network diameter* is the maximum, over all pairs of nodes in the network, of the length (in hops) of the shortest path between them.

*Definition 3.* *Network cost* is the product of network diameter and network degree.

*Definition 4.* *Packing density* is the ratio of the number of nodes of a network to its cost.
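The four definitions above are mechanical to compute for any small topology given as an adjacency list; a Python sketch (illustrative, using breadth-first search for the diameter) applied to the complete network on four nodes as a hypothetical example:

```python
from collections import deque

def degree(adj):
    """Definition 1: maximum number of ports (links) over all nodes."""
    return max(len(nbrs) for nbrs in adj.values())

def diameter(adj):
    """Definition 2: maximum over all node pairs of the shortest-path
    length, computed by breadth-first search from every node."""
    worst = 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        worst = max(worst, max(dist.values()))
    return worst

def cost(adj):
    """Definition 3: diameter times degree."""
    return degree(adj) * diameter(adj)

def packing_density(adj):
    """Definition 4: number of nodes divided by network cost."""
    return len(adj) / cost(adj)

# Example: the complete network on 4 nodes (degree 3, diameter 1).
k4 = {u: [v for v in range(4) if v != u] for u in range(4)}
assert degree(k4) == 3 and diameter(k4) == 1
assert cost(k4) == 3 and packing_density(k4) == 4 / 3
```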

This paper is organized as follows: a description of HINs based on the replication technique is introduced in Section 2. Section 3 provides an introduction to HINs based on the standby spare interface node technique. Section 4 provides the detailed comparison conducted among HINs. Finally, Section 5 concludes the work.

#### 2. HINs Based on the Replication Technique

The replication technique aims at avoiding excessive traffic on intercluster links by duplicating the level-two network and having two or more interface nodes per cluster. A number of the hierarchical interconnection networks proposed in the literature are based on the replication technique [5–9, 12, 13, 15, 17]. The following subsections provide a description of each of these HINs.

##### 2.1. Block Shift Network (BSN)

The Block-Shift Network (BSN) was introduced by Pan in 1991 [5]. Neighboring nodes in a BSN are tightly coupled while remote nodes are loosely coupled. This property makes the BSN topology suitable for localized traffic patterns, which are observed in a number of applications. In each step of constructing a BSN, only a limited number of bits may be changed within a section of the rightmost bits of a node address; these two parameters label the resulting network and define its connection type (see Figures 4(a) and 4(b)). The degree and diameter of a BSN are functions of these two parameters. The design of the BSN is greatly motivated by its flexibility, since the two parameters can be chosen in accordance with the performance and cost requirements of a given network. In addition, a BSN is scalable in the sense that changing the size of the network does not require changing the hardware of its nodes. A number of existing networks can be considered special cases of the BSN, including the shuffle-exchange network, the hypercube, and the complete network.

##### 2.2. Extended Hypercubes (EH)

The Extended Hypercube (EH) is a hierarchical, recursive structure with a constant predefined building block; it was introduced by Kumar and Patnaik in 1992 [6]. The basic module of an EH consists of a cube of Processor Elements (PEs) and a Network Controller (NC). The NC serves as a communication processor that handles intermodule communication. The basic EH module is said to be of degree one. The EH architecture is constructed by connecting basic modules over multiple levels, as shown in Figure 5, where two levels of the basic EH are connected to form a larger EH. Expressions for the degree and diameter of an EH are derived in [6].

##### 2.3. dBCube

The dBCube is a compound graph consisting of a deBruijn graph in which each node is replaced by a hypercube cluster [7]. A dBCube is parameterized by the number of cubes per cluster (the size of the deBruijn graph) and the dimension of each cube (see Figure 6). In Figure 6, each node is a 2-cube and there are eight cubes per cluster. The degree and diameter of a dBCube follow from these two parameters [7].

##### 2.4. The Hierarchical Hypercubes

Malluhi and Bayoumi introduced the Hierarchical Hypercube Network (HHC) [8]. The structure of an HHC consists of three levels of hierarchy. At the lowest level, there is a pool of nodes. These nodes are grouped into clusters of equal size, and the nodes in each cluster are interconnected to form a hypercube called the Son-cube. The set of Son-cubes constitutes the second level of the hierarchy. A Father-cube connects the Son-cubes in a hypercube fashion. Figure 7 shows a 5-HHC. The degree and diameter of an HHC are derived in [8].

##### 2.5. Hierarchical Cubic Networks (HCN)

The Hierarchical Cubic Network (HCN) is a hierarchical network consisting of 2^n clusters, each of which is an n-dimensional hypercube [9]. Each node in the HCN has (n + 1) links connected to it, so the degree of an HCN is (n + 1). Figure 8 shows an HCN. The HCN uses almost half as many links as a comparable hypercube and yet has a smaller diameter than a comparable hypercube, while emulating the desirable properties of a hypercube. In [18], a maximal number of node-disjoint paths was constructed between each pair of distinct nodes of the HCN, and upper bounds on both the fault diameter and the wide diameter of these node-disjoint paths were derived. These results represent about two-thirds those of a comparable hypercube.

In [19], fault-free Hamiltonian cycles in an HCN with link faults were constructed using Gray codes. Since the HCN is regular of degree (n + 1), the result shown in [19] is optimal. Longest fault-free cycles in an HCN with a single faulty node, as well as fault-free cycles of guaranteed length in an HCN with multiple faulty nodes, were also constructed in [19].
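The Gray-code idea underlying these constructions can be illustrated in the fault-free case: the reflected binary Gray code lists all 2^n node labels of an n-dimensional hypercube so that successive labels, including the wraparound from last to first, differ in a single bit, i.e., the listing traces a Hamiltonian cycle. A small illustrative Python sketch (not a reproduction of the fault-tolerant construction in [19]):

```python
def gray_code(n):
    """Reflected binary Gray code: the i-th codeword is i XOR (i >> 1).
    Successive codewords differ in exactly one bit, so the sequence
    traces a Hamiltonian cycle through the n-dimensional hypercube."""
    return [i ^ (i >> 1) for i in range(1 << n)]

def one_bit_apart(u, v):
    """True iff u and v are adjacent hypercube nodes."""
    return bin(u ^ v).count("1") == 1

n = 4
cycle = gray_code(n)
assert sorted(cycle) == list(range(2 ** n))       # visits every node once
assert all(one_bit_apart(cycle[i], cycle[i + 1])  # each hop is a real link
           for i in range(len(cycle) - 1))
assert one_bit_apart(cycle[-1], cycle[0])         # wraps around: a cycle
```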

##### 2.6. Generalized Hierarchical Completely Connected Networks (HCC)

Takabatake et al. proposed the generalized hierarchical completely connected networks (HCCs) in [10]. The construction of an HCC starts from a basic block (a level-1 block), which consists of completely connected nodes of constant degree. A level-k block, for k ≥ 2, is then constructed recursively by completely interconnecting macro nodes (level-(k − 1) blocks). An HCC has a constant node degree regardless of the size of the network. A generalized HCC_{G} and the concept of the HCC are illustrated in Figure 9.

##### 2.7. HIN with Folded Hypercubes as Basic Clusters (HFCube)

A hierarchical interconnection network using folded hypercubes as basic clusters, denoted HFCube, is proposed in [11]. An HFCube consists of clusters, each of which is a folded hypercube (FHC). Each node in the HFCube has a fixed number of links connected to it; expressions for the degree and diameter of an HFCube are derived in [11]. An HFCube is illustrated in Figure 10.

##### 2.8. Heawood HINs

Jan et al. in [12] proposed four hierarchical interconnection networks based on the Heawood graph: (1) the Folded Heawood (FolH), (2) the Root-Folded Heawood (RFH), (3) the Recursively Expanded Heawood (REH), and (4) the Flooded Heawood (FloH). A Heawood graph has fourteen nodes connected by twenty-one links, as shown in Figure 11. Each node in a Heawood network has three neighboring nodes. For any pair of nodes there are three paths for routing a message between them, and the minimum length of a cycle containing any pair of nodes is 6. The Folded Heawood HIN is based on the same concept used in the definition of the folded Petersen network [17]. A two-dimensional Folded Heawood network is shown in Figure 12 (for brevity, the details of only two neighbors are shown). The Root-Folded Heawood HIN, alternatively, uses only a single link between neighboring nodes, as shown in Figure 13. The recursive expansion method is applied in the Recursively Expanded Heawood HIN to avoid bottlenecks caused by the root nodes of the Root-Folded Heawood HIN (see Figure 14). Similarly, the Flooded Heawood HIN is obtained by recursively expanding the basic Heawood network, as illustrated in Figure 15.
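The Heawood graph's stated properties can be checked mechanically. One standard way to build it (an aside not taken from [12]) is from its LCF notation [5, −5]^7: a 14-cycle plus a chord from each even-numbered node i to node i + 5 modulo 14. The sketch below verifies that the result has 14 nodes, 21 links, constant degree 3, and shortest cycle length 6:

```python
from collections import deque

def heawood():
    """Heawood graph from LCF notation [5, -5]^7: a 14-cycle plus a
    chord from each even node i to i+5 (equivalently, from each odd
    node i to i-5), all arithmetic modulo 14."""
    adj = {u: set() for u in range(14)}
    for u in range(14):
        adj[u].add((u + 1) % 14)            # cycle edge
        adj[(u + 1) % 14].add(u)
        step = 5 if u % 2 == 0 else -5      # LCF chord
        adj[u].add((u + step) % 14)
        adj[(u + step) % 14].add(u)
    return adj

def girth(adj):
    """Length of the shortest cycle, via BFS from every node: every
    non-tree edge closes a cycle of length dist[u] + dist[v] + 1."""
    best = float("inf")
    for src in adj:
        dist, parent = {src: 0}, {src: None}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v], parent[v] = dist[u] + 1, u
                    q.append(v)
                elif parent[u] != v:
                    best = min(best, dist[u] + dist[v] + 1)
    return best

g = heawood()
assert len(g) == 14                                        # 14 nodes
assert sum(len(nbrs) for nbrs in g.values()) // 2 == 21    # 21 links
assert all(len(nbrs) == 3 for nbrs in g.values())          # 3-regular
assert girth(g) == 6                                       # shortest cycle
```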

##### 2.9. Triple-Based HIN (THIN)

The Triple-based HIN (THIN) was proposed by Qiao et al. in [15]. Figure 18 shows examples of THINs with different numbers of levels: (a) a level-0 THIN, (b) a level-1 THIN, (c) a level-2 THIN, and (d) a level-3 THIN. The topology of the THIN is very simple and its node degree is very low. The THIN is clearly hierarchical, symmetric, and scalable. Its node degree is constant at 3, while its diameter grows with the number of levels.

##### 2.10. Rectangular Twisted Torus Meshes (RTTMs)

The Rectangular Twisted Torus Meshes (RTTMs) were proposed in [16]. At the lowest level of an RTTM network, the level-1 subnetwork, also called a Basic Module, consists of a mesh of nodes. Successively higher-level networks are built by recursively interconnecting next-lower-level subnetworks in the form of a rectangular twisted torus, a rectangular array of rows and columns (see Figure 19). An appealing property of the RTTM network is its smaller diameter and shorter average distance, which imply a reduction in communication delays. An L-RTTM denotes an RTTM network with L levels in its hierarchy; the number of nodes grows with L. The degree of an RTTM is constant at 4, while its diameter depends on the number of levels.

#### 3. HINs Based on the Standby Spare Interface Node Technique

The standby spare interface node technique provides a standby spare interface node to avoid the intercluster disconnection caused by interface node failure. Abdulla in [13] proposed the Modular Fault-Tolerant Hypercube Networks (MFTHN). The MFTHN is a hierarchical network using the standby spare technique based on a basic block called the Fault-Tolerant Basic Block (FTBB). The FTBB is a binary hypercube to which a spare node, connected to all the nodes of the hypercube, has been added to provide fault tolerance. Large hypercubes can be built from FTBBs by utilizing the recursive construction property of the hypercube (see Figure 16).

In [14], the Hierarchical Fault-Tolerant Interconnection Network (HFTIN) was proposed. The HFTIN uses a different type of FTBB as the basic building block at level one and a torus, which has a constant node degree, at level two. In this network, the FTBB consists of 16 main nodes and 4 spare nodes (see Figure 17). A performance comparison in [20] shows that once the number of faults exceeds six, the HFTIN has a higher probability of recovering from faults than HCN architectures.

#### 4. Comparisons

The comparisons performed in this section are based on the topological properties of the different hierarchical interconnection networks presented in the previous sections, in addition to the hypercube. Only HINs based on the replication technique are considered, since they are more regular and practical. HINs that require configuration parameters are also excluded from our comparisons. The included HINs are the HIN, dBCube, HHC, HCN, HFCube, FolH, RFH, REH, FloH, and THIN.

Table 1 summarizes the number of nodes, degree, and diameter of these hierarchical interconnection networks. Figure 20 shows the relationship between network degree and the number of nodes in the network (network size). The graph shows that the three networks RFH, FloH, and THIN possess constant network degree regardless of network size, and that for network sizes larger than 1024 nodes these same three networks offer the lowest network degree. Among the three, the RFH and THIN networks offer the lowest degrees. The graph also shows that the degree of the HHC network grows logarithmically, which results in a lower network degree than the REH network for network sizes up to 2048 nodes. The hypercube exhibits the highest network degree. Figure 21 relates network size to diameter. The figure shows that the THIN network possesses the highest network diameter, while the HFCube provides the lowest. The diameter of a network is a measure of its performance in terms of worst-case communication delay.

Figure 22 shows the cost (degree × diameter) of the networks with respect to their size. The figure shows that the THIN network has the highest network cost while the RFH and FloH have the lowest, a reflection of their constant, low network degree. Figure 23 shows the packing density of the networks with respect to their size. The higher the packing density of a network, the smaller the chip area required for its VLSI layout. The figure shows that the HFCube has the highest packing density while the dBCube has the lowest.
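As a point of reference for these comparisons, the hypercube's metrics have a simple closed form: an n-dimensional hypercube has 2^n nodes, degree n, and diameter n, so its cost grows quadratically in log₂ of the network size, which is why constant-degree networks overtake it at scale. A brief illustrative sketch (an aside, not a reproduction of Table 1):

```python
def hypercube_metrics(n):
    """Topological metrics of the n-dimensional hypercube
    (2^n nodes, degree n, diameter n), per Definitions 1-4."""
    nodes, deg, diam = 2 ** n, n, n
    return {"nodes": nodes, "degree": deg, "diameter": diam,
            "cost": deg * diam, "packing_density": nodes / (deg * diam)}

# Cost grows as (log2 N)^2 with network size N, whereas for a
# constant-degree network only the diameter term of the cost grows.
assert hypercube_metrics(5)["cost"] == 25
assert hypercube_metrics(10)["cost"] == 100
assert hypercube_metrics(10)["packing_density"] == 1024 / 100
```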

#### 5. Conclusion

Hierarchical Interconnection Networks (HINs) provide a framework for designing networks with reduced link cost that make efficient use of the locality of communication present in parallel applications. In this paper, we reviewed a number of HINs, categorized into two main classes according to how they provide fault tolerance: (1) the replication technique class and (2) the standby spare node class. A brief description of each hierarchical interconnection network was presented, showing the different ways in which HINs are constructed. The paper then presented a topological performance comparison among the HINs, covering network degree, diameter, cost, and packing density. The results show that the RFH and FloH offer the best network cost, while the HFCube offers the best packing density, that is, the smallest chip area required for a VLSI layout.

#### Acknowledgments

The authors would like to acknowledge the support of Kuwait University and Al Baha University.

#### References

1. J. Kim, C. R. Das, W. Lin, and T. Y. Feng, “Reliability evaluation of hypercube multicomputers,” *IEEE Transactions on Reliability*, vol. 38, no. 1, pp. 121–129, 1989.
2. J. S. Fu, “Fault-tolerant cycle embedding in the hypercube,” *Parallel Computing*, vol. 29, no. 6, pp. 821–832, 2003.
3. S. P. Dandamudi and D. L. Eager, “Hierarchical interconnection networks for multicomputer systems,” *IEEE Transactions on Computers*, vol. 39, no. 6, pp. 786–797, 1990.
4. S. Wei and S. Levy, “Design and analysis of efficient hierarchical interconnection networks,” in *Proceedings of Supercomputing '91*, pp. 390–399, November 1991.
5. Y. Pan, *The Block Shift Network: Interconnection Strategies for Large Parallel Systems*, Ph.D. thesis, Department of Computer Science, University of Pittsburgh, 1991.
6. J. M. Kumar and L. M. Patnaik, “Extended hypercube: a hierarchical interconnection network of hypercubes,” *IEEE Transactions on Parallel and Distributed Systems*, pp. 45–57, 1992.
7. C. Chen, D. P. Agrawal, and J. R. Burke, “dBCube: a new class of hierarchical multiprocessor interconnection networks with area efficient layout,” *IEEE Transactions on Parallel and Distributed Systems*, vol. 4, no. 12, pp. 1332–1344, 1993.
8. Q. M. Malluhi and M. A. Bayoumi, “Hierarchical hypercube: a new interconnection topology for massively parallel systems,” *IEEE Transactions on Parallel and Distributed Systems*, vol. 5, no. 1, pp. 17–30, 1994.
9. K. Ghose and K. R. Desai, “Hierarchical cubic networks,” *IEEE Transactions on Parallel and Distributed Systems*, vol. 6, no. 4, pp. 427–435, 1995.
10. T. Takabatake, K. Kaneko, and H. Ito, “HCC: generalized hierarchical completely-connected networks,” *IEICE Transactions on Information & Systems*, vol. E83-D, no. 6, pp. 1216–1224, 2000.
11. Y. Shi, Z. Hou, and J. Song, “Hierarchical interconnection networks with folded hypercubes as basic cluster,” in *Proceedings of the 4th International Conference/Exhibition on High Performance Computing in the Asia-Pacific Region*, vol. 1, pp. 134–137, 2000.
12. G. E. Jan, Y. S. Hwang, M. B. O. Lin, and D. Liang, “Novel hierarchical interconnection networks for high-performance multicomputer systems,” *Journal of Information Science and Engineering*, vol. 20, no. 6, pp. 1213–1229, 2004.
13. A. M. Abdulla, *Reliability of Modular Fault-Tolerant Hypercube Networks*, M.S. thesis, Department of Computer Engineering, King Fahd University of Petroleum & Minerals, 1995.
14. M. H. Abd-El-Barr, F. Daud, and K. M. Al-Tawil, “A hierarchical fault-tolerant interconnection network,” in *Proceedings of the 15th IEEE Annual International Phoenix Conference on Computers and Communications*, pp. 123–128, March 1996.
15. B. Qiao, F. Shi, and W. Ji, “THIN: a new hierarchical interconnection network-on-chip for SOC,” in *Proceedings of the 7th International Conference on Algorithms and Architectures for Parallel Processing (ICA3PP '07)*, pp. 446–457, 2007.
16. Y. Liu, C. Li, and J. Han, “RTTM: a new hierarchical interconnection network for massively parallel computing,” in *High Performance Computing and Applications*, vol. 5938 of *Lecture Notes in Computer Science*, pp. 264–271, Springer, Berlin, Germany, 2010.
17. S. Öhring and S. K. Das, “Folded Petersen cube networks: new competitors for the hypercubes,” *IEEE Transactions on Parallel and Distributed Systems*, vol. 7, no. 2, pp. 151–168, 1996.
18. J. S. Fu, G. H. Chen, and D. R. Duh, “Node-disjoint paths and related problems on hierarchical cubic networks,” *Networks*, vol. 40, no. 3, pp. 142–154, 2002.
19. J. S. Fu and G. H. Chen, “Fault-tolerant cycle embedding in hierarchical cubic networks,” *Networks*, vol. 43, no. 1, pp. 28–38, 2004.
20. M. Abd-El-Barr, “Performance comparison of a number of reliable and fault-tolerant hierarchical interconnection networks,” in *Proceedings of the 18th IASTED International Conference on Parallel and Distributed Computing and Systems (PDCS '06)*, pp. 12–17, Dallas, Tex, USA, November 2006.