Abstract

A deep neural network has multiple layers that allow it to learn more complex patterns and is built to simulate the activity of the human brain. It currently provides the best solutions to many problems in image recognition, speech recognition, and natural language processing. The present study deals with the topological properties of deep neural networks. A topological index is a numeric quantity associated with the connectivity of the network and is correlated with the efficiency and accuracy of the network's output. Different degree-based topological indices, such as the Zagreb indices, Randić index, atom-bond connectivity index, geometric-arithmetic index, forgotten index, multiple Zagreb indices, and hyper-Zagreb index, of a deep neural network with a finite number of hidden layers are computed in this study.

1. Introduction

Neural networks are not only studied in artificial intelligence but have also found wide application in intrusion detection systems, image processing, localization, medicine, and the chemical and environmental sciences [1–3]. Neural networks are used to model and learn complex, nonlinear relationships, which is important in practice because many input-output relationships are nonlinear and complex. Artificial neural networks are the backbone of robotics, defense technology, and neural chemistry. Neural networks are not only widely used as a tool for predictive analysis but have also been trained successfully to model processes in neural chemistry, including crystallization, adsorption, distillation, gasification, dry reforming, and filtration [4–8].

A topological index associates a unique number with a graph or network, which correlates with the physicochemical properties of the network. A degree-based topological index depends on the connectivity of the network. The first degree-based topological index, called the Randić index, was formulated by Milan Randić [9] while analyzing the boiling points of paraffins. Over the last three decades, researchers have formulated hundreds of topological indices, which are helpful in studying properties of chemical graphs such as reactivity, stability, boiling point, enthalpy of formation, and the Kováts constant, and which reflect physical properties of materials such as stress, elasticity, strain, and mechanical strength, among many others.

Bollobás and Erdős [10] introduced the general Randić index, defined below. The first and second Zagreb indices were introduced by Gutman and Trinajstić [11] in 1972, appearing during the analysis of the π-electron energy of molecules. The multiplicative versions of these Zagreb indices (the first and second multiplicative Zagreb indices) of a graph were formulated by Ghorbani and Azimi [12]. Shirdel et al. [13] introduced a new variant of the Zagreb indices named the hyper-Zagreb index. The widely used atom-bond connectivity (ABC) index was introduced by Estrada et al. [14]. Zhou and Trinajstić [15] proposed the sum-connectivity index (SCI). The geometric-arithmetic index was introduced by Vukičević and Furtula [16]. Javaid et al. [17] investigated the degree-based topological indices of probabilistic neural networks in 2017. Topological indices of multilayered probabilistic neural networks and recurrent neural networks have also been computed recently [18–21]. For more work related to the computation and bounds of topological indices, see [22–29].

Consider a graph $G$ with node set $V(G)$ and edge set $E(G)$. The degree of a node $u$, denoted by $d_u$, is the number of nodes connected to $u$ via an edge. The degree-based topological indices considered in this study are defined as follows:

Randić index: $R(G) = \sum_{uv \in E(G)} \frac{1}{\sqrt{d_u d_v}}$

General Randić index: $R_{\alpha}(G) = \sum_{uv \in E(G)} (d_u d_v)^{\alpha}$

First Zagreb index: $M_1(G) = \sum_{uv \in E(G)} (d_u + d_v)$

Second Zagreb index: $M_2(G) = \sum_{uv \in E(G)} d_u d_v$

First multiple Zagreb index: $PM_1(G) = \prod_{uv \in E(G)} (d_u + d_v)$

Second multiple Zagreb index: $PM_2(G) = \prod_{uv \in E(G)} d_u d_v$

Hyper-Zagreb index: $HM(G) = \sum_{uv \in E(G)} (d_u + d_v)^2$

Forgotten index: $F(G) = \sum_{uv \in E(G)} (d_u^2 + d_v^2)$

Atom-bond connectivity index: $ABC(G) = \sum_{uv \in E(G)} \sqrt{\frac{d_u + d_v - 2}{d_u d_v}}$

Sum connectivity index: $SCI(G) = \sum_{uv \in E(G)} \frac{1}{\sqrt{d_u + d_v}}$

Geometric-arithmetic index: $GA(G) = \sum_{uv \in E(G)} \frac{2\sqrt{d_u d_v}}{d_u + d_v}$

Augmented Zagreb index: $AZI(G) = \sum_{uv \in E(G)} \left(\frac{d_u d_v}{d_u + d_v - 2}\right)^{3}$
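These edge-sum definitions translate directly into code. The sketch below (the function name `degree_indices` and the plain edge-list input are choices made here for illustration, not part of the paper) computes several of the indices above by direct summation over the edges:

```python
import math
from collections import Counter

def degree_indices(edges):
    """Compute several degree-based topological indices of a simple
    undirected graph given as a list of edges (u, v)."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    idx = {"M1": 0.0, "M2": 0.0, "R": 0.0, "SCI": 0.0, "GA": 0.0, "F": 0.0, "HM": 0.0}
    for u, v in edges:
        du, dv = deg[u], deg[v]
        idx["M1"] += du + dv                             # first Zagreb index
        idx["M2"] += du * dv                             # second Zagreb index
        idx["R"] += 1 / math.sqrt(du * dv)               # Randić index
        idx["SCI"] += 1 / math.sqrt(du + dv)             # sum-connectivity index
        idx["GA"] += 2 * math.sqrt(du * dv) / (du + dv)  # geometric-arithmetic index
        idx["F"] += du ** 2 + dv ** 2                    # forgotten index
        idx["HM"] += (du + dv) ** 2                      # hyper-Zagreb index
    return idx
```

For example, for the path graph on three nodes with edges 1-2 and 2-3, this gives $M_1 = 6$, $M_2 = 4$, and $R = \sqrt{2}$.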

2. Methodology

A deep neural network (DNN) can be represented by a graph $G = (V, E)$, where $V$ denotes the set of nodes of the network and $E$ denotes the set of edges between the nodes. We consider a DNN with an input layer of $m$ nodes and $r$ hidden layers, where the first hidden layer has $n_1$ nodes, the second has $n_2$ nodes, and, in general, the $i$-th hidden layer has $n_i$ nodes for $1 \le i \le r$; the output layer has $k$ nodes. We denote such a network by $DNN(m; n_1, n_2, \ldots, n_r; k)$. Each node of every layer is connected to all nodes of the next layer. For instance, Figure 1 shows a DNN with an input layer having four nodes, an output layer with three nodes, and five hidden layers.

We first partition the edges of the graph of the DNN according to the degrees of the end-nodes of each edge. We analyze the structure of the graph by considering the connectivity of the nodes of each layer to the next layer. In a DNN, each node of every layer is connected to all nodes of the next layer; this fact is used to determine the degree of each node. Consider a deep neural network $DNN(m; n_1, \ldots, n_r; k)$. Each node in the input layer has degree $n_1$, because every input node is connected to each of the $n_1$ nodes of the first hidden layer. Every node of the first hidden layer has the same degree, namely $m + n_2$. Nodes of the second hidden layer have degree $n_1 + n_3$. In general, the nodes of the $i$-th hidden layer have degree $n_{i-1} + n_{i+1}$, where we adopt the conventions $n_0 = m$ and $n_{r+1} = k$. The nodes of the output layer have degree $n_r$.
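The layer-wise degree argument above can be checked mechanically. A minimal sketch (the function name `layer_degrees` and the sample widths 4, 5, 6, 3 are illustrative choices, not values from the paper):

```python
def layer_degrees(m, ns, k):
    """Degree of a node in each layer of a DNN with input width m, hidden
    widths ns = [n1, ..., nr], and output width k.  Every node is joined to
    all nodes of the adjacent layers, so its degree is the sum of the
    neighbouring layer widths."""
    sizes = [m] + list(ns) + [k]  # sizes[i] = number of nodes in layer i
    degs = []
    for i in range(len(sizes)):
        prev = sizes[i - 1] if i > 0 else 0              # no layer before the input
        nxt = sizes[i + 1] if i + 1 < len(sizes) else 0  # no layer after the output
        degs.append(prev + nxt)
    return degs
```

For example, `layer_degrees(4, [5, 6], 3)` returns `[5, 10, 8, 6]`: input nodes have degree $n_1 = 5$, first-hidden nodes $m + n_2 = 10$, second-hidden nodes $n_1 + k = 8$, and output nodes $n_2 = 6$, matching the argument above.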

We compute the topological indices using the edge partition method, classifying the edges by the degrees of their end-nodes. The number of edges connecting the input layer to the first hidden layer is $m n_1$, and their end-nodes have degrees $n_1$ and $m + n_2$. The edges connecting the $i$-th hidden layer to the $(i+1)$-st hidden layer, for $1 \le i \le r-1$, have end-nodes of degrees $n_{i-1} + n_{i+1}$ and $n_i + n_{i+2}$, and the number of such edges is $n_i n_{i+1}$. Similarly, the $k n_r$ edges connecting the last hidden layer to the output layer have end-nodes of degrees $n_{r-1} + k$ and $n_r$. These findings are summarized in Table 1 below, which will be further helpful in computing the topological indices.

Table 1: Edge partition of $DNN(m; n_1, \ldots, n_r; k)$ based on the degrees of the end-nodes of each edge (with $n_0 = m$ and $n_{r+1} = k$).

$(d_u, d_v)$ | Number of edges
$(n_1, m + n_2)$ | $m n_1$
$(n_{i-1} + n_{i+1}, n_i + n_{i+2})$, $1 \le i \le r-1$ | $n_i n_{i+1}$
$(n_{r-1} + k, n_r)$ | $k n_r$
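The edge partition of Table 1 can likewise be generated programmatically. A sketch under the same conventions (`edge_partition` is an illustrative name; each triple is `(du, dv, count)` for one edge class):

```python
def edge_partition(m, ns, k):
    """Edge classes of a DNN: one triple (du, dv, count) per pair of
    consecutive layers, where du, dv are the end-node degrees and count
    is the number of edges in the class."""
    sizes = [m] + list(ns) + [k]
    def deg(i):
        prev = sizes[i - 1] if i > 0 else 0
        nxt = sizes[i + 1] if i + 1 < len(sizes) else 0
        return prev + nxt
    return [(deg(i), deg(i + 1), sizes[i] * sizes[i + 1])
            for i in range(len(sizes) - 1)]
```

For hypothetical widths $m = 4$, hidden layers of 5 and 6 nodes, and $k = 3$, the partition is $(5, 10)$ with 20 edges, $(10, 8)$ with 30 edges, and $(8, 6)$ with 18 edges.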

3. Results and Discussion

In this section, we have derived the expressions to compute the topological indices of the deep neural network. These results are related to the connectivity of nodes of DNN.

Theorem 1. Let $G$ be a deep neural network with an input layer of $m$ nodes, hidden layers of sizes $n_1, n_2, \ldots, n_r$, and an output layer of $k$ nodes, with the conventions $n_0 = m$ and $n_{r+1} = k$. Then the Randić index and the general Randić index of $G$ are given as

(i) $R(G) = \frac{m n_1}{\sqrt{n_1 (m + n_2)}} + \sum_{i=1}^{r-1} \frac{n_i n_{i+1}}{\sqrt{(n_{i-1} + n_{i+1})(n_i + n_{i+2})}} + \frac{k n_r}{\sqrt{(n_{r-1} + k)\, n_r}}$

(ii) $R_{\alpha}(G) = m n_1 \big(n_1 (m + n_2)\big)^{\alpha} + \sum_{i=1}^{r-1} n_i n_{i+1} \big((n_{i-1} + n_{i+1})(n_i + n_{i+2})\big)^{\alpha} + k n_r \big((n_{r-1} + k)\, n_r\big)^{\alpha}$

Proof. Table 1 gives the degrees of the end-nodes of every edge of $G$.
(i) By definition, $R(G) = \sum_{uv \in E(G)} \frac{1}{\sqrt{d_u d_v}}$. Substituting the edge partition of Table 1, each of the $m n_1$ edges of the first class contributes $\frac{1}{\sqrt{n_1 (m + n_2)}}$, each of the $n_i n_{i+1}$ edges of the $i$-th middle class contributes $\frac{1}{\sqrt{(n_{i-1}+n_{i+1})(n_i+n_{i+2})}}$, and each of the $k n_r$ edges of the last class contributes $\frac{1}{\sqrt{(n_{r-1}+k)\, n_r}}$. Summing these contributions yields the stated expression.
(ii) Similarly, $R_{\alpha}(G) = \sum_{uv \in E(G)} (d_u d_v)^{\alpha}$; weighting the three edge classes of Table 1 by their sizes gives the result.
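As a sanity check on Theorem 1(i), the closed form can be compared against a brute-force evaluation on an explicitly constructed network. This is an independent check written for this presentation; the function names and sample widths are illustrative:

```python
import math

def randic_formula(m, ns, k):
    """Closed form of Theorem 1(i), with the conventions n0 = m, n_{r+1} = k."""
    s = [m] + list(ns) + [k]  # s[0] = m, s[1..r] = hidden widths, s[r+1] = k
    r = len(ns)
    total = m * s[1] / math.sqrt(s[1] * (m + s[2]))
    for i in range(1, r):
        total += s[i] * s[i + 1] / math.sqrt((s[i - 1] + s[i + 1]) * (s[i] + s[i + 2]))
    total += k * s[r] / math.sqrt((s[r - 1] + k) * s[r])
    return total

def randic_brute(m, ns, k):
    """Build the DNN graph node by node and sum 1/sqrt(du*dv) over all edges."""
    sizes = [m] + list(ns) + [k]
    offsets, tot = [], 0
    for w in sizes:
        offsets.append(tot)
        tot += w
    adj = [set() for _ in range(tot)]
    for i in range(len(sizes) - 1):  # complete bipartite join of layers i and i+1
        for a in range(sizes[i]):
            for b in range(sizes[i + 1]):
                u, v = offsets[i] + a, offsets[i + 1] + b
                adj[u].add(v)
                adj[v].add(u)
    return sum(1 / math.sqrt(len(adj[u]) * len(adj[v]))
               for u in range(tot) for v in adj[u] if u < v)
```

For instance, with $m = 2$, one hidden layer of 3 nodes, and $k = 2$, both evaluations give $2\sqrt{3}$.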

Theorem 2. Let $G$ be a deep neural network with an input layer of $m$ nodes, hidden layers of sizes $n_1, \ldots, n_r$, and an output layer of $k$ nodes (with $n_0 = m$ and $n_{r+1} = k$). Then the first Zagreb index, second Zagreb index, first multiplicative Zagreb index, and second multiplicative Zagreb index of $G$ are given as follows:

(i) $M_1(G) = m n_1 (n_1 + m + n_2) + \sum_{i=1}^{r-1} n_i n_{i+1} (n_{i-1} + n_i + n_{i+1} + n_{i+2}) + k n_r (n_{r-1} + n_r + k)$

(ii) $M_2(G) = m n_1^2 (m + n_2) + \sum_{i=1}^{r-1} n_i n_{i+1} (n_{i-1} + n_{i+1})(n_i + n_{i+2}) + k n_r^2 (n_{r-1} + k)$

(iii) $PM_1(G) = (n_1 + m + n_2)^{m n_1} \cdot \prod_{i=1}^{r-1} (n_{i-1} + n_i + n_{i+1} + n_{i+2})^{n_i n_{i+1}} \cdot (n_{r-1} + n_r + k)^{k n_r}$

(iv) $PM_2(G) = \big(n_1 (m + n_2)\big)^{m n_1} \cdot \prod_{i=1}^{r-1} \big((n_{i-1} + n_{i+1})(n_i + n_{i+2})\big)^{n_i n_{i+1}} \cdot \big((n_{r-1} + k)\, n_r\big)^{k n_r}$

Proof. To compute the topological indices of the DNN, we use the edge partition method; Table 1 lists the degrees of the end-nodes of each edge of $G$.
(i) By definition, $M_1(G) = \sum_{uv \in E(G)} (d_u + d_v)$. Each edge of the first class contributes $n_1 + m + n_2$, each edge of the $i$-th middle class contributes $n_{i-1} + n_i + n_{i+1} + n_{i+2}$, and each edge of the last class contributes $n_{r-1} + n_r + k$; multiplying by the class sizes $m n_1$, $n_i n_{i+1}$, and $k n_r$ and summing gives (i).
(ii) In the same way, $M_2(G) = \sum_{uv \in E(G)} d_u d_v$ is obtained by weighting the degree products $n_1 (m + n_2)$, $(n_{i-1}+n_{i+1})(n_i+n_{i+2})$, and $(n_{r-1}+k)\, n_r$ by the class sizes.
(iii) and (iv) By definition, $PM_1(G) = \prod_{uv \in E(G)} (d_u + d_v)$ and $PM_2(G) = \prod_{uv \in E(G)} d_u d_v$. Taking the product over the same edge classes, each class contributes its degree sum (respectively, degree product) raised to the power of its size, which gives the stated expressions.
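The four closed forms of Theorem 2 can be coded in a few lines and sanity-checked on a small network. A sketch (the function name and sample widths are illustrative choices):

```python
def zagreb_formulas(m, ns, k):
    """Closed forms of Theorem 2 for a DNN: returns (M1, M2, PM1, PM2),
    with the conventions n0 = m and n_{r+1} = k."""
    s = [m] + list(ns) + [k]
    r = len(ns)
    m1 = m * s[1] * (s[1] + m + s[2])
    m2 = m * s[1] * s[1] * (m + s[2])
    pm1 = (s[1] + m + s[2]) ** (m * s[1])
    pm2 = (s[1] * (m + s[2])) ** (m * s[1])
    for i in range(1, r):
        su = s[i - 1] + s[i] + s[i + 1] + s[i + 2]      # du + dv for this class
        pr = (s[i - 1] + s[i + 1]) * (s[i] + s[i + 2])  # du * dv for this class
        cnt = s[i] * s[i + 1]                           # edges in this class
        m1 += cnt * su
        m2 += cnt * pr
        pm1 *= su ** cnt
        pm2 *= pr ** cnt
    m1 += k * s[r] * (s[r - 1] + s[r] + k)
    m2 += k * s[r] * (s[r - 1] + k) * s[r]
    pm1 *= (s[r - 1] + s[r] + k) ** (k * s[r])
    pm2 *= ((s[r - 1] + k) * s[r]) ** (k * s[r])
    return m1, m2, pm1, pm2
```

For the small network with $m = 2$, one hidden layer of 3 nodes, and $k = 2$, every edge joins a node of degree 3 to a node of degree 4, so $M_1 = 12 \cdot 7 = 84$ and $M_2 = 12 \cdot 12 = 144$, matching a direct count.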

Theorem 3. Let $G$ be a deep neural network with an input layer of $m$ nodes, hidden layers of sizes $n_1, \ldots, n_r$, and an output layer of $k$ nodes (with $n_0 = m$ and $n_{r+1} = k$). Then the forgotten index and the hyper-Zagreb index of $G$ are given as follows:

(i) $F(G) = m n_1 \big(n_1^2 + (m + n_2)^2\big) + \sum_{i=1}^{r-1} n_i n_{i+1} \big((n_{i-1} + n_{i+1})^2 + (n_i + n_{i+2})^2\big) + k n_r \big((n_{r-1} + k)^2 + n_r^2\big)$

(ii) $HM(G) = m n_1 (n_1 + m + n_2)^2 + \sum_{i=1}^{r-1} n_i n_{i+1} (n_{i-1} + n_i + n_{i+1} + n_{i+2})^2 + k n_r (n_{r-1} + n_r + k)^2$

Proof. We again use the edge partition of Table 1.
(i) By definition, $F(G) = \sum_{uv \in E(G)} (d_u^2 + d_v^2)$. Each edge of the first class contributes $n_1^2 + (m + n_2)^2$, each edge of the $i$-th middle class contributes $(n_{i-1}+n_{i+1})^2 + (n_i+n_{i+2})^2$, and each edge of the last class contributes $(n_{r-1}+k)^2 + n_r^2$; weighting by the class sizes $m n_1$, $n_i n_{i+1}$, and $k n_r$ gives (i).
(ii) Similarly, $HM(G) = \sum_{uv \in E(G)} (d_u + d_v)^2$ is obtained by weighting the squared degree sums of the three classes by their sizes.
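A short numerical check of Theorem 3 (illustrative names and widths), together with the classical identity $HM(G) = F(G) + 2 M_2(G)$, which holds for any graph since $(d_u + d_v)^2 = d_u^2 + d_v^2 + 2 d_u d_v$:

```python
def forgotten_and_hyper(m, ns, k):
    """Closed forms of Theorem 3 for a DNN: returns (F, HM), with the
    conventions n0 = m and n_{r+1} = k."""
    s = [m] + list(ns) + [k]
    r = len(ns)
    f = m * s[1] * (s[1] ** 2 + (m + s[2]) ** 2)
    hm = m * s[1] * (s[1] + m + s[2]) ** 2
    for i in range(1, r):
        du = s[i - 1] + s[i + 1]       # degree on the lower side of the class
        dv = s[i] + s[i + 2]           # degree on the upper side of the class
        cnt = s[i] * s[i + 1]          # edges in this class
        f += cnt * (du ** 2 + dv ** 2)
        hm += cnt * (du + dv) ** 2
    f += k * s[r] * ((s[r - 1] + k) ** 2 + s[r] ** 2)
    hm += k * s[r] * (s[r - 1] + s[r] + k) ** 2
    return f, hm
```

For $m = 2$, one hidden layer of 3 nodes, and $k = 2$, this gives $F = 300$ and $HM = 588 = 300 + 2 \cdot 144$, consistent with $M_2 = 144$ from Theorem 2.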

Theorem 4. Let $G$ be a deep neural network with an input layer of $m$ nodes, hidden layers of sizes $n_1, \ldots, n_r$, and an output layer of $k$ nodes (with $n_0 = m$ and $n_{r+1} = k$). The atom-bond connectivity index $ABC(G)$, geometric-arithmetic index $GA(G)$, sum connectivity index $SCI(G)$, and augmented Zagreb index $AZI(G)$ of $G$ are given as follows:

(i) $ABC(G) = m n_1 \sqrt{\frac{n_1 + m + n_2 - 2}{n_1 (m + n_2)}} + \sum_{i=1}^{r-1} n_i n_{i+1} \sqrt{\frac{n_{i-1} + n_i + n_{i+1} + n_{i+2} - 2}{(n_{i-1} + n_{i+1})(n_i + n_{i+2})}} + k n_r \sqrt{\frac{n_{r-1} + n_r + k - 2}{(n_{r-1} + k)\, n_r}}$

(ii) $GA(G) = \frac{2 m n_1 \sqrt{n_1 (m + n_2)}}{n_1 + m + n_2} + \sum_{i=1}^{r-1} \frac{2 n_i n_{i+1} \sqrt{(n_{i-1} + n_{i+1})(n_i + n_{i+2})}}{n_{i-1} + n_i + n_{i+1} + n_{i+2}} + \frac{2 k n_r \sqrt{(n_{r-1} + k)\, n_r}}{n_{r-1} + n_r + k}$

(iii) $SCI(G) = \frac{m n_1}{\sqrt{n_1 + m + n_2}} + \sum_{i=1}^{r-1} \frac{n_i n_{i+1}}{\sqrt{n_{i-1} + n_i + n_{i+1} + n_{i+2}}} + \frac{k n_r}{\sqrt{n_{r-1} + n_r + k}}$

(iv) $AZI(G) = m n_1 \left(\frac{n_1 (m + n_2)}{n_1 + m + n_2 - 2}\right)^3 + \sum_{i=1}^{r-1} n_i n_{i+1} \left(\frac{(n_{i-1} + n_{i+1})(n_i + n_{i+2})}{n_{i-1} + n_i + n_{i+1} + n_{i+2} - 2}\right)^3 + k n_r \left(\frac{(n_{r-1} + k)\, n_r}{n_{r-1} + n_r + k - 2}\right)^3$

Proof. Each part follows by substituting the edge partition of Table 1 into the corresponding definition and weighting each edge class by its size $m n_1$, $n_i n_{i+1}$, or $k n_r$.
(i) For $ABC(G) = \sum_{uv \in E(G)} \sqrt{(d_u + d_v - 2)/(d_u d_v)}$, this substitution gives the stated expression.
(ii) For $GA(G) = \sum_{uv \in E(G)} 2\sqrt{d_u d_v}/(d_u + d_v)$, the same substitution yields (ii).
(iii) For $SCI(G) = \sum_{uv \in E(G)} 1/\sqrt{d_u + d_v}$, the same substitution yields (iii).
(iv) For $AZI(G) = \sum_{uv \in E(G)} \big(d_u d_v/(d_u + d_v - 2)\big)^3$, the same substitution yields (iv).
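The four indices of Theorem 4 can be evaluated directly from the edge partition of Table 1. A sketch (the function name and the sample widths used in the check are illustrative):

```python
import math

def theorem4_indices(m, ns, k):
    """ABC, GA, SCI and AZI of a DNN, computed class-by-class from the
    edge partition of Table 1 (conventions n0 = m, n_{r+1} = k)."""
    s = [m] + list(ns) + [k]
    def deg(i):
        prev = s[i - 1] if i > 0 else 0
        nxt = s[i + 1] if i + 1 < len(s) else 0
        return prev + nxt
    # one (du, dv, count) triple per pair of consecutive layers
    classes = [(deg(i), deg(i + 1), s[i] * s[i + 1]) for i in range(len(s) - 1)]
    abc = sum(c * math.sqrt((du + dv - 2) / (du * dv)) for du, dv, c in classes)
    ga = sum(c * 2 * math.sqrt(du * dv) / (du + dv) for du, dv, c in classes)
    sci = sum(c / math.sqrt(du + dv) for du, dv, c in classes)
    azi = sum(c * (du * dv / (du + dv - 2)) ** 3 for du, dv, c in classes)
    return abc, ga, sci, azi
```

For $m = 2$, one hidden layer of 3 nodes, and $k = 2$, all 12 edges join degrees 3 and 4, so $ABC = 12\sqrt{5/12} = 2\sqrt{15}$, $SCI = 12/\sqrt{7}$, and $AZI = 12 (12/5)^3 = 165.888$.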

4. Conclusions

A deep neural network is helpful in modeling compounds with desirable physical and chemical properties from the structure of the compounds. This paper gives computational insight into degree-dependent topological indices, including the Randić index, Zagreb indices, multiplicative Zagreb indices, forgotten index, hyper-Zagreb index, ABC index, GA index, sum-connectivity index, and augmented Zagreb index, of a general DNN with $r$ hidden layers. These indices correlate the structure with properties such as boiling point, molar refractivity (MR), molar volume (MV), polar surface area, surface tension, enthalpy of vaporization, and flash point, among many others. The results computed in the above theorems are general closed formulas that can be used to compute the topological indices of the neural networks under study by supplying specific values for the input parameters. The values of the computed indices grow with the number of hidden layers and also depend on the number of nodes in each layer.

A deep neural network is an important tool used in experimental design, data reduction, fault diagnosis, and process control. QSAR studies can be integrated with the neural network approach to achieve a more physical understanding of the system. The use of DNNs provides an alternative way of predicting physical properties, and its linkage with topological indices can further enhance these theoretical achievements.

This study can be extended further by analyzing the distance-based topological indices such as the Wiener index, Harary index, and PI index. Computation of spectral invariants of deep neural networks such as energy, Estrada energy, and Kirchhoff index is also open for further research in this area.

Data Availability

No data were used to support the findings of this study.

Conflicts of Interest

The authors declare that they have no conflicts of interest regarding the publication of this paper.

Acknowledgments

The study was supported by the Science & Technology Bureau of Chengdu (2020-YF09-00005-SN), the Sichuan Science and Technology Program (2021YFH0107), the Erasmus+ SHYFTE Project (598649-EPP-1-2018-1-FR-EPPKA2-CBHE-JP), and the National Key Research and Development Program under Grant 2018YFB0904205.