Research Article

An Enhanced Deep Neural Network for Predicting Workplace Absenteeism

Figure 3

In a deep neural network there are multiple hidden layers, in contrast to the single hidden layer of a shallow neural network. In forward propagation, the linear step and activation are computed for all hidden layers, and the loss is then computed at the output layer. Through backpropagation, the derivative of the loss with respect to the weight and bias terms is calculated for all layers in order to obtain optimized values of the weights and biases. In the learning process, forward propagation and backpropagation are repeated for a number of iterations, until there is no further reduction in cost. W^[L] and b^[L] represent the weight matrix and bias vector for layer L. A^[L-1] is the input to layer L, and A^[L] is the output produced from layer L. n^[L] represents the number of units (also known as neurons) in a given layer. α is the learning rate for the gradient descent algorithm.
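The training loop described above (forward propagation of linear and activation steps, loss at the output layer, backpropagation of gradients, and gradient descent updates with learning rate α) can be sketched as a minimal NumPy implementation. Sigmoid activations and binary cross-entropy loss are assumptions made here for illustration; the network in the paper may use different activations or loss.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, Y, layer_sizes, alpha=0.1, iterations=500, seed=0):
    """Train a small fully connected network with sigmoid activations.

    X: (n_features, m) inputs; Y: (1, m) binary labels.
    layer_sizes: units per layer, e.g. [n_features, 4, 1].
    Returns the learned weights W, biases b, and the cost per iteration.
    """
    rng = np.random.default_rng(seed)
    L = len(layer_sizes) - 1                      # number of weighted layers
    W = {l: rng.standard_normal((layer_sizes[l], layer_sizes[l - 1])) * 0.5
         for l in range(1, L + 1)}
    b = {l: np.zeros((layer_sizes[l], 1)) for l in range(1, L + 1)}
    m = X.shape[1]
    costs = []
    for _ in range(iterations):
        # Forward propagation: linear step Z = W A + b, then activation A.
        A = {0: X}
        for l in range(1, L + 1):
            Z = W[l] @ A[l - 1] + b[l]
            A[l] = sigmoid(Z)
        # Cross-entropy loss computed at the output layer.
        eps = 1e-12
        cost = -np.mean(Y * np.log(A[L] + eps) + (1 - Y) * np.log(1 - A[L] + eps))
        costs.append(cost)
        # Backpropagation: gradient of loss w.r.t. each layer's W and b.
        dZ = A[L] - Y                              # sigmoid + cross-entropy output gradient
        for l in range(L, 0, -1):
            dW = (dZ @ A[l - 1].T) / m
            db = dZ.sum(axis=1, keepdims=True) / m
            if l > 1:
                # Propagate gradient to previous layer before updating W[l].
                dZ = (W[l].T @ dZ) * A[l - 1] * (1 - A[l - 1])
            # Gradient descent update with learning rate alpha.
            W[l] -= alpha * dW
            b[l] -= alpha * db
    return W, b, costs
```

Calling `train` on a small labeled dataset returns a cost history that should decrease over iterations; training can be stopped once the cost no longer falls, as the text describes.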