Research Article

Gated Recurrent Unit with RSSIs from Heterogeneous Network for Mobile Positioning

Algorithm 1

Training of GRU.
(1) Require: loss function J(W, b); initial parameters W, b
(2) Normalize the features of the input: equation (5)
(3) While J not converged Do
(4)   Select k samples randomly from the training set
(5)   For each sample Do
(6)     Input layer: equation (6)
(7)     Hidden layer: equations (7)–(12)
(8)     Output layer: equation (13)
(9)     Compute the loss function: equation (14)
(10)  End For
(11)  Compute the gradients of the weights
(12)  Update the weights
(13) End While
(14) Return W, b
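
The sketch below is a minimal, illustrative rendering of Algorithm 1, not the authors' implementation. Equations (5)–(14) refer to formulas in the paper that are not reproduced here, so the code substitutes common choices: min-max normalization for equation (5), the standard GRU update equations for (7)–(12), a linear output layer for (13), and an MSE loss for (14). The class name GRUPositioner, the mini-batch size k = 32, the feature/hidden dimensions, and the synthetic RSSI data are all assumptions made for the example.

```python
# Minimal sketch of Algorithm 1 with PyTorch, under the assumptions stated above.
import torch
import torch.nn as nn

class GRUPositioner(nn.Module):
    """GRU hidden layer followed by a linear output layer mapping to 2-D coordinates."""
    def __init__(self, n_features: int, n_hidden: int, n_outputs: int = 2):
        super().__init__()
        self.gru = nn.GRU(n_features, n_hidden, batch_first=True)  # hidden layer
        self.out = nn.Linear(n_hidden, n_outputs)                  # output layer

    def forward(self, x):
        # x: (batch, seq_len, n_features) -- normalized RSSI sequences
        _, h_last = self.gru(x)              # final hidden state: (1, batch, n_hidden)
        return self.out(h_last.squeeze(0))   # predicted position: (batch, 2)

def normalize(x):
    """Min-max normalization of the input features (stand-in for equation (5))."""
    x_min = x.amin(dim=(0, 1), keepdim=True)
    x_max = x.amax(dim=(0, 1), keepdim=True)
    return (x - x_min) / (x_max - x_min + 1e-8)

def train(model, features, targets, k=32, lr=1e-3, tol=1e-4, max_epochs=500):
    """Mini-batch training loop mirroring lines (3)-(13) of Algorithm 1."""
    loss_fn = nn.MSELoss()                                  # loss J(W, b)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    prev_loss = float("inf")
    for _ in range(max_epochs):                             # While J not converged Do
        idx = torch.randint(0, features.shape[0], (k,))     # select k samples randomly
        pred = model(features[idx])                         # input -> hidden -> output
        loss = loss_fn(pred, targets[idx])                  # compute the loss function
        opt.zero_grad()
        loss.backward()                                     # gradients of the weights
        opt.step()                                          # update the weights
        if abs(prev_loss - loss.item()) < tol:              # simple convergence check
            break
        prev_loss = loss.item()
    return model                                            # Return W, b

if __name__ == "__main__":
    # Synthetic example: 1000 sequences of 10 RSSI vectors from 8 base stations.
    torch.manual_seed(0)
    rssi = normalize(torch.randn(1000, 10, 8))
    positions = torch.randn(1000, 2)   # ground-truth 2-D coordinates (illustrative)
    train(GRUPositioner(n_features=8, n_hidden=64), rssi, positions)
```

Because the gradient step in this sketch is taken once per sampled mini-batch, it matches the structure of Algorithm 1, where the per-sample forward passes and losses (lines (5)–(10)) precede a single gradient computation and weight update (lines (11)–(12)).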