Research Article
Deep Learning Methods for Arabic Autoencoder Speech Recognition System for Electro-Larynx Device
Table 2
The best-performing model structures chosen for Arabic speech recognition.
Different specifications for each model:

| Name of each layer | Autoencoder1 | Autoencoder2 | Autoencoder3 |
| --- | --- | --- | --- |
| Encoder layer 1 | LSTM (120 units) | GRU (120 units) | LSTM (120 units) |
| Dropout | 0.25 rate | 0.25 rate | 0.25 rate |
| Encoder layer 2 | LSTM (60 units) | GRU (60 units) | LSTM (120 units) |
| Dropout | 0.25 rate | 0.25 rate | 0.25 rate |
| Encoder layer 3 | LSTM (120 units) | GRU (120 units) | LSTM (120 units) |
| Decoder layer 1 | LSTM (120 units) | GRU (120 units) | GRU (120 units) |
| Decoder layer 2 | LSTM (60 units) | GRU (60 units) | GRU (60 units) |
| Dropout | 0.25 rate | 0.25 rate | 0.25 rate |
| Flatten layer | — | — | — |
| Recognize layer (Dense) | SoftMax (10 neurons) | SoftMax (10 neurons) | SoftMax (10 neurons) |
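The three layer stacks in Table 2 can be transcribed as plain data for quick cross-checking of the architectures. The sketch below is purely illustrative: the tuple encoding `(layer name, cell type, size or rate)` and the helper name `recurrent_cells` are my own conventions, not notation from the paper.

```python
# Table 2 transcribed as plain data. The (name, cell, size) tuple layout is
# an illustrative encoding chosen here, not something defined in the paper.
AUTOENCODER_SPECS = {
    "Autoencoder1": [  # LSTM encoder and LSTM decoder
        ("Encoder layer 1", "LSTM", 120),
        ("Dropout", "rate", 0.25),
        ("Encoder layer 2", "LSTM", 60),
        ("Dropout", "rate", 0.25),
        ("Encoder layer 3", "LSTM", 120),
        ("Decoder layer 1", "LSTM", 120),
        ("Decoder layer 2", "LSTM", 60),
        ("Dropout", "rate", 0.25),
        ("Flatten layer", None, None),
        ("Recognize layer (Dense)", "SoftMax", 10),
    ],
    "Autoencoder2": [  # GRU encoder and GRU decoder
        ("Encoder layer 1", "GRU", 120),
        ("Dropout", "rate", 0.25),
        ("Encoder layer 2", "GRU", 60),
        ("Dropout", "rate", 0.25),
        ("Encoder layer 3", "GRU", 120),
        ("Decoder layer 1", "GRU", 120),
        ("Decoder layer 2", "GRU", 60),
        ("Dropout", "rate", 0.25),
        ("Flatten layer", None, None),
        ("Recognize layer (Dense)", "SoftMax", 10),
    ],
    "Autoencoder3": [  # hybrid: LSTM encoder, GRU decoder
        ("Encoder layer 1", "LSTM", 120),
        ("Dropout", "rate", 0.25),
        ("Encoder layer 2", "LSTM", 120),
        ("Dropout", "rate", 0.25),
        ("Encoder layer 3", "LSTM", 120),
        ("Decoder layer 1", "GRU", 120),
        ("Decoder layer 2", "GRU", 60),
        ("Dropout", "rate", 0.25),
        ("Flatten layer", None, None),
        ("Recognize layer (Dense)", "SoftMax", 10),
    ],
}

def recurrent_cells(model_name):
    """Return the ordered recurrent cell types (LSTM/GRU) of one model."""
    return [cell for _, cell, _ in AUTOENCODER_SPECS[model_name]
            if cell in ("LSTM", "GRU")]

for name in AUTOENCODER_SPECS:
    print(name, "->", recurrent_cells(name))
```

Running this confirms the table's pattern at a glance: Autoencoder1 is all-LSTM, Autoencoder2 is all-GRU, and Autoencoder3 pairs an LSTM encoder with a GRU decoder, with every model ending in a 10-neuron SoftMax layer for the ten recognition classes.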