Research Article
Adoption of Wireless Network and Artificial Intelligence Algorithm in Chinese-English Tense Translation
Table 2
Main parameter settings and descriptions of baseline model training.
| Parameters | Parameter value | Descriptions |
| --- | --- | --- |
| Src vocab size | 30000 | Size of the source (Chinese) vocabulary |
| Tgt vocab size | 30000 | Size of the target (English) vocabulary |
| Batch size | 64 | Minibatch size: the number of training samples drawn for each training step |
| Embedding size | 500 | Dimension of the source and target word embeddings |
| Encoder type | BiLSTM | Neural network type used by the encoder; a bidirectional LSTM here |
| Decoder type | LSTM | Neural network type used by the decoder; a standard LSTM here |
| Enc/Dec layers | 2 | Number of network layers in the encoder and decoder |
| LSTM size | 500 | Dimension of the LSTM hidden layer |
| Optimization | Adam | Type of optimization function |
| Learning rate | 0.001 | Learning rate for neural network training |
| Beam size | 10 | Size of the candidate set kept at each step of beam search |
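For illustration, the settings in Table 2 can be collected into a single training-configuration dictionary. This is a minimal sketch: the parameter names below follow common sequence-to-sequence toolkit conventions and are an assumption, not the authors' actual code.

```python
# Hypothetical baseline configuration mirroring Table 2
# (parameter names are illustrative, not taken from the authors' code).
baseline_config = {
    "src_vocab_size": 30000,   # size of the source (Chinese) vocabulary
    "tgt_vocab_size": 30000,   # size of the target (English) vocabulary
    "batch_size": 64,          # minibatch size per training step
    "embedding_size": 500,     # source/target word-embedding dimension
    "encoder_type": "BiLSTM",  # bidirectional LSTM encoder
    "decoder_type": "LSTM",    # standard LSTM decoder
    "enc_dec_layers": 2,       # number of encoder/decoder layers
    "lstm_size": 500,          # hidden-layer dimension of each LSTM
    "optimizer": "Adam",       # optimization function
    "learning_rate": 0.001,    # learning rate for training
    "beam_size": 10,           # beam width used in beam-search decoding
}

# Because the encoder is bidirectional, its forward and backward hidden
# states are typically concatenated, so the effective encoder output
# dimension is twice the LSTM hidden size.
encoder_output_dim = 2 * baseline_config["lstm_size"]
print(encoder_output_dim)  # 1000
```

Note that with a bidirectional encoder and a unidirectional decoder, the encoder's concatenated 1000-dimensional outputs are usually projected (or split) back down to the decoder's 500-dimensional hidden size when initializing the decoder state.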