Research Article

An Improved Transformer-Based Neural Machine Translation Strategy: Interacting-Head Attention

Figure 6: Training time per epoch of the four models on the WMT17 EN-CS dataset.