Research Article

An Improved Transformer-Based Neural Machine Translation Strategy: Interacting-Head Attention

Figure 5

Per-epoch training time of the four models on the WMT17 EN-DE dataset.