Research Article

[Retracted] A Post-training Quantization Method for the Design of Fixed-Point-Based FPGA/ASIC Hardware Accelerators for LSTM/GRU Algorithms

Table 2. Comparison of quantization results on the IMDb dataset.

| Work | Model | #Layers | #Units | Quantization method | Weight bits | Activation bits | FP model accuracy (%) | Quantized model accuracy (%) | Accuracy variation (%) |
|---|---|---|---|---|---|---|---|---|---|
| [34] | LSTM | 1 | 128 | In-training | 4 | 32 | 82.87 | 79.64 | −3.23 |
| [1] | LSTM | 1 | 512 | In-training | 4 | 4 | 89.54 | 88.48 | −1.06 |
| [39] | LSTM | 1 | 70 | In-training | 4 | 32 | 84.98 | 86.24 | +1.26 |
| [40] | LSTM | 3 | 512 | In-training | 4 | 4 | 86.37 | 86.31 | −0.06 |
| Our work | LSTM | 1 | 32 | Post-training | 5 | 14 | 89.19 | 88.86 | −0.33 |
| [34] | GRU | 1 | 128 | In-training | 4 | 32 | 80.35 | 78.96 | −1.39 |
| [1] | GRU | 1 | 512 | In-training | 4 | 4 | 90.54 | 88.25 | −2.29 |
| Our work | GRU | 1 | 32 | Post-training | 5 | 20 | 90.24 | 90.23 | −0.01 |
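To make the bit-width columns concrete, the sketch below shows one common way to apply signed fixed-point post-training quantization to a weight matrix, as might be done for the 5-bit weight rows of Table 2. The function name `quantize_fixed_point` and the choice of 3 fractional bits are illustrative assumptions: the table specifies only total bit widths (5-bit weights, 14/20-bit activations), not the paper's exact rounding, clipping, or fractional-bit allocation.

```python
import numpy as np

def quantize_fixed_point(x, total_bits, frac_bits):
    """Quantize an array to signed fixed-point with `total_bits` total bits,
    `frac_bits` of which are fractional (round-to-nearest, saturating clip).
    Returns the dequantized values, i.e. the float values the fixed-point
    hardware would actually represent."""
    scale = 2.0 ** frac_bits
    qmin = -(2 ** (total_bits - 1))      # most negative integer code
    qmax = 2 ** (total_bits - 1) - 1     # most positive integer code
    q = np.clip(np.round(x * scale), qmin, qmax)
    return q / scale

# Example: 5-bit weights, as in the "Our work" rows of Table 2.
# frac_bits = 3 is an illustrative choice, not taken from the paper.
w = np.random.uniform(-1, 1, size=(32, 32)).astype(np.float32)
w_q = quantize_fixed_point(w, total_bits=5, frac_bits=3)
print("max quantization error:", np.abs(w - w_q).max())
```

Under this kind of scheme, the "Accuracy variation" column is simply the quantized model accuracy minus the floating-point (FP) model accuracy, evaluated on the same test set.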