Research Article

[Retracted] A Post-training Quantization Method for the Design of Fixed-Point-Based FPGA/ASIC Hardware Accelerators for LSTM/GRU Algorithms

Table 3. Comparison of quantization results on the Penn Treebank (PTB) dataset. PPW: perplexity per word; FP: floating point.
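For reference, PPW is the standard word-level perplexity metric; the notation below is conventional rather than taken from the paper. For a test sequence of $N$ words,

$$\mathrm{PPW} = \exp\!\left(-\frac{1}{N}\sum_{i=1}^{N}\log p\left(w_i \mid w_1,\dots,w_{i-1}\right)\right),$$

so lower PPW is better, and a positive PPW variation in the table means the quantized model performs slightly worse than its floating-point baseline.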

| Work | Model | #Layers | #Units | Quantization method | Weight bits | Activation bits | FP model PPW | Quantized model PPW | PPW variation |
|------|-------|---------|--------|---------------------|-------------|-----------------|--------------|---------------------|---------------|
| [13] | LSTM | 1 | 300 | In-training | 3 | 3 | 89.8 | 87.9 | −1.9 |
| [1] | LSTM | 1 | 300 | In-training | 4 | 4 | 109 | 114 | +5 |
| [41] | LSTM | 1 | 300 | In-training | 4 | 4 | 97 | 100 | +3 |
| [42] | LSTM | 1 | 300 | In-training | 2 | 2 | 97.2 | 110.3 | +13.1 |
| Our work | LSTM | 1 | 300 | Post-training | 11 | 10 | 92.8 | 93.7 | +0.9 |
| [13] | GRU | 1 | 300 | In-training | 3 | 3 | 92.5 | 92.9 | +0.4 |
| [1] | GRU | 1 | 300 | In-training | 4 | 4 | 100 | 102 | +2 |
| Our work | GRU | 1 | 300 | Post-training | 8 | 3 | 91.3 | 90.6 | −0.7 |
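Since no implementation details survive in this excerpt, the following is only a minimal sketch of generic post-training fixed-point quantization of the kind the table's bit widths refer to. The function quantize_fixed_point, its round-and-clip scheme, and the fractional length are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def quantize_fixed_point(x, total_bits, frac_bits):
    """Round x to the nearest value representable in signed fixed point
    with `total_bits` total bits and `frac_bits` fractional bits."""
    scale = 2.0 ** frac_bits
    qmin = -(2 ** (total_bits - 1))      # most negative integer code
    qmax = 2 ** (total_bits - 1) - 1     # most positive integer code
    q = np.clip(np.round(x * scale), qmin, qmax)  # integer code
    return q / scale                     # dequantized fixed-point value

# Example: quantize a trained weight matrix to 11 bits, as in the
# "Our work" LSTM row; the fractional length (7 here) is illustrative.
rng = np.random.default_rng(0)
w_fp = rng.normal(scale=0.1, size=(300, 300))
w_q = quantize_fixed_point(w_fp, total_bits=11, frac_bits=7)
print("max abs quantization error:", np.abs(w_fp - w_q).max())
```

Because such a mapping is applied after training, the FP model's perplexity can be re-evaluated with quantized weights and activations directly, which is how the "Quantized model PPW" column would be obtained.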