Research Article
[Retracted] A Post-training Quantization Method for the Design of Fixed-Point-Based FPGA/ASIC Hardware Accelerators for LSTM/GRU Algorithms
Table 2
Comparison of quantization results on the IMDb dataset.
| Work | Model | #Layers | #Units | Quantization method | Weights bits | Activation bits | FP model accuracy (%) | Quantized model accuracy (%) | Accuracy variation (%) |
|---|---|---|---|---|---|---|---|---|---|
| [34] | LSTM | 1 | 128 | In-training | 4 | 32 | 82.87 | 79.64 | −3.23 |
| [1] | LSTM | 1 | 512 | In-training | 4 | 4 | 89.54 | 88.48 | −1.06 |
| [39] | LSTM | 1 | 70 | In-training | 4 | 32 | 84.98 | 86.24 | +1.26 |
| [40] | LSTM | 3 | 512 | In-training | 4 | 4 | 86.37 | 86.31 | −0.06 |
| Our work | LSTM | 1 | 32 | Post-training | 5 | 14 | 89.19 | 88.86 | −0.33 |
| [34] | GRU | 1 | 128 | In-training | 4 | 32 | 80.35 | 78.96 | −1.39 |
| [1] | GRU | 1 | 512 | In-training | 4 | 4 | 90.54 | 88.25 | −2.29 |
| Our work | GRU | 1 | 32 | Post-training | 5 | 20 | 90.24 | 90.23 | −0.01 |
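The "Post-training" rows quantize pre-trained weights to a fixed-point format (e.g., 5-bit weights) without retraining. As a minimal sketch of uniform fixed-point post-training quantization (the function name, the integer/fractional bit split, and the sample weights are our illustrative assumptions, not taken from the paper):

```python
import numpy as np

def quantize_fixed_point(x, total_bits, frac_bits):
    """Quantize x to a signed fixed-point grid: round to the nearest
    multiple of 2**-frac_bits, then clip to the total_bits signed range."""
    scale = 2.0 ** frac_bits
    qmin = -(2 ** (total_bits - 1))          # most negative integer code
    qmax = 2 ** (total_bits - 1) - 1         # most positive integer code
    codes = np.clip(np.round(x * scale), qmin, qmax)
    return codes / scale                     # dequantized fixed-point values

# Hypothetical pre-trained weights, quantized to 5 bits (1 sign + 4 fractional)
w = np.array([0.30, -0.71, 0.05, 0.9999])
wq = quantize_fixed_point(w, total_bits=5, frac_bits=4)
# wq → [0.3125, -0.6875, 0.0625, 0.9375]
```

The quantized model then runs inference with `wq` in place of `w`; the "Accuracy variation" column in the table measures how little accuracy such a replacement costs.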