Research Article
A Deep Learning Anomaly Detection Framework for Satellite Telemetry with Fake Anomalies
Table 2
Precision, recall, and F1-score of DDMN and the compared methods.
(a)

| | DDMN | | | | | |
|---|---|---|---|---|---|---|
| | Precision | Recall | F1-score | Precision | Recall | F1-score |
| 2 | 1.0 | 0.939 | 0.969 | 0.340 | 0.947 | 0.501 |
| 2.5 | 1.0 | 0.936 | 0.967 | 0.654 | 0.946 | 0.773 |
| 3 | 1.0 | 0.928 | 0.963 | 0.654 | 0.945 | 0.773 |
| 3.5 | 1.0 | 0.919 | 0.956 | 0.654 | 0.923 | 0.766 |
| 4 | 1.0 | 0.900 | 0.951 | 0.654 | 0.907 | 0.760 |
(b)

| | -score | | | | | |
|---|---|---|---|---|---|---|
| | Precision | Recall | F1-score | Precision | Recall | F1-score |
| 2 | 0.342 | 0.953 | 0.504 | 0.707 | 0.891 | 0.788 |
| 2.5 | 0.653 | 0.951 | 0.774 | 0.706 | 0.890 | 0.787 |
| 3 | 0.653 | 0.946 | 0.773 | 0.814 | 0.837 | 0.825 |
| 3.5 | 0.653 | 0.924 | 0.765 | 0.814 | 0.837 | 0.825 |
| 4 | 0.653 | 0.908 | 0.760 | 0.814 | 0.837 | 0.825 |
(c)

| | GMM | | | k-means | | |
|---|---|---|---|---|---|---|
| | Precision | Recall | F1-score | Precision | Recall | F1-score |
| | 0.668 | 0.547 | 0.601 | 0.410 | 0.447 | 0.428 |
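The precision, recall, and F1-score values reported in Table 2 follow their standard definitions over point-wise binary labels (1 = anomaly, 0 = nominal). A minimal sketch of how such metrics are computed from ground-truth and predicted labels, assuming point-wise binary labeling (the function name and inputs are illustrative, not from the paper):

```python
def precision_recall_f1(y_true, y_pred):
    """Compute precision, recall, and F1-score for binary labels (1 = anomaly).

    Precision = TP / (TP + FP); Recall = TP / (TP + FN);
    F1 = 2 * P * R / (P + R). Zero denominators yield 0.0.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

Because F1 is the harmonic mean of precision and recall, a detector with near-perfect recall but low precision (as some baselines in Table 2 show at low thresholds) is still penalized heavily in F1.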