Research Article

[Retracted] Fine-Tuning Word Embeddings for Hierarchical Representation of Data Using a Corpus and a Knowledge Base for Various Machine Learning Applications

Table 4

Results (Spearman’s ρ) of HWE and other word embedding models on the HyperLex dataset using different score functions.

Model      Score function
CBOW       0.10   0.04   0.05   0.06
SGNS       0.08   0.05   0.00   0.09
GloVe      0.05   0.13   0.10   0.06
R-CBOW     0.10   0.03   0.03   0.02
R-SGNS     0.06   0.03   0.01   0.07
JR         0.07   0.07   0.04   0.04
HyperVec   0.17   0.47   0.51   0.04
LEAR       0.44   0.63   0.63   0.21
Poincaré   0.28   0.22   0.21   0.24
HWE        0.27   0.48   0.35   0.26
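The numbers in Table 4 are Spearman rank correlations between a model's scores for word pairs and the gold HyperLex ratings. As a minimal illustration of how such a figure is computed (this is not the authors' code, and the function names are our own), Spearman's ρ can be obtained by ranking both score lists, with ties given average ranks, and taking the Pearson correlation of the ranks:

```python
def ranks(values):
    """Return 1-based ranks of `values`, assigning tied values their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        # Extend j over the run of values tied with values[order[i]].
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r


def spearman_rho(xs, ys):
    """Spearman's rho: Pearson correlation of the rank-transformed lists."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

In an evaluation like the one above, `xs` would hold a model's score-function outputs for the HyperLex word pairs and `ys` the human ratings; a perfectly monotone relationship gives ρ = 1.0.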