A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings
Date
2018
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics 1: 789-798 (2018)
Abstract
Recent work has managed to learn cross-lingual word embeddings without parallel data by mapping monolingual embeddings to a shared space through adversarial training. However, their evaluation has focused on favorable conditions, using comparable corpora or closely-related languages, and we show that they often fail in more realistic scenarios. This work proposes an alternative approach based on a fully unsupervised initialization that explicitly exploits the structural similarity of the embeddings, and a robust self-learning algorithm that iteratively improves this solution. Our method succeeds in all tested scenarios and obtains the best published results on standard datasets, even surpassing previous supervised systems. Our implementation is released as an open source project at https://github.com/artetxem/vecmap.
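As a rough sketch of the self-learning loop summarized in the abstract, the snippet below alternates between solving an orthogonal Procrustes mapping for the current dictionary and re-inducing the dictionary by nearest-neighbor retrieval in the mapped space. The function names, the plain nearest-neighbor retrieval, the fixed iteration count, and the assumption of length-normalized embedding matrices are illustrative simplifications, not the authors' exact procedure; the released vecmap implementation additionally provides the unsupervised initialization and several robustness refinements that this sketch omits.

```python
import numpy as np

def learn_mapping(X, Z, src_idx, trg_idx):
    # Orthogonal Procrustes: find orthogonal W minimizing ||X[src] W - Z[trg]||_F
    # for the current dictionary pairs (src_idx[i], trg_idx[i]).
    M = X[src_idx].T @ Z[trg_idx]
    U, _, Vt = np.linalg.svd(M)
    return U @ Vt

def induce_dictionary(X, Z, W):
    # Re-induce a dictionary by nearest-neighbor retrieval in the mapped space.
    # Assumes rows of X and Z are length-normalized so dot products act as cosine.
    sims = (X @ W) @ Z.T
    trg_idx = sims.argmax(axis=1)
    src_idx = np.arange(X.shape[0])
    return src_idx, trg_idx

def self_learning(X, Z, src_idx, trg_idx, iterations=10):
    # Alternate mapping learning and dictionary induction, starting from an
    # initial (possibly noisy) dictionary, for a fixed number of iterations.
    for _ in range(iterations):
        W = learn_mapping(X, Z, src_idx, trg_idx)
        src_idx, trg_idx = induce_dictionary(X, Z, W)
    return W
```

In this simplified form, a weak initial dictionary is progressively refined because each improved mapping yields a better-induced dictionary for the next iteration; the paper's contribution is making this loop robust enough to start from a fully unsupervised initialization.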