
dc.contributor.author: Artetxe Zurutuza, Mikel
dc.contributor.author: Labaka Intxauspe, Gorka
dc.contributor.author: López Gazpio, Iñigo
dc.contributor.author: Agirre Bengoa, Eneko
dc.date.accessioned: 2024-07-23T11:22:45Z
dc.date.available: 2024-07-23T11:22:45Z
dc.date.issued: 2018
dc.identifier.citation: The 22nd Conference on Computational Natural Language Learning: Proceedings of the Conference, October 31 - November 1, 2018, Brussels, Belgium: 282-291 (2018)
dc.identifier.isbn: 978-1-948087-72-8
dc.identifier.uri: http://hdl.handle.net/10810/68988
dc.description.abstract: Following the recent success of word embeddings, it has been argued that there is no such thing as an ideal representation for words, as different models tend to capture divergent and often mutually incompatible aspects like semantics/syntax and similarity/relatedness. In this paper, we show that each embedding model captures more information than directly apparent. A linear transformation that adjusts the similarity order of the model without any external resource can tailor it to achieve better results in those aspects, providing a new perspective on how embeddings encode divergent linguistic information. In addition, we explore the relation between intrinsic and extrinsic evaluation, as the effect of our transformations in downstream tasks is higher for unsupervised systems than for supervised ones.
dc.description.sponsorship: This research was partially supported by the Spanish MINECO (TUNER TIN2015-65308-C5-1-R, MUSTER PCIN-2015-226 and TADEEP TIN2015-70214-P, cofunded by EU FEDER), the UPV/EHU (excellence research group), and the NVIDIA GPU grant program. Mikel Artetxe and Iñigo Lopez-Gazpio enjoy a doctoral grant from the Spanish MECD.
dc.language.iso: eng
dc.publisher: ACL
dc.relation: info:eu-repo/grantAgreement/MINECO/TIN2015-65308-C5-1-R
dc.relation: info:eu-repo/grantAgreement/MINECO/PCIN-2015-226
dc.relation: info:eu-repo/grantAgreement/MINECO/TIN2015-70214-P
dc.rights: info:eu-repo/semantics/openAccess
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.title: Uncovering divergent linguistic information in word embeddings with lessons for intrinsic and extrinsic evaluation
dc.type: info:eu-repo/semantics/conferenceObject
dc.rights.holder: (c) 2018 The Association for Computational Linguistics, licensed under a Creative Commons Attribution 4.0 International License.
dc.relation.publisherversion: https://doi.org/10.18653/v1/k18-1028
dc.identifier.doi: 10.18653/v1/K18-1028
dc.departamentoes: Ciencia de la computación e inteligencia artificial
dc.departamentoeu: Konputazio zientziak eta adimen artifiziala
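
The abstract above describes a linear transformation that changes the similarity order of an embedding model using nothing but the embeddings themselves. The following is a minimal NumPy sketch of that idea, assuming the standard eigendecomposition route; the function name, shapes, and the parameter alpha are illustrative and do not reproduce the authors' released code.

import numpy as np

def similarity_order_transform(X, alpha):
    """Return embeddings whose dot-product similarities equal the
    alpha-th power of the original similarity matrix M = X @ X.T.

    X: vocab x dim embedding matrix. Assumes alpha >= 1, since zero
    eigenvalues cannot be raised to a negative power.
    """
    # X.T @ X is symmetric positive semi-definite, so eigh yields
    # real, non-negative eigenvalues (up to floating-point noise).
    eigvals, Q = np.linalg.eigh(X.T @ X)
    eigvals = np.clip(eigvals, 0.0, None)  # clamp tiny negatives
    # Scaling column j of Q by eigvals[j] ** ((alpha - 1) / 2) gives
    # W with (X @ W) @ (X @ W).T equal to the matrix power
    # (X @ X.T)^alpha, because X @ (X.T @ X)^(alpha - 1) @ X.T
    # telescopes into (X @ X.T)^alpha.
    W = Q * eigvals ** ((alpha - 1.0) / 2.0)
    return X @ W

# Example: second-order similarities of random "embeddings".
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 50))
X2 = similarity_order_transform(X, alpha=2.0)
# X2 @ X2.T matches (X @ X.T) @ (X @ X.T) up to numerical error.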

