
dc.contributor.author: Cer, Daniel
dc.contributor.author: Diab, Mona
dc.contributor.author: Agirre Bengoa, Eneko
dc.contributor.author: López Gazpio, Iñigo
dc.contributor.author: Specia, Lucia
dc.date.accessioned: 2024-07-23T11:37:45Z
dc.date.available: 2024-07-23T11:37:45Z
dc.date.issued: 2017-08
dc.identifier.citation: 11th International Workshop on Semantic Evaluation (SemEval-2017): Proceedings of the Workshop, August 3-4, 2017, Vancouver, Canada: 1-14 (2017)
dc.identifier.isbn: 978-1-945626-55-5
dc.identifier.uri: http://hdl.handle.net/10810/68989
dc.description.abstract: Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state of the art. The 2017 task focuses on multilingual and cross-lingual pairs, with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well-performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
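
STS shared task systems are conventionally scored by the Pearson correlation between system similarity scores and gold human labels on the task's 0-5 scale. As a minimal sketch of that scoring step (the numeric scores below are invented for illustration, and SciPy is assumed to be available):

# Minimal sketch of STS evaluation: Pearson correlation between a system's
# similarity scores and gold labels on the 0-5 STS scale.
# The values below are illustrative toy data, not real task data.
from scipy.stats import pearsonr

gold = [4.8, 0.5, 3.2, 2.0]    # human-annotated similarity (0 = unrelated, 5 = equivalent)
system = [4.5, 1.0, 3.0, 2.4]  # scores from a hypothetical STS model

r, _ = pearsonr(gold, system)
print(f"Pearson r = {r:.3f}")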
dc.description.sponsorship: This material is based in part upon work supported by QNRF-NPRP 6-1020-1-199 OPTDIAC, which funded Arabic translation; by a grant from the Spanish MINECO (projects TUNER TIN2015-65308-C5-1-R and MUSTER PCIN-2015-226, cofunded by EU FEDER), which funded STS label annotation; and by the QT21 EU project (H2020 No. 645452), which funded STS labels and data preparation for machine translation pairs. Iñigo Lopez-Gazpio is supported by the Spanish MECD. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of QNRF-NPRP, the Spanish MINECO, QT21 EU, or the Spanish MECD.
dc.language.iso: eng
dc.publisher: ACL
dc.relation: info:eu-repo/grantAgreement/EC/H2020/645452
dc.relation: info:eu-repo/grantAgreement/MINECO/PCIN-2015-226
dc.relation: info:eu-repo/grantAgreement/MINECO/TIN2015-65308-C5-1-R
dc.rights: info:eu-repo/semantics/openAccess
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.title: SemEval-2017 Task 1: Semantic Textual Similarity Multilingual and Crosslingual Focused Evaluation
dc.type: info:eu-repo/semantics/conferenceObject
dc.rights.holder: (c) 2017 The Association for Computational Linguistics, licensed under a Creative Commons Attribution 4.0 International License.
dc.relation.publisherversion: https://doi.org/10.18653/v1/s17-2001
dc.identifier.doi: 10.18653/v1/S17-2001
dc.departamentoes: Ciencia de la computación e inteligencia artificial
dc.departamentoeu: Konputazio zientziak eta adimen artifiziala

