Show simple item record

dc.contributor.author: López Gazpio, Iñigo
dc.contributor.author: Marichalar Anglada, Montserrat
dc.contributor.author: Gonzalez Aguirre, Aitor
dc.contributor.author: Rigau Claramunt, Germán
dc.contributor.author: Uria Garin, Larraitz
dc.contributor.author: Agirre Bengoa, Eneko
dc.date.accessioned: 2024-07-23T10:50:05Z
dc.date.available: 2024-07-23T10:50:05Z
dc.date.issued: 2016-12-12
dc.identifier.citation: Knowledge-Based Systems 119: 186-199 (2017)
dc.identifier.issn: 0950-7051
dc.identifier.issn: 1872-7409
dc.identifier.uri: http://hdl.handle.net/10810/68986
dc.description.abstract: User acceptance of artificial intelligence agents might depend on their ability to explain their reasoning to the users. We focus on a specific text processing task, the Semantic Textual Similarity task (STS), where systems need to measure the degree of semantic equivalence between two sentences. We propose to add an interpretability layer (iSTS for short) formalized as the alignment between pairs of segments across the two sentences, where the relation between the segments is labeled with a relation type and a similarity score. This way, a system performing STS could use the interpretability layer to explain to users why it returned that specific score for the given sentence pair. We present a publicly available dataset of sentence pairs annotated following this formalization. We then develop an iSTS system trained on this dataset, which, given a sentence pair, finds what is similar and what is different, in the form of graded and typed segment alignments. When evaluated on the dataset, the system performs better than an informed baseline, showing that the dataset and task are well-defined and feasible. Most importantly, two user studies show how the iSTS system output can be used to automatically produce explanations in natural language. Users performed the two tasks better when they had access to the explanations, providing preliminary evidence that our dataset and method for automatically producing explanations do help users understand the output of STS systems better.
dc.description.sponsorship: The work described in this project has been partially funded by MINECO in projects MUSTER (PCIN-2015-226) and TUNER (TIN 2015-65308-C5-1-R), as well as the Basque Government (A group research team, IT344-10).
dc.language.iso: eng
dc.publisher: Elsevier
dc.relation: info:eu-repo/grantAgreement/MINECO/TIN 2015-65308-C5-1-R
dc.relation: info:eu-repo/grantAgreement/MINECO/PCIN-2015-226
dc.rights: info:eu-repo/semantics/openAccess
dc.subject: interpretability
dc.subject: tutoring systems
dc.subject: semantic textual similarity
dc.subject: natural language understanding
dc.title: Interpretable semantic textual similarity: Finding and explaining differences between sentences
dc.type: info:eu-repo/semantics/preprint
dc.rights.holder: © 2016 Elsevier B.V. All rights reserved.
dc.relation.publisherversion: https://doi.org/10.1016/j.knosys.2016.12.013
dc.identifier.doi: 10.1016/j.knosys.2016.12.013
dc.departamentoes: Ciencia de la computación e inteligencia artificial
dc.departamentoeu: Konputazio zientziak eta adimen artifiziala

