Show simple item record

dc.contributor.author: Alirezazadeh, Pendar
dc.contributor.author: Dornaika, Fadi
dc.contributor.author: Moujahid, Abdelmalik
dc.date.accessioned: 2022-04-21T11:13:23Z
dc.date.available: 2022-04-21T11:13:23Z
dc.date.issued: 2022-03-30
dc.identifier.citation: Sensors 22(7) : (2022) // Article ID 2660
dc.identifier.issn: 1424-8220
dc.identifier.uri: http://hdl.handle.net/10810/56381
dc.description.abstract: Consumer-to-shop clothes retrieval refers to the problem of matching photos taken by customers with their counterparts in the shop. Owing to challenges such as the large number of clothing categories, the varying appearance of clothing items under different camera angles and shooting conditions, differing background environments, and varying body postures, the retrieval accuracy of traditional consumer-to-shop models is low. With advances in convolutional neural networks (CNNs), the accuracy of garment retrieval has improved significantly. Most approaches to this problem use a single CNN with a softmax loss function to extract discriminative features. In the fashion domain, negative pairs can have small or large visual differences, which makes it difficult for softmax to minimize intraclass variance and maximize interclass variance. Margin-based softmax losses such as Additive Margin-Softmax (aka CosFace) improve the discriminative power of the original softmax loss, but because they apply the same margin to positive and negative pairs, they are not well suited for cross-domain fashion search. In this work, we introduce the cross-domain discriminative margin loss (DML) to deal with the large variability of negative pairs in fashion. DML learns two different margins for positive and negative pairs, with the negative margin larger than the positive margin, providing stronger separation for negative pairs. Experiments on the publicly available fashion dataset DARN and two benchmarks of the DeepFashion dataset—(1) Consumer-to-Shop Clothes Retrieval and (2) In-Shop Clothes Retrieval—confirm that the proposed loss function outperforms existing loss functions and achieves the best performance.
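The core idea described in the abstract — a cosine-margin softmax in which negative pairs receive a larger margin than positive pairs — can be sketched as follows. This is an illustrative NumPy implementation, not the paper's exact formulation: the function names (`dml_logits`, `cross_entropy`), the scale `s`, and the margin values `m_pos`/`m_neg` are assumptions chosen for demonstration.

```python
import numpy as np

def dml_logits(cos_sim, labels, s=30.0, m_pos=0.25, m_neg=0.35):
    """Sketch of a two-margin cosine loss in the spirit of the abstract's DML.

    cos_sim: (N, C) cosine similarities between embeddings and class weights.
    labels:  (N,) ground-truth class indices.
    The positive (target-class) similarity is reduced by m_pos, while
    negative similarities are boosted by m_neg (with m_neg > m_pos),
    which enforces a wider decision gap against hard negative pairs.
    Margin placement and values here are illustrative assumptions.
    """
    logits = cos_sim + m_neg                             # margin on negative pairs
    rows = np.arange(len(labels))
    logits[rows, labels] = cos_sim[rows, labels] - m_pos # margin on positive pairs
    return s * logits

def cross_entropy(logits, labels):
    # Standard softmax cross-entropy over the margin-adjusted logits.
    z = logits - logits.max(axis=1, keepdims=True)       # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()
```

Because the margins shrink the target-class logit and inflate the others, a correctly classified example still incurs a higher loss than under plain softmax, pushing the network to learn embeddings with a larger interclass gap. In practice such losses are trained with a deep-learning framework (e.g. CosFace-style heads in PyTorch); the NumPy form above only shows the logit arithmetic.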
dc.language.iso: eng
dc.publisher: MDPI
dc.rights: info:eu-repo/semantics/openAccess
dc.rights.uri: http://creativecommons.org/licenses/by/3.0/es/
dc.subject: cross-domain fashion retrieval
dc.subject: margin-based loss function
dc.subject: adaptive margin
dc.subject: deep learning
dc.subject: discriminative analysis
dc.title: Deep Learning with Discriminative Margin Loss for Cross-Domain Consumer-to-Shop Clothes Retrieval
dc.type: info:eu-repo/semantics/article
dc.date.updated: 2022-04-11T13:59:39Z
dc.rights.holder: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
dc.relation.publisherversion: https://www.mdpi.com/1424-8220/22/7/2660/htm
dc.identifier.doi: 10.3390/s22072660
dc.departamentoes: Ciencia de la computación e inteligencia artificial
dc.departamentoeu: Konputazio zientziak eta adimen artifiziala

