
dc.contributor.author: Franco Barranco, Daniel
dc.contributor.author: Pastor Tronch, Julio
dc.contributor.author: González Marfil, Aitor
dc.contributor.author: Muñoz Barrutia, Arrate
dc.contributor.author: Arganda Carreras, Ignacio
dc.date.accessioned: 2022-09-12T18:14:59Z
dc.date.available: 2022-09-12T18:14:59Z
dc.date.issued: 2022-07
dc.identifier.citation: Computer Methods and Programs in Biomedicine 222 (2022): Article ID 106949
dc.identifier.issn: 1872-7565
dc.identifier.uri: http://hdl.handle.net/10810/57702
dc.description.abstract: [EN] BACKGROUND AND OBJECTIVE: Accurate segmentation of electron microscopy (EM) volumes of the brain is essential to characterize neuronal structures at the cell or organelle level. While supervised deep learning methods have led to major breakthroughs in that direction in recent years, they usually require large amounts of annotated training data and perform poorly on data acquired under different experimental and imaging conditions. This is the problem addressed by domain adaptation: models that learned from a sample distribution (or source domain) struggle to maintain their performance on samples drawn from a different distribution (or target domain). In this work, we address the complex case of deep learning based domain adaptation for mitochondria segmentation across EM datasets from different tissues and species.
METHODS: We present three unsupervised domain adaptation strategies to improve mitochondria segmentation in the target domain, based on (1) state-of-the-art style transfer between images of both domains; (2) self-supervised learning to pre-train a model using unlabeled source and target images, and then fine-tune it with the source labels only; and (3) multi-task neural network architectures trained end-to-end with both labeled and unlabeled images. Additionally, to ensure good generalization of our models, we propose a new training stopping criterion based on morphological priors obtained exclusively in the source domain. The code and its documentation are publicly available at https://github.com/danifranco/EM_domain_adaptation.
RESULTS: We carried out all possible cross-dataset experiments using three publicly available EM datasets, and evaluated our proposed strategies, together with those of others, on the mitochondria semantic labels predicted on the target datasets.
CONCLUSIONS: The methods introduced here outperform the baseline methods and compare favorably to the state of the art. In the absence of validation labels, monitoring our proposed morphology-based metric is an intuitive and effective way to stop the training process and select, on average, optimal models.
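The stopping criterion mentioned in the abstract (monitoring morphological priors computed only on source-domain labels) can be sketched as follows. This is a hypothetical illustration, not the authors' implementation (see the linked GitHub repository for the real code): the function names, the choice of mean connected-component size as the monitored statistic, and the 50% relative tolerance are all assumptions made for the sketch.

```python
from collections import deque


def component_sizes(mask):
    """Sizes (in pixels) of 4-connected foreground components in a binary 2D mask."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    sizes = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                # Breadth-first flood fill from each unvisited foreground pixel.
                queue, size = deque([(y, x)]), 0
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    size += 1
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                sizes.append(size)
    return sizes


def keep_checkpoint(pred_mask, prior_mean_size, tol=0.5):
    """Accept a checkpoint when the mean predicted component size lies within
    a relative tolerance of the prior computed on the source labels."""
    sizes = component_sizes(pred_mask)
    if not sizes:
        return prior_mean_size == 0
    if prior_mean_size <= 0:
        return False
    mean_size = sum(sizes) / len(sizes)
    return abs(mean_size - prior_mean_size) / prior_mean_size <= tol
```

In this spirit, training on the target domain would be stopped at the epoch whose predictions best match the source-domain morphology statistics, since no target validation labels are available.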
dc.description.sponsorship: I. Arganda-Carreras would like to acknowledge the support of the 2020 Leonardo Grant for Researchers and Cultural Creators, BBVA Foundation. This work is supported in part by the University of the Basque Country UPV/EHU grant GIU19/027 and by Ministerio de Ciencia, Innovación y Universidades, Agencia Estatal de Investigación, under grant PID2019-109820RB-I00, MCIN/AEI/10.13039/501100011033/, cofinanced by the European Regional Development Fund (ERDF), "A way of making Europe."
dc.language.iso: eng
dc.publisher: Elsevier
dc.relation: info:eu-repo/grantAgreement/MICINN/PID2019-109820RB-I00
dc.rights: info:eu-repo/semantics/openAccess
dc.rights.uri: http://creativecommons.org/licenses/by/3.0/es/
dc.title: Deep learning based domain adaptation for mitochondria segmentation on EM volumes
dc.type: info:eu-repo/semantics/article
dc.rights.holder: Copyright © 2022 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/)
dc.rights.holder: Attribution 3.0 Spain
dc.relation.publisherversion: https://www.sciencedirect.com/science/article/pii/S0169260722003315?via%3Dihub
dc.identifier.doi: 10.1016/j.cmpb.2022.106949
dc.departamentoes: Ciencia de la computación e inteligencia artificial [Computer science and artificial intelligence]
dc.departamentoeu: Konputazio zientziak eta adimen artifiziala [Computer science and artificial intelligence]

