Edge-enhanced dual discriminator generative adversarial network for fast MRI with parallel imaging using multi-view information
dc.contributor.author | Huang, Jiahao | |
dc.contributor.author | Ding, Weiping | |
dc.contributor.author | Lv, Jun | |
dc.contributor.author | Yang, Jingwen | |
dc.contributor.author | Dong, Hao | |
dc.contributor.author | Del Ser Lorente, Javier | |
dc.contributor.author | Xia, Jun | |
dc.contributor.author | Ren, Tiaojuan | |
dc.contributor.author | Wong, Stephen T. | |
dc.contributor.author | Yang, Guang | |
dc.date.accessioned | 2023-02-09T17:17:08Z | |
dc.date.available | 2023-02-09T17:17:08Z | |
dc.date.issued | 2022-10 | |
dc.identifier.citation | Applied Intelligence 52(13) : 14693-14710 (2022) | es_ES |
dc.identifier.issn | 0924-669X | |
dc.identifier.issn | 1573-7497 | |
dc.identifier.uri | http://hdl.handle.net/10810/59739 | |
dc.description.abstract | In clinical medicine, magnetic resonance imaging (MRI) is one of the most important tools for diagnosis, triage, prognosis, and treatment planning. However, MRI suffers from an inherently slow data acquisition process because data are collected sequentially in k-space. In recent years, most MRI reconstruction methods proposed in the literature have focused on holistic image reconstruction rather than on enhancing edge information. This work departs from that general trend by concentrating on the enhancement of edge information. Specifically, we introduce a novel parallel imaging coupled dual discriminator generative adversarial network (PIDD-GAN) for fast multi-channel MRI reconstruction that incorporates multi-view information. The dual discriminator design aims to improve the edge information in MRI reconstruction: one discriminator is used for holistic image reconstruction, whereas the other is responsible for enhancing edge information. An improved U-Net with local and global residual learning is proposed for the generator. Frequency channel attention blocks (FCA Blocks) are embedded in the generator to incorporate attention mechanisms. A content loss is introduced to train the generator for better reconstruction quality. We performed comprehensive experiments on the Calgary-Campinas public brain MR dataset and compared our method with state-of-the-art MRI reconstruction methods. Ablation studies of residual learning were conducted on the MICCAI13 dataset to validate the proposed modules. Results show that our PIDD-GAN produces high-quality reconstructed MR images with well-preserved edge information. The time for single-image reconstruction is below 5 ms, which meets the demand for fast processing. | es_ES |
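The dual-discriminator objective summarized in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the Sobel edge extractor, the loss weights (`alpha`, `beta`, `gamma`), and the discriminator callables are illustrative assumptions. The key idea it shows is that the generator's loss combines a content term with two adversarial terms, one from an image-domain discriminator and one from a discriminator that sees only edge maps.

```python
import numpy as np

def sobel_edges(img):
    """Extract a Sobel gradient-magnitude edge map (assumed input to the
    edge discriminator; the paper's exact edge operator may differ)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")
    gx = np.zeros(img.shape, dtype=float)
    gy = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = pad[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)

def generator_loss(recon, target, d_img, d_edge,
                   alpha=1.0, beta=0.5, gamma=0.5):
    """Combined generator objective (illustrative weights):
    content loss + adversarial loss from the holistic image
    discriminator d_img + adversarial loss from the edge
    discriminator d_edge, which only sees edge maps."""
    content = np.mean((recon - target) ** 2)          # content (L2) term
    adv_img = -np.log(d_img(recon) + 1e-8)            # image discriminator
    adv_edge = -np.log(d_edge(sobel_edges(recon)) + 1e-8)  # edge discriminator
    return alpha * content + beta * adv_img + gamma * adv_edge
```

In a real training loop each discriminator would be a trained network; here, passing callables that return the "real" probability is enough to see how the two adversarial signals and the content term are weighted together.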
dc.description.sponsorship | This work was supported in part by the Zhejiang Shuren University Basic Scientific Research Special Funds, in part by the European Research Council Innovative Medicines Initiative (DRAGON, H2020-JTI-IMI2 101005122), in part by the AI for Health Imaging Award (CHAIMELEON, H2020-SC1-FA-DTS-2019-1 952172), in part by the UK Research and Innovation Future Leaders Fellowship (MR/V023799/1), in part by the British Heart Foundation (Project Number: TG/18/5/34111, PG/16/78/32402), in part by the Foundation of Peking University School and Hospital of Stomatology [KUSSNT-19B11], in part by the Peking University Health Science Center Youth Science and Technology Innovation Cultivation Fund [BMU2021PYB017], in part by the National Natural Science Foundation of China [61976120], in part by the Natural Science Foundation of Jiangsu Province [BK20191445], in part by the Qing Lan Project of Jiangsu Province, in part by National Natural Science Foundation of China [61902338], in part by the Project of Shenzhen International Cooperation Foundation [GJHZ20180926165402083], in part by the Basque Government through the ELKARTEK funding program [KK-2020/00049], and in part by the consolidated research group MATHMODE [IT1294-19]. | es_ES |
dc.language.iso | eng | es_ES |
dc.publisher | Springer | es_ES |
dc.rights | info:eu-repo/semantics/openAccess | es_ES |
dc.rights.uri | http://creativecommons.org/licenses/by/3.0/es/ | * |
dc.subject | fast MRI | es_ES |
dc.subject | parallel imaging | es_ES |
dc.subject | multi-view learning | es_ES |
dc.subject | generative adversarial networks | es_ES |
dc.subject | edge enhancement | es_ES |
dc.title | Edge-enhanced dual discriminator generative adversarial network for fast MRI with parallel imaging using multi-view information | es_ES |
dc.type | info:eu-repo/semantics/article | es_ES |
dc.rights.holder | © The Author(s) 2021. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | es_ES |
dc.rights.holder | Attribution 3.0 Spain | * |
dc.relation.publisherversion | https://link.springer.com/article/10.1007/s10489-021-03092-w | es_ES |
dc.identifier.doi | 10.1007/s10489-021-03092-w | |
dc.departamentoes | Ingeniería de comunicaciones | es_ES |
dc.departamentoeu | Komunikazioen ingeniaritza | es_ES |