Show simple item record

dc.contributor.author: Abou Ali, Mohamad
dc.contributor.author: Dornaika, Fadi
dc.contributor.author: Arganda Carreras, Ignacio
dc.date.accessioned: 2023-11-27T18:23:23Z
dc.date.available: 2023-11-27T18:23:23Z
dc.date.issued: 2023-11-15
dc.identifier.citation: Algorithms 16(11): (2023) // Article ID 525
dc.identifier.issn: 1999-4893
dc.identifier.uri: http://hdl.handle.net/10810/63167
dc.description.abstract: Deep learning (DL) has made significant advances in computer vision with the advent of vision transformers (ViTs). Unlike convolutional neural networks (CNNs), ViTs use self-attention to extract both local and global features from image data, and then apply residual connections to feed these features directly into a fully connected multilayer perceptron (MLP) head. In hospitals, hematologists prepare peripheral blood smears (PBSs) and read them under a medical microscope to detect blood abnormalities such as leukemia. However, this task is time-consuming and prone to human error. This study investigated transfer learning of the Google ViT and ImageNet CNNs to automate the reading of PBSs. The study used two online PBS datasets, PBC and BCCD, converting each into a balanced dataset to investigate the influence of data amount and noise immunity on both types of network. The PBC results showed that the Google ViT is an excellent DL solution when data are scarce. The BCCD results showed that the Google ViT outperforms ImageNet CNNs on unclean, noisy image data because it extracts both global and local features and uses residual connections, despite its additional time and computational overhead.
dc.description.sponsorship: This work is supported by grant PID2021-126701OB-I00 funded by MCIN/AEI/10.13039/501100011033 and by “ERDF A way of making Europe”, and by grant GIU19/027 funded by the University of the Basque Country UPV/EHU.
dc.language.iso: eng
dc.publisher: MDPI
dc.relation: info:eu-repo/grantAgreement/MCIN/PID2021-126701OB-I00
dc.rights: info:eu-repo/semantics/openAccess
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.subject: convolutional neural network (CNN)
dc.subject: vision transformer (ViT)
dc.subject: ImageNet models
dc.subject: transfer learning (TL)
dc.subject: machine learning (ML)
dc.subject: deep learning (DL)
dc.subject: white blood cell classification
dc.subject: peripheral blood cell (PBC)
dc.subject: blood cell count and detection (BCCD)
dc.title: White Blood Cell Classification: Convolutional Neural Network (CNN) and Vision Transformer (ViT) under Medical Microscope
dc.type: info:eu-repo/semantics/article
dc.date.updated: 2023-11-24T14:28:34Z
dc.rights.holder: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
dc.relation.publisherversion: https://www.mdpi.com/1999-4893/16/11/525
dc.departamentoes: Ciencia de la computación e inteligencia artificial
dc.departamentoeu: Konputazio zientziak eta adimen artifiziala
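
The abstract above describes transferring an ImageNet-pretrained Vision Transformer to peripheral blood smear images. The sketch below illustrates that general transfer-learning workflow with PyTorch and torchvision; it is not the authors' pipeline, and the dataset path, class count, and hyperparameters are placeholders chosen for illustration only.

# Minimal transfer-learning sketch (assumption: PyTorch + torchvision installed;
# "pbc_train/" is a hypothetical ImageFolder directory with one sub-folder per
# white-blood-cell class). Not the authors' exact method.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_CLASSES = 8          # e.g. the eight cell types in the PBC dataset
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

# ViT-B/16 expects 224x224 inputs normalized with ImageNet statistics.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("pbc_train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Load ImageNet weights and replace the classification head for our classes.
model = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)
model.heads.head = nn.Linear(model.heads.head.in_features, NUM_CLASSES)
model.to(DEVICE)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# Short fine-tuning loop: the pretrained backbone adapts to the new label set.
model.train()
for epoch in range(5):
    for images, labels in train_loader:
        images, labels = images.to(DEVICE), labels.to(DEVICE)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

The same loop works for an ImageNet CNN (e.g. models.resnet50) by swapping the backbone and replacing its final fully connected layer instead of the ViT head.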

