dc.contributor.author | Aranjuelo Ansa, Nerea | |
dc.contributor.author | García Castaño, Jorge | |
dc.contributor.author | Unzueta Irurtia, Luis | |
dc.contributor.author | García Torres, Sara | |
dc.contributor.author | Elordi Hidalgo, Unai | |
dc.contributor.author | Otaegui Madurga, Oihana | |
dc.date.accessioned | 2021-08-13T08:08:59Z | |
dc.date.available | 2021-08-13T08:08:59Z | |
dc.date.issued | 2021 | |
dc.identifier.citation | In Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2021) 5 : 80-91 (2021) | es_ES |
dc.identifier.isbn | 978-989-758-488-6 | |
dc.identifier.issn | 2184-4321 | |
dc.identifier.uri | http://hdl.handle.net/10810/52866 | |
dc.description.abstract | [EN] Synthetic simulated environments are gaining popularity in the Deep Learning era, as they can alleviate the
effort and cost of two critical tasks in building multi-camera systems for surveillance applications: setting up
the camera system to cover the use cases and generating the labeled dataset to train the required Deep Neural
Networks (DNNs). However, no simulated environments exist that are ready to solve these tasks for all kinds of
scenarios and use cases. Typically, ‘ad hoc’ environments are built, which cannot be easily applied to other contexts.
In this work we present a methodology to build, with little effort, synthetic simulated environments with sufficient
generality to be usable in different contexts. Our methodology tackles the challenges of appropriately
parameterizing scene configurations, of randomly generating a wide and balanced range of situations of interest
for training DNNs with synthetic data, and of capturing images quickly from virtual cameras while accounting for
rendering bottlenecks. We show a practical implementation example for the detection of incorrectly placed luggage
in aircraft cabins, including a qualitative and quantitative analysis of the data generation process and its
influence on DNN training, as well as the modifications required to adapt it to other surveillance contexts. | es_ES |
dc.description.sponsorship | This work has received funding from the Clean Sky 2 Joint Undertaking under the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 865162, SmaCS (https://www.smacs.eu/). | es_ES |
dc.language.iso | eng | es_ES |
dc.publisher | SciTePress, Science and Technology Publications, Lda | es_ES |
dc.relation | info:eu-repo/grantAgreement/EC/H2020/865162 | es_ES |
dc.rights | info:eu-repo/semantics/openAccess | es_ES |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/3.0/es/ | * |
dc.subject | simulated environments | es_ES |
dc.subject | synthetic data | es_ES |
dc.subject | deep neural networks | es_ES |
dc.subject | object detection | es_ES |
dc.subject | video surveillance | es_ES |
dc.title | Building synthetic simulated environments for configuring and training multi-camera systems for surveillance applications | es_ES |
dc.type | info:eu-repo/semantics/conferenceObject | es_ES |
dc.rights.holder | ©2021 by SCITEPRESS – Science and Technology Publications, Lda. All rights reserved. CC BY-NC-ND 4.0 | es_ES |
dc.rights.holder | Atribución-NoComercial-SinDerivadas 3.0 España (Attribution-NonCommercial-NoDerivatives 3.0 Spain) | * |
dc.relation.publisherversion | https://www.scitepress.org/PublicationsDetail.aspx?ID=Pr3XxXfcWd8=&t=1 | es_ES |
dc.identifier.doi | 10.5220/0010232400800091 | |
dc.contributor.funder | European Commission | |
dc.departamentoes | Lenguajes y sistemas informáticos | es_ES |
dc.departamentoeu | Hizkuntza eta sistema informatikoak | es_ES |