Show simple item record

dc.contributor.advisor   Cases Gutiérrez, Blanca Rosa
dc.contributor.advisor   Arganda Carreras, Ignacio
dc.contributor.advisor   Dornaika, Fadi
dc.contributor.advisor   Unzueta Irurtia, Luis
dc.contributor.author    Goenetxea Imaz, Jon
dc.date.accessioned      2021-03-04T10:02:01Z
dc.date.available        2021-03-04T10:02:01Z
dc.date.issued           2020-12-17
dc.date.submitted        2020-12-17
dc.identifier.uri        http://hdl.handle.net/10810/50468
dc.description           182 p.   es_ES
dc.description.abstract  Monocular RGB cameras are present in most scopes and devices, including embedded environments like robots, cars and home automation. Most of these environments have in common a significant presence of human operators with whom the system has to interact. This context provides the motivation to use the captured monocular images to improve the understanding of the operator and the surrounding scene for more accurate results and applications. However, monocular images do not have depth information, which is a crucial element in understanding the 3D scene correctly. Estimating the three-dimensional information of an object in the scene using a single two-dimensional image is already a challenge. The challenge grows if the object is deformable (e.g., a human body or a human face) and there is a need to track its movements and interactions in the scene. Several methods attempt to solve this task, including modern regression methods based on Deep Neural Networks. However, despite their good results, most are computationally demanding and therefore unsuitable for several environments. Computational efficiency is a critical feature for computationally constrained setups like embedded or onboard systems present in robotics and automotive applications, among others. This study proposes computationally efficient methodologies to reconstruct and track three-dimensional deformable objects, such as human faces and human bodies, using a single monocular RGB camera. To model the deformability of faces and bodies, it considers two types of deformations: non-rigid deformations for face tracking, and rigid multi-body deformations for body pose tracking. Furthermore, it studies their performance on computationally restricted devices like smartphones and onboard systems used in the automotive industry. The information extracted from such devices gives valuable insight into human behaviour, a crucial element in improving human-machine interaction. We tested the proposed approaches in different challenging application fields like onboard driver monitoring systems, human behaviour analysis from monocular videos, and human face tracking on embedded devices. (An illustrative model-fitting sketch follows this record.)   es_ES
dc.language.iso          eng   es_ES
dc.rights                info:eu-repo/semantics/openAccess   es_ES
dc.rights.uri            http://creativecommons.org/licenses/by/3.0/es/
dc.subject               artificial intelligence   es_ES
dc.subject               inteligencia artificial   es_ES
dc.title                 Computationally efficient deformable 3D object tracking with a monocular RGB camera   es_ES
dc.type                  info:eu-repo/semantics/doctoralThesis   es_ES
dc.rights.holder         Atribución 3.0 España
dc.rights.holder         (cc)2020 JON GOENETXEA IMAZ (cc by 4.0)
dc.identifier.studentID  748538   es_ES
dc.identifier.projectID  17900   es_ES
dc.departamentoes        Ciencia de la computación e inteligencia artificial   es_ES
dc.departamentoeu        Konputazio zientziak eta adimen artifiziala   es_ES
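
The abstract above describes fitting deformable 3D models (non-rigid for faces, rigid multi-body for bodies) to single monocular RGB images under tight computational budgets. As a rough illustration of that model-based fitting idea, and not the thesis's actual method, the following minimal Python sketch projects a linear deformable shape model through an assumed pinhole camera and recovers pose and deformation coefficients by minimising the 2D reprojection error against landmarks. Every name, dimension and number here (mean_shape, basis, K, the synthetic landmarks, the use of SciPy's least_squares) is an illustrative assumption.

```python
# Minimal sketch of monocular model-based fitting: a linear deformable 3D model
# (mean shape + deformation basis) is projected with a pinhole camera and its
# pose/deformation parameters are optimised against 2D landmarks.
# All shapes, dimensions and data are made-up placeholders, not the thesis code.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(0)
N_POINTS, N_BASIS = 20, 4

mean_shape = rng.normal(scale=0.1, size=(N_POINTS, 3))       # neutral 3D shape
basis = rng.normal(scale=0.02, size=(N_BASIS, N_POINTS, 3))  # deformation modes
K = np.array([[800.0, 0.0, 320.0],                           # pinhole intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

def project(params):
    """Deform, rotate, translate and project the model to 2D pixel coordinates."""
    rotvec, t, alphas = params[:3], params[3:6], params[6:]
    shape3d = mean_shape + np.tensordot(alphas, basis, axes=1)  # non-rigid deformation
    cam_pts = Rotation.from_rotvec(rotvec).apply(shape3d) + t   # rigid pose
    proj = cam_pts @ K.T
    return proj[:, :2] / proj[:, 2:3]                           # perspective division

# Synthetic ground truth and noisy "detected" 2D landmarks.
true_params = np.concatenate([[0.1, -0.2, 0.05], [0.0, 0.0, 5.0], [0.5, -0.3, 0.2, 0.1]])
observed = project(true_params) + rng.normal(scale=0.5, size=(N_POINTS, 2))

def residuals(params):
    return (project(params) - observed).ravel()                 # reprojection error

x0 = np.concatenate([np.zeros(3), [0.0, 0.0, 4.0], np.zeros(N_BASIS)])
fit = least_squares(residuals, x0)
print("recovered pose/deformation parameters:", np.round(fit.x, 3))
```

In a real tracker the observed landmarks would come from a 2D detector, and the solver would typically be warm-started from the previous frame's estimate, which is one common way to keep the per-frame cost low on embedded or onboard hardware.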

