Stability Analysis for Autonomous Vehicle Navigation Trained over Deep Deterministic Policy Gradient
dc.contributor.author | Cabezas Olivenza, Mireya | |
dc.contributor.author | Zulueta Guerrero, Ekaitz | |
dc.contributor.author | Sánchez Chica, Ander | |
dc.contributor.author | Fernández Gámiz, Unai | |
dc.contributor.author | Teso Fernández de Betoño, Adrián | |
dc.date.accessioned | 2023-01-12T14:57:04Z | |
dc.date.available | 2023-01-12T14:57:04Z | |
dc.date.issued | 2022-12-27 | |
dc.identifier.citation | Mathematics 11(1) : (2023) // Article ID 132 | es_ES |
dc.identifier.issn | 2227-7390 | |
dc.identifier.uri | http://hdl.handle.net/10810/59262 | |
dc.description.abstract | The Deep Deterministic Policy Gradient (DDPG) algorithm is a reinforcement learning algorithm that combines Q-learning with a policy. Nevertheless, this algorithm can produce failures that are not well understood. Rather than searching for those errors, this study presents a way to evaluate the suitability of the results obtained. For the purpose of autonomous vehicle navigation, the DDPG algorithm is applied, obtaining an agent capable of generating trajectories. This agent is evaluated in terms of stability through the Lyapunov function, verifying whether the proposed navigation objectives are achieved. The reward function of the DDPG is used because it is unknown whether the neural networks of the actor and the critic are correctly trained. Two agents are obtained and compared in terms of stability, demonstrating that the Lyapunov function can be used as an evaluation method for agents obtained by the DDPG algorithm. By verifying the stability at a fixed future horizon, it is possible to determine whether the obtained agent is valid and can be used as a vehicle controller, so a task-satisfaction assessment can be performed. Furthermore, the proposed analysis indicates which parts of the navigation area are insufficiently covered in training terms. | es_ES |
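The stability check described in the abstract can be illustrated with a minimal sketch (not the paper's code): a trained agent is treated as a black-box policy, and a quadratic Lyapunov candidate V(x) = ||x − goal||² is required to be non-increasing along simulated trajectories over a fixed future horizon. The `policy` and `step` functions below are hypothetical stand-ins for the DDPG actor and the vehicle dynamics.

```python
# Hedged sketch of a Lyapunov-based agent evaluation, assuming a
# simple 2-D point-mass vehicle and a stand-in proportional policy
# in place of the trained DDPG actor network.

def policy(state, goal):
    # Stand-in for a DDPG actor: steer proportionally toward the goal.
    return (0.5 * (goal[0] - state[0]), 0.5 * (goal[1] - state[1]))

def step(state, action):
    # Illustrative discrete-time kinematics: apply the action directly.
    return (state[0] + action[0], state[1] + action[1])

def lyapunov(state, goal):
    # Quadratic Lyapunov candidate: squared distance to the goal.
    return (state[0] - goal[0]) ** 2 + (state[1] - goal[1]) ** 2

def is_stable(start, goal, horizon=50, tol=1e-9):
    """Return True if V never increases over a fixed future horizon."""
    state = start
    v_prev = lyapunov(state, goal)
    for _ in range(horizon):
        state = step(state, policy(state, goal))
        v = lyapunov(state, goal)
        if v > v_prev + tol:  # Lyapunov decrease condition violated
            return False
        v_prev = v
    return True

print(is_stable((4.0, -3.0), (0.0, 0.0)))  # the contracting policy passes
```

Running the same check over a grid of start states would flag the regions of the navigation area where the agent violates the decrease condition, matching the paper's idea of locating insufficiently trained zones.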
dc.description.sponsorship | The current study was sponsored by the Government of the Basque Country through the ELKARTEK21/10 KK-2021/00014 research program “Estudio de nuevas técnicas de inteligencia artificial basadas en Deep Learning dirigidas a la optimización de procesos industriales”. | es_ES |
dc.language.iso | eng | es_ES |
dc.publisher | MDPI | es_ES |
dc.rights | info:eu-repo/semantics/openAccess | es_ES |
dc.rights.uri | http://creativecommons.org/licenses/by/4.0/ | |
dc.subject | navigation | es_ES |
dc.subject | neural network | es_ES |
dc.subject | autonomous vehicle | es_ES |
dc.subject | reinforcement learning | es_ES |
dc.subject | DDPG | es_ES |
dc.subject | Lyapunov | es_ES |
dc.subject | stability | es_ES |
dc.subject | Q-learning | es_ES |
dc.title | Stability Analysis for Autonomous Vehicle Navigation Trained over Deep Deterministic Policy Gradient | es_ES |
dc.type | info:eu-repo/semantics/article | es_ES |
dc.date.updated | 2023-01-06T13:52:45Z | |
dc.rights.holder | © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). | es_ES |
dc.relation.publisherversion | https://www.mdpi.com/2227-7390/11/1/132 | es_ES |
dc.identifier.doi | 10.3390/math11010132 | |
dc.departamentoes | Ingeniería de sistemas y automática | |
dc.departamentoes | Ingeniería Energética | |
dc.departamentoeu | Sistemen ingeniaritza eta automatika | |
dc.departamentoeu | Energia Ingenieritza |