dc.contributor.author: Álvarez de Toledo, Santiago
dc.contributor.author: Anguera, Aurea
dc.contributor.author: Barreiro, José M.
dc.contributor.author: Lara Torralbo, Juan Alfonso
dc.contributor.author: Lizcano, David
dc.date.accessioned: 2018-04-20T12:16:52Z
dc.date.available: 2018-04-20T12:16:52Z
dc.date.issued: 2017
dc.identifier.issn: 1424-8220
dc.identifier.uri: http://hdl.handle.net/20.500.12226/22
dc.description.abstract: Over the last few decades, a number of reinforcement learning techniques have emerged, and different reinforcement learning-based applications have proliferated. However, such techniques tend to specialize in a particular field, which is an obstacle to their generalization and extrapolation to other areas. Moreover, neither the reward-punishment (r-p) learning process nor the convergence of results is fast or efficient enough. To address these obstacles, this research proposes a general reinforcement learning model. This model is independent of input and output types and based on general bioinspired principles that help to speed up the learning process. The model is composed of a perception module based on sensors whose specific perceptions are mapped as perception patterns. In this manner, similar perceptions (even if perceived at different positions in the environment) are accounted for by the same perception pattern. Additionally, the model includes a procedure that statistically associates perception-action pattern pairs depending on the positive or negative results produced by executing the respective action in response to a particular perception during the learning process. To do this, the model is fitted with a mechanism that reacts positively or negatively to particular sensory stimuli in order to rate results. The model is supplemented by an action module that can be configured depending on the maneuverability of each specific agent. The model has been applied in the air navigation domain, a field with strong safety restrictions, which led us to implement a simulated system equipped with the proposed model. Accordingly, the perception sensors were based on Automatic Dependent Surveillance-Broadcast (ADS-B) technology, which is described in this paper. The results were quite satisfactory: the model outperformed traditional methods from the literature in both learning reliability and efficiency.
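The learning procedure outlined in the abstract — bucketing similar sensor perceptions into shared perception patterns, then statistically associating perception-action pairs with reward or punishment — can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the class, the quantization-based bucketing, and all parameter names are assumptions introduced here.

```python
import random
from collections import defaultdict

class PerceptionActionLearner:
    """Illustrative sketch of reward-punishment (r-p) association:
    (perception pattern, action) pairs accumulate outcome statistics."""

    def __init__(self, actions, bucket_size=10.0):
        self.actions = actions
        self.bucket_size = bucket_size
        # (pattern, action) -> [positive outcomes, total trials]
        self.stats = defaultdict(lambda: [0, 0])

    def pattern(self, sensor_readings):
        # Quantize raw readings so similar perceptions (even at different
        # positions in the environment) map to the same pattern.
        return tuple(round(r / self.bucket_size) for r in sensor_readings)

    def choose_action(self, sensor_readings, explore=0.1):
        p = self.pattern(sensor_readings)
        if random.random() < explore:
            return random.choice(self.actions)

        # Pick the action with the best empirical success rate for this
        # pattern; untried actions get an optimistic prior of 1.0.
        def value(action):
            positive, total = self.stats[(p, action)]
            return positive / total if total else 1.0

        return max(self.actions, key=value)

    def learn(self, sensor_readings, action, positive):
        # Statistically associate the perception-action pair with the
        # rated outcome (positive reinforcement or punishment).
        entry = self.stats[(self.pattern(sensor_readings), action)]
        if positive:
            entry[0] += 1
        entry[1] += 1
```

In this sketch the bucket size controls how aggressively distinct perceptions are merged into one pattern; the paper's model instead derives patterns from its bioinspired perception module, and in the reported system the sensor inputs come from ADS-B broadcasts.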
dc.language.iso: en
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.title: A Reinforcement Learning Model Equipped with Sensors for Generating Perception Patterns: Implementation of a Simulated Air Navigation System Using ADS-B (Automatic Dependent Surveillance-Broadcast) Technology
dc.type: article
dc.description.course: 2017-18
dc.identifier.doi: 10.3390/s17010188
dc.issue.number: 1
dc.journal.title: Sensors
dc.page.initial: 188
dc.publisher.faculty: Escuela de Ciencias Técnicas e Ingeniería
dc.rights.accessRights: openAccess
dc.subject.keyword: Machine learning
dc.subject.keyword: Reinforcement learning
dc.subject.keyword: ADS-B
dc.subject.keyword: Perception-action-value association
dc.subject.keyword: Air navigation
dc.volume.number: 17