A Novel Artificial Hydrocarbon Networks Based Value Function Approximation in Hierarchical Reinforcement Learning
Journal
Advances in Soft Computing
Lecture Notes in Computer Science
ISSN
0302-9743
1611-3349
Date Issued
2017
Author(s)
Type
Resource Types::text::book::book part
Abstract
Reinforcement learning aims to solve the problem of learning optimal or near-optimal decision-making policies for a given domain problem. However, it is known that increasing the dimensionality of the input space (i.e. the environment) increases the complexity of the learning algorithm, leading to the curse of dimensionality. Value function approximation and hierarchical reinforcement learning are two approaches that have been proposed to mitigate this problem. In that sense, this paper proposes a new value function approximation using artificial hydrocarbon networks, a supervised learning method inspired by chemical carbon networks, with regularization at each subtask in a hierarchical reinforcement learning framework. Comparative results against a greedy sparse value function approximation over the MAXQ hierarchical method were computed, showing that artificial hydrocarbon networks improve the accuracy and efficiency of the value function approximation. © Springer International Publishing AG 2017.
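As a rough illustration of the idea described in the abstract (not the authors' implementation), the sketch below fits a separate regularized supervised regressor to the value of each subtask in a MAXQ-style hierarchy. A closed-form ridge regressor stands in for the artificial hydrocarbon network, and all names (RidgeValueApproximator, fit_hierarchical_value_functions, the "navigate"/"pickup" subtasks) are hypothetical.

```python
import numpy as np


class RidgeValueApproximator:
    """Regularized linear value approximator for a single subtask.

    Stands in for the artificial hydrocarbon network of the paper: any
    supervised regressor with regularization plays the same role here.
    """

    def __init__(self, n_features, l2=1.0):
        self.l2 = l2                   # regularization strength
        self.w = np.zeros(n_features)  # learned weights

    def fit(self, X, y):
        # Closed-form ridge regression: w = (X^T X + l2 * I)^(-1) X^T y
        A = X.T @ X + self.l2 * np.eye(X.shape[1])
        self.w = np.linalg.solve(A, X.T @ y)

    def predict(self, X):
        return X @ self.w


def fit_hierarchical_value_functions(subtask_data, n_features, l2=1.0):
    """Fit one value approximator per subtask of a MAXQ-style hierarchy.

    subtask_data maps a subtask name to (X, y), where each row of X is a
    state-feature vector and y holds sampled value targets for that subtask
    (e.g. Monte Carlo returns or projected completion values).
    """
    approximators = {}
    for subtask, (X, y) in subtask_data.items():
        model = RidgeValueApproximator(n_features, l2=l2)
        model.fit(X, y)
        approximators[subtask] = model
    return approximators


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_features = 4

    # Toy value targets for two hypothetical Taxi-like subtasks.
    subtask_data = {}
    for name in ("navigate", "pickup"):
        X = rng.normal(size=(200, n_features))
        true_w = rng.normal(size=n_features)
        y = X @ true_w + 0.1 * rng.normal(size=200)
        subtask_data[name] = (X, y)

    models = fit_hierarchical_value_functions(subtask_data, n_features, l2=0.5)
    state = rng.normal(size=(1, n_features))
    for name, model in models.items():
        print(name, float(model.predict(state)[0]))
```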