Learning policies for abstract state spaces

Author(s): Timmer, S.; Riedmiller, M.
Keywords: Cart-pole system; Learning process; Q-function; State space; Approximation theory; Dynamic programming; Functions; Learning systems
Publication date: 2005
Journal: Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics
Volume: 4
Start page: 3179
End page: 3184
Abstract:
Applying Q-Learning to multidimensional, real-valued state spaces is time-consuming in most cases. In this article, we examine the assumption that a coarse partition of the state space suffices for learning good or even optimal policies. We present an algorithm that constructs proper policies for abstract state spaces via an incremental procedure, without approximating a Q-function. By combining an approach similar to dynamic programming with policy search, we speed up the learning process. To provide empirical evidence, we use a cart-pole system; experiments were conducted both in a simulated environment and on a real plant. © 2005 IEEE.
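To make the notion of a coarse state-space partition concrete, the following is a minimal illustrative sketch of how a continuous cart-pole state can be mapped to a discrete abstract state by binning each dimension. The bin edges and dimension names are hypothetical; this is not the authors' incremental policy-construction algorithm, only the state-abstraction idea it builds on.

```python
# Illustrative sketch (not the paper's algorithm): map a continuous
# 4-dimensional cart-pole state to a coarse abstract state by binning
# each dimension independently. Bin edges below are hypothetical.

from bisect import bisect_right

# Hypothetical bin edges per state dimension:
# cart position, cart velocity, pole angle, pole angular velocity.
BIN_EDGES = [
    [-1.0, 0.0, 1.0],          # cart position (m)
    [-0.5, 0.5],               # cart velocity (m/s)
    [-0.2, -0.05, 0.05, 0.2],  # pole angle (rad)
    [-1.0, 1.0],               # pole angular velocity (rad/s)
]

def abstract_state(state):
    """Return a tuple of bin indices, one per state dimension."""
    return tuple(bisect_right(edges, x) for edges, x in zip(BIN_EDGES, state))
```

A policy over the abstract space is then simply a finite table mapping each tuple of bin indices to an action, which is what makes searching directly in policy space feasible for a coarse partition.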
Description:
Conference of IEEE Systems, Man and Cybernetics Society, Proceedings - 2005 International Conference on Systems, Man and Cybernetics; Conference Date: 10 October 2005 through 12 October 2005; Conference Code: 66062
ISSN: 1062-922X
External URL: https://www.scopus.com/inward/record.uri?eid=2-s2.0-27944452968&partnerID=40&md5=e8dad867fe1a18ce4c9bfef988c42210
