Learning Models of Relational MDPs Using Graph Kernels

Author(s): Halbritter, F.; Geibel, P.
Keywords: Graph theory; Mathematical models; Support vector machines; Graph kernels; Indirect reinforcement learning; Relational reinforcement learning; Reward function; Learning algorithms
Publication date: 2007
Publisher: Springer Verlag
Journal: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 4827 LNAI
Start page: 409
End page: 419
Abstract: 
Relational reinforcement learning is the application of reinforcement learning to structured state descriptions. Model-based methods learn a policy from a known model that comprises a description of the actions and their effects as well as the reward function. If the model is initially unknown, one can learn the model first and then apply the model-based method (indirect reinforcement learning). In this paper, we propose a method for model learning based on a combination of several SVMs using graph kernels. Nondeterministic processes can be handled by combining the kernel approach with a clustering technique. We demonstrate the validity of the approach through a range of experiments on various Blocksworld scenarios. © Springer-Verlag Berlin Heidelberg 2007.
Description: 
6th Mexican International Conference on Artificial Intelligence, MICAI 2007; Conference date: 4 November 2007 through 10 November 2007; Conference code: 71204
ISBN: 9783540766308
ISSN: 03029743
DOI: 10.1007/978-3-540-76631-5_39
External URL: https://www.scopus.com/inward/record.uri?eid=2-s2.0-38149024775&doi=10.1007%2f978-3-540-76631-5_39&partnerID=40&md5=f946c93e781e39de2c2ecf243fbdf36b