Learning sparse and meaningful representations through embodiment

Author(s): Clay, Viviane
Koenig, Peter
Kuehnberger, Kai-Uwe
Pipa, Gordon
Keywords: Computer Science; Computer Science, Artificial Intelligence; Deep learning; Embodied cognition; Embodiment; Neurosciences; Neurosciences & Neurology; PERCEPTION; Reinforcement learning; Representation learning; SEE; Sparse coding
Publication date: 2021
Publisher: PERGAMON-ELSEVIER SCIENCE LTD
Journal: NEURAL NETWORKS
Volume: 134
Start page: 23
End page: 41
Abstract:
How do humans acquire a meaningful understanding of the world with little to no supervision or semantic labels provided by the environment? Here we investigate embodiment with a closed loop between action and perception as one key component in this process. We take a close look at the representations learned by a deep reinforcement learning agent that is trained with high-dimensional visual observations collected in a 3D environment with very sparse rewards. We show that this agent learns stable representations of meaningful concepts such as doors without receiving any semantic labels. Our results show that the agent learns to represent the action-relevant information, extracted from a simulated camera stream, in a wide variety of sparse activation patterns. The quality of the representations learned shows the strength of embodied learning and its advantages over fully supervised approaches. © 2020 The Authors. Published by Elsevier Ltd.
ISSN: 0893-6080
DOI: 10.1016/j.neunet.2020.11.004
