Emerging Bayesian priors in a self-organizing recurrent network

Authors: Lazar, A.; Pipa, G.; Triesch, J.
Keywords: Bayesian inference; Inference engines; intrinsic plasticity; Network performance; recurrent networks; Recurrent neural networks; Probability distributions; Spontaneous activity; statistical priors; STDP; Bayesian networks
Publication date: 2011
Journal: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 6792 LNCS
Issue: PART 2
Start page: 127
End page: 134
Abstract: We explore the role of local plasticity rules in learning statistical priors in a self-organizing recurrent neural network (SORN). The network receives input sequences composed of different symbols and learns the structure embedded in these sequences via a simple spike-timing-dependent plasticity rule, while synaptic normalization and intrinsic plasticity maintain a low level of activity. After learning, the network exhibits spontaneous activity that matches the stimulus-evoked activity during training and thus can be interpreted as samples from the network's prior probability distribution over evoked activity states. Further, we show how learning the frequency and spatio-temporal characteristics of the input sequences influences network performance in several classification tasks. These results suggest a novel connection between low level learning mechanisms and high level concepts of statistical inference. © 2011 Springer-Verlag.
Conference: 21st International Conference on Artificial Neural Networks (ICANN 2011); Conference dates: 14 June 2011 through 17 June 2011; Conference code: 85226
ISBN: 9783642217371
ISSN: 0302-9743
DOI: 10.1007/978-3-642-21738-8_17
External URL: https://www.scopus.com/inward/record.uri?eid=2-s2.0-79959353033&doi=10.1007%2f978-3-642-21738-8_17&partnerID=40&md5=2c5f5dc8ce14a335ab754533af9d89dd
