Architectural bias in recurrent neural networks - Fractal analysis

Author(s): Tino, P
Hammer, B
Editor: Dorronsoro, JR
Keywords: Computer Science; Computer Science, Artificial Intelligence
Publication date: 2002
Publisher: SPRINGER-VERLAG BERLIN
Journal: ARTIFICIAL NEURAL NETWORKS - ICANN 2002
(LECTURE NOTES IN COMPUTER SCIENCE)
Volume: 2415
Start page: 1359
End page: 1364
Abstract:
We have recently shown that when initiated with "small" weights, recurrent neural networks (RNNs) with standard sigmoid-type activation functions are inherently biased towards Markov models, i.e. even prior to any training, RNN dynamics can be readily used to extract finite memory machines [6,8]. Following [2], we refer to this phenomenon as the architectural bias of RNNs. In this paper we further extend our work on the architectural bias in RNNs by performing a rigorous fractal analysis of recurrent activation patterns. We obtain both lower and upper bounds on various types of fractal dimensions, such as box-counting and Hausdorff dimensions.
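The box-counting dimension mentioned in the abstract can be estimated numerically: drive an untrained RNN with random input symbols, collect the hidden-state trajectory, and fit the slope of log N(eps) against log(1/eps), where N(eps) counts occupied boxes of side eps. The sketch below is an illustrative reconstruction, not the authors' method; the network size, weight scale, and epsilon grid are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small RNN: 2 tanh hidden units, "small" recurrent weights,
# driven by a random stream over a 2-symbol input alphabet.
n_hidden, n_steps = 2, 20000
W = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # small recurrent weights
V = rng.normal(scale=1.0, size=(n_hidden, 2))         # input weights

h = np.zeros(n_hidden)
states = np.empty((n_steps, n_hidden))
for t in range(n_steps):
    s = rng.integers(2)            # random binary input symbol
    h = np.tanh(W @ h + V @ np.eye(2)[s])
    states[t] = h

def box_count(points, eps):
    """Count distinct boxes of side eps occupied by the point cloud."""
    idx = np.floor(points / eps).astype(np.int64)
    return len({tuple(row) for row in idx})

epsilons = np.array([0.2, 0.1, 0.05, 0.025])
counts = [box_count(states, e) for e in epsilons]

# Slope of log N(eps) vs log(1/eps) estimates the box-counting dimension.
dim = np.polyfit(np.log(1.0 / epsilons), np.log(counts), 1)[0]
print(f"estimated box-counting dimension: {dim:.2f}")
```

Because the weights are small, the recurrent map is contractive and the activation pattern forms a Cantor-like set, so the estimated dimension stays well below the ambient dimension of the hidden-state space.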
Description:
12th International Conference on Artificial Neural Networks (ICANN 2002), MADRID, SPAIN, AUG 28-30, 2002
ISBN: 9783540440741
ISSN: 0302-9743
