Generalization ability of folding networks

Author(s): Hammer, B.
Keywords: computational learning theory; Computer Science; Computer Science, Artificial Intelligence; Computer Science, Information Systems; Engineering; Engineering, Electrical & Electronic; folding networks; luckiness function; recurrent neural networks; UCED property; VC dimension
Publication date: 2001
Publisher: IEEE COMPUTER SOC
Journal: IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING
Volume: 13
Issue: 2
Start page: 196
End page: 206
Abstract: 
The information-theoretic learnability of folding networks, a very successful approach capable of dealing with tree-structured inputs, is examined. We find bounds on the VC, pseudo-, and fat-shattering dimension of folding networks with various activation functions. As a consequence, valid generalization of folding networks can be guaranteed. However, distribution-independent bounds on the generalization error cannot exist in principle. We propose two approaches which take the specific distribution into account and allow us to derive explicit bounds on the deviation of the empirical error from the real error of a learning algorithm: the first approach requires the probability of large trees to be limited a priori, and the second deals with situations where the maximum input height in a concrete learning example is restricted.
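For readers unfamiliar with the architecture referred to in the abstract, the sketch below illustrates how a folding network typically maps a tree-structured input to a fixed-size output: an encoding network is applied recursively to each node and its children's encodings, and an output network reads the root encoding. The dimensions, activation, weight shapes, and tree representation here are illustrative assumptions, not the concrete construction analyzed in the paper.

```python
import numpy as np

# Illustrative dimensions (assumptions, not taken from the paper):
# node labels live in R^d, subtree encodings in R^m, and every node
# has at most k children (missing children use the empty-tree code y0).
d, m, k = 3, 4, 2
rng = np.random.default_rng(0)

W_enc = rng.standard_normal((m, d + k * m)) * 0.1   # encoding-network weights
b_enc = np.zeros(m)
W_out = rng.standard_normal((1, m)) * 0.1           # output-network weights
y0 = np.zeros(m)                                     # encoding of the empty tree


def sigma(x):
    return np.tanh(x)  # a sigmoidal activation, one of the cases the paper considers


def encode(tree):
    """Recursively fold a tree (label, [children]) into a fixed-size vector."""
    if tree is None:
        return y0
    label, children = tree
    parts = [np.asarray(label, dtype=float)]
    parts += [encode(children[i]) if i < len(children) else y0 for i in range(k)]
    return sigma(W_enc @ np.concatenate(parts) + b_enc)


def folding_net(tree):
    """Map a tree-structured input to a scalar output via the root encoding."""
    return float(W_out @ encode(tree))


# A small binary tree of height 2 with 3-dimensional labels.
t = ([1.0, 0.0, 0.5],
     [([0.2, 0.1, 0.0], []),
      ([0.0, 0.9, 0.3], [])])
print(folding_net(t))
```

The bounds discussed in the abstract concern exactly such recursively applied networks, where the effective "unfolding depth" grows with the height of the input tree; this is why restricting the probability or the maximum height of large trees matters for the generalization guarantees.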
ISSN: 1041-4347
DOI: 10.1109/69.917560
