On the approximation capability of recurrent neural networks

Author(s): Hammer, B.
Keywords: computational capability; Computer Science; Computer Science, Artificial Intelligence; NETS; recurrent neural networks; sigmoidal networks; universal approximation
Publication date: 2000
Publisher: ELSEVIER SCIENCE BV
Journal: NEUROCOMPUTING
Volume: 31
Issue: 1-4
Start page: 107
End page: 123
Abstract:
The capability of recurrent neural networks to approximate functions from lists of real vectors to a real vector space is examined. Any measurable function can be approximated in probability. Additionally, bounds on the resources sufficient for an approximation can be derived in interesting cases. In contrast, there exist computable mappings on symbolic data which cannot be approximated in the maximum norm. For restricted input length, some continuous functions on real-valued sequences require a number of neurons that grows at least linearly in the input length. On unary sequences, any mapping with bounded range can be approximated in the maximum norm. Consequently, standard sigmoidal networks, viewed as a computational model with offline inputs, can compute any mapping. (C) 2000 Elsevier Science B.V. All rights reserved.
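The abstract concerns recurrent networks that map lists (finite sequences) of real vectors to a real vector. A minimal sketch of such a sigmoidal recurrent network is given below; the class name, dimensions, and random weight initialization are illustrative assumptions, not the specific architecture analyzed in the article.

```python
import numpy as np

def sigmoid(x):
    # Standard sigmoidal activation, as in sigmoidal recurrent networks.
    return 1.0 / (1.0 + np.exp(-x))

class SimpleRecurrentNetwork:
    """Illustrative sketch: a sigmoidal recurrent network mapping a list of
    real input vectors to a single real output vector (final hidden state
    is read out linearly). Weights are random; no training is performed."""

    def __init__(self, input_dim, hidden_dim, output_dim, seed=0):
        rng = np.random.default_rng(seed)
        # Hypothetical small random weights; scale 0.5 is an arbitrary choice.
        self.W_in = 0.5 * rng.standard_normal((hidden_dim, input_dim))
        self.W_rec = 0.5 * rng.standard_normal((hidden_dim, hidden_dim))
        self.W_out = 0.5 * rng.standard_normal((output_dim, hidden_dim))
        self.hidden_dim = hidden_dim

    def forward(self, sequence):
        # Process the list of real vectors one element at a time,
        # updating the hidden state recurrently.
        h = np.zeros(self.hidden_dim)
        for x in sequence:
            h = sigmoid(self.W_in @ x + self.W_rec @ h)
        # Linear readout from the final hidden state.
        return self.W_out @ h
```

The number of hidden neurons here corresponds to the "resources" the abstract refers to: the lower-bound result states that for some continuous functions on real-valued sequences, this hidden dimension must grow at least linearly with the input length.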
ISSN: 0925-2312
DOI: 10.1016/S0925-2312(99)00174-5
