On approximate learning by multi-layered feedforward circuits

Author(s): DasGupta, B
Hammer, B
Editor(s): Arimura, H
Jain, S
Sharma, A
Keywords: COMPLEXITY; Computer Science; Computer Science, Artificial Intelligence; Computer Science, Theory & Methods; HARDNESS; NETS; NEURAL NETWORKS
Publication date: 2000
Publisher: SPRINGER-VERLAG BERLIN
Journal: ALGORITHMIC LEARNING THEORY, PROCEEDINGS
LECTURE NOTES IN ARTIFICIAL INTELLIGENCE
Volume: 1968
Start page: 264
End page: 278
Abstract:
We consider the problem of efficient approximate learning by multi-layered feedforward circuits subject to two objective functions. First, we consider the objective of maximizing the ratio of correctly classified points to the training set size (e.g., see [3, 5]). We show that for single hidden layer threshold circuits with n hidden nodes and varying input dimension, approximating this ratio within a relative error c/n^3, for some positive constant c, is NP-hard even if the number of examples is limited with respect to n. For architectures with two hidden nodes (e.g., as in [6]), approximating the objective within some fixed factor is NP-hard even if any sigmoid-like activation function in the hidden layer and epsilon-separation of the output [19] is considered, or if the semilinear activation function is substituted for the threshold function. Next, we consider the objective of minimizing the failure ratio [2]. We show that it is NP-hard to approximate the failure ratio within every constant larger than 1 for a multi-layered threshold circuit, provided the input biases are zero. Furthermore, even weak approximation of this objective is almost NP-hard.
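The two objectives in the abstract can be made concrete with a small sketch. The following Python snippet is an illustration, not code from the paper: the architecture, weights, and toy data are assumptions, and failure_ratio reflects a rough reading of [2] as the hypothesis's misclassifications relative to the minimum achievable number. It evaluates a single-hidden-layer threshold circuit, here with two hidden nodes as in the architecture of [6]:

```python
import numpy as np

def threshold_circuit(X, W, b, v, c):
    """Single hidden layer of threshold (Heaviside) units, threshold output."""
    hidden = (X @ W.T + b >= 0).astype(int)   # n hidden threshold nodes
    return (hidden @ v + c >= 0).astype(int)  # threshold output node

def success_ratio(y_pred, y_true):
    """Objective 1: correctly classified points / training set size."""
    return np.mean(y_pred == y_true)

def failure_ratio(y_pred, y_true, opt_errors):
    """Objective 2 (rough reading of [2]): misclassifications of the
    hypothesis relative to the minimum achievable number opt_errors.
    Note that opt_errors itself is generally intractable to compute,
    which is the point of the hardness results."""
    errors = np.sum(y_pred != y_true)
    return errors / opt_errors if opt_errors > 0 else float('inf')

# Toy example (assumed data): 2 hidden nodes, input dimension 2,
# zero input biases, linearly separable labels.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 2))
y = (X[:, 0] + X[:, 1] >= 0).astype(int)
W = np.array([[1.0, 1.0], [-1.0, -1.0]])  # weights of the 2 hidden nodes
b = np.zeros(2)                           # zero input biases
v = np.array([1.0, -1.0])                 # output weights
c = -0.5                                  # output bias
y_pred = threshold_circuit(X, W, b, v, c)
print("success ratio:", success_ratio(y_pred, y))
```

The hardness results concern finding circuit parameters that approximately optimize these quantities over a given training set, not evaluating them for fixed parameters as done above.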
Description:
11th International Conference on Algorithmic Learning Theory (ALT 2000), SYDNEY, AUSTRALIA, DEC 11-13, 2000
ISBN: 978-3-540-41237-3
ISSN: 0302-9743
