On approximate learning by multi-layered feedforward circuits

DC Field: Value
dc.contributor.author: DasGupta, B
dc.contributor.author: Hammer, B
dc.contributor.editor: Arimura, H
dc.contributor.editor: Jain, S
dc.contributor.editor: Sharma, A
dc.date.accessioned: 2021-12-23T16:13:56Z
dc.date.available: 2021-12-23T16:13:56Z
dc.date.issued: 2000
dc.identifier.isbn: 9783540412373
dc.identifier.issn: 03029743
dc.identifier.uri: https://osnascholar.ub.uni-osnabrueck.de/handle/unios/10822
dc.description: 11th International Conference on Algorithmic Learning Theory (ALT 2000), SYDNEY, AUSTRALIA, DEC 11-13, 2000
dc.description.abstract: We consider the problem of efficient approximate learning by multi-layered feedforward circuits subject to two objective functions. First, we consider the objective of maximizing the ratio of correctly classified points to the training set size (e.g., see [3, 5]). We show that for single-hidden-layer threshold circuits with n hidden nodes and varying input dimension, approximating this ratio within a relative error c/n^3, for some positive constant c, is NP-hard even if the number of examples is limited with respect to n. For architectures with two hidden nodes (e.g., as in [6]), approximating the objective within some fixed factor is NP-hard even if any sigmoid-like activation function in the hidden layer and epsilon-separation of the output [19] are considered, or if the semilinear activation function replaces the threshold function. Next, we consider the objective of minimizing the failure ratio [2]. We show that it is NP-hard to approximate the failure ratio within every constant larger than 1 for a multi-layered threshold circuit, provided the input biases are zero. Furthermore, even weak approximation of this objective is almost NP-hard.
dc.language.iso: en
dc.publisher: SPRINGER-VERLAG BERLIN
dc.relation.ispartof: ALGORITHMIC LEARNING THEORY, PROCEEDINGS
dc.relation.ispartof: LECTURE NOTES IN ARTIFICIAL INTELLIGENCE
dc.subject: COMPLEXITY
dc.subject: Computer Science
dc.subject: Computer Science, Artificial Intelligence
dc.subject: Computer Science, Theory & Methods
dc.subject: HARDNESS
dc.subject: NETS
dc.subject: NEURAL NETWORKS
dc.title: On approximate learning by multi-layered feedforward circuits
dc.type: conference paper
dc.identifier.isi: ISI:000175008600020
dc.description.volume: 1968
dc.description.startpage: 264
dc.description.endpage: 278
dc.contributor.orcid: 0000-0002-0935-5591
dc.contributor.researcherid: E-8624-2010
dc.publisher.place: HEIDELBERGER PLATZ 3, D-14197 BERLIN, GERMANY
dcterms.oaStatus: Green Published