STANN – Synthesis Templates for Artificial Neural Network Inference and Training

DC Element | Value | Language
dc.contributor.author: Rothmann, Marc
dc.contributor.author: Porrmann, Mario
dc.contributor.editor: Rojas, I.
dc.contributor.editor: Joya, G.
dc.contributor.editor: Catala, A.
dc.date.accessioned: 2024-01-04T10:29:02Z
dc.date.available: 2024-01-04T10:29:02Z
dc.date.issued: 2023
dc.identifier.isbn: 9783031430848
dc.identifier.issn: 0302-9743
dc.identifier.uri: http://osnascholar.ub.uni-osnabrueck.de/handle/unios/72959
dc.description: Cited by: 0; Conference name: 17th International Work-Conference on Artificial Neural Networks, IWANN 2023; Conference date: 19 June 2023 through 21 June 2023; Conference code: 302169
dc.description.abstract: While Deep Learning accelerators have been a research area of high interest, the focus has usually been on monolithic accelerators for the inference of large CNNs. Only recently have accelerators for neural network training started to gain more attention. STANN is a template library that enables quick and efficient FPGA-based implementations of neural networks via high-level synthesis. It supports both inference and training to be applicable to domains such as deep reinforcement learning. Its templates are highly configurable and can be composed in different ways to create different hardware architectures. The evaluation compares different accelerator architectures implemented with STANN to showcase STANN's flexibility. A Xilinx Alveo U50 and a Xilinx Versal ACAP development board are used as the hardware platforms for the evaluation. The results show that the new Versal architecture is very promising for neural network training due to its improved support for floating-point calculations. © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
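The abstract describes STANN as a library of highly configurable synthesis templates that can be composed into different hardware architectures. As a rough illustration of that general idea — compile-time-sized layer templates composed into a fixed network architecture, as is typical in HLS C++ — the following minimal sketch may help; it does not reproduce STANN's actual API, and all names (`Dense`, `Mlp`) and design choices here are hypothetical.

```cpp
#include <array>
#include <cstddef>

// Hypothetical sketch only — not STANN's real interface. A dense layer whose
// dimensions are template parameters, so loop bounds are fixed at synthesis
// time (where an HLS tool could unroll or pipeline the loops).
template <std::size_t IN, std::size_t OUT>
struct Dense {
    std::array<std::array<float, IN>, OUT> w{};  // weights
    std::array<float, OUT> b{};                  // biases

    std::array<float, OUT> forward(const std::array<float, IN>& x) const {
        std::array<float, OUT> y{};
        for (std::size_t o = 0; o < OUT; ++o) {
            float acc = b[o];
            for (std::size_t i = 0; i < IN; ++i)
                acc += w[o][i] * x[i];
            y[o] = acc > 0.0f ? acc : 0.0f;      // ReLU activation
        }
        return y;
    }
};

// Composing layer templates yields a complete architecture whose shape is
// resolved entirely at compile time.
template <std::size_t IN, std::size_t HID, std::size_t OUT>
struct Mlp {
    Dense<IN, HID> l1;
    Dense<HID, OUT> l2;

    std::array<float, OUT> forward(const std::array<float, IN>& x) const {
        return l2.forward(l1.forward(x));
    }
};
```

Changing the template arguments (layer widths, or swapping in a different layer template) produces a different hardware architecture from the same source, which is the flexibility the abstract attributes to STANN's composable templates.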
dc.language.iso: en
dc.publisher: Springer Science and Business Media Deutschland GmbH
dc.relation.ispartof: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
dc.subject: Deep Learning
dc.subject: Digital arithmetic
dc.subject: Field programmable gate arrays (FPGA)
dc.subject: FPGA
dc.subject: FPGA-based implementation
dc.subject: Hardware Accelerators
dc.subject: High level synthesis
dc.subject: Monolithics
dc.subject: Network architecture
dc.subject: Network inference
dc.subject: Network training
dc.subject: Neural networks
dc.subject: Neural networks trainings
dc.subject: Neural-networks
dc.subject: Reinforcement learning
dc.subject: Research areas
dc.subject: Template libraries
dc.title: STANN – Synthesis Templates for Artificial Neural Network Inference and Training
dc.type: conference paper
dc.identifier.doi: 10.1007/978-3-031-43085-5_31
dc.identifier.scopus: 2-s2.0-85174491693
dc.identifier.url: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85174491693&doi=10.1007%2f978-3-031-43085-5_31&partnerID=40&md5=742208bdf3c1dd58b6beb12f1af00269
dc.description.volume: 14134 LNCS
dc.description.startpage: 394 – 405
dcterms.isPartOf.abbreviation: Lect. Notes Comput. Sci.
local.import.remains: affiliations: Osnabrück University, Osnabrück, Germany
local.import.remains: correspondence_address: M. Rothmann; Osnabrück University, Osnabrück, Germany; email: mrothmann@uni-osnabrueck.de
local.import.remains: publication_stage: Final
crisitem.author.dept: FB 06 - Mathematik/Informatik
crisitem.author.deptid: fb06
crisitem.author.orcid: 0000-0003-1005-5753
crisitem.author.parentorg: Universität Osnabrück
crisitem.author.netid: PoMa309