Abstracting local transformer attention for enhancing interpretability on time series data

Author(s): Schwenke, L.
Atzmueller, M.
Editor(s): Seidl, T.
Fromm, M.
Obermeier, S.
Keywords: Abstracting; Attention; Data visualization; Deep learning; Interpretability; Learning approach; Performance; Research problems; Sequential data; Time series analysis; Time series data; Transformer
Publication date: 2021
Publisher: CEUR-WS
Journal: CEUR Workshop Proceedings
Volume: 2993
Start page: 205
End page: 218
Abstract: 
Transformers have demonstrated considerable performance on sequential data, and recently also on time series data. However, enhancing their interpretability and explainability remains a major research problem, as with other prominent deep learning approaches. In this paper, we tackle this issue specifically for time series data, building on our previous research on attention abstraction, aggregation and visualization. In particular, we combine two of our initial attention aggregation techniques and evaluate this extended scope in detail together with our previously used local attention abstraction technique, demonstrating its efficacy on one synthetic and three real-world datasets. © 2021 Copyright for this paper by its authors.
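To make the abstract's terminology concrete, below is a minimal, hypothetical sketch of what aggregating transformer attention and then locally abstracting a time series could look like. It is not the authors' implementation: the function names, the mean-over-heads-and-layers aggregation, and the threshold value are all assumptions for illustration only.

```python
# Hypothetical sketch of attention aggregation + local abstraction
# for time series; NOT the method from the paper.
import numpy as np

def aggregate_attention(attn: np.ndarray) -> np.ndarray:
    """Collapse per-layer, per-head attention of shape (L, H, T, T)
    into a single (T, T) map by averaging over layers and heads
    (one plausible aggregation choice, assumed here)."""
    return attn.mean(axis=(0, 1))

def abstract_locally(agg: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Keep only time steps whose total received attention exceeds a
    normalized threshold; the remaining steps are abstracted away."""
    received = agg.sum(axis=0)          # attention received per time step
    received = received / received.max()  # normalize to [0, 1]
    return received >= threshold        # boolean keep-mask over time steps

# Toy usage: 2 layers, 4 heads, a sequence of 8 time steps.
rng = np.random.default_rng(0)
raw = rng.random((2, 4, 8, 8))
raw /= raw.sum(axis=-1, keepdims=True)  # row-normalize, like softmax output
mask = abstract_locally(aggregate_attention(raw))
print("time steps kept after abstraction:", np.flatnonzero(mask))
```

The kept time steps could then feed a simplified visualization of the input series, which is the kind of interpretability aid the abstract describes.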
Description: 
Conference: Learning, Knowledge, Data, Analytics Workshops (LWDA 2021); Conference date: 1–3 September 2021; Conference code: 173242
ISSN: 1613-0073
External URL: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85118864405&partnerID=40&md5=fe0191bb6a00b9dbb952f875884b8ffa
