Learning a visual attention model for adaptive fast-forward in video surveillance

Author(s): Höferlin, B.
Pflüger, H.
Höferlin, M.
Heidemann, G.
Weiskopf, D.
Keywords: Adaptive fast-forward; Fast forward; Field of view; Person detector; Rectangle features; Relevance feedback; Semantic model; Semantics; Security systems; Top-down; Video surveillance; Visual attention; Visual attention model; Pattern recognition
Publication date: 2012
Journal: ICPRAM 2012 - Proceedings of the 1st International Conference on Pattern Recognition Applications and Methods
Volume: 2
Start page: 25
End page: 32
Abstract:
The focus of visual attention is guided by salient signals in the peripheral field of view (bottom-up) as well as by the relevance feedback of a semantic model (top-down). As a result, humans are able to evaluate new situations very fast, with only a few fixations. In this paper, we present a learned model for the fast prediction of visual attention in video. We consider bottom-up and memory-less top-down mechanisms of visual attention guidance, and apply the model to video playback-speed adaptation. The presented visual attention model is based on rectangle features that are fast to compute and capable of describing the known mechanisms of bottom-up processing, such as motion, contrast, color, and symmetry, as well as top-down cues, such as face and person detectors. We show that the visual attention model outperforms other recent methods in adapting video playback speed.
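The rectangle features mentioned in the abstract are, in the Viola-Jones tradition, typically evaluated in constant time per rectangle via an integral image. The record does not include the authors' implementation, so the following NumPy sketch only illustrates that standard technique with a hypothetical two-rectangle contrast feature; function names and the example image are illustrative assumptions.

```python
import numpy as np

def integral_image(img):
    """Cumulative sum over rows and columns, padded with a zero row and
    column so rectangle sums need no boundary checks."""
    ii = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return np.pad(ii, ((1, 0), (1, 0)), mode="constant")

def rect_sum(ii, y, x, h, w):
    """Sum of pixels in the h-by-w rectangle with top-left corner (y, x),
    obtained from four integral-image lookups -- O(1) per rectangle."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

# A two-rectangle contrast feature: difference between the left and
# right halves of a 4x4 patch (hypothetical example data).
img = np.arange(16, dtype=float).reshape(4, 4)
ii = integral_image(img)
feature = rect_sum(ii, 0, 0, 4, 2) - rect_sum(ii, 0, 2, 4, 2)
```

Because the integral image is computed once per frame, any number of rectangle features (contrast, motion differences between frames, detector responses pooled over boxes) can then be evaluated at the same constant cost, which is what makes this representation attractive for real-time attention prediction.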
Description:
1st International Conference on Pattern Recognition Applications and Methods, ICPRAM 2012; Conference date: 6 February 2012 through 8 February 2012; Conference code: 90182
ISBN: 9789898425980
External URL: https://www.scopus.com/inward/record.uri?eid=2-s2.0-84862234015&partnerID=40&md5=3e535c40414572e1ffa989d32fe9c8d6
