Dynamic Fusion Networks For Predicting Saliency in Videos

Date: 2023
Author: Koçak Özcan, Aysun
Access: Open access

Abstract
Saliency estimation methods aim to model the human visual attention mechanisms that help process the most relevant regions in a scene. In other words, the goal of saliency estimation is to develop a computational model that detects attention-grabbing regions in a scene. The literature groups these methods into two branches according to the characteristics of the scene: static and dynamic (video). Compared to static saliency, dynamic saliency remains relatively unexplored. Predicting saliency in videos is a challenging problem because the interactions between spatial and temporal information are hard to model, especially given the ever-changing, dynamic nature of videos. In recent years, researchers have proposed large-scale datasets and deep learning models as a way to understand what is important for video saliency. These approaches, however, learn to combine spatial and temporal features in a static manner and do not adapt much to changes in the video content.
In this thesis, we introduce the Gated Fusion Network for dynamic saliency (GFSalNet), the first deep saliency model capable of making predictions in a dynamic way via a gated fusion mechanism, and we investigate the effectiveness of this adaptive combination strategy using the proposed model. Our model also exploits spatial and channel-wise attention within a multi-scale architecture, which further allows for highly accurate predictions. We evaluate the proposed approach on a number of datasets, and our experimental analysis demonstrates that it outperforms or is highly competitive with the state of the art. Importantly, we show that it has good generalization ability and, moreover, exploits temporal information more effectively via its adaptive fusion scheme. Detailed analyses of the effects of the temporal and spatial components of dynamic scenes and of the adaptive fusion strategy are presented via qualitative and quantitative results.
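To make the idea of adaptive (gated) fusion concrete, the sketch below shows a minimal PyTorch module that blends spatial (appearance) and temporal (motion) feature maps with a learned, per-location gate. This is an illustrative sketch only, not the thesis implementation: the module name GatedFusion, the channel count, and the single-convolution gate are assumptions chosen for brevity; GFSalNet additionally uses multi-scale features and spatial and channel-wise attention, which are omitted here.

```python
# Illustrative sketch (not the thesis implementation): a gated fusion block that
# adaptively weights spatial and temporal saliency features per location.
import torch
import torch.nn as nn


class GatedFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # The gate is predicted from the concatenated spatial and temporal features,
        # so the mixing weights depend on the current video content.
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, spatial_feat: torch.Tensor, temporal_feat: torch.Tensor) -> torch.Tensor:
        # g in [0, 1] decides, per channel and spatial position,
        # how much each stream contributes to the fused representation.
        g = self.gate(torch.cat([spatial_feat, temporal_feat], dim=1))
        return g * spatial_feat + (1.0 - g) * temporal_feat


# Example usage: fuse 64-channel feature maps from the two streams for one frame.
fusion = GatedFusion(channels=64)
spatial = torch.randn(1, 64, 56, 56)   # appearance features
temporal = torch.randn(1, 64, 56, 56)  # motion features
fused = fusion(spatial, temporal)      # shape: (1, 64, 56, 56)
```

The key contrast with a static fusion scheme is that the mixing weights g are recomputed for every frame and every location, so the model can lean on motion features when the scene changes rapidly and on appearance features when it is mostly still.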