A Newly Developed Ground Truth Dataset for Visual Saliency in Videos

dc.contributor.author Muhammad Zeeshan
dc.contributor.author Muhammad Majid
dc.contributor.author Imran Fareed Nizami
dc.contributor.author Syed Muhammad Anwar
dc.contributor.author Ikram Ud Din
dc.contributor.author Muhammad Khurram Khan
dc.date.accessioned 2018-12-05T10:35:28Z
dc.date.available 2018-12-05T10:35:28Z
dc.date.issued 2018
dc.identifier.uri http://hdl.handle.net/123456789/7885
dc.description.abstract Visual saliency models aim to detect important and eye-catching portions of a scene by exploiting characteristics of the human visual system. The effectiveness of visual saliency models is evaluated by comparing saliency maps with a ground truth data set. In recent years, several visual saliency computation algorithms and ground truth data sets have been proposed for images. However, there is a lack of ground truth data sets for videos. A new human-labeled ground truth is prepared for video sequences that are commonly used in video coding. The selected videos are from different genres, including conversational, sports, outdoor, and indoor, having low, medium, and high motion. A saliency mask is obtained for each video from nine different subjects, who are asked to label the salient region in each frame in the form of a rectangular bounding box. A majority voting criterion is used to construct a final ground truth saliency mask for each frame. Sixteen different state-of-the-art visual saliency algorithms are selected for comparison, and their effectiveness is computed quantitatively on the newly developed ground truth. It is evident from the results that multiple kernel learning and spectral residual-based saliency algorithms perform best for different genres and motion-type videos in terms of F-measure and execution time, respectively. en_US
dc.language.iso en en_US
dc.publisher Bahria University Islamabad Campus en_US
dc.relation.ispartofseries 10.1109/ACCESS.2018.2826562
dc.subject Department of Electrical Engineering en_US
dc.subject 10.1109/ACCESS.2018.2826562 en_US
dc.title A Newly Developed Ground Truth Dataset for Visual Saliency in Videos en_US
dc.type Article en_US

