IIT-KGP Visual Saliency Data
This page contains a link to the groundtruth data prepared from the Visual Saliency Experiment conducted from January to March 2009. These data have been used as a benchmark for comparing different algorithms that determine salient points in an image. Following the links from this page, one may obtain the set of original images and their corresponding initial and final groundtruths (as bilevel images) in three separate zipped files. Correspondences are established by tagging the same number in their file names.
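Because correspondence is encoded only through the shared numeric tag, images and groundtruths can be paired programmatically. A minimal sketch, assuming hypothetical file names (the actual naming scheme in the archives may differ):

```python
import re

def index_by_tag(filenames):
    """Map the numeric tag embedded in each file name to the name itself."""
    tagged = {}
    for name in filenames:
        match = re.search(r"(\d+)", name)
        if match:
            tagged[int(match.group(1))] = name
    return tagged

# Hypothetical file names for illustration only.
images = ["img_001.jpg", "img_002.jpg"]
initial_gt = ["initial_gt_001.png", "initial_gt_002.png"]

gt_index = index_by_tag(initial_gt)
pairs = {tag: (name, gt_index.get(tag))
         for tag, name in index_by_tag(images).items()}
# pairs[1] == ("img_001.jpg", "initial_gt_001.png")
```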

Salient locations were recorded for 100 images with the help of 62 volunteers; each volunteer was shown 24 images.

Images for our experiment were chosen from several larger collections: the iLab image database1,2, UCID3, the Zurich natural image database4, and the Internet.

1    L. Itti, C. Koch, and E. Niebur, A model of saliency-based visual attention for rapid scene analysis, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 11, pages 1254-1259, 1998.
2    L. Itti, and C. Koch, Feature combination strategies for saliency-based visual attention systems, Journal of Electronic Imaging, vol. 10, issue 1, pages 161-169, 2001.
3    G. Schaefer, and M. Stich, UCID - an uncompressed colour image database, Proc. of SPIE Storage and Retrieval Methods and Applications for Multimedia, vol. 5307, pages 472-480, 2004.
4    H. P. Frey, P. Konig, and W. Einhauser, The role of first- and second- order stimulus features for human overt attention, Perception and Psychophysics, vol. 69, pages 153-161, 2007.


Set of files for download

Images - Set of input images.
Initial Groundtruth - Circular disk representation of groundtruths. The procedure is presented in 5.
Final Groundtruth - Image segmentation is used to better represent the groundtruth. This representation nearly preserves the shape and size of the underlying salient objects.
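Since the final groundtruths are bilevel images, a saliency algorithm's binarized output can be scored against them pixel-wise. A minimal sketch of such a comparison on toy masks; the actual evaluation protocol used with this benchmark is the one described in 5, and this illustration is not taken from it:

```python
def precision_recall(pred_mask, gt_mask):
    """Score a binarized saliency map against a bilevel groundtruth mask.

    Both masks are flat sequences of 0/1 values of equal length.
    Returns (precision, recall); a generic illustration only.
    """
    tp = sum(p and g for p, g in zip(pred_mask, gt_mask))  # true positives
    pred_pos = sum(pred_mask)  # pixels predicted salient
    gt_pos = sum(gt_mask)      # pixels marked salient in the groundtruth
    precision = tp / pred_pos if pred_pos else 0.0
    recall = tp / gt_pos if gt_pos else 0.0
    return precision, recall

# Toy example: 4 pixels predicted salient, 3 of them correct.
pred = [1, 1, 1, 1, 0, 0]
gt   = [1, 1, 1, 0, 1, 0]
p, r = precision_recall(pred, gt)  # p == 0.75, r == 0.75
```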



5    R. Pal, J. Mukherjee, and P. Mitra, An Approach for Preparing Groundtruth Data and Evaluating Visual Saliency Models, Proc. of International Conference on Pattern Recognition and Machine Intelligence, LNCS 5909, pages 279-284, December 2009.