Multi-polarimetric textural distinctiveness for outdoor robotic saliency detection

Title: Multi-polarimetric textural distinctiveness for outdoor robotic saliency detection
Publication Type: Conference Paper
Year of Publication: 2015
Authors: Haider, S., C. Scharfenberger, F. Kazemzadeh, A. Wong, and D. A. Clausi
Conference Name: SPIE Electronic Imaging: Intelligent Robots and Computer Vision XXXII: Algorithms and Techniques
Conference Location: San Francisco, California, United States
Abstract

Saliency detection is utilized in applications where distinguishing unique items in a scene is important. One such application is mobile robotics, where vision-based robots navigating outdoors use saliency approaches to identify a set of candidate objects to detect and recognize. The state of the art in saliency detection for mobile robotics often relies on visible light imaging with conventional camera setups, aiming to distinguish an object from its surroundings based on factors such as feature compactness, heterogeneity, and/or homogeneity. These methods are limited to what can be captured with conventional camera setups, which can be hampered by image saturation on sunny days as well as detector insensitivity to slight differences in colour. To address some of these issues, neutral density filters have been placed on cameras for mobile robotics to remove bright specular highlights, but they require longer exposure times and do not increase the sensitivity to slight colour differences.
To remedy these issues for mobile robotics, one is motivated to incorporate different optical modes that capture additional useful information about the scene. In this work, we propose a novel saliency detection method that incorporates an additional optical mode that remains under-explored in the literature for saliency detection: visible light multi-polarimetric imaging. The incorporation of multi-polarimetric imaging for saliency detection is motivated by the optical property of materials known as Fresnel reflection. By observing how the scene's reflected intensity is split between multiple polarization states, we can infer the distribution of the refractive index and rely upon it when determining object saliency.
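To make the polarization cue concrete, the sketch below is not taken from the paper; it assumes intensity images captured through a linear polarizer at 0°, 45°, 90°, and 135°, and shows one standard way the split of reflected intensity between polarization states can be summarized via the Stokes parameters and the degree of linear polarization.

import numpy as np

def degree_of_linear_polarization(i0, i45, i90, i135, eps=1e-8):
    """Per-pixel DoLP and angle of polarization from four polarizer-angle images (H x W arrays).

    Illustrative only: the paper does not specify its polarimetric quantities;
    this is the standard linear-Stokes formulation.
    """
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal vs. vertical preference
    s2 = i45 - i135                      # diagonal preference
    dolp = np.sqrt(s1**2 + s2**2) / (s0 + eps)   # degree of linear polarization
    aop = 0.5 * np.arctan2(s2, s1)               # angle of polarization (radians)
    return dolp, aop

Regions whose DoLP differs from their surroundings, even when colour intensities match, are precisely the kind of material-driven contrast the proposed approach aims to exploit.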
In the proposed multi-polarimetric saliency detection approach, a visible light image of the scene is captured at multiple polarization states. Rotational-invariant multi-polarimetric textural representations are extracted from the captured imaging data, and a high-dimensional sparse texture model is learned from these representations. The multi-polarimetric texture distinctiveness of the scene is then characterized using a fully-connected graphical model built on the sparse texture model, which is used together with general visual attentive constraints to determine the saliency at each pixel of the scene.
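As an illustration of the pipeline described above, the following sketch stands in for the actual method: the abstract does not specify the texture features, the sparse texture model learning, or the graphical-model formulation, so k-means texture atoms and an exponential pairwise dissimilarity are used here as hypothetical placeholders for the learned sparse texture model and the fully-connected graph.

import numpy as np
from sklearn.cluster import KMeans

def textural_distinctiveness_saliency(features, n_atoms=20, sigma=0.5):
    """features: (H, W, D) array of per-pixel multi-polarimetric texture descriptors."""
    h, w, d = features.shape
    x = features.reshape(-1, d)

    # Learn a small set of representative texture atoms (a simple stand-in
    # for the high-dimensional sparse texture model in the abstract).
    km = KMeans(n_clusters=n_atoms, n_init=10, random_state=0).fit(x)
    atoms, labels = km.cluster_centers_, km.labels_

    # Fully-connected graph over texture atoms: pairwise distinctiveness
    # grows with the descriptor distance between atoms.
    dist = np.linalg.norm(atoms[:, None, :] - atoms[None, :, :], axis=-1)
    distinctiveness = 1.0 - np.exp(-dist / sigma)

    # Weight each atom's distinctiveness against atom j by how often j occurs,
    # then map the per-atom score back to pixels to form a saliency map.
    occurrence = np.bincount(labels, minlength=n_atoms) / labels.size
    atom_saliency = (distinctiveness * occurrence[None, :]).sum(axis=1)
    saliency = atom_saliency[labels].reshape(h, w)
    return (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-8)

In practice the per-pixel descriptors would combine the visible light channels with the polarimetric quantities (e.g., DoLP) extracted from the multi-polarimetric data, so that material differences contribute to the distinctiveness even when colour contrast is low.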
To evaluate the efficacy of the proposed multi-polarimetric texture distinctiveness approach for mobile robotics saliency detection, images were captured of stationary objects with colour intensities similar to their surroundings under strong natural ambient light, a scenario considered difficult for existing saliency detection approaches. On these captured images, the proposed approach was compared to existing state-of-the-art saliency detection approaches. It was observed that the existing saliency detection algorithms struggled to determine the saliency of the objects due to the colour intensity similarities between the objects and their surroundings. The proposed multi-polarimetric texture distinctiveness approach, by utilizing polarimetric information in its saliency detection framework, was able to produce noticeably improved saliency maps. As such, the proposed approach shows considerable promise for significantly improving the detection of salient objects under difficult scenarios often encountered in mobile robotics, and merits further investigation.