The goal of research in computer vision is to impart visual intelligence to a machine, i.e. to enable a machine to see, perceive, and respond in a human-like fashion (though with reduced complexity) using multitudinal sensors and actuators. The major challenge in building such machines is making them perceive and learn from the huge amount of visual information received through their sensors. Mimicking human-like visual perception is an area of research that attracts the attention of many researchers. To achieve this complex task of visual perception and learning, visual attention models have been developed. A visual attention model enables a robot to selectively (and autonomously) choose a "behaviourally relevant" segment of visual information for further processing while relatively excluding the rest (Visual Attention for Robotic Cognition: A Survey, March 2011). The aim of this paper is to propose an improved visual attention model with reduced complexity for determining the potential region of interest in a scene.
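To make the idea of selecting a "behaviourally relevant" region concrete, the sketch below computes a simple center-surround intensity saliency map (in the spirit of classical Itti-style attention models, not the specific optimised model proposed in this paper) and returns the most salient window. All function names and the window sizes are illustrative assumptions; it uses only NumPy.

```python
import numpy as np

def box_blur(img, k):
    """Mean filter of odd size k, computed via an integral image.
    Acts as the 'surround' estimate in a center-surround contrast."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    # Integral image with a leading row/column of zeros.
    ii = np.pad(padded, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    h, w = img.shape
    return (ii[k:k + h, k:k + w] - ii[:h, k:k + w]
            - ii[k:k + h, :w] + ii[:h, :w]) / (k * k)

def center_surround_saliency(image, surround=9):
    """Saliency as |pixel - local mean|, normalised to [0, 1].
    A toy intensity channel, not the paper's full model."""
    image = image.astype(float)
    sal = np.abs(image - box_blur(image, surround))
    return sal / (sal.max() + 1e-9)

def most_salient_region(image, win=5, surround=9):
    """Return the (row, col) center of the win x win window with the
    highest mean saliency -- the selected 'region of interest'."""
    scores = box_blur(center_surround_saliency(image, surround), win)
    return np.unravel_index(np.argmax(scores), scores.shape)
```

For example, in a dark frame containing one bright patch, `most_salient_region` points at the patch, illustrating how attention restricts further processing to a small part of the visual field.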
J. Amudha, R. Kiran Chadalawada, V. Subashini, and B. Kumar B., "Optimised Computational Visual Attention Model for Robotic Cognition", in Intelligent Informatics: Proceedings of the International Symposium on Intelligent Informatics (ISI'12), held August 4-5, 2012, Chennai, India, vol. 182, pp. 249-260.