The world around us contains an abundance of visual information, and it is a formidable task for the brain to process this continuous flow of input bombarding the retina and to extract the small portions that matter for further action. Visual attention provides the brain with a mechanism for focusing computational resources on one object at a time, driven either by low-level image properties (bottom-up attention) or by a specific task (top-down attention). Shifting the focus of attention from location to location enables sequential recognition of the objects found there. What appears to be a straightforward sequence of processes (first focus attention on a location, then process the object information there) is in fact an intricate system of interactions between visual attention and object recognition. How, for instance, is the focus of attention shifted from one object to the next? Can the information computed during attentional processing be reused for the subsequent recognition task, and if so, how? Can existing knowledge about a target object be exploited by the recognition system to bias attention from the top down? This thesis addresses these questions by examining how computational models can be adapted for artificial visual attention systems and how bottom-up and top-down approaches can be studied empirically across various applications. The work builds on the popular model of Koch and Ullman, which is in turn based on Treisman's psychological feature-integration theory. The model combines saliency maps with a winner-take-all selection mechanism; once a region has been selected for processing, it is inhibited so that other regions can compete for the available resources.
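The selection loop described above (winner-take-all over a saliency map, followed by inhibition of the selected region) can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the function name `wta_with_ior`, the toy saliency map, and the square inhibition neighbourhood of radius `inhibit_radius` are all assumptions made for the example.

```python
def wta_with_ior(saliency, n_fixations=3, inhibit_radius=1):
    """Select attended locations from a saliency map via winner-take-all,
    suppressing each winner's neighbourhood (inhibition of return)."""
    # Work on a copy so the caller's saliency map is left untouched.
    sal = [row[:] for row in saliency]
    rows, cols = len(sal), len(sal[0])
    fixations = []
    for _ in range(n_fixations):
        # Winner-take-all: the single most salient location wins.
        r, c = max(((i, j) for i in range(rows) for j in range(cols)),
                   key=lambda rc: sal[rc[0]][rc[1]])
        fixations.append((r, c))
        # Inhibition of return: suppress the winner and its neighbourhood
        # so that other regions can compete on the next iteration.
        for i in range(max(0, r - inhibit_radius),
                       min(rows, r + inhibit_radius + 1)):
            for j in range(max(0, c - inhibit_radius),
                           min(cols, c + inhibit_radius + 1)):
                sal[i][j] = 0.0
    return fixations

# Toy 4x4 saliency map (values are illustrative only).
smap = [
    [0.1, 0.2, 0.1, 0.0],
    [0.2, 0.9, 0.3, 0.1],
    [0.1, 0.3, 0.2, 0.8],
    [0.0, 0.1, 0.7, 0.2],
]
print(wta_with_ior(smap, n_fixations=2))  # → [(1, 1), (2, 3)]
```

Note how inhibition of return changes the outcome: without it, the location (1, 1) would win on every iteration, and attention would never move on to the second-most-salient region.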
J. Amudha, “Performance evaluation of bottom-up and top-down approaches in computational visual attention system”, 2012.