Affiliation:
1. University of Idaho, Moscow, ID, USA
Abstract
Guided visual search is a common theme in human factors (HF) applications. In this project we use large-scale databases of wildlife camera-trap imagery as a testbed for optimizing target highlighting. MegaDetector, a generic animal-detection deep learning model, provides bounding boxes for potential targets within an image. In some cases, human observers are needed to confirm or further classify the detections. Outlining the bounding box can direct human attention to the target area of interest (AOI) and improve observers’ classification speed and accuracy. However, this outline introduces visual clutter and crowding at the AOI boundary. In a first empirical study we investigated the use of padding to mitigate the effects of local clutter and compared different methods of visual highlighting (colored outline vs. blur outside the AOI). We found support for using padding to improve performance when animals were hard to see. Both colored outlines and blur were effective at directing observers’ attention.
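The two highlighting conditions described above (a padded colored outline around the detection box, and blur applied outside the padded AOI) could be rendered along the following lines. This is a minimal illustrative sketch, not the authors' stimulus-generation code; it assumes a MegaDetector-style normalized [x, y, width, height] bounding box and uses Pillow, and the function names and padding values are hypothetical.

```python
# Sketch of the two highlighting conditions, assuming a MegaDetector-style
# normalized bounding box [x, y, width, height]. Not the authors' code.
from PIL import Image, ImageDraw, ImageFilter

def padded_box(box, pad, size):
    """Convert a normalized [x, y, w, h] box to pixel corners, expanded by `pad` pixels."""
    w, h = size
    x0 = max(int(box[0] * w) - pad, 0)
    y0 = max(int(box[1] * h) - pad, 0)
    x1 = min(int((box[0] + box[2]) * w) + pad, w)
    y1 = min(int((box[1] + box[3]) * h) + pad, h)
    return x0, y0, x1, y1

def outline_highlight(img, box, pad=20, color="red", width=4):
    """Draw a colored rectangle `pad` pixels outside the detection box."""
    out = img.copy()
    ImageDraw.Draw(out).rectangle(padded_box(box, pad, img.size),
                                  outline=color, width=width)
    return out

def blur_highlight(img, box, pad=20, radius=8):
    """Blur everything outside the padded AOI, leaving the AOI itself sharp."""
    blurred = img.filter(ImageFilter.GaussianBlur(radius))
    x0, y0, x1, y1 = padded_box(box, pad, img.size)
    blurred.paste(img.crop((x0, y0, x1, y1)), (x0, y0))
    return blurred

# Hypothetical usage with one detection:
# img = Image.open("camera_trap.jpg")
# bbox = [0.41, 0.32, 0.18, 0.22]  # normalized x, y, w, h
# outline_highlight(img, bbox, pad=20).save("outline.jpg")
# blur_highlight(img, bbox, pad=20).save("blur.jpg")
```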