Abstract
We propose a biologically inspired attentional model for target search in a 3D environment, with two separate channels: one for object classification, analogous to the “what” pathway of the human visual system, and one for predicting the camera’s next location, analogous to the “where” pathway. We generated 3D Cluttered Cube datasets in which each cube carries a target image on one vertical face and clutter images on the other faces. The camera moves around each cube on a circular orbit centered on the cube and must determine both the identity of the target image and the face on which it lies. The images pasted on the cube faces were drawn from three datasets: MNIST handwritten digits, QuickDraw, and RGB MNIST handwritten digits. The attentional input, three concentric cropped windows resembling the high-resolution central fovea and the low-resolution periphery of the retina, flows through a Classifier Network and a Camera Motion Network. The Classifier Network classifies the current view as one of the target classes or as clutter. The Camera Motion Network predicts the camera’s next position on the orbit by varying the azimuthal angle θ; at each step the camera performs one of three actions: move right, move left, or stay. A Camera-Position Network injects the camera’s current θ into the higher-level features of both the Classifier Network and the Camera Motion Network. The Camera Motion Network is trained using Q-learning, where the reward is 1 if the Classifier Network produces the correct classification and 0 otherwise. The total loss, the sum of the mean squared temporal-difference loss and the cross-entropy classification loss, is backpropagated using the Adam optimizer. Results on the two grayscale image datasets and the one RGB image dataset show that the proposed model successfully discovers the desired search pattern to find the target face on the cube and also classifies it accurately.
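As a sketch, the combined training objective described above can be written as follows; the discount factor γ, the Q-value notation Q(s, a), and the one-hot class label y are our notational assumptions and are not specified in the abstract:

\[
\mathcal{L}_{\text{total}}
= \underbrace{\Big( r_t + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \Big)^{2}}_{\text{mean squared TD loss}}
\;\underbrace{-\, \sum_{c} y_c \log \hat{y}_c}_{\text{cross-entropy loss}},
\qquad
r_t =
\begin{cases}
1 & \text{if the classification is correct,}\\
0 & \text{otherwise,}
\end{cases}
\]

where \(\hat{y}\) is the Classifier Network’s predicted class distribution for the current view. Under this reading, both networks are updated jointly by backpropagating \(\mathcal{L}_{\text{total}}\) with Adam.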
Publisher: Cold Spring Harbor Laboratory