Affiliation:
1. School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing, China
2. Key Laboratory of Network Information System Technology (NIST), Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing, China
Abstract
Nowadays, relational position embedding is widely used in many large multi-modal models. It originates from relational captioning (a branch of image captioning) and comprises two procedures: geometric modelling and prior attention. However, several problems remain unsolved in these conventional procedures. This paper reviews the shortcomings of geometric modelling and prior attention. A new framework, the relational guided transformer (RGT), is then proposed to verify the authors' conclusions at the origin of relational position embedding: relational captioning. Specifically, RGT introduces two simple but effective improvements to geometric modelling and prior attention: (1) a machine-learned geometric modelling strategy, multi-task geometric modelling (MTG), replaces the original handcrafted geometric features under a multi-task learning setup; (2) the effectiveness of multiple kinds of prior attention is analysed and preserved in an improved form, spatial guided attention (SGA), which integrates geometric prior knowledge into the attention mechanism. Extensive experiments on MSCOCO and Flickr30k investigate the effectiveness of each module and support the authors' argument. The model's superiority over the authors' baseline is also demonstrated in offline evaluation on the "Karpathy" test split of both datasets.
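To make the two improvements concrete, the following is a minimal, hypothetical PyTorch sketch of attention whose logits are biased by a learned pairwise geometric prior, one plausible reading of SGA; the small MLP over raw box coordinates stands in for a learned geometric embedding in the spirit of MTG, replacing handcrafted geometric features. All names and dimensions here (GeometryBiasedAttention, d_geo, the (cx, cy, w, h) box format) are assumptions for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class GeometryBiasedAttention(nn.Module):
    """Sketch: single-head attention with an additive geometric prior bias.

    Hypothetical reading of SGA: a learned scalar bias per region pair,
    computed from box geometry, is added to the attention logits before
    the softmax, so geometric priors reshape attention without breaking
    its normalisation.
    """

    def __init__(self, d_model: int, d_geo: int = 64):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        # Learned geometric embedding over raw pairwise box descriptors,
        # standing in for MTG's machine-learned features (an assumption;
        # the paper's module may differ).
        self.geo_mlp = nn.Sequential(
            nn.Linear(8, d_geo), nn.ReLU(), nn.Linear(d_geo, 1)
        )
        self.scale = d_model ** -0.5

    def forward(self, x: torch.Tensor, boxes: torch.Tensor) -> torch.Tensor:
        # x: (B, N, d_model) region features; boxes: (B, N, 4) as (cx, cy, w, h)
        q, k, v = self.q(x), self.k(x), self.v(x)
        logits = torch.einsum("bid,bjd->bij", q, k) * self.scale  # (B, N, N)

        # Build pairwise geometry: concatenate box i and box j descriptors,
        # then map each pair to a scalar attention bias.
        n = boxes.size(1)
        bi = boxes.unsqueeze(2).expand(-1, -1, n, -1)  # (B, N, N, 4)
        bj = boxes.unsqueeze(1).expand(-1, n, -1, -1)  # (B, N, N, 4)
        geo_bias = self.geo_mlp(torch.cat([bi, bj], dim=-1)).squeeze(-1)

        attn = torch.softmax(logits + geo_bias, dim=-1)
        return attn @ v  # (B, N, d_model)
```

The additive-bias design is a common way to inject spatial priors into transformer attention (as in relation-aware attention variants): the learned geometric term can emphasise or suppress region pairs while the softmax still yields a valid attention distribution.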
Publisher
Institution of Engineering and Technology (IET)