Affiliation:
1. Peking University, Beijing, China
2. Peking University, China and Peng Cheng Laboratory, Shenzhen, China
Abstract
Image-text retrieval takes a text (image) query and retrieves the semantically relevant images (texts), a task that is fundamental and critical to search systems, online shopping, and social networks. Existing works have shown the effectiveness of visual-semantic embedding and of exploiting unimodal knowledge (e.g., textual knowledge) in connecting the image and text. However, they neglect the implicit multimodal knowledge relations between these two modalities when the image contains information that is not directly described in the text, which hinders connecting the image and text through implicit semantic relations. For instance, an image may show a person next to a tap while the paired text description only includes the word "wash," omitting the washing tool "tap." The implicit semantic relation between the image object "tap" and the text word "wash" can help connect this image and text. To sufficiently utilize the implicit multimodal knowledge relations, we propose a Multimodal Knowledge enhanced Visual-Semantic Embedding (MKVSE) approach, which builds a multimodal knowledge graph to explicitly represent the implicit multimodal knowledge relations and injects it into the visual-semantic embedding for the image-text retrieval task. The contributions of this article can be summarized as follows: (1)
Multimodal Knowledge Graph (MKG) is proposed to explicitly represent the implicit multimodal knowledge relations between the image and text as intra-modal semantic relations and inter-modal co-occurrence relations. Intra-modal semantic relations provide synonymy information that is implicit in unimodal data such as text corpora, and inter-modal co-occurrence relations characterize the co-occurrence correlations (such as temporal, causal, and logical) that are implicit in image-text pairs. These two relations help establish reliable image-text connections in a higher-level semantic space. (2)
Multimodal Graph Convolution Networks (MGCN) are proposed to reason on the MKG in two steps so as to sufficiently utilize the implicit multimodal knowledge relations. In the first step, MGCN focuses on the intra-modal relations to distinguish entities from one another in the semantic space. In the second step, MGCN focuses on the inter-modal relations to connect multimodal entities based on co-occurrence correlations. This two-step reasoning sufficiently exploits the implicit semantic relations between the entities of the two modalities to enhance the embeddings of the image and text. Extensive experiments on two widely used datasets, Flickr30k and MSCOCO, demonstrate that the proposed MKVSE approach achieves state-of-the-art performance. The codes are available at https://github.com/PKU-ICST-MIPL/MKVSE-TOMM2023.
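To make the two relation types in contribution (1) concrete, the following is a minimal sketch of how an MKG-style adjacency over text-word and image-object entities could be assembled. It is not the authors' implementation: the entity list, the hand-picked synonymy score, the co-occurrence counts, and the simple count-normalized weighting are all illustrative assumptions; the released code builds these relations from dataset statistics.

```python
import numpy as np

# Toy entity vocabulary: text words and image objects as graph nodes
# (illustrative only; the real MKG is built from the training corpus).
entities = ["wash", "tap", "person", "water"]
idx = {e: i for i, e in enumerate(entities)}
n = len(entities)

# Intra-modal semantic relations: synonymy/similarity within one modality,
# given here as a hand-picked score for illustration.
intra_edges = [("wash", "water", 0.6)]

# Inter-modal co-occurrence relations: how often an image object appears
# together with a text word across image-text pairs (counts are made up).
cooccur_counts = {("tap", "wash"): 12, ("person", "wash"): 20, ("tap", "water"): 9}
pair_total = 100  # hypothetical number of image-text pairs

A_intra = np.eye(n)  # self-loops keep each entity's own signal
A_inter = np.eye(n)
for a, b, w in intra_edges:
    A_intra[idx[a], idx[b]] = A_intra[idx[b], idx[a]] = w
for (obj, word), c in cooccur_counts.items():
    w = c / pair_total  # simple normalized co-occurrence weight
    A_inter[idx[obj], idx[word]] = A_inter[idx[word], idx[obj]] = w

print(A_inter.round(2))
```

With such adjacencies, the "tap"-"wash" edge gives the retrieval model a path between an image object and a text word that never co-occur in a single caption.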
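Likewise, the two-step reasoning of contribution (2) can be pictured as two successive graph-convolution passes, the first over the intra-modal adjacency and the second over the inter-modal one. The PyTorch sketch below is an assumption-laden illustration, not the published MGCN: the `normalize` and `gcn_step` helpers, the layer sizes, and the random weights are all placeholders standing in for the actual architecture.

```python
import torch

def normalize(adj: torch.Tensor) -> torch.Tensor:
    # Symmetric normalization D^{-1/2} A D^{-1/2}, standard for GCNs;
    # assumes every node has a self-loop so degrees are nonzero.
    d_inv_sqrt = adj.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)

def gcn_step(x, adj, weight):
    # One graph-convolution pass: aggregate neighbors, project, activate.
    return torch.relu(normalize(adj) @ x @ weight)

torch.manual_seed(0)
n, d = 4, 8                        # toy sizes: 4 entities, 8-dim features
x = torch.randn(n, d)              # initial entity embeddings (toy)
A_intra = torch.eye(n)             # would come from the MKG construction
A_inter = torch.eye(n)
w1, w2 = torch.randn(d, d), torch.randn(d, d)

h = gcn_step(x, A_intra, w1)  # step 1: intra-modal semantic relations
h = gcn_step(h, A_inter, w2)  # step 2: inter-modal co-occurrence relations
print(h.shape)                # knowledge-enhanced entity embeddings
```

Separating the two passes mirrors the abstract's description: entities are first positioned relative to their own modality before cross-modal co-occurrence edges pull related image and text entities together.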
Funder
National Natural Science Foundation of China
2022 Tencent Wechat Rhino-Bird Focused Research Program
Publisher
Association for Computing Machinery (ACM)
Subject
Computer Networks and Communications, Hardware and Architecture
Cited by
11 articles.