Affiliation:
1. Guangdong Key Laboratory of Big Data Intelligence for Vocational Education, Shenzhen Polytechnic University, Shenzhen, China
Abstract
Multimodal sentiment analysis is a popular research direction in affective computing, extending unimodal sentiment analysis to settings built on multimodal information exchange. Face retrieval is a critical technology in current multimodal sentiment analysis. Traditional face retrieval methods rely on large amounts of data to train the matching relationship between text and face images, and therefore suffer from several issues: high data acquisition costs, insufficient or poor-quality training data, and high computational costs for model training. To address these issues, this paper proposes a face retrieval technique that combines a large language model with a visual foundation model. The technique performs retrieval in a zero-shot manner, eliminating the need to collect data or train a model, and thus significantly reduces the data and computing requirements. First, an accurate natural-language face description is generated by a large language model through prompt engineering over independent facial features. Second, the text and the images are encoded by the visual foundation model and mapped into the same multidimensional space for matching. Experimental results show that the method performs well on the CelebA dataset: retrieving with 31 different sets of face feature descriptions extracted from CelebA achieves 72% top-1 accuracy and 93% top-3 accuracy. Compared with traditional methods, the proposed method achieves considerable face retrieval accuracy without collecting text-image pairs and without training a model.
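The matching step described above, where text and image embeddings live in one shared space and retrieval reduces to nearest-neighbor search, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the mock NumPy vectors stand in for the outputs of a CLIP-style text encoder and image encoder, and `topk_retrieval` is a hypothetical helper name.

```python
import numpy as np

def topk_retrieval(text_emb, image_embs, k=3):
    """Return indices of the k images whose embeddings best match the text.

    Embeddings are L2-normalized so the dot product equals cosine
    similarity, mirroring how CLIP-style models compare text and images
    in a shared multidimensional space.
    """
    t = text_emb / np.linalg.norm(text_emb)
    imgs = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    sims = imgs @ t                  # cosine similarity per image
    return np.argsort(-sims)[:k]    # indices of the k best matches

# Mock 512-dimensional embeddings standing in for encoder outputs.
rng = np.random.default_rng(0)
image_embs = rng.normal(size=(10, 512))
# Simulate a text query that describes image 7 (its embedding plus noise).
text_emb = image_embs[7] + 0.1 * rng.normal(size=512)

print(topk_retrieval(text_emb, image_embs, k=3))
```

Top-1 accuracy then asks whether the first returned index is the correct face; top-3 accuracy asks whether the correct face appears anywhere in the three returned indices.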