Visual preferences prediction for a photo gallery based on image captioning methods
Author:
Kharchevnikova A.S.1,
Savchenko A.V.1
Affiliation:
1. National Research University Higher School of Economics, Nizhny Novgorod, Russia
Abstract
The paper considers the problem of extracting user preferences from a photo gallery. We propose a novel approach based on image captioning, i.e., the automatic generation of textual descriptions of photos, followed by classification of those descriptions. Known image captioning methods based on convolutional and recurrent (Long Short-Term Memory, LSTM) neural networks are analyzed. We train several models that combine the visual features of a photograph with the outputs of an LSTM block, using Google's Conceptual Captions dataset. We examine the application of natural language processing algorithms to transform the obtained textual annotations into user preferences. Experimental studies are carried out on Microsoft COCO Captions, Flickr8k and a specially collected dataset reflecting users' interests. It is demonstrated that the best quality of preference prediction is achieved using keyword search and text summarization methods from the Watson API, which are 8% more accurate than traditional latent Dirichlet allocation. Moreover, descriptions generated by the trained neural models are classified 1–7% more accurately than those of known image captioning models.
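As an illustration of the pipeline the abstract describes, captioning each photo and then mapping captions to preference categories, a minimal Python sketch follows. It is not the authors' code: generate_caption is a stub standing in for any trained CNN-LSTM captioning model (such as one trained on Conceptual Captions), and the category keyword lists are hypothetical examples; the paper's actual categories come from its specially collected user-interest dataset.

from collections import Counter

# Hypothetical preference categories and keyword lists (illustrative only).
PREFERENCE_KEYWORDS = {
    "sports":  {"ball", "game", "running", "tennis", "skiing"},
    "travel":  {"beach", "mountain", "city", "street", "airport"},
    "food":    {"pizza", "cake", "plate", "restaurant", "sandwich"},
    "animals": {"dog", "cat", "horse", "bird"},
}

def generate_caption(photo_path):
    """Stub for a trained captioning model; the paper combines CNN visual
    features with an LSTM decoder trained on Conceptual Captions."""
    raise NotImplementedError("plug a trained captioning model in here")

def caption_to_preferences(caption):
    """Keyword-search step: return the categories whose keywords occur
    in the caption."""
    words = set(caption.lower().split())
    return {cat for cat, kws in PREFERENCE_KEYWORDS.items() if words & kws}

def gallery_preferences(photo_paths, top_k=3):
    """Caption every photo and aggregate matches into the user's most
    frequent preference categories."""
    counts = Counter()
    for path in photo_paths:
        counts.update(caption_to_preferences(generate_caption(path)))
    return [cat for cat, _ in counts.most_common(top_k)]

if __name__ == "__main__":
    # Demo with pre-generated captions instead of a live model.
    demo_captions = ["a dog running on the beach",
                     "a plate of pizza in a restaurant"]
    counts = Counter()
    for caption in demo_captions:
        counts.update(caption_to_preferences(caption))
    print(counts.most_common(3))

Aggregating matches over the whole gallery, rather than classifying single photos, reflects the idea that preferences are a property of the user's entire collection.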
Funder
National Research University Higher School of Economics
Publisher
Samara State National Research University
Subject
Electrical and Electronic Engineering; Computer Science Applications; Atomic and Molecular Physics, and Optics
Cited by
1 article.
1. Analyzing the Robustness of Vision & Language Models. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2024.