Abstract
When a recurrent neural network (RNN) language model is used for caption generation, the image information can be fed to the neural network either by directly incorporating it in the RNN – conditioning the language model by ‘injecting’ image features – or in a layer following the RNN – conditioning the language model by ‘merging’ image features. While both options are attested in the literature, there is as yet no systematic comparison between the two. In this paper, we empirically show that the choice of architecture has little effect on performance. The merge architecture does, however, have practical advantages: conditioning by merging allows the RNN’s hidden state vector to shrink in size by up to four times. Our results suggest that the visual and linguistic modalities for caption generation need not be jointly encoded by the RNN, as that yields large, memory-intensive models with few tangible advantages in performance; rather, the multimodal integration should be delayed to a subsequent stage.
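The distinction between the two architectures can be sketched in a few lines of numpy. This is a hypothetical illustration with made-up dimensions and random weights, not the authors' implementation: in the inject variant the image vector is concatenated to every word input before the RNN, while in the merge variant the RNN encodes words alone and the image vector is concatenated afterwards, just before the word-prediction layer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen only for illustration.
D_WORD, D_IMG, D_HID = 8, 16, 12

def run_rnn(inputs, w_x, w_h):
    """Run a simple Elman-style tanh RNN and return the final hidden state."""
    h = np.zeros(w_h.shape[0])
    for x in inputs:
        h = np.tanh(x @ w_x + h @ w_h)
    return h

words = rng.normal(size=(5, D_WORD))  # embeddings of a 5-word caption prefix
img = rng.normal(size=D_IMG)          # CNN image feature vector

# Inject: the image is concatenated to every word input, so the RNN's
# hidden state must jointly encode both modalities.
w_x_inj = rng.normal(size=(D_WORD + D_IMG, D_HID))
w_h_inj = rng.normal(size=(D_HID, D_HID))
h_inject = run_rnn([np.concatenate([w, img]) for w in words], w_x_inj, w_h_inj)

# Merge: the RNN sees only the words; the image joins the representation
# after the RNN, in a layer feeding the word-prediction softmax.
w_x_mrg = rng.normal(size=(D_WORD, D_HID))
w_h_mrg = rng.normal(size=(D_HID, D_HID))
h_words = run_rnn(list(words), w_x_mrg, w_h_mrg)
merged = np.concatenate([h_words, img])

print(h_inject.shape)  # hidden state carries both modalities
print(merged.shape)    # multimodal vector is formed only after the RNN
```

Note that in the merge setup the RNN's hidden state is purely linguistic, which is why it can be made smaller without loss, as the abstract reports.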
Publisher
Cambridge University Press (CUP)
Subject
Artificial Intelligence, Linguistics and Language, Language and Linguistics, Software
Cited by 64 articles.
1. Image caption generation using transfer learning;Computer Science and Mathematical Modelling;2023-10-30
2. RBBA: ResNet - BERT - Bahdanau Attention for Image Caption Generator;2023 14th International Conference on Information and Communication Technology Convergence (ICTC);2023-10-11
3. Time-dependent deep learning predictions of 3D electrode particle-resolved microstructure effect on voltage discharge curves;Journal of Power Sources;2023-09
4. The BeeMate: Air quality monitoring through crowdsourced audiovisual data;2023 8th International Conference on Smart and Sustainable Technologies (SpliTech);2023-06-20
5. Generative image captioning in Urdu using deep learning;Journal of Ambient Intelligence and Humanized Computing;2023-04-10