Abstract
The purpose of our work is to automatically generate textual video-description schemas from surveillance video scenes, compatible with police incident reports. Our approach is based on a generic and flexible context-free ontology. The general schema has the form [actuator] [action] [over/with] [actuated object] [+ descriptors: distance, speed, etc.]. We focus on scenes containing exactly two objects. Through a series of elaborated steps, we generate a formatted textual description. We identify whether an interaction exists between the two objects, including remote interactions that involve no physical contact, and we flag the cases in which aggression took place. We use supervised deep learning to classify scenes into interaction and no-interaction classes, and then into subclasses. The descriptors chosen to represent the subclasses are key elements in surveillance systems: they help generate live alerts and facilitate offline investigation.
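As an illustration only (not taken from the paper), the sketch below shows one way the [actuator] [action] [over/with] [actuated object] [+ descriptors] schema could be represented as a data structure and rendered into a formatted textual description. The class name, field names, and example values are all assumptions for clarity.

```python
from dataclasses import dataclass, field

# Hypothetical encoding of the paper's description schema:
# [actuator] [action] [over/with] [actuated object] [+ descriptors: distance, speed, etc.]
@dataclass
class SceneDescription:
    actuator: str                 # e.g. "person_1"
    action: str                   # e.g. "fights"
    preposition: str              # "over" or "with"
    actuated_object: str          # e.g. "person_2"
    descriptors: dict = field(default_factory=dict)  # e.g. {"distance": "close"}

    def to_text(self) -> str:
        """Render the schema slots as a formatted textual description."""
        base = f"{self.actuator} {self.action} {self.preposition} {self.actuated_object}"
        if self.descriptors:
            extras = ", ".join(f"{k}: {v}" for k, v in self.descriptors.items())
            return f"{base} ({extras})"
        return base

# Example: a two-object scene with an aggressive interaction
desc = SceneDescription("person_1", "fights", "with", "person_2",
                        {"distance": "close", "speed": "fast"})
print(desc.to_text())  # person_1 fights with person_2 (distance: close, speed: fast)
```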
Publisher
Springer Science and Business Media LLC
Subject
General Earth and Planetary Sciences, General Physics and Astronomy, General Engineering, General Environmental Science, General Materials Science, General Chemical Engineering
Cited by
1 article.