Collective intelligence within web video

Author:

Konstantinos Chorianopoulos

Abstract

We present a user-based approach for detecting interesting video segments through simple signal processing of users’ collective interactions with the video player (e.g., seek/scrub, play, pause). Previous research has focused on content-based systems, which have the benefit of analyzing a video without user interactions, but they are monolithic: the resulting key-frames are the same regardless of user preferences. We developed the open-source SocialSkip system on a modular cloud-based architecture and analyzed hundreds of user interactions within difficult video genres (lecture, how-to, documentary) by modeling them as user interest time series. We found that replaying activity matches the semantics of a video better than skipping forward, and that all interesting video segments can be found within a factor of two times the average user skipping step from the local maxima of the replay time series. The concept of simple signal processing of implicit user interactions within video could be applied to any type of Web video system (e.g., TV, desktop, tablet) in order to improve the user navigation experience with dynamic and personalized key-frames.
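As a rough illustration of the approach the abstract describes, the following Python sketch builds a replay-based user interest time series from collective player interactions, smooths it, and marks candidate interesting segments of width two times the average user skipping step around its local maxima. The event format, function names, smoothing window, and example values are assumptions for illustration only, not the actual SocialSkip implementation.

```python
# Minimal sketch (assumed data format, not the SocialSkip code): derive a replay-based
# user interest time series and mark candidate segments around its local maxima.
from collections import Counter
from typing import List, Tuple

def replay_time_series(replay_events: List[Tuple[int, int]], duration: int) -> List[int]:
    """Count, for every second of the video, how many users replayed it.
    replay_events: (start_second, end_second) spans that users jumped back to re-watch."""
    counts = Counter()
    for start, end in replay_events:
        for second in range(max(0, start), min(duration, end)):
            counts[second] += 1
    return [counts[s] for s in range(duration)]

def smooth(series: List[int], window: int = 5) -> List[float]:
    """Simple moving-average smoothing of the interest time series."""
    half = window // 2
    out = []
    for i in range(len(series)):
        chunk = series[max(0, i - half):i + half + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def interesting_segments(series: List[float], avg_skip_step: int) -> List[Tuple[int, int]]:
    """Return segments of width 2 * avg_skip_step centred on local maxima, following the
    abstract's finding that interesting segments lie within two skipping steps of the
    replay-series maxima."""
    segments = []
    for i in range(1, len(series) - 1):
        if series[i] > series[i - 1] and series[i] >= series[i + 1] and series[i] > 0:
            segments.append((max(0, i - avg_skip_step), min(len(series), i + avg_skip_step)))
    return segments

if __name__ == "__main__":
    # Hypothetical replay spans (in seconds) collected from many users of a 60-second video.
    events = [(10, 15), (11, 16), (12, 14), (40, 45), (41, 44)]
    series = smooth(replay_time_series(events, duration=60))
    print(interesting_segments(series, avg_skip_step=10))
```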

Publisher

Springer Science and Business Media LLC

Subject

General Computer Science

Cited by 30 articles.

1. Detecting Important Video Shots Using Video Summarization and Feature Extraction Techniques. 2022 2nd International Conference on Advances in Engineering Science and Technology (AEST), 2022-10-24.

2. Immersion Measurement in Watching Videos Using Eye-tracking Data. IEEE Transactions on Affective Computing, 2022-10-01.

3. DanmuVis: Visualizing Danmu Content Dynamics and Associated Viewer Behaviors in Online Videos. Computer Graphics Forum, 2022-06.

4. SoftVideo: Improving the Learning Experience of Software Tutorial Videos with Collective Interaction Data. 27th International Conference on Intelligent User Interfaces, 2022-03-22.

5. Video Important Shot Detection Based on ORB Algorithm and FLANN Technique. 2022 8th International Engineering Conference on Sustainable Technology and Development (IEC), 2022-02-23.
