ACRONYM

Authors:

Fergal Monaghan¹, Siegfried Handschuh², David O'Sullivan²

Affiliations:

1. SAP Research, UK

2. National University of Ireland, Galway, Ireland

Abstract

With the advent of online social networks and User-Generated Content (UGC), the social Web is experiencing an explosion of audio-visual data. However, the usefulness of the collected data is in doubt, given that the means of retrieval are limited by the semantic gap between the data and people's perceived understanding of the memories it represents. Whereas machines interpret UGC media as series of binary audio-visual data, humans perceive the context under which the content is captured and the people, places, and events represented. The Annotation CReatiON for Your Media (ACRONYM) framework addresses the semantic gap by supporting the creation of a layer of explicit, machine-interpretable meaning that describes UGC context. This paper presents an overview of a use case of ACRONYM for the semantic annotation of personal photographs. The authors define a set of recommendation algorithms employed by ACRONYM to support the annotation of generic UGC multimedia. This paper introduces the context metrics and combination methods that form the recommendation algorithms used by ACRONYM to determine the people represented in multimedia resources. For the photograph annotation use case, these result in an increase in recommendation accuracy. Context-based algorithms provide a cheap and robust means of UGC media annotation that is compatible with and complementary to content-recognition techniques.
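The abstract describes combining several context metrics into a single ranking of candidate people for a photograph. As a purely illustrative sketch (the metric names, weights, and combination method below are hypothetical and not taken from the paper), one simple combination method is a weighted sum of per-metric scores:

```python
# Hypothetical sketch: combining context metrics into a ranked
# recommendation of people likely to appear in a new photo.
# Metric names, scores, and weights are illustrative only.

def recommend_people(candidates, metrics, weights):
    """Rank candidate people by a weighted sum of context-metric scores.

    candidates: list of person identifiers
    metrics: dict mapping metric name -> {person: score in [0, 1]}
    weights: dict mapping metric name -> weight
    """
    scores = {
        person: sum(
            weights[name] * metric.get(person, 0.0)
            for name, metric in metrics.items()
        )
        for person in candidates
    }
    return sorted(candidates, key=lambda p: scores[p], reverse=True)

# Example with two made-up metrics for three candidates:
metrics = {
    "co_occurrence": {"alice": 0.9, "bob": 0.4, "carol": 0.1},
    "recency":       {"alice": 0.2, "bob": 0.8, "carol": 0.5},
}
weights = {"co_occurrence": 0.6, "recency": 0.4}
print(recommend_people(["alice", "bob", "carol"], metrics, weights))
# → ['alice', 'bob', 'carol']
```

The weighted-sum combination is only one plausible scheme; the paper's actual metrics and combination methods are defined in the full text.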

Publisher

IGI Global

