Affiliation:
1. Zhejiang University, China
2. Dalian University of Technology, China
3. Arizona State University, USA
Abstract
While visualization has been widely used as a data presentation tool on both desktop and mobile devices, the rapid visualization of information from images remains underexplored. In this work, we present a smartphone-based image acquisition and visualization approach for text data. Our prototype, ShotVis, takes images of text captured with mobile devices and extracts information for visualization. First, scattered characters in the text are processed and interactively reformulated into structured data (i.e., tables of numbers, lists of words, sentences). ShotVis then allows users to interactively bind visual forms to the underlying data and produce visualizations of the selected forms through touch-based interactions. In this manner, ShotVis can quickly summarize text from images into word clouds, scatterplots, and various other visualizations with a simple click of the camera, facilitating the interactive exploration of text data captured via smartphone cameras. Several case studies and a user study demonstrate the effectiveness of our approach.
Funder
Major Program of the National Natural Science Foundation of China
Fundamental Research Funds for the Central Universities
Major Program of the Natural Science Foundation of Zhejiang province, China
National Natural Science Foundation of China
US National Science Foundation
Publisher
Association for Computing Machinery (ACM)
Subject
Computer Networks and Communications,Hardware and Architecture
Cited by
2 articles.
1. Multimodal Visual-Semantic Representations Learning for Scene Text Recognition;ACM Transactions on Multimedia Computing, Communications, and Applications;2024-03-27
2. Convolutional Attention Networks for Scene Text Recognition;ACM Transactions on Multimedia Computing, Communications, and Applications;2019-01-31