Affiliation:
1. Pacific Northwest National Laboratory, Richland, WA, USA.
Abstract
In this article, we develop guidelines for evaluating visual analytics environments based on a synthesis of reviews of the entries to the 2009 Visual Analytics Science and Technology (VAST) Symposium Challenge and a user study with professional intelligence analysts. By analyzing the 2009 VAST Challenge reviews, we gained a better understanding of what is important to our reviewers, both visualization researchers and professional analysts. We also conducted a small user study with professional analysts to determine the factors they consider important when evaluating visual analysis systems. In addition, we examined guidelines developed by researchers in various domains and synthesized the results of these three efforts into an initial set of guidelines for use by others in the community. One challenge for future visual analytics systems is to help in the generation of reports. In our user study, we therefore also worked with analysts to understand the criteria they use to evaluate the quality of analytic reports; we expect this knowledge to be useful as researchers develop systems that automate parts of report generation. From these efforts, we produced initial guidelines for evaluating visual analytics environments and for evaluating analytic reports. It is important to understand that these guidelines are initial drafts and are limited in scope, as the visual analytics systems we evaluated were used for specific tasks. We propose these guidelines as a starting point for the visual analytics community.
Subject
Computer Vision and Pattern Recognition
Cited by
20 articles.