Affiliation:
1. University of Virginia, Charlottesville, VA, USA
2. University of California, Santa Cruz, Santa Cruz, CA, USA
Abstract
In this paper, we identify trends about, benefits from, and barriers to performing user evaluations in software engineering research. From a corpus of over 3,000 papers spanning ten years, we report on various subtypes of user evaluations (e.g., coding tasks vs. questionnaires) and relate user evaluations to paper topics (e.g., debugging vs. technology transfer). We identify the external measures of impact, such as best paper awards and citation counts, that are correlated with the presence of user evaluations. We complement this with a survey of over 100 researchers from over 40 different universities and labs in which we identify a set of perceived barriers to performing user evaluations.
Publisher
Association for Computing Machinery (ACM)
Subject
Computer Graphics and Computer-Aided Design, Software
Cited by
14 articles.
1. What’s (Not) Working in Programmer User Studies? ACM Transactions on Software Engineering and Methodology, 2023-07-24.
2. Usability-Oriented Design of Liquid Types for Java. 2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE), 2023-05.
3. Let’s Talk With Developers, Not About Developers: A Review of Automatic Program Repair Research. IEEE Transactions on Software Engineering, 2023-01-01.
4. PLIERS. ACM Transactions on Computer-Human Interaction, 2021-08-31.
5. Search-Based Crash Reproduction and Its Impact on Debugging. IEEE Transactions on Software Engineering, 2020-12-01.