Affiliation:
1. Ridgeline, Inc., USA
2. California Polytechnic State University, USA
Abstract
Background and Context.
Students’ programming projects are often assessed on the basis of their tests as well as their implementations, most commonly using test adequacy criteria like branch coverage or, in some cases, mutation analysis. As a result, students are implicitly encouraged to use these tools during their development process (i.e., so that they are aware of the strength of their own test suites).
Objectives.
Little is known about how students choose test cases for their software while being guided by these feedback mechanisms. We aim to explore the interaction between students and commonly used testing feedback mechanisms (in this case, branch coverage and mutation-based feedback).
Method.
We use grounded theory to explore this interaction. We conducted 12 think-aloud interviews with students as they completed a series of software testing tasks, each of which involved a different feedback mechanism. Interviews were recorded, transcripts were analyzed, and we present the overarching themes that emerged from our analysis.
Findings.
Our findings are organized into a process model describing how students completed software testing tasks while being guided by a test adequacy criterion. Program comprehension strategies were commonly employed to reason about feedback and devise test cases. Mutation-based feedback tended to be cognitively overwhelming for students, and they resorted to weaker heuristics in order to address this feedback.
Implications.
In the presence of testing feedback, students did not appear to consider problem coverage as a testing goal so much as program coverage. While test adequacy criteria can be useful for assessment of software tests, we must consider whether they represent good goals for testing, and whether our current methods of practice and assessment are encouraging poor testing habits.
Funder
Baker/Koob endowments at California Polytechnic State University
Publisher
Association for Computing Machinery (ACM)
Subject
Education, General Computer Science
Cited by
1 article.
1. Probeable Problems for Beginner-level Programming-with-AI Contests;Proceedings of the 2024 ACM Conference on International Computing Education Research - Volume 1;2024-08-12