Evaluating the Effectiveness of a Testing Checklist Intervention in CS2: A Quasi-experimental Replication Study

Recommended citation:

Gina R. Bai, Zuoxuan Jiang, Thomas W. Price, and Kathryn T. Stolee. 2024. Evaluating the Effectiveness of a Testing Checklist Intervention in CS2: A Quasi-experimental Replication Study. In Proceedings of the 20th ACM Conference on International Computing Education Research V.1 (ICER ’24 Vol. 1), August 13–15, 2024, Melbourne, VIC, Australia. ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/3632620.3671102


[Full Paper]

Abstract

Students often run into trouble when learning and practicing software testing. Recent studies demonstrate that a lightweight testing checklist containing testing strategies and tutorial information can assist students in writing higher-quality tests. Prior studies also suggest that students with lower prior knowledge of unit testing may benefit more from the checklists. However, insights into the potential benefits and costs of the testing checklists in a classroom setting are lacking. To address this, we conducted an operational replication study in a CS2 course with 342 students (171 from Fall 2023 and 171 from Spring 2024) who had no prior experience with unit testing.

In this paper, we report our experience introducing the testing checklists as optional tool support in a CS2 course. To evaluate the effectiveness of the testing checklists in a classroom setting, we quantitatively analyze a combination of programming assignment submissions and survey responses from students. Our results suggest that students who received the testing checklists achieved significantly higher quality in their test code, in terms of code coverage and mutation coverage, than those who did not. We also observed that early exposure to the testing checklists encouraged students to write more unit tests to cover possible testing scenarios.