Degree Name

Master of Science in Computer Science


Department

Computer Science

First Committee Member

Andreas Stefik

Second Committee Member

Edward Jorgensen

Third Committee Member

Jorge Fonseca Cacho

Fourth Committee Member

Sarah Harris

Abstract

Context: Computer Science enrollment has increased in recent years. At the University of Nevada, Las Vegas, our entry-level programming course has seen an average year-to-year growth rate of 17.33% in the spring and 13.71% in the fall over the past 10 years. These enrollment increases have led to considerable additional grading costs for course material.

Objective: The goal of this study is to determine the impact of automatic grading systems on students. If automatic grading is at least as effective as manual grading in practice, it may reduce costs, at least in the context of entry-level courses. However, the negative impacts of automatic grading are not well understood in the literature, and such systems should at least "do no harm" to students in order to be considered.

Participants: We recruited 171 college-level computer science students from our introductory C++ programming class (CS135 - Computer Science I) at the University of Nevada, Las Vegas during the fall semester of 2021 and analyzed their work over the semester.

Study Method: A counterbalanced within-subjects study with repeated measures was run over the course of the fall 2021 semester, measuring scores from programming lab assignments. The goal was to evaluate the impact on students when they are graded by a paid human teaching assistant vs. an automatic grading platform.

Findings: Each student had ten manually graded and ten automatically graded lab scores collected, resulting in 3,420 total data points. After data cleaning (e.g., removing outliers and missing lab submissions), we were left with 2,539. Results show that automatically graded assignments had higher scores with a lower standard deviation amongst submissions (M = 98.7, SD = 2.4), compared to those graded manually (M = 95.9, SD = 6.2), a significant difference of moderate size (F(3.27, 130.79) = 6.249, p < 0.001, η_p² = 0.135).
Conclusions: While our results were gathered in a particular context, a first programming course, we found that automated grading had no obvious negative impact on students. Notably, we observed a significant increase in grades, which we theorize occurred because the platform provided immediate feedback on code submissions. Further, we observed a higher standard deviation in manually graded assignments. After inspection, we suspect this was caused by general inconsistency between human graders, despite training and practice. More work is needed, but we conclude that automated grading in our context may have had a small positive impact, in addition to potentially reducing cost.
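The reported effect size can be recovered from the F statistic and its (corrected) degrees of freedom via the standard identity η_p² = (F · df_effect) / (F · df_effect + df_error). As a quick sanity check on the values above, a minimal sketch (the function name is ours, not from the study):

```python
def partial_eta_squared(f_stat, df_effect, df_error):
    """Partial eta squared from an F statistic and its degrees of freedom."""
    return (f_stat * df_effect) / (f_stat * df_effect + df_error)

# Values reported above: F(3.27, 130.79) = 6.249
effect = partial_eta_squared(6.249, 3.27, 130.79)
print(round(effect, 3))  # 0.135, matching the reported eta_p^2
```

The fractional degrees of freedom (3.27, 130.79) are consistent with a sphericity correction (e.g., Greenhouse-Geisser) applied to the repeated-measures ANOVA.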


Keywords

automated assessment; computer-managed instruction; grading; real-time feedback; student outcomes


Disciplines

Computer Sciences | Education | Statistics and Probability

File Size

1966 KB

Degree Grantor

University of Nevada, Las Vegas




IN COPYRIGHT.