Auto-graded scaffolding exercises for theoretical computer science

With Jason Xia*, Eliot Wong Robson*, Tue Do*, Aidan Glickman*, Zhuofan Jia*, Eric Jin*, Jiwon Lee*, Patrick Lin*, Steven Pan*, Samuel Ruggerio*, Tomoko Sakurayama*, Andrew Yin*, Yael Gertner, and Brad Solomon.

Accepted to the 2023 ASEE Annual Conference and Exposition.

We report on an ongoing effort to develop auto-graded scaffolding exercises to support a large upper-division theoretical computer science course at the University of Illinois Urbana-Champaign. Most of our exercises are organized as guided problem sets, each consisting of a sequence of questions that guide students through the process of solving an algorithm design or proof question. Our guided problem sets support multiple correct solutions, detect common mistakes, automatically provide counterexamples for incorrect answers, provide helpful narrative feedback, and award partial credit consistent with grading rubrics for written homework and exams. We have also incorporated several new interactive tools that enable students to submit solutions similar to traditional human-graded written homework. Our exercises have been used by almost 2000 students since our development effort began in early 2021. We report results from a survey of students from the Fall 2022 offering of the course, comparing their experience with the new auto-graded exercises against traditional written homework.
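The grading behaviors described above can be illustrated with a minimal, hypothetical sketch. The function below is not part of the actual course infrastructure; it simply shows the general pattern of checking one sub-question of a guided problem set, detecting a common mistake, attaching a counterexample-style explanation, and awarding rubric-based partial credit. All names and scores here are invented for illustration.

```python
# Hypothetical sketch of auto-grading one sub-question of a guided problem
# set: "Is <algorithm> a stable sorting algorithm?" The mistake detection,
# feedback, and partial-credit scores are illustrative, not the course's
# actual rubric or code.

def grade_stability_claim(student_says_stable: bool, algorithm: str) -> dict:
    """Return a score in [0, 1] plus narrative feedback for the claim."""
    stability = {
        "merge sort": True,
        "insertion sort": True,
        "heapsort": False,
        "quicksort": False,  # typical in-place implementations
    }
    if algorithm not in stability:
        return {"score": 0.0, "feedback": f"Unrecognized algorithm: {algorithm}"}

    if student_says_stable == stability[algorithm]:
        return {"score": 1.0, "feedback": "Correct."}

    # Detect a common mistake and give a counterexample-style explanation
    # with partial credit, instead of a bare "wrong" and zero points.
    if algorithm == "heapsort" and student_says_stable:
        return {
            "score": 0.25,
            "feedback": (
                "Heapsort is not stable: sifting elements through the heap "
                "can reorder records with equal keys."
            ),
        }
    return {"score": 0.0, "feedback": "Incorrect."}
```

In this pattern, each anticipated wrong answer maps to targeted feedback and a rubric-consistent partial score, mirroring how a human grader would annotate a written homework submission.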

Publications - Jeff Erickson (30 Mar 2023)