LLM-based personalized grading to enable independent and open-ended summative assessments

Engineering courses often rely on programming assignments for learning assessment. In these courses, autograding is a common mechanism for providing quick feedback to students and reducing teacher workload in large classes. However, autograders offer little personalized feedback, and the rigidity required to make autograded assignments run efficiently tends to limit students' creative exploration. We are developing an AI-based autograder that will (1) provide personalized feedback to students and (2) use LLMs to grade free-form assignments, so that students can pursue independent and creative projects. We hypothesize that this autograder will enhance student self-efficacy in engineering education.
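A minimal sketch of the kind of grading loop such an autograder could use, assuming a rubric format, prompt wording, and JSON reply shape that are purely illustrative (the actual LLM API call is omitted; only prompt assembly and score aggregation are shown):

```python
import json

# Hypothetical rubric: criteria and weights are illustrative,
# not the project's actual grading scheme.
RUBRIC = [
    {"criterion": "Correctness of approach", "weight": 0.5},
    {"criterion": "Code clarity and documentation", "weight": 0.3},
    {"criterion": "Creativity of the solution", "weight": 0.2},
]

def build_grading_prompt(rubric, submission):
    """Assemble an LLM prompt requesting per-criterion scores and feedback."""
    criteria = "\n".join(
        f"- {c['criterion']} (weight {c['weight']})" for c in rubric
    )
    return (
        "You are a grader. Score the submission on each criterion from 0 to 10 "
        "and give one sentence of personalized feedback per criterion. "
        'Respond as JSON: {"scores": {criterion: score}, '
        '"feedback": {criterion: text}}.\n\n'
        f"Criteria:\n{criteria}\n\nSubmission:\n{submission}"
    )

def weighted_grade(rubric, llm_json_reply):
    """Parse the model's JSON reply and compute a weighted 0-10 grade."""
    reply = json.loads(llm_json_reply)
    return sum(c["weight"] * reply["scores"][c["criterion"]] for c in rubric)
```

Separating prompt construction from score aggregation keeps the LLM call swappable, and requesting structured JSON lets the per-criterion feedback be returned to the student alongside a reproducible numeric grade.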

Current participants

  1. Alex Frias
  2. Max Fu
  3. Aksheen Rathod

Selected references

  1. A survey on LLMs: Yang, Jingfeng, et al. "Harnessing the power of LLMs in practice: A survey on ChatGPT and beyond." arXiv preprint arXiv:2304.13712 (2023).
  2. On fine-tuning LLMs: Wang, Yihan, et al. "Two-stage LLM Fine-tuning with Less Specialization and More Generalization." NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models. 2023.
  3. Relevant autograder: Oli, Priti, et al. "Automated Assessment of Students' Code Comprehension using LLMs." arXiv preprint arXiv:2401.05399 (2024).

Last update: Mar 15, 2024. You can contribute to this page by creating a pull request on GitHub.