- Abstract
- In introductory computer science courses, instructors often use auto-graders to give students automated feedback and corrections. However, these auto-graders typically rely on unit tests, which evaluate only the code's output. As a result, they may overlook bad coding practices or reject nearly correct solutions. One way to address this problem is to extract and analyze information directly from the source code that students write. This approach makes it possible to automatically answer questions such as "Does this code contain bad practices?" or "Has the student substituted a 'print' for a 'return'?". This master's thesis investigates the technique of logical meta-programming for writing queries that target specific coding patterns in student code. These patterns may represent common coding idioms as well as flaws that frequently occur in student code. The basic idea is to translate Python code into a logical database and then use a logic programming language to write queries over that database. With such a tool, instructors can easily detect bad practices and provide more accurate feedback and corrections to their students.
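The idea of translating Python code into a fact base and querying it can be sketched in plain Python, using the standard `ast` module as the code analyzer. This is a hypothetical illustration, not the thesis's actual implementation: a real system would assert these facts into a logic programming engine (e.g. a Prolog-like language) rather than filter them with Python comprehensions, and the fact and predicate names below (`calls`, `returns_value`, `print_instead_of_return`) are invented for the example.

```python
import ast


def extract_facts(source):
    """Translate Python source into a flat fact base of tuples.

    A minimal stand-in for the "logical database" described above:
    each tuple is a ground fact such as ("calls", "add", "print").
    """
    facts = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            facts.append(("function", node.name))
            for inner in ast.walk(node):
                # record every named call made inside the function
                if (isinstance(inner, ast.Call)
                        and isinstance(inner.func, ast.Name)):
                    facts.append(("calls", node.name, inner.func.id))
                # record whether the function ever returns a value
                if isinstance(inner, ast.Return) and inner.value is not None:
                    facts.append(("returns_value", node.name))
    return facts


def print_instead_of_return(facts):
    """Query: functions that call print() but never return a value."""
    returners = {f[1] for f in facts if f[0] == "returns_value"}
    return sorted({f[1] for f in facts
                   if f[0] == "calls" and f[2] == "print"
                   and f[1] not in returners})


student_code = """
def add(a, b):
    print(a + b)
"""
print(print_instead_of_return(extract_facts(student_code)))  # → ['add']
```

The query is deliberately tiny, but it shows the shape of the approach: the fact extractor runs once per submission, and instructors then write as many independent pattern queries over the fact base as they need.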