Conducting practical tests or exams, and grading assignments, for programming modules with a large group of students is tedious, error-prone, and time-consuming. Furthermore, in the wake of the global pandemic, there is a need to prepare for prolonged periods of online education and online assessment. The risks of cheating and plagiarism are higher for online assessments. One solution would be to set different questions for every student, but this is infeasible for large classes. Setting common questions, on the other hand, makes it easy for students to collaborate and cheat: they can copy each other's answers, make minor changes so the code looks different, and submit it as their own.

In this paper, an online platform for the automated grading of programming assignments is considered. Students can work on their assignments and submit their code on Codeboard.io for automated grading and instant feedback. However, Codeboard.io does not perform any check for code similarity across submissions, and standard plagiarism detectors, such as Turnitin, fail to detect source code plagiarism. Therefore, the output from Codeboard.io is processed and fed to an online code similarity detector, namely MOSS (Measure of Software Similarity), hosted by Stanford University. These tools are combined into a framework that allows for the automated grading of programming assignments and the identification of suspected cases of plagiarism, which can then be dealt with accordingly. An implementation of the proposed framework was successfully tested and evaluated with two classes of 20 students each and one class of 100 students.
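The abstract names MOSS as the similarity back-end but gives no implementation details. The following is a minimal sketch of how submissions exported from a grading platform could be forwarded to MOSS, assuming the third-party Python client mosspy and a registered MOSS user ID; the ID, file paths, and assignment language shown here are illustrative placeholders, not taken from the paper.

    import mosspy

    # Placeholder ID: obtained by registering with the MOSS service
    # (moss@moss.stanford.edu); not a real credential.
    MOSS_USER_ID = 123456789

    # Create a MOSS job for the assignment's language ("java" is assumed here).
    m = mosspy.Moss(MOSS_USER_ID, "java")

    # Optional: register the skeleton code given to all students as a base
    # file, so shared boilerplate is not flagged as plagiarism.
    m.addBaseFile("assignment/skeleton.java")

    # Add every student submission exported from the grading platform.
    m.addFilesByWildcard("submissions/*.java")

    # Upload the files to the MOSS server and retrieve the report URL.
    report_url = m.send()
    print("MOSS report:", report_url)

    # Keep a local copy of the similarity report for instructor review.
    m.saveWebPage(report_url, "reports/moss_report.html")

Pairs of submissions above a chosen similarity threshold could then be reviewed manually before any plagiarism case is raised, since high similarity alone does not prove misconduct.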