Computer-based assessments in programming modules (Or how I learned to stop worrying and love Perception…)

Tuesday, May 24th, 2011
Assessing programming learning outcomes in an authentic way is challenging:
• Assessed programming assignments are resource-intensive, particularly if authenticity implies openness (few real-world programmers work in isolated cubicles without access to reference material or existing code).
• A computer-based test using predefined code is more manageable, but it only tests students’ understanding of syntax and logic, not their ability to write code.
• Coursework isn’t a realistic option when the Internet is such a rich source of ‘inspiration’, modules are large, and assignments can’t be open-ended individual projects.
The approach described here is midway between an authentic programming assessment and a computer-based syntax test. It reduces the impact of plagiarism and permits students to work and learn collaboratively while still being assessed individually.
In my web-based programming module, students create a portfolio from their weekly exercises, which they submit for feedback but not formal marking. The final assessment of the work is done online in open-book exam conditions using QuestionMark Perception™.
Questions present model solutions with fill-in-the-blank gaps, chosen to test students’ ability to manipulate key concepts. These can be marked automatically because the set of possible answers for each gap is small. (The image above shows an example question created from an exercise to animate text within a webpage.)

During the assessment, students are expected to refer to their own solutions, because the assessment is not actually intended to test their ability to write new code. Rather, it verifies that they understand their solution and can modify it to fit the model.

Comparing three years of data with two years of traditional marking in the same module suggests that the computer-based assessment produces marks with a similar average and variance to manually marked work folders. The up-front cost of developing questions is less than the time that was previously devoted to marking and dealing with plagiarism; in addition, the Perception test questions are sustainable and can give feedback to students.

However, it’s not all positive! The assessment is not truly authentic, and it penalises poor spelling or typing skills (which might disadvantage international, dyslexic and disabled students). Even so, I think it provides a strong use-case for Perception over Blackboard and Moodle: only Perception provides the necessary question and assessment types.
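To illustrate why a small answer set makes automatic marking tractable, here is a minimal sketch in JavaScript (the module’s own domain). The function name, the example gap and the accepted-answer list are hypothetical and not part of Perception — the point is simply that marking reduces to normalising a response and checking membership in a short list:

```javascript
// Hypothetical auto-marker for one fill-in-the-blank gap.
// Normalising whitespace and case softens (but does not remove)
// the penalty on typing slips noted above.
function markBlank(response, acceptedAnswers) {
  const normalise = (s) => s.replace(/\s+/g, " ").trim().toLowerCase();
  return acceptedAnswers.some((a) => normalise(a) === normalise(response));
}

// An imagined gap from a text-animation exercise:
//   setInterval(moveText, ____);
// with a small set of completions that would score the mark:
const accepted = ["50", "interval", "delay"];

console.log(markBlank("  Delay ", accepted));     // true
console.log(markBlank("window.delay", accepted)); // false
```

Misspelled or unanticipated answers still fail, which is exactly the fairness caveat raised above.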
Please contact me if you’d like more information, and share any thoughts on the ADC Newsletter blog: blogs.kingston.ac.uk/adc
Dr James Denholm-Price, Faculty of CISM