BTL Surpass for online assessment in Computer Science



Over the last couple of years I have been leading the introduction of BTL’s Surpass online assessment platform for exams in Computer Science. A few months ago I posted the requirements we agreed on for an online exam system. I have now written up an evaluation case study: Use of BTL Surpass for online exams in Computer Science, an LTDI report (local copy). TL;DR: nothing is perfect, but Surpass did what we hoped, and there are plans to continue and expand its use.

My colleague Hans-Wolfgang has also presented on our experiences of “Enhancing the Learning Experience on Programming-focused Courses via Electronic Assessment Tools” at the Trends in Functional Programming in Education Conference, Canterbury, 19-21. This paper includes work by Sanusi Usman on using Surpass for formative assessment.

A fill-the-blanks style question for online exams in computer coding, showing a few lines of Java code with gaps for the student to complete. (Not from a real exam!)


Quick notes: Ian Pirie on assessment


Ian Pirie, Assistant Principal for Learning Developments at the University of Edinburgh, came out to Heriot-Watt yesterday to talk about some assessment and feedback initiatives at UoE. The background ideas motivating what they have been doing are not new, and Ian didn’t say that they were: they centre on the pedagogy of assessment and feedback as learning, and on the generally low student satisfaction relating to feedback shown through the USS. Ian did make a very compelling argument about the focus of assessment: he asked whether we thought the point of assessment was

  1. to ensure standards are maintained [e.g. only the best will pass],
  2. to show what students have learnt, or
  3. to help students learn.

The responses from the room were split 2:1 between answers 2 and 3, showing progress away from the exam-as-a-hurdle model of assessment. Ian’s excellent point was that if you design your assessment to help students learn, you will do things like making sure your assessments address the right objectives, that students understand those learning objectives and criteria, and that they get feedback which is useful to them; in doing so you will also address points 2 and 1.

Ideas I found interesting from the initiatives at UoE included:

  • Having students describe learning objectives in their own words, to check they understand them (or at least have read them).
  • Giving students verbal feedback and having them write it up themselves (for the same reason). Don’t give students their mark until they have done this: that way they won’t skip it, and in any case once students know whether they have done “well enough” their interest in the assessment wanes.
  • Peer marking with adaptive comparative judgement. Getting students to rank other students’ work leads to reliable marking (the course leader can then assess which pieces of work sit on grade boundaries, if that is what you need).

In the context of that last one, Ian mentioned No More Marking, which has links with the Mathematics Learning Support Centre at Loughborough University. I would like to know more about how many comparisons need to be made before a reliable rank ordering is reached, which will affect how practical the approach is given the number of students on a course and the length of the work being marked (you wouldn’t want every student to have to mark every submission if each submission were many pages long). But given the advantages of peer marking for getting students to reflect on the objectives of a specific assessment, I am seriously considering using the approach to mark a small piece of coursework from my design for online learning course. There is an additional rationale there: it illustrates the use of technology to manage assessment and facilitate a pedagogic approach, showing that computer aided assessment goes beyond multiple choice objective tests, which is part of the syllabus for that course.
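
As a toy illustration of how comparative judgement turns pairwise choices into a ranking (this is my own sketch, not how No More Marking or any other tool actually implements it), the following Python snippet fits a simple Bradley-Terry style score to a handful of made-up pairwise judgements; a real adaptive system would also choose which pair to show each judge next, which is not modelled here.

    from collections import defaultdict

    # Each judgement records (winner, loser) for one "which is better?" comparison.
    # The script names and results below are invented for illustration.
    judgements = [
        ("scriptA", "scriptB"), ("scriptA", "scriptC"),
        ("scriptB", "scriptC"), ("scriptD", "scriptA"),
        ("scriptD", "scriptB"), ("scriptC", "scriptB"),
    ]

    items = {name for pair in judgements for name in pair}
    quality = {name: 1.0 for name in items}  # start every item with equal quality

    # Iterative Bradley-Terry update: quality_i = wins_i / sum over pairings of 1/(q_i + q_j)
    for _ in range(100):
        wins = defaultdict(float)
        denom = defaultdict(float)
        for winner, loser in judgements:
            weight = 1.0 / (quality[winner] + quality[loser])
            wins[winner] += 1.0
            denom[winner] += weight
            denom[loser] += weight
        for name in items:
            if wins[name] > 0 and denom[name] > 0:
                quality[name] = wins[name] / denom[name]
        # normalise so the scale doesn't drift between iterations
        total = sum(quality.values())
        quality = {name: q * len(items) / total for name, q in quality.items()}

    # Rank best-first; with enough comparisons per item the ordering stabilises.
    for name in sorted(items, key=quality.get, reverse=True):
        print(f"{name}\t{quality[name]:.2f}")

Playing with the number of simulated judgements per item in a sketch like this is one cheap way to get a feel for how many comparisons are needed before the ordering stops changing.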

New projects for me at Heriot-Watt


I’ve been at Heriot-Watt University for many years now but haven’t really had much to do with the use of technology to enhance teaching and learning here. A couple of new projects might change that.

The Learning and Teaching Strategy for the School of Mathematical and Computer Sciences mentions using technology to create a more student-centred approach to learning, and also reshaping the soft learning environment to meet challenges such as delivering courses across campuses in Edinburgh, Dubai and Malaysia, and with learning partners around the world. So it references ideas like the use of Khan Academy-style videos where appropriate, effective use of formative assessment and feedback, and use of the virtual learning environment to facilitate student interaction and collaboration across those different campuses.

To put this strategy into action the School has set up a working group, which I am convening. The approach will not be prescriptive or dictatorial (that wouldn’t work); we want to focus on identifying, nurturing and disseminating within the School the existing practice that aligns with those strategic aims. We also want to bring in ideas from outwith the School that can be realised in our contexts; these will have to be practical ideas with demonstrable benefits (I’ll still do exploratory, researchy things, but through other work). We started work a couple of weeks ago with two initial tasks: 1, a survey to identify what people are already doing that might be worth sharing and what ideas they would like help progressing; and 2, an internal show-and-tell event to discuss such ideas. I rather hope that the event isn’t a one-off, that it leads to other similar events, and that the practice we find through it and the survey can be made open so that we can interact with all the other people doing similar work at their own institutions.

Coincidentally, I have also been asked to look at automated assessment, especially in exam scenarios in Computer Science. We have run electronic exams in the past, and many staff appreciated the automatic marking, but the system we used until now is no longer available, so I shall be working with colleagues to try to find a replacement. I haven’t worked much with online assessment before, but I think there are three related but separate strands that will need following: 1, the software system, its functionality and usability; 2, policy issues such as security for high stakes assessment; and 3, pedagogic issues. Clearly they are interdependent: for example, if your pedagogic considerations lead you to decide that students should have access to the web during exams, then the security issues you need to consider change. My feeling is that only an off-the-shelf system will be sustainable for us, so I’m looking at commercial and open source systems that have already been developed. However, Computer Science obviously has a very particular relationship with the use of computers in teaching and assessment, one that may not be exploited by general purpose computer aided assessment systems.