Thursday 17 July 2014

Open Mentor

The use of a 'learning support tool for tutors' (Whitelock, 2014) enabling them to 'reflect on the quality of feedback they provide for their students' (ibid.) is to be welcomed.  In general, time and budget restrictions prevent constant review or supervision of tutors' commentaries, so a relatively cost-free tool that can monitor tutors' input could be an effective way to ensure continuous improvement.

There are challenges with this type of tool, particularly in its initial construction: how the creators perceive useful and effective feedback, what they consider to be key words or sentences, and the reasoning behind their decisions to assign these to particular categories.  The database and algorithms also need to be reviewed and updated regularly to incorporate current terminology and correct errors if the tool is to provide useful feedback, which suggests a need for ongoing investment and financial support if it is to survive.

On submission of an essay to 'Open Mentor' (OM), it provides a report that analyses the assessor's comments, clustering them into groups similar to Bales's categories:
Bales:             a) Positive reactions
                        b) Attempted Answers
                        c) Questions
                        d) Negative reactions (Whitelock, 2003)

Open Mentor: a) Positive reactions
                         b) Teaching points
                         c) Questions
                         d) Negative reactions

The report will also suggest an 'ideal' number of comments for each category. Tutors can access representational graphs as well as an overall analysis of submitted work for a group or cohort.
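The clustering step can be pictured as simple keyword matching against each category's vocabulary. The sketch below is an illustrative assumption only: the category names follow the list above, but the keyword lists, function names and matching rule are invented for demonstration and are not Open Mentor's actual algorithm.

```python
# Hypothetical keyword-based classifier, loosely modelled on
# Open Mentor's four comment categories. Keywords are invented
# examples, not Open Mentor's real rules.
from collections import Counter

CATEGORY_KEYWORDS = {
    "Positive reactions": ["good", "well done", "excellent"],
    "Teaching points": ["you should", "consider", "try"],
    "Questions": ["?"],
    "Negative reactions": ["weak", "unclear", "wrong"],
}

def classify(comment):
    """Return every category whose keywords appear in the comment."""
    text = comment.lower()
    matches = [cat for cat, words in CATEGORY_KEYWORDS.items()
               if any(w in text for w in words)]
    return matches or ["Unclassified"]

def report(comments):
    """Count comments per category, as an OM-style summary might."""
    counts = Counter()
    for comment in comments:
        for cat in classify(comment):
            counts[cat] += 1
    return dict(counts)
```

Note that under this kind of rule a single comment can match two categories at once — for instance `classify("Here you should have a citation. Did you get this from a particular report?")` lands in both 'Teaching points' and 'Questions', which mirrors the double counting discussed below.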

Despite its shortcomings, OM provides a useful and speedy overview of the assessor's annotations. It can identify issues such as a lack of constructive comments or an excess of negative ones. OM also helpfully makes suggestions about the ideal number of comments for each category, something that a new (or experienced) assessor might aim towards.

There are, however, major limitations to this utility. It can recognise formal and informal comments, but the algorithm for determining which category, or how many categories, a comment is allocated to appears arbitrary. In 'Brown's' sample essay, OM identified a large number of comments that simply referred to referencing issues as both positive teaching points and/or questions. These comments concerned technical aspects of the essay rather than its qualitative content, yet OM skewed the overall review of the tutor's approach by counting them as positive inputs.

For example, the comment below appears in both the Teaching points and Questions categories of OM's report:

"Here you should have a citation. Did you get this from a particular report?"

It would probably be more advantageous, and less time-consuming, to cluster repetitive comments and indicate to the tutor that one acknowledgement would suffice.  OM, however, appears unable to recognise repetition. Nor can it distinguish between a technical comment or question and one that will add depth to the student's learning and increase motivation.
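The suggested clustering of repeats could be sketched as grouping comments that are identical once trivially normalised. Again, this is a hypothetical illustration: the normalisation rule and function names are assumptions, not a description of any existing Open Mentor feature.

```python
# Illustrative sketch: group near-duplicate comments so a tutor
# sees "same point, made N times" instead of N separate entries.
# The normalisation rule is an assumption for demonstration only.
import re
from collections import Counter

def normalise(comment):
    """Lower-case and strip punctuation so trivially different
    repeats of the same remark compare equal."""
    return re.sub(r"[^a-z0-9 ]", "", comment.lower()).strip()

def cluster_repeats(comments):
    """Map each distinct (normalised) comment to its frequency."""
    return Counter(normalise(c) for c in comments)
```

Under this rule, 'Citation needed.' and 'citation needed' collapse into a single entry with a count of two, so the tutor's repetition is acknowledged once rather than tallied as separate feedback items.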

Clearly 'Open Mentor' is still a work in progress.

Refs:
Open Mentor (2012) http://openmentor.org.uk (Accessed 17th July 2014)

Whitelock, D. (2014) Open Mentor, H817, Block 4, Week 24, The Open University, Milton Keynes

Whitelock, D., Watt, S., Raw, Y. and Moreale, E. (2003) ‘Analysing tutor feedback to students: first steps towards constructing an electronic monitoring system’, Research in Learning Technology, vol. 11, no. 3, pp. 31–42; also available online at http://www.researchinlearningtechnology.net/index.php/rlt/article/view/11283/12973 (Accessed 15th July 2014)
