Introduction
This investigation focuses on quality assurance in assessment and feedback, specifically in the case of the School of X. There may well be issues in the wider institutional context, such as the diversity of the student population and their previous educational experience, that also affect results, among many other factors. The key issues considered in this study, however, are the inconsistency of the marks awarded and final qualification grades that fall below both comparable courses and the sector average.
The Issues
The recent expansion of university courses (Gibbs, 2006), coupled with the increasing diversity of the student intake, may not be reconcilable with the traditional methods of assessment used throughout our Higher Education (HE) institutions. Traditionally, student achievement is measured using a grading system which implies that thousands of students, across the UK’s many different HE institutions and across a variety of degrees and modules, can be treated as having reached the same level of educational achievement (Murphy, 1996; Yorke, 2011). Government funding per student has halved in real terms over the last 15 years, while student-to-staff ratios have risen (Gibbs, 2006). As class sizes have increased, so has the bureaucracy associated with meeting internal and external requirements for quality assurance (Gibbs, 2012). This has reduced the time and resources available to provide feedback and raise standards.
“if a university were to announce it was going to cut back its teaching to 2 per cent of what another institution provided there might be something of an outcry. However, this is exactly what many institutions have done with regards to assessment and feedback, without announcing this, or even, I suspect, planning it.” (Gibbs, 2006, p.14)
An issue that leads to inconsistencies in the marks awarded, as is the situation at the School of X, is the disconnect between the criteria given to students, on which the assessment of their work is apparently based, and the actual practice of assessing that work carried out by individual tutors in HE institutions (Orr, 2007). Criterion-referenced marking that relates specifically to the intended criteria or learning outcomes of the module may reduce inconsistencies in marking in both comparable and non-comparable subjects. However, there is a tendency for tutors to revert to ‘norm’ referencing when marking students’ work (Yorke, 2011). How can this be overcome? There needs to be co-operation between those marking the work to ensure that they apply the same interpretations to it and to the judgements used to measure its success. Orr (2007) refers to the ‘verbal dance’ carried out by lecturers trying to agree a mark, while Allen (1998, p.244) describes the conversations in which marks are agreed as ‘shopping around for the grade’, akin to bargaining and shaped by the relative power of the assessors. The work in front of the marker is set in the wider context of the lecturer’s experience of the students and their own belief system, as it is impossible for markers to ‘park their values and beliefs at the study door’ (Leach et al., 2000), all of which points to the impossibility of fully objective or consistent marking. The situation at the School of X may be better or worse than the scenarios described here; however, given its problem with inconsistent marks, they cannot be ruled out as contributing factors.
The Quality Assurance Agency (QAA) in the UK expresses the view that the problems of assessment can substantially be solved if only assessment criteria are expressed with sufficient clarity (QAA, 2013). This has prompted HE institutions to provide transparent, accountable assessment criteria or Intended Learning Outcomes (ILOs) for taught modules and programmes (Souto-Otero, 2012), criteria that are also intended to be transparent to students (Gibbs & Simpson, 2004). Interpreting these specifications is not straightforward: generic terms such as ‘critically evaluate’ mean different things in different disciplines, and students may interpret ILOs differently from both their peers and their tutors (Yorke, 2011), while often lacking the educational maturity to recognise this (Price et al., 2010; Yorke, 2003; Hounsell, 2007). A likely cause of the below-average grades at the School of X is therefore the students’ poor understanding of the criteria by which their work is assessed, combined with varying tutor interpretations in the assessment process.
Concerns about declining standards, together with worries about quality assurance, approval and student reaction, have made institutions cautious about approving changes to assessment, let alone implementing innovative new methods (Gibbs, 2006). In Higher Education, teachers’ assessment judgements are more readily accepted than those of teachers in the school sector, where strict regulations must be adhered to (Murphy, 2006). This is not to suggest that the school regulation system should be adopted by HE, as this would threaten the very diversity and creativity of original research and the different approaches to learning that universities are all about (Murphy, 2006). Rather, grading needs to be seen less as a practice of measurement and more as one of professional judgement, where standards and criteria are agreed and upheld by a community of practice within a disciplinary community (HEA, 2012; Price, 2005). This in turn means addressing how markers can be trained to improve their professionalism (Yorke, 2011).
Staff development in assessment practice in higher education has been sorely lacking and at best patchy (Elton, 2006; Gibbs, 2006; Murphy, 1996; Williams & Ryan, 2006). In subjects such as art and design, the ‘positivist’ or empirical approach to assessment does not work; Elton (2006) argues that an ‘interpretivist’ approach, one that relies on professional judgement, is needed instead. Assessment must rely on the ‘connoisseurship’ of professionally trained assessors (Eisner, 1985; Yorke, 2011). Compulsory continuing professional development (CPD) in assessment for academic staff should therefore be implemented if standards of teaching and assessment are to improve.
Green (1993) and Gibbs (2012) have examined quality assurance activities across Western Europe and found that in the UK the standard of quality procedures for degree awards falls well below that found elsewhere. Quality assurance is tacitly entrusted to external examiners, who have nothing to do with the complex system of quality control and audits within universities; they are therefore not the solution to the quality assurance issue, as they cannot influence the culture within individual institutions or across institutions (Murphy, 2006). Should we even try to make comparisons between courses and awards if we are also to maintain reasonable diversity in our HE institutions? The government is keen to stress the comparable and overall quality of degrees across the UK, but many employers will make their own judgements about the value of degrees from different universities: no one, for example, seriously equates a first from Skelmersdale with a first from Oxford (Murphy, 1996). The Honours degree classification may be less and less appropriate, but change, if it happens, will be slow. In the meantime we should move from a measurement model of assessment to a judgement-based approach, grounded in trained professionalism, in which quality assurance standards can be agreed and upheld more readily (Yorke, 2011; Bloxham, 2009).
Solutions
So how can the situation at the School of X be improved? This study finds that a number of changes could have a dramatic impact on the key areas identified.
- Structured self- and peer assessment should be embedded in course structures, with the aim of increasing learning opportunities and raising standards among students (Falchikov, 2007).
- Strengthening students’ ability to make objective judgements against the criteria used to assess their work could improve the average marks awarded (Tan, 2007). However, there is no guarantee that this would bring the School of X up to the sector average.
- Marking scales with fewer bands (A–D), or holistic grading in which marks are grouped into four categories (marginal, adequate, good and excellent) (Biggs & Tang, 2011), could markedly improve the consistency of the marks awarded. However, this approach is unlikely to address marks that fall below the sector average.
- Staff training in assessment methods should be improved (Yorke, 2011).
The ability of teaching staff to assess students’ work and assign marks consistently is fundamental to any educational institution and underpins improvements in student achievement. As Gibbs (2012) states, the most important factor in improving education is ensuring that consistent standards are applied across both the school and the university.
Fiona Velez-Colby, Yaroslav, Julia Berg
Word count: 1395
May 2013
References
Allen, G. (1998). Risk and uncertainty in assessment: exploring the contribution of economics to identifying and analysing the social dynamic in grading. Assessment & Evaluation in Higher Education, 23(3), 241–258.
Biggs, J., & Tang, C. (2011). Teaching for quality learning at university (4th ed.). Maidenhead: Open University Press McGraw Hill Education.
Bloxham, S. (2009). Marking and moderation in the UK: false assumptions and wasted resources. Assessment & Evaluation in Higher Education, 34(2), 209–220.
Eisner, E. W. (1985). The art of educational evaluation. London: Falmer Press.
Elton, L. (2006). Academic professionalism: the need for change. In C. Bryan & K. Clegg (Eds.), Innovative assessment in higher education. London: Routledge.
Falchikov, N. (2007). The place of peers in learning and assessment. In D. Boud & N. Falchikov (Eds.), Rethinking assessment in higher education: learning for the longer term (pp. 128–143). London: Routledge.
Gibbs, G. (2006). Why assessment is changing. In C. Bryan & K. Clegg (Eds.), Innovative assessment in higher education. London: Routledge.
Gibbs, G. (2012). Implications of ‘Dimensions of quality’ in a market environment. York: Higher Education Academy.
Gibbs, G., & Simpson, C. (2004). Does your assessment support your students’ learning? London: Centre for Higher Education Practice, Open University.
Green, D. (Ed.). (1993). What is quality in higher education? London: Open University Press.
Higher Education Academy. (2012). A marked improvement. York: Higher Education Academy.
Hounsell, D. (2007). Towards more sustainable feedback to students. In C. Bryan & K. Clegg (Eds.), Innovative assessment in higher education. London: Routledge.
Leach, L., Neutze, G., & Zepke, N. (2000). Learners’ perceptions of assessment: tensions between philosophy and practice. Studies in the Education of Adults, 32(1), 107–119.
Murphy, R. (1996). Firsts among equals: the case of British university degrees. In B. Boyle & T. Christie (Eds.), Issues in setting standards: establishing comparabilities. London: Routledge Falmer.
Murphy, R. (2006). Evaluating new priorities for assessment. In C. Bryan & K. Clegg (Eds.), Innovative assessment in higher education. London: Routledge.
Orr, S. (2007). Assessment moderation: constructing the marks and constructing the students. Assessment & Evaluation in Higher Education, 32(6), 645–656.
Price, M. (2005). Assessment standards: the role of communities of practice and the scholarship of assessment. Assessment & Evaluation in Higher Education, 30(3), 215–230.
Price, M., Handley, K., Millar, J., & O’Donovan, B. (2010). Feedback: all that effort, but what is the effect? Assessment & Evaluation in Higher Education, 35(3), 277–289.
Quality Assurance Agency for Higher Education (QAA). (2013). UK quality code for higher education: Part A: setting and maintaining threshold academic standards. Retrieved 29 April 2013, from http://www.qaa.ac.uk/AssuringStandardsAndQuality/quality-code/Pages/UK-Quality-Code-Part-A.aspx
Souto-Otero, M. (2012). Learning outcomes: good, irrelevant, bad or none of the above? Journal of Education and Work, 25(3), 249–258.
Tan, K. (2007). Conceptions of self-assessment: what is needed for long-term learning? In D. Boud & N. Falchikov (Eds.), Rethinking assessment in higher education: learning for the longer term (pp. 114–127). London: Routledge.
Williams, S., & Ryan, S. (2006). Identifying themes for staff development: the essential part of PDP innovation. In C. Bryan & K. Clegg (Eds.), Innovative assessment in higher education. London: Routledge.
Yorke, M. (2003). Formative assessment in higher education: moves towards theory and the enhancement of pedagogic practice. Higher Education, 45(4), 477–501.
Yorke, M. (2011). Summative assessment: dealing with the ‘measurement fallacy’. Studies in Higher Education, 36(3), 251–273.