PBL 3

Introduction

The structure of many modules at UK HE institutions fails to make the most of learning opportunities that would enable students to improve their work prior to completion of the module (Gibbs & Simpson, 2004; Moon, 2004). With the increased diversity of students in HE, it is more important than ever that students’ individual strengths and weaknesses are clarified, by feeding forward with formative assessment, which will both encourage and enable them to meet the intended learning outcomes and achieve higher marks (Gibbs, 2006; Hounsell, 2007; Price et al., 2010). However, this is likely to involve restructuring the module.

The Problem

  • Level 4 Video Production module: 20-credit module, 10 weeks of teaching time
  • Students: mixed ability & mixed educational background
  • 2 assignments, 50/50 weighting
  • Submission dates: week 8 & week 13

The problems lie with the assessment of the module and the module structure:

  • The assessment of the module means there is no time for students to receive formative feedback before completing their first assignment and receiving a mark, which reduces the relevance of the feedback they receive for that assignment. Secondly, students have limited opportunity to improve their grade for the second assignment: because of the fixed hand-in dates and the set three-week summative assessment period, they are halfway through the second assignment when they receive their marks and feedback for the first. Consequently, the learning opportunities offered by the cycle of reflection, evaluation, analysis and subsequent application are reduced (Kolb, 1984; Gibbs, 1988).
  • The module structure means that a considerable amount of students’ skills practice takes place during contact time. While this learning through practical experience is crucial, students are not engaging sufficiently with practice in their own time. Consideration needs to be given to where the learning takes place, and to whether better use could be made of contact time to deepen students’ critical reflection on their experiential learning and on work conducted independently (Moon, 2004).

The Focus

In order to examine the problems identified, we will focus on the purpose of feedback, which we argue should be to allow students to improve before final assessment. We will then investigate the importance of experiential learning in enabling students to become self-directed learners, within the context of the module structure.

Encouraging students to become active learners enables them, through dialogue, to reflect and evaluate, and provides motivation for improvement (Gibbs & Simpson, 2004). Students are more likely to read their feedback and engage with it positively if there is an opportunity for them to enhance their work (Crooks, 1988).

Box 1: The Survey

The students were given written summative feedback for their first submission: 18 students were given a grade and 10 were not. 28 students then responded to a feedback survey asking whether they:

  1. had self-assessed their written work (using a rubric discussed in class) prior to hand-in
    • if yes, had they found it useful
  2. had peer-assessed their written work (using a rubric discussed in class) prior to hand-in
    • if yes, had they found it useful
  3. had read their feedback after the first assignment
    • if yes, did they feel positively or negatively about the feedback

Results:

  1. 23 students used the rubric to self-assess their work and 5 did not
    • 22 found it useful, 1 did not
  2. 15 students tried peer assessment and 13 did not
    • All students who tried peer assessment found it useful
  3. 27 students read their feedback and 1 did not
    • 24 found it useful and 3 did not (many said it made them understand their weaknesses and what they needed to do to improve)
  4. Of the 10 students who did not receive a grade with their feedback
    • 6 felt frustrated by this; of these, 2 felt negatively about their feedback

Students generally wanted to see their marks, not just their feedback, because it was summative and, as one student put it, ‘definite’. The mark was especially important because there was no longer an opportunity to improve on this work. They also wanted to compare their marks with others as a guide to their level. The survey seems to support the argument that students regard their mark as self-validating, and generally require the incentive of a mark of some sort if they are to engage with the work and complete the assignment (Gibbs & Simpson, 2004; Irons, 2008). However, students are culturally conditioned to expect marks; instead we need to develop their assessment literacy, so that they understand the greater importance of assessment for learning (Higher Education Academy [HEA], 2012). This must be supported at programme level.

It is a university requirement that students receive summative assessment to test whether they have achieved the learning outcomes set, and as a diagnostic tool to determine the standards needed for improvement (UoS, 2012).

If we are to take into account different students’ learning abilities and learning styles, as described in the survey (Box 1), what is being assessed must be broken down into clear descriptors which can be measured by students as well as tutors. The assessment itself must also take into account the differences in students’ needs and their reactions to feedback (Biggs & Tang, 2011; Boud & Falchikov, 2007). However, too often course descriptors are defined in overly academic terms which may not be clear to all students working at level 4 (Gibbs, 2006). The role of formative assessment and feedback is to clarify the ILOs prior to summative assessment, enabling students to reflect on whether they have met the specified requirements and to make improvements to their work.

Students may also vary greatly in their development of key skills and in the time required to improve their work. Assessment strategies need to take this into account by actively supporting these students through an inclusive pedagogy, without simply resorting to giving them ‘extra time’ (UKPSF V1). Hence the importance of formative assessment, so that students can self-assess their work and, with constructive suggestions, improve on it before hand-in. This is even more important for students with specific skills weaknesses, and would improve inclusivity (HEA, 2012, Tenet 5).

Another important factor to consider is where, in an educational environment, the actual learning takes place. As lecturers we could argue that it is in the lecture or taught session, where we impart our knowledge about a particular subject to the students; this is defined as the propositional learning lens (Hager & Hodkinson, 2009), which assumes acquiring knowledge is simply a process of transfer. The same assumptions can be applied to the skills learning lens (Hager & Hodkinson, 2009), which is relevant to this investigation, as skills are the focus of the learning in the case study. However, for students to understand this information and engage with it at a deeper level, they must participate in a dialogue that will effect internal change and enable new understandings to be applied to new situations. The active sharing of knowledge and skills, and the resulting deeper learning, mostly occurs outside the formal taught session (Marton & Saljo, 1976). This experiential learning works best in a culture where learning takes place as part of social activity, and students are inducted into their professional practice through ‘legitimate peripheral participation’ (Lave & Wenger, 1991; Moon, 2004). Opportunities for this kind of deep learning would benefit students if they were factored into the design of modules from the outset.

Solutions

1. Redesign the module assessment and feedback strategy in such a way as to achieve student engagement with their learning. This would be enhanced if both assignments received formative feedback, at whatever point the student felt ready to submit them, prior to the hand-in date at the end of the module. Students would then have the opportunity to improve on both assignments before summative assessment, in line with the University assessment guide (UoS, 2012; UKPSF A4, K5, V2). Students could also be given an ‘indicative mark’ with their formative feedback, as the survey suggests they would value, so they know their current level and what they need to do to improve.

This approach would afford more time during the module for putting transformative reflection into practice, deepening the learning (Moon, 2004; Biggs & Tang, 2011). It would also be more inclusive, allowing for student differences both in the speed at which they work and in the level of support and amount of feedback they require. It would improve students’ emotional and practical engagement with their feedback and encourage deeper, self-directed, lifelong learning (Tan, 2007; Hounsell, 2007).

2. Restructuring the module to make more directed use of students’ class contact and practice time would enhance this further. Students would be required to participate in active learning, using newly acquired skills in their own time (in their groups), and to bring evidence of their work and self-criticism back to class in the form of video to be played to their peers. In this way more time on task would take place, and more feedback on their work could be given, along with group evaluation, followed by immediate reflection on the task (Gibbs, 1988; Kolb, 1984). This would consolidate what has been learned, completing and deepening the learning cycle in each class. There is empirical evidence to show that it is ‘time on task’ that deepens students’ learning and understanding (Gibbs & Simpson, 2004; Chickering & Gamson, 1991); designing this into the module structure is therefore a key factor in improving student performance.

It is worth noting that the pedagogical approaches discussed in this study, implemented in a single module alone, will have little or no impact on students’ marks on a degree course. Without discussion and agreement across all teaching staff, covering all modules within a programme, on the types of feedback and assessment and how they are used to encourage student learning, very little will be improved. Enhancing communities of practice within programme teams, by encouraging a consistent and collaborative approach to assessment and feedback, and deepening reflection on students’ experiential learning, are crucial elements in improving student learning and the quality of their degrees (Gibbs, 2010; Price, 2005).

Julia Berg, Fiona Velez-Colby, Yaroslav

Word count: 1494

May 2013

References

Biggs, J., & Tang, C. (2011). Teaching for quality learning at university (4th ed.). Maidenhead: Open University Press McGraw Hill Education.

Chickering, A. W., & Gamson, Z. F. (1991). Applying the seven principles for good practice in undergraduate education. San Francisco: Jossey-Bass.

Boud, D., & Falchikov, N. (Eds.). (2007). Rethinking assessment in higher education: learning for the longer term. London: Routledge.

Crooks, T. J. (1988). The impact of classroom evaluation practices on students. Review of Educational Research, 58, 438-481.

Gibbs, G. (1988). Learning by doing: a guide to teaching and learning methods. Oxford: Further Education Unit.

Gibbs, G. (2006). Why assessment is changing. In C. Bryan & K. Clegg (Eds.), Innovative assessment in higher education. London: Routledge.

Gibbs, G. (2010). Using assessment to support student learning. Leeds: Leeds Metropolitan University.

Gibbs, G., & Simpson, C. (2004). Does your assessment support your students’ learning? London: Centre for Higher Education Practice, Open University.

Hager, P., & Hodkinson, P. (2009). Moving beyond the metaphor of transfer of learning. British Educational Research Journal, 35(4), 619-638.

Higher Education Academy. (2012). A marked improvement. York: Higher Education Academy.

Higher Education Academy. (n.d.). UK Professional Standards Framework. Retrieved 4 October, 2012, from http://www.heacademy.ac.uk/assets/York/documents/ourwork/rewardandrecog/ProfessionalStandardsFramework.pdf

Hounsell, D. (2007). Towards more sustainable feedback to students. In C. Bryan & K. Clegg (Eds.), Innovative assessment in higher education (pp. 101-113). London: Routledge.

Irons, A. (2008). Enhancing learning through formative assessment and feedback. Oxon: Routledge.

Kolb, D. A. (1984). Experiential learning: experience as the source of learning and development. Englewood Cliffs: Prentice Hall.

Lave, J., & Wenger, E. (1991). Situated learning, legitimate peripheral participation. New York: Cambridge University Press.

Marton, F., & Saljo, R. (1976). On qualitative differences in learning: 1 outcome and process. British Journal of Educational Psychology, 46(1), 4-11.

Moon, J. A. (2004). A handbook of reflective and experiential learning: theory and practice. London: Routledge Falmer.

Price, M. (2005). Assessment standards: the role of communities of practice and the scholarship of assessment. Assessment & Evaluation in Higher Education, 30(3), 215-230.

Price, M., Handley, K., Millar, J., & O’Donovan, B. (2010). Feedback: all that effort, but what is the effect? Assessment & Evaluation in Higher Education, 35(3), 277-289.

Tan, K. (2007). Conceptions of self-assessment: what is needed for long-term learning? In D. Boud & N. Falchikov (Eds.), Rethinking assessment in higher education: learning for the longer term (pp. 114-127). London: Routledge.

University of Salford. (2012). University assessment handbook: a guide to assessment design, delivery and feedback.