Developing student assessment related to a work-placement: A bridge between practice and improvement
University of Brighton, UK
University of Lapland, Finland
This paper explores the ways in which student assessment can be developed in higher education and work-related contexts to form a strong bridge between practice and improvement. Our aim is to provide a starting point for evaluation and improvement of assessment practices, which benefits the learners, instructors, and designers of the curricula, as well as developers of educational organisations. Empirical explorations in the article draw on interview data gathered from physiotherapy students, teachers, and workplace supervisors, on their experiences of student assessment, in two higher education and work-related contexts. Although the actual empirical study is not the focus of this paper, short descriptions of the participants' lived-through experiences are used to illustrate the explicit theoretical purpose of the discussion. In particular, we outline a theoretical model of 'zones and mirrors' that provides a compelling link between student assessment and learning, by exploring the stages and elements that can join the two, in dialogue with the empirical data.
Before giving a broader insight into the proposed model and its exploration with empirical data, the main terminology used in the remainder of the paper will first be clarified. Then, in order to outline the background and motivation for this article - the definition of a model of 'zones and mirrors' of assessment and learning, and the need to build a bridge between practice and improvement - contemporary themes and debates connected with assessment and work-placements, and quality aspects in assessment and evaluation, are briefly highlighted.
More generally, the term assessment in the literature is often used in parallel with feedback (Evans, 2013; Li & De Luca, 2014), and as a synonym for evaluation (Sadler, 2005, 2010). Furthermore, assessment feedback is in use as an umbrella concept "to capture the diversity of definitions, to include the varied roles, types, foci, meanings, and functions, along with the conceptual frameworks underpinning the principles of assessment and feedback, as well as all feedback exchanges generated within an assessment design" (Evans, 2013). For the Finnish scholar Raivola (2000), evaluation can be seen as the broadest of these concepts. According to Raivola, evaluation systems are needed to gather different types of assessment information (including, for example, feedback given to students or by students) for different purposes: for use in educational policy, and for the examination and enhancement of educational processes. In addition, he emphasises the contextual factors of quality and education, and states that quality is a multidimensional phenomenon, unique to every product and to the process creating the product. In this article the term 'assessment' is used to refer to any procedure for estimating student learning for whatever purpose, and 'evaluation' is understood more broadly, as defined by Raivola.
The ongoing shifts and transformations related to student assessment and learning practices are much discussed in the literature. It is noted that, traditionally, assessment in higher education has focused on 'measuring knowledge' rather than 'fostering learning', and the role given to students has been that of 'objects' of measurement rather than that of active 'agents' or 'participants' in their own assessment process (Hager & Butler, 1994; Biggs, 1996, 1999; Serafini, 2000; Boud, 2000; Poikela, 2002). It is suggested that the emphasis in the contemporary era is shifting towards student-centred practices, with the needs of the learners becoming the key focus of institutional attention (Boud, 2000, 2007; Poikela, 2002, 2004; Boud & Falchikov, 2005, 2006; Poikela & Poikela, 2005, 2012). However, mixed pictures of assessment practices and of existing conceptions of learning and assessment are noted (Hager & Hodkinson, 2009; Ashgar, 2012; Dearnley et al., 2013; Evans, 2013), adding to the complexity of the situation. It is claimed that while reforms and more innovative approaches to student assessment are welcomed by some, they can cause confusion to others (Dearnley et al., 2013), and the meaning of assessment seems to be a 'work-in-progress' (Bennett, 2011; Evans, 2013; Li & De Luca, 2014). Furthermore, it is suggested that the student assessment experience is becoming an increasingly challenging area in higher education (Ashgar, 2012; Dearnley et al., 2013), and that the lack of knowledge and understanding of the student assessment processes related to work-placements needs to be addressed (Ferns & Moore, 2012).
In exploring the ways in which student assessment could be developed in relation to work-placements, in this article we start with the premise that all the above-mentioned shifts and transformations of the contemporary era are noteworthy. We also believe that student assessment practices should not only be aligned with the applied pedagogical assumptions and work-life expectations, but should also foster learning and practice improvement in a wider sense. One implication of this is that students need to become assessors of their own learning within the context of participation in practice: recognised as one of the stakeholders of the assessment process, involved in making the decisions that affect them, and contributing in the workplace. Therefore, it is the claim of this paper that a more holistic (relational and contextual) approach to development is needed in higher education with regard to work-placements; one in which all aspects of student assessment - regardless of the environment or the tools in use - are collaboratively considered and tuned to support student learning and practice improvement, on different levels.
Esa Poikela (2002, 2004) found an analogical relationship between Kolb's (1984) experiential learning cycle, Boud's (2000) concept of judgmental assessment, and Pettigrew's (1985) contextualism as a mode of analysis. Poikela (2002, 2004) presented the idea of context-based assessment (CBA), which requires that situational and contextual factors (on different levels) are carefully considered. It offers a very broad perspective on assessment processes in educational contexts, and presents the zones and mirrors of assessment and learning by exploring the stages and elements that can join the two (on different levels). This provides a starting point for the exploration of student assessment practice that could benefit the learners, instructors, and designers of the curricula, as well as the developers of organisations, and is presented next.
Figure 1: The mirrors of the assessment process
The core of the figure consists of the cycle of experiential learning (Kolb, 1984) with reflective observation as an essential part of that process. Self-assessment occupies the central zone of the core, process assessment the middle, and product assessment (in the sense of the assessment of the learning outcomes and achievements of the learner) the outer zone. Between the three zones are the boundaries needed for learning and development; not only from a student perspective, but also from the perspective of the instructors; for example, in developing assessment skills.
After outlining the theoretical model that provides a compelling link between student assessment and learning, and forms a bridge between practice and improvement, the stages and elements that can join the two will be explored next, in relation to empirical data obtained in two higher education and work-related contexts.
PBL, as an approach to learning and instruction and to organising the curriculum, does not follow the structure of academic subjects but that of problem solving within shared and individual learning processes (Barrows, 1996; Boud & Feletti, 1997). In the literature, certain 'core characteristics' are recognisable across the different applications of PBL, such as: students work in small groups guided by tutors, problems are used as the starting point of the learning process, and the learning process is organised around self-study and supported by complementary learning resources (Barrows, 1996; Boud & Feletti, 1997; Savin-Baden & Major, 2004; Schmidt et al., 2009; Poikela & Poikela, 2005, 2012). Assessment, however, is an area that appears to be somewhat problematic and controversial within the PBL literature as well. On the one hand, the potential of assessment is addressed, and on the other hand it is suggested that assessment practices may not fit well with the assumed pedagogical rationale, the expected educational outcomes, or the needs of the students (see Savin-Baden, 2003, 2004; Poikela & Poikela, 2005; Winning, Lim & Townsend, 2005; Poikela, Vuoskoski & Kärnä, 2009; Poikela & Moore, 2011).
With regard to the context of assessment, in this paper, collaborative assessment methods based on student self-assessment and instructor feedback were addressed in the study plan of both courses. Accordingly, the learning goals and outcomes for the student, in both programs, were recorded on a written assessment form, and were expected to be considered by all parties (student, teacher, and workplace educator/supervisor) during the work-placement. At the end of the work-placement, the student's performance was assessed on a numerical (0-5) scale in case A, and on a 'pass-fail' basis in case B. In addition, a work journal had to be written by the student during the period, in both courses. Here, the work journal was a reflective account of the student's clinical experiences together with relevant research literature, put into the form of a case report. The journal was marked at the end of the placement by the teacher and the workplace educator, numerically (0-5) in case A, and on a 'pass-fail' basis in case B.
During the work-placements, teachers were expected to visit the placements for supervision and assessment purposes when possible, in both schools. In addition, online tools were used for asynchronous and/or synchronous communication between the parties (student, teacher, and workplace supervisor). Online discussion forums and/or emails were used for asynchronous communication, feedback, and information sharing, in both cases. In case B, web cameras and headset microphones with desktop conferencing tools (Adobe Connect Professional) were used for synchronous communication when the teachers were not able to visit the workplaces; mainly when the location of the student placement was more than 40-50 kilometres from the university campus.
In practice, in-depth interviews were conducted soon after the students' work-placements (between August 2008 and April 2009), after obtaining the required ethical approvals (according to the requirements of the Finnish higher education and health care systems). The interviewees consisted of second-year students (n=8), their teachers (n=4), and workplace supervisors (n=4). All sixteen participants were interviewed face-to-face in their first language (Finnish) by the first author of this article. Anonymity, confidentiality, and other ethical aspects, as well as the research interest and methods, were clarified at the beginning of the interview session. Each interview was conducted based on the same interview plan (see Appendix 1), and open, in-depth methods were used (see Kvale, 1996). All interviews lasted between 45 and 75 minutes, and were audio-recorded, then transcribed verbatim by the researcher-interviewer.
A qualitative approach, drawing from philosophical phenomenology and hermeneutics, was applied for the purposes of data analysis and discussion in this article. Adopting the phenomenological attitude, as understood in the study, relates to the object of research (work-placement assessment as a lived through experience) as a lifeworld phenomenon and experiential given, understanding the experience from the perspective of the experiencer, and suspending all theoretical assumptions of the experience, when collecting and analysing the data, and describing the phenomenon (see Giorgi, 2009). Hermeneutics, then, relates to the interpretive framework applied to the raw data, in this case seeking to find evidence in the given context from the applied theory perspective (see Pollio, Henley & Thompson, 2006). Although the way of combining the two approaches has been criticised elsewhere (Cloonan, 1996; Giorgi, 2006; Applebaum, 2011), it is assumed that the phenomenological criteria are not violated, as no phenomenological (descriptive) status, in the strict Husserlian sense of the term, is claimed for the interpretations of the raw data in this article.
The empirical findings and their implications have been previously discussed in another forum (Poikela & Vuoskoski, 2009). Based on the more specific nature and limited scope of this paper, only some of the most representative quotations (translated from the original expressions in Finnish) from the raw data have been selected for exploration through the chosen theoretical lens, based on the discursive accounts as follows.
The most typical issue regarding the meaning of assessment, as expressed by participants in the empirical data, was the dynamics between self- and process assessment. The significance of shared discussions, and receiving feedback from others (peers and/or instructors) for learning and self-improvement, face to face and online, was particularly addressed by the students. For example, the students stated that asynchronous collaboration online enabled them to receive and post written comments and feedback and to reflect on the given feedback. At the same time they appreciated receiving timely feedback, positive encouragement, and constructive criticism from the instructors. With versatile feedback, students were able to recognise their own progression and future challenges.
It was very important for me to receive instant feedback especially from those in authority, and from professionals. I was able to utilise the teacher's and supervisor's feedback much better than my own reflection. (Student 3/case A)

In the data, self-assessment and process assessment were not considered easy tasks by any of the parties (students or instructors), but both were recognised as essential to the enhancement of (student) learning and improvement. The students of case B highlighted desktop conferencing as a versatile and efficient means of synchronous communication, allowing students to obtain immediate feedback and responses from their teacher during placement, and reassurance of being 'on the right track'. In addition, the textual online communications enabled students to follow parallel discussions about questions which also concerned them. However, most of the asynchronous collaborations in the online discussion forum were between students and teachers, and workplace supervisors were not actively involved (although they had similar access to the forum). If the students felt that the positive feedback given (face to face or online) was undeserved, or the feedback they anticipated was delayed, or they felt that no constructive criticism had been given, they found the situation frustrating, and this decreased their motivation for self-improvement.
Usually we talked with my supervisor after each patient situation. I was told in a very constructive manner if there was something I could have done better. I never felt offended or uncomfortable and afterwards I usually agreed with my supervisor, and I was able to understand what went wrong and why. (Student 1/case B)
The discussions and feedback that I was receiving from my peers were very beneficial. Sometimes I was getting support for my own thoughts and ideas, and sometimes, if my peers didn't agree with me, I had to go and search for more information. (Student 3/case B)

Based on the data, the importance of students' self-assessment skills, and the need to develop them further, were well recognised by all parties. As implied above, the student participants addressed the significance of receiving versatile feedback from others, of enhancing reflection on and understanding of their own learning, and of acknowledging areas that needed further development. Similarly, teachers and workplace supervisors stated that their aim was to support student learning and improvement, and development in self-assessment.
At the beginning of the period, when everything was new to me, I needed all the positive feedback and encouragement I was given, but later on, when things were getting better, I would have expected more constructive and corrective feedback to be able to stay motivated and willing to improve my performance. (Student 4/case A)
I am trying to be less demanding for the second-year students. I am trying to get them used to conducting self-assessment by explaining to them what it is and what should be taken into account with it; and later on, when they are more experienced, I will expect them to be able to analyse their own learning much more systematically. (Teacher 1/case A)

However, assessment produces information that is needed by all parties committed to improvement. Continuous improvement in self-assessment and process assessment is important both for students and for instructors. Because learning demands skills of reflection and interaction, meaningful and effective ways and tools are needed to support the quality and relatedness of individual and joint processes. The use of case reports and recordings of learning goals and achievements, for example, offers information for collaborative assessment, which can be mutually beneficial. Clearly, when more sophisticated technologies are implemented, attention must be paid to training all actors to use these online environments and technologies, so that they are able to support learning and assessment, and to facilitate the collaboration and communication of all parties within the assessment process.
In the data, the integration of (collaborative) process and product assessment was challenging. At its best, it was described as repetitive negotiation about the learning goals and achievements of the student, the criteria, and/or the possibilities and limitations of the work-placement. Supervisors and teachers both addressed the importance of giving timely feedback to the student, but the time required for continuous process assessment was not always in balance with their resources and other work responsibilities. Neither was the communication of the learning goals and achievements, nor understanding of the meaning of the assessment criteria, in balance between all parties.
I found our assessment discussions useful. It all began from setting the goals together, and the teacher was helping with questions and different viewpoints. The supervisor was more concerned about my skills and what I could do, but it all helped me to understand my own capabilities and challenges at the workplace. And in the final evaluation we were considering the achievements together [with the supervisor], and it really helped me to notice my own progression. (Student 4/case A)

Clearly, the circumstances, as present in the data, did not always support the implementation of collaborative assessment, equal participation, and the building of shared understanding between all parties. Nevertheless, the significance of continuous feedback and process assessment for the student was positively acknowledged by all parties. Participants, including students and instructors, also stated the benefits of the collaborative use of the assessment form and regular discussion around the goals and achievements. When striving for students to become assessors of their own learning within the context of participation in practice (see Boud & Falchikov, 2006), and for the whole assessment system to be aligned with the applied pedagogical assumptions (Poikela & Poikela, 2005), there is a need for transparency; the processes of learning and instruction, including assessment, should be shared with and developed between the students, teachers, and workplace supervisors (Poikela, Vuoskoski & Kärnä, 2009). Furthermore, instead of complaining about or accepting the lack of resources, attention could (and should) be focused on planning and strategies for the (collaborative) improvement of assessment practice.
I found the assessment form very useful, especially the assessment criteria. Some of the students needed a lot of guidance for understanding the criteria, and the difference between the goals and the achievements, to be able to use the criteria in their own self-assessment; but for some students it was relatively easy. (Teacher 1/case B)
I tried to give positive feedback and encouragement for the student, but, like always, I was often too busy doing something else; and that meant a guilty conscience for not having enough time for the student. (Supervisor 2/case A)
From a competency-based viewpoint (Hager, 2004), the integration of product assessment within the context of working life is related to the student's professional knowledge and competence. This type of 'knowing', and how it is developed, can be characterised as a process involving decision-making and problem solving while accessing increasing amounts of tacit knowledge. Within a contextual framework (Poikela, 2004; Pettigrew, 1985), tacit knowledge (and also explicit knowledge) is owned not only by individuals but by communities of workers, and the whole work organisation. Therefore, in a more holistic approach to assessment and development, within higher educational and work-related contexts, not only is the integration of process and product assessment required, but also an appreciation of the limitations of local contexts. In other words, besides the needs and interests of different individuals, and of people working together as communities of practice, the needs and requirements embedded within organisational, cultural, and societal contexts need to be acknowledged.
Assessment of professional knowledge is difficult, because tacit knowledge becomes visible only in fluent personal or shared actions (Schön, 1983). Therefore, as noted by Poikela (2004), it is understandable that, in such circumstances, assessment is easily focused on the results of actions; that is, learning outcomes. However, in this kind of assessment system, learners are left alone with their difficulties, because they do not receive enough information about their knowledge, and also lack process information for self-improvement. According to Poikela, those involved in curriculum development are likewise left without meaningful information about the assessment process, even though other types of 'assessment information' related to work-placements, such as student evaluations of teaching, are routinely collected for development purposes. Furthermore, an assessment system which concentrates on measuring 'qualifications' has its mirror only between the products and contexts. This results in a control system focused on the qualifications of individual learners, based on detailed examinations. A more dynamic assessment system, by contrast, based on generating learning and improvement and building on the judgment and meaningful information of all stakeholders, provides opportunities for learning and improvement within the whole educational system, and for justifying the pedagogical (or other) changes needed.
In the data, the purpose of assessment as a guiding factor in the learning and development of individual students had clearly been internalised. Accordingly, the importance of student participation and continuous assessment, and of discussion and negotiation between all parties, was understood, yet not fully acted upon. Shared understanding and collaborative utilisation of the assessment form were considered to have been particularly challenging. The question was also raised whether the students were given too much responsibility for their own process assessment, especially in recording their learning and improvement. This again may imply that the 'mirror' of assessment and learning was predominantly between the products (learning outcomes) and the context.
I have learnt how to use the assessment form and I think I am quite good at it; but still I was hoping that my supervisors would have recorded their opinions of my suggested achievements on the form; or that we would at least have considered my achievements together; but there wasn't any real negotiation or discussion about them [during the work-placement]. (Student 2/case B)

The dynamics between the assessment of learning outcomes and the context of assessment, as present in the data, appear somewhat distorted. There were some differences in the roles, responsibilities, and concerns of the teachers and the workplace supervisors related to student assessment, as expressed by the participants themselves. Teachers described their own role as that of a mediator between the student and the workplace supervisor, responsible for the pedagogical purposes of work-related assessment and the curricular expectations. Supervisors described themselves as mainly concerned about the students' understanding of the workplace requirements, but also expressed feelings of uncertainty about their role as an assessor, the pedagogical expectations, and the meaning of the assessment criteria.
I wasn't told what went wrong in my patient report, not until the end of the period, and then it was too late to make any corrections. I don't know if there was any sense in that. I think assessment in general [during the work-placement] was based too much on student self-assessment. I mean how the assessment was conducted; I nearly didn't get any constructive or corrective feedback at all. (Student 1/case A)
I see my own role in the early phase more as an advisor than an assessor. The early workplace periods are so exciting for the student; they need a lot of support and encouragement, and although we were having skilled and experienced supervisors, I still thought I had to ensure that they [workplace supervisors] really understood the study phase of the students correctly. (Teacher 1/case A)

Overall, the exploration of the empirical data suggests that there is some mismatch between the formal learning-assessment environment as expected in the curricula, and the actualised assessment practice as experienced by participants in relation to a work-placement. Neither were the expectations of the different parties aligned with each other, nor was the meaning of assessment clear between participants. However, based on the data, it can be stated that there was a shared, unifying meaning within the individual variations of the assessment experience, highlighting a commitment to collaborative assessment practice: discussion between the participants within the assessment process (student, teacher, and workplace supervisor); support for student self-assessment; and learners receiving continuous feedback from various sources. In summary, in the light of the empirical data and the applied theoretical model, the most problematic area needing further attention was the lack of a more integrative and holistic approach to student assessment and learning related to work-placements; that is, the integration of process and product assessment, of individual and shared processes, and of understandings and meaning(s) of assessment in different contexts, together with acknowledgement of the need for continuous improvement by all parties involved in the assessment process.
I think it was really hard for the student to understand the expectations of the workplace; and for us it was hard to understand the expectations of the school; for example, what was the level of competence to be expected from the student and how could we be fair in our assessment for the student. (Supervisor 1/case A)
This provided a starting point for the theoretical exploration and discussion of the development of student assessment practices that could benefit the different stakeholders involved in student assessment in higher educational contexts more widely; namely the learners, instructors, and designers of curricula, as well as developers of higher educational and work organisations. As such, it aimed to highlight the development of student learning and assessment in higher education based on mutual communication and co-operation between all parties involved in the improvement of educational processes. The paper also contributes to previously published research which argues that there is a need for knowledge of the assessment processes in work-related environments (Ferns & Moore, 2012), of the student assessment experience (Ashgar, 2012; Dearnley et al., 2013), and of the meaning of assessment (Bennett, 2011; Evans, 2013; Li & De Luca, 2014) in higher educational contexts, and for engagement with integrated work and learning perspectives in general (Johnsson, Boud & Solomon, 2012). Moreover, it offers a model for bridging the gap between learning and assessment, and between practice and development; that is, the ways in which student learning and assessment in relation to work (or work-placements) are understood and talked about, the ways in which they are experienced and/or actually practised, and the ways in which they are being developed, in higher educational contexts.
Based on the data, the teachers and supervisors had a clear focus on supporting student learning, but limited resources were felt to be a hindrance to creating shared actions and understanding between participants. Nevertheless, the importance of developing students' (self-)assessment skills was highlighted by all parties. The participants also clearly stated their experience of the benefits of the collaborative use of the assessment form, and of regular discussion between the parties. However, a question was also raised: namely, whether students were given too much responsibility for their own learning and assessment. Furthermore, it was not only the students who felt that they were not receiving enough information for learning and self-improvement; the instructors also felt that they lacked important information for examining the student learning and assessment processes, and for justifying the pedagogical goals. These results support the suggestions of previous research: that while reforms and more innovative approaches to student assessment are welcomed by some, they can cause confusion to others (Dearnley et al., 2013); that the student assessment experience is becoming an increasingly challenging area in higher education (Ashgar, 2012; Dearnley et al., 2013); and that there is a need for more integrated perspectives on work and learning, and for interactive relationships between the various individuals within organisational contexts (Johnsson, Boud & Solomon, 2012). In this paper, we suggest that the assessment and evaluation system for work-placements and higher education learning environments as a whole should be more focused on learning and development, on multiple levels, and that a more holistic perspective towards development is also needed.
Furthermore, the exploration of the empirical data suggests that although the different elements needed for the learning and development of all parties in the assessment process are generally recognised, and the means for development are available, the focus of assessment is still largely on the products rather than on the process of learning and improvement, in a wider sense.
In conclusion, this paper has highlighted that the assessment and evaluation of work-placements is a fundamental part of higher education development, and that it is essential to take it into account both during the curriculum planning phase and in the implementation of the curriculum into practice. Within the context of the ongoing shifts and transformations of the contemporary era, the need to develop assessment practices in parallel with emergent views and the various forms and appearances of work-related learning and pedagogy is becoming more and more pronounced. However, successful implementation of the anticipated aims and assumptions requires persistent development work, interactive conditions, and mutual recognition of the different stakeholders involved in the process, at all levels. In this article, we have provided a starting point for the evaluation and improvement of student assessment practices which may benefit parties on multiple levels. It is our intention to publicise the presented model for further discussion and development.
Applebaum, M. H. (2011). (Mis)appropriations of Gadamer in qualitative research: A Husserlian critique (Part 1). Indo-Pacific Journal of Phenomenology, 11(1), 1-17. http://dx.doi.org/10.2989/IPJP.2011.11.1.8.1107
Ashgar, M. (2012). The lived experience of formative assessment practice in a British university. Journal of Further and Higher Education, 36(2), 205-223. http://dx.doi.org/10.1080/0309877X.2011.606901
Barnett, D. (1999). Learning to work and working to learn. In D. Boud & J. Garrick (Eds), Understanding learning at work. London, New York: Routledge, pp.29-44.
Barrows, H. S. (1996). Problem-based learning in medicine and beyond. In M. Birenbaum & F. Dochy (Eds), Bringing problem-based learning to higher education: Theory and practice. San Francisco, CA: Jossey-Bass, pp.3-13.
Bennett, R. E. (2011). Formative assessment: A critical review. Assessment in Education: Principles, Policy & Practice, 18(1), 5-25. http://dx.doi.org/10.1080/0969594X.2010.513678
Biggs, J. (1999). Teaching for quality learning at university: What the student does. Buckingham: The Society for Research into Higher Education and Open University Press.
Biggs, J. (1996). Assessing learning quality: Reconciling institutional, staff and educational demands. Assessment & Evaluation in Higher Education, 21(1), 5-16. http://dx.doi.org/10.1080/0260293960210101
Boud, D. (2007). Reframing assessment as if learning were important. In D. Boud & N. Falchikov (Eds), Rethinking assessment in higher education: Learning for the longer term. London: Routledge, pp.14-27.
Boud, D. (2000). Sustainable assessment: Rethinking assessment for the learning society. Studies in Continuing Education, 22, 151-167. http://dx.doi.org/10.1080/713695728
Boud, D. & Falchikov, N. (2006). Aligning assessment with long-term learning. Assessment & Evaluation in Higher Education, 31(4), 399-413. http://dx.doi.org/10.1080/02602930600679050
Boud, D. & Falchikov, N. (2005). Redesigning assessment for learning beyond higher education. In Higher education in a changing world. Proceedings of the 28th HERDSA Annual Conference, Sydney, 3-6 July, pp.34-41. http://www.herdsa.org.au/wp-content/uploads/conference/2005/papers/boud.pdf
Boud, D. & Feletti, G. (Eds). (1997). The challenge of problem-based learning. London: Kogan Page.
Boud, D. & Garrick, J. (1999). Understandings of workplace learning. In D. Boud & J. Garrick (Eds), Understanding learning at work. London, New York: Routledge, pp.1-12.
Boud, D. & Lawson, R. (2011). The development of student judgement: The role of practice in grade prediction. Paper presented at the 14th Biennial EARLI Conference, 29 August - 3 September, Exeter, UK.
Boud, D., Solomon, N. & Symes, C. (2001). New practices for new times. In D. Boud & N. Solomon (Eds), Work-based learning: A new higher education? Buckingham: The Society for Research into Higher Education & Open University Press, pp.3-17.
Cloonan, T. F. (1995). The early history of phenomenological psychological research in America. Journal of Phenomenological Psychology, 26(1), 46-126. http://dx.doi.org/10.1163/156916295X00033
Cooper, L., Orrell, J. & Bowden, M. (2010). Work integrated learning: A guide to effective practice. USA: Routledge.
Dall'Alba, G. (2009). Learning professional ways of being: Ambiguities of becoming. Educational Philosophy and Theory, 41(1), 34-45. http://dx.doi.org/10.1111/j.1469-5812.2008.00475.x
Dearnley, C. A., Taylor, J. D., Laxton, J. C., Rinomhota, S. & Nkosana-Nyawata, I. (2013). The student experience of piloting multi-modal performance feedback tools in health and social care practice (work)-based settings. Assessment & Evaluation in Higher Education, 38(4), 436-450. http://dx.doi.org/10.1080/02602938.2011.645014
Evans, C. (2013). Making sense of assessment feedback in higher education. Review of Educational Research, 83(1), 70-120. http://dx.doi.org/10.3102/0034654312474350
Ferns, S. & Moore, K. (2012). Assessing student outcomes in fieldwork placements: An overview of current practice. Asia-Pacific Journal of Cooperative Education, 13(4), 207-224. http://www.apjce.org/files/APJCE_13_4_207_224.pdf
Giorgi, A. (2009). The descriptive phenomenological method in psychology: A modified Husserlian approach. Pittsburgh PA: Duquesne University Press. http://www.dupress.duq.edu/products/psychology6-paper
Giorgi, A. (2006). Concerning variations in the application of the phenomenological method. The Humanistic Psychologist, 34(4), 305-319. http://dx.doi.org/10.1207/s15473333thp3404_2
Hager, P. (2004). The competence affair, or why vocational education and training urgently needs a new understanding of learning. Journal of Vocational Education & Training, 56(3), 409-433. http://dx.doi.org/10.1080/13636820400200262
Hager, P. & Butler, J. (1994). Problem-based learning and paradigms of assessment. In S. E. Chen, R. M. Cowdroy, A. J. Kingsland & M. J. Ostwald (Eds), Reflections on problem-based learning. Sydney: Australian PBL Network.
Hager, P. & Hodkinson, P. (2009). Moving beyond the metaphor of transfer of learning. British Educational Research Journal, 35(4), 619-638. http://dx.doi.org/10.1080/01411920802642371
Johnsson, M. C., Boud, D. & Solomon, N. (2012). Learning in-between, across and beyond workplace boundaries. International Journal of Human Resources Development and Management, 12(1/2), 61-76. http://dx.doi.org/10.1504/IJHRDM.2012.044200
Kolb, D. (1984). Experiential learning: Experience as the source of learning and development. Englewood Cliffs, NJ: Prentice Hall.
Kvale, S. (1996). InterViews: An introduction to qualitative research interviewing. Thousand Oaks, California: SAGE.
Laitinen-Väänänen, S. (2008). The construction of supervision and physiotherapy expertise: A qualitative study of physiotherapy students' learning sessions in clinical education. Doctoral thesis. Jyväskylä: University of Jyväskylä (Studies in Sport, Physical Education and Health). http://urn.fi/URN:ISBN:978-951-39-3297-8
Li, J. & De Luca, R. (2014). Review of assessment feedback. Studies in Higher Education, 39(2), 378-393. http://dx.doi.org/10.1080/03075079.2012.709494
Parjanen, M. (2003). Amerikkalaisen opiskelija-arvioinnin soveltaminen suomalaiseen yliopistoon [The application of American student assessment to a Finnish university]. Korkeakoulujen arviointineuvoston julkaisuja. Helsinki: Edita. https://karvi.fi/app/uploads/2015/01/KKA_803.pdf
Pettigrew, A. M. (1985). Contextual research: A natural way to link theory and practice. In E. E. Lawler, A. M. Mohrman, S. A. Mohrman, G. E. Ledford & T. G. Cummings (Eds), Doing research that is useful for theory and practice. USA: Lexington Books. pp.222-274.
Poikela, E. (2004). Developing criteria for knowing and learning at work: Towards context-based assessment. The Journal of Workplace Learning, 16(5), 267-274. http://dx.doi.org/10.1108/13665620410545543
Poikela, E. (2002). Osaamisen arviointi [Assessing knowing]. In R. Honkonen (Ed), Koulutuksen lumo - Retoriikka, politiikka ja arviointi [Enchantment of education - Rhetoric, politics and assessment]. Tampere: Tampere University Press, pp.229-245.
Poikela, E. & Poikela, S. (Eds.) (2012). Competence and problem based learning - experience, learning and future. Selected papers book, International Conference on PBL, 12-13 April 2012. Rovaniemi University of Applied Sciences Publication Series A no 3: Rovaniemi.
Poikela, E. & Poikela, S. (2006). Learning and knowing at work - professional growth as a tutor. In E. Poikela & A. R. Nummenmaa (Eds), Understanding problem-based learning. Tampere: Tampere University Press, pp.183-207.
Poikela, E. & Poikela, S. (2005). The strategic points of problem-based learning - organising curricula and assessment. In E. Poikela & S. Poikela (Eds), Problem-based learning in context - bridging work and education. Selected papers book, International Conference on PBL, 9-11 June 2005, Lahti, Finland. Tampere: Tampere University Press, pp.7-22.
Poikela, S. (2005). Learning at work as a tutor - the processes of producing, creating and sharing knowledge in a work community. In Problem-based learning in context - bridging work and education. Selected papers book, International Conference on PBL, 9-11 June 2005, Lahti, Finland. Tampere: Tampere University Press, pp.177-194.
Poikela, S. & Moore, I. (2011). PBL challenges both curriculum and teaching. In T. Barrett & S. Moore (Eds.), New approaches to problem-based learning: Revitalising your practice in higher education. New York: Routledge. pp.229-238.
Poikela, S. & Vuoskoski, P. (2009). Developing context-based assessment in problem-based physiotherapy education. Presentation at ISATT 2009, 14th Biennial Conference of International Study Association on Teachers and Teaching. University of Lapland, Rovaniemi, Finland, 1-4 July 2009.
Poikela, S., Vuoskoski, P. & Kärnä, M. (2009). Developing creative learning environments in problem-based learning. In Tan Oon-Seng (Ed.), Problem-based learning and creativity. Singapore: Cengage Learning Asia. pp.67-85.
Pollio, H. R., Henley, T. & Thompson, C. B. (2006). The phenomenology of everyday life. NY: Cambridge University Press.
Raivola, R. (2000). Tehoa vai laatua koulutukseen? [Efficacy or quality for education?]. Helsinki: WSOY.
Sadler, D. R. (2010). Fidelity as a precondition for integrity in grading academic achievement. Assessment & Evaluation in Higher Education, 35(6), 727-743. http://dx.doi.org/10.1080/02602930902977756
Sadler, D. R. (2005). Interpretations of criteria-based assessment and grading in higher education. Assessment & Evaluation in Higher Education, 30(2), 175-194. http://dx.doi.org/10.1080/0260293042000264262
Savin-Baden, M. (2004). Understanding the impact of assessment on students in problem-based learning. Innovations in Education and Teaching International, 41(2), 221-33. http://dx.doi.org/10.1080/1470329042000208729
Savin-Baden, M. (2003). Assessment, the last great problem in higher education? PBL Insight, 6(1).
Savin-Baden, M. & Major, C. (2004). Foundations of problem-based learning. Maidenhead: Open University Press/SRHE.
Schmidt, H. G., Van der Molen, H. T., Te Winkel, W. W. R. & Wijnen, W. H. F. W. (2009). Constructivist, problem-based learning does work: A meta-analysis of curricular comparisons involving a single medical school. Educational Psychologist, 44(4), 227-249. http://dx.doi.org/10.1080/00461520903213592
Schmidt, R. & Gibbs, P. (2009). The challenges of work-based learning in the changing context of the European Higher Education Area. European Journal of Education, 44(3), 399-410. http://dx.doi.org/10.1111/j.1465-3435.2009.01393.x
Schön, D. A. (1983). The reflective practitioner: How professionals think in action. New York: Basic Books.
Scott, R. H. & van Etten, E. (2013). Environmental and conservation volunteering as workplace integrated learning for university students. In Special Issue: Teaching and Learning in Higher Education: Western Australia's TL Forum. Issues in Educational Research, 23(2), 242-257. http://www.iier.org.au/iier23/scott.html
Thistlethwaite, J. E. (2013). Practice-based learning across and between the health professions: A conceptual exploration of definitions and diversity and their impact on interprofessional education. International Journal of Practice-based Learning in Health and Social Care, 1(1), 15-28. http://dx.doi.org/10.11120/pblh.2013.00003
Trede, F. (2012). Role of work-integrated learning in developing professionalism and professional identity. Asia-Pacific Journal of Cooperative Education, 13(3), 159-167. http://www.apjce.org/files/APJCE_13_3_159_167.pdf
Virolainen, M. & Stenström, M-L. (2013). Building workplace learning with polytechnics in Finland: Multiple goals and cooperation in enhancing connectivity. Journal of Education and Work, 26(4), 376-401. http://dx.doi.org/10.1080/13639080.2012.661846
Von Treuer, K., Sturre, V., Keele, S. & McLeod, J. (2011). An integrated model for the evaluation of work placements. Asia-Pacific Journal of Cooperative Education, 12(3), 195-204. http://www.apjce.org/files/APJCE_12_3_195_204.pdf
Webster-Wright, A. (2009). Reframing professional development through understanding authentic professional learning. Review of Educational Research, 79(2), 702-739. http://dx.doi.org/10.3102/0034654308330970
Winning, T., Lim, E. & Townsend, G. (2005). Student experiences of assessment in two problem-based dental curricula: Adelaide and Dublin. Assessment & Evaluation in Higher Education, 30(5), 489-505. http://dx.doi.org/10.1080/02602930500187014
Authors: Dr Pirjo Vuoskoski is a senior lecturer in the School of Health Sciences, University of Brighton, UK. A physiotherapist by background, she holds a PhD in education. Her interdisciplinary research interests integrate phenomenological and qualitative research, human lifeworld phenomena, higher education, work-related learning, problem-based pedagogy and physiotherapy.
Dr Sari Poikela is an associate professor in the Faculty of Education, University of Lapland, Finland. She holds a PhD in adult education. Her research interests include development of higher education: problem-based learning, evaluation and assessment, university and international pedagogy; and the interaction between work-life and education. Email: firstname.lastname@example.org Web: http://www.ulapland.fi/InEnglish/Units/Faculty-of-Education
Please cite as: Vuoskoski, P. & Poikela, S. (2015). Developing student assessment related to a work-placement: A bridge between practice and improvement. Issues in Educational Research, 25(4), 535-554. http://www.iier.org.au/iier25/vuoskoski.html