Evaluating written, audio and video feedback in higher education summative assessment tasks
University of South Australia
This paper evaluates various feedback models utilised for summative assessment tasks for tertiary digital media students at the University of South Australia. The aim of this research project was to establish the advantages and disadvantages of each technique, and to determine which model provided students with the most insight into their academic performance. In the first half of 2014, in the course Design Language in Media Arts, three different summative feedback models were trialled: written feedback, audio feedback and video feedback. Seventy-seven first year students participated in the learning experiment, in which a different feedback model was utilised for each of the three major assignments in the course. The three summative feedback techniques were evaluated at the end of the semester in the form of an online survey, which provided participating students with the opportunity to critically reflect on the learning experience. The findings of the study are discussed in light of the growing use of non-traditional feedback measures in higher education, and provide insight into the advantages and disadvantages of each model, from both student and staff perspectives.
Feedback is central to the learning experience, and providing written comments on students' assignments is seen as a key feature of feedback processes for summative assessment in higher education (Nicol, 2010). Despite this, researchers have highlighted several difficulties students face in relation to understanding such feedback (Blayney and Freeman, 2004; Duncan, 2007; Higgins, Hartley and Skelton, 2002; Merry and Orsmond, 2008). Duncan (2007) notes that written feedback can often focus on mechanical aspects of the submission, such as spelling and grammar, rather than concentrating on the core of the work. Duncan also found that written comments could be vague, such as "use a more academic style"; illegible if handwritten; inconsistent in quality and quantity across a range of tutors; and could focus on positive and encouraging comments at the expense of clear advice on how to improve the quality of subsequent work. Duncan's findings are consistent with numerous prior investigations that have analysed students' perceptions of teacher comments (Crook et al., 2012; Lizzio and Wilson, 2008; Orsmond, Merry and Reiling, 2005; Poulos and Mahony, 2008).
Video feedback, as well as other video-based learning techniques, has also been successfully incorporated into teaching and learning in higher education in recent times (Abdous and Yoshimura, 2010; Abrahamson, 2010; Bracher, Collier, Ottewill and Shephard, 2005; Cann, 2007; Henderson and Phillips, 2015; West and Turner, 2015). Cann (2007) notes that videos have broad acceptance amongst students and can offer a richer format for feedback provision than audio or written techniques. Henderson and Phillips (2015) suggest students found video-based feedback to be more individualised and personalised than text-based feedback, while West and Turner (2015) note that video feedback can be easier to comprehend and act upon. As video is a visual medium, it has the potential to enhance learning in ways that other technologies cannot, including the potential for demonstrations; i.e., seeing, as opposed to being told, how to improve subsequent coursework (Abrahamson, 2010). A further advantage is that, like audio, video files provide a permanent record, which can be stored online and replayed at the students' convenience, as opposed to handwritten feedback forms, which can be lost or damaged.
Within the course there were three summative assessment tasks, spaced evenly during the semester, as well as a series of weekly formative assessment tasks. The formative assessment tasks were completed both online and in class during tutorial sessions. Every two weeks, students submitted work-in-progress imagery to a gallery in the course's online learning environment, hosted by a Facebook application called 'the Café', and provided critiques on their peers' submissions.
When critiquing their peers, students were asked to address explicit assessment criteria related to the task, and to be constructive in their commentary, focusing on elements they considered to be successful, as well as areas for improvement and further consideration. All critiques were required to be at least 50 words in length and it was mandatory for students to respond to critiques in order to establish a dialogue between peers. Online submissions and critiques were linked to the summative assessment tasks in the course and timed so that students were able to address their peers' critiques, and consider making changes to their work. In alternating weeks, students brought work-in-progress to tutorials and discussed their work with both peers and staff. During these sessions, the cohort broke into small groups of four or five students, discussed their work under the guidance of the tutor, and provided critiques, comments and suggestions to their peers face-to-face. These group discussions lasted approximately 20 minutes at the start of each session, whereupon students worked individually and had one-on-one sessions with the tutor, enabling the provision of staff feedback. Participation in these formative assessment tasks was worth 15% of the final grade for the course. Students were not required to grade their peers within the online and in-class peer feedback models.
The three summative assessment tasks were due for submission in weeks 5, 8 and 13 respectively. The first assignment provided students with the opportunity to produce a small section of a music video, exploring the photographic technique 'pixilation'. The second assignment allowed students to produce a print marketing strategy for a design exhibition. For the third and final assignment, students were required to explore the design theory and techniques of a specific designer, produce their own design work, and then display their findings in a digital presentation. It was decided that audio feedback would be provided for the first assignment, video feedback for the second assignment, and written feedback for the third assignment, as each feedback technique lent itself to the focus of the assessment task.
Prior to the start of the course, several audio and video recording software packages were trialled and assessed to determine which would be best suited to the learning experience. Three audio recording software packages, Audacity, RecordPad and Adobe Audition, were tested. Several factors were taken into consideration when choosing the software package to use, including accessibility; cost; editing capabilities; and available output formats. Audacity was eventually chosen as the most suitable program for this research project as it featured editing capabilities comparable to both RecordPad and Adobe Audition, as well as being able to output to .mp3, a format that combines high quality with small file size. Audacity is also free to use, whereas RecordPad ($29.99) and Adobe Audition ($143.88 per annum as part of an Adobe CC licence) are not.
Three video recording software packages, CamStudio, Snagit and Camtasia Studio, were also trialled prior to the commencement of the semester. Again, several factors were taken into account to determine which program would be used in the study, specifically the availability and subsequent quality of audio recording; the ability to record the entire desktop; drawing capabilities; editing capabilities; and available output formats. Camtasia Studio was chosen as the most suitable package, as it featured superior recording and editing capabilities to the other two programs. Despite being significantly more expensive ($179, compared to CamStudio, which is free, and Snagit, at $29.95), Camtasia Studio's ability to export .mp4 files was seen as crucial to the research project. This ensured that video feedback files could be produced at a high resolution and could run for several minutes, while only being three to four megabytes in size. This was very important in the context of the research as the feedback files were required to be uploaded to the course's learning management system and then sent to students. Small file size enabled the videos to be uploaded and accessed by students relatively quickly.
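The relationship between duration and file size described above can be sanity-checked with simple bitrate arithmetic. The sketch below is illustrative only: the bitrates are assumptions chosen to match the reported file sizes, not the actual Audacity or Camtasia Studio export settings used in the study.

```python
# Estimate compressed file size from bitrate and duration.
# The bitrate values below are illustrative assumptions, not the
# study's actual export settings.

def estimated_size_mb(bitrate_kbps: float, duration_s: float) -> float:
    """Megabytes = (kilobits per second * seconds) / 8 bits / 1000 kB."""
    return bitrate_kbps * duration_s / 8 / 1000

# A 4-minute screen recording at a combined ~130 kbps (video + audio)
video_mb = estimated_size_mb(130, 4 * 60)   # ~3.9 MB

# A 2-minute .mp3 voice recording at 64 kbps
audio_mb = estimated_size_mb(64, 2 * 60)    # ~0.96 MB

print(round(video_mb, 2), round(audio_mb, 2))
```

At these assumed bitrates, a three-to-four-minute .mp4 lands in the three-to-four-megabyte range reported in the study, and a one-to-two-minute .mp3 at roughly one megabyte.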
The first piece of summative assessment was submitted by students in the fifth week of the course and, in line with university policy, feedback and grades were required to be distributed to students within two weeks. The feedback process involved viewing all student submissions to attain an understanding of the general quality of work and then individually assessing each submission against the assessment criteria. The process of assessing each student's work took approximately 10 to 15 minutes and included the course lecturer viewing the student's submission (a music video) several times and taking written notes. Once this had been completed, the assessor then opened Audacity and provided verbal feedback for a period of between one and two minutes. After the feedback had been recorded, it was reviewed; edited or re-recorded if necessary; then exported as an .mp3 file; uploaded to the course website; and sent to the student.
The second piece of summative assessment was submitted by students in the eighth week of the course and the assessment process was similar to that of assignment one, with staff first looking at all of the submissions and then assessing the students one by one. The process of assessing each student's work took approximately 20 to 25 minutes - almost twice as long as the audio feedback - and included viewing the student's submission (graphic design posters) several times and writing notes. When this had been completed, the assessor opened Camtasia Studio and recorded narrated, visual feedback, which included critiquing the student's designs in Photoshop for a period of between three and four minutes. Once the feedback had been recorded, it was then reviewed; edited if necessary; exported as an .mp4 file; uploaded to the course website; and sent to the student.
The third and final piece of summative assessment was submitted by students in the 13th week of the course and the assessment approach followed that of the first two submissions. The process of assessing each student's work took approximately 20 minutes - longer than the audio feedback, though less than the video feedback - and included viewing the student's submission (a .pdf design presentation) several times; providing written feedback in the form of open-ended comments; and charting the student's performance against the specific assessment criteria in the form of a rubric. Once the feedback had been written, it was then reviewed; edited if necessary; uploaded to the course website; and sent to the student.
| Demographic | No. of students in the course | % of course | No. of respondents | % respondents within demographic |
|---|---|---|---|---|
| Number of respondents | 77 | 100% | 58 | 75% |
| Student type: Local student | 65 | 84% | 47 | 72% |

Note: The survey yielded a response rate of 75%.
In the questionnaire, students were asked which type of summative feedback they believed was most beneficial to their studies during the semester. The majority of students - 66% - indicated they felt the video feedback was most beneficial, while 22% stated written feedback, and 12% stated audio feedback. Table 2 outlines all student responses, broken down by gender, student type and student age.
The video feedback was the most popular format amongst male students (71%) and female students (59%), as well as local students (68%) and international students (55%). Notably, written feedback was more popular amongst females (34%) than males (13%); and audio feedback was more popular amongst international students (36%) than local students (6%). There was also an increase in popularity of the written feedback amongst older students within the cohort, with 50% of students aged over 25 preferring it to audio and video. Students who nominated the video feedback as the most beneficial did so for a variety of reasons, including the visual and aural aspects, as well as the clarity of the feedback:
[Table 2: Most beneficial feedback type, broken down by gender (male/female/total participants) and by student type (local/international/total participants).]
The video allowed me to see what the feedback was on my work, as well as the areas I needed to improve upon. (Male, local, 17-18)

I thought the video feedback was excellent. Being told what you did wrong, what you could improve on and what you did right is good but actually seeing it with an explanation helps you take it on board. (Male, local, 19-24)

This incorporated both audio and visuals into the feedback and so made it really obvious what the teacher was commenting on - it was very easy to follow. (Male, local, 19-24)

From the 22% of students who preferred the written feedback, students noted the assessment criteria rubric, the document layout, and the familiarity of written comments as positive aspects:

I liked the fact that reasons were stated. This was easy to be able for me to go back and read and reflect on these things in the future. (Female, local, 17-18)

I was able to see the breakdown of each component that was graded on, see which areas I need to improve on. (Female, local, 25-34)

The written feedback has a good layout and is easier to follow. (Female, local, 19-24)

The smaller cohort of respondents who preferred the audio feedback noted the direct quality of the narration, and the clarity of language used as positives:

The audio feedback got right to the point and made it clear what was done well and what required more consideration. (Female, local, 19-24)

I could clearly understand what the teacher thought of my work, rather than the written notes which were harder to understand. (Male, international, 19-24)

During the survey, students were also given the opportunity to discuss the accessibility of the different feedback techniques, as well as their comprehension of the provided feedback, and the insight it gave into their academic performance. The majority of students found all feedback formats easy to access, with the audio and written options garnering a slightly stronger response than video feedback. Some students did note that it took slightly longer to download the video feedback file, which had an average file size of around four megabytes, compared to one megabyte for the audio file, and less than 50 kilobytes for the written feedback. Students were pleased with their ability to access all feedback types, with both the video and audio options accessible on smartphones and tablets, due to the format types used (.mp4 for video, and .mp3 for audio). The majority of students found all of the feedback files easy to understand, with no clearly superior format in this category. Similarly, the majority of students indicated that each feedback type provided insight into their academic performance, with the video feedback gaining a stronger response (93%) than the audio (88%) and written (75%) feedback. Mean response and broad agreement statistics are shown in Table 3.
| Topic | Audio feedback (mean / agreement) | Video feedback (mean / agreement) | Written feedback (mean / agreement) |
|---|---|---|---|
| The feedback file was easy to access. | 4.69 / 97% | 4.49 / 88% | 4.75 / 94% |
| The feedback file was easy to understand. | 4.66 / 92% | 4.63 / 90% | 4.63 / 91% |
| The feedback file provided insight into my academic performance. | 4.54 / 88% | 4.75 / 93% | 4.20 / 75% |

Note: The survey used a 5-point Likert scale from 1 (strongly disagree), to 3 (undecided), to 5 (strongly agree).
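Summary figures of the kind shown in Table 3 can be reproduced from raw Likert responses along the following lines. The response list below is invented for illustration, and treating "broad agreement" as a rating of 4 or 5 is an assumption about how such surveys are typically aggregated, not a detail confirmed by the study.

```python
# Compute the mean rating and percentage of "broad agreement"
# (ratings of 4 or 5) from 5-point Likert responses.
# The sample data is invented for illustration; it is not the
# study's raw survey data.

def likert_summary(responses: list[int]) -> tuple[float, float]:
    """Return (mean rating, % of responses that are 4 or 5)."""
    mean = sum(responses) / len(responses)
    agreement = sum(1 for r in responses if r >= 4) / len(responses) * 100
    return round(mean, 2), round(agreement, 0)

sample = [5, 5, 4, 5, 3, 4, 5, 4, 2, 5]
mean, agreement = likert_summary(sample)
print(mean, agreement)  # 4.2 80.0
```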
Students were also asked to reflect on the advantages and disadvantages of each feedback model. For the audio feedback, students valued its directness and personal quality, as well as the ability to hear the assessor's tone:

The audio feedback was helpful and direct. It was a good opportunity to speak directly to the student, especially in large classes it is easy to feel like you blend into the crowd. (Female, local, 19-24)

Hearing tone and emphasis on words helps grasp the intent behind them, much more so than just reading. (Male, local, 19-24)

Felt much more personalised than your average written feedback, it is easy to tell if you are putting effort into the feedback we receive in an audio recording. (Male, international, 19-24)

I think I understand spoken explanations better than written ones - it provides more insight. (Male, international, 19-24)

The principal disadvantage of the audio feedback appeared to be that it simply wasn't as powerful as the video feedback. While most students appreciated the ability to hear the assessor's comments, specifically their tone and emphasis, the lack of a visual component seemed to impact on the efficacy of the feedback:

Audio feedback is OK, and it did make its point well. However, the method obviously lacks the visual impact of video and of written feedback. Nevertheless, it is good to have audio boosting the other media. (Female, local, 19-24)

I personally dislike hearing feedback, prefer it to be visual. (Male, local, 19-24)

Several other students noted an inability to process the information provided in the audio feedback and a need to return to it for future reference:

Was probably marginally slower to take on board information than written feedback and didn't quite have the dynamics of video feedback. (Female, local, 25-34)

I didn't really see an advantage in audio feedback. The disadvantage would be that I felt I would need to note down the feedback myself to refer back to it. (Male, local, 19-24)

When discussing the video model, the principal advantage was the clarity of the feedback provided:

I thought the video feedback was excellent. Being told what you did wrong, what you could improve on and what you did right is good, but actually seeing it with an explanation helps you take it on board. (Male, local, 17-18)

I felt that it was very easy to pinpoint what you were commenting on. Once again, it felt personalised and you can't rush through video feedback. (Male, local, 19-24)

It is more personal - it is almost like face to face feedback. (Female, international, 19-24)

The video feedback was also considered to be the most appropriate type of feedback for the project work in the course. Students recognised that they were working in a predominantly visual medium, and appreciated this format in their feedback as well:

Video feedback is useful to point out areas of an assignment you're referring to. It's a good feedback method for film projects and digital media projects. (Male, local, 25-34)

File size and download times were seen as the only disadvantages of the video feedback:

Larger file size would be the only disadvantage I can think of. (Female, international, 19-24)

Probably slightly slower to download than the audio or written feedback. (Male, local, 19-24)

When discussing the written feedback, three specific advantages came to the fore. Firstly, many students indicated that it simply felt more like feedback:

It feels official, and it is good to have something concrete at the end of the semester. (Female, local, 25-34)

It's what I'm used to - it's more formal. (Male, local, 35+)

It is the most frequently used by most teachers so just seems normal and expected. (Female, local, 19-24)

Secondly, the assessment criteria rubric allowed for greater interpretation of their performance:

I was able to see where I could improve on future work, which areas I really did well in, and other areas that let me down. (Female, local, 25-34)

The scales were good for specific assessment criteria. (Female, local, 19-24)

Finally, students appreciated the ability to return to the feedback at later times without needing access to an electronic device:

A computer isn't always required to read written feedback and you can also take your pace reading rather than listening at a video or audio file's pace. (Male, local, 19-24)

The key disadvantage of the written feedback was the perceived lack of detail. Despite providing between 100 and 200 words of commentary, it simply didn't compare to the amount of detail in the audio and video models:

Disadvantage is feedback is quite succinct and short and does not offer as much insight into how to potentially improve. (Male, local, 19-24)

Disadvantages could include the ability to rush through marking and not going through things as thoroughly as you may have if you were providing audio/visual feedback. It may also be more difficult to describe parts of an assignment you are assessing. (Female, local, 19-24)

Key advantages and disadvantages of each feedback model are outlined in Table 4.
| Feedback model | Cost | Speed of production and distribution | Advantages | Disadvantages |
|---|---|---|---|---|
| Audio | Free via software packages such as Audacity. | Fast to record feedback. Distribution to students is slower than written and faster than video feedback. | Can be perceived as more personal than written feedback. Vocal tone and emphasis can improve understanding of feedback. Strong comprehension of feedback. More detailed than written feedback. | Comparatively large file size. Slower to distribute. Requires digital access to listen to feedback. No visual element involved. |
| Video | Free via software packages such as CamStudio; up to $179 for a licence of Camtasia Studio. | Slow to record and render feedback. Slow to distribute to students. | Feedback is engaging and dynamic. Can be perceived as more personal than written feedback. Vocal tone and emphasis can improve understanding of feedback. Greater insight into student performance. Strong comprehension of feedback. More detailed than written feedback. | Comparatively large file size. Greater staff workload to produce feedback files. Slower to distribute. Requires digital access to view feedback. |
| Written | Free via a variety of word processing software packages. | Fast to write feedback and distribute to students. | A rubric can allow for faster interpretation of specific assessment criteria. Small file size. Fast to produce and distribute. Can be perceived as more formal. Can be printed out and read at any time. | Feedback is limited to text - no visual or aural element involved. Feedback is static. Can be perceived as less substantial or detailed. |
All up it was good to have a variety of feedback methods. The audio and video feedback especially helped to view the marker as a person instead of an institution, and the written feedback provided something more official at the end. (Male, local, 19-24)

Considering this response, a range of feedback techniques, such as those presented in this paper, may allow for stronger insight into academic performance than a single technique, be it video, audio or text based. Students may also respond more positively to each feedback type because it differs from the last; the novelty of experiencing something new may be more engaging and instructive than receiving the same feedback format for each assignment. Beyond this, a mixed-media feedback model, incorporating visual, aural and written (specifically an assessment rubric) components could also enhance the student experience.
Of the three models presented in this paper, the video feedback was the most positively received, followed by the written feedback and lastly the audio feedback. While students appreciated the ability to hear their teacher's voice in the audio feedback, it was viewed as deficient when compared to the video model. In some cases, it also seemed that students didn't necessarily view the audio and video models as 'official' feedback, thus emphasising the need to clearly explain to students that feedback can come in a variety of different formats and mediums. It should be noted that at the time of completing the questionnaire, students had most recently received the written feedback model for their final assignment, which was worth 50% of their grade for the course, as opposed to 17.5% for the other two assignments, and that this may have influenced their responses.
At the conclusion of the course, students were asked to take part in a second online survey, a standard university-distributed course evaluation. This survey yielded a response rate of 56% and included topics relating to the quality of the delivered course. 98% of respondents indicated they had a clear idea of what was expected of them during the course, and 95% indicated they received feedback that was constructive and helpful. 100% of respondents indicated they were satisfied with the quality of the course, often referring to the variety of feedback as a key factor. These results indicate that the methods of delivery of feedback, along with its quality and frequency, can have a positive impact on students' experience within a course and their subsequent development as learners.
From a teacher's perspective, there were other significant advantages and disadvantages of the three feedback models, predominantly related to workload. Providing audio feedback initially proved to be the quickest and easiest model of the three, taking between 10 and 15 minutes per student. Recording the feedback was a straightforward process and became easier and faster with experience. Audacity exported the audio file in .mp3 format quickly and the resulting file was approximately one megabyte in size.
The video feedback files took much longer to produce and also took longer to distribute to students. The process of creating the video was more involved, including loading the student's work on screen, recording the feedback, editing the video if necessary, and then exporting the video in .mp4 format. On average, it took between 20 and 25 minutes to provide video feedback per student, close to twice as long as the audio feedback. The increased time frame was not only due to the more intricate recording process but also because the video feedback files ran much longer than the audio feedback files. It proved to be much faster to simply narrate a problem and resolution related to a submission rather than visually showing the student in a program such as Photoshop. The video files also took significantly longer to upload to the course's learning management system and distribute to students due to their file size.
Providing written feedback took approximately 20 minutes per student; however, it should be noted that the assessment weighting on the associated assignment was almost three times as much as the two previous assignments and, as such, required more in-depth analysis of the student's work. In terms of actually producing the feedback file and distributing it to students, this process was almost as fast as the audio model: it took slightly longer to provide the written commentary than it did to narrate the audio commentary, but there was no 'recording' step, and the smaller file size of the word document meant that it was faster to distribute the feedback files to students.
The varying time frames associated with delivering each feedback model have a clear impact on a teacher's ability to incorporate them into a curriculum. With a small class size it is much easier to experiment with different feedback techniques and spend more time with formats such as video. If the class size is large, however, there may not be the opportunity to utilise such techniques, given the desire, as well as the expectation, to provide timely feedback.
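The scaling effect described above can be made concrete with a rough calculation. The per-student times below are the midpoints of the ranges reported earlier in this paper; applying them to the study's cohort of 77 students is a back-of-the-envelope sketch, not a workload figure the study itself reports.

```python
# Rough total marking time per feedback model, using the midpoints
# of the per-student ranges reported in the study. Applying these
# to a whole cohort is an illustrative estimate only.

MINUTES_PER_STUDENT = {
    "audio": 12.5,    # 10-15 minutes per student
    "video": 22.5,    # 20-25 minutes per student
    "written": 20.0,  # approximately 20 minutes per student
}

def total_hours(model: str, class_size: int) -> float:
    """Estimated total marking hours for a cohort of class_size."""
    return round(MINUTES_PER_STUDENT[model] * class_size / 60, 1)

for model in MINUTES_PER_STUDENT:
    print(model, total_hours(model, 77), "hours for 77 students")
```

On these assumptions, video feedback for the full cohort costs roughly 13 more marking hours than audio feedback, which illustrates why the format may be impractical for large classes under tight feedback deadlines.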
This study has indicated that there is no 'one size fits all' feedback model when it comes to assessment in higher education. When adopting a feedback model it is important for educators to take into consideration several factors, including the field of study; assessment type (formative or summative); assessment format (visual, aural, written); the class size; the student type (age; local or international; visual / hearing impaired); and available staff and student resources (software; hardware; Internet access). For the students participating in this study, video feedback was viewed as the most beneficial because it provided more in-depth analysis of their academic performance in assignments, which were largely visual-based. The feedback model matched the format of the assessment. For assignments or other assessment tasks which are text-based, text or audio feedback models may be much more appropriate. Video feedback was also appropriate in this instance because students had access to the internet, through a range of devices, and could collect their feedback files quickly and easily. This is not the case with many student cohorts in developing countries around the world where internet access or even access to a computer may be limited. That said, when time and resources allow for it, video feedback has proven to be a highly valued format by students working in visual-based fields of study.
Abrahamson, E. (2010). Assessment through video-feedback on an undergraduate sports rehabilitation programme. Higher Education Academy [HEA] Case Study. http://www.heacademy.ac.uk/assets/hlst/documents/case_studies/147_abrahamson_video-feedback.pdf
Bevan, R., Badge, J., Cann, A., Willmott, C. & Scott, J. (2008). Seeing eye-to-eye? Staff and student views on feedback. Bioscience Education, 12(1). http://dx.doi.org/10.3108/beej.12.1
Biggs, J. B. (2003). Teaching for quality learning at university: What the student does. Maidenhead, UK: Society for Research into Higher Education & Open University Press.
Blayney, P. & Freeman, M. (2004). Automated formative feedback and summative assessment using individualised spreadsheet assignments. Australasian Journal of Educational Technology, 20(2), 209-231. http://www.ascilite.org.au/ajet/ajet20/blayney.html
Bloxham, S., & Boyd, P. (2007). Developing effective assessment in higher education: A practical guide. Maidenhead: Open University Press.
Bracher, M., Collier, R., Ottewill, R. & Shephard, K. (2005). Accessing and engaging with video streams for educational purposes: experiences, issues and concerns. ALT-J, Research in Learning Technology, 13(2), 139-150. http://www.researchinlearningtechnology.net/index.php/rlt/article/download/10990/12694
Cann, A. (2007). Podcasting is dead. Long live video! Bioscience Education, 10(1). http://dx.doi.org/10.3108/beej.10.c1
Crook, A., Mauchline, A., Maw, S., Lawson, C., Drinkwater, R., Lundqvist, K., Orsmond, P., Gomez S. & Park, J. (2012). The use of video technology for providing feedback to students: Can it enhance the feedback experience for staff and students? Computers and Education, 58(1), 386-396. http://dx.doi.org/10.1016/j.compedu.2011.08.025
Duncan, N. (2007). 'Feed-forward': Improving students' use of tutors' comments. Assessment & Evaluation in Higher Education, 32(3), 271-283. http://dx.doi.org/10.1080/02602930600896498
Emery, R. & Atkinson, A. (2009). Group assessment feedback: 'The good, the bad and the ugly'. In Proceedings, A Word in Your Ear 2009, Sheffield, UK. http://research.shu.ac.uk/lti/awordinyourear2009/docs/emery-atkinson-Solent_Audio_Feedback_paper.pdf
Francis, R., Shannon, S. & Murison, S. (2008). Delivery of online digital feedback and assessment for design and engineering (architectural) students. Paper presented at the 46th Annual Conference of the Architectural Science Association, Griffith University. http://anzasca.net/wp-content/uploads/2014/08/p201.pdf
Gibbs, G. & Simpson, C. (2004). Conditions under which assessment supports learning. Learning and Teaching in Higher Education, 1(1), 3-31. http://www2.derby.ac.uk/lta-new/images/Documents/Assessment_for_learning/lathe_article_2004.pdf
Glover, C. & Brown, E. (2006). Written feedback for students: Too much, too detailed or too incomprehensible to be effective? Bioscience Education, 7(3). http://dx.doi.org/10.3108/beej.2006.07000004
Gould, J. & Day, P. (2013). Hearing you loud and clear: Student perspectives of audio feedback in higher education. Assessment & Evaluation in Higher Education, 38(5), 554-566. http://dx.doi.org/10.1080/02602938.2012.660131
Harris, L. R., Brown, G. T. L. & Harnett, J. A. (2014). Understanding classroom feedback practices: A study of New Zealand student experiences, perceptions, and emotional responses. Educational Assessment, Evaluation and Accountability, 26(2), 107-133. http://dx.doi.org/10.1007/s11092-013-9187-5
Henderson, M. & Phillips, M. (2015). Video-based feedback on student assessment: Scarily personal. Australasian Journal of Educational Technology, 31(1), 51-66. http://ascilite.org.au/ajet/submission/index.php/AJET/article/view/1878
Higgins, R., Hartley, P. & Skelton, A. (2002). The conscientious consumer: Reconsidering the role of assessment feedback in student learning. Studies in Higher Education, 27(1), 53-64. http://dx.doi.org/10.1080/03075070120099368
Ice, P., Curtis, R., Phillips, P. & Wells, J. (2007). Using asynchronous audio feedback to enhance teaching presence and students' sense of community. Journal of Asynchronous Learning Networks, 11(2), 3-25. http://files.eric.ed.gov/fulltext/EJ842694.pdf
Irons, A. (2008). Enhancing learning through formative assessment and feedback. Key guides for effective teaching in higher education. Abingdon, UK: Routledge.
King, D., McGugan, S. & Bunyan, N. (2008). Does it make a difference? Replacing text with audio feedback. Practice and Evidence of Scholarship of Teaching and Learning in Higher Education, 3(2), 145-163. http://www.pestlhe.org.uk/index.php/pestlhe/article/view/52/174
Lizzio, A. & Wilson, K. (2008). Feedback on assessment: Students' perceptions of quality and effectiveness. Assessment & Evaluation in Higher Education, 33(3), 263-275. http://dx.doi.org/10.1080/02602930701292548
Lunt, T. & Curran, J. (2010). 'Are you listening please?' The advantages of electronic audio feedback compared to written feedback. Assessment & Evaluation in Higher Education, 35(7), 759-769. http://dx.doi.org/10.1080/02602930902977772
Middleton, A., Nortcliffe, A. & Owens, R. (2009). iGather: Learners as responsible audio collectors of tutor, peer and self reflection. In Proceedings, A Word in Your Ear 2009, Sheffield, UK. http://research.shu.ac.uk/lti/awordinyourear2009/docs/Middleton-Nortcliffe-Owens-iGather_final.pdf
Mory, E. H. (2003). Feedback research revisited. In D. H. Jonassen (Ed.), Handbook of research for educational communications and technology (pp. 745-783). New York: Macmillan.
Mutch, A. (2003). Exploring the practice of feedback to students. Active Learning in Higher Education, 4(1), 24-38. http://dx.doi.org/10.1177/1469787403004001003
Narciss, S. (2008). Feedback strategies for interactive learning tasks. In J. M. Spector, M. D. Merrill, J. Van Merrienboer & M. P. Driscoll (Eds.), Handbook of research on educational communications and technology (3rd ed., pp. 125-143). New York: Erlbaum.
Nicol, D. (2010). From monologue to dialogue: Improving written feedback processes in mass higher education. Assessment & Evaluation in Higher Education, 35(5), 501-517. http://dx.doi.org/10.1080/02602931003786559
Nicol, D. & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199-218. http://dx.doi.org/10.1080/03075070600572090
Nortcliffe, A. & Middleton, A. (2008). A three year case study of using audio to blend the engineer's learning environment. Engineering Education, 3(2), 45-57. http://22.214.171.124/archive/engineering/journal/index.php/ee/article/view/110/146.html
Orsmond, P. & Merry, S. (2008). Students' attitudes to and usage of academic feedback provided by audio files. Bioscience Education, 11(3). http://dx.doi.org/10.3108/beej.11.3
Orsmond, P. & Merry, S. (2011). Feedback alignment: Effective and ineffective links between tutors' and students' understanding of coursework feedback. Assessment & Evaluation in Higher Education, 36(2), 125-136. http://dx.doi.org/10.1080/02602930903201651
Orsmond, P., Merry, S. & Reiling, K. (2005). Biology students' utilisation of tutors' formative feedback: A qualitative interview study. Assessment & Evaluation in Higher Education, 30(4), 369-386. http://dx.doi.org/10.1080/02602930500099177
Poulos, A. & Mahony, M. J. (2008). Effectiveness of feedback: The students' perspective. Assessment & Evaluation in Higher Education, 33(2), 143-154. http://dx.doi.org/10.1080/02602930601127869
Price, M. (2007). Should we be giving less written feedback? Centre for Bioscience Bulletin, 22(9). https://www.brookes.ac.uk/aske/documents/PriceM_2007_PerspectivePaper.pdf
Ribchester, C., France, D. & Wheeler, A. (2008). Podcasting: A tool for enhancing assessment feedback? In E. O'Doherty (Ed.), The Fourth Education in a Changing Environment Conference Book 2007, 119-136. Santa Rosa: Informing Science Press. http://www.researchgate.net/profile/Derek_France/publication/30067710_Podcasting
Rotheram, B. (2009). Sounds good: Quicker, better assessment using audio feedback. JISC Final Report. http://www.jisc.ac.uk/publications/reports/2009/soundsgoodfinalreport.aspx
Scriven, M. (1967). The methodology of evaluation. In R. Tyler, R. Gagne & M. Scriven (Eds.), Perspectives on curriculum evaluation. Chicago: Rand McNally and Co.
Taras, M. (2005). Assessment - summative and formative - some theoretical reflections. British Journal of Educational Studies, 53(4), 466-478. http://dx.doi.org/10.1111/j.1467-8527.2005.00307.x
Walker, M. (2009). An investigation into written comments on assignments: Do students find them usable? Assessment & Evaluation in Higher Education, 34(1), 67-78. http://dx.doi.org/10.1080/02602930801895752
Weaver, M. R. (2006). Do students value feedback? Student perceptions of tutors' written responses. Assessment & Evaluation in Higher Education, 31(3), 379-394. http://dx.doi.org/10.1080/02602930500353061
West, J. & Turner, W. (2015). Enhancing the assessment experience: Improving student perceptions, engagement and understanding using online video feedback. Innovations in Education and Teaching International, 38(2), 240-252. http://www.tandfonline.com/doi/abs/10.1080/14703297.2014.1003954
Author: Dr Josh McCarthy is a lecturer in digital media with specific interests in animation, graphic design, photography and digital art. He teaches within the Bachelor of Media Arts program in the School of Communication, International Studies and Languages. Josh is also the director of Electricityscape, an Adelaide-based company delivering digital solutions to local and international clients.
Email: Josh.McCarthy@unisa.edu.au Web: http://people.unisa.edu.au/Josh.McCarthy
Please cite as: McCarthy, J. (2015). Evaluating written, audio and video feedback in higher education summative assessment tasks. Issues in Educational Research, 25(2), 153-169. http://www.iier.org.au/iier25/mccarthy.html