SlideShare ist ein Scribd-Unternehmen logo
1 von 17
Downloaden Sie, um offline zu lesen
Full Terms & Conditions of access and use can be found at
http://www.tandfonline.com/action/journalInformation?journalCode=cshe20
Download by: [Allama Iqbal Open University] Date: 22 August 2017, At: 06:31
Studies in Higher Education
ISSN: 0307-5079 (Print) 1470-174X (Online) Journal homepage: http://www.tandfonline.com/loi/cshe20
Review of assessment feedback
Jinrui Li & Rosemary De Luca
To cite this article: Jinrui Li & Rosemary De Luca (2014) Review of assessment feedback, Studies
in Higher Education, 39:2, 378-393, DOI: 10.1080/03075079.2012.709494
To link to this article: http://dx.doi.org/10.1080/03075079.2012.709494
Published online: 11 Sep 2012.
Submit your article to this journal
Article views: 1885
View related articles
View Crossmark data
Citing articles: 16 View citing articles
Review of assessment feedback
Jinrui Lia
* and Rosemary De Lucab
a
General & Applied Linguistics, University of Waikato, 16 Knighton Road, Hamilton, 3216
New Zealand; b
Faculty of Education, University of Waikato, Hamilton, 3240 New Zealand
This article reviews 37 empirical studies, selected from 363 articles and 20 journals,
on assessment feedback published between 2000 and 2011. The reviewed articles,
many of which came out of studies in the UK and Australia, reflect the most current
issues and developments in the area of assessing disciplinary writing. The article
aims to outline current studies on assessment feedback. These studies have
explored undergraduate students’ wide-ranging perspectives on the effectiveness
and utility of assessment feedback, the divergent styles of assessment feedback
of lecturers and tutors in various disciplines, teachers’ divergent interpretations
of assessment criteria and confusion about the dual roles of assessment feedback,
and the divergences between teachers’ beliefs and practices. The review includes
analysis and comparison of the research methods and findings of the studies. It
identifies a research space for assessment feedback and outlines implications for
further studies.
Keywords: assessment feedback; disciplinary writing; belief; practice
1. Introduction
In this review article the term ‘assessment feedback’ refers to comments and grades that
lecturers and tutors provide for the written work submitted by undergraduate students as
part of course requirements in various disciplines within tertiary education. Grades and
written comments are intended to have both formative and summative functions:
grades, although summative, also have a formative role in that they result from evalu-
ation of students’ work against standards and they affect students’ attention to further
improvement in feedback (Taras 2002); feedback has the role of justifying grades and
maintaining standards, as well as its formative role (Joughin 2008). Effective formative
feedback links closely to improved student learning. The effectiveness of feedback has
been limited because of both contextual constraints and theoretical gaps. Contextually,
there are constraints such as students’ various backgrounds in writing (Sakyi 2000),
various discourses in different disciplines (Russell and Yañez 2003), insufficient
assessment knowledge of staff (DeLuca and Klinger 2010; Weaver 2006), modular
form of programmes, institutional requirements (Bailey and Garner 2010) and policies
that emphasise measurement of students’ achievement instead of learning improvement
(Price et al. 2011). Within the frequently programmed modular patterns of courses in
higher education, feedback on assessment is often not received by the student until
after the completion of a module and may not be applicable to future modules.
© 2012 Society for Research into Higher Education
*Corresponding author. Email: jl287@students.waikato.ac.nz
Studies in Higher Education, 2014
Vol. 39, No. 2, 378–393, http://dx.doi.org/10.1080/03075079.2012.709494
Downloaded
by
[Allama
Iqbal
Open
University]
at
06:31
22
August
2017
Theoretically, there has been argument that the conventional teacher-centred assess-
ment model does not fit with the current student-centred pedagogy (Guba and
Lincoln, 1989) whereby students should be actively engaged in assessment activity
(Sadler 1989). New models of assessment have been proposed with emphasis on the
integration of feedback into ongoing student–teacher dialogues (Rust, O’ Donovan,
and Price 2005) and the effect of feedback on cultivating students’ self-regulation of
learning (Hattie and Timperley 2007; Nicol and Macfarlane-Dick 2006). However,
these theories have been discussed mainly among linguists focusing on the separate
roles of formative and summative assessment. There have been studies by educators
on the over-emphasis of the summative role of assessment in tertiary education
(Bailey and Garner 2010; Taras 2002). However, there has been little guidance
for disciplinary teachers on how to achieve the dual goals of assessment feedback.
Assessment feedback on written work of modular patterned courses within the dis-
ciplines has not been established as a branch of study for its own sake.
Consequently, we have identified the need to review current theories and empirical
studies that are relevant to assessment feedback and we present how these studies are
shaping into an area of study. By so doing, we aim to frame the study on assessment
feedback, demonstrate the need for theory development, and point out further directions
of study in this area for those who are concerned with the assessment activity at tertiary
level.
This review focuses on empirical studies in assessment feedback given on under-
graduates’ written work in various disciplines in higher education. We firstly used
the advanced search of Google Scholar, with the word ‘writing’, the exact phrase
‘assessment feedback’ and at least one of the words ‘university’ in articles published
in ‘education’, excluding the word ‘postgraduate’ from any time to 19 August 2011.
We then read the abstracts of the 363 articles from 20 journals resulting from the
search and found 56 articles that fitted the focus of the review. We also used the
website of the university library and searched the journals with the words ‘assessment
and evaluation’ or ‘higher education’ to locate the key journals on the focus of the
review, then searched articles in these journals with the key words ‘assessment feed-
back’. Content analysis was conducted on the articles resulting from the search with
the following criteria: empirical studies, focusing on assessment feedback at under-
graduate level, clear description of methodology, published between 2000 and 2011.
Finally, 37 empirical studies on assessment feedback were selected from 16 journals,
20 of which were from the journal Assessment & Evaluation in Higher Education.
We selected journal articles because of their currency and saliency for our topic.
The search result also included two recent reviews on feedback. One is an extensive
review by Shute (2008) on formative feedback at task level, excluding summative feed-
back. The review by Shute (2008) does not overlap with this review because it has a
different focus. Another review was conducted by Parboteeah and Anwar (2009) in
order to find how students’ motivation could be influenced by the effect of feedback.
The review included 18 publications between 1993 and 2006 on assignment feedback.
These publications were grouped into three themes: ‘(a) types, styles and pattern of
feedback, (b) misperceptions, (c) future learning’ (754). The result of the review was
that types of feedback did not affect learning; what mattered were the quantity and
quality of feedback, and the content of feedback that addressed both specific and
general issues. Among the 18 publications reviewed by Parboteeah and Anwar
(2009), only two (Carless 2006; Orrell 2006) are about assessment feedback on under-
graduates’ written work, and these are included in this review for a different focus.
Studies in Higher Education 379
Downloaded
by
[Allama
Iqbal
Open
University]
at
06:31
22
August
2017
The two reviews referred to in the last paragraph focused on formative feedback and
the influence of feedback on students’ motivation, which are different from the focus of
this review. The search result indicates that there has been no review that focuses on
assessment feedback on written work by undergraduate students in various disciplines.
Moreover, this current review represents a wide community of educators on assessment
feedback. Although it involves perspectives of teachers who focused on academic
writing itself, the majority of the empirical studies focused on the practice of assessing
writing in disciplinary areas.
Thefollowingthemeswerefoundbyreviewingthestudies.Firstly,studiesonstudents’
perspectives regarding the utility and effectiveness of feedback demonstrated various
opinions on effective feedback. Secondly, studies on teachers’ practices of assessment
feedback pointed out the divergent styles of assessment feedback and a common lack of
information on learning improvement. Thirdly, studies on teachers’ beliefs revealed tea-
chers’ divergent interpretations of assessment criteria and confusion about the dual roles
ofassessmentfeedback.Fourthly,comparisonbetweenteachers’beliefsandtheirpractices
highlighted the divergence between the intended role of assessment feedback for learning
improvement and the actual assessment feedback to justify grading. Moreover, compari-
sons between students’ perspectives and those of teachers demonstrated that divergent
opinions mainly focused on the roles and effectiveness of assessment feedback. Finally,
innovative approaches to effective assessment feedback were explored, including enga-
ging first-year undergraduates with written feedback by individual tutorials. Overall, the
issues of assessment feedback pointed to its dual function. The themes of the studies
will be presented in this review followed with conclusions, a suggested framework for
the study of assessment feedback, and implications for further studies.
2. Studies on students’ perspectives on assessment feedback
A large number of studies have explored students’ perspectives on assessment feed-
back. The main focus of these studies is students’ utilisation of feedback and their
opinions on what feedback is effective.
2.1. Students’ utilisation of feedback
Three studies in the UK context have explored issues relating to students’ utilisation of
assessment feedback in various disciplines, mainly by interviews (Higgins, Hartley, and
Skelton 2002; Orsmond, Merry, and Reiling 2005; Walker 2009). Higgins, Hartley, and
Skelton (2002) explored the impact of feedback on students’ learning in a three-year
study. They reported the findings of an initial study on students’ response to feedback
based on interviews of 19 first-year students in Business and Humanities in two insti-
tutions in the UK and 94 questionnaire responses. They found that the majority of stu-
dents (97%) read comments, most of them believed they kept the feedback in mind for
further use, but how they would use it was unclear. Barriers to effective feedback
were found mainly in three aspects: firstly, the modular programmes constrain the
tutors to provide timely feedback that could be applied by students to the improvement
of the next assignments; secondly, feedback often focused on grammatical errors or
negative aspects of students’ work without suggestions on how to make improvement;
finally, the language of feedback was general or vague. The issue of how students use
feedback was explored by Orsmond, Merry, and Reiling (2005). In this small study,
data were collected by interviews with 16 third-year biology students in a UK university.
380 J. Li and R. De Luca
Downloaded
by
[Allama
Iqbal
Open
University]
at
06:31
22
August
2017
These students had all received summative written feedback on their coursework; 13 of
them had also received formative feedback. Three students did not read feedback, one
read it if the marks conflicted with expectation, six thought marks were more important,
while the others gave equal importance to marks and feedback. This finding seems to be
in alignment with that of some previous studies that marks had an adverse effect on feed-
back utility because students focus on marks rather than written comments (e.g. Carless
2006; Mutch 2003). In addition, Orsmond, Merry, and Reiling (2005) found four areas
where the students used feedback: enhancing motivation, enhancing learning, encoura-
ging reflection and clarifying information and expectations about assignments.
Some students suggested ‘feedforward’ that would ‘give focus’ to the feedback pro-
vided at the end of the course (Orsmond, Merry, and Reiling 2005, 376). Some students
identified a place for more generalised feedback with regard to their progression
through the course. Other students applied feedback given in one course to another
course, thereby enriching their learning environment. Six students preferred feedback
that involved them in discussion with their tutors. Walker (2009) explored the types
of feedback that students would use. She firstly analysed 3000 written feedback
items on 106 assignments written by technology students in the Open University in
the UK, applying the six categories introduced by Brown and Glover (2006),
namely, content, skills development, motivating, demotivating, a mention of future
study, and a reference to a resource the student could use (69). Then forty-three of
the students were interviewed on the usability of feedback. Data fell mainly into the
first three categories. Interview data demonstrated that more than two-thirds of students
found the feedback was usable, especially feedback on skill development. The least
usable comments were those specific to an assignment that were not applicable to
other assignments. When asked what sort of comment they preferred, students said
‘they wished to be told what they had got wrong, and why, and how to do better’
(75). They also wanted feedforward for assignments in the future. The study raised
the question of how to improve the practice to make feedback more effective.
The findings of the above studies about the utilisation of feedback were not in agree-
ment with those of Crisp’s (2007) study in a university in Australia. Crisp (2007)
studied the effect of students’ response to feedback by comparing the assessment
result of one essay written by 51 social science students six weeks after they received
the assessment feedback on a similar essay assessed by the same marker. She found
there was little evidence of improvement either in scores or regarding the reduction
of problems that had occurred in the first essay. She concluded that the provision of
feedback alone could hardly result in improvement in a subsequent submission on
the same topic. However, it was not known how students perceived the feedback
from the first essay or if they had received the feedback on the first essay before
they submitted their second essay.
The differing results of studies on students’ utilisation of feedback may be due to
different contexts and different methods of data collection. Overall, the studies demon-
strated that assessment feedback was not fully used by students, which in turn raises the
question of what kind of feedback can effectively enhance students’ utilisation of feed-
back. This question will be discussed in the next section.
2.2 What kinds of feedback are effective?
Three types of effective feedback were explored by studies in UK universities. An early
study by Orsmond, Merry, and Reiling (2002) explored the usefulness of exemplars in
Studies in Higher Education 381
Downloaded
by
[Allama
Iqbal
Open
University]
at
06:31
22
August
2017
student-centred assessment activity in a UK university. The participants were
twenty-two first-year undergraduates majoring in environment and biology who were
enrolled in a module. The task was a poster presentation. Students were asked to
develop criteria for assessment, and then discussed the criteria with tutors. The tutors
provided posters of previous students as exemplars that students used to modify their
own work. Peer and self assessment were also used before the final submissions.
Students were also asked to complete a questionnaire on the effectiveness of this assess-
ment process. The study concluded that the use of exemplars was effective in helping
students complete the task.
Model answers, as a type of feedback similar to exemplars, were also found to be
effective in getting higher scores according to Huxham (2007). Huxham (2007)
surveyed 183 biology students in a UK university. The survey data demonstrated
that students preferred personal comments to model answers as feedback, although
they wanted both types of feedback; however, the examination results of 155 of the
students indicated that model answers generated higher marks than personal comments.
The study concluded that feedback should include both model answers and personal
comments. Another study by Bloxham and Campbell (2010) explored the effectiveness
of using a cover sheet on which students listed the aspects that they wished to have
commented on. Data were collected by interviews with nine first-year students and a
focus group of teachers in a UK university. The study concluded that the interactive
cover sheets could promote interaction between students and teachers and activate stu-
dents to take a role and some responsibility in assessment feedback.
Similar studies on types of effective feedback were also carried out in an Australian
context. These studies focused on the effectiveness of exemplars, and seemed to be in
agreement with Orsmond, Merry, and Reiling’s (2002) study. Hendry, Bromberger, and
Armstrong (2011) compared the effectiveness of marking sheets, exemplars and feed-
back. Data were collected from two focus groups of altogether 10 students in a law
course in a university in Australia; themes that emerged in the focus groups were
then further explored by a questionnaire returned by 92 students in the same course.
The study found that the students preferred exemplars to marking sheets in that exem-
plars offered specific demonstration of the required standards that students could use to
compare with their own work; students also believed that a combination of focused per-
sonal comments and explanations and references to standards was effective. The find-
ings of Hendry, Bromberger, and Armstrong’s (2011) study were similar to those of a
small-scale survey study on exemplars conducted by Handley and Williams (2011)
among students of one course run by a team of tutors of multi-disciplines in an Austra-
lian university. The findings were based on 63 responses to a survey questionnaire out
of 400 students. Students found that the online exemplars annotated with comments as
feedforward were effective mainly because they helped students have specific infor-
mation on structure and layout before they wrote the assignment; in addition, they
also tended to initiate dialogue between tutors and students. These studies on types
of effective feedback seemed to contradict the review result of Parboteeah and
Anwar (2009), who concluded that ‘the type, style and pattern of feedback makes no
difference to students’ learning’ (757). The reason, perhaps, is that Parboteeah and
Anwar’s (2009) conclusion was based on the review of three studies (Ashwell 2000;
Chandler 2003; Fazio 2001) on feedback given to second language writing, which
focus on feedback on grammatical errors.
However, the common information expressed by the studies reviewed above on
types of feedback is that students preferred feedforward that could help them clarify
382 J. Li and R. De Luca
Downloaded
by
[Allama
Iqbal
Open
University]
at
06:31
22
August
2017
the requirements; they also expected comments to be personal, explanatory, and cri-
teria-related. These features of feedback overlapped with a group of studies in the Aus-
tralian context that explored the features of effective feedback by means of focus groups
(Poulos and Mahony 2008), document analysis (Lizzio and Wilson 2008), and surveys
(Ferguson 2011). Poulos and Mahony (2008) used four focus groups to investigate
undergraduate students’ perspectives on effective feedback in a faculty of health
sciences in an Australian university. They found that students preferred timely, consist-
ent, transparent, and criteria-referenced assessment feedback. They concluded that the
effect of feedback related to students’ opinions of those who assessed their work. In
another Australian university, some other aspects of effective feedback were found
by Lizzio and Wilson (2008), who explored students’ perspectives on effective feed-
back in two studies. In the first study, they analysed the content of 238 written com-
ments from 57 psychology, law and arts students who were asked to describe
effective and ineffective feedback they had received on various types of written
work. Effective feedback was found to have the following aspects: to support learning
by relating to goals and strategies, to demonstrate the assessor’s engagement with the
assessed written work so as to provide fair assessment, to acknowledge achievements
and efforts, and to be considerate when commenting on negative aspects. Basing their
research on the results of the first study, they surveyed 277 students in wider disci-
plines. The common aspects of effective feedback found by the two studies are that
feedback should be able to promote learning development, and be encouraging and
fair. The findings of Poulos and Mahony (2008) and Lizzio and Wilson (2008) were
partly confirmed by Ferguson (2011), who surveyed 101 undergraduate and 465 gradu-
ate students who majored in education in an Australian university. The survey demon-
strated that students preferred timely, personalised, criteria-referenced, positive, and
clear feedback that could not only acknowledge their achievement but also lead to
improvement.
In addition, some UK studies reported features of ineffective feedback in relation to
the causes. Weaver (2006) studied students’ opinions of feedback data that were col-
lected by questionnaires returned by 44 students in the faculties of Business and Art
& Design out of a total number of 510 students. Twenty-two students participated in
group discussion, and provided samples of feedback they had received. The study
found that students had positive opinions on the feedback they received but also
pointed out that feedback could have been more usable if it had been too vague or
general, negative, unrelated to criteria or marks, and without further explanation and
advice on how to use the feedback. In addition, divergent opinions were found
between students of the two majors, mainly on the usefulness of feedback given at
the end of each module. This difference, according to Weaver (2006), resulted from
how modules were designed and how feedback was provided according to the
modules. The study concluded that the problems of feedback practice were due to
tutors’ insufficient knowledge of effective practice, time limits because of workload,
and perhaps personal beliefs on the purpose of feedback. Weaver (2006) acknowledged
that the study was limited due to the low response to the questionnaire.
With similar data collection methods but a larger number of participants, Hounsell
et al. (2008) studied the quality of feedback and assessment guidance to students. They
collected data from first and final year students at three biosciences departments in a UK
university. Data comprised 782 questionnaires and 23 group interviews with 69 stu-
dents. The survey data demonstrated that students overall had positive opinions on
staff regarding their supportive attitude but less positive opinions on the feedback
Studies in Higher Education 383
Downloaded
by
[Allama
Iqbal
Open
University]
at
06:31
22
August
2017
and guidance they received. The interview data revealed that the process of feedback
followed a six-step feedback loop: students’ prior experiences, initial guidance,
ongoing guidance, feedback on achievement, supplementary support, and feedforward.
However, the following problems were identified in this feedback loop: guidance was
provided before assessment yet this was insufficient or misunderstood; although
ongoing guidance was available, students were reluctant to approach staff to seek gui-
dance; feedback was not given or was insufficient; the quality of feedback was incon-
sistent because assignments of large classes were often marked by a group of markers;
feedback related to a specific piece of work rather than being embedded in the daily
process of learning; and, finally, as the survey data showed, students felt feedback
sometimes was ineffective due to delay for various reasons.
Moreover, Pokorny and Pickford (2010) pointed out the social-contextual influ-
ences on the effectiveness of feedback. They explored students’ perspectives on feed-
back using four focus groups of altogether 18 business majors who were in the first and
final years of study in a UK university. They found that students did not believe that
written feedback itself was often effective; the effectiveness of feedback was related
to the social relationship between students and teachers. Moreover, they found that
the final-year students had a broader and more complex view of feedback than the
first-year students. The final-year students’ perspectives were close to what was
suggested in literature, that feedback should clarify standards and goals, current per-
formance, and strategies to achieve the goals. Pokorny and Pickford (2010) suggested
that feedback should be contextually integrated in the whole process of learning.
Higgins, Hartley, and Skelton (2002), basing their findings on interviews with 19
first-year students in Business and Humanities in two institutions in the UK and 94
questionnaires, concluded that feedback should be timely, should include explanation
and suggestions for further improvement, should focus on higher-order concerns such
as level of argument and critical analysis, should use language the students could under-
stand rather than academic discourse, and should integrate peer-assessment and further
tutor–student discussions.
Overall, the selected studies on effectiveness of feedback were mainly carried out
among students in various disciplines in UK and Australian contexts. The main
approaches of data collection were survey, interview, and/or focus group. Students’
assessment preferences included timely, personal, criteria-referenced feedback that
could be used for further improvement.
3. Studies on teachers’ beliefs about assessment feedback and their practices
The foci of this group of studies fell into three categories: teachers’ practices in giving
assessment feedback, teachers’ beliefs about assessment feedback, and convergences
and divergences between beliefs and practices.
3.1 Practices of assessment feedback
A common approach to exploring the practice of feedback was document analysis of
recorded written comments on students’ work. One early and widely quoted study
was conducted by Ivanic, Clark, and Rimmershaw (2000) in a university in the UK
and a university in South Africa. Ivanic et al. compared five tutors of social sciences
with four EAP (English for Academic Purpose) tutors regarding their practice of
384 J. Li and R. De Luca
Downloaded
by
[Allama
Iqbal
Open
University]
at
06:31
22
August
2017
providing feedback. Nine pieces of written comment were collected and analysed. Six
categories were found:
. explain the grade in terms of strengths and weaknesses;
. correct or edit the student’s work;
. evaluate the match between the student’s essay and an ‘ideal’ answer;
. engage in dialogue with the student;
. give advice which will be useful in writing the next essay;
. give advice on rewriting the essay (55).
The study found that the written comments varied widely in quantity and wording. More-
over,subjecttutorstendedtousecommenttojustifygradeswhereasEAPtutorsaimedmore
to help students rewrite the essays. Suggestions were given on staff development regarding
how to write feedback, such as giving feedback on both drafts and final submission in
relationtoquality,quantityandappropriatetiming,explainingcriteriatostudents,engaging
students in feedback dialogue, considering the styles and messages conveyed by the feed-
back, including both positive and negative feedback, using a personal tone, and combining
written feedback with oral discussion. Suggestions were also given to EAP teachers
especially with regard to facilitating students’ writing in their own disciplines.
Later studies on assessment feedback had a relatively larger amount of data but with
a different focus of study. For example, Mutch (2003) analysed the content of 122 feed-
back sheets on essays written by undergraduates in 39 modules of a business course in a
UK university. The categories of analysis were: Factual, Comment, Developmental,
Implied developmental, Conversation, and Neutral. The study found that feedback
was written in categorical terms and focused on knowledge and content. Feedback
on how to improve was given to the less satisfactory work rather than the excellent
essays. The study points out the need to reflect on programme design and support to
teachers in order to have effective feedback practices. One limitation of the study
was that the feedback sheet may not have been enough to present a whole picture of
assessment feedback as some comments may have been written on the assessed paper.
With the same approach, Stern and Solomon (2006) studied whether feedback prac-
tice was in alignment with principles suggested in literature. They analysed written
comments on 598 graded papers from 30 departments of a university in the USA.
The content of the comments was coded into 23 categories. They found that most com-
ments were given on micro-level items, especially lexical errors, and there was a lack of
detailed rubrics and personal connections in the comments. Stern and Solomon (2006)
compared the comments on papers for English courses with those for other subjects;
unlike what was found by Ivanic et al. (2000), few differences were found except
that more micro-level comments were given on papers for English. Stern and
Solomon (2006) also compared the comments with the principles of effective written
comments they summarised from literature, namely:
to provide positive comments in addition to corrections; to provide feedback only on a
few select areas that are deemed important for that particular writing assignment –
those tied to the student learning goals for the paper assignment; and to provide comments
that identify patterns of weaknesses, errors, and strengths. (25–27)
The result of the comparison was that the feedback practices were not in alignment with
the principles. Stern and Solomon (2006) suggested that further research should focus
Studies in Higher Education 385
Downloaded
by
[Allama
Iqbal
Open
University]
at
06:31
22
August
2017
on what kind of training teachers received on feedback practice. Compared with Ivanic
et al.’s (2000) study, this study was larger in size and provided more detailed analysis
on the content of feedback and focused on the common features of assessment.
Although the findings of the study did indicate some divergent practices, this was
not the focus of the study.
Divergent practices within one discipline were found in the study by Read, Francis,
and Robson (2005) on assessment and feedback in relation to gender differences and
bias. The participants were 50 historians (half male, half female) from institutions
across England and Wales. These historians were asked to assess two sample essays
written by two students of different genders. After the assessment, the participants
were interviewed on their rationale for assessment. The major finding of the study
was that there were large variations in comments and grades given by the participants.
Read, Francis, and Robson (2005) challenged objectivity as an assessment standard. In
addition, the study did not find clear evidence of gender effect.
A recent study conducted by Carless et al. (2011) explored effective feedback prac-
tice using interviews with 10 teachers rated as excellent across 10 departments of a
Hong Kong university. They found four common features of effective feedback prac-
tice among these teachers: dialogic; incorporated with peer feedback or using portfolio
assessment; facilitated by technologies such as online dialogues and blogs; and com-
bined with activities that promote self-regulation. Moreover, they pointed out that
the design of assessment tasks was important in order to make feedback sustainable.
These studies revealed the variation of teachers’ practices and the need for pro-
fessional training. However, the studies were mainly based on document analysis
and interview; no data were collected by observations of the practices.
3.2 Beliefs about assessment feedback
A limited number of studies have explored teachers’ beliefs about assessment feedback
(Bailey and Garner 2010; Grainger, Purnell, and Zipf 2008; Harman and McDowell
2011). The common aspects of these studies were that data were collected mainly by
interviews with experienced teachers and the foci are various, such as the purpose
and content of assessment feedback.
Grainger, Purnell, and Zipf (2008) studied markers’ beliefs about assessment stan-
dards in an Australian university by analysing the conversations between five experi-
enced lecturers at two meetings, during which the lecturers were asked to talk about
their responses to some copies of students’ work. The findings of the study firstly con-
firmed Sadler’s (1989) findings that markers focused on both technical aspects
(grammar, referencing, paraphrasing, quotes, word limit, academic genre, structure,
and English expression) and the content of the writing (level of detail, depth of analysis,
depth of understanding, justification and evaluation). Moreover, the study found that
although markers agreed on some common criteria of assessment, they interpreted
the quality of standards differently. The study also mentioned that markers tended to
make a holistic judgement first and then match aspects of the assessed work with
specific items of criteria.
Bailey and Garner (2010) explored the purposes of assessment feedback from tea-
chers’ perspectives by interviewing 48 lecturers across disciplines in a UK university
on their beliefs and experiences of feedback in assessment. The lecturers believed feed-
back was used to inform learning, to justify the grade, and to meet the institutional
requirements. However, they were uncertain whether students could respond to
386 J. Li and R. De Luca
Downloaded
by
[Allama
Iqbal
Open
University]
at
06:31
22
August
2017
feedback. They believed priority was often given to institutional requirements at the
cost of its educational function.
Harman and McDowell (2011) studied teachers’ believed roles in assessment
activity in a UK university by 11 interviews and six follow-up interviews with
design lecturers. Discourse analysis of the data revealed that the lecturers believed
that their roles in assessment were providing expert guidance, encouragement and
support, and objective judgement, as well as maintaining professional standards.
However, the teachers felt pressure in that they were supposed to be objective judges
in assessment activity, and that this identity was in contradiction with their positioning
in other teaching activities such as the position of facilitator. Harman and McDowell
(2011) pointed out that teachers needed training and support to help them confront dif-
ficulties in assessment activity.
3.3 Comparison between teachers’ beliefs and practices
A very limited number of studies have compared teachers’ beliefs with their practices in
providing assessment feedback. The common findings of the studies were that teachers’
practices were not in agreement with their beliefs.
One study was carried out by Orrell (2006) in a university in Australia. Data on the
beliefs and practices of 16 experienced teachers of education courses were collected by
two methods: think-aloud protocol and teachers’ reflection. The tape-recorded data were
transcribed and coded using a grounded theory approach with content analysis. The
think-aloud data revealed that teachers responded to students’ work in the following
ways: teaching, editing, and feedback dialogue. The feedback varied significantly
regarding the quantity and focus. Two teachers provided only grades. The interview
data demonstrated that teachers believed feedback aimed to improve learning, but tea-
chers were not sure about the utility of feedback. Furthermore, Orrell (2006) compared
the teachers’ beliefs and practices and found very little convergence (22%). Although
teachers believed assessment feedback aimed to facilitate self-evaluation and learning
improvement, their feedback was more ‘defensive and summative’ (453). Teachers
were frustrated because they were aware that the formative purpose of feedback had to
give priority to the summative role. Teachers also felt pressure because teaching and
learning were both driven by the measurement role of assessment. The study pointed
out that the improvement of feedback practices requires the support of institutional pol-
icies. Another study that compared beliefs with practices of assessment feedback was
conducted by Li and Barnard (2010). They studied the beliefs and practices of tutors
in a faculty of arts in a New Zealand university. Data were collected by survey among
tutors in the faculty, interviews with 16 tutors, followed up with nine think-aloud and
stimulated recall sessions, and two focus group meetings. The audio-recorded data
were transcribed and analysed with a grounded theory approach. The study found that
tutors’ major concern in practice was to justify their grading instead of their believed
purpose of learning improvement. This finding aligns with other studies (e.g. Orrell
2006; Bailey and Garner 2010). Moreover, the study found that tutors often felt less
authoritative in providing feedback and felt confused when they received different
advice from those who supervised them; tutors’ practice of providing feedback was
based on their personal experiences of receiving and giving feedback.
The common issue from the studies that compared teachers’ beliefs and practices
was that teachers needed professional training and support in their practice, an issue
that was also identified by the studies on teachers’ practices and beliefs.
Studies in Higher Education 387
Downloaded
by
[Allama
Iqbal
Open
University]
at
06:31
22
August
2017
4. Comparison between teachers’ and students’ beliefs
Studies in different countries have compared students’ beliefs with those of teachers.
One major common finding of the studies is that students and teachers often have diver-
gent understandings of assessment and feedback. A leading study in this area was con-
ducted by Lea and Street (2000). Their research methods included in-depth, semi-
structured interviews with a total of 23 staff and 47 students who were involved in
cross-disciplinary writing at two universities in England. It was revealed that tutors
were ‘mainly influenced by specific conceptualizations of their own disciplines or
subject area in their assessments of students’ writing’ (39). Special attention was
given to ‘“structure” and “argument”’ (39). Though the teachers had a general idea
of what good writing is, they could not describe explicitly ‘what a well-developed argu-
ment looks like in a written assignment’ (39). Their interviews with students and learn-
ing support teachers indicated that the main difficulty in academic writing resulted from
‘the conflicting and contrasting requirements for writing on different courses and from
the fact that these requirements were frequently left implicit’ (38). They concluded that
the requirements and feedback relating to writing assignments were complex and
implicit, which required understanding of both linguistic consideration and ‘social
and institutional relationships associated with it’ (45).
Carless (2006) explored students’ and teachers’ perceptions of assessment and feed-
back by a large-scale survey of 460 staff and 1740 students in eight universities in Hong
Kong, a further small-scale, open-ended survey of 52 students, semi-structured inter-
views with 15 students and six further interviews with students in Cantonese, and inter-
views with five teachers. Divergent opinions were found between teachers and
students: teachers believed their feedback was effective whereas students had different
opinions. A consensus between teachers and students was that it was difficult for stu-
dents to fully understand criteria.
Meyer et al. (2010) surveyed 1238 first-year undergraduates and 879 teachers of
undergraduates at four tertiary institutions in New Zealand regarding their attitudes
towards assessment. They also collected assessment documents and conducted 14 inter-
views with management staff not only in the above institutions but also in three
additional institutions. According to the survey result, both teachers and students
agreed that students’ work should be assessed by their teachers instead of independent
markers. However, there were divergent opinions between teachers and students: stu-
dents tended to believe that assessment was for accountability whereas teachers were
more likely to believe assessment was for learning improvement; most students
agreed that grade moderation or adjustment could be made to achieve consistency
whereas most teachers disagreed. Moreover, the qualitative data revealed that insti-
tutional policies emphasised assessment procedures and assessment of learning.
The perspectives of both teachers and students of assessment and feedback were
further explored by two recent studies in the UK by a combination of focus group
and interview. Scaife and Wellington (2010) studied teachers’ and students’ opinions
by group interview involving 60 students, and interviews with eight teachers in five fac-
ulties across a university in the UK. The study found students were positive towards
assessment without marks, peer assessment and diagnostic feedback. However,
Scaife and Wellington (2010) were not able to fully compare students’ opinions with
those of teachers because teachers did not follow all the interview questions. Neverthe-
less, the study found that teachers had different understandings of assessment terms.
Some teachers used diagnostic feedback but were not aware of it. Moreover, both
388 J. Li and R. De Luca
Downloaded
by
[Allama
Iqbal
Open
University]
at
06:31
22
August
2017
teachers and students found the integration of peer assessment and technology such as
online blog effective, and opportunities to discuss with tutors to receive advice before
the final assessment. They suggested that students should have two stages of sub-
mission that could allow them opportunities to react to feedback. They also suggested
there be space for teacher training and development. Beaumont, O’Doherty, and
Shannon (2011) compared the perspectives on assessment feedback between students
and teachers, including tutors, by focus group and interview data. Participants included
145 undergraduates and 23 teachers, including tutors across three disciplines in English
universities. They found that although tutors believed feedback should aim at learning
improvement, they also stated its summative purpose; they found that unmatched per-
spectives between students and tutors were due to their different values and expec-
tations with regard to feedback and the decrease in university resources that could
support the activity of assessment and feedback.
5. The use of innovative approaches to assessment feedback
Despite the theoretical discussions on assessment and feedback, Taras (2006) found, by
analysis of historically documented assessment feedback in three faculties of a univer-
sity in the UK, that the practice of assessment and feedback was hardly influenced by
innovative principles such as interactive feedback and students’ participation.
Three recent studies in UK settings have reported the use of intervention, mainly by
individual tutorials, to increase first-year students’ engagement with assessment feed-
back in educational programmes (Cramp 2011; Murtagh and Baker 2009; Prowse
et al. 2007). The intervention approach reported by Prowse et al. (2007) comprised for-
mative feedback and grade, together with suggestions on how to improve the grade.
Extra marks were given on the subsequent submission for the students in an education
programme who followed the formative feedback. The study concluded that this
approach enhanced learning improvement. Murtagh and Baker (2009) reported the
use of a one-to-one tutorial as intervention, during which students could discuss with
the tutors the feedback and questions in relation to the next assignment and the goal
of improvement. The effect of the intervention was compared with that of conventional
written feedback using questionnaires among a cohort of first-year students in an edu-
cation programme. The study concluded that the intervention was more effective in
engaging students than some types of conventional written feedback. Similarly, in
Cramp’s (2011) study, an intervention was provided to first-year students in an edu-
cational programme in order to increase their engagement with feedback. The interven-
tion included the following steps: initiating students’ reflection on feedback they had
received; and suggesting students make plans to improve and have dialogue with
tutors on feedback, assignments, and ways to improve. Cramp’s (2011) study con-
cluded that the intervention improved students’ understanding of academic skills
needed, assessment requirements, and written feedback; it also helped to construct stu-
dents’ academic identities.
Also in the UK context, a similar study was conducted by Fisher, Cavanagh, and
Bowles (2011) on the effectiveness of intervention for first-year business students.
The intervention was in the form of feedback on drafts together with individual tutor-
ials. One hundred and six out of 539 students who chose to submit drafts on their
assignment had one-to-one discussion with their tutors on the submitted draft and
received a feedback sheet. The analysis of grades on final submission demonstrated
Studies in Higher Education 389
Downloaded
by
[Allama
Iqbal
Open
University]
at
06:31
22
August
2017
that the feedback on drafts was effective in increasing students’ grades. The effective-
ness was also supported by students’ comments.
These types of tutor–student interventions, along with peer feedback and assess-
ment (see, for example, Higgins, Hartley, and Skelton 2002), give students a participant
role in assessment, which is traditionally a hierarchical exercise. Reynolds and Trehan
(2000), taking a critical literacy approach to participative assessment, caution that
social hierarchies may still operate within these social groupings formed for assessment
purposes and tutors need to be aware of the complexities of new power relationships.
In addition, some other studies on alternative approaches to assessment also indi-
cated how assessment feedback could be used. For example, Schalkwyk (2010)
reported the practice of an early assessment system used in a university in South
Africa. Students’ work was assessed during the first few weeks of each module. The
assessment result was then sent to the students and other stakeholders to inform
them of the students’ level of achievement compared with that of their peers. The
assessment aimed to set up goals of development for students, to track students’
improvement, and to ensure their success. The study concluded that this early formative
assessment approach enhanced the collaboration between different stakeholders and
promoted positive changes in attitudes towards and practices of assessment, despite
the problems such as increased workload. Moreover, a number of studies have reported
the use of portfolio assessment in different countries. A most recent study was con-
ducted by Dysthe and Engelsen (2011) on the portfolio practices in Norwegian tertiary
institutes by a survey questionnaire among teachers and programme leaders randomly
selected in five universities and 25 colleges across the country. The 303 responses were
analysed with SPSS software. The study found that the educational policies boosted the
use of portfolios; however, the effect was limited by the module forms of courses, class
size, and differences in pedagogical traditions and disciplines. Very recently, an
increasing number of studies have focused on the use of technologies such as Turnitin
(Davis 2007) on assessment and feedback; the use of e-assessment was also reported in
disciplinary areas (Brinka and Lautenbach 2011).
6. Conclusion and suggestions
This review of 37 empirical studies on assessment feedback for undergraduates in
various disciplines demonstrates the following themes. Firstly, teachers believed that
assessment feedback should inform learning as well as justify grading. However,
they found it hard to balance these roles in their practice. Secondly, students had
various expectations of assessment feedback, the most common descriptors of which
included timely, personal, explicable, criteria-referenced, objective, and applicable to
further improvement. Thirdly, the major divergent perspective between students and
teachers was regarding how well assessment feedback served the formative role. More-
over, there were contextual limitations of assessment feedback, including modular pat-
terned programmes, teachers’ workloads, and institutional policies. From this review,
we noticed the tensions between formative and summative roles of assessment and
the need for professional training and support for teachers. We also noticed that
there is a lack of collaboration and consultation between subject teachers and those
who teach academic writing.
Of special note among studies reviewed are the limited number of studies on the
actual practices of feedback; the limited number of both countries and academic disci-
plines as research sites; the narrow range of data collection methods; the occurrence of
390 J. Li and R. De Luca
Downloaded
by
[Allama
Iqbal
Open
University]
at
06:31
22
August
2017
divergent and even contradictory findings; and the pioneer studies on approaches that
could improve the effectiveness of assessment feedback.
Further research should focus on the following topics: firstly and most urgently,
a framework that can guide the practice of assessment feedback in assessing writing
in the various disciplines. Perhaps the first step is to articulate beliefs and assump-
tions of various stakeholders (Broad 2003; Huot 2002), including students and tea-
chers within a programme as well as programme managers, course coordinators, and
policy makers. Further studies can work on the issues and insights gained from the
cross-disciplinary and cross-section dialogues. Secondly, studies should not only
focus on specific contextual issues but also investigate general issues across disci-
plines so that some general principles can be applied to different contexts. It may
be worthwhile exploring whether there is any difference in assessment feedback pro-
vided in hard science and in the humanities. Thirdly, more studies are needed on
teachers’ actual practice of providing assessment feedback, difficulties they come
across, and strategies and solutions they seek; and more studies are needed on the
impact of assessment on teachers. Fourthly, studies are needed on how the use of
new technologies such as e-assessment can affect the practice of assessment feed-
back. Finally, studies need to use multiple methods of data collection to explore
various aspects of assessment practice to provide more holistic pictures of assess-
ment feedback.
References
Ashwell, T. 2000. Patterns of teacher response to student writing in a multiple-draft composition
classroom: Is content feedback followed by form feedback the best method? Journal of
Second Language Writing 9, no. 3: 227–57.
Bailey, R., and M. Garner. 2010. Is the feedback in higher education assessment worth the paper
it is written on? Teachers’ reflections on their practices. Teaching in Higher Education 15,
no. 2: 187–98.
Beaumont, C., M. O’Doherty, and L. Shannon. 2011. Reconceptualising assessment feedback:
A key to improving student learning? Studies in Higher Education 36, no. 6: 1–17.
Bloxham, S., and L. Campbell. 2010. Generating dialogue in assessment feedback: Exploring
the use of interactive cover sheets. Assessment & Evaluation in Higher Education 35, no.
3: 291–300.
Brinka, R., and G. Lautenbach. 2011. Electronic assessment in higher education. Educational
Studies 37, no. 5: 503–12.
Broad, B. 2003. What we really value: Beyond rubrics in teaching and assessing writing. Logan,
UT: Utah State University Press.
Brown, E., and C. Glover. 2006. Evaluating written feedback. In Innovative assessment in
higher education, ed. C. Bryan and K. Clegg, 81–91. Abingdon: Routledge.
Carless, D. 2006. Differing perceptions in the feedback process. Studies in Higher Education 31,
no. 2: 219–33.
Carless, D., D. Salter, M. Yang, and J. Lam. 2011. Developing sustainable feedback practices.
Studies in Higher Education 36, no. 4: 395–407.
Chandler, J. 2003. The efficacy of various kinds of error feedback for improvement in the accu-
racy and fluency of L2 student writing. Journal of Second Language Writing 12, no. 3: 267–
96.
Cramp, A. 2011. Developing first-year engagement with written feedback. Active Learning in
Higher Education 12, no. 2: 113–24.
Crisp, B.R. 2007. Is it worth the effort? How feedback influences students’ subsequent sub-
mission of assessable work. Assessment & Evaluation in Higher Education 32, no. 5:
571–81.
Davis, M. 2007. The role of Turnitin within the formative process of academic writing. Brookes
eJournal of Learning and Teaching 2, no. 2, http://bejlt.brookes.ac.uk/article/.
Studies in Higher Education 391
Downloaded
by
[Allama
Iqbal
Open
University]
at
06:31
22
August
2017
DeLuca, C., and D.A. Klinger. 2010. Assessment literacy development: Identifying gaps in
teacher candidates’ learning. Assessment in Education 17, no. 4: 419–38.
Dysthe, O., and K.S. Engelsen. 2011. Portfolio practices in higher education in Norway in an
international perspective: Macro-, meso- and micro-level influences. Assessment &
Evaluation in Higher Education 36, no. 1: 63–79.
Fazio, L. 2001. The effects of corrections and commentaries on the Journal writing accuracy of
minority and majority language students. Journal of Second Language Writing 10, no. 4:
235–49.
Ferguson, P. 2011. Student perceptions of quality feedback in teacher education. Assessment &
Evaluation in Higher Education 36, no. 1: 51–62.
Fisher, R., J. Cavanagh, and A. Bowles. 2011. Assisting transition to university: Using assess-
ment as a formative learning tool. Assessment & Evaluation in Higher Education 36, no. 2:
225–37.
Grainger, P., K. Purnell, and R. Zipf. 2008. Judging quality through substantive conversations
between markers. Assessment & Evaluation in Higher Education 33, no. 2: 133–42.
Guba, E., and Y. Lincoln. 1989. Fourth generation evaluation. Beverly Hills, CA: Sage.
Handley, K., and L. Williams. 2011. From copying to learning: Using exemplars to engage stu-
dents with assessment criteria and feedback. Assessment & Evaluation in Higher Education
36, no. 1: 95–108.
Harman, K., and L. McDowell. 2011. Assessment talk in design: The multiple purposes of
assessment in HE. Teaching in Higher Education 16, no. 1: 41–52.
Hattie, J., and H. Timperley. 2007. The power of feedback. Review of Educational Research 77,
no. 1: 81–112.
Hendry, G.D., N. Bromberger, and S. Armstrong. 2011. Constructive guidance and feedback for
learning: The usefulness of exemplars, marking sheets and different types of feedback in a
first year law subject. Assessment & Evaluation in Higher Education 36, no. 1: 1–11.
Higgins, R., P. Hartley, and A. Skelton. 2002. The conscientious consumer: Reconsidering the
role of assessment feedback in student learning. Studies in Higher Education 27, no. 1: 53–64.
Hounsell, D., V. McCune, J. Hounsell, and J. Litjens. 2008. The quality of guidance and feed-
back to students. Higher Education Research & Development 27, no. 1: 55–67.
Huot, B. 2002. (Re)articulating writing assessment for teaching and learning. Logan, UT: Utah
State University Press.
Huxham, M. 2007. Fast and effective feedback: Are model answers the answer? Assessment &
Evaluation in Higher Education 32, no. 6: 601–11.
Ivanic, R., R. Clark, and R. Rimmershaw. 2000. What am I supposed to make of this? The mess-
ages conveyed to students by tutors’ written comments. In Student writing in higher edu-
cation: New contexts, ed. M.R. Lea and B. Stierer, 47–65. Buckingham: Society for
Research into Higher Education & Open University Press.
Joughin, G. 2008. Assessment, learning and judgement in higher education. London: Springer.
Lea, M.R., and B.V. Street. 2000. Student writing and staff feedback in higher education: An
academic literacy approach. In Student writing in higher education: New contexts, ed. M.
R. Lea and B. Stierer, 31–46. Buckingham: Society for Research into Higher Education
& Open University Press.
Li, J., and R. Barnard. 2010. Academic tutors’ beliefs about and practices of giving feedback
on students’ written assignments: A New Zealand case study. Assessing Writing 16, no.
2: 137–48.
Lizzio, A., and K. Wilson. 2008. Feedback on assessment: Students’ perceptions of quality and
effectiveness. Assessment & Evaluation in Higher Education 33, no. 3: 263–75.
Meyer, L.H., S. Davidson, L. McKenzie, M. Rees, H. Anderson, R. Fletcher, and P.M. Johnston.
2010. An investigation of tertiary assessment policy and practice: Alignment and contradic-
tions. Higher Education Quarterly 64, no. 3: 331–50.
Murtagh, L., and N. Baker. 2009. Feedback to feed forward: Student response to tutors’ written
comments on assignments. Practitioner Research in Higher Education 3, no. 1: 20–28.
Mutch, A. 2003. Exploring the practice of feedback to students. Active Learning in Higher
Education 4, no. 1: 24–38.
Nicol, D.J., and D. Macfarlane-Dick. 2006. Formative assessment and self-regulated learning: A
model and seven principles of good feedback practice. Studies in Higher Education 31, no.
2: 199–218.
392 J. Li and R. De Luca
Downloaded
by
[Allama
Iqbal
Open
University]
at
06:31
22
August
2017
Orrell, J. 2006. Feedback on learning achievement: Rhetoric and reality. Teaching in Higher
Education 11, no. 4: 441–56.
Orsmond, P., S. Merry, and K. Reiling. 2002. The use of exemplars and student derived marking
criteria in peer and self-assessment. Assessment & Evaluation in Higher Education 27, no. 4:
309–23.
Orsmond, P., S. Merry, and K. Reiling. 2005. Biology students’ utilization of tutors’ formative
feedback: A qualitative interview study. Assessment & Evaluation in Higher Education 30,
no. 4: 369–86.
Parboteeah, S., and M. Anwar. 2009. Thematic analysis of written assignment feedback:
Implications for nurse education. Nurse Education Today 29, no. 7: 753–57.
Pokorny, H., and P. Pickford. 2010. Complexity, cues and relationships: Student perceptions of
feedback. Active Learning in Higher Education 11, no. 1: 21–30.
Poulos, A., and M.J. Mahony. 2008. Effectiveness of feedback: The students’ perspective.
Assessment & Evaluation in Higher Education 33, no. 2: 143–54.
Price, M., J. Carroll, B. O’ Donovan, and C. Rust. 2011. If I was going there I wouldn’t start
from here: A critical commentary on current assessment practice. Assessment &
Evaluation in Higher Education 36, no. 4: 479–92.
Prowse, S., N. Duncan, J. Hughes, and D. Burke. 2007. ‘… do that and I’ll raise your grade’.
Innovative module design and recursive feedback. Teaching in Higher Education 12, no.
4: 437–45.
Read, B., B. Francis, and J. Robson. 2005. Gender, ‘bias’, assessment and feedback: Analyzing
the written assessment of undergraduate history essays. Assessment & Evaluation in Higher
Education 30, no. 3: 241–60.
Reynolds, M., and K. Trehan. 2000. Assessment: A critical perspective. Studies in Higher
Education 25, no. 3: 267–78.
Russell, D.R., and A. Yañez. 2003. ‘Big picture people rarely become historians’: Genre
systems and the contradictions of general education. In Writing selves/writing societies:
Research from activity perspectives, ed. C. Bazerman and D. Russell, 331–62, http://wac.
colostate.edu/books/selves_societies/.
Rust, C., B. O’ Donovan, and M. Price. 2005. A social constructivist assessment process model:
How the research literature shows us this could be best practice. Assessment & Evaluation in
Higher Education 30, no. 3: 231–40.
Sadler, D.R. 1989. Formative assessment and the design of instructional systems. Instructional
Science 18, no. 2: 119–44.
Sakyi, A.A. 2000. Validation of holistic scoring for ESL writing assessment: How raters evalu-
ate compositions. In Fairness and validation in language assessment, ed. J.J. Kunnan, 129–
52. Cambridge: Cambridge University Press.
Scaife, J., and J. Wellington. 2010. Varying perspectives and practices in formative and diagnos-
tic assessment: A case study. Journal of Education for Teaching 36, no. 2: 137–51.
Schalkwyk, S. 2010. Early assessment: Using a university-wide student support initiative to
effect real change. Teaching in Higher Education 15, no. 3: 299–310.
Shute, V.J. 2008. Focus on formative feedback. Review of Educational Research 78, no. 1: 153–
89.
Stern, L.A., and A. Solomon. 2006. Effective faculty feedback: The road less travelled.
Assessing Writing 11, no. 1: 22–41.
Taras, M. 2002. Using assessment for learning and learning from assessment. Assessment &
Evaluation in Higher Education 27, no. 6: 501–10.
Taras, M. 2006. Do unto others or not: Equity in feedback for undergraduates. Assessment &
Evaluation in Higher Education 31, no. 3: 365–77.
Walker, M. 2009. An investigation into written comments on assignments: Do students find
them usable? Assessment & Evaluation in Higher Education 34, no. 1: 67–78.
Weaver, M.R. 2006. Do students value feedback? Student perceptions of tutors’ written
responses. Assessment & Evaluation in Higher Education 31, no. 3: 379–94.
Studies in Higher Education 393
Downloaded
by
[Allama
Iqbal
Open
University]
at
06:31
22
August
2017

Weitere ähnliche Inhalte

Ähnlich wie Assessment review.pdf

Evaluating the evaluator a reflective approach
Evaluating the evaluator a reflective approachEvaluating the evaluator a reflective approach
Evaluating the evaluator a reflective approachAlexander Decker
 
Assessing Critical Thinking In Higher Education Current State And Directions...
Assessing Critical Thinking In Higher Education  Current State And Directions...Assessing Critical Thinking In Higher Education  Current State And Directions...
Assessing Critical Thinking In Higher Education Current State And Directions...Mandy Brown
 
A Situative Metaphor For Teacher Learning The Case Of University Tutors Lear...
A Situative Metaphor For Teacher Learning  The Case Of University Tutors Lear...A Situative Metaphor For Teacher Learning  The Case Of University Tutors Lear...
A Situative Metaphor For Teacher Learning The Case Of University Tutors Lear...Sabrina Green
 
2022_eat_framework_-aug_.pdf
2022_eat_framework_-aug_.pdf2022_eat_framework_-aug_.pdf
2022_eat_framework_-aug_.pdfRebecca665273
 
School effectiveness-and-improvement-contribution-of-teacher-qualification-to...
School effectiveness-and-improvement-contribution-of-teacher-qualification-to...School effectiveness-and-improvement-contribution-of-teacher-qualification-to...
School effectiveness-and-improvement-contribution-of-teacher-qualification-to...oircjournals
 
Investigating the Relationship Between Teaching Performance and Research Perf...
Investigating the Relationship Between Teaching Performance and Research Perf...Investigating the Relationship Between Teaching Performance and Research Perf...
Investigating the Relationship Between Teaching Performance and Research Perf...BRNSSPublicationHubI
 
Course Evaluation Poster
Course Evaluation PosterCourse Evaluation Poster
Course Evaluation PosterBridget Hanley
 
Feedback to students about academic writing_INTEGRITY Project
Feedback to students about academic writing_INTEGRITY ProjectFeedback to students about academic writing_INTEGRITY Project
Feedback to students about academic writing_INTEGRITY ProjectLaura Costelloe
 
A Comparative Study Of Competency-Based Courses Demonstrating A Potential Mea...
A Comparative Study Of Competency-Based Courses Demonstrating A Potential Mea...A Comparative Study Of Competency-Based Courses Demonstrating A Potential Mea...
A Comparative Study Of Competency-Based Courses Demonstrating A Potential Mea...Sheila Sinclair
 
Assessment System In Writing Essays By Graduate Students
Assessment System In Writing Essays By Graduate StudentsAssessment System In Writing Essays By Graduate Students
Assessment System In Writing Essays By Graduate StudentsAshley Hernandez
 
Factors affecting teachers' excellence from the perspective of queen rania aw...
Factors affecting teachers' excellence from the perspective of queen rania aw...Factors affecting teachers' excellence from the perspective of queen rania aw...
Factors affecting teachers' excellence from the perspective of queen rania aw...Alexander Decker
 
A Critical Analysis Of Research On Reading Teacher Education
A Critical Analysis Of Research On Reading Teacher EducationA Critical Analysis Of Research On Reading Teacher Education
A Critical Analysis Of Research On Reading Teacher EducationSarah Adams
 
An exploratory re-search for variables representative of Academic Quality
An exploratory re-search for variables representative of Academic QualityAn exploratory re-search for variables representative of Academic Quality
An exploratory re-search for variables representative of Academic QualityWaqas Tariq
 
Implementing peer feedback
Implementing peer feedbackImplementing peer feedback
Implementing peer feedbackDavid Carless
 
School effectiveness-and-improvement-contribution-of-teacher-qualification-to...
School effectiveness-and-improvement-contribution-of-teacher-qualification-to...School effectiveness-and-improvement-contribution-of-teacher-qualification-to...
School effectiveness-and-improvement-contribution-of-teacher-qualification-to...oircjournals
 
1877-0428 © 2010 Published by Elsevier Ltd.doi10.1016j.sbs.docx
1877-0428 © 2010 Published by Elsevier Ltd.doi10.1016j.sbs.docx1877-0428 © 2010 Published by Elsevier Ltd.doi10.1016j.sbs.docx
1877-0428 © 2010 Published by Elsevier Ltd.doi10.1016j.sbs.docxaulasnilda
 
Assessing Evaluation Fidelity Between Students and Instructors in the Basic C...
Assessing Evaluation Fidelity Between Students and Instructors in the Basic C...Assessing Evaluation Fidelity Between Students and Instructors in the Basic C...
Assessing Evaluation Fidelity Between Students and Instructors in the Basic C...Yolanda Ivey
 
Trabajo en pares uniersitarios
Trabajo en pares uniersitariosTrabajo en pares uniersitarios
Trabajo en pares uniersitariosSisercom SAC
 

Ähnlich wie Assessment review.pdf (20)

Evaluating the evaluator a reflective approach
Evaluating the evaluator a reflective approachEvaluating the evaluator a reflective approach
Evaluating the evaluator a reflective approach
 
Assessing Critical Thinking In Higher Education Current State And Directions...
Assessing Critical Thinking In Higher Education  Current State And Directions...Assessing Critical Thinking In Higher Education  Current State And Directions...
Assessing Critical Thinking In Higher Education Current State And Directions...
 
A Situative Metaphor For Teacher Learning The Case Of University Tutors Lear...
A Situative Metaphor For Teacher Learning  The Case Of University Tutors Lear...A Situative Metaphor For Teacher Learning  The Case Of University Tutors Lear...
A Situative Metaphor For Teacher Learning The Case Of University Tutors Lear...
 
2022_eat_framework_-aug_.pdf
2022_eat_framework_-aug_.pdf2022_eat_framework_-aug_.pdf
2022_eat_framework_-aug_.pdf
 
School effectiveness-and-improvement-contribution-of-teacher-qualification-to...
School effectiveness-and-improvement-contribution-of-teacher-qualification-to...School effectiveness-and-improvement-contribution-of-teacher-qualification-to...
School effectiveness-and-improvement-contribution-of-teacher-qualification-to...
 
Investigating the Relationship Between Teaching Performance and Research Perf...
Investigating the Relationship Between Teaching Performance and Research Perf...Investigating the Relationship Between Teaching Performance and Research Perf...
Investigating the Relationship Between Teaching Performance and Research Perf...
 
Course Evaluation Poster
Course Evaluation PosterCourse Evaluation Poster
Course Evaluation Poster
 
Feedback to students about academic writing_INTEGRITY Project
Feedback to students about academic writing_INTEGRITY ProjectFeedback to students about academic writing_INTEGRITY Project
Feedback to students about academic writing_INTEGRITY Project
 
A Comparative Study Of Competency-Based Courses Demonstrating A Potential Mea...
A Comparative Study Of Competency-Based Courses Demonstrating A Potential Mea...A Comparative Study Of Competency-Based Courses Demonstrating A Potential Mea...
A Comparative Study Of Competency-Based Courses Demonstrating A Potential Mea...
 
Assessment System In Writing Essays By Graduate Students
Assessment System In Writing Essays By Graduate StudentsAssessment System In Writing Essays By Graduate Students
Assessment System In Writing Essays By Graduate Students
 
Factors affecting teachers' excellence from the perspective of queen rania aw...
Factors affecting teachers' excellence from the perspective of queen rania aw...Factors affecting teachers' excellence from the perspective of queen rania aw...
Factors affecting teachers' excellence from the perspective of queen rania aw...
 
A Critical Analysis Of Research On Reading Teacher Education
A Critical Analysis Of Research On Reading Teacher EducationA Critical Analysis Of Research On Reading Teacher Education
A Critical Analysis Of Research On Reading Teacher Education
 
Effective-teaching-2013
 Effective-teaching-2013 Effective-teaching-2013
Effective-teaching-2013
 
Ed546794
Ed546794Ed546794
Ed546794
 
An exploratory re-search for variables representative of Academic Quality
An exploratory re-search for variables representative of Academic QualityAn exploratory re-search for variables representative of Academic Quality
An exploratory re-search for variables representative of Academic Quality
 
Implementing peer feedback
Implementing peer feedbackImplementing peer feedback
Implementing peer feedback
 
School effectiveness-and-improvement-contribution-of-teacher-qualification-to...
School effectiveness-and-improvement-contribution-of-teacher-qualification-to...School effectiveness-and-improvement-contribution-of-teacher-qualification-to...
School effectiveness-and-improvement-contribution-of-teacher-qualification-to...
 
1877-0428 © 2010 Published by Elsevier Ltd.doi10.1016j.sbs.docx
1877-0428 © 2010 Published by Elsevier Ltd.doi10.1016j.sbs.docx1877-0428 © 2010 Published by Elsevier Ltd.doi10.1016j.sbs.docx
1877-0428 © 2010 Published by Elsevier Ltd.doi10.1016j.sbs.docx
 
Assessing Evaluation Fidelity Between Students and Instructors in the Basic C...
Assessing Evaluation Fidelity Between Students and Instructors in the Basic C...Assessing Evaluation Fidelity Between Students and Instructors in the Basic C...
Assessing Evaluation Fidelity Between Students and Instructors in the Basic C...
 
Trabajo en pares uniersitarios
Trabajo en pares uniersitariosTrabajo en pares uniersitarios
Trabajo en pares uniersitarios
 

Mehr von Imtiaz Hussain

Mehr von Imtiaz Hussain (20)

Essentials for Measurement2.ppt
Essentials for Measurement2.pptEssentials for Measurement2.ppt
Essentials for Measurement2.ppt
 
BN-725592(3).ppt
BN-725592(3).pptBN-725592(3).ppt
BN-725592(3).ppt
 
Islamiat Compulsory Notes For ADP, BA and BSc.pdf
Islamiat Compulsory Notes For ADP, BA and BSc.pdfIslamiat Compulsory Notes For ADP, BA and BSc.pdf
Islamiat Compulsory Notes For ADP, BA and BSc.pdf
 
12 english idioms.pdf
12 english idioms.pdf12 english idioms.pdf
12 english idioms.pdf
 
5. RPL.pdf
5. RPL.pdf5. RPL.pdf
5. RPL.pdf
 
1. Overview of NVQF.pdf
1. Overview of NVQF.pdf1. Overview of NVQF.pdf
1. Overview of NVQF.pdf
 
4. Competency Standard.pdf
4. Competency Standard.pdf4. Competency Standard.pdf
4. Competency Standard.pdf
 
2. CBT vs Conventional.pdf
2.  CBT vs Conventional.pdf2.  CBT vs Conventional.pdf
2. CBT vs Conventional.pdf
 
3. Pathways to Assessment.pdf
3. Pathways to Assessment.pdf3. Pathways to Assessment.pdf
3. Pathways to Assessment.pdf
 
Unit. 6.doc
Unit. 6.docUnit. 6.doc
Unit. 6.doc
 
UNIT. 3.pdf
UNIT. 3.pdfUNIT. 3.pdf
UNIT. 3.pdf
 
Unit. 7.pdf
Unit. 7.pdfUnit. 7.pdf
Unit. 7.pdf
 
Unit. 7.doc
Unit. 7.docUnit. 7.doc
Unit. 7.doc
 
BN-725592(3).ppt
BN-725592(3).pptBN-725592(3).ppt
BN-725592(3).ppt
 
Biology 9 th-ch-3
Biology  9 th-ch-3Biology  9 th-ch-3
Biology 9 th-ch-3
 
Biology 9 th ch-9-mcq
Biology  9 th ch-9-mcqBiology  9 th ch-9-mcq
Biology 9 th ch-9-mcq
 
Biology 9 th ch-8-mcq
Biology  9 th ch-8-mcqBiology  9 th ch-8-mcq
Biology 9 th ch-8-mcq
 
Biology 9th ch-7
Biology  9th ch-7Biology  9th ch-7
Biology 9th ch-7
 
Biology 9th ch-6
Biology  9th ch-6Biology  9th ch-6
Biology 9th ch-6
 
Biology 9 th ch-2-
Biology  9 th ch-2-Biology  9 th ch-2-
Biology 9 th ch-2-
 

Kürzlich hochgeladen

Unit-IV- Pharma. Marketing Channels.pptx
Unit-IV- Pharma. Marketing Channels.pptxUnit-IV- Pharma. Marketing Channels.pptx
Unit-IV- Pharma. Marketing Channels.pptxVishalSingh1417
 
Russian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in Delhi
Russian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in DelhiRussian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in Delhi
Russian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in Delhikauryashika82
 
1029 - Danh muc Sach Giao Khoa 10 . pdf
1029 -  Danh muc Sach Giao Khoa 10 . pdf1029 -  Danh muc Sach Giao Khoa 10 . pdf
1029 - Danh muc Sach Giao Khoa 10 . pdfQucHHunhnh
 
microwave assisted reaction. General introduction
microwave assisted reaction. General introductionmicrowave assisted reaction. General introduction
microwave assisted reaction. General introductionMaksud Ahmed
 
PROCESS RECORDING FORMAT.docx
PROCESS      RECORDING        FORMAT.docxPROCESS      RECORDING        FORMAT.docx
PROCESS RECORDING FORMAT.docxPoojaSen20
 
Measures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SDMeasures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SDThiyagu K
 
Nutritional Needs Presentation - HLTH 104
Nutritional Needs Presentation - HLTH 104Nutritional Needs Presentation - HLTH 104
Nutritional Needs Presentation - HLTH 104misteraugie
 
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxSOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxiammrhaywood
 
Paris 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityParis 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityGeoBlogs
 
Activity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfActivity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfciinovamais
 
Making and Justifying Mathematical Decisions.pdf
Making and Justifying Mathematical Decisions.pdfMaking and Justifying Mathematical Decisions.pdf
Making and Justifying Mathematical Decisions.pdfChris Hunter
 
Basic Civil Engineering first year Notes- Chapter 4 Building.pptx
Basic Civil Engineering first year Notes- Chapter 4 Building.pptxBasic Civil Engineering first year Notes- Chapter 4 Building.pptx
Basic Civil Engineering first year Notes- Chapter 4 Building.pptxDenish Jangid
 
Class 11th Physics NEET formula sheet pdf
Class 11th Physics NEET formula sheet pdfClass 11th Physics NEET formula sheet pdf
Class 11th Physics NEET formula sheet pdfAyushMahapatra5
 
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...christianmathematics
 
This PowerPoint helps students to consider the concept of infinity.
This PowerPoint helps students to consider the concept of infinity.This PowerPoint helps students to consider the concept of infinity.
This PowerPoint helps students to consider the concept of infinity.christianmathematics
 
Ecological Succession. ( ECOSYSTEM, B. Pharmacy, 1st Year, Sem-II, Environmen...
Ecological Succession. ( ECOSYSTEM, B. Pharmacy, 1st Year, Sem-II, Environmen...Ecological Succession. ( ECOSYSTEM, B. Pharmacy, 1st Year, Sem-II, Environmen...
Ecological Succession. ( ECOSYSTEM, B. Pharmacy, 1st Year, Sem-II, Environmen...Shubhangi Sonawane
 
Mixin Classes in Odoo 17 How to Extend Models Using Mixin Classes
Mixin Classes in Odoo 17  How to Extend Models Using Mixin ClassesMixin Classes in Odoo 17  How to Extend Models Using Mixin Classes
Mixin Classes in Odoo 17 How to Extend Models Using Mixin ClassesCeline George
 
Unit-V; Pricing (Pharma Marketing Management).pptx
Unit-V; Pricing (Pharma Marketing Management).pptxUnit-V; Pricing (Pharma Marketing Management).pptx
Unit-V; Pricing (Pharma Marketing Management).pptxVishalSingh1417
 
Z Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot GraphZ Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot GraphThiyagu K
 

Kürzlich hochgeladen (20)

Mehran University Newsletter Vol-X, Issue-I, 2024
Mehran University Newsletter Vol-X, Issue-I, 2024Mehran University Newsletter Vol-X, Issue-I, 2024
Mehran University Newsletter Vol-X, Issue-I, 2024
 
Unit-IV- Pharma. Marketing Channels.pptx
Unit-IV- Pharma. Marketing Channels.pptxUnit-IV- Pharma. Marketing Channels.pptx
Unit-IV- Pharma. Marketing Channels.pptx
 
Russian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in Delhi
Russian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in DelhiRussian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in Delhi
Russian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in Delhi
 
1029 - Danh muc Sach Giao Khoa 10 . pdf
1029 -  Danh muc Sach Giao Khoa 10 . pdf1029 -  Danh muc Sach Giao Khoa 10 . pdf
1029 - Danh muc Sach Giao Khoa 10 . pdf
 
microwave assisted reaction. General introduction
microwave assisted reaction. General introductionmicrowave assisted reaction. General introduction
microwave assisted reaction. General introduction
 
PROCESS RECORDING FORMAT.docx
PROCESS      RECORDING        FORMAT.docxPROCESS      RECORDING        FORMAT.docx
PROCESS RECORDING FORMAT.docx
 
Measures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SDMeasures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SD
 
Nutritional Needs Presentation - HLTH 104
Nutritional Needs Presentation - HLTH 104Nutritional Needs Presentation - HLTH 104
Nutritional Needs Presentation - HLTH 104
 
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxSOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
 
Paris 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityParis 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activity
 
Activity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfActivity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdf
 
Making and Justifying Mathematical Decisions.pdf
Making and Justifying Mathematical Decisions.pdfMaking and Justifying Mathematical Decisions.pdf
Making and Justifying Mathematical Decisions.pdf
 
Basic Civil Engineering first year Notes- Chapter 4 Building.pptx
Basic Civil Engineering first year Notes- Chapter 4 Building.pptxBasic Civil Engineering first year Notes- Chapter 4 Building.pptx
Basic Civil Engineering first year Notes- Chapter 4 Building.pptx
 
Class 11th Physics NEET formula sheet pdf
Class 11th Physics NEET formula sheet pdfClass 11th Physics NEET formula sheet pdf
Class 11th Physics NEET formula sheet pdf
 
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
 
This PowerPoint helps students to consider the concept of infinity.
This PowerPoint helps students to consider the concept of infinity.This PowerPoint helps students to consider the concept of infinity.
This PowerPoint helps students to consider the concept of infinity.
 
Ecological Succession. ( ECOSYSTEM, B. Pharmacy, 1st Year, Sem-II, Environmen...
Ecological Succession. ( ECOSYSTEM, B. Pharmacy, 1st Year, Sem-II, Environmen...Ecological Succession. ( ECOSYSTEM, B. Pharmacy, 1st Year, Sem-II, Environmen...
Ecological Succession. ( ECOSYSTEM, B. Pharmacy, 1st Year, Sem-II, Environmen...
 
Mixin Classes in Odoo 17 How to Extend Models Using Mixin Classes
Mixin Classes in Odoo 17  How to Extend Models Using Mixin ClassesMixin Classes in Odoo 17  How to Extend Models Using Mixin Classes
Mixin Classes in Odoo 17 How to Extend Models Using Mixin Classes
 
Unit-V; Pricing (Pharma Marketing Management).pptx
Unit-V; Pricing (Pharma Marketing Management).pptxUnit-V; Pricing (Pharma Marketing Management).pptx
Unit-V; Pricing (Pharma Marketing Management).pptx
 
Z Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot GraphZ Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot Graph
 

Assessment review.pdf

  • 1. Full Terms & Conditions of access and use can be found at http://www.tandfonline.com/action/journalInformation?journalCode=cshe20 Download by: [Allama Iqbal Open University] Date: 22 August 2017, At: 06:31 Studies in Higher Education ISSN: 0307-5079 (Print) 1470-174X (Online) Journal homepage: http://www.tandfonline.com/loi/cshe20 Review of assessment feedback Jinrui Li & Rosemary De Luca To cite this article: Jinrui Li & Rosemary De Luca (2014) Review of assessment feedback, Studies in Higher Education, 39:2, 378-393, DOI: 10.1080/03075079.2012.709494 To link to this article: http://dx.doi.org/10.1080/03075079.2012.709494 Published online: 11 Sep 2012. Submit your article to this journal Article views: 1885 View related articles View Crossmark data Citing articles: 16 View citing articles
  • 2. Review of assessment feedback Jinrui Lia * and Rosemary De Lucab a General & Applied Linguistics, University of Waikato, 16 Knighton Road, Hamilton, 3216 New Zealand; b Faculty of Education, University of Waikato, Hamilton, 3240 New Zealand This article reviews 37 empirical studies, selected from 363 articles and 20 journals, on assessment feedback published between 2000 and 2011. The reviewed articles, many of which came out of studies in the UK and Australia, reflect the most current issues and developments in the area of assessing disciplinary writing. The article aims to outline current studies on assessment feedback. These studies have explored undergraduate students’ wide-ranging perspectives on the effectiveness and utility of assessment feedback, the divergent styles of assessment feedback of lecturers and tutors in various disciplines, teachers’ divergent interpretations of assessment criteria and confusion about the dual roles of assessment feedback, and the divergences between teachers’ beliefs and practices. The review includes analysis and comparison of the research methods and findings of the studies. It identifies a research space for assessment feedback and outlines implications for further studies. Keywords: assessment feedback; disciplinary writing; belief; practice 1. Introduction In this review article the term ‘assessment feedback’ refers to comments and grades that lecturers and tutors provide for the written work submitted by undergraduate students as part of course requirements in various disciplines within tertiary education. Grades and written comments are intended to have both formative and summative functions: grades, although summative, also have a formative role in that they result from evalu- ation of students’ work against standards and they affect students’ attention to further improvement in feedback (Taras 2002); feedback has the role of justifying grades and maintaining standards, as well as its formative role (Joughin 2008). Effective formative feedback links closely to improved student learning. The effectiveness of feedback has been limited because of both contextual constraints and theoretical gaps. Contextually, there are constraints such as students’ various backgrounds in writing (Sakyi 2000), various discourses in different disciplines (Russell and Yañez 2003), insufficient assessment knowledge of staff (DeLuca and Klinger 2010; Weaver 2006), modular form of programmes, institutional requirements (Bailey and Garner 2010) and policies that emphasise measurement of students’ achievement instead of learning improvement (Price et al. 2011). Within the frequently programmed modular patterns of courses in higher education, feedback on assessment is often not received by the student until after the completion of a module and may not be applicable to future modules. © 2012 Society for Research into Higher Education *Corresponding author. Email: jl287@students.waikato.ac.nz Studies in Higher Education, 2014 Vol. 39, No. 2, 378–393, http://dx.doi.org/10.1080/03075079.2012.709494 Downloaded by [Allama Iqbal Open University] at 06:31 22 August 2017
  • 3. Theoretically, there has been argument that the conventional teacher-centred assess- ment model does not fit with the current student-centred pedagogy (Guba and Lincoln, 1989) whereby students should be actively engaged in assessment activity (Sadler 1989). New models of assessment have been proposed with emphasis on the integration of feedback into ongoing student–teacher dialogues (Rust, O’ Donovan, and Price 2005) and the effect of feedback on cultivating students’ self-regulation of learning (Hattie and Timperley 2007; Nicol and Macfarlane-Dick 2006). However, these theories have been discussed mainly among linguists focusing on the separate roles of formative and summative assessment. There have been studies by educators on the over-emphasis of the summative role of assessment in tertiary education (Bailey and Garner 2010; Taras 2002). However, there has been little guidance for disciplinary teachers on how to achieve the dual goals of assessment feedback. Assessment feedback on written work of modular patterned courses within the dis- ciplines has not been established as a branch of study for its own sake. Consequently, we have identified the need to review current theories and empirical studies that are relevant to assessment feedback and we present how these studies are shaping into an area of study. By so doing, we aim to frame the study on assessment feedback, demonstrate the need for theory development, and point out further directions of study in this area for those who are concerned with the assessment activity at tertiary level. This review focuses on empirical studies in assessment feedback given on under- graduates’ written work in various disciplines in higher education. We firstly used the advanced search of Google Scholar, with the word ‘writing’, the exact phrase ‘assessment feedback’ and at least one of the words ‘university’ in articles published in ‘education’, excluding the word ‘postgraduate’ from any time to 19 August 2011. We then read the abstracts of the 363 articles from 20 journals resulting from the search and found 56 articles that fitted the focus of the review. We also used the website of the university library and searched the journals with the words ‘assessment and evaluation’ or ‘higher education’ to locate the key journals on the focus of the review, then searched articles in these journals with the key words ‘assessment feed- back’. Content analysis was conducted on the articles resulting from the search with the following criteria: empirical studies, focusing on assessment feedback at under- graduate level, clear description of methodology, published between 2000 and 2011. Finally, 37 empirical studies on assessment feedback were selected from 16 journals, 20 of which were from the journal Assessment & Evaluation in Higher Education. We selected journal articles because of their currency and saliency for our topic. The search result also included two recent reviews on feedback. One is an extensive review by Shute (2008) on formative feedback at task level, excluding summative feed- back. The review by Shute (2008) does not overlap with this review because it has a different focus. Another review was conducted by Parboteeah and Anwar (2009) in order to find how students’ motivation could be influenced by the effect of feedback. The review included 18 publications between 1993 and 2006 on assignment feedback. These publications were grouped into three themes: ‘(a) types, styles and pattern of feedback, (b) misperceptions, (c) future learning’ (754). The result of the review was that types of feedback did not affect learning; what mattered were the quantity and quality of feedback, and the content of feedback that addressed both specific and general issues. Among the 18 publications reviewed by Parboteeah and Anwar (2009), only two (Carless 2006; Orrell 2006) are about assessment feedback on under- graduates’ written work, and these are included in this review for a different focus. Studies in Higher Education 379 Downloaded by [Allama Iqbal Open University] at 06:31 22 August 2017
  • 4. The two reviews referred to in the last paragraph focused on formative feedback and the influence of feedback on students’ motivation, which are different from the focus of this review. The search result indicates that there has been no review that focuses on assessment feedback on written work by undergraduate students in various disciplines. Moreover, this current review represents a wide community of educators on assessment feedback. Although it involves perspectives of teachers who focused on academic writing itself, the majority of the empirical studies focused on the practice of assessing writing in disciplinary areas. Thefollowingthemeswerefoundbyreviewingthestudies.Firstly,studiesonstudents’ perspectives regarding the utility and effectiveness of feedback demonstrated various opinions on effective feedback. Secondly, studies on teachers’ practices of assessment feedback pointed out the divergent styles of assessment feedback and a common lack of information on learning improvement. Thirdly, studies on teachers’ beliefs revealed tea- chers’ divergent interpretations of assessment criteria and confusion about the dual roles ofassessmentfeedback.Fourthly,comparisonbetweenteachers’beliefsandtheirpractices highlighted the divergence between the intended role of assessment feedback for learning improvement and the actual assessment feedback to justify grading. Moreover, compari- sons between students’ perspectives and those of teachers demonstrated that divergent opinions mainly focused on the roles and effectiveness of assessment feedback. Finally, innovative approaches to effective assessment feedback were explored, including enga- ging first-year undergraduates with written feedback by individual tutorials. Overall, the issues of assessment feedback pointed to its dual function. The themes of the studies will be presented in this review followed with conclusions, a suggested framework for the study of assessment feedback, and implications for further studies. 2. Studies on students’ perspectives on assessment feedback A large number of studies have explored students’ perspectives on assessment feed- back. The main focus of these studies is students’ utilisation of feedback and their opinions on what feedback is effective. 2.1. Students’ utilisation of feedback Three studies in the UK context have explored issues relating to students’ utilisation of assessment feedback in various disciplines, mainly by interviews (Higgins, Hartley, and Skelton 2002; Orsmond, Merry, and Reiling 2005; Walker 2009). Higgins, Hartley, and Skelton (2002) explored the impact of feedback on students’ learning in a three-year study. They reported the findings of an initial study on students’ response to feedback based on interviews of 19 first-year students in Business and Humanities in two insti- tutions in the UK and 94 questionnaire responses. They found that the majority of stu- dents (97%) read comments, most of them believed they kept the feedback in mind for further use, but how they would use it was unclear. Barriers to effective feedback were found mainly in three aspects: firstly, the modular programmes constrain the tutors to provide timely feedback that could be applied by students to the improvement of the next assignments; secondly, feedback often focused on grammatical errors or negative aspects of students’ work without suggestions on how to make improvement; finally, the language of feedback was general or vague. The issue of how students use feedback was explored by Orsmond, Merry, and Reiling (2005). In this small study, data were collected by interviews with 16 third-year biology students in a UK university. 380 J. Li and R. De Luca Downloaded by [Allama Iqbal Open University] at 06:31 22 August 2017
  • 5. These students had all received summative written feedback on their coursework; 13 of them had also received formative feedback. Three students did not read feedback, one read it if the marks conflicted with expectation, six thought marks were more important, while the others gave equal importance to marks and feedback. This finding seems to be in alignment with that of some previous studies that marks had an adverse effect on feed- back utility because students focus on marks rather than written comments (e.g. Carless 2006; Mutch 2003). In addition, Orsmond, Merry, and Reiling (2005) found four areas where the students used feedback: enhancing motivation, enhancing learning, encoura- ging reflection and clarifying information and expectations about assignments. Some students suggested ‘feedforward’ that would ‘give focus’ to the feedback pro- vided at the end of the course (Orsmond, Merry, and Reiling 2005, 376). Some students identified a place for more generalised feedback with regard to their progression through the course. Other students applied feedback given in one course to another course, thereby enriching their learning environment. Six students preferred feedback that involved them in discussion with their tutors. Walker (2009) explored the types of feedback that students would use. She firstly analysed 3000 written feedback items on 106 assignments written by technology students in the Open University in the UK, applying the six categories introduced by Brown and Glover (2006), namely, content, skills development, motivating, demotivating, a mention of future study, and a reference to a resource the student could use (69). Then forty-three of the students were interviewed on the usability of feedback. Data fell mainly into the first three categories. Interview data demonstrated that more than two-thirds of students found the feedback was usable, especially feedback on skill development. The least usable comments were those specific to an assignment that were not applicable to other assignments. When asked what sort of comment they preferred, students said ‘they wished to be told what they had got wrong, and why, and how to do better’ (75). They also wanted feedforward for assignments in the future. The study raised the question of how to improve the practice to make feedback more effective. The findings of the above studies about the utilisation of feedback were not in agree- ment with those of Crisp’s (2007) study in a university in Australia. Crisp (2007) studied the effect of students’ response to feedback by comparing the assessment result of one essay written by 51 social science students six weeks after they received the assessment feedback on a similar essay assessed by the same marker. She found there was little evidence of improvement either in scores or regarding the reduction of problems that had occurred in the first essay. She concluded that the provision of feedback alone could hardly result in improvement in a subsequent submission on the same topic. However, it was not known how students perceived the feedback from the first essay or if they had received the feedback on the first essay before they submitted their second essay. The differing results of studies on students’ utilisation of feedback may be due to different contexts and different methods of data collection. Overall, the studies demon- strated that assessment feedback was not fully used by students, which in turn raises the question of what kind of feedback can effectively enhance students’ utilisation of feed- back. This question will be discussed in the next section. 2.2 What kinds of feedback are effective? Three types of effective feedback were explored by studies in UK universities. An early study by Orsmond, Merry, and Reiling (2002) explored the usefulness of exemplars in Studies in Higher Education 381 Downloaded by [Allama Iqbal Open University] at 06:31 22 August 2017
  • 6. student-centred assessment activity in a UK university. The participants were twenty-two first-year undergraduates majoring in environment and biology who were enrolled in a module. The task was a poster presentation. Students were asked to develop criteria for assessment, and then discussed the criteria with tutors. The tutors provided posters of previous students as exemplars that students used to modify their own work. Peer and self assessment were also used before the final submissions. Students were also asked to complete a questionnaire on the effectiveness of this assess- ment process. The study concluded that the use of exemplars was effective in helping students complete the task. Model answers, as a type of feedback similar to exemplars, were also found to be effective in getting higher scores according to Huxham (2007). Huxham (2007) surveyed 183 biology students in a UK university. The survey data demonstrated that students preferred personal comments to model answers as feedback, although they wanted both types of feedback; however, the examination results of 155 of the students indicated that model answers generated higher marks than personal comments. The study concluded that feedback should include both model answers and personal comments. Another study by Bloxham and Campbell (2010) explored the effectiveness of using a cover sheet on which students listed the aspects that they wished to have commented on. Data were collected by interviews with nine first-year students and a focus group of teachers in a UK university. The study concluded that the interactive cover sheets could promote interaction between students and teachers and activate stu- dents to take a role and some responsibility in assessment feedback. Similar studies on types of effective feedback were also carried out in an Australian context. These studies focused on the effectiveness of exemplars, and seemed to be in agreement with Orsmond, Merry, and Reiling’s (2002) study. Hendry, Bromberger, and Armstrong (2011) compared the effectiveness of marking sheets, exemplars and feed- back. Data were collected from two focus groups of altogether 10 students in a law course in a university in Australia; themes that emerged in the focus groups were then further explored by a questionnaire returned by 92 students in the same course. The study found that the students preferred exemplars to marking sheets in that exem- plars offered specific demonstration of the required standards that students could use to compare with their own work; students also believed that a combination of focused per- sonal comments and explanations and references to standards was effective. The find- ings of Hendry, Bromberger, and Armstrong’s (2011) study were similar to those of a small-scale survey study on exemplars conducted by Handley and Williams (2011) among students of one course run by a team of tutors of multi-disciplines in an Austra- lian university. The findings were based on 63 responses to a survey questionnaire out of 400 students. Students found that the online exemplars annotated with comments as feedforward were effective mainly because they helped students have specific infor- mation on structure and layout before they wrote the assignment; in addition, they also tended to initiate dialogue between tutors and students. These studies on types of effective feedback seemed to contradict the review result of Parboteeah and Anwar (2009), who concluded that ‘the type, style and pattern of feedback makes no difference to students’ learning’ (757). The reason, perhaps, is that Parboteeah and Anwar’s (2009) conclusion was based on the review of three studies (Ashwell 2000; Chandler 2003; Fazio 2001) on feedback given to second language writing, which focus on feedback on grammatical errors. However, the common information expressed by the studies reviewed above on types of feedback is that students preferred feedforward that could help them clarify 382 J. Li and R. De Luca Downloaded by [Allama Iqbal Open University] at 06:31 22 August 2017
  • 7. the requirements; they also expected comments to be personal, explanatory, and cri- teria-related. These features of feedback overlapped with a group of studies in the Aus- tralian context that explored the features of effective feedback by means of focus groups (Poulos and Mahony 2008), document analysis (Lizzio and Wilson 2008), and surveys (Ferguson 2011). Poulos and Mahony (2008) used four focus groups to investigate undergraduate students’ perspectives on effective feedback in a faculty of health sciences in an Australian university. They found that students preferred timely, consist- ent, transparent, and criteria-referenced assessment feedback. They concluded that the effect of feedback related to students’ opinions of those who assessed their work. In another Australian university, some other aspects of effective feedback were found by Lizzio and Wilson (2008), who explored students’ perspectives on effective feed- back in two studies. In the first study, they analysed the content of 238 written com- ments from 57 psychology, law and arts students who were asked to describe effective and ineffective feedback they had received on various types of written work. Effective feedback was found to have the following aspects: to support learning by relating to goals and strategies, to demonstrate the assessor’s engagement with the assessed written work so as to provide fair assessment, to acknowledge achievements and efforts, and to be considerate when commenting on negative aspects. Basing their research on the results of the first study, they surveyed 277 students in wider disci- plines. The common aspects of effective feedback found by the two studies are that feedback should be able to promote learning development, and be encouraging and fair. The findings of Poulos and Mahony (2008) and Lizzio and Wilson (2008) were partly confirmed by Ferguson (2011), who surveyed 101 undergraduate and 465 gradu- ate students who majored in education in an Australian university. The survey demon- strated that students preferred timely, personalised, criteria-referenced, positive, and clear feedback that could not only acknowledge their achievement but also lead to improvement. In addition, some UK studies reported features of ineffective feedback in relation to the causes. Weaver (2006) studied students’ opinions of feedback data that were col- lected by questionnaires returned by 44 students in the faculties of Business and Art & Design out of a total number of 510 students. Twenty-two students participated in group discussion, and provided samples of feedback they had received. The study found that students had positive opinions on the feedback they received but also pointed out that feedback could have been more usable if it had been too vague or general, negative, unrelated to criteria or marks, and without further explanation and advice on how to use the feedback. In addition, divergent opinions were found between students of the two majors, mainly on the usefulness of feedback given at the end of each module. This difference, according to Weaver (2006), resulted from how modules were designed and how feedback was provided according to the modules. The study concluded that the problems of feedback practice were due to tutors’ insufficient knowledge of effective practice, time limits because of workload, and perhaps personal beliefs on the purpose of feedback. Weaver (2006) acknowledged that the study was limited due to the low response to the questionnaire. With similar data collection methods but a larger number of participants, Hounsell et al. (2008) studied the quality of feedback and assessment guidance to students. They collected data from first and final year students at three biosciences departments in a UK university. Data comprised 782 questionnaires and 23 group interviews with 69 stu- dents. The survey data demonstrated that students overall had positive opinions on staff regarding their supportive attitude but less positive opinions on the feedback Studies in Higher Education 383 Downloaded by [Allama Iqbal Open University] at 06:31 22 August 2017
  • 8. and guidance they received. The interview data revealed that the process of feedback followed a six-step feedback loop: students’ prior experiences, initial guidance, ongoing guidance, feedback on achievement, supplementary support, and feedforward. However, the following problems were identified in this feedback loop: guidance was provided before assessment yet this was insufficient or misunderstood; although ongoing guidance was available, students were reluctant to approach staff to seek gui- dance; feedback was not given or was insufficient; the quality of feedback was incon- sistent because assignments of large classes were often marked by a group of markers; feedback related to a specific piece of work rather than being embedded in the daily process of learning; and, finally, as the survey data showed, students felt feedback sometimes was ineffective due to delay for various reasons. Moreover, Pokorny and Pickford (2010) pointed out the social-contextual influ- ences on the effectiveness of feedback. They explored students’ perspectives on feed- back using four focus groups of altogether 18 business majors who were in the first and final years of study in a UK university. They found that students did not believe that written feedback itself was often effective; the effectiveness of feedback was related to the social relationship between students and teachers. Moreover, they found that the final-year students had a broader and more complex view of feedback than the first-year students. The final-year students’ perspectives were close to what was suggested in literature, that feedback should clarify standards and goals, current per- formance, and strategies to achieve the goals. Pokorny and Pickford (2010) suggested that feedback should be contextually integrated in the whole process of learning. Higgins, Hartley, and Skelton (2002), basing their findings on interviews with 19 first-year students in Business and Humanities in two institutions in the UK and 94 questionnaires, concluded that feedback should be timely, should include explanation and suggestions for further improvement, should focus on higher-order concerns such as level of argument and critical analysis, should use language the students could under- stand rather than academic discourse, and should integrate peer-assessment and further tutor–student discussions. Overall, the selected studies on effectiveness of feedback were mainly carried out among students in various disciplines in UK and Australian contexts. The main approaches of data collection were survey, interview, and/or focus group. Students’ assessment preferences included timely, personal, criteria-referenced feedback that could be used for further improvement. 3. Studies on teachers’ beliefs about assessment feedback and their practices The foci of this group of studies fell into three categories: teachers’ practices in giving assessment feedback, teachers’ beliefs about assessment feedback, and convergences and divergences between beliefs and practices. 3.1 Practices of assessment feedback A common approach to exploring the practice of feedback was document analysis of recorded written comments on students’ work. One early and widely quoted study was conducted by Ivanic, Clark, and Rimmershaw (2000) in a university in the UK and a university in South Africa. Ivanic et al. compared five tutors of social sciences with four EAP (English for Academic Purpose) tutors regarding their practice of 384 J. Li and R. De Luca Downloaded by [Allama Iqbal Open University] at 06:31 22 August 2017
  • 9. providing feedback. Nine pieces of written comment were collected and analysed. Six categories were found: . explain the grade in terms of strengths and weaknesses; . correct or edit the student’s work; . evaluate the match between the student’s essay and an ‘ideal’ answer; . engage in dialogue with the student; . give advice which will be useful in writing the next essay; . give advice on rewriting the essay (55). The study found that the written comments varied widely in quantity and wording. More- over,subjecttutorstendedtousecommenttojustifygradeswhereasEAPtutorsaimedmore to help students rewrite the essays. Suggestions were given on staff development regarding how to write feedback, such as giving feedback on both drafts and final submission in relationtoquality,quantityandappropriatetiming,explainingcriteriatostudents,engaging students in feedback dialogue, considering the styles and messages conveyed by the feed- back, including both positive and negative feedback, using a personal tone, and combining written feedback with oral discussion. Suggestions were also given to EAP teachers especially with regard to facilitating students’ writing in their own disciplines. Later studies on assessment feedback had a relatively larger amount of data but with a different focus of study. For example, Mutch (2003) analysed the content of 122 feed- back sheets on essays written by undergraduates in 39 modules of a business course in a UK university. The categories of analysis were: Factual, Comment, Developmental, Implied developmental, Conversation, and Neutral. The study found that feedback was written in categorical terms and focused on knowledge and content. Feedback on how to improve was given to the less satisfactory work rather than the excellent essays. The study points out the need to reflect on programme design and support to teachers in order to have effective feedback practices. One limitation of the study was that the feedback sheet may not have been enough to present a whole picture of assessment feedback as some comments may have been written on the assessed paper. With the same approach, Stern and Solomon (2006) studied whether feedback prac- tice was in alignment with principles suggested in literature. They analysed written comments on 598 graded papers from 30 departments of a university in the USA. The content of the comments was coded into 23 categories. They found that most com- ments were given on micro-level items, especially lexical errors, and there was a lack of detailed rubrics and personal connections in the comments. Stern and Solomon (2006) compared the comments on papers for English courses with those for other subjects; unlike what was found by Ivanic et al. (2000), few differences were found except that more micro-level comments were given on papers for English. Stern and Solomon (2006) also compared the comments with the principles of effective written comments they summarised from literature, namely: to provide positive comments in addition to corrections; to provide feedback only on a few select areas that are deemed important for that particular writing assignment – those tied to the student learning goals for the paper assignment; and to provide comments that identify patterns of weaknesses, errors, and strengths. (25–27) The result of the comparison was that the feedback practices were not in alignment with the principles. Stern and Solomon (2006) suggested that further research should focus Studies in Higher Education 385 Downloaded by [Allama Iqbal Open University] at 06:31 22 August 2017
  • 10. on what kind of training teachers received on feedback practice. Compared with Ivanic et al.’s (2000) study, this study was larger in size and provided more detailed analysis on the content of feedback and focused on the common features of assessment. Although the findings of the study did indicate some divergent practices, this was not the focus of the study. Divergent practices within one discipline were found in the study by Read, Francis, and Robson (2005) on assessment and feedback in relation to gender differences and bias. The participants were 50 historians (half male, half female) from institutions across England and Wales. These historians were asked to assess two sample essays written by two students of different genders. After the assessment, the participants were interviewed on their rationale for assessment. The major finding of the study was that there were large variations in comments and grades given by the participants. Read, Francis, and Robson (2005) challenged objectivity as an assessment standard. In addition, the study did not find clear evidence of gender effect. A recent study conducted by Carless et al. (2011) explored effective feedback prac- tice using interviews with 10 teachers rated as excellent across 10 departments of a Hong Kong university. They found four common features of effective feedback prac- tice among these teachers: dialogic; incorporated with peer feedback or using portfolio assessment; facilitated by technologies such as online dialogues and blogs; and com- bined with activities that promote self-regulation. Moreover, they pointed out that the design of assessment tasks was important in order to make feedback sustainable. These studies revealed the variation of teachers’ practices and the need for pro- fessional training. However, the studies were mainly based on document analysis and interview; no data were collected by observations of the practices. 3.2 Beliefs about assessment feedback A limited number of studies have explored teachers’ beliefs about assessment feedback (Bailey and Garner 2010; Grainger, Purnell, and Zipf 2008; Harman and McDowell 2011). The common aspects of these studies were that data were collected mainly by interviews with experienced teachers and the foci are various, such as the purpose and content of assessment feedback. Grainger, Purnell, and Zipf (2008) studied markers’ beliefs about assessment stan- dards in an Australian university by analysing the conversations between five experi- enced lecturers at two meetings, during which the lecturers were asked to talk about their responses to some copies of students’ work. The findings of the study firstly con- firmed Sadler’s (1989) findings that markers focused on both technical aspects (grammar, referencing, paraphrasing, quotes, word limit, academic genre, structure, and English expression) and the content of the writing (level of detail, depth of analysis, depth of understanding, justification and evaluation). Moreover, the study found that although markers agreed on some common criteria of assessment, they interpreted the quality of standards differently. The study also mentioned that markers tended to make a holistic judgement first and then match aspects of the assessed work with specific items of criteria. Bailey and Garner (2010) explored the purposes of assessment feedback from tea- chers’ perspectives by interviewing 48 lecturers across disciplines in a UK university on their beliefs and experiences of feedback in assessment. The lecturers believed feed- back was used to inform learning, to justify the grade, and to meet the institutional requirements. However, they were uncertain whether students could respond to 386 J. Li and R. De Luca Downloaded by [Allama Iqbal Open University] at 06:31 22 August 2017
  • 11. feedback. They believed priority was often given to institutional requirements at the cost of its educational function. Harman and McDowell (2011) studied teachers’ believed roles in assessment activity in a UK university by 11 interviews and six follow-up interviews with design lecturers. Discourse analysis of the data revealed that the lecturers believed that their roles in assessment were providing expert guidance, encouragement and support, and objective judgement, as well as maintaining professional standards. However, the teachers felt pressure in that they were supposed to be objective judges in assessment activity, and that this identity was in contradiction with their positioning in other teaching activities such as the position of facilitator. Harman and McDowell (2011) pointed out that teachers needed training and support to help them confront dif- ficulties in assessment activity. 3.3 Comparison between teachers’ beliefs and practices A very limited number of studies have compared teachers’ beliefs with their practices in providing assessment feedback. The common findings of the studies were that teachers’ practices were not in agreement with their beliefs. One study was carried out by Orrell (2006) in a university in Australia. Data on the beliefs and practices of 16 experienced teachers of education courses were collected by two methods: think-aloud protocol and teachers’ reflection. The tape-recorded data were transcribed and coded using a grounded theory approach with content analysis. The think-aloud data revealed that teachers responded to students’ work in the following ways: teaching, editing, and feedback dialogue. The feedback varied significantly regarding the quantity and focus. Two teachers provided only grades. The interview data demonstrated that teachers believed feedback aimed to improve learning, but tea- chers were not sure about the utility of feedback. Furthermore, Orrell (2006) compared the teachers’ beliefs and practices and found very little convergence (22%). Although teachers believed assessment feedback aimed to facilitate self-evaluation and learning improvement, their feedback was more ‘defensive and summative’ (453). Teachers were frustrated because they were aware that the formative purpose of feedback had to give priority to the summative role. Teachers also felt pressure because teaching and learning were both driven by the measurement role of assessment. The study pointed out that the improvement of feedback practices requires the support of institutional pol- icies. Another study that compared beliefs with practices of assessment feedback was conducted by Li and Barnard (2010). They studied the beliefs and practices of tutors in a faculty of arts in a New Zealand university. Data were collected by survey among tutors in the faculty, interviews with 16 tutors, followed up with nine think-aloud and stimulated recall sessions, and two focus group meetings. The audio-recorded data were transcribed and analysed with a grounded theory approach. The study found that tutors’ major concern in practice was to justify their grading instead of their believed purpose of learning improvement. This finding aligns with other studies (e.g. Orrell 2006; Bailey and Garner 2010). Moreover, the study found that tutors often felt less authoritative in providing feedback and felt confused when they received different advice from those who supervised them; tutors’ practice of providing feedback was based on their personal experiences of receiving and giving feedback. The common issue from the studies that compared teachers’ beliefs and practices was that teachers needed professional training and support in their practice, an issue that was also identified by the studies on teachers’ practices and beliefs. Studies in Higher Education 387 Downloaded by [Allama Iqbal Open University] at 06:31 22 August 2017
  • 12. 4. Comparison between teachers’ and students’ beliefs Studies in different countries have compared students’ beliefs with those of teachers. One major common finding of the studies is that students and teachers often have diver- gent understandings of assessment and feedback. A leading study in this area was con- ducted by Lea and Street (2000). Their research methods included in-depth, semi- structured interviews with a total of 23 staff and 47 students who were involved in cross-disciplinary writing at two universities in England. It was revealed that tutors were ‘mainly influenced by specific conceptualizations of their own disciplines or subject area in their assessments of students’ writing’ (39). Special attention was given to ‘“structure” and “argument”’ (39). Though the teachers had a general idea of what good writing is, they could not describe explicitly ‘what a well-developed argu- ment looks like in a written assignment’ (39). Their interviews with students and learn- ing support teachers indicated that the main difficulty in academic writing resulted from ‘the conflicting and contrasting requirements for writing on different courses and from the fact that these requirements were frequently left implicit’ (38). They concluded that the requirements and feedback relating to writing assignments were complex and implicit, which required understanding of both linguistic consideration and ‘social and institutional relationships associated with it’ (45). Carless (2006) explored students’ and teachers’ perceptions of assessment and feed- back by a large-scale survey of 460 staff and 1740 students in eight universities in Hong Kong, a further small-scale, open-ended survey of 52 students, semi-structured inter- views with 15 students and six further interviews with students in Cantonese, and inter- views with five teachers. Divergent opinions were found between teachers and students: teachers believed their feedback was effective whereas students had different opinions. A consensus between teachers and students was that it was difficult for stu- dents to fully understand criteria. Meyer et al. (2010) surveyed 1238 first-year undergraduates and 879 teachers of undergraduates at four tertiary institutions in New Zealand regarding their attitudes towards assessment. They also collected assessment documents and conducted 14 inter- views with management staff not only in the above institutions but also in three additional institutions. According to the survey result, both teachers and students agreed that students’ work should be assessed by their teachers instead of independent markers. However, there were divergent opinions between teachers and students: stu- dents tended to believe that assessment was for accountability whereas teachers were more likely to believe assessment was for learning improvement; most students agreed that grade moderation or adjustment could be made to achieve consistency whereas most teachers disagreed. Moreover, the qualitative data revealed that insti- tutional policies emphasised assessment procedures and assessment of learning. The perspectives of both teachers and students of assessment and feedback were further explored by two recent studies in the UK by a combination of focus group and interview. Scaife and Wellington (2010) studied teachers’ and students’ opinions by group interview involving 60 students, and interviews with eight teachers in five fac- ulties across a university in the UK. The study found students were positive towards assessment without marks, peer assessment and diagnostic feedback. However, Scaife and Wellington (2010) were not able to fully compare students’ opinions with those of teachers because teachers did not follow all the interview questions. Neverthe- less, the study found that teachers had different understandings of assessment terms. Some teachers used diagnostic feedback but were not aware of it. Moreover, both 388 J. Li and R. De Luca Downloaded by [Allama Iqbal Open University] at 06:31 22 August 2017
  • 13. teachers and students found the integration of peer assessment and technology such as online blog effective, and opportunities to discuss with tutors to receive advice before the final assessment. They suggested that students should have two stages of sub- mission that could allow them opportunities to react to feedback. They also suggested there be space for teacher training and development. Beaumont, O’Doherty, and Shannon (2011) compared the perspectives on assessment feedback between students and teachers, including tutors, by focus group and interview data. Participants included 145 undergraduates and 23 teachers, including tutors across three disciplines in English universities. They found that although tutors believed feedback should aim at learning improvement, they also stated its summative purpose; they found that unmatched per- spectives between students and tutors were due to their different values and expec- tations with regard to feedback and the decrease in university resources that could support the activity of assessment and feedback. 5. The use of innovative approaches to assessment feedback Despite the theoretical discussions on assessment and feedback, Taras (2006) found, by analysis of historically documented assessment feedback in three faculties of a univer- sity in the UK, that the practice of assessment and feedback was hardly influenced by innovative principles such as interactive feedback and students’ participation. Three recent studies in UK settings have reported the use of intervention, mainly by individual tutorials, to increase first-year students’ engagement with assessment feed- back in educational programmes (Cramp 2011; Murtagh and Baker 2009; Prowse et al. 2007). The intervention approach reported by Prowse et al. (2007) comprised for- mative feedback and grade, together with suggestions on how to improve the grade. Extra marks were given on the subsequent submission for the students in an education programme who followed the formative feedback. The study concluded that this approach enhanced learning improvement. Murtagh and Baker (2009) reported the use of a one-to-one tutorial as intervention, during which students could discuss with the tutors the feedback and questions in relation to the next assignment and the goal of improvement. The effect of the intervention was compared with that of conventional written feedback using questionnaires among a cohort of first-year students in an edu- cation programme. The study concluded that the intervention was more effective in engaging students than some types of conventional written feedback. Similarly, in Cramp’s (2011) study, an intervention was provided to first-year students in an edu- cational programme in order to increase their engagement with feedback. The interven- tion included the following steps: initiating students’ reflection on feedback they had received; and suggesting students make plans to improve and have dialogue with tutors on feedback, assignments, and ways to improve. Cramp’s (2011) study con- cluded that the intervention improved students’ understanding of academic skills needed, assessment requirements, and written feedback; it also helped to construct stu- dents’ academic identities. Also in the UK context, a similar study was conducted by Fisher, Cavanagh, and Bowles (2011) on the effectiveness of intervention for first-year business students. The intervention was in the form of feedback on drafts together with individual tutor- ials. One hundred and six out of 539 students who chose to submit drafts on their assignment had one-to-one discussion with their tutors on the submitted draft and received a feedback sheet. The analysis of grades on final submission demonstrated Studies in Higher Education 389 Downloaded by [Allama Iqbal Open University] at 06:31 22 August 2017
  • 14. that the feedback on drafts was effective in increasing students’ grades. The effective- ness was also supported by students’ comments. These types of tutor–student interventions, along with peer feedback and assess- ment (see, for example, Higgins, Hartley, and Skelton 2002), give students a participant role in assessment, which is traditionally a hierarchical exercise. Reynolds and Trehan (2000), taking a critical literacy approach to participative assessment, caution that social hierarchies may still operate within these social groupings formed for assessment purposes and tutors need to be aware of the complexities of new power relationships. In addition, some other studies on alternative approaches to assessment also indi- cated how assessment feedback could be used. For example, Schalkwyk (2010) reported the practice of an early assessment system used in a university in South Africa. Students’ work was assessed during the first few weeks of each module. The assessment result was then sent to the students and other stakeholders to inform them of the students’ level of achievement compared with that of their peers. The assessment aimed to set up goals of development for students, to track students’ improvement, and to ensure their success. The study concluded that this early formative assessment approach enhanced the collaboration between different stakeholders and promoted positive changes in attitudes towards and practices of assessment, despite the problems such as increased workload. Moreover, a number of studies have reported the use of portfolio assessment in different countries. A most recent study was con- ducted by Dysthe and Engelsen (2011) on the portfolio practices in Norwegian tertiary institutes by a survey questionnaire among teachers and programme leaders randomly selected in five universities and 25 colleges across the country. The 303 responses were analysed with SPSS software. The study found that the educational policies boosted the use of portfolios; however, the effect was limited by the module forms of courses, class size, and differences in pedagogical traditions and disciplines. Very recently, an increasing number of studies have focused on the use of technologies such as Turnitin (Davis 2007) on assessment and feedback; the use of e-assessment was also reported in disciplinary areas (Brinka and Lautenbach 2011). 6. Conclusion and suggestions This review of 37 empirical studies on assessment feedback for undergraduates in various disciplines demonstrates the following themes. Firstly, teachers believed that assessment feedback should inform learning as well as justify grading. However, they found it hard to balance these roles in their practice. Secondly, students had various expectations of assessment feedback, the most common descriptors of which included timely, personal, explicable, criteria-referenced, objective, and applicable to further improvement. Thirdly, the major divergent perspective between students and teachers was regarding how well assessment feedback served the formative role. More- over, there were contextual limitations of assessment feedback, including modular pat- terned programmes, teachers’ workloads, and institutional policies. From this review, we noticed the tensions between formative and summative roles of assessment and the need for professional training and support for teachers. We also noticed that there is a lack of collaboration and consultation between subject teachers and those who teach academic writing. Of special note among studies reviewed are the limited number of studies on the actual practices of feedback; the limited number of both countries and academic disci- plines as research sites; the narrow range of data collection methods; the occurrence of 390 J. Li and R. De Luca Downloaded by [Allama Iqbal Open University] at 06:31 22 August 2017
  • 15. divergent and even contradictory findings; and the pioneer studies on approaches that could improve the effectiveness of assessment feedback. Further research should focus on the following topics: firstly and most urgently, a framework that can guide the practice of assessment feedback in assessing writing in the various disciplines. Perhaps the first step is to articulate beliefs and assump- tions of various stakeholders (Broad 2003; Huot 2002), including students and tea- chers within a programme as well as programme managers, course coordinators, and policy makers. Further studies can work on the issues and insights gained from the cross-disciplinary and cross-section dialogues. Secondly, studies should not only focus on specific contextual issues but also investigate general issues across disci- plines so that some general principles can be applied to different contexts. It may be worthwhile exploring whether there is any difference in assessment feedback pro- vided in hard science and in the humanities. Thirdly, more studies are needed on teachers’ actual practice of providing assessment feedback, difficulties they come across, and strategies and solutions they seek; and more studies are needed on the impact of assessment on teachers. Fourthly, studies are needed on how the use of new technologies such as e-assessment can affect the practice of assessment feed- back. Finally, studies need to use multiple methods of data collection to explore various aspects of assessment practice to provide more holistic pictures of assess- ment feedback. References Ashwell, T. 2000. Patterns of teacher response to student writing in a multiple-draft composition classroom: Is content feedback followed by form feedback the best method? Journal of Second Language Writing 9, no. 3: 227–57. Bailey, R., and M. Garner. 2010. Is the feedback in higher education assessment worth the paper it is written on? Teachers’ reflections on their practices. Teaching in Higher Education 15, no. 2: 187–98. Beaumont, C., M. O’Doherty, and L. Shannon. 2011. Reconceptualising assessment feedback: A key to improving student learning? Studies in Higher Education 36, no. 6: 1–17. Bloxham, S., and L. Campbell. 2010. Generating dialogue in assessment feedback: Exploring the use of interactive cover sheets. Assessment & Evaluation in Higher Education 35, no. 3: 291–300. Brinka, R., and G. Lautenbach. 2011. Electronic assessment in higher education. Educational Studies 37, no. 5: 503–12. Broad, B. 2003. What we really value: Beyond rubrics in teaching and assessing writing. Logan, UT: Utah State University Press. Brown, E., and C. Glover. 2006. Evaluating written feedback. In Innovative assessment in higher education, ed. C. Bryan and K. Clegg, 81–91. Abingdon: Routledge. Carless, D. 2006. Differing perceptions in the feedback process. Studies in Higher Education 31, no. 2: 219–33. Carless, D., D. Salter, M. Yang, and J. Lam. 2011. Developing sustainable feedback practices. Studies in Higher Education 36, no. 4: 395–407. Chandler, J. 2003. The efficacy of various kinds of error feedback for improvement in the accu- racy and fluency of L2 student writing. Journal of Second Language Writing 12, no. 3: 267– 96. Cramp, A. 2011. Developing first-year engagement with written feedback. Active Learning in Higher Education 12, no. 2: 113–24. Crisp, B.R. 2007. Is it worth the effort? How feedback influences students’ subsequent sub- mission of assessable work. Assessment & Evaluation in Higher Education 32, no. 5: 571–81. Davis, M. 2007. The role of Turnitin within the formative process of academic writing. Brookes eJournal of Learning and Teaching 2, no. 2, http://bejlt.brookes.ac.uk/article/. Studies in Higher Education 391 Downloaded by [Allama Iqbal Open University] at 06:31 22 August 2017
  • 16. DeLuca, C., and D.A. Klinger. 2010. Assessment literacy development: Identifying gaps in teacher candidates’ learning. Assessment in Education 17, no. 4: 419–38. Dysthe, O., and K.S. Engelsen. 2011. Portfolio practices in higher education in Norway in an international perspective: Macro-, meso- and micro-level influences. Assessment & Evaluation in Higher Education 36, no. 1: 63–79. Fazio, L. 2001. The effects of corrections and commentaries on the Journal writing accuracy of minority and majority language students. Journal of Second Language Writing 10, no. 4: 235–49. Ferguson, P. 2011. Student perceptions of quality feedback in teacher education. Assessment & Evaluation in Higher Education 36, no. 1: 51–62. Fisher, R., J. Cavanagh, and A. Bowles. 2011. Assisting transition to university: Using assess- ment as a formative learning tool. Assessment & Evaluation in Higher Education 36, no. 2: 225–37. Grainger, P., K. Purnell, and R. Zipf. 2008. Judging quality through substantive conversations between markers. Assessment & Evaluation in Higher Education 33, no. 2: 133–42. Guba, E., and Y. Lincoln. 1989. Fourth generation evaluation. Beverly Hills, CA: Sage. Handley, K., and L. Williams. 2011. From copying to learning: Using exemplars to engage stu- dents with assessment criteria and feedback. Assessment & Evaluation in Higher Education 36, no. 1: 95–108. Harman, K., and L. McDowell. 2011. Assessment talk in design: The multiple purposes of assessment in HE. Teaching in Higher Education 16, no. 1: 41–52. Hattie, J., and H. Timperley. 2007. The power of feedback. Review of Educational Research 77, no. 1: 81–112. Hendry, G.D., N. Bromberger, and S. Armstrong. 2011. Constructive guidance and feedback for learning: The usefulness of exemplars, marking sheets and different types of feedback in a first year law subject. Assessment & Evaluation in Higher Education 36, no. 1: 1–11. Higgins, R., P. Hartley, and A. Skelton. 2002. The conscientious consumer: Reconsidering the role of assessment feedback in student learning. Studies in Higher Education 27, no. 1: 53–64. Hounsell, D., V. McCune, J. Hounsell, and J. Litjens. 2008. The quality of guidance and feed- back to students. Higher Education Research & Development 27, no. 1: 55–67. Huot, B. 2002. (Re)articulating writing assessment for teaching and learning. Logan, UT: Utah State University Press. Huxham, M. 2007. Fast and effective feedback: Are model answers the answer? Assessment & Evaluation in Higher Education 32, no. 6: 601–11. Ivanic, R., R. Clark, and R. Rimmershaw. 2000. What am I supposed to make of this? The mess- ages conveyed to students by tutors’ written comments. In Student writing in higher edu- cation: New contexts, ed. M.R. Lea and B. Stierer, 47–65. Buckingham: Society for Research into Higher Education & Open University Press. Joughin, G. 2008. Assessment, learning and judgement in higher education. London: Springer. Lea, M.R., and B.V. Street. 2000. Student writing and staff feedback in higher education: An academic literacy approach. In Student writing in higher education: New contexts, ed. M. R. Lea and B. Stierer, 31–46. Buckingham: Society for Research into Higher Education & Open University Press. Li, J., and R. Barnard. 2010. Academic tutors’ beliefs about and practices of giving feedback on students’ written assignments: A New Zealand case study. Assessing Writing 16, no. 2: 137–48. Lizzio, A., and K. Wilson. 2008. Feedback on assessment: Students’ perceptions of quality and effectiveness. Assessment & Evaluation in Higher Education 33, no. 3: 263–75. Meyer, L.H., S. Davidson, L. McKenzie, M. Rees, H. Anderson, R. Fletcher, and P.M. Johnston. 2010. An investigation of tertiary assessment policy and practice: Alignment and contradic- tions. Higher Education Quarterly 64, no. 3: 331–50. Murtagh, L., and N. Baker. 2009. Feedback to feed forward: Student response to tutors’ written comments on assignments. Practitioner Research in Higher Education 3, no. 1: 20–28. Mutch, A. 2003. Exploring the practice of feedback to students. Active Learning in Higher Education 4, no. 1: 24–38. Nicol, D.J., and D. Macfarlane-Dick. 2006. Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education 31, no. 2: 199–218. 392 J. Li and R. De Luca Downloaded by [Allama Iqbal Open University] at 06:31 22 August 2017
  • 17. Orrell, J. 2006. Feedback on learning achievement: Rhetoric and reality. Teaching in Higher Education 11, no. 4: 441–56. Orsmond, P., S. Merry, and K. Reiling. 2002. The use of exemplars and student derived marking criteria in peer and self-assessment. Assessment & Evaluation in Higher Education 27, no. 4: 309–23. Orsmond, P., S. Merry, and K. Reiling. 2005. Biology students’ utilization of tutors’ formative feedback: A qualitative interview study. Assessment & Evaluation in Higher Education 30, no. 4: 369–86. Parboteeah, S., and M. Anwar. 2009. Thematic analysis of written assignment feedback: Implications for nurse education. Nurse Education Today 29, no. 7: 753–57. Pokorny, H., and P. Pickford. 2010. Complexity, cues and relationships: Student perceptions of feedback. Active Learning in Higher Education 11, no. 1: 21–30. Poulos, A., and M.J. Mahony. 2008. Effectiveness of feedback: The students’ perspective. Assessment & Evaluation in Higher Education 33, no. 2: 143–54. Price, M., J. Carroll, B. O’ Donovan, and C. Rust. 2011. If I was going there I wouldn’t start from here: A critical commentary on current assessment practice. Assessment & Evaluation in Higher Education 36, no. 4: 479–92. Prowse, S., N. Duncan, J. Hughes, and D. Burke. 2007. ‘… do that and I’ll raise your grade’. Innovative module design and recursive feedback. Teaching in Higher Education 12, no. 4: 437–45. Read, B., B. Francis, and J. Robson. 2005. Gender, ‘bias’, assessment and feedback: Analyzing the written assessment of undergraduate history essays. Assessment & Evaluation in Higher Education 30, no. 3: 241–60. Reynolds, M., and K. Trehan. 2000. Assessment: A critical perspective. Studies in Higher Education 25, no. 3: 267–78. Russell, D.R., and A. Yañez. 2003. ‘Big picture people rarely become historians’: Genre systems and the contradictions of general education. In Writing selves/writing societies: Research from activity perspectives, ed. C. Bazerman and D. Russell, 331–62, http://wac. colostate.edu/books/selves_societies/. Rust, C., B. O’ Donovan, and M. Price. 2005. A social constructivist assessment process model: How the research literature shows us this could be best practice. Assessment & Evaluation in Higher Education 30, no. 3: 231–40. Sadler, D.R. 1989. Formative assessment and the design of instructional systems. Instructional Science 18, no. 2: 119–44. Sakyi, A.A. 2000. Validation of holistic scoring for ESL writing assessment: How raters evalu- ate compositions. In Fairness and validation in language assessment, ed. J.J. Kunnan, 129– 52. Cambridge: Cambridge University Press. Scaife, J., and J. Wellington. 2010. Varying perspectives and practices in formative and diagnos- tic assessment: A case study. Journal of Education for Teaching 36, no. 2: 137–51. Schalkwyk, S. 2010. Early assessment: Using a university-wide student support initiative to effect real change. Teaching in Higher Education 15, no. 3: 299–310. Shute, V.J. 2008. Focus on formative feedback. Review of Educational Research 78, no. 1: 153– 89. Stern, L.A., and A. Solomon. 2006. Effective faculty feedback: The road less travelled. Assessing Writing 11, no. 1: 22–41. Taras, M. 2002. Using assessment for learning and learning from assessment. Assessment & Evaluation in Higher Education 27, no. 6: 501–10. Taras, M. 2006. Do unto others or not: Equity in feedback for undergraduates. Assessment & Evaluation in Higher Education 31, no. 3: 365–77. Walker, M. 2009. An investigation into written comments on assignments: Do students find them usable? Assessment & Evaluation in Higher Education 34, no. 1: 67–78. Weaver, M.R. 2006. Do students value feedback? Student perceptions of tutors’ written responses. Assessment & Evaluation in Higher Education 31, no. 3: 379–94. Studies in Higher Education 393 Downloaded by [Allama Iqbal Open University] at 06:31 22 August 2017