This document provides guidance for teachers to evaluate student work consistently against shared standards. It sets out ground rules for leaving personal biases at the door and basing scores solely on evidence in the work. A formal scoring protocol guides teachers through individually scoring samples, comparing scores, and agreeing on anchor pieces that exemplify each level of performance. Post-scoring reflection questions draw out insights about student strengths and weaknesses and about ways to improve future assignments and scoring standards. The goal is for teachers to work together toward fair and coherent assessment of student performance.
1. Ground Rules for Looking at Student Work
(Adapted from the Maine Department of Education Assessment Materials)
1. Believe that shared standards are possible.
2. Agree to agree – work toward shared standards.
Everyone has to work toward consistency and agreement on how to evaluate
student work. Agree to use the student work itself to support personal points of view on
benchmarking and scoring, and then agree to let go of a personal expectation or position
in order to score student work fairly.
3. Leave your personal standards at the door.
Everyone has his or her own standards and expectations about student
performance. These standards are important in the classroom, but need to be put aside
when striving to evaluate student work against a Performance Indicator-based scoring
guide.
4. Discussions are for clarifying, not arguments to win.
Discussions about student work may take on the tone of an argument, but they
are not arguments to be won or lost. They serve a useful purpose in developing shared
standards and expectations, and in making sure that scoring is fair to all students.
5. Match evidence in the student work to the descriptors in the rubric
and scoring guide.
Scoring decisions must be based on particular evidence in the student work. Be
able to point to the evidence in the student work that is relevant to scoring fairly and
consistently. Scoring must not be based on what you think the student meant,
but on what the student actually demonstrated in the work.
6. Treat each Performance Indicator separately.
Student performance on one Performance Indicator must not influence the score
for another Indicator.
2. Sources of Scorer Bias
(Adapted from the Maine Department of Education Assessment Materials)
1. Appearance of student work – neatness or messiness, legibility
Avoid equating neatness with high quality work, and messiness with low quality
work. Unless the Performance Indicator assesses neatness, it should not be
considered in scoring.
2. Your personal reaction to student choice of topic, position, or
strategy.
You may personally disagree with a position the student has taken, a reference
made, or an inefficient strategy. Your personal reaction must not impact scoring
based on what the Performance Indicator requires students to demonstrate.
3. The tone of the student’s work.
Students may be surly or bright or otherwise exhibit attitudes toward the task or
the topic that you find pleasant, endearing, or disturbing. The tone of the student’s
communication should not influence scoring.
4. The Halo Effect.
Strong performance in one part of an assessment must not influence scoring of
another part. Treat each indicator separately.
5. Apparent effort or improvement from previous efforts.
Effort isn’t scored on Performance Indicator-based rubrics. Students who clearly
tried hard, or those who clearly didn’t, are still scored on what they actually
demonstrated.
6. Length or complexity of student response.
Longer responses aren’t automatically higher in quality or complexity, and shorter
responses don’t necessarily indicate poor quality. Evaluate each response for its
content.
7. Relative quality (“It’s better than others I’ve seen.”)
Standards-based scoring does not include relative quality. Each piece of work is
scored against the standard – not against all the other pieces of work in the
sample.
8. Familiarity with the student.
Put aside preconceived ideas of what this student can or should be able to
demonstrate, and concentrate on what has been demonstrated.
3. Working for Assessment Coherence
A Formal Scoring Protocol
1. Discuss the purpose of the process (to agree that everyone will score student work
in the same way and to create a knowledge base upon which to build the
continuous improvement of student learning).
2. Read ground rules, explore potential sources of bias, and review the process.
3. Review the performance indicators (elements in the left-hand column of the
rubric, checklist, or assessment standard that you have agreed to work with).
4. Review the context of the assignment that created the student work and any other
supporting information that might be available.
5. Make sure that everyone has some samples of student work to work with –
PREFERABLY from a group of students not in his/her own class.
6. BEFORE you work with a partner, try to INDIVIDUALLY score the indicators ONE
AT A TIME by sorting the student work into piles or areas according to the
following guidelines: clearly 0, clearly 3, possible 1, possible 2. (If you are not
using a four-step rubric, apply the same guidelines to whatever scale you have,
e.g., high, middle, low.)
7. Once you have the work sorted, seek agreement with your partner on the 0 and 3
piles – identify one mutually agreed-upon example of each. Write a brief explanation
on the form as to why these samples are what they are. Cite specific evidence in the
student work and relate it to the descriptions in the rubric.
8. Record any questions raised during the validation process.
9. Begin a scoring discussion to identify anchor pieces for the 1s and 2s for
this Performance Indicator, and explain why they are exemplars for these scores.
Cite specific evidence in the student work. Record any observations or questions.
Work until you have identified a piece of work you can all agree on for each of
these levels. Write a brief explanation as to why these samples are what they are.
Cite specific evidence in the student work and relate it to the descriptions in the
rubric.
10. Once you have an anchor for each indicator at each level, sort through the
remaining samples and score them according to the models you have identified.
11. Once you have scored all the work you have, reconvene with the peers who
have been scoring the same work. Share your anchor sets and the rationales you
created for them – what similarities and differences do you notice? Try to
reconcile any discrepancies and agree on a master set of anchors for the standard
and/or assignment you were working with.
12. Answer the reflection questions.
4. Working for Assessment Coherence
Post-Scoring Reflection Questions
1. What inferences can you draw from the work you scored regarding the general
strengths and weaknesses of student performance in this area?
2. What inferences can you draw from the work you scored regarding the general
strengths and weaknesses of the assignment/teaching strategy that was used in
this activity?
3. Do you have any suggestions for how the scoring standard itself might be
improved based on your experiences scoring this student work? Did you find
major differences between how the work was scored from person to person? If
so, why?
4. What do you think is the most important improvement in teaching or
achievement suggested by this work?