Summary of Observations and Recommendations Based on Scoring Students’ Written Communication
Using the VALUE Written Communication Rubric and Institutionally Developed Written
Communication Rubrics on Feb. 29, 2012
Overall Observations about the VALUE Rubric
Did not like language of the levels—too easy to attach to years of study
Had difficulty differentiating between content development and genre/disciplinary conventions
because of their overlap with organization
Circumstances surrounding the writing task: how closely do we want to evaluate students'
adherence to a task (assignment)? Is that what "following directions" means?
Do we need normative information, such as examples by level, to use across institutions?
Should we be tracking students’ performance chronologically?
Three out of four preferred the institutionally designed rubric if a "sources" criterion and
an "NA" column could be added. The VALUE rubric contains a criterion for sources; the
institutional example did not, as some assignments may or may not require sources
Institutionally developed rubric has "development of topic," a good criterion lacking in the
VALUE rubric
VALUE rubric does not have an absolute “Fail”—zero (group 2)
Too much is packed into the VALUE rubric if given to students; it would be more confusing for
them. The institutional rubric would be easier for them to understand
Need for extensive training/norming
Question about guidelines for assignments
Thought there was a lot of overlap between categories
Good assignments are critical
Assessing outside your expertise is difficult
Categories Genre and Disciplinary Conventions and Content Development: these seem most
related to the assignment; assignments should speak specifically to the expected format of
documentation
Category: Sources and Evidence: there seem to be two vectors in the rubric, sound versus
unsound sources and success in using them consistently; it is difficult to parse these two
Ways in Which Criteria in the VALUE Rubric and Institutionally Developed Rubric Are
Similar
Mechanics and syntax are common criteria
Sources are common in both
Both address most of the same concerns:
Purpose
Sources & evidence
Mechanics
Organization
Syntax
Ways in Which Criteria in the VALUE Rubric and Institutionally Developed Rubric
Differ
Liked 4 better in the VALUE rubric
VALUE has broader categories and institutional rubrics have more specifics (e.g. tone)
Institutional rubrics—more opportunity for constructive feedback (due to specificity)
Critical thinking is not broken out in VALUE rubric
No critical thinking or synthesis in VALUE (Sources and Evidence not quite sufficient as a
category)
VALUE rubric brings in idea of attending to disciplinary conventions
Institutional rubric may be more limited in terms of applicability to various types of writing, but
is also more detailed, which is good
Different terminology for similar categories (both in the category titles and within them)
Experiences Using the VALUE Rubric to Score Samples
We certainly need a zero because this isn't as bad as it could get; we do need a "don't know" option
Difficult to judge sources of evidence; therefore a range in our scoring of Sample C
Much has to do with how "develop" and "explore" are defined
Would like to know more about the discipline
Disagreement about “conforming to assignment”
What does “some errors” mean?
Sample A
Regarding the VALUE rubric and Sample A: for our group this was somewhat
straightforward, with more similarity than on C. We did note the need to have a "0" to spread
out the scale (because some will not meet benchmark) and to have a don't know/not applicable
category (depending upon the assignment, what we know about the references, disciplinary
differences, etc.).
Sample C
Regarding the VALUE rubric and Sample C: we were more mixed on this one -- a
problem of the rubric -- it is a much better paper, but the scores aren't that different. Clearly an
upper-level English paper.
We all agreed it was a better paper, but we didn't agree on ratings -- and in some cases, the second
paper got lower ratings from a reviewer than were given on Paper A. Much of our discussion
about why this was so tied back to the assignment -- its complexity/sophistication and the choice
the student made in how to respond.
We also discussed the issue of inter-rater reliability -- we all come at this
assignment with a different set point.
We also got into standards for judgment -- one person said this shouldn't be
graded as an English paper; the standard is more: do they demonstrate the
capacity to "communicate on the street"? With this standard, the student
would meet the requirement.
Ongoing discussion of whether or not there was a thesis -- some felt there was (fitting with the
assignment requirement); others saw none, or one poorly developed.
Sources of evidence -- quality is a discipline-based discussion. Need to have the disciplinary
perspective represented in the exercise.
Differences in goals for the exercise -- do you want them to be able to hit the streets with this
level of skill? Sure. But in four years, we want to show they are embedded in the discipline.
Speaks to the need to work on the rubrics -- they are problematic in that they do not discern the
greater difference in quality.
What are the goals -- what is the purpose? Going on the street to communicate, sure. Really
representing the discipline, not so much.
We are doing this for a comparative purpose (compared to A, compared to
expectations in the disciplines) -- but the rubric asks you to score each paper on its
own terms.
We have to be very careful about selecting evidence and examples. The assignment
affects the scoring; variability creates more variability.
A representative from one campus is looking at the possibility of
reviewing/assessing student work at three levels -- how is it assessed at the
course level, the writing program level, and the disciplinary/departmental
level? This also made us wonder how this multi-level approach could work for the Vision Project
goals of comparability, a "best" standard, etc.
Discussion of UMB Criteria/Rubric
We were mixed on whether the UMB rubric could map against VALUE -- one felt
there was minimal mapping across the two (and that the UMB rubric was very much tied to
their specific assignment, and couldn't work easily for the other assignments we were given).
Others felt there was a connection (particularly if we consider the Critical Thinking and Critical
Reading rubrics as well).
...the UMB unpacks the VALUE rubric, which makes it easier to use. Sure, I
could map the two against each other and identify the commonalities (for
example, in a grid) -- but the UMB is easier to use.
The elements of writing are all common -- there will be lots of overlap.
Although we didn't all see this commonality in this rubric -- some felt the
connections are more implied, processes "behind" what the VALUE rubric
identifies (for example, you can't adequately consider audience context without critical reading
and thinking skills, etc.). Other relevant VALUE rubrics (Critical Thinking and Critical Reading)
overlap with this. If we are going to be evaluating communication at the state level, we need to
take these together.
The advantage of the UMB rubric is that it really is a gen-ed-focused assessment tool, as
opposed to the VALUE rubric. Question of relative first-time pass rates by major discipline --
the assumption being some majors will find the task easier than others.
This is much more a reading rubric, focused very much on the readings assigned.
The rubric can't be used to assess an outside paper -- but students are being asked to carefully read
resources. In Sample C, some of us could make connections for using "source" material, so
some of it could be used.
Student C doesn't do the assignment correctly, so this is not correct. A business person would say
the student didn't do the assignment, so it fails. (The rubric doesn't ask whether you did the
assignment well.)
If the assignment were targeted for the UMB rubric, we could use it.
We also wondered about this exercise within the context, purpose, and audience of the statewide
Vision Project assessment, with its goals of a "best" standard, comparability, a focus on three
outcomes, etc. We need to be clear about these issues for the Vision/LEAP task as a whole.