2. Does assessment matter?
“Assessment makes more difference to the way that students spend their time, focus their effort, and perform, than any other aspect of the course they study, including the teaching. If [lecturers] want to make their course work better, then there is more leverage through changing aspects of the assessment than anywhere else...” (Gibbs, 2010)
3. Three focuses for today
Turning course objectives into learning focuses (from lecturer focus to student focus)
Thinking about assessment in terms of levels of cognitive/affective/behavioural ‘demand’
Analysing the extent to which student responses to assessment are evidence that the learning focus has been achieved
4. Why do we assess?
To check how much/what students know
To grade performance; to certify
To assess our teaching
To facilitate learning
To differentiate/separate/classify students
To promote/model thinking
To reflect on our purposes/aims/goals (Newton, 2007)
5. Contributions of student learning research
Focused on learning from the learner’s perspective
Drew attention to qualitative differences in the way students learn
Learning related to the learner’s conception of the task
Possible to alter the quality of learning through changing the assessment task (Marton & Säljö, 1976a & b)
6. Implications for assessment
Assessing for knowledge acquisition: what knowledge is valued and why?
Assessing for application of knowledge/procedure/process: what is meant by application?
Assessing for understanding: what is “understanding”?
Assessing for transformation/critical thinking/independent thought: meaning?
7. Alignment between learning and assessment
A matrix mapping aspects of learning (rows) against levels of demand (columns):
Columns: Recall | Application | Understanding | Transformation/re-working | Review/critique/evaluation
Rows: Content | Concept | Process | Argument | Application | Theory | Comparison | Scenario | ...
8. Assessment: design questions
What do we want to assess?
How do the assessment activities align with course objectives/content/learning?
What ‘signals’ do assessments send to students about the course and its values/learning?
What form/s of assessment (e.g. selected response; case study; essay) best suit what we are assessing?
9. Assessment: design questions
How will students know how and what to prepare for?
What will a ‘model’ answer to an assessment look like, and how will we know?
How will the assessment be assessed, and who will mark it?
How and in what form will feedback be given?
How will one assessment ‘fit’ with another?
10. Issues of validity
Traditional conceptions of validity: do tests measure what they are intended to measure?
BUT: how do we know anyway?
Assessment (like most educational activity) is a socially constructed, contextualised, interpretive act (cf. Gipps, 1999)
Everything affects validity!
Collection of validity evidence (Messick, 1994)
11. Impacts on validity
What we are assessing
The degree/programme/course objectives
How we conceive of the assessment and its outcome
What we communicate to students
The conditions under which students do the assessment
How it is marked and moderated
12. Assessment grading
What I wish I had told students about how I was going to grade/mark their work
What I wish colleagues had told me about what I should tell students about how their work would be graded/marked
15. Rubrics/templates/schemes/memoranda/guidelines
Advantages:
Enable one to think about what one is trying to assess and at what level/s
Allow for thinking about what is ‘measurable’ and how it might be measured
Allow for standardisation of assessment
Create conditions under which assessment can be replicated
Create a framework for feedback to students
16. Rubrics/templates/schemes/memoranda/guidelines
Advantages (continued):
Allow other academics access to the assessment
A modified rubric can give students an insight into how to prepare for the assessment (caveat)
Provide an opportunity to evaluate the relative importance of a topic/theme/concept/aspect
Allow external examiners a chance to verify the judgment
17. Rubrics/templates/schemes/memoranda/guidelines
Limitations:
May block creative/original responses not cued in the guideline
Could ‘blind’ the assessor to an unexpected response
May assume that learning is tangible/‘measurable’/visible: how does one measure what is more intangible, abstract, or complex?
May trivialise the learning / create dogma
18. Rubrics/templates/schemes/memoranda/guidelines
Limitations (continued):
May assume that an interpretive act of judgment is uniformly understood by, and amenable to, other assessors: that there is a uniform/universal understanding of what a ‘good’ answer is and of how a problem is solved, and that what is tacit can be made explicit
Potentially encourage students to ‘find the right answer’
19. Some useful resources
Angelo, T.A. & Cross, K.P. (1993). Classroom Assessment Techniques: A Handbook for College Teachers. San Francisco: Jossey-Bass.
Downing, S.M. & Haladyna, T.M. (Eds) (2006). Handbook of Test Development. New Jersey: Lawrence Erlbaum.
20. Some useful resources
Freeman, R. & Lewis, R. (1998). Planning and Implementing Assessment. London: Kogan Page.
Gibbs, G. (2010). Using Assessment to Support Student Learning. Leeds: Leeds Met Press.
Nitko, A.J. (2001). Educational Assessment of Students. New Jersey: Merrill.