Contextual Factors
Definition
Factors which reflect a particular context: characteristics unique to a particular group, community, society, or individual.
Context – the educational setting
Characteristics – particular to a person, place, or thing (the characteristics of the educational setting that you will report on). Provide a discussion of the contextual factors in your school.
Specifics for Discussion in the Contextual Section
COMMUNITY
Urban or rural
Community composition (ethnic, political, progressive)
Student population (what is it made up of: Black/White, girls/boys?)
Student achievement level (A, B, C students). You can offer test scores as evidence. Are adjustments needed to ensure student achievement? Where do these students live in your community?
What type of social community is it (working class, farming, middle class, lower class)? What drives employment (high-paying or low-paying jobs)? What is the families’ income level?
How typical is your school in comparison to other schools (small, large, regular; ethnic, political, progressive)?
Characteristics of the school itself (age of the building, number of classrooms, typical classroom size)
What grade level are your students? What grade levels exist in your school?
Describe the characteristics of the classroom (small, large, windows, doors, etc.)
Describe the classroom atmosphere
Compile a list of the following and then discuss it in your TWS (identified previously).
Classroom characteristics (e.g., “The classroom was small and not well lighted. There are 15 desks in the classroom and one blackboard on the back wall. Two bookshelves are located on each side of the door as you enter the room. The lighting in the room was poor, as several bulbs needed replacing,” etc.) You must describe:
Student characteristics
Community characteristics
District characteristics
Building characteristics (these may vary in each building of the school)
Identify a group of students with similar characteristics and discuss that group (remain with the contextual subject).
You may also identify one student’s characteristics and discuss them.
For Your Information
Follow all guidelines and make sure you discuss what is being asked of you. This section is contextual; make sure you follow and discuss only contextual factors.
You should have 1–2 pages for the contextual section.
Draw conclusions. What conditions result in low grades? Some may be poor attendance, overcrowded classrooms, lack of parent involvement, lack of qualified staff, and so on. What conditions improve student achievement (e.g., classrooms that are not overcrowded)? Report the implications that may lead to a particular condition, and use what you are reporting to make this a good section of your paper.
Don’t include student or parent names in your report.
Know who you are teaching.
Learning Goals
Now that you have the contextual section, you may begin to develop learning goals.
Align goals with national, state, or local standards.
Have 3 to 6 learning goals.
Clearly state the learning goals.
Review Bloom’s Taxonomy’s six levels of learning: Knowledge, Comprehension, Application, Analysis, Synthesis, and Evaluation.
Don’t list learning goals as activities.
Discuss why the goals you are using are important.
State the learning goal. Make sure it is significant, clear, challenging, and appropriate for students.
Use subheadings to orient the reader to what you are explaining: appropriateness, challenge, significance, etc. Subheadings are in bold in each of the TWS processes.
Explain how each learning goal is aligned with Bloom’s Taxonomy. You may be brief in your explanation to ensure you meet page requirements.
Write a clear explanation of why the learning goal is appropriate.
Look at the following learning goals and determine which are written properly:
Students will develop good skills. Yes / No
Students will understand how to identify verbs. Yes / No
Students will be able to identify a complete sentence. Yes / No
Students will state all the months in a year. Yes / No
Students will be familiar with the rules of tennis. Yes / No
Students will grow corn at home. Yes / No
Circle yes or no for each statement. For each goal marked no, discuss why it is not a properly written goal.
Assessment (Section 3)
While constructing the assessment plan, the teacher uses multiple assessment modes and approaches aligned with learning goals to assess student learning before, during, and after instruction.
Charts must be used in this section. You may use the following chart, but it can be modified for your goals.
Learning Goals | Assessments | Format of Assessment | Adaptations
The Learning Goals column should list each of your learning goals. Remember to write the learning goals as stated in the learning goals section.
The Assessments used in this section should be pre-assessments, formative assessments, and post-assessments, as well as any others you choose to use.
The Format of Assessment column should state what method of assessment you will use at each point to assess your goals. Examples of activities you may use include bell ringers, homework, projects, written tests, etc.
The Adaptations column describes how you would use adaptations or substitutes for students. For example, a child can’t see written material because his glasses were broken at recess. What would you do so he can participate in the assessment process? Use adaptations for any child you know has a barrier to the learning process.
Next, you would explain, using subsections to guide your response. Always list the subsections as they appear and respond in writing about how you are meeting each one. The first thing listed on your TWS information sheet is learning goals. At this point you have completed the learning goals section. The next section is to show how the local, state, and federal guidelines are aligned with your learning goals. Remember, you must use goals from local, state, or federal guidelines. The next section is “Describe the types and levels of your learning goals.” Here, discuss the level and type of each learning goal you are using. The level should be consistent with the standards chosen and appropriate for the students’ levels. The last bullet on the TWS says you must identify how your goals are appropriate. Make sure each of these is discussed and subheaded. Length: 1–2 pages.
Copyright
Gail Burnaford and Tara Brown
Teaching and Learning in 21st Century Learning Environments:
A Reader
Editor in Chief, AVP: Steve Wainwright
Sponsoring Editor: Cheryl Cechvala
Development Editor: Cheryl Cechvala
Assistant Editor: Amanda Nixon
Senior Editorial Assistant: Nicole Sanchez-Sullivan
Production Editor: Lauren LePera
Senior Product Manager: Peter Galuardi
Cover Design: Jelena Mirkovic Jankovic
Printing Services: Bordeaux
Production Services: Lachina Publishing Services
ePub Development: Lachina Publishing Services
Permission Editor: Karen Ehrmann
Video Production: Ed Tech Productions
Cover Image: Stockbyte/Jupiterimages/Thinkstock
ISBN-10: 1621781496
ISBN-13: 978-1-62178-149-3
Copyright © 2014 Bridgepoint Education, Inc.
All rights reserved.
GRANT OF PERMISSION TO PRINT: The copyright owner of this material hereby grants the holder of this publication the right to print these materials for personal use. The holder of this material may print the materials herein for personal use only. Any print, reprint, reproduction or distribution of these materials for commercial use without the express written consent of the copyright owner constitutes a violation of the U.S. Copyright Act, 17 U.S.C. §§ 101-810, as amended.
Preface
Teaching and Learning in 21st Century Learning Environments: A Reader prepares readers to enter the field of education ready to address the needs of 21st-century learners. The book is intended to serve as a bridge between coursework that participants have taken and the ongoing professional development that graduates are encouraged to pursue upon course and program completion.
The text presents excerpts from leading voices in education, providing insight on crucial topics such as differentiation for diverse learners, curriculum and instruction, professional growth and leadership, and skills for digital age learning. The authors integrate theory, research studies, and practical application to provide readers with a set of tools and strategies for continuing to learn and grow in the field of education. Finally, embedded video interviews with practicing educators offer a real-world perspective of important topics.
Textbook Features
Teaching and Learning in 21st Century Learning Environments: A Reader includes a number of features to help students understand key concepts:
Voices From the Field feature boxes: Provide personal stories from educators based on real experiences in the field, giving readers a sense of what it really means to be an educator in the 21st century.
Tying It All Together feature boxes: Provide guidance to assist students in synthesizing the information presented within each chapter.
Videos: Provide real-world perspectives from practicing educators on key topics in 21st-century education.
Critical Thinking and Discussion Questions: Are found at the end of each article. These questions prompt students to critically examine the information presented in each excerpt and draw connections to their own experiences.
Accessible Anywhere. Anytime.
With Constellation, faculty and students have full access to eTextbooks at their fingertips. The eTextbooks are instantly accessible on web, mobile, and tablet.
iPad
To download the Constellation iPad app, go to the App Store on your iPad, search for "Constellation for UAGC," and download the free application. You may log in to the iPad application with the same username and password used to access Constellation on the web.
NOTE: You will need iOS version 7.0 or higher.
Android Tablet and Phone
To download the Constellation Android app, go to the Google Play Store on your Android device, search for "Constellation for UAGC," and download the free application. You may log in to the Android application with the same username and password used to access Constellation on the web.
NOTE: You will need a tablet or phone running Android version 2.3 (Gingerbread) or higher.
About the Authors
Gail Burnaford
Gail Burnaford holds a Ph.D. in Curriculum and Instruction from Georgia State University, and is currently Professor in the Department of Curriculum, Culture and Educational Inquiry at Florida Atlantic University. Prior to moving to Florida, she directed the Undergraduate Teacher Education and School Partnerships Program at Northwestern University’s School of Education and Social Policy.
Dr. Burnaford is the author of four books and numerous articles on topics related to teacher learning, professional development, arts integration and curriculum design. She has served as Principal Investigator on multiple program evaluations focused on arts integration partnerships, including those funded through the U.S. Department of Education’s Professional Development Grants. Dr. Burnaford has acquired eLearning Certification and teaches courses including research in curriculum and instruction, educational policy, documentation and assessment, and curriculum leadership in hybrid, online and face-to-face learning environments. Her current research focuses on faculty’s use of iPads in teaching and the nature/impact of faculty feedback on student work.
Acknowledgments
The authors would like to acknowledge the many people who were involved in the development of this text. Special thanks are due to Cheryl Cechvala, sponsoring editor and development editor; Amanda Nixon, assistant editor; Nicole Sanchez-Sullivan, senior editorial assistant; and Lauren LePera, production editor. Thanks also to the following Ashford faculty and advisors for their helpful advice and suggestions: Amy Gray, Stephen Halfaker, Kathleen Lunsford, Andrew Shean, Melissa Phillips, Tony Valley, Gina Warren, and Laurie Wellner.
Finally, the authors would like to thank the following reviewers for their valuable feedback and insight:
Paula Conroy, University of Northern Colorado
Graham Crookes, University of Hawaii
Tara Brown
Tara M. Brown is an Assistant Professor of Education at the University of Maryland, College Park. She holds a doctorate from the Harvard Graduate School of Education, and is a former secondary classroom teacher in alternative education.
Tara’s research focuses on the experiences of low-income adolescents and young adults served by urban schools, particularly as related to disciplinary exclusion and dropout. She specializes in qualitative, community-based, participatory, and action research methodologies. Her most recent research is entitled Uncredentialed: Young Adults Living without a Secondary Degree. This community-based participatory study focuses on the social, educational, and economic causes and implications of school dropout among primarily Latina/o young adults living in a mid-sized, post-industrial city.
Ch 3: Assessment in the 21st Century
3.1 Five Assessment Myths and Their Consequences, by Rick Stiggins
Introduction
Rick Stiggins is a well-known consultant and expert in the field of assessment. He founded the Assessment Training Institute, which provides professional development in assessment for teachers and school leaders. He has served on the faculty at Michigan State University, the University of Minnesota, and Lewis and Clark College. Stiggins has also served as director of the American College Testing Program.
Stiggins’ article emphasizes the importance of paying attention to assessment at the classroom level. He notes that in the current educational climate, there is huge investment in yearly standardized tests rather than daily assessments that are a part of teaching. Stiggins states, however, that teachers are not well-prepared to assess effectively and have not had much assessment training in their teacher education programs.
Stiggins’ article about the myths that drive assessment is especially important because of his attention to students and their role in assessment. He laments that nowhere in the assessment literature over the past 60 years do we find reference to students as “users” and “instructional decision makers.” Finally, the author describes the power of assessing for learning rather than relying on grades and test scores to motivate students.
Excerpt
The following is an excerpt from Stiggins, R. (2007). Five assessment myths and their consequences. Education Week, 27(8), 28–29. Reprinted with permission from the author.
America has spent 60 years building layer upon layer of district, state, national, and international assessments at immense cost—and with little evidence that our assessment practices have improved learning. True, testing data have revealed achievement problems. But revealing problems and helping fix them are two entirely different things.
As a member of the measurement community, I find this legacy very discouraging. It causes me to reflect deeply on my role and function. Are we helping students and teachers with our assessment practices, or contributing to their problems?
My reflections have brought me to the conclusion that assessment’s impact on the improvement of schools has been severely limited by several widespread but erroneous beliefs about what role it ought to play. Here are five of the most problematic of these assessment myths:
Myth 1: The Path to School Improvement Is Paved With Standardized Tests.
Evidence of the strength of this belief is seen in the evolution, intensity, and immense investment in our large-scale testing programs. We have been ranking states on the basis of average college-admission-test scores since the 1950s, comparing schools based on district-wide testing since the 1960s, comparing districts based on state assessments since the 1970s, comparing states based on national assessment since the 1980s, and comparing nations on the basis of international assessments since the 1990s. Have schools improved as a result?
The problem is that once-a-year assessments have never been able to meet the information needs of the decisionmakers who contribute the most to determining the effectiveness of schools: students and teachers, who make such decisions every three to four minutes. The brief history of our investment in testing outlined above includes no reference to day-to-day classroom assessment, which represents 99.9 percent of the assessments in a student’s school life. We have almost completely neglected classroom assessment in our obsession with standardized testing. Had we not, our path to school improvement would have been far more productive.
Myth 2: School and Community Leaders Know How to Use Assessment to Improve Schools.
Over the decades, very few educational leaders have been trained to understand what standardized tests measure, how they relate to the local curriculum, what the scores mean, how to use them, or, indeed, whether better instruction can influence scores. Beyond this, we in the measurement community have narrowed our role to maximizing the efficiency and accuracy of high-stakes testing, paying little attention to the day-to-day impact of test scores on teachers or learners in the classroom.
Many in the business community believe that we get better schools by comparing them based on annual test scores, and then rewarding or punishing them. They do not understand the negative impact on students and teachers in struggling schools that continuously lose in such competition. Politicians at all levels believe that if a little intimidation doesn’t work, a lot of intimidation will, and assessment has been used to increase anxiety. They too misunderstand the implications for struggling schools and learners.
Myth 3: Teachers Are Trained to Assess Productively.
Teachers can spend a quarter or more of their professional time involved in assessment-related activities. If they assess accurately and use results effectively, their students can prosper. Administrators, too, use assessment to make crucial curriculum and resource-allocation decisions that can improve school quality.
Given the critically important roles of assessment, it is no surprise that Americans believe teachers are thoroughly trained to assess accurately and use assessment productively. In fact, teachers typically have not been given the opportunity to learn these things during preservice preparation or while they are teaching. This has been the case for decades. And lest we believe that teachers can turn to their principals or other district leaders for help in learning about sound assessment practices, let it be known that relevant, helpful assessment training is rarely included in leadership-preparation programs either.
Myth 4: Adult Decisions Drive School Effectiveness.
We assess to inform instructional decisions. Annual tests inform annual decisions made by school leaders. Interim tests used formatively permit faculty teams to fine-tune programs. Classroom assessment helps teachers know what comes next in learning, or what grades go on report cards. In all cases, the assessment results inform the grown-ups who run the system.
But there are other data-based instructional decisionmakers present in classrooms whose influence over learning success is greater than that of the adults. I refer, of course, to students. Nowhere in our 60-year assessment legacy do we find reference to students as assessment users and instructional decisionmakers. But, in fact, they interpret the feedback we give them to decide whether they have hope of future success, whether the learning is worth the energy it will take to attain it, and whether to keep trying. If students conclude that there is no hope, it doesn’t matter what the adults decide. Learning stops. The most valid and reliable “high stakes” test, if it causes students to give up in hopelessness, cannot be regarded as productive. It does more harm than good.
Myth 5: Grades and Test Scores Maximize Student Motivation and Learning.
Most of us grew up in schools that left lots of students behind. By the end of high school, we were ranked based on achievement. There were winners and losers. Some rode winning streaks to confident, successful life trajectories, while others failed early and often, found recovery increasingly difficult, and ultimately gave up. After 13 years, a quarter of us had dropped out and the rest were dependably ranked. Schools operated on the belief that if I fail you or threaten to do so, it will cause you to try harder. This was only true for those who felt in control of the success contingencies. For the others, chronic failure resulted, and the intimidation minimized their learning. True hopelessness always trumps pressure to learn.
Society has changed the mission of its schools to “leave no child behind.” We want all students to meet state standards. This requires that all students believe they can succeed. Frequent success and infrequent failure must pave the path to optimism. This represents a fundamental redefinition of productive assessment dynamics.
Classroom-assessment researchers have discovered how to assess for learning to accomplish this. Assessment for learning (as opposed to of learning) has a profoundly positive impact on achievement, especially for struggling learners, as has been verified through rigorous scientific research conducted around the world. But, again, our educators have never been given the opportunity to learn about it.
Sound assessment is not something to be practiced once a year. As we look to the future, we must balance annual, interim or benchmark, and classroom assessment. Only then will we meet the critically important information needs of all instructional decisionmakers. We must build a long-missing foundation of assessment literacy at all levels of the system, so that we know how to assess accurately and use results productively. This will require an unprecedented investment in professional learning both at the preservice and in-service levels for teachers and administrators, and for policymakers as well.
Of greatest importance, however, is that we acknowledge the key role of the learner in the assessment-learning connection. We must begin to use classroom assessment to help all students experience continuous success and come to believe in themselves as learners.
Source: Stiggins, R. (2007). Five assessment myths and their consequences. Education Week, 27(8), pp. 28–29. © Rick Stiggins. As first appeared in Education Week, October 16, 2007. Reprinted with permission from the author.
Summary
Stiggins offers five myths regarding assessment. He then suggests the consequences that teachers and leaders face when the educational community apparently believes these myths. The author challenges the myth that standardized testing can be the path to school improvement, noting that classroom assessment has much more power over student learning. He asserts, contrary to popular opinion, that most teachers and leaders do not know how to use assessment data to improve schools, nor are teachers adequately prepared to assess productively.
Educators and the general public appear to believe that grades and test scores motivate student learning, despite the evidence that classroom-based assessment for learning is actually what promotes student success. Finally, Stiggins debunks the myth that adult decisions drive school effectiveness and reminds readers of the role the students themselves play in the process.
Critical Thinking Questions
1. To what degree do you believe students play a pivotal role in school effectiveness as “assessment users” and “instructional decision makers”? How might that role be strengthened for students in schools?
2. How would you evaluate your own assessment knowledge and preparation for teaching and leadership in assessment? How would you characterize the gaps in your knowledge about assessment?
3. Imagine that you are speaking to a group of parents of students in a middle school. Explain how you would assess students daily in order to improve your teaching.
4. Discuss Rick Stiggins’ assertion that school improvement is not informed by standardized test results. What are some of the problems with relying on yearly standardized tests to drive curriculum and teaching in a school?
3.2 Assessment Literacy for Teachers: Faddish or Fundamental? by W. James Popham
Introduction
W. James Popham is an emeritus professor in the graduate school of the University of California, Los Angeles. He is considered one of the premier researchers in the field of assessment and is the founder of IOX Assessment Associates, a research and development organization.
This article introduces the concept of assessment literacy as a fundamental task for professional development in schools, especially in the current context in which teacher preparation assessment programs may be viewed as inadequate. Popham claims that teachers know very little about assessment beyond the administration of traditional tests, and in this piece he describes 13 “must understand” assessment topics for teachers, including the difference between formative and summative assessment tools. He also differentiates between classroom assessments and accountability assessments in terms of their goals and uses by teachers and administrators.
A key concept offered in this article is the idea that assessment approaches that are instructionally sensitive can be directly related to good teaching or, conversely, poor teaching. Popham maintains that teachers need to know the basics of the content area of assessment, including reliability, the three types of validity, types of test items, and the development and scoring of alternative assessments such as portfolios, exhibitions, peer, and self-assessments.
Teachers and leaders also need to be able to interpret standardized test results and use them meaningfully to improve instruction, because they are a key feature of today’s data-driven practice in many schools and districts.
Finally, the article reminds readers that assessment of English-language learners and students with disabilities remains an essential content field for all teachers.
Excerpt
The following is an excerpt from Popham, W. J. (2009). Assessment literacy for teachers: Faddish or fundamental? Theory Into Practice, 48, 4–11.
In recent years, increasing numbers of professional development programs have dealt with assessment literacy for teachers and/or administrators. Is assessment literacy merely a fashionable focus for today’s professional developers or, in contrast, should it be regarded as a significant area of professional development interest for many years to come? After dividing educators’ measurement-related concerns into either classroom assessments or accountability assessments, it is argued that educators’ inadequate knowledge in either of these arenas can cripple the quality of education. Assessment literacy is seen, therefore, as a sine qua non for today’s competent educator. As such, assessment literacy must be a pivotal content area for current and future staff development endeavors. Thirteen must-understand topics are set forth for consideration by those who design and deliver assessment literacy programs. Until preservice teacher education programs begin producing assessment literate teachers, professional developers must continue to rectify this omission in educators’ professional capabilities.
For the past several years, assessment literacy has been increasingly touted as a fitting focus for teachers’ professional development programs. The sort of assessment literacy that is typically recommended refers to a teacher’s familiarity with those measurement basics related directly to what goes on in classrooms. Given today’s ubiquitous, externally imposed scrutiny of schools, we can readily understand why assessment literacy might be regarded as a likely target for teachers’ professional development. Yet, is assessment literacy a legitimate focus for teachers’ professional development programs or, instead, is it a fashionable but soon forgettable fad?
The Consequences of Omission
Many of today’s teachers know little about educational assessment. For some teachers, test is a four-letter word, both literally and figuratively. The gaping gap in teachers’ assessment-related knowledge is all too understandable. The most obvious explanation is, in this instance, the correct explanation. Regrettably, when most of today’s teachers completed their teacher-education programs, there was no requirement that they learn anything about educational assessment. For these teachers, their only exposure to the concepts and practices of educational assessment might have been a few sessions in their educational psychology classes or, perhaps, a unit in a methods class (La Marca, 2006; Stiggins, 2006).
Thus, many teachers in previous years usually arrived at their first teaching assignment quite bereft of any fundamental understanding of educational measurement. Happily, in recent years we have seen the emergence of increased preservice requirements that offer teacher education candidates greater insights regarding educational assessment. Accordingly, in a decade or two, the assessment literacy of the nation’s teaching force is bound to be substantially stronger. But for now, it must be professional development—completed subsequent to teacher education—that will supply the nation’s teachers with the assessment-related skills and knowledge they need.
* * *
A Quick Content Dip
Professional development programs focused on assessment literacy need to be tailored. Such a program designed for school administrators is likely to be similar to an assessment-literacy program for teachers, in the sense that many of the topics to be treated would be essentially identical, but some salient content differences would—and should—exist. To conclude this analysis, I would like to lay out the content that should be addressed—in a real-world, practical manner rather than an esoteric, theoretical fashion—during an assessment-literacy professional development program for teachers. This will only be a brief listing of potential content, but those who are interested in a closer look at possible content for such programs will find more detailed treatments of potential emphases in the list of references.
Those considering what to include in an assessment literacy professional development program for teachers should seriously consider focusing on a set of target skills and knowledge dealing with the following content:
1. The fundamental function of educational assessment, namely, the collection of evidence from which inferences can be made about students’ skills, knowledge, and affect. A common misconception among educators is to reify test scores, as though such scores are the true target of an educator’s concern. In reality, the only reason we test our students is in order to collect evidence regarding what we cannot see—understanding, skill development, and so on. Almost all of our educational goals are aimed at unseeable skills and knowledge. We cannot tell how much history a student knows just by looking at that student. Thus, we must rely on students’ overt test performances to produce evidence so we can arrive at defensible inferences about students’ covert skills and knowledge.
2. Reliability of educational assessments, especially the three forms in which consistency evidence is reported for groups of test-takers (stability, alternate-form, and internal consistency) and how to gauge consistency of assessment for individual test-takers. Many educators place absolutely unwarranted confidence in the accuracy of educational tests, especially those high-stakes tests created by well-established testing companies. When educators grasp the nature of measurement error, and realize the myriad factors that can trigger inconsistency in a student’s test performances, those educators will regard with proper caution the imprecision of the results obtained on even some of our most time-honored assessment instruments.
3.
The prominent role three types of validity evidence should play
in the building of arguments to support the accuracy of test-
based interpretations about students, namely, content-
related, criterion-related, and construct-
related evidence. Anytime an educator utters the phrase a valid t
est, that educator is—at least technically—
in error. It is not a test that is valid or invalid. Rather, it is the i
nference we base on a test-
taker’s score whose validity is at issue. Moreover, the types of
validity evidence we collect are fundamentally different. As a c
onsequence, for example, classroom teachers need to know that
the chief kind of validity evidence they need to attend to should
be content-related.
4.
How to identify and eliminate assessment bias that offends or u
nfairly penalizes test-
takers because of personal characteristics such as race, gender,
or socioeconomic status. During the past two decades, the meas
urement community has devised both judgmental and empirical
ways of dramatically reducing the amount of assessment bias in
our large-
scale educational tests. Classroom teachers need to know how to
identify and eliminate bias in their own teacher-made tests.
5. Construction and improvement of selected-
response and constructed-
response test items. Through the years, measurement specialists
have been assembling a collection of guidelines regarding how t
o create wonderful, rather than wretched, test items. Moreover,
once a set of test items has been constructed, there are easily us
ed procedures available for making those items even better. Edu
cators who generate tests need to be conversant with the creatio
n and honing of test items.
6. Scoring of students’ responses to constructed-
response test items, especially the distinctive contribution mad
e by well-formed rubrics. Although constructed-
response test items such as essay and short answer items often p
rovide particularly illuminating evidence about students’ skills
and knowledge, the scoring of students’ responses to such items
often goes haywire because of loose judgmental procedures. Te
achers need to know how to create and use rubrics, that is, scori
ng guides, so students’ performances on constructed-
response items can be accurately appraised.
7.
Development and scoring of performance assessments, portfolio
assessments, exhibitions, peer assessments, and self-
assessments. Gone are the days when teachers only had to know
how to score tests by distinguishing between a circled T or F for
students’ answers to true–
false items. Given the current use of assessment procedures call
ing for students to respond in dramatically diverse ways, today’
s teachers need to learn how to generate and perhaps score a con
siderable variety of assessment strategies.
8.
Designing and implementing formative assessment procedures c
onsonant with both research evidence and experience-
based insights regarding such procedures’ likely success. Forma
tive assessment is a process, not a particular type of test. Becau
se there is now substantial evidence at hand that properly emplo
yed formative assessment can meaningfully boost students’ achi
evement (Black & Wiliam, 1998a), today’s educators need to un
derstand the innards of this potent classroom process.
9.
How to collect and interpret evidence of students’ attitudes, inte
rests, and values. When considering the importance of students’
acquisition of cognitive versus affective outcomes, it could be a
rgued that inattention to students’ attitudes, interests, and value
s can have a lasting, negative impact on those students. Teacher
s, therefore, should at least learn how to assess their students’ a
ffect so that, if those teachers choose to do so, they can get an a
ccurate fix on their students’ affective dispositions.
10. Interpreting students’ performances on large-
scale, standardized achievement and aptitude assessments. Beca
use students’ performances are of interest to both teachers and s
tudents’ parents, teachers must understand the most widely used
techniques for reporting students’ scores on today’s oft-
administered standardized examinations, including, for example,
what is meant by a scale score.
11.
Assessing English Language Learners and students with disabili
ties. Although most of the measurement concepts that educators
need to understand will apply across the board to all types of st
udents, there are special assessment issues associated with stude
nts whose first language is not English and for students with dis
abilities. Because today’s educators have been adjured to attend
to such students with more care than was seen in the past, it is i
mportant for all teachers to become conversant with the assessm
ent procedures most suitable for these subgroups of students.
12.
How to appropriately (and not inappropriately) prepare students
for high-
stakes tests. Given the pressures on educators to have their stud
ents shine on state and, sometimes, district accountability tests,
there have been reports of test-
preparation practices that are patently inappropriate. In many in
stances, such unsound practices arise simply because teachers have not devoted attention to the question of how students should and should not be readied for important tests. Teachers should be prepared to make that distinction.
13.
How to determine the appropriateness of an accountability test f
or use in evaluating the quality of instruction. It is not safe to a
ssume that, because an accountability test has been officially ad
opted in a state, this test is suitable for evaluating schools. Mor
e than ever before, educators need to understand what makes a t
est suitable for appraising the quality of instruction.
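The reliability forms named in item 2 (stability, alternate-form, and internal consistency) are ordinarily reported as coefficients. As a minimal illustration, not drawn from the article, the following sketch computes one common internal-consistency index, Cronbach's alpha, from invented item scores:

```python
# Hedged illustration: Cronbach's alpha for internal consistency.
# The scores below are invented for demonstration only.

def cronbach_alpha(item_scores):
    """item_scores: one list of scores per item, aligned by test-taker."""
    k = len(item_scores)            # number of items
    n = len(item_scores[0])         # number of test-takers

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var_sum = sum(variance(item) for item in item_scores)
    totals = [sum(item_scores[i][p] for i in range(k)) for p in range(n)]
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Invented data: 4 items scored for 5 test-takers
scores = [
    [3, 4, 2, 5, 4],
    [3, 5, 2, 4, 4],
    [2, 4, 3, 5, 3],
    [3, 4, 2, 4, 5],
]
alpha = cronbach_alpha(scores)   # roughly 0.89 for this invented set
```

Teachers will rarely compute such a coefficient by hand, but seeing what it summarizes, how much the items hang together across test-takers, supports the proper caution about measurement error that item 2 calls for.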
All but a few of these 13 content recommendations are applicabl
e to both classroom assessments and accountability assessments.
The recommendations regarding the determination of an accoun
tability test’s evaluative appropriateness and interpreting studen
ts’ performances on large-
scale, standardized tests, of course, refer only to accountability
assessments. Conversely, the recommendation regarding learnin
g about formative assessment procedures clearly deals with clas
sroom assessments rather than accountability assessments. Beyo
nd those dissimilarities, however, a professional development pr
ogram aimed at the promotion of teachers’ assessment literacy s
hould show how the bulk of the content recommended here has
clear relevance to both classroom assessments and accountabilit
y assessments.
Of particular merit these days is the use of professional learning
communities as an adjunct to, or in place of, more traditional p
rofessional development activities. Such communities consist of
small groups of teachers and/or administrators who meet period
ically over an extended period of time, for instance, one or more
school years, to focus on topics such as those identified above.
If such a group consists exclusively of teachers, then it is typica
lly referred to as a teacher learning community. If administrator
s are involved, then the label professional learning community i
s usually affixed. Given access to at least some written or electr
onic materials as a backdrop (e.g., Popham, 2006, which is avail
able gratis to such learning communities), collections of educat
ors with similar interests can prove to be remarkably effective in
helping educators acquire significant new insights.
Fad-Free Focus?
The presenting question that initiated this analysis was whether
professional development programs aimed at enhancing teachers
’ assessment literacy were warranted, either in the short-
term or long-term. I identified two sets of teachers’ assessment-
related decisions that could be illuminated by such programs, na
mely, those decisions related to classroom assessments and thos
e decisions related to accountability assessments. Although, at t
he current time, teachers are surely faced with assessment-
dependent choices stemming from both of these sorts of assessm
ents, will both types of assessments be with us over the long ha
ul?
The answer to that question is, in my view, an emphatic Yes. Wi
th regard to classroom assessments, the influential work of Blac
k and Wiliam (1998a, 1998b) lends powerful empirical support
attesting to the learning dividends of instructionally oriented cla
ssroom assessment. When classroom assessments are conceived
as assessments for learning, rather than assessments of learning,
students will learn better what their teacher wants them to learn
. Not only is such a formative approach to classroom assessment demonstrably effective, but there are, happily, diverse ways to implement an instructionally oriented approach
to classroom assessment. As the two British researchers point o
ut:
The range of conditions and contexts under which studies have s
hown that gains can be achieved must indicate that the principle
s that underlie achievement of substantial improvements in lear
ning are robust. Significant gains can be achieved by many diffe
rent routes, and initiatives here are not likely to fail through neg
lect of delicate and subtle features. (Black & Wiliam, 1998a, pp
. 61–62)
It appears, then, that teachers who want to be optimally effectiv
e ought to be learning about the essentials of classroom assessm
ent for a long while to come.
Turning to accountability assessment, there seems little reason t
o believe that the demand for test-
based evidence of teachers’ effectiveness will evaporate—
ever. Accountability pressure on educators springs from taxpaye
rs’ doubts that their public schools are as effective as they ough
t to be. It will take decades of consistent educational success sto
ries before the public is disabused of its skeptical regard for pu
blic schools. Even if the public were ever to relax its demands f
or educational accountability evidence, thoughtful educators stil
l ought to insist on the collection of such evidence. That is the k
ind of requirement that any self-
respecting profession ought to impose on itself.
Thus, it seems that assessment literacy is a commodity needed b
y teachers for their own long-term well-
being, and for the educational well-
being of their students. For the foreseeable future, teachers are l
ikely to exist in an environment where test-
elicited evidence plays a prominent instructional and evaluative
role. In such environments, those who control the tests tend to c
ontrol the entire enterprise. Until preservice teacher educators r
outinely provide meaningful assessment literacy for prospective
teachers, the architects of professional development programs
will need to offer assessment-
literacy programs. We can only hope they do it well.
References
Black, P., & Wiliam, D. (1998a). Assessment and classroom lea
rning. Assessment in Education: Principles, Policy, and Practice
, 5(1), 7–73.
Black, P., & Wiliam, D. (1998b). Inside the black box: Raising
standards through classroom assessment. Phi Delta Kappan, 80(
2), 139–148.
La Marca, P. (2006). Assessment literacy: Building capacity for
improving student learning. Paper presented at the National Co
nference on Large-
Scale Assessment, Council of Chief State School Officers, San
Francisco, CA.
Popham, W. J. (2006). Mastering assessment: A self-
service system for educators. New York: Routledge.
Stiggins, R. J. (2006). Assessment for learning: A key to studen
t motivation and learning. Phi Delta Kappa Edge, 2(2), 1–19.
Source: Popham, W. J. (2009). Assessment Literacy for Teacher
s: Faddish or Fundamental? Theory Into Practice 48: 4–
11. Taylor and Francis. Copyright © 2009 Routledge.
Summary
Popham’s article presents a range of assessment topics that teac
hers and leaders should be knowledgeable about; he terms comp
etence in these content areas as “assessment literacy” and assert
s that professional development in school districts should focus
explicitly on these areas in order to improve schools and enhanc
e student learning.
The author asserts that the word assessment, for most teachers, i
s synonymous with the word test. He poses the critical question,
“What kinds of assessments do teachers most need to understan
d?” and responds with a list of 13 topics.
The article suggests that teachers and leaders need to be able no
t only to apply meaningful and varied assessments but also to un
derstand and be “literate” in the field of assessment itself. The a
uthor claims that standardized testing in the United States tends
to be “instructionally insensitive,” meaning that the results have
little or no relationship to how well students are taught.
Finally, the author challenges professional development leaders
to consider how to embed these important concepts and practice
s into ongoing teacher learning venues in schools, and he menti
ons professional learning communities (PLCs) as a promising ap
proach.
Critical Thinking Questions
1.
Design a year of PLC meetings in which teachers engage in con
scious assessment literacy learning. What would such meetings l
ook like? How would teachers engage with each other in learnin
g more about assessment in PLCs?
2.
Popham writes that school administrators need assessment litera
cy training that is, in some ways, like the professional developm
ent needed by teachers. He then mentions that there would be so
me differences in terms of what administrators need to know. W
hat might those differences be?
3.
One of the 13 “must understand” topics refers to eliminating ass
essments that offend or penalize students because of race, gende
r, or socioeconomic status. Discuss this topic in terms of your e
xperience and the students you have encountered. How might sc
hools and teachers work toward bias-free assessment?
4.
This article briefly refers to the need for teachers to assess stud
ents’ affect, that is, their attitudes, interests, and values. Why is
this important, and how might teachers do this as part of their p
ractice?
5.
What is your overall impression of this article and the author’s
presentation of the tenets of assessment literacy?
3.3 Seven Keys to Effective Feedback, by Grant Wiggins
Introduction
Grant Wiggins has been a central contributor to the field of asse
ssment in the last 25 years, due in part to his landmark book, Ed
ucative Assessment: Designing Assessments to Inform and Impr
ove Student Performance, as well as his work with Jay McTighe
. Wiggins and coauthor McTighe have written many books and a
rticles focused on backward design for curriculum and assessme
nt. Used in hundreds of school districts around the country, bac
kward design is a process of planning curriculum from the goals
or aims “backwards.”
This article directs readers’ attention to feedback as a means of
providing learners with information about how they are doing in
their efforts to reach a specific goal. Wiggins is clear about the
need for a goal in order for feedback to be meaningful to learne
rs. The author also asserts that feedback is not evaluative or jud
gmental, nor is it advice-driven. Effective feedback is user-
friendly, timely, ongoing and consistent.
Wiggins also calls attention to the responsibilities of the learner
to be open to and use feedback. He writes: “If I am not clear on
my goals or if I fail to pay attention to them, I cannot get helpf
ul feedback” (p. 18). Finally, Wiggins explains that research sh
ows the power of teaching less in order to provide more feedbac
k. A careful consideration of this concept may be the essential n
ext step in improving assessment practices.
Excerpt
The following is an excerpt from Wiggins, G. (2012). 7 keys to
effective feedback. Educational Leadership, 70(1), 10–19.
Who would dispute the idea that feedback is a good thing? Both
common sense and research make it clear: Formative assessment
, consisting of lots of feedback and opportunities to use that fee
dback, enhances performance and achievement.
Yet even John Hattie (2008), whose decades of research reveale
d that feedback was among the most powerful influences on achie
vement, acknowledges that he has “struggled to understand the c
oncept” (p. 173). And many writings on the subject don’t even a
ttempt to define the term. To improve formative assessment prac
tices among both teachers and assessment designers, we need to
look more closely at just what feedback is—and isn’t.
What Is Feedback, Anyway?
The term feedback is often used to describe all kinds of commen
ts made after the fact, including advice, praise, and evaluation.
But none of these are feedback, strictly speaking.
Basically, feedback is information about how we are doing in ou
r efforts to reach a goal. I hit a tennis ball with the goal of keep
ing it in the court, and I see where it lands—
in or out. I tell a joke with the goal of making people laugh, and
I observe the audience’s reaction—
they laugh loudly or barely snicker. I teach a lesson with the go
al of engaging students, and I see that some students have their
eyes riveted on me while others are nodding off.
Here are some other examples of feedback:
·
A friend tells me, “You know, when you put it that way and spe
ak in that softer tone of voice, it makes me feel better.”
·
A reader comments on my short story, “The first few paragraphs
kept my full attention. The scene painted was vivid and interest
ing. But then the dialogue became hard to follow; as a reader, I
was confused about who was talking, and the sequence of action
s was puzzling, so I became less engaged.”
·
A baseball coach tells me, “Each time you swung and missed, y
ou raised your head as you swung so you didn’t really have your
eye on the ball. On the one you hit hard, you kept your head do
wn and saw the ball.”
Note the difference between these three examples and the first t
hree I cited—
the tennis stroke, the joke, and the student responses to teaching
. In the first group, I only had to take note of the tangible effect
of my actions, keeping my goals in mind. No one volunteered f
eedback, but there was still plenty of feedback to get and use. T
he second group of examples all involved the deliberate, explici
t giving of feedback by other people.
Whether the feedback was in the observable effects or from othe
r people, in every case the information received was not advice,
nor was the performance evaluated. No one told me as a perform
er what to do differently or how “good” or “bad” my results wer
e. (You might think that the reader of my writing was judging m
y work, but look at the words used again: She simply played bac
k the effect my writing had on her as a reader.) Nor did any of t
he three people tell me what to do (which is what many people e
rroneously think feedback is—
advice). Guidance would be premature; I first need to receive fe
edback on what I did or didn’t do that would warrant such advic
e.
In all six cases, information was conveyed about the effects of
my actions as related to a goal. The information did not include
value judgments or recommendations on how to improve.
Decades of education research support the idea that by teaching
less and providing more feedback, we can produce greater learni
ng (see Bransford, Brown, & Cocking, 2000; Hattie, 2008; Marz
ano, Pickering, & Pollock, 2001). Compare the typical lecture-
driven course, which often produces less-than-
optimal learning, with the peer instruction model developed by
Eric Mazur (2009) at Harvard. He hardly lectures at all to his 20
0 introductory physics students; instead, he gives them problem
s to think about individually and then discuss in small groups. T
his system, he writes, “provides frequent and continuous feedba
ck (to both the students and the instructor) about the level of un
derstanding of the subject being discussed” (p. 51), producing g
ains in both conceptual understanding of the subject and proble
m-
solving skills. Less “teaching,” more feedback equals better res
ults.
Feedback Essentials
Whether feedback is just there to be grasped or is provided by a
nother person, helpful feedback is goal-
referenced; tangible and transparent; actionable; user-
friendly (specific and personalized); timely; ongoing; and consi
stent.
Goal-Referenced
Effective feedback requires that a person has a goal, takes actio
n to achieve the goal, and receives goal-
related information about his or her actions. I told a joke—
why? To make people laugh. I wrote a story to engage the reade
r with vivid language and believable dialogue that captures the
characters’ feelings. I went up to bat to get a hit. If I am not cle
ar on my goals or if I fail to pay attention to them, I cannot get
helpful feedback (nor am I likely to achieve my goals).
Information becomes feedback if, and only if, I am trying to cau
se something and the information tells me whether I am on track
or need to change course. If some joke or aspect of my writing
isn’t working—a revealing, nonjudgmental phrase—
I need to know.
Note that in everyday situations, goals are often implicit, althou
gh fairly obvious to everyone. I don’t need to announce when te
lling the joke that my aim is to make you laugh. But in school, l
earners are often unclear about the specific goal of a task or less
on, so it is crucial to remind them about the goal and the criteria
by which they should self-
assess. For example, a teacher might say,
·
The point of this writing task is for you to make readers laugh.
So, when rereading your draft or getting feedback from peers, a
sk, how funny is this? Where might it be funnier?
·
As you prepare a table poster to display the findings of your sci
ence project, remember that the aim is to interest people in your
work as well as to describe the facts you discovered through yo
ur experiment. Self-
assess your work against those two criteria using these rubrics.
The science fair judges will do likewise.
Tangible and Transparent
Any useful feedback system involves not only a clear goal, but
also tangible results related to the goal. People laugh, chuckle,
or don’t laugh at each joke; students are highly attentive, some
what attentive, or inattentive to my teaching.
Even as little children, we learn from such tangible feedback. T
hat’s how we learn to walk; to hold a spoon; and to understand t
hat certain words magically yield food, drink, or a change of clo
thes from big people. The best feedback is so tangible that anyo
ne who has a goal can learn from it.
Alas, far too much instructional feedback is opaque, as revealed
in a true story a teacher told me years ago. A student came up t
o her at year’s end and said, “Miss Jones, you kept writing this
same word on my English papers all year, and I still don’t know
what it means.” “What’s the word?” she asked. “Vag-
oo,” he said. (The word was vague!)
Sometimes, even when the information is tangible and transpare
nt, the performers don’t obtain it—
either because they don’t look for it or because they are too bus
y performing to focus on the effects. In sports, novice tennis pla
yers or batters often don’t realize that they’re taking their eyes
off the ball; they often protest, in fact, when that feedback is gi
ven. (Constantly yelling “Keep your eye on the ball!” rarely wor
ks.) And we have all seen how new teachers are sometimes so b
usy concentrating on “teaching” that they fail to notice that few
students are listening or learning.
That’s why, in addition to feedback from coaches or other able
observers, video or audio recordings can help us perceive things
that we may not perceive as we perform; and by extension, suc
h recordings help us learn to look for difficult-to-
perceive but vital information. I recommend that all teachers vi
deotape their own classes at least once a month. It was a transfo
rmative experience for me when I did it as a beginning teacher.
Concepts that had been crystal clear to me when I was teaching
seemed opaque and downright confusing on tape—
captured also in the many quizzical looks of my students, which
I had missed in the moment.
Actionable
Effective feedback is concrete, specific, and useful; it provides
actionable information. Thus, “Good job!” and “You did that wr
ong” and B+ are not feedback at all. We can easily imagine the l
earners asking themselves in response to these comments, what
specifically should I do more or less of next time, based on this
information? No idea. They don’t know what was “good” or “wr
ong” about what they did.
Actionable feedback must also be accepted by the performer. M
any so-
called feedback situations lead to arguments because the givers
are not sufficiently descriptive; they jump to an inference from t
he data instead of simply presenting the data. For example, a su
pervisor may make the unfortunate but common mistake of stati
ng that “many students were bored in class.” That’s a judgment,
not an observation. It would have been far more useful and less
debatable had the supervisor said something like, “I counted on
going inattentive behaviors in 12 of the 25 students once the lec
ture was underway. The behaviors included texting under desks,
passing notes, and making eye contact with other students. How
ever, after the small-
group exercise began, I saw such behavior in only one student.”
Such care in offering neutral, goal-
related facts is the whole point of the clinical supervision of tea
ching and of good coaching more generally. Effective superviso
rs and coaches work hard to carefully observe and comment on
what they observed, based on a clear statement of goals. That’s
why I always ask when visiting a class, “What would you like m
e to look for and perhaps count?” In my experience as a teacher
of teachers, I have always found such pure feedback to be accep
ted and welcomed. Effective coaches also know that in complex
performance situations, actionable feedback about what went rig
ht is as important as feedback about what didn’t work.
User-Friendly
Even if feedback is specific and accurate in the eyes of experts
or bystanders, it is not of much value if the user cannot understa
nd it or is overwhelmed by it. Highly technical feedback will se
em odd and confusing to a novice. Describing a baseball swing t
o a 6-year-
old in terms of torque and other physics concepts will not likely
yield a better hitter. Too much feedback is also counterproducti
ve; better to help the performer concentrate on only one or two
key elements of performance than to create a buzz of informatio
n coming in from all sides.
Expert coaches uniformly avoid overloading performers with to
o much or too technical information. They tell the performers o
ne important thing they noticed that, if changed, will likely yiel
d immediate and noticeable improvement (“I was confused abou
t who was talking in the dialogue you wrote in this paragraph”).
They don’t offer advice until they make sure the performer und
erstands the importance of what they saw.
Timely
In most cases, the sooner I get feedback, the better. I don’t want
to wait for hours or days to find out whether my students were
attentive and whether they learned, or which part of my written
story works and which part doesn’t. I say “in most cases” to allo
w for situations like playing a piano piece in a recital. I don’t w
ant my teacher or the audience barking out feedback as I perfor
m. That’s why it is more precise to say that good feedback is “ti
mely” rather than “immediate.”
A great problem in education, however, is untimely feedback. V
ital feedback on key performances often comes days, weeks, or
even months after the performance—
think of writing and handing in papers or getting back results on
standardized tests. As educators, we should work overtime to fi
gure out ways to ensure that students get more timely feedback
and opportunities to use it while the attempt and effects are still
fresh in their minds.
Before you say that this is impossible, remember that feedback
does not need to come only from the teacher or even from peopl
e at all. Technology is one powerful tool—
part of the power of computer-
assisted learning is unlimited, timely feedback and opportunitie
s to use it. Peer review is another strategy for managing the loa
d to ensure lots of timely feedback; it’s essential, however, to tr
ain students to do small-
group peer review to high standards, without immature criticism
s or unhelpful praise.
Ongoing
Adjusting our performance depends on not only receiving feedb
ack but also having opportunities to use it. What makes any asse
ssment in education formative is not merely that it precedes sum
mative assessments, but that the performer has opportunities, if
results are less than optimal, to reshape the performance to bette
r achieve the goal. In summative assessment, the feedback come
s too late; the performance is over.
Thus, the more feedback I can receive in real time, the better m
y ultimate performance will be. This is how all highly successfu
l computer games work. If you play Angry Birds, Halo, Guitar
Hero, or Tetris, you know that the key to substantial improveme
nt is that the feedback is both timely and ongoing. When you fai
l, you can immediately start over—
sometimes even right where you left off—
to get another opportunity to receive and learn from the feedbac
k. (This powerful feedback loop is also user-
friendly. Games are built to reflect and adapt to our changing ne
ed, pace, and ability to process information.)
It is telling, too, that performers are often judged on their abilit
y to adjust in light of feedback. The ability to quickly adapt one
’s performance is a mark of all great achievers and problem solv
ers in a wide array of fields. Or, as many little league coaches s
ay, “The problem is not making errors; you will all miss many b
alls in the field, and that’s part of learning. The problem is whe
n you don’t learn from the errors.”
Consistent
To be useful, feedback must be consistent. Clearly, performers c
an only adjust their performance successfully if the information
fed back to them is stable, accurate, and trustworthy. In educati
on, that means teachers have to be on the same page about what
high-
quality work is. Teachers need to look at student work together,
becoming more consistent over time and formalizing their judg
ments in highly descriptive rubrics supported by anchor product
s and performances. By extension, if we want student-to-
student feedback to be more helpful, students have to be trained
to be consistent the same way we train teachers, using the same
exemplars and rubrics.
Progress Toward a Goal
In light of these key characteristics of helpful feedback, how ca
n schools most effectively use feedback as part of a system of f
ormative assessment? The key is to gear feedback to long-
term goals.
Let’s look at how this works in sports. My daughter runs the mil
e in track. At the end of each lap in races and practice races, the
coaches yell out split times (the times for each lap) and bits of
feedback (“You’re not swinging your arms!” “You’re on pace fo
r 5:15”), followed by advice (“Pick it up—
you need to take two seconds off this next lap to get in under 5:
10!”).
My daughter and her teammates are getting feedback (and advic
e) about how they are performing now compared with their final
desired time. My daughter’s goal is to run a 5:00 mile. She has
already run 5:09. Her coach is telling her that at the pace she ju
st ran in the first lap, she is unlikely even to meet her best time
so far this season, never mind her long-
term goal. Then, he tells her something descriptive about her cu
rrent performance (she’s not swinging her arms) and gives her a
brief piece of concrete advice (take two seconds off the next la
p) to make achievement of the goal more likely.
The ability to improve one’s result depends on the ability to adj
ust one’s pace in light of ongoing feedback that measures perfor
mance against a concrete, long-
term goal. But this isn’t what most school district “pacing guide
s” and grades on “formative” tests tell you.
They yield a grade against recent objectives taught, not useful f
eedback against the final performance standards. Instead of info
rming teachers and students at an interim date whether they are
on track to achieve a desired level of student performance by th
e end of the school year, the guide and the test grade just provid
e a schedule for the teacher to follow in delivering content and
a grade on that content. It’s as if at the end of the first lap of th
e mile race, my daughter’s coach simply yelled out, “B+ on that
lap!”
The advice for how to change this sad situation should be clear:
Score student work in the fall and winter against spring standar
ds, use more pre- and post-
assessments to measure progress toward these standards, and do
the item analysis to note what each student needs to work on for
better future performance.
“But There’s No Time!”
Although the universal teacher lament that there’s no time for s
uch feedback is understandable, remember that “no time to give
and use feedback” actually means “no time to cause learning.”
As we have seen, research shows that less teaching plus more fe
edback is the key to achieving greater learning. And there are n
umerous ways—through technology, peers, and other teachers—
that students can get the feedback they need.
References
Bransford, J. D., Brown, A. L., & Cocking, R. R. (Eds.). (2000).
How people learn: Brain, mind, experience, and school. Washin
gton, DC: National Academy Press.
Hattie, J. (2008). Visible learning: A synthesis of over 800 meta
-analyses relating to achievement. New York: Routledge.
Marzano, R., Pickering, D., & Pollock, J. (2001). Classroom ins
truction that works: Research-
based strategies for increasing student achievement. Alexandria,
VA: ASCD.
Mazur, E. (2009, January 2). Farewell, lecture? Science, 323, 50
–51.
Source: Wiggins, G. (2012). 7 keys to effective feedback. Educa
tional Leadership. 70(1), 10–
19. Alexandria, VA: Association for Supervision and Curriculu
m Development. Copyright © Grant Wiggins.
Summary
Wiggins calls for feedback to be stable, accurate, and trustworth
y. He highlights the difference between feedback, evaluation, an
d grading, implicitly challenging teachers to expand their repert
oire to include all three processes on a regular basis.
Wiggins also calls for frequent feedback, claiming that the more
feedback students receive, the more learning will occur. He con
cludes the article by acknowledging the difficulty of finding the
time to provide such feedback in today’s classrooms; he sugges
ts that teachers consider teaching less and providing more feedb
ack through technology, peers, and other educators. If the goal i
s to enhance and improve learning, then time providing direct fe
edback is well spent.
Wiggins also proposes more pre-
and post-assessments, more item analysis on tests in which stud
ents are provided specific information about their errors, and m
ore early practice testing (i.e., in the fall for spring tests) that c
ould provide individualized feedback as part of classroom practi
ce.
Critical Thinking Questions
1.
What do you think about the concept of teaching less in order to
provide more feedback? What might that look like in today’s cl
assrooms, whether face to face or online?
2.
Providing feedback that actually contributes to learning is not e
asy and is not a skill that educators necessarily learn through pr
eservice teacher education. How do teachers learn to provide fe
edback that is useful?
3.
Wiggins claims that feedback is not the same as evaluation. Yet,
feedback can be part of a formative assessment process that doe
s provide information to learners before it is too late. When sho
uld evaluation or judgment be avoided, and when is it important
to give evaluative comments that help students learn from their
mistakes?
4.
Design a research study in which you and your colleagues woul
d examine feedback to students provided online. Determine how
you would explore the connections between feedback provided
and subsequent student work improvement.
3.4 Feedback and Feed Forward, by Nancy Frey and Doug Fisher
Introduction
Nancy Frey and Doug Fisher are both professors of educational
leadership at San Diego State University. They are the founders
of Literacy for Life and have written and presented about readin
g, collaborative learning, and, most recently, the common core
English language arts standards in PLCs. They are also the auth
ors of the 2011 text, The Formative Assessment Action Plan: Pr
actical Steps to More Successful Teaching and Learning.
The evocative title of this article indicates a new perspective on
what happens after teachers provide feedback to individual stud
ents. Frey and Fisher propose that it is not enough to monitor at
the individual level; rather, teachers need to look for patterns ac
ross students’ work in order to design interventions and targeted
teaching approaches to address group needs.
Frey and Fisher make the connection between feedback, assessm
ent, and “feeding forward” to inform instruction. In their view,
any one of these practices is incomplete without the other two.
The authors also discuss the issue of the focus of feedback, noti
ng that feedback about the assigned task is the most familiar to t
eachers and students. Other types of feedback, from the work of
Hattie and Timperley (2007), include feedback about the proces
s, about self-regulation, and about “the self as a person” (p. 90).
Excerpt
The following is an excerpt from Frey, N., & Fisher, D. (2011).
Feedback and feed forward. Principal Leadership, 11(9), 90–93.
Internet searches often yield surprising results. In preparation f
or writing this column, we searched one of our favorite sayings:
“You can’t fatten sheep by weighing them.” One of the results
was an article from the April 1908 issue of the Farm Journal on
early spring lambs. Among the advice to sheep farmers was to ta
ke care in apportioning their rations so as not to overfeed, to pr
ovide healthy living conditions so they can grow, and to take ca
reful measure of their progress—
and this piece of wisdom: “Study your sheep and know them not
only as a flock but separately, and remember that they have an
individuality as surely as your horse or cow” (Brick, 1908, p. 15
4).
Students are not sheep, of course, but our role as cultivators of
young people has much in common with that of livestock farmer
s. As educators, we recognize the importance of a healthy learni
ng climate and seek to create one each day. In addition, we appo
rtion information so that students can act upon their growing kn
owledge. And we measure their progress regularly to see whethe
r they are making expected gains. As part of effective practice, t
eachers routinely check for understanding through the learning
process. This is most commonly accomplished by asking questio
ns, analyzing tasks, and administering low-
stakes quizzes to measure the extent to which students are acqui
ring new information and skills. But it’s one thing to gather info
rmation (we’re good at that); it’s another thing to respond in me
aningful ways and then plan for subsequent instruction.
Without processes to provide students with solid feedback that y
ields deeper understanding, checking for understanding devolve
s into a game of “guess what’s in the teacher’s brain.” And with
out ways to look for patterns across students, formative assessm
ents become a frustrating academic exercise. Knowing both the
flock and the individuals in it are essential practices for cultivat
ing learning.
Knowing the Individual: Effective Feedback
Most of us have received poor feedback: The teacher who scraw
led “rewrite this” in the margin of an essay we wrote. The coach
who said, “No, you’re doing it wrong; keep practicing.” The co
worker who took over a task and did it for us when our progress
stalled. The frustration on the learner’s part matches that felt b
y the teacher, the coach, or the coworker: why can’t he or she g
et this? That shared vexation produces a mutual sense of defeat.
On the part of the learner, the internal dialogue becomes, “I ca
n’t do this.” The teacher thinks, “I can’t teach this.” Over time,
blame sets in, and the student and the teacher begin to find fault
with each other.
Hattie and Timperley (2007) wrote about feedback across four d
imensions: “Feedback about the task (FT), about the processing
of the task (FP), about self-
regulation (FR), and about the self as a person (FS)” (p. 90). Fo
r example, “You need to put a semicolon in this sentence” (FT)
has limited usefulness and is not usually generalized to other tas
ks. On the other hand, “Make sure that your sentences have nou
n-
verb agreements because it’s going to make it easier for the reader
to understand your argument” (FP) gives feedback information a
bout a writing convention necessary in all essays. The researche
rs go on to note that feedback that moves from information abou
t the process to information about self-
regulation is the best of all: “Try reading some of your sentence
s aloud so you can hear when you have and don’t have noun-
verb agreement.” The researchers go on to say that FS (“You’re
a good writer”) is the least useful, even when it is positive in na
ture, because it doesn’t add anything to one’s learning.
Done carefully, FT can have a modest amount of usefulness, as
when editing a paper. Yet feedback about the task is by far the
most common kind we offer. The problem is that the task offers
only end-
game analysis and leaves the learner with little direction on wha
t to do, particularly when there isn’t any recourse to make chan
ges. Most writing teachers will tell you that it is not uncommon
for students to engage in limited revision, confined to the specif
ic items listed in the teacher feedback—
more recopying than revising. But feedback about the processes
used in the task and further advice about one’s self-
regulatory strategies to make revisions can leave the learner wit
h a plan for next steps.
Consider the dialogue between English teacher John Goodwin a
nd Alicia, a student in his class. Alicia has drafted an essay on
bullying, and Goodwin is providing feedback about her work. C
areful to frame his feedback so that it can result in a plan for re
vision, he draws her attention to her thesis statement and says, “
It’s helpful for writers to go back to the main point of the essay
and read to see if the evidence is there. I highlight in yellow so
I can see if I’ve done that.” The two of them reread her first thr
ee paragraphs and highlight where she has provided national sta
tistics and direct quotes from teachers she knows.
Goodwin goes on to say, “Now what I want you to do is look for
ways you’ve provided supporting evidence, like citing sources.
Let’s highlight those in green.” Alicia quickly notices that whil
e she has made claims, she hasn’t capitalized on any authoritati
ve sources. And by confining her direct quotes to teachers at her
school, she has limited the impact of her essay by failing to qu
ote more widely known sources. The little bit of green on her es
say illustrates what she needs to do next: strengthen her sources
. Goodwin ends the conversation by saying, “It sounds like you
have a plan for revising the content. Let’s meet again on Wedne
sday and you can update me on your progress.”
Feedback of this kind takes only a few minutes, yet it can add u
p in a crowded classroom. For this reason, many teachers rely o
n written forms of feedback instead of direct conversations. Eve
n in written form, the guidelines about feedback remain the sam
e: focus on the processes needed for the task, move to informati
on about behaviors within the student’s influence to make chang
es, and steer clear of comments that are either too global or too
minute to be of much use. Wiggins (1998) advises constructing
written feedback so that it meets four important criteria: first, it
must be timely so that it is paired as closely as possible with th
e attempt; second, it should be specific in nature; third, it shoul
d be written in a manner that is understandable to the student; a
nd fourth, it should be actionable so that the learner can make re
visions.
Knowing the Flock: Feed Forward
Although feedback is primarily at the individual level, feed for
ward describes the process of making instructional decisions ab
out what should happen next (Frey & Fisher, in press). Data abo
ut student progress is commonly gathered using common format
ive assessments—
either commercially produced or made by the teacher. In additio
n, many school teams engage in consensus scoring with colleagu
es to calibrate practices, especially with tasks that have a signifi
cant qualitative component, such as writing (Fisher, Frey, Farna
n, Fearn, & Petersen, 2004). Lack of time to work with other col
leagues can limit these practices, however. The good news is th
at a teacher’s own classroom can serve as the unit of analysis as
well.
With all the solid feedback provided to students, it seems natura
l to take it one step further by recording results and some patter
n analysis. For example, mathematics teacher Ben Teichman kee
ps track of student progress across several dimensions of instruc
tion. As he provides written or verbal feedback to his students,
he notes which skills they have mastered and which ones are stil
l proving difficult for them. His error analysis record sheet enab
les him to make decisions about who needs reteaching and when
it needs to occur (see Figure 3.1). “All the feedback in the worl
d isn’t going to do much good if what they really need is more i
nstruction,” said Teichman, an insight Hattie and Timperley (20
07) share.
Figure 3.1: Error analysis sheet in Algebra II: Introduction to complex numbers
Teachers can use an error analysis sheet to record the initials of
students who have not mastered instructional goals.
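A record like Teichman's sheet, and the reteaching decision it supports, could be kept as follows. This is a rough sketch under stated assumptions: the period labels, concept names, and the 50% whole-class cutoff are hypothetical, not from the article:

```python
# Hypothetical threshold: reteach the whole class when at least
# half the students are still struggling with a concept.
WHOLE_CLASS_CUTOFF = 0.5

def plan_reteaching(errors, class_sizes):
    """errors maps period -> concept -> list of initials of students
    still struggling. Returns a plan per period: a whole-class
    reteach when enough of the class is struggling, otherwise a
    targeted small group."""
    plan = {}
    for period, concepts in errors.items():
        size = class_sizes[period]
        plan[period] = {}
        for concept, students in concepts.items():
            if not students:
                continue  # concept mastered; nothing to plan
            if len(students) / size >= WHOLE_CLASS_CUTOFF:
                plan[period][concept] = "whole-class reteach"
            else:
                plan[period][concept] = "small group: " + ", ".join(students)
    return plan
```

The design choice mirrors the article's two moves: the lists of initials are the individual feedback records, and the ratio per concept is the pattern analysis across the flock.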
Unlike a checklist to track mastery, Teichman’s error analysis s
heet is used to identify the students who are struggling. He logs
the initials of students in each period who are still having diffic
ulty with major concepts after initial instruction, then makes de
cisions about follow up and reteaching. For example, the error a
nalysis sheet shows that all of his classes are still having difficu
lty with understanding the relationship between different forms
of representing imaginary numbers. That tells him that reteachin
g to the whole group is in order. On the other hand, smaller gro
ups of students are having trouble with other concepts. “I need t
o pull those students into small groups, because the majority of
the class is doing fine otherwise,” he said. Fourth period is anot
her story. “I’ve got lots of students all across the board who are
struggling with this whole unit,” he said. “Time for me to take a
few steps back and revisit what they know already about radicals before we dive back into imaginary numbers.”
Conclusion
“To be successful, [the sheep farmer] must also be gentle, with
a watchful eye for little things . . . and a hundred minor details
upon which success depends,” wrote Brick (1908, p. 154) more t
han a century ago. Feedback and feed-
forward processes in the classroom should be used to cultivate l
earning, and not just simply measure it. By providing students
with feedback they can use to revise and by tracking student pro
gress to determine who needs subsequent instruction and when i
t should occur, educators can ensure that they feed and not mere
ly weigh.
References
Brick, H. (1908). Early spring lambs. The Farm Journal, 32(4),
153–154.
Frey, N., & Fisher, D. (in press). The formative assessment acti
on plan: Practical steps to more successful teaching and learnin
g. Alexandria, VA: ASCD.
Fisher, D., Frey, N., Farnan, N., Fearn, L., & Petersen, F. (2004)
. Increasing writing achievement in an urban middle school. Mi
ddle School Journal, 36(2), 21–26.
Hattie, J., & Timperley, H. (2007). The power of feedback. Revi
ew of Educational Research, 77, 81–112.
Wiggins, G. (1998). Educative assessment: Designing assessme
nts to inform and improve student performance. San Francisco,
CA: Jossey-Bass.
Source: Frey, N. & Fisher, D. (2011). Feedback and feed forwar
d. Principal Leadership, 11(9), 90–
93. Copyright (2014) National Association of Secondary School
Principals. For more information on NASSP products and servi
ces to promote excellence in middle level and high school leade
rship, visit www.nassp.org.
Summary
Frey and Fisher provide specifics on one type of data regarding
feedback that teachers would find beneficial. They suggest that
teachers keep checklists or error analysis sheets in order to dete
rmine the types of feedback they have provided to individual stu
dents as well as the types of errors that students make.
Collecting these data, however, is only the first step. Frey and F
isher challenge teachers to then use these data to determine thei
r next steps in teaching, reteaching, or other follow-
up steps. Although the article does not provide great detail in te
rms of the implications for planning, the authors claim that simp
ly measuring learning without considering what those measurem
ents mean for teaching will not lead to improved learning. Frey
and Fisher note that these practices can work for individual teac
hers but would be even more effective if shared across class sec
tions or schools. Finally, these authors remind readers of the val
ue of verbal as well as written feedback.
Critical Thinking Questions
1.
The notions of feedback and feed forward are unique in the liter
ature on assessment. What are the implications of this concept
for online teaching and learning? How can feedback (given and
received) in an online environment contribute to instructional d
esign?
2.
What’s the difference between feedback concerning a task and f
eedback concerning the process of completing the task? Give an
example from your own experience or subject context.
3.
Research is beginning to emerge concerning the connections bet
ween assessment data (usually with respect to standardized test
performance) and instruction, although there are few studies tha
t demonstrate effective and intentional connections that teachers
make between assessments and what they do in the classroom (
Conderman & Hedin, 2012; Watts-
Taffe et al., 2012). How might teachers explore these connectio
ns as a PLC or grade-
level team? What might the evidence of follow-
up and reteaching and use of assessment data look like?
4.
Make the argument that teachers almost intuitively give this kin
d of verbal and written feedback followed by feeding forward to
inform their teaching. Then, counter that argument and suggest,
as Frey and Fisher do, that teachers in fact don’t do this enough
and that they need to be taught to connect assessment and desig
n of their instruction.
3.5 Four Steps in Grading Reform, by Thomas Guskey and Lee
Ann Jung
Introduction
Thomas Guskey is a well-
known professor and expert in the area of professional develop
ment for educators. Guskey and co-
author Lee Ann Jung maintain a teacher-
learning lens in this article that focuses on four proposals for m
eaningful grading.
Grading students’ performance is seldom written about or discus
sed, though it is usually the culmination of the assessment proce
ss in classrooms and schools. Educators and leaders in schools a
ssume that teachers know how to determine the letter grades tha
t appear on report cards each term. Guskey and Jung challenge t
hat assumption. They call for reform and rethink grading practic
es at the school and classroom levels.
The authors note that there is great variety in the ways that teac
hers determine grades. Moreover, there is seldom a clearly state
d purpose for the grades printed on the report card for parents a
nd students to see. The article discusses an alternative to just on
e letter grade for a subject or content area and instead discusses
the value of process, progress, and product grades to fully repre
sent students’ learning during the term.
Finally, Guskey and Jung propose a means of grading students
who are not currently on grade level, reminding us that the stan
dards movement offers great potential for gain for students who
are struggling and need feedback based on exactly where they ar
e in their learning.
Excerpt
The following is an excerpt from Guskey, T. R., & Jung, L. A. (
2012). Four steps in grading reform. Principal Leadership, 13(4)
, 22–28.
The field of education is rapidly moving toward a standards-
based approach to grading. School leaders have become increasi
ngly aware of the tremendous variation that exists in grading pr
actices, even among teachers of the same courses in the same de
partment in the same school. Consequently, students’ grades oft
en have little relation to their performance on state assessments
—
an issue that has education leaders and parents alike concerned.
Such inconsistencies lead many to perceive grading as a distinct
ively idiosyncratic process that is highly subjective and often u
nfair to students.
Complicating reform efforts, however, is the fact that few schoo
l leaders have extensive knowledge of various grading methods,
the advantages and shortcomings of those methods, and the effe
cts that different grading policies have on students (Brookhart,
2011a; Brookhart & Nitko, 2008; Stiggins, 1993; Stiggins & Ch
appuis, 2011). As a result, attempts at grading reform often lack
direction and coherence and rarely bring about significant impr
ovement in the accuracy or relevance of the grades students rece
ive.
Effective grading reform requires four steps. Although each step
addresses a different aspect of grading and reporting, all of the
steps are related. Together, the four steps are the foundation of
grading policies and practices that are fair, meaningful, educati
onally sound, and beneficial to students.
Be Clear About the Purpose
One of the major reasons that school leaders run into difficultie
s in their attempts to reform grading and reporting is that they f
ail to identify the purpose of grading. Enamored of the promise
of new online grade books and reporting software, they charge a
head without giving serious thought to the function of grades as
communication tools. In particular, they fail to consider what in
formation they want grades to communicate, who is the primary
audience for that information, and what outcome they want to ac
hieve. As a result, predictable problems arise that thwart even t
he most dedicated attempts at reform.
Compounding the problem, parents, teachers, students, and scho
ol leaders typically see report cards serving quite different purp
oses. Some suggest that those differences stem from the conflict
ing opinions about the report cards’ intended audience. Are they
designed to communicate information primarily to parents, stud
ents, or school personnel?
Although a variety of purposes for grades and report cards may
be considered legitimate, educators seldom agree on the primary
purpose. This lack of consensus leads to attempts to develop a r
eporting device that addresses multiple purposes but ends up ad
dressing no purpose very well (Austin & McCann, 1992; Brookh
art, 1991, 2011a; Cross & Frary, 1999).
The simple truth is that no single reporting device can serve all
purposes well. In fact, some purposes are actually counter to oth
ers.
For example, suppose that nearly all students in a particular sch
ool attain high levels of achievement and earn high grades. Thos
e results pose no problem if the purpose of the report cards is to
communicate information about students’ achievement to paren
ts or to provide information to students for the purpose of self-
evaluation. But that same result poses major problems if the pu
rpose of the report cards is to select students for special educati
onal paths or to evaluate the effectiveness of instructional progr
ams.
To use grades for selection or evaluation purposes requires vari
ation in the grades—
and the more variation, the better! For those purposes, grades sh
ould be dispersed across all possible categories to maximize the
differences among students and programs. How else can approp
riate selection take place or one program be judged as being bett
er than another if all students receive the same high grades? Det
ermining differences under such conditions is impossible.
The first decision that must be made in any reform effort, theref
ore, is determining the purpose of the grades and report card. Th
e struggles that most school leaders experience in reforming gra
ding policies and practices stem from changing their grading me
thods before they reach consensus about the purpose of grades a
nd report cards (Brookhart, 2011b). All changes in grading polic
y and practice must build from a clearly articulated purpose stat
ement, which should be printed on the report card itself so that
all who look at the report card understand its intent. When a cle
ar purpose is defined, decisions about the most appropriate poli
cies and practices are much easier to make.
Use Multiple Grades
Another issue that poses a significant obstacle to grading and re
porting reform is the insistence that students receive a single gr
ade for each subject area or course. The simplest logic reveals t
hat this practice makes little sense. If someone proposed combin
ing measures of height, weight, diet, and exercise into a single n
umber or mark to represent a person’s physical condition, we w
ould consider it ridiculous. How could the combination of such
diverse measures yield anything meaningful? Yet every day, tea
chers combine evidence of student achievement, attitude, respon
sibility, effort, and behavior into a single grade, and no one que
stions it.
In determining students’ grades, teachers frequently merge scor
es from major exams, compositions, quizzes, projects, and repor
ts with evidence from homework, punctuality in turning in assig
nments, class participation, work habits, and effort. Computeriz
ed grading programs help teachers apply different weights to ea
ch of those categories (Guskey, 2002a), which they then combin
e in widely varied ways (see McMillan, 2001; McMillan, Myran
, & Workman, 2002). The result is what researchers refer to as a
“hodgepodge grade” (Cross & Frary, 1999).
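The weighted combination described above can be sketched as a weighted average. The category names and weights here are hypothetical assumptions; real grading programs let each teacher choose their own:

```python
# Hypothetical category weights mixing achievement with behavior evidence.
WEIGHTS = {"exams": 0.40, "projects": 0.20, "homework": 0.20, "participation": 0.20}

def hodgepodge_grade(scores, weights=WEIGHTS):
    """Collapse achievement, effort, and behavior evidence into one
    number, the practice the authors critique."""
    return sum(weights[category] * scores[category] for category in weights)
```

A student with strong exams but missing homework and a student with weak exams but perfect participation can land on the same single number, which is one reason such a grade is hard to interpret.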
Another more meaningful approach is to offer separate grades f
or product, process, and progress learning criteria (Guskey, 200
6; Guskey & Bailey, 2010).
Product criteria reflect what students know and are able to do at
a specific point in time. In other words, they reflect students’ c
urrent level of achievement. Evidence of meeting product criteri
a comes from culminating or “summative” evaluations of studen
t performance (O’Connor, 2009). Teachers who use product crit
eria typically base grades on final examination scores; final rep
orts, projects, or exhibits; overall assessments; and other culmin
ating demonstrations of learning.
Process criteria are emphasized by educators who believe that p
roduct criteria do not provide a complete picture of student lear
ning. They contend that grades should reflect not only the final
results but also how students got there. Teachers who consider r
esponsibility, effort, or work habits when assigning grades use p
rocess criteria. So do those who count classroom quizzes, forma
tive assessments, homework, punctuality in turning in assignments, class participation, or attendance.
Progress criteria are based on how much students have gained fr
om their learning experiences. Other names for progress criteria
include learning gain, improvement scoring, value-
added learning, and educational growth. Teachers who use progr
ess criteria look at how much improvement students have made
over a particular period of time, rather than just where they are
at a given moment. As a result, scoring criteria may be highly in
dividualized among students. For example, grades might be base
d on the number of skills or standards in a learning progression
that students mastered and on the adequacy of that level of prog
ress for each student. Most of the research evidence on progress
criteria comes from studies of individualized instruction (Esty
& Teppo, 1992) and special education programs (Gersten, Vaug
hn, & Brengelman, 1996; Jung & Guskey, 2012).
After establishing specific indicators of product, process, and pr
ogress learning criteria, teachers then assign separate grades to
each set of indicators. In this way, they keep grades for achieve
ment separate from grades for responsibility, learning skills, eff
ort, work habits, or learning progress (Guskey, 2002b; Stiggins,
2008). This allows a more accurate and comprehensive picture o
f what students accomplish in school.
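Reporting the three criteria separately might be sketched as follows. This is an illustration under assumed groupings: which indicators count as product or process, and measuring progress as standards mastered this term, are hypothetical choices, not the authors' rubric:

```python
def three_grades(product_scores, process_scores, prior_level, current_level):
    """Report product (achievement), process (habits and effort), and
    progress (learning gain) as separate grades instead of one mark."""
    return {
        "product": sum(product_scores) / len(product_scores),
        "process": sum(process_scores) / len(process_scores),
        # progress as gain, e.g. standards mastered since the prior term
        "progress": current_level - prior_level,
    }
```

Keeping the three numbers apart means a strong achiever with weak work habits, or a struggling student making large gains, is visible as exactly that rather than being averaged away.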
Reporting separate grades for product, process, and progress cri
teria also makes grading more meaningful. Grades for academic
achievement reflect precisely that—academic achievement—
and not some confusing amalgamation that’s impossible to inter
pret and that rarely presents a true picture of students’ proficien
cy (Guskey, 2002a). Teachers also indicate that students take pr
ocess elements, such as homework, more seriously when it’s rep
orted separately. Parents favor the practice because it provides a
more comprehensive profile of their children’s performance in
school (Guskey, Swan, & Jung, 2011). The key to success in rep
orting multiple grades, however, rests in the clear specification
of the indicators that relate to product, process, and progress cri
teria. Teachers must be able to describe how they plan to evalua
te students’ achievement, attitude, effort, behavior, and progress
. Then they must clearly communicate those criteria to students,
parents, and others.
Change Procedures for Selecting the Class Valedictorian and Eli
minate Class Rank
The third step involves challenging a long-
held tradition in education. Most school leaders today understan
d the negative consequences of grading on the curve. They reco
gnize that when grades are based on students’ relative standing
among classmates, rather than on what students actually achieve
, it’s impossible to tell if anyone learned anything.
Most school leaders also see that grading on the curve makes le
arning highly competitive for students who must battle one anot
her for the few scarce rewards (high grades) awarded by the tea
cher. Such competition discourages students from cooperating o
r helping one another because doing so might hurt the helper’s c
hance at success (Krumboltz & Yeh, 1996). Similarly, teachers
may refrain from helping individual students under those condit
ions because some students might construe this as showing favo
ritism and biasing the competition (Gray, 1993). School leaders
may fail to recognize that other common school policies yield si
milar negative consequences, such as calculating students’ class
rank on the basis of weighted GPAs and selecting the top stude
nt as the class valedictorian.
There is nothing wrong with recognizing excellence in academic
performance. But when calculating class rank, the focus is on s
orting and selecting talent, rather than on developing talent. The
struggle to be on top of the sorting process and then chosen as
class valedictorian leads to serious and sometimes bitter compet
ition among high-
achieving students. Early in their high school careers, top stude
nts analyze the selection procedures and then, often with the hel
p of their parents, find ingenious ways to improve their standing
. Gaining that honor requires not simply high achievement; it re
quires outdoing everyone else. And sometimes the difference a
mong top-achieving students is as little as one-
thousandth of a decimal point in a weighted GPA.
Ironically, the term valedictorian has nothing to do with achievement. It comes from the Latin vale dicere, which means “to say farewell.” The first reference to the term appeared in the diary of the Reverend Edward Holyoke, president of Harvard College, who in 1759 noted that “Officers of the Sophisters chose a Valedictorian.” Lacking any established criteria, the Sophisters (senior class members) arbitrarily selected the classmate with the highest academic standing to deliver the commencement address. Within a few years, most colleges and universities moved away from competitive ranking procedures to identify honor students and instead adopted the criterion-based Latin system, graduating students cum laude, magna cum laude, and summa cum laude. Most also altered their procedures for selecting a commencement speaker, using such means as student votes and appointments made by faculty members on the basis of not only grades but also involvement in service projects and participation in extracurricular activities.
More and more high schools today are moving away from competitive ranking systems and adopting criterion-based systems similar to those used in colleges and universities. Rigorous academic criteria are established for attaining the high honor categories, but no limit is set on the number of students who might attain that level of achievement. Schools that establish such policies generally find that student achievement rises as more students strive to attain the honor. In addition, students begin helping each other gain the honor because helping a classmate can actually help, rather than hinder, the helper’s chance of success. Instead of pitting students against each other, such a system unites students and teachers in efforts to master the curriculum and meet rigorous academic standards.
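The contrast between the two selection models can be sketched in code. This is only an illustration; the student names and GPA cutoffs are hypothetical, not drawn from the text, and real schools set their own criteria.

```python
def rank_based_valedictorian(gpas):
    """Norm-referenced: exactly one student earns the honor,
    no matter how well everyone else performs."""
    return max(gpas, key=gpas.get)

def criterion_based_honors(gpas, cutoffs=None):
    """Criterion-referenced: every student who meets a fixed standard
    earns the honor; there is no cap on how many can qualify."""
    # Hypothetical cutoffs for illustration only.
    cutoffs = cutoffs or [(3.9, "summa cum laude"),
                          (3.7, "magna cum laude"),
                          (3.5, "cum laude")]
    honors = {}
    for student, gpa in gpas.items():
        for cutoff, title in cutoffs:
            if gpa >= cutoff:
                honors[student] = title
                break
    return honors

gpas = {"Avery": 3.912, "Blake": 3.911, "Casey": 3.55}
print(rank_based_valedictorian(gpas))  # a single winner, by 0.001 of a point
print(criterion_based_honors(gpas))    # both top students earn the same honor
```

Under the rank-based rule, Blake loses the honor by one-thousandth of a point; under the criterion-based rule, both top students qualify for summa cum laude, so helping a classmate cannot cost either of them the award.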
Recognizing excellence in academic performance is a vital aspect of any learning community. But such recognition need not be based on arbitrary criteria and deleterious competition. Instead, it can and should be based on clear models of excellence that exemplify the highest standards and goals for students (see Guskey & Bailey, 2010). Educators can then take pride in helping the largest number of students possible meet those rigorous criteria and high standards of excellence.
Give Honest, Accurate, and Meaningful Grades
The fourth step in effective reform of grading and reporting is to ensure honest, accurate, and meaningful grades for exceptional and struggling learners. Of all the students in a school’s population, those who have disabilities or who are struggling learners have the most to gain from a standards-based approach. For those students, intervention decisions depend on having clear and complete information on their performance.
But moving to standards-based grading presents a serious challenge. By removing non-
ISYU TUNGKOL SA SEKSWLADIDA (ISSUE ABOUT SEXUALITYKayeClaireEstoconing
 
Science 7 Quarter 4 Module 2: Natural Resources.pptx
Science 7 Quarter 4 Module 2: Natural Resources.pptxScience 7 Quarter 4 Module 2: Natural Resources.pptx
Science 7 Quarter 4 Module 2: Natural Resources.pptxMaryGraceBautista27
 
Procuring digital preservation CAN be quick and painless with our new dynamic...
Procuring digital preservation CAN be quick and painless with our new dynamic...Procuring digital preservation CAN be quick and painless with our new dynamic...
Procuring digital preservation CAN be quick and painless with our new dynamic...Jisc
 
Grade 9 Q4-MELC1-Active and Passive Voice.pptx
Grade 9 Q4-MELC1-Active and Passive Voice.pptxGrade 9 Q4-MELC1-Active and Passive Voice.pptx
Grade 9 Q4-MELC1-Active and Passive Voice.pptxChelloAnnAsuncion2
 
Incoming and Outgoing Shipments in 3 STEPS Using Odoo 17
Incoming and Outgoing Shipments in 3 STEPS Using Odoo 17Incoming and Outgoing Shipments in 3 STEPS Using Odoo 17
Incoming and Outgoing Shipments in 3 STEPS Using Odoo 17Celine George
 
Roles & Responsibilities in Pharmacovigilance
Roles & Responsibilities in PharmacovigilanceRoles & Responsibilities in Pharmacovigilance
Roles & Responsibilities in PharmacovigilanceSamikshaHamane
 
Difference Between Search & Browse Methods in Odoo 17
Difference Between Search & Browse Methods in Odoo 17Difference Between Search & Browse Methods in Odoo 17
Difference Between Search & Browse Methods in Odoo 17Celine George
 
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdfLike-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdfMr Bounab Samir
 
HỌC TỐT TIẾNG ANH 11 THEO CHƯƠNG TRÌNH GLOBAL SUCCESS ĐÁP ÁN CHI TIẾT - CẢ NĂ...
HỌC TỐT TIẾNG ANH 11 THEO CHƯƠNG TRÌNH GLOBAL SUCCESS ĐÁP ÁN CHI TIẾT - CẢ NĂ...HỌC TỐT TIẾNG ANH 11 THEO CHƯƠNG TRÌNH GLOBAL SUCCESS ĐÁP ÁN CHI TIẾT - CẢ NĂ...
HỌC TỐT TIẾNG ANH 11 THEO CHƯƠNG TRÌNH GLOBAL SUCCESS ĐÁP ÁN CHI TIẾT - CẢ NĂ...Nguyen Thanh Tu Collection
 
USPS® Forced Meter Migration - How to Know if Your Postage Meter Will Soon be...
USPS® Forced Meter Migration - How to Know if Your Postage Meter Will Soon be...USPS® Forced Meter Migration - How to Know if Your Postage Meter Will Soon be...
USPS® Forced Meter Migration - How to Know if Your Postage Meter Will Soon be...Postal Advocate Inc.
 
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️9953056974 Low Rate Call Girls In Saket, Delhi NCR
 
Field Attribute Index Feature in Odoo 17
Field Attribute Index Feature in Odoo 17Field Attribute Index Feature in Odoo 17
Field Attribute Index Feature in Odoo 17Celine George
 
Visit to a blind student's school🧑‍🦯🧑‍🦯(community medicine)
Visit to a blind student's school🧑‍🦯🧑‍🦯(community medicine)Visit to a blind student's school🧑‍🦯🧑‍🦯(community medicine)
Visit to a blind student's school🧑‍🦯🧑‍🦯(community medicine)lakshayb543
 
Influencing policy (training slides from Fast Track Impact)
Influencing policy (training slides from Fast Track Impact)Influencing policy (training slides from Fast Track Impact)
Influencing policy (training slides from Fast Track Impact)Mark Reed
 
ACC 2024 Chronicles. Cardiology. Exam.pdf
ACC 2024 Chronicles. Cardiology. Exam.pdfACC 2024 Chronicles. Cardiology. Exam.pdf
ACC 2024 Chronicles. Cardiology. Exam.pdfSpandanaRallapalli
 
Q4 English4 Week3 PPT Melcnmg-based.pptx
Q4 English4 Week3 PPT Melcnmg-based.pptxQ4 English4 Week3 PPT Melcnmg-based.pptx
Q4 English4 Week3 PPT Melcnmg-based.pptxnelietumpap1
 
MULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptx
MULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptxMULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptx
MULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptxAnupkumar Sharma
 

Kürzlich hochgeladen (20)

OS-operating systems- ch04 (Threads) ...
OS-operating systems- ch04 (Threads) ...OS-operating systems- ch04 (Threads) ...
OS-operating systems- ch04 (Threads) ...
 
Gas measurement O2,Co2,& ph) 04/2024.pptx
Gas measurement O2,Co2,& ph) 04/2024.pptxGas measurement O2,Co2,& ph) 04/2024.pptx
Gas measurement O2,Co2,& ph) 04/2024.pptx
 
ISYU TUNGKOL SA SEKSWLADIDA (ISSUE ABOUT SEXUALITY
ISYU TUNGKOL SA SEKSWLADIDA (ISSUE ABOUT SEXUALITYISYU TUNGKOL SA SEKSWLADIDA (ISSUE ABOUT SEXUALITY
ISYU TUNGKOL SA SEKSWLADIDA (ISSUE ABOUT SEXUALITY
 
Science 7 Quarter 4 Module 2: Natural Resources.pptx
Science 7 Quarter 4 Module 2: Natural Resources.pptxScience 7 Quarter 4 Module 2: Natural Resources.pptx
Science 7 Quarter 4 Module 2: Natural Resources.pptx
 
Procuring digital preservation CAN be quick and painless with our new dynamic...
Procuring digital preservation CAN be quick and painless with our new dynamic...Procuring digital preservation CAN be quick and painless with our new dynamic...
Procuring digital preservation CAN be quick and painless with our new dynamic...
 
Grade 9 Q4-MELC1-Active and Passive Voice.pptx
Grade 9 Q4-MELC1-Active and Passive Voice.pptxGrade 9 Q4-MELC1-Active and Passive Voice.pptx
Grade 9 Q4-MELC1-Active and Passive Voice.pptx
 
Incoming and Outgoing Shipments in 3 STEPS Using Odoo 17
Incoming and Outgoing Shipments in 3 STEPS Using Odoo 17Incoming and Outgoing Shipments in 3 STEPS Using Odoo 17
Incoming and Outgoing Shipments in 3 STEPS Using Odoo 17
 
Roles & Responsibilities in Pharmacovigilance
Roles & Responsibilities in PharmacovigilanceRoles & Responsibilities in Pharmacovigilance
Roles & Responsibilities in Pharmacovigilance
 
Difference Between Search & Browse Methods in Odoo 17
Difference Between Search & Browse Methods in Odoo 17Difference Between Search & Browse Methods in Odoo 17
Difference Between Search & Browse Methods in Odoo 17
 
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdfLike-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
 
HỌC TỐT TIẾNG ANH 11 THEO CHƯƠNG TRÌNH GLOBAL SUCCESS ĐÁP ÁN CHI TIẾT - CẢ NĂ...
HỌC TỐT TIẾNG ANH 11 THEO CHƯƠNG TRÌNH GLOBAL SUCCESS ĐÁP ÁN CHI TIẾT - CẢ NĂ...HỌC TỐT TIẾNG ANH 11 THEO CHƯƠNG TRÌNH GLOBAL SUCCESS ĐÁP ÁN CHI TIẾT - CẢ NĂ...
HỌC TỐT TIẾNG ANH 11 THEO CHƯƠNG TRÌNH GLOBAL SUCCESS ĐÁP ÁN CHI TIẾT - CẢ NĂ...
 
USPS® Forced Meter Migration - How to Know if Your Postage Meter Will Soon be...
USPS® Forced Meter Migration - How to Know if Your Postage Meter Will Soon be...USPS® Forced Meter Migration - How to Know if Your Postage Meter Will Soon be...
USPS® Forced Meter Migration - How to Know if Your Postage Meter Will Soon be...
 
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
 
Field Attribute Index Feature in Odoo 17
Field Attribute Index Feature in Odoo 17Field Attribute Index Feature in Odoo 17
Field Attribute Index Feature in Odoo 17
 
Visit to a blind student's school🧑‍🦯🧑‍🦯(community medicine)
Visit to a blind student's school🧑‍🦯🧑‍🦯(community medicine)Visit to a blind student's school🧑‍🦯🧑‍🦯(community medicine)
Visit to a blind student's school🧑‍🦯🧑‍🦯(community medicine)
 
Influencing policy (training slides from Fast Track Impact)
Influencing policy (training slides from Fast Track Impact)Influencing policy (training slides from Fast Track Impact)
Influencing policy (training slides from Fast Track Impact)
 
ACC 2024 Chronicles. Cardiology. Exam.pdf
ACC 2024 Chronicles. Cardiology. Exam.pdfACC 2024 Chronicles. Cardiology. Exam.pdf
ACC 2024 Chronicles. Cardiology. Exam.pdf
 
Q4 English4 Week3 PPT Melcnmg-based.pptx
Q4 English4 Week3 PPT Melcnmg-based.pptxQ4 English4 Week3 PPT Melcnmg-based.pptx
Q4 English4 Week3 PPT Melcnmg-based.pptx
 
LEFT_ON_C'N_ PRELIMS_EL_DORADO_2024.pptx
LEFT_ON_C'N_ PRELIMS_EL_DORADO_2024.pptxLEFT_ON_C'N_ PRELIMS_EL_DORADO_2024.pptx
LEFT_ON_C'N_ PRELIMS_EL_DORADO_2024.pptx
 
MULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptx
MULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptxMULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptx
MULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptx
 

Contextual FactorsDefinitionFactors which reflect a particul

  • 1. Contextual Factors
Definition: factors that reflect a particular context; characteristics unique to a particular group, community, society, or individual.
Context: the educational setting.
Characteristics: qualities particular to a person, place, or thing (here, the characteristics of the educational setting you will report on). Provide a discussion of the contextual factors in your school.
Specifics for Discussion in Contextual
COMMUNITY
Urban or rural
Community composition (ethnic, political, progressive)
Student population (what is it made up of: black/white, girls/boys?)
Student achievement level (A, B, C students). You may offer test scores as evidence. Are adjustments needed to ensure student achievement? Where do these students live in your community?
What type of social community is it (working class, farming, middle class, lower class)? What drives employment (high-paying or low-paying jobs)? What are families' income levels?
How typical is your school in comparison to other schools (small, large, regular; ethnic, political, progressive)?
Characteristics of the school itself (age of the building, number of classrooms, typical classroom size)
What grade level are your students? What grade levels exist in your school?
Describe the characteristics of the classroom (small, large, windows, doors, etc.)
  • 2. Describe the classroom atmosphere.
Compile a list of the following and then discuss each in your TWS (identified previously):
Classroom characteristics (ex: The classroom was small and not well lighted. There are 15 desks in the classroom and one blackboard on the back wall. Two bookshelves are located on each side of the door as you enter the room. The lighting in the room was poor, as several bulbs need replacing, etc.) You must describe:
Student characteristics
Community characteristics
District characteristics
Building characteristics (these may vary in each building of the school)
Identify a group of students with similar characteristics and discuss that group (remain with the contextual subject).
You may also identify one student's characteristics and discuss them.
For Your Information
Follow all guidelines and make sure you discuss what is being asked of you. This is the contextual section; make sure you discuss only contextual factors.
You should have 1-2 pages for contextual factors.
Draw conclusions. What conditions result in low grades? Some may be poor attendance, overcrowded classrooms, lack of
  • 3. parent involvement, lack of qualified staff, and so on. What conditions improve student achievement (e.g., classrooms that are not overcrowded)? The implications that may cause a particular condition are what you want to report. Use what you are reporting to make this a good section of your paper.
Don't include student or parent names in your report.
Know who you are teaching.
Learning Goals
Now that you have the contextual section, you may begin to develop learning goals.
Align goals with national, state, or local standards.
Have 3 to 6 learning goals.
Clearly state each learning goal.
Review Bloom's Taxonomy's six levels of learning: Knowledge, Comprehension, Application, Analysis, Synthesis, and Evaluation.
Don't list learning goals as activities.
Discuss why the goals you are using are important.
State each learning goal; make sure it is significant, clear, challenging, and appropriate for students.
Use subheadings to orient the reader to what you are explaining: appropriateness, challenge, significance, etc. Subheadings are in bold in each of the TWS processes.
Explain how each learning goal is aligned with Bloom's Taxonomy. You may be brief in your explanation to ensure you meet page requirements.
Write a clear explanation of why each learning goal is appropriate.
  • 4. Look at the following learning goals and determine which are written properly:
Students will develop good skills. Yes / No
Students will understand how to identify verbs. Yes / No
Students will be able to identify a complete sentence. Yes / No
Students will state all the months in a year. Yes / No
Students will be familiar with the rules of tennis. Yes / No
Students will grow corn at home. Yes / No
Circle yes or no for each statement. Discuss why each rejected statement was not a goal.
Assessment (Process 3)
While constructing the assessment plan, the teacher uses multiple assessment modes and approaches, aligned with learning goals, to assess student learning before, during, and after instruction.
Charts must be used in this section. You may use the following chart, which can be modified for your goals. Its columns are: Learning Goals | Assessments | Format of Assessment | Adaptations.
Learning Goals should list each of your learning goals. Remember to write learning goals as stated in the learning goal section.
Assessments used in this section should include pre-assessments, formative assessments, and post-assessments, as well as any others you may choose to use.
Format of Assessment should state what method of assessment you will use at this point to assess your goals. Examples of activities you may use are bell ringers,
  • 5. homework, projects, written tests, etc.
Adaptations should describe how you would use adaptations or substitutes for students. For example, a child can't see written material because his glasses were broken at recess. What would you do so he can participate in the assessment process? You would use adaptations for any child who, to your knowledge, faces a barrier to the learning process.
Next, explain using subsections to guide your response. Always list the subsections as they appear and respond in writing to how you are addressing each.
The first item listed on your TWS information sheet is learning goals. At this point you have completed the learning goals section. The next section is to show how local, state, and federal guidelines are aligned with your learning goals. Remember, you must use goals from local, state, or federal guidelines.
The next section is "Describe the types and levels of your learning goals." Here, discuss the level and type of each learning goal you are using. The level should be consistent with the standards chosen and appropriate for the students' levels.
The last bullet on the TWS says you must identify how your goals are appropriate. Make sure each of these is discussed and subheaded.
Length: 1-2 pages.
Copyright
Gail Burnaford and Tara Brown, Teaching and Learning in 21st Century Learning Environments: A Reader
Editor in Chief, AVP: Steve Wainwright
Sponsoring Editor: Cheryl Cechvala
  • 6. Development Editor: Cheryl Cechvala
Assistant Editor: Amanda Nixon
Senior Editorial Assistant: Nicole Sanchez-Sullivan
Production Editor: Lauren LePera
Senior Product Manager: Peter Galuardi
Cover Design: Jelena Mirkovic Jankovic
Printing Services: Bordeaux
Production Services: Lachina Publishing Services
ePub Development: Lachina Publishing Services
Permission Editor: Karen Ehrmann
Video Production: Ed Tech Productions
Cover Image: Stockbyte/Jupiterimages/Thinkstock
ISBN-10: 1621781496
ISBN-13: 978-1-62178-149-3
Copyright © 2014 Bridgepoint Education, Inc. All rights reserved.
GRANT OF PERMISSION TO PRINT: The copyright owner of this material hereby grants the holder of this publication the right to print these materials for personal use. The holder of this material may print the materials herein for personal use only. Any print, reprint, reproduction or distribution of these materials for commercial use without the express written consent of the copyright owner constitutes a violation of the U.S. Copyright Act, 17 U.S.C. §§ 101-810, as amended.
Preface
Teaching and Learning in 21st Century Learning Environments: A Reader prepares readers to enter the field of education ready to address the needs of 21st-century learners. The book is intended to serve as a bridge between coursework that participants have taken and the ongoing professional development that graduates are encouraged to pursue upon course and program completion.
The text presents excerpts from leading voices in education, providing insight on crucial topics such as differentiation for diverse learners, curriculum and instruction, professional growth and leadership, and skills for digital age learning. The authors integrate theory, research studies, and practical application to provide readers with a set of tools and strategies for continuing to learn and grow in the field of education. Finally, embedded video interviews with practicing educators offer a real-world perspective on important topics.
  • 7. Textbook Features
Teaching and Learning in 21st Century Learning Environments: A Reader includes a number of features to help students understand key concepts:
Voices From the Field feature boxes: Provide personal stories from educators based on real experiences in the field, giving readers a sense of what it really means to be an educator in the 21st century.
Tying It All Together feature boxes: Provide guidance to assist students in synthesizing the information presented within each chapter.
Videos: Provide real-world perspectives from practicing educators on key topics in 21st-century education.
Critical Thinking and Discussion Questions: Are found at the end of each article. These questions prompt students to critically examine the information presented in each excerpt and draw connections to their own experiences.
Accessible Anywhere. Anytime.
With Constellation, faculty and students have full access to eTextbooks at their fingertips. The eTextbooks are instantly accessible on web, mobile, and tablet.
iPad
To download the Constellation iPad app, go to the App Store on your iPad, search for "Constellation for UAGC," and download the free application. You may log in to the iPad application with the same username and password used to access Constellation on the web.
  • 8. NOTE: You will need iOS version 7.0 or higher.
Android Tablet and Phone
To download the Constellation Android app, go to the Google Play Store on your Android device, search for "Constellation for UAGC," and download the free application. You may log in to the Android application with the same username and password used to access Constellation on the web.
NOTE: You will need a tablet or phone running Android version 2.3 (Gingerbread) or higher.
About the Authors
Gail Burnaford
Gail Burnaford holds a Ph.D. in Curriculum and Instruction from Georgia State University, and is currently Professor in the Department of Curriculum, Culture and Educational Inquiry at Florida Atlantic University. Prior to moving to Florida, she directed the Undergraduate Teacher Education and School Partnerships Program at Northwestern University's School of Education and Social Policy.
Dr. Burnaford is the author of four books and numerous articles on topics related to teacher learning, professional development, arts integration, and curriculum design. She has served as Principal Investigator on multiple program evaluations focused on arts integration partnerships, including those funded through the U.S. Department of Education's Professional Development Grants. Dr. Burnaford has acquired eLearning Certification and teaches courses including research in curriculum and instruction, educational policy, documentation and assessment, and curriculum leadership in hybrid, online, and face-to-face learning environments. Her current research focuses on faculty's use of iPads in teaching and the nature/impact of faculty feedback on student work.
  • 9. Acknowledgments
The authors would like to acknowledge the many people who were involved in the development of this text. Special thanks are due to Cheryl Cechvala, sponsoring editor and development editor; Amanda Nixon, assistant editor; Nicole Sanchez-Sullivan, senior editorial assistant; and Lauren LePera, production editor. Thanks also to the following Ashford faculty and advisors for their helpful advice and suggestions: Amy Gray, Stephen Halfaker, Kathleen Lunsford, Andrew Shean, Melissa Phillips, Tony Valley, Gina Warren, and Laurie Wellner.
Finally, the authors would like to thank the following reviewers for their valuable feedback and insight:
Paula Conroy, University of Northern Colorado
Graham Crookes, University of Hawaii
  • 10.
  • 11. Tara Brown
Tara M. Brown is an Assistant Professor of Education at the University of Maryland, College Park. She holds a doctorate degree from the Harvard Graduate School of Education, and is a former secondary classroom teacher in alternative education.
Tara's research focuses on the experiences of low-income adolescents and young adults served by urban schools, particularly as related to disciplinary exclusion and dropout. She specializes in qualitative, community-based, participatory, and action research methodologies. Her most recent research is entitled Uncredentialed: Young Adults Living without a Secondary Degree. This community-based participatory study focuses on the social, educational, and economic causes and implications of school dropout among primarily Latina/o young adults living in a mid-sized, post-industrial city.
Ch 3: Assessment in the 21st Century
3.1 Five Assessment Myths and Their Consequences, by Rick Stiggins
Introduction
  • 12. Rick Stiggins is a well-known consultant and expert in the field of assessment. He founded the Assessment Training Institute, which provides professional development in assessment for teachers and school leaders. He has served on the faculty at Michigan State University, the University of Minnesota, and Lewis and Clark College. Stiggins has also served as director of the American College Testing Program.
Stiggins' article emphasizes the importance of paying attention to assessment at the classroom level. He notes that in the current educational climate, there is huge investment in yearly standardized tests rather than daily assessments that are a part of teaching. Stiggins states, however, that teachers are not well-prepared to assess effectively and have not had much assessment training in their teacher education programs.
Stiggins' article about the myths that drive assessment is especially important because of his attention to students and their role in assessment. He laments that nowhere in the assessment literature over the past 60 years do we find reference to students as "users" and "instructional decision makers." Finally, the author describes the power of assessing for learning rather than relying on grades and test scores to motivate students.
Excerpt
The following is an excerpt from Stiggins, R. (2007). Five assessment myths and their consequences. Education Week, 27(8), 28–29. Reprinted with permission from the author.
America has spent 60 years building layer upon layer of district, state, national, and international assessments at immense cost — and with little evidence that our assessment practices have improved learning. True, testing data have revealed achievement problems. But revealing problems and helping fix them are two entirely different things.
  • 13. As a member of the measurement community, I find this legacy very discouraging. It causes me to reflect deeply on my role and function. Are we helping students and teachers with our assessment practices, or contributing to their problems?
My reflections have brought me to the conclusion that assessment's impact on the improvement of schools has been severely limited by several widespread but erroneous beliefs about what role it ought to play. Here are five of the most problematic of these assessment myths:
Myth 1: The Path to School Improvement Is Paved With Standardized Tests.
Evidence of the strength of this belief is seen in the evolution, intensity, and immense investment in our large-scale testing programs. We have been ranking states on the basis of average college-admission-test scores since the 1950s, comparing schools based on district-wide testing since the 1960s, comparing districts based on state assessments since the 1970s, comparing states based on national assessment since the 1980s, and comparing nations on the basis of international assessments since the 1990s. Have schools improved as a result?
The problem is that once-a-year assessments have never been able to meet the information needs of the decisionmakers who contribute the most to determining the effectiveness of schools: students and teachers, who make such decisions every three to four minutes. The brief history of our investment in testing outlined above includes no reference to day-to-day classroom assessment, which represents 99.9 percent of the assessments in a student's school life. We have almost completely neglected classroom assessment in our obsession with standardized testing. Had we not, our path to school improvement would have been far more productive.
Myth 2: School and Community Leaders Know How to Use Assessment to Improve Schools.
Over the decades, very few educational leaders have been trained to understand what standardized tests measure, how they relate to the local curriculum, what the scores mean, how to use them, or, indeed, whether better instruction can influence scores.
B eyond this, we in the measurement community have narrowed o ur role to maximizing the efficiency and accuracy of high-
  • 14. stakes testing, paying little attention to the day-to- day impact of test scores on teachers or learners in the classroo m. Many in the business community believe that we get better scho ols by comparing them based on annual test scores, and then re warding or punishing them. They do not understand the negative impact on students and teachers in struggling schools that conti nuously lose in such competition. Politicians at all levels believ e that if a little intimidation doesn’t work, a lot of intimidation will, and assessment has been used to increase anxiety. They to o misunderstand the implications for struggling schools and lear ners.Myth 3: Teachers Are Trained to Assess Productively. Teachers can spend a quarter or more of their professional time involved in assessment- related activities. If they assess accurately and use results effect ively, their students can prosper. Administrators, too, use assess ment to make crucial curriculum and resource- allocation decisions that can improve school quality. Given the critically important roles of assessment, it is no surpr ise that Americans believe teachers are thoroughly trained to ass ess accurately and use assessment productively. In fact, teachers typically have not been given the opportunity to learn these thi ngs during preservice preparation or while they are teaching. Th is has been the case for decades. And lest we believe that teache rs can turn to their principals or other district leaders for help in learning about sound assessment practices, let it be known that relevant, helpful assessment training is rarely included in leader ship- preparation programs either.Myth 4: Adult Decisions Drive Sch ool Effectiveness. We assess to inform instructional decisions. Annual tests inform annual decisions made by school leaders. Interim tests used for matively permit faculty teams to fine- tune programs. 
Classroom assessment helps teachers know what comes next in learning, or what grades go on report cards. In al l cases, the assessment results inform the grown-
  • 15. ups who run the system. But there are other data- based instructional decisionmakers present in classrooms whose influence over learning success is greater than that of the adult s. I refer, of course, to students. Nowhere in our 60- year assessment legacy do we find reference to students as asses sment users and instructional decisionmakers. But, in fact, they interpret the feedback we give them to decide whether they have hope of future success, whether the learning is worth the energ y it will take to attain it, and whether to keep trying. If students conclude that there is no hope, it doesn’t matter what the adults decide. Learning stops. The most valid and reliable “high stake s” test, if it causes students to give up in hopelessness, cannot b e regarded as productive. It does more harm than good.Myth 5: Grades and Test Scores Maximize Student Motivation and Learn ing. Most of us grew up in schools that left lots of students behind. By the end of high school, we were ranked based on achievemen t. There were winners and losers. Some rode winning streaks to confident, successful life trajectories, while others failed early a nd often, found recovery increasingly difficult, and ultimately g ave up. After 13 years, a quarter of us had dropped out and the r est were dependably ranked. Schools operated on the belief that if I fail you or threaten to do so, it will cause you to try harder. This was only true for those who felt in control of the success c ontingencies. For the others, chronic failure resulted, and the int imidation minimized their learning. True hopelessness always tr umps pressure to learn. Society has changed the mission of its schools to “leave no chil d behind.” We want all students to meet state standards. This re quires that all students believe they can succeed. Frequent succe ss and infrequent failure must pave the path to optimism. This r epresents a fundamental redefinition of productive assessment d ynamics. 
Classroom-assessment researchers have discovered how to assess for learning to accomplish this. Assessment for learning (as opposed to of learning) has a profoundly positive impact on achievement, especially for struggling learners, as has been verified through rigorous scientific research conducted around the world. But, again, our educators have never been given the opportunity to learn about it.

Sound assessment is not something to be practiced once a year. As we look to the future, we must balance annual, interim or benchmark, and classroom assessment. Only then will we meet the critically important information needs of all instructional decisionmakers. We must build a long-missing foundation of assessment literacy at all levels of the system, so that we know how to assess accurately and use results productively. This will require an unprecedented investment in professional learning both at the preservice and in-service levels for teachers and administrators, and for policymakers as well.

Of greatest importance, however, is that we acknowledge the key role of the learner in the assessment-learning connection. We must begin to use classroom assessment to help all students experience continuous success and come to believe in themselves as learners.

Source: Stiggins, R. (2007). Five assessment myths and their consequences. Education Week 27(8), pp. 28–29. © Rick Stiggins. As first appeared in Education Week, October 16, 2007. Reprinted with permission from the author.

Summary
Stiggins offers five myths regarding assessment. He then suggests the consequences that teachers and leaders face when the educational community apparently believes these myths. The author challenges the myth that standardized testing can be the path to school improvement, noting that classroom assessment has much more power over student learning. He asserts, contrary to popular opinion, that most teachers and leaders do not know how to use assessment data to improve schools, nor are teachers adequately prepared to assess productively.

Educators and the general public appear to believe that grades and test scores motivate student learning, despite the evidence that classroom-based assessment for learning is actually what promotes student success. Finally, Stiggins debunks the myth that adult decisions drive school effectiveness and reminds readers of the role the students themselves play in the process.

Critical Thinking Questions
1. To what degree do you believe students play a pivotal role in school effectiveness as “assessment users” and “instructional decision makers”? How might that role be strengthened for students in schools?
2. How would you evaluate your own assessment knowledge and preparation for teaching and leadership in assessment? How would you characterize the gaps in your knowledge about assessment?
3. Imagine that you are speaking to a group of parents of students in a middle school. Explain how you would assess students daily in order to improve your teaching.
4. Discuss Rick Stiggins’ assertion that school improvement is not informed by standardized test results. What are some of the problems with relying on yearly standardized tests to drive curriculum and teaching in a school?

3.2 Assessment Literacy for Teachers: Faddish or Fundamental? by W. James Popham

Introduction
W. James Popham is an emeritus professor in the graduate school of the University of California, Los Angeles. He is considered one of the premier researchers in the field of assessment and is the founder of IOX Assessment Associates, a research and development organization.

This article introduces the concept of assessment literacy as a fundamental task for professional development in schools, especially in the current context in which teacher preparation assessment programs may be viewed as inadequate. Popham claims that teachers know very little about assessment beyond the administration of traditional tests, and in this piece he describes 13 “must understand” assessment topics for teachers, including the difference between formative and summative assessment tools. He also differentiates between classroom assessments and accountability assessments in terms of their goals and uses by teachers and administrators.

A key concept offered in this article is the idea that assessment approaches that are instructionally sensitive can be directly related to good teaching or, conversely, poor teaching. Popham maintains that teachers need to know the basics of the content area of assessment, including reliability, the three types of validity, types of test items, and the development and scoring of alternative assessments such as portfolios, exhibitions, peer, and self-assessments.

Teachers and leaders also need to be able to interpret standardized test results and use them meaningfully to improve instruction, because they are a key feature of today’s data-driven practice in many schools and districts. Finally, the article reminds readers that assessment of English-language learners and students with disabilities remains an essential content field for all teachers.

Excerpt
The following is an excerpt from Popham, W. J. (2009). Assessment literacy for teachers: faddish or fundamental? Theory Into Practice, 48, 4–11.

In recent years, increasing numbers of professional development programs have dealt with assessment literacy for teachers and/or administrators. Is assessment literacy merely a fashionable focus for today’s professional developers or, in contrast, should it be regarded as a significant area of professional development interest for many years to come? After dividing educators’ measurement-related concerns into either classroom assessments or accountability assessments, it is argued that educators’ inadequate knowledge in either of these arenas can cripple the quality of education. Assessment literacy is seen, therefore, as a sine qua non for today’s competent educator. As such, assessment literacy must be a pivotal content area for current and future staff development endeavors. Thirteen must-understand topics are set forth for consideration by those who design and deliver assessment literacy programs. Until preservice teacher education programs begin producing assessment-literate teachers, professional developers must continue to rectify this omission in educators’ professional capabilities.

For the past several years, assessment literacy has been increasingly touted as a fitting focus for teachers’ professional development programs. The sort of assessment literacy that is typically recommended refers to a teacher’s familiarity with those measurement basics related directly to what goes on in classrooms. Given today’s ubiquitous, externally imposed scrutiny of schools, we can readily understand why assessment literacy might be regarded as a likely target for teachers’ professional development. Yet, is assessment literacy a legitimate focus for teachers’ professional development programs or, instead, is it a fashionable but soon forgettable fad?

The Consequences of Omission
Many of today’s teachers know little about educational assessment. For some teachers, test is a four-letter word, both literally and figuratively. The gaping gap in teachers’ assessment-related knowledge is all too understandable. The most obvious explanation is, in this instance, the correct explanation.
Regrettably, when most of today’s teachers completed their teacher-education programs, there was no requirement that they learn anything about educational assessment. For these teachers, their only exposure to the concepts and practices of educational assessment might have been a few sessions in their educational psychology classes or, perhaps, a unit in a methods class (La Marca, 2006; Stiggins, 2006).

Thus, many teachers in previous years usually arrived at their first teaching assignment quite bereft of any fundamental understanding of educational measurement. Happily, in recent years we have seen the emergence of increased preservice requirements that offer teacher education candidates greater insights regarding educational assessment. Accordingly, in a decade or two, the assessment literacy of the nation’s teaching force is bound to be substantially stronger. But for now, it must be professional development—completed subsequent to teacher education—that will supply the nation’s teachers with the assessment-related skills and knowledge they need.

* * *

A Quick Content Dip
Professional development programs focused on assessment literacy need to be tailored. Such a program designed for school administrators is likely to be similar to an assessment-literacy program for teachers, in the sense that many of the topics to be treated would be essentially identical, but some salient content differences would—and should—exist. To conclude this analysis, I would like to lay out the content that should be addressed—in a real-world, practical manner rather than an esoteric, theoretical fashion—during an assessment-literacy professional development program for teachers. This will only be a brief listing of potential content, but those who are interested in a closer look at possible content for such programs will find more detailed treatments of potential emphases in the list of references.

Those considering what to include in an assessment literacy professional development program for teachers should seriously consider focusing on a set of target skills and knowledge dealing with the following content:

1. The fundamental function of educational assessment, namely, the collection of evidence from which inferences can be made about students’ skills, knowledge, and affect. A common misconception among educators is to reify test scores, as though such scores are the true target of an educator’s concern. In reality, the only reason we test our students is in order to collect evidence regarding what we cannot see—understanding, skill development, and so on. Almost all of our educational goals are aimed at unseeable skills and knowledge. We cannot tell how much history a student knows just by looking at that student. Thus, we must rely on students’ overt test performances to produce evidence so we can arrive at defensible inferences about students’ covert skills and knowledge.

2. Reliability of educational assessments, especially the three forms in which consistency evidence is reported for groups of test-takers (stability, alternate-form, and internal consistency) and how to gauge consistency of assessment for individual test-takers. Many educators place absolutely unwarranted confidence in the accuracy of educational tests, especially those high-stakes tests created by well-established testing companies. When educators grasp the nature of measurement error, and realize the myriad factors that can trigger inconsistency in a student’s test performances, those educators will regard with proper caution the imprecision of the results obtained on even some of our most time-honored assessment instruments.

3. The prominent role three types of validity evidence should play in the building of arguments to support the accuracy of test-based interpretations about students, namely, content-related, criterion-related, and construct-related evidence. Anytime an educator utters the phrase a valid test, that educator is—at least technically—in error. It is not a test that is valid or invalid. Rather, it is the inference we base on a test-taker’s score whose validity is at issue. Moreover, the types of validity evidence we collect are fundamentally different. As a consequence, for example, classroom teachers need to know that the chief kind of validity evidence they need to attend to should be content-related.

4. How to identify and eliminate assessment bias that offends or unfairly penalizes test-takers because of personal characteristics such as race, gender, or socioeconomic status. During the past two decades, the measurement community has devised both judgmental and empirical ways of dramatically reducing the amount of assessment bias in our large-scale educational tests. Classroom teachers need to know how to identify and eliminate bias in their own teacher-made tests.

5. Construction and improvement of selected-response and constructed-response test items. Through the years, measurement specialists have been assembling a collection of guidelines regarding how to create wonderful, rather than wretched, test items. Moreover, once a set of test items has been constructed, there are easily used procedures available for making those items even better. Educators who generate tests need to be conversant with the creation and honing of test items.

6. Scoring of students’ responses to constructed-response test items, especially the distinctive contribution made by well-formed rubrics. Although constructed-response test items such as essay and short answer items often provide particularly illuminating evidence about students’ skills and knowledge, the scoring of students’ responses to such items often goes haywire because of loose judgmental procedures. Teachers need to know how to create and use rubrics, that is, scoring guides, so students’ performances on constructed-response items can be accurately appraised.

7. Development and scoring of performance assessments, portfolio assessments, exhibitions, peer assessments, and self-assessments. Gone are the days when teachers only had to know how to score tests by distinguishing between a circled T or F for students’ answers to true–false items. Given the current use of assessment procedures calling for students to respond in dramatically diverse ways, today’s teachers need to learn how to generate and perhaps score a considerable variety of assessment strategies.

8. Designing and implementing formative assessment procedures consonant with both research evidence and experience-based insights regarding such procedures’ likely success. Formative assessment is a process, not a particular type of test. Because there is now substantial evidence at hand that properly employed formative assessment can meaningfully boost students’ achievement (Black & Wiliam, 1998a), today’s educators need to understand the innards of this potent classroom process.

9. How to collect and interpret evidence of students’ attitudes, interests, and values. When considering the importance of students’ acquisition of cognitive versus affective outcomes, it could be argued that inattention to students’ attitudes, interests, and values can have a lasting, negative impact on those students. Teachers, therefore, should at least learn how to assess their students’ affect so that, if those teachers choose to do so, they can get an accurate fix on their students’ affective dispositions.

10. Interpreting students’ performances on large-scale, standardized achievement and aptitude assessments. Because students’ performances are of interest to both teachers and students’ parents, teachers must understand the most widely used techniques for reporting students’ scores on today’s oft-administered standardized examinations, including, for example, what is meant by a scale score.

11. Assessing English Language Learners and students with disabilities. Although most of the measurement concepts that educators need to understand will apply across the board to all types of students, there are special assessment issues associated with students whose first language is not English and for students with disabilities. Because today’s educators have been adjured to attend to such students with more care than was seen in the past, it is important for all teachers to become conversant with the assessment procedures most suitable for these subgroups of students.

12. How to appropriately (and not inappropriately) prepare students for high-stakes tests. Given the pressures on educators to have their students shine on state and, sometimes, district accountability tests, there have been reports of test-preparation practices that are patently inappropriate. In many instances, such unsound practices arise simply because teachers had not devoted attention to the question of how students should and should not be readied for important tests. They should be prepared to do so.

13. How to determine the appropriateness of an accountability test for use in evaluating the quality of instruction. It is not safe to assume that, because an accountability test has been officially adopted in a state, this test is suitable for evaluating schools. More than ever before, educators need to understand what makes a test suitable for appraising the quality of instruction.

All but a few of these 13 content recommendations are applicable to both classroom assessments and accountability assessments. The recommendations regarding the determination of an accountability test’s evaluative appropriateness and interpreting students’ performances on large-scale, standardized tests, of course, refer only to accountability assessments. Conversely, the recommendation regarding learning about formative assessment procedures clearly deals with classroom assessments rather than accountability assessments.
Beyond those dissimilarities, however, a professional development program aimed at the promotion of teachers’ assessment literacy should show how the bulk of the content recommended here has clear relevance to both classroom assessments and accountability assessments.

Of particular merit these days is the use of professional learning communities as an adjunct to, or in place of, more traditional professional development activities. Such communities consist of small groups of teachers and/or administrators who meet periodically over an extended period of time, for instance, one or more school years, to focus on topics such as those identified above. If such a group consists exclusively of teachers, then it is typically referred to as a teacher learning community. If administrators are involved, then the label professional learning community is usually affixed. Given access to at least some written or electronic materials as a backdrop (e.g., Popham, 2006, which is available gratis to such learning communities), collections of educators with similar interest can prove to be remarkably effective in helping educators acquire significant new insights.

Fad-Free Focus?
The presenting question that initiated this analysis was whether professional development programs aimed at enhancing teachers’ assessment literacy were warranted, either in the short-term or long-term. I identified two sets of teachers’ assessment-related decisions that could be illuminated by such programs, namely, those decisions related to classroom assessments and those decisions related to accountability assessments. Although, at the current time, teachers are surely faced with assessment-dependent choices stemming from both of these sorts of assessments, will both types of assessments be with us over the long haul?

The answer to that question is, in my view, an emphatic Yes. With regard to classroom assessments, the influential work of Black and Wiliam (1998a, 1998b) lends powerful empirical support attesting to the learning dividends of instructionally oriented classroom assessment.
When classroom assessments are conceived as assessments for learning, rather than assessments of learning, students will learn better what their teacher wants them to learn. Not only is the evidence supporting such a formative approach to classroom assessment demonstrably effective, but there are—happily—diverse ways to implement an instructionally oriented approach to classroom assessment. As the two British researchers point out:

The range of conditions and contexts under which studies have shown that gains can be achieved must indicate that the principles that underlie achievement of substantial improvements in learning are robust. Significant gains can be achieved by many different routes, and initiatives here are not likely to fail through neglect of delicate and subtle features. (Black & Wiliam, 1998a, pp. 61–62)

It appears, then, that teachers who want to be optimally effective ought to be learning about the essentials of classroom assessment for a long while to come.

Turning to accountability assessment, there seems little reason to believe that the demand for test-based evidence of teachers’ effectiveness will evaporate—ever. Accountability pressure on educators springs from taxpayers’ doubts that their public schools are as effective as they ought to be. It will take decades of consistent educational success stories before the public is disabused of its skeptical regard for public schools. Even if the public were ever to relax its demands for educational accountability evidence, thoughtful educators still ought to insist on the collection of such evidence. That is the kind of requirement that any self-respecting profession ought to impose on itself.

Thus, it seems that assessment literacy is a commodity needed by teachers for their own long-term well-being, and for the educational well-being of their students. For the foreseeable future, teachers are likely to exist in an environment where test-elicited evidence plays a prominent instructional and evaluative role. In such environments, those who control the tests tend to control the entire enterprise. Until preservice teacher educators routinely provide meaningful assessment literacy for prospective teachers, the architects of professional development programs will need to offer assessment-literacy programs. We can only hope they do it well.

References
Black, P., & Wiliam, D. (1998a). Assessment and classroom learning. Assessment in Education: Principles, Policy, and Practice, 5(1), 7–73.
Black, P., & Wiliam, D. (1998b). Inside the black box: Raising standards through classroom assessment. Phi Delta Kappan, 80(2), 139–148.
La Marca, P. (2006). Assessment literacy: Building capacity for improving student learning. Paper presented at the National Conference on Large-Scale Assessment, Council of Chief State School Officers, San Francisco, CA.
Popham, W. J. (2006). Mastering assessment: A self-service system for educators. New York: Routledge.
Stiggins, R. J. (2006). Assessment for learning: A key to student motivation and learning. Phi Delta Kappa Edge, 2(2), 1–19.

Source: Popham, W. J. (2009). Assessment Literacy for Teachers: Faddish or Fundamental? Theory Into Practice 48: 4–11. Taylor and Francis. Copyright © 2009 Routledge.

Summary
Popham’s article presents a range of assessment topics that teachers and leaders should be knowledgeable about; he terms competence in these content areas as “assessment literacy” and asserts that professional development in school districts should focus explicitly on these areas in order to improve schools and enhance student learning.

The author asserts that the word assessment, for most teachers, is synonymous with the word test. He poses the critical question, “What kinds of assessments do teachers most need to understand?” and responds with a list of 13 topics.

The article suggests that teachers and leaders need to be able not only to apply meaningful and varied assessments but also to understand and be “literate” in the field of assessment itself. The author claims that standardized testing in the United States tends to be “instructionally insensitive,” meaning that the results have little or no relationship to how well students are taught.

Finally, the author challenges professional development leaders to consider how to embed these important concepts and practices into ongoing teacher learning venues in schools, and he mentions professional learning communities (PLCs) as a promising approach.

Critical Thinking Questions
1. Design a year of PLC meetings in which teachers engage in conscious assessment literacy learning. What would such meetings look like? How would teachers engage with each other in learning more about assessment in PLCs?
2. Popham writes that school administrators need assessment literacy training that is, in some ways, like the professional development needed by teachers. He then mentions that there would be some differences in terms of what administrators need to know. What might those differences be?
3. One of the 13 “must understand” topics refers to eliminating assessments that offend or penalize students because of race, gender, or socioeconomic status. Discuss this topic in terms of your experience and the students you have encountered. How might schools and teachers work toward bias-free assessment?
4. This article briefly refers to the need for teachers to assess students’ affect, that is, their attitudes, interests, and values. Why is this important, and how might teachers do this as part of their practice?
5. What is your overall impression of this article and the author’s presentation of the tenets of assessment literacy?
3.3 Seven Keys to Effective Feedback, by Grant Wiggins

Introduction
Grant Wiggins has been a central contributor to the field of assessment in the last 25 years, due in part to his landmark book, Educative Assessment: Designing Assessments to Inform and Improve Student Performance, as well as his work with Jay McTighe. Wiggins and coauthor McTighe have written many books and articles focused on backward design for curriculum and assessment. Used in hundreds of school districts around the country, backward design is a process of planning curriculum from the goals or aims “backwards.”

This article directs readers’ attention to feedback as a means of providing learners with information about how they are doing in their efforts to reach a specific goal. Wiggins is clear about the need for a goal in order for feedback to be meaningful to learners. The author also asserts that feedback is not evaluative or judgmental, nor is it advice-driven. Effective feedback is user-friendly, timely, ongoing and consistent.

Wiggins also calls attention to the responsibilities of the learner to be open to and use feedback. He writes: “If I am not clear on my goals or if I fail to pay attention to them, I cannot get helpful feedback” (p. 18). Finally, Wiggins explains that research shows the power of teaching less in order to provide more feedback. A careful consideration of this concept may be the essential next step in improving assessment practices.

Excerpt
The following is an excerpt from Wiggins, G. (2012). 7 keys to effective feedback. Educational Leadership, 70(1), 10–19.

Who would dispute the idea that feedback is a good thing? Both common sense and research make it clear: Formative assessment, consisting of lots of feedback and opportunities to use that feedback, enhances performance and achievement.
Yet even John Hattie (2008), whose decades of research revealed that feedback is among the most powerful influences on achievement, acknowledges that he has “struggled to understand the concept” (p. 173). And many writings on the subject don’t even attempt to define the term. To improve formative assessment practices among both teachers and assessment designers, we need to look more closely at just what feedback is—and isn’t.

What Is Feedback, Anyway?
The term feedback is often used to describe all kinds of comments made after the fact, including advice, praise, and evaluation. But none of these are feedback, strictly speaking.

Basically, feedback is information about how we are doing in our efforts to reach a goal. I hit a tennis ball with the goal of keeping it in the court, and I see where it lands—in or out. I tell a joke with the goal of making people laugh, and I observe the audience’s reaction—they laugh loudly or barely snicker. I teach a lesson with the goal of engaging students, and I see that some students have their eyes riveted on me while others are nodding off.

Here are some other examples of feedback:
· A friend tells me, “You know, when you put it that way and speak in that softer tone of voice, it makes me feel better.”
· A reader comments on my short story, “The first few paragraphs kept my full attention. The scene painted was vivid and interesting. But then the dialogue became hard to follow; as a reader, I was confused about who was talking, and the sequence of actions was puzzling, so I became less engaged.”
· A baseball coach tells me, “Each time you swung and missed, you raised your head as you swung so you didn’t really have your eye on the ball. On the one you hit hard, you kept your head down and saw the ball.”

Note the difference between these three examples and the first three I cited—the tennis stroke, the joke, and the student responses to teaching. In the first group, I only had to take note of the tangible effect of my actions, keeping my goals in mind. No one volunteered feedback, but there was still plenty of feedback to get and use. The second group of examples all involved the deliberate, explicit giving of feedback by other people.

Whether the feedback was in the observable effects or from other people, in every case the information received was not advice, nor was the performance evaluated. No one told me as a performer what to do differently or how “good” or “bad” my results were. (You might think that the reader of my writing was judging my work, but look at the words used again: She simply played back the effect my writing had on her as a reader.) Nor did any of the three people tell me what to do (which is what many people erroneously think feedback is—advice). Guidance would be premature; I first need to receive feedback on what I did or didn’t do that would warrant such advice.

In all six cases, information was conveyed about the effects of my actions as related to a goal. The information did not include value judgments or recommendations on how to improve.

Decades of education research support the idea that by teaching less and providing more feedback, we can produce greater learning (see Bransford, Brown, & Cocking, 2000; Hattie, 2008; Marzano, Pickering, & Pollock, 2001). Compare the typical lecture-driven course, which often produces less-than-optimal learning, with the peer instruction model developed by Eric Mazur (2009) at Harvard. He hardly lectures at all to his 200 introductory physics students; instead, he gives them problems to think about individually and then discuss in small groups. This system, he writes, “provides frequent and continuous feedback (to both the students and the instructor) about the level of understanding of the subject being discussed” (p. 51), producing gains in both conceptual understanding of the subject and problem-solving skills. Less “teaching,” more feedback equals better results.

Feedback Essentials
Whether feedback is just there to be grasped or is provided by another person, helpful feedback is goal-referenced; tangible and transparent; actionable; user-friendly (specific and personalized); timely; ongoing; and consistent.

Goal-Referenced
Effective feedback requires that a person has a goal, takes action to achieve the goal, and receives goal-related information about his or her actions. I told a joke—why? To make people laugh. I wrote a story to engage the reader with vivid language and believable dialogue that captures the characters’ feelings. I went up to bat to get a hit. If I am not clear on my goals or if I fail to pay attention to them, I cannot get helpful feedback (nor am I likely to achieve my goals).

Information becomes feedback if, and only if, I am trying to cause something and the information tells me whether I am on track or need to change course. If some joke or aspect of my writing isn’t working—a revealing, nonjudgmental phrase—I need to know.

Note that in everyday situations, goals are often implicit, although fairly obvious to everyone. I don’t need to announce when telling the joke that my aim is to make you laugh. But in school, learners are often unclear about the specific goal of a task or lesson, so it is crucial to remind them about the goal and the criteria by which they should self-assess. For example, a teacher might say,
· The point of this writing task is for you to make readers laugh. So, when rereading your draft or getting feedback from peers, ask, how funny is this? Where might it be funnier?
· As you prepare a table poster to display the findings of your science project, remember that the aim is to interest people in your work as well as to describe the facts you discovered through your experiment. Self-assess your work against those two criteria using these rubrics. The science fair judges will do likewise.

Tangible and Transparent
Any useful feedback system involves not only a clear goal, but also tangible results related to the goal. People laugh, chuckle, or don't laugh at each joke; students are highly attentive, somewhat attentive, or inattentive to my teaching. Even as little children, we learn from such tangible feedback. That's how we learn to walk; to hold a spoon; and to understand that certain words magically yield food, drink, or a change of clothes from big people. The best feedback is so tangible that anyone who has a goal can learn from it. Alas, far too much instructional feedback is opaque, as revealed in a true story a teacher told me years ago. A student came up to her at year's end and said, "Miss Jones, you kept writing this same word on my English papers all year, and I still don't know what it means." "What's the word?" she asked. "Vag-oo," he said. (The word was vague!) Sometimes, even when the information is tangible and transparent, the performers don't obtain it—either because they don't look for it or because they are too busy performing to focus on the effects. In sports, novice tennis players or batters often don't realize that they're taking their eyes off the ball; they often protest, in fact, when that feedback is given. (Constantly yelling "Keep your eye on the ball!" rarely works.) And we have all seen how new teachers are sometimes so busy concentrating on "teaching" that they fail to notice that few students are listening or learning. That's why, in addition to feedback from coaches or other able observers, video or audio recordings can help us perceive things that we may not perceive as we perform; and by extension, such recordings help us learn to look for difficult-to-perceive but vital information. I recommend that all teachers videotape their own classes at least once a month. It was a transformative experience for me when I did it as a beginning teacher.
Concepts that had been crystal clear to me when I was teaching seemed opaque and downright confusing on tape—captured also in the many quizzical looks of my students, which I had missed in the moment.
Actionable

Effective feedback is concrete, specific, and useful; it provides actionable information. Thus, "Good job!" and "You did that wrong" and B+ are not feedback at all. We can easily imagine the learners asking themselves in response to these comments, what specifically should I do more or less of next time, based on this information? No idea. They don't know what was "good" or "wrong" about what they did. Actionable feedback must also be accepted by the performer. Many so-called feedback situations lead to arguments because the givers are not sufficiently descriptive; they jump to an inference from the data instead of simply presenting the data. For example, a supervisor may make the unfortunate but common mistake of stating that "many students were bored in class." That's a judgment, not an observation. It would have been far more useful and less debatable had the supervisor said something like, "I counted ongoing inattentive behaviors in 12 of the 25 students once the lecture was underway. The behaviors included texting under desks, passing notes, and making eye contact with other students. However, after the small-group exercise began, I saw such behavior in only one student." Such care in offering neutral, goal-related facts is the whole point of the clinical supervision of teaching and of good coaching more generally. Effective supervisors and coaches work hard to carefully observe and comment on what they observed, based on a clear statement of goals. That's why I always ask when visiting a class, "What would you like me to look for and perhaps count?" In my experience as a teacher of teachers, I have always found such pure feedback to be accepted and welcomed. Effective coaches also know that in complex performance situations, actionable feedback about what went right is as important as feedback about what didn't work.
User-Friendly

Even if feedback is specific and accurate in the eyes of experts or bystanders, it is not of much value if the user cannot understand it or is overwhelmed by it. Highly technical feedback will seem odd and confusing to a novice. Describing a baseball swing to a 6-year-old in terms of torque and other physics concepts will not likely yield a better hitter. Too much feedback is also counterproductive; better to help the performer concentrate on only one or two key elements of performance than to create a buzz of information coming in from all sides. Expert coaches uniformly avoid overloading performers with too much or too technical information. They tell the performers one important thing they noticed that, if changed, will likely yield immediate and noticeable improvement ("I was confused about who was talking in the dialogue you wrote in this paragraph"). They don't offer advice until they make sure the performer understands the importance of what they saw.

Timely

In most cases, the sooner I get feedback, the better. I don't want to wait for hours or days to find out whether my students were attentive and whether they learned, or which part of my written story works and which part doesn't. I say "in most cases" to allow for situations like playing a piano piece in a recital. I don't want my teacher or the audience barking out feedback as I perform. That's why it is more precise to say that good feedback is "timely" rather than "immediate." A great problem in education, however, is untimely feedback. Vital feedback on key performances often comes days, weeks, or even months after the performance—think of writing and handing in papers or getting back results on standardized tests. As educators, we should work overtime to figure out ways to ensure that students get more timely feedback and opportunities to use it while the attempt and effects are still fresh in their minds. Before you say that this is impossible, remember that feedback does not need to come only from the teacher or even from people at all. Technology is one powerful tool—part of the power of computer-
assisted learning is unlimited, timely feedback and opportunities to use it. Peer review is another strategy for managing the load to ensure lots of timely feedback; it's essential, however, to train students to do small-group peer review to high standards, without immature criticisms or unhelpful praise.

Ongoing

Adjusting our performance depends on not only receiving feedback but also having opportunities to use it. What makes any assessment in education formative is not merely that it precedes summative assessments, but that the performer has opportunities, if results are less than optimal, to reshape the performance to better achieve the goal. In summative assessment, the feedback comes too late; the performance is over. Thus, the more feedback I can receive in real time, the better my ultimate performance will be. This is how all highly successful computer games work. If you play Angry Birds, Halo, Guitar Hero, or Tetris, you know that the key to substantial improvement is that the feedback is both timely and ongoing. When you fail, you can immediately start over—sometimes even right where you left off—to get another opportunity to receive and learn from the feedback. (This powerful feedback loop is also user-friendly. Games are built to reflect and adapt to our changing need, pace, and ability to process information.) It is telling, too, that performers are often judged on their ability to adjust in light of feedback. The ability to quickly adapt one's performance is a mark of all great achievers and problem solvers in a wide array of fields. Or, as many little league coaches say, "The problem is not making errors; you will all miss many balls in the field, and that's part of learning. The problem is when you don't learn from the errors."

Consistent

To be useful, feedback must be consistent.
Clearly, performers can only adjust their performance successfully if the information fed back to them is stable, accurate, and trustworthy. In education, that means teachers have to be on the same page about what high-quality work is. Teachers need to look at student work together, becoming more consistent over time and formalizing their judgments in highly descriptive rubrics supported by anchor products and performances. By extension, if we want student-to-student feedback to be more helpful, students have to be trained to be consistent the same way we train teachers, using the same exemplars and rubrics.

Progress Toward a Goal

In light of these key characteristics of helpful feedback, how can schools most effectively use feedback as part of a system of formative assessment? The key is to gear feedback to long-term goals. Let's look at how this works in sports. My daughter runs the mile in track. At the end of each lap in races and practice races, the coaches yell out split times (the times for each lap) and bits of feedback ("You're not swinging your arms!" "You're on pace for 5:15"), followed by advice ("Pick it up—you need to take two seconds off this next lap to get in under 5:10!"). My daughter and her teammates are getting feedback (and advice) about how they are performing now compared with their final desired time. My daughter's goal is to run a 5:00 mile. She has already run 5:09. Her coach is telling her that at the pace she just ran in the first lap, she is unlikely even to meet her best time so far this season, never mind her long-term goal. Then, he tells her something descriptive about her current performance (she's not swinging her arms) and gives her a brief piece of concrete advice (take two seconds off the next lap) to make achievement of the goal more likely. The ability to improve one's result depends on the ability to adjust one's pace in light of ongoing feedback that measures performance against a concrete, long-term goal. But this isn't what most school district "pacing guides" and grades on "formative" tests tell you.
They yield a grade against recent objectives taught, not useful feedback against the final performance standards. Instead of informing teachers and students at an interim date whether they are on track to achieve a desired level of student performance by the end of the school year, the guide and the test grade just provide a schedule for the teacher to follow in delivering content and a grade on that content. It's as if at the end of the first lap of the mile race, my daughter's coach simply yelled out, "B+ on that lap!" The advice for how to change this sad situation should be clear: Score student work in the fall and winter against spring standards, use more pre- and post-assessments to measure progress toward these standards, and do the item analysis to note what each student needs to work on for better future performance.

"But There's No Time!"

Although the universal teacher lament that there's no time for such feedback is understandable, remember that "no time to give and use feedback" actually means "no time to cause learning." As we have seen, research shows that less teaching plus more feedback is the key to achieving greater learning. And there are numerous ways—through technology, peers, and other teachers—that students can get the feedback they need.

References

Bransford, J. D., Brown, A. L., & Cocking, R. R. (Eds.). (2000). How people learn: Brain, mind, experience, and school. Washington, DC: National Academy Press.

Hattie, J. (2008). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. New York: Routledge.

Marzano, R., Pickering, D., & Pollock, J. (2001). Classroom instruction that works: Research-based strategies for increasing student achievement. Alexandria, VA: ASCD.

Mazur, E. (2009, January 2). Farewell, lecture? Science, 323, 50–51.

Source: Wiggins, G. (2012). 7 keys to effective feedback. Educational Leadership, 70(1), 10–19. Alexandria, VA: Association for Supervision and Curriculum Development. Copyright © Grant Wiggins.

Summary

Wiggins calls for feedback to be stable, accurate, and trustworthy. He highlights the difference between feedback, evaluation, and grading, implicitly challenging teachers to expand their repertoire to include all three processes on a regular basis. Wiggins also calls for frequent feedback, claiming that the more feedback students receive, the more learning will occur. He concludes the article by acknowledging the difficulty of finding the time to provide such feedback in today's classrooms; he suggests that teachers consider teaching less and providing more feedback through technology, peers, and other educators. If the goal is to enhance and improve learning, then time providing direct feedback is well spent. Wiggins also proposes more pre- and post-assessments, more item analysis on tests in which students are provided specific information about their errors, and more early practice testing (i.e., in the fall for spring tests) that could provide individualized feedback as part of classroom practice.

Critical Thinking Questions

1. What do you think about the concept of teaching less in order to provide more feedback? What might that look like in today's classrooms, whether face to face or online?

2. Providing feedback that actually contributes to learning is not easy and is not a skill that educators necessarily learn through preservice teacher education. How do teachers learn to provide feedback that is useful?

3. Wiggins claims that feedback is not the same as evaluation. Yet, feedback can be part of a formative assessment process that does provide information to learners before it is too late. When should evaluation or judgment be avoided, and when is it important to give evaluative comments that help students learn from their mistakes?

4. Design a research study in which you and your colleagues would examine feedback to students provided online. Determine how you would explore the connections between feedback provided and subsequent student work improvement.

3.4 Feedback and Feed Forward, by Nancy Frey and Doug Fisher

Introduction

Nancy Frey and Doug Fisher are both professors of educational leadership at San Diego State University. They are the founders of Literacy for Life and have written and presented about reading, collaborative learning, and, most recently, the common core English language arts standards in PLCs. They are also the authors of the 2011 text, The Formative Assessment Action Plan: Practical Steps to More Successful Teaching and Learning. The evocative title of this article indicates a new perspective on what happens after teachers provide feedback to individual students. Frey and Fisher propose that it is not enough to monitor at the individual level; rather, teachers need to look for patterns across students' work in order to design interventions and targeted teaching approaches to address group needs. Frey and Fisher make the connection between feedback, assessment, and "feeding forward" to inform instruction. In their view, any one of these practices is incomplete without the other two. The authors also discuss the issue of the focus of feedback, noting that feedback about the assigned task is the most familiar to teachers and students. Other types of feedback, from the work of Hattie and Timperley (2007), include feedback about the process, about self-regulation, and about "the self as a person" (p. 90).

Excerpt

The following is an excerpt from Frey, N., & Fisher, D. (2011). Feedback and feed forward. Principal Leadership, 11(9), 90–93.
Internet searches often yield surprising results. In preparation for writing this column, we searched one of our favorite sayings: "You can't fatten sheep by weighing them." One of the results was an article from the April 1908 issue of the Farm Journal on early spring lambs. Among the advice to sheep farmers was to take care in apportioning their rations so as not to overfeed, to provide healthy living conditions so they can grow, and to take careful measure of their progress—and this piece of wisdom: "Study your sheep and know them not only as a flock but separately, and remember that they have an individuality as surely as your horse or cow" (Brick, 1908, p. 154). Students are not sheep, of course, but our role as cultivators of young people has much in common with that of livestock farmers. As educators, we recognize the importance of a healthy learning climate and seek to create one each day. In addition, we apportion information so that students can act upon their growing knowledge. And we measure their progress regularly to see whether they are making expected gains. As part of effective practice, teachers routinely check for understanding through the learning process. This is most commonly accomplished by asking questions, analyzing tasks, and administering low-stakes quizzes to measure the extent to which students are acquiring new information and skills. But it's one thing to gather information (we're good at that); it's another thing to respond in meaningful ways and then plan for subsequent instruction. Without processes to provide students with solid feedback that yields deeper understanding, checking for understanding devolves into a game of "guess what's in the teacher's brain." And without ways to look for patterns across students, formative assessments become a frustrating academic exercise.
Knowing both the flock and the individuals in it are essential practices for cultivating learning.

Knowing the Individual: Effective Feedback

Most of us have received poor feedback: The teacher who scrawled "rewrite this" in the margin of an essay we wrote. The coach who said, "No, you're doing it wrong; keep practicing." The coworker who took over a task and did it for us when our progress stalled. The frustration on the learner's part matches that felt by the teacher, the coach, or the coworker: why can't he or she get this? That shared vexation produces a mutual sense of defeat. On the part of the learner, the internal dialogue becomes, "I can't do this." The teacher thinks, "I can't teach this." Over time, blame sets in, and the student and the teacher begin to find fault with each other. Hattie and Timperley (2007) wrote about feedback across four dimensions: "Feedback about the task (FT), about the processing of the task (FP), about self-regulation (FR), and about the self as a person (FS)" (p. 90). For example, "You need to put a semicolon in this sentence" (FT) has limited usefulness and is not usually generalized to other tasks. On the other hand, "Make sure that your sentences have noun-verb agreements because it's going to make it easier for the reader to understand your argument" (FP) gives feedback information about a writing convention necessary in all essays. The researchers go on to note that feedback that moves from information about the process to information about self-regulation is the best of all: "Try reading some of your sentences aloud so you can hear when you have and don't have noun-verb agreement." The researchers go on to say that FS ("You're a good writer") is the least useful, even when it is positive in nature, because it doesn't add anything to one's learning. Done carefully, FT can have a modest amount of usefulness, as when editing a paper. Yet feedback about the task is by far the most common kind we offer. The problem is that the task offers only end-game analysis and leaves the learner with little direction on what to do, particularly when there isn't any recourse to make changes.
Most writing teachers will tell you that it is not uncommon for students to engage in limited revision, confined to the specific items listed in the teacher feedback—more recopying than revising. But feedback about the processes
used in the task and further advice about one's self-regulatory strategies to make revisions can leave the learner with a plan for next steps. Consider the dialogue between English teacher John Goodwin and Alicia, a student in his class. Alicia has drafted an essay on bullying, and Goodwin is providing feedback about her work. Careful to frame his feedback so that it can result in a plan for revision, he draws her attention to her thesis statement and says, "It's helpful for writers to go back to the main point of the essay and read to see if the evidence is there. I highlight in yellow so I can see if I've done that." The two of them reread her first three paragraphs and highlight where she has provided national statistics and direct quotes from teachers she knows. Goodwin goes on to say, "Now what I want you to do is look for ways you've provided supporting evidence, like citing sources. Let's highlight those in green." Alicia quickly notices that while she has made claims, she hasn't capitalized on any authoritative sources. And by confining her direct quotes to teachers at her school, she has limited the impact of her essay by failing to quote more widely known sources. The little bit of green on her essay illustrates what she needs to do next: strengthen her sources. Goodwin ends the conversation by saying, "It sounds like you have a plan for revising the content. Let's meet again on Wednesday and you can update me on your progress." Feedback of this kind takes only a few minutes, yet it can add up in a crowded classroom. For this reason, many teachers rely on written forms of feedback instead of direct conversations. Even in written form, the guidelines about feedback remain the same: focus on the processes needed for the task, move to information about behaviors within the student's influence to make changes, and steer clear of comments that are either too global or too minute to be of much use.
Wiggins (1998) advises constructing written feedback so that it meets four important criteria: first, it must be timely so that it is paired as closely as possible with the attempt; second, it should be specific in nature; third, it should be written in a manner that is understandable to the student; and fourth, it should be actionable so that the learner can make revisions.

Knowing the Flock: Feed Forward

Although feedback is primarily at the individual level, feed forward describes the process of making instructional decisions about what should happen next (Frey & Fisher, in press). Data about student progress is commonly gathered using common formative assessments—either commercially produced or made by the teacher. In addition, many school teams engage in consensus scoring with colleagues to calibrate practices, especially with tasks that have a significant qualitative component, such as writing (Fisher, Frey, Farnan, Fearn, & Petersen, 2004). Lack of time to work with other colleagues can limit these practices, however. The good news is that a teacher's own classroom can serve as the unit of analysis as well. With all the solid feedback provided to students, it seems natural to take it one step further by recording results and some pattern analysis. For example, mathematics teacher Ben Teichman keeps track of student progress across several dimensions of instruction. As he provides written or verbal feedback to his students, he notes which skills they have mastered and which ones are still proving difficult for them. His error analysis record sheet enables him to make decisions about who needs reteaching and when it needs to occur (see Figure 3.1). "All the feedback in the world isn't going to do much good if what they really need is more instruction," said Teichman, an insight Hattie and Timperley (2007) share.

Figure 3.1: Error analysis sheet in Algebra II: Introduction to complex numbers. Teachers can use an error analysis sheet to record the initials of students who have not mastered instructional goals.

Unlike a checklist to track mastery, Teichman's error analysis sheet is used to identify the students who are struggling.
He logs the initials of students in each period who are still having difficulty with major concepts after initial instruction, then makes decisions about follow-up and reteaching. For example, the error analysis sheet shows that all of his classes are still having difficulty with understanding the relationship between different forms of representing imaginary numbers. That tells him that reteaching to the whole group is in order. On the other hand, smaller groups of students are having trouble with other concepts. "I need to pull those students into small groups, because the majority of the class is doing fine otherwise," he said. Fourth period is another story. "I've got lots of students all across the board who are struggling with this whole unit," he said. "Time for me to take a few steps back and revisit what they know already about radicals before we dive back into imaginary numbers."

Conclusion

"To be successful, [the sheep farmer] must also be gentle, with a watchful eye for little things . . . and a hundred minor details upon which success depends," wrote Brick (1908, p. 154) more than a century ago. Feedback and feed-forward processes in the classroom should be used to cultivate learning, and not just simply measure it. By providing students with feedback they can use to revise and by tracking student progress to determine who needs subsequent instruction and when it should occur, educators can ensure that they feed and not merely weigh.

References

Brick, H. (1908). Early spring lambs. The Farm Journal, 32(4), 153–154.

Frey, N., & Fisher, D. (in press). The formative assessment action plan: Practical steps to more successful teaching and learning. Alexandria, VA: ASCD.

Fisher, D., Frey, N., Farnan, N., Fearn, L., & Petersen, F. (2004). Increasing writing achievement in an urban middle school. Middle School Journal, 36(2), 21–26.

Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77, 81–112.

Wiggins, G. (1998). Educative assessment: Designing assessments to inform and improve student performance. San Francisco, CA: Jossey-Bass.

Source: Frey, N., & Fisher, D. (2011). Feedback and feed forward.
Principal Leadership, 11(9), 90–
93. Copyright (2014) National Association of Secondary School Principals. For more information on NASSP products and services to promote excellence in middle level and high school leadership, visit www.nassp.org.

Summary

Frey and Fisher provide specifics on one type of data regarding feedback that teachers would find beneficial. They suggest that teachers keep checklists or error analysis sheets in order to determine the types of feedback they have provided to individual students as well as the types of errors that students make. Collecting these data, however, is only the first step. Frey and Fisher challenge teachers to then use these data to determine their next steps in teaching, reteaching, or other follow-up steps. Although the article does not provide great detail in terms of the implications for planning, the authors claim that simply measuring learning without considering what those measurements mean for teaching will not lead to improved learning. Frey and Fisher note that these practices can work for individual teachers but would be even more effective if shared across class sections or schools. Finally, these authors remind readers of the value of verbal as well as written feedback.

Critical Thinking Questions

1. The notions of feedback and feed forward are unique in the literature on assessment. What are the implications for this concept for online teaching and learning? How can feedback (given and received) in an online environment contribute to instructional design?

2. What's the difference between feedback concerning a task and feedback concerning the process of completing the task? Give an example from your own experience or subject context.

3. Research is beginning to emerge concerning the connections between assessment data (usually with respect to standardized test performance) and instruction, although there are few studies that demonstrate effective and intentional connections that teachers make between assessments and what they do in the classroom (Conderman & Hedin, 2012; Watts-Taffe et al., 2012). How might teachers explore these connections as a PLC or grade-level team? What might the evidence of follow-up and reteaching and use of assessment data look like?

4. Make the argument that teachers almost intuitively give this kind of verbal and written feedback followed by feeding forward to inform their teaching. Then, counter that argument and suggest, as Frey and Fisher do, that teachers in fact don't do this enough and that they need to be taught to connect assessment and design of their instruction.

3.5 Four Steps in Grading Reform, by Thomas Guskey and Lee Ann Jung

Introduction

Thomas Guskey is a well-known professor and expert in the area of professional development for educators. Guskey and co-author Lee Ann Jung maintain a teacher-learning lens in this article that focuses on four proposals for meaningful grading. Grading students' performance is seldom written about or discussed, though it is usually the culmination of the assessment process in classrooms and schools. Educators and leaders in schools assume that teachers know how to determine the letter grades that appear on report cards each term. Guskey and Jung challenge that assumption. They call for reform and rethink grading practices at the school and classroom levels. The authors note that there is great variety in the ways that teachers determine grades. Moreover, there is seldom a clearly stated purpose for the grades printed on the report card for parents and students to see. The article discusses an alternative to just one letter grade for a subject or content area and instead discusses the value of process, progress, and product grades to fully represent students' learning during the term. Finally, Guskey and Jung propose a means of grading students who are not currently on grade level, reminding us that the standards movement offers great potential for gain for students who are struggling and need feedback based on exactly where they are in their learning.

Excerpt

The following is an excerpt from Guskey, T. R., & Jung, L. A. (2012). Four steps in grading reform. Principal Leadership, 13(4)
  • 49. grading policies and practices that are fair, meaningful, educati onally sound, and beneficial to students. Be Clear About the Purpose One of the major reasons that school leaders run into difficultie s in their attempts to reform grading and reporting is that they f ail to identify the purpose of grading. Enamored of the promise of new online grade books and reporting software, they charge a head without giving serious thought to the function of grades as communication tools. In particular, they fail to consider what in formation they want grades to communicate, who is the primary audience for that information, and what outcome they want to ac hieve. As a result, predictable problems arise that thwart even t he most dedicated attempts at reform. Compounding the problem, parents, teachers, students, and scho ol leaders typically see report cards serving quite different purp oses. Some suggest that those differences stem from the conflict ing opinions about the report cards’ intended audience. Are they designed to communicate information primarily to parents, stud ents, or school personnel? Although a variety of purposes for grades and report cards may be considered legitimate, educators seldom agree on the primary purpose. This lack of consensus leads to attempts to develop a r eporting device that addresses multiple purposes but ends up ad dressing no purpose very well (Austin & McCann, 1992; Brookh art, 1991, 2011a; Cross & Frary, 1999). The simple truth is that no single reporting device can serve all purposes well. In fact, some purposes are actually counter to oth ers. For example, suppose that nearly all students in a particular sch ool attain high levels of achievement and earn high grades. Thos e results pose no problem if the purpose of the report cards is to communicate information about students’ achievement to paren ts or to provide information to students for the purpose of self- evaluation. 
But that same result poses major problems if the purpose of the report cards is to select students for special educational paths or to evaluate the effectiveness of instructional programs.

To use grades for selection or evaluation purposes requires variation in the grades — and the more variation, the better! For those purposes, grades should be dispersed across all possible categories to maximize the differences among students and programs. How else can appropriate selection take place or one program be judged as being better than another if all students receive the same high grades? Determining differences under such conditions is impossible.

The first decision that must be made in any reform effort, therefore, is determining the purpose of the grades and report card. The struggles that most school leaders experience in reforming grading policies and practices stem from changing their grading methods before they reach consensus about the purpose of grades and report cards (Brookhart, 2011b). All changes in grading policy and practice must build from a clearly articulated purpose statement, which should be printed on the report card itself so that all who look at the report card understand its intent. When a clear purpose is defined, decisions about the most appropriate policies and practices are much easier to make.

Use Multiple Grades

Another issue that poses a significant obstacle to grading and reporting reform is the insistence that students receive a single grade for each subject area or course. The simplest logic reveals that this practice makes little sense. If someone proposed combining measures of height, weight, diet, and exercise into a single number or mark to represent a person’s physical condition, we would consider it ridiculous. How could the combination of such diverse measures yield anything meaningful? Yet every day, teachers combine evidence of student achievement, attitude, responsibility, effort, and behavior into a single grade, and no one questions it.
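The information loss described here can be sketched in a short, hypothetical example: two students with very different category profiles collapse to the same single grade once weights are applied. The category names, weights, and scores below are invented for illustration; they are not taken from the article.

```python
# Hypothetical weights of the kind a computerized grade book might apply.
WEIGHTS = {"exams": 0.5, "homework": 0.3, "participation": 0.2}

def single_grade(scores: dict) -> float:
    """Collapse per-category scores (0-100) into one weighted number."""
    return round(sum(WEIGHTS[cat] * scores[cat] for cat in WEIGHTS), 2)

# A strong test-taker who neglects daily work:
student_a = {"exams": 96, "homework": 70, "participation": 80}
# A modest test-taker who works diligently:
student_b = {"exams": 78, "homework": 90, "participation": 95}

print(single_grade(student_a))  # 85.0
print(single_grade(student_b))  # 85.0
```

Both students receive an identical single grade, yet they are very different learners; reporting the category scores separately would preserve exactly the distinction the single number erases.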
In determining students’ grades, teachers frequently merge scores from major exams, compositions, quizzes, projects, and reports with evidence from homework, punctuality in turning in assignments, class participation, work habits, and effort. Computerized grading programs help teachers apply different weights to each of those categories (Guskey, 2002a), which they then combine in widely varied ways (see McMillan, 2001; McMillan, Myran, & Workman, 2002). The result is what researchers refer to as a “hodgepodge grade” (Cross & Frary, 1999).

Another, more meaningful approach is to offer separate grades for product, process, and progress learning criteria (Guskey, 2006; Guskey & Bailey, 2010).

Product criteria reflect what students know and are able to do at a specific point in time. In other words, they reflect students’ current level of achievement. Evidence of meeting product criteria comes from culminating or “summative” evaluations of student performance (O’Connor, 2009). Teachers who use product criteria typically base grades on final examination scores; final reports, projects, or exhibits; overall assessments; and other culminating demonstrations of learning.

Process criteria are emphasized by educators who believe that product criteria do not provide a complete picture of student learning. They contend that grades should reflect not only the final results but also how students got there. Teachers who consider responsibility, effort, or work habits when assigning grades use process criteria. So do those who count classroom quizzes, formative assessments, homework, punctuality in turning in assignments, class participation, or attendance.

Progress criteria are based on how much students have gained from their learning experiences. Other names for progress criteria include learning gain, improvement scoring, value-added learning, and educational growth. Teachers who use progress criteria look at how much improvement students have made over a particular period of time, rather than just where they are at a given moment. As a result, scoring criteria may be highly individualized among students.
For example, grades might be based on the number of skills or standards in a learning progression that students mastered and on the adequacy of that level of progress for each student. Most of the research evidence on progress criteria comes from studies of individualized instruction (Esty & Teppo, 1992) and special education programs (Gersten, Vaughn, & Brengelman, 1996; Jung & Guskey, 2012).

After establishing specific indicators of product, process, and progress learning criteria, teachers then assign separate grades to each set of indicators. In this way, they keep grades for achievement separate from grades for responsibility, learning skills, effort, work habits, or learning progress (Guskey, 2002b; Stiggins, 2008). This allows a more accurate and comprehensive picture of what students accomplish in school.

Reporting separate grades for product, process, and progress criteria also makes grading more meaningful. Grades for academic achievement reflect precisely that — academic achievement — and not some confusing amalgamation that’s impossible to interpret and that rarely presents a true picture of students’ proficiency (Guskey, 2002a). Teachers also indicate that students take process elements, such as homework, more seriously when they are reported separately. Parents favor the practice because it provides a more comprehensive profile of their children’s performance in school (Guskey, Swan, & Jung, 2011). The key to success in reporting multiple grades, however, rests in the clear specification of the indicators that relate to product, process, and progress criteria. Teachers must be able to describe how they plan to evaluate students’ achievement, attitude, effort, behavior, and progress. Then they must clearly communicate those criteria to students, parents, and others.

Change Procedures for Selecting the Class Valedictorian and Eliminate Class Rank

The third step involves challenging a long-held tradition in education. Most school leaders today understand the negative consequences of grading on the curve. They recognize that when grades are based on students’ relative standing among classmates, rather than on what students actually achieve, it’s impossible to tell if anyone learned anything.
Most school leaders also see that grading on the curve makes learning highly competitive for students who must battle one another for the few scarce rewards (high grades) awarded by the teacher. Such competition discourages students from cooperating or helping one another because doing so might hurt the helper’s chance at success (Krumboltz & Yeh, 1996). Similarly, teachers may refrain from helping individual students under those conditions because some students might construe this as showing favoritism and biasing the competition (Gray, 1993). School leaders may fail to recognize that other common school policies yield similar negative consequences, such as calculating students’ class rank on the basis of weighted GPAs and selecting the top student as the class valedictorian.

There is nothing wrong with recognizing excellence in academic performance. But when calculating class rank, the focus is on sorting and selecting talent, rather than on developing talent. The struggle to be on top of the sorting process and then chosen as class valedictorian leads to serious and sometimes bitter competition among high-achieving students. Early in their high school careers, top students analyze the selection procedures and then, often with the help of their parents, find ingenious ways to improve their standing. Gaining that honor requires not simply high achievement; it requires outdoing everyone else. And sometimes the difference among top-achieving students is as little as one-thousandth of a decimal point in a weighted GPA.

Ironically, the term valedictorian has nothing to do with achievement. It comes from the Latin, vale dicere, which means “to say farewell.” The first reference to the term appeared in the diary of the Reverend Edward Holyoke, president of Harvard College in 1759, who noted that “Officers of the Sophisters chose a Valedictorian.” Lacking any established criteria, the Sophisters (senior class members) arbitrarily selected the classmate with the highest academic standing to deliver the commencement address.
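The sorting mechanics described above can be sketched in a small, hypothetical example: rank-based selection crowns exactly one winner no matter how tiny the margin, whereas a criterion threshold recognizes every student who meets the standard. The student names, GPAs, and the 3.9 cutoff are invented for illustration.

```python
# Invented weighted GPAs; Ana leads Ben by a single thousandth.
weighted_gpas = {"Ana": 3.981, "Ben": 3.980, "Cam": 3.979, "Dee": 3.600}

# Rank-based selection: exactly one student "wins", however close the race.
valedictorian = max(weighted_gpas, key=weighted_gpas.get)
print(valedictorian)  # Ana

# Criterion-based recognition: no limit on how many can earn the honor.
honor_roll = sorted(name for name, gpa in weighted_gpas.items() if gpa >= 3.9)
print(honor_roll)  # ['Ana', 'Ben', 'Cam']
```

The first rule forces a zero-sum contest decided by a 0.001 margin; the second lets any number of students meet the stated standard, which is the shift toward criterion-based honors discussed in the excerpt.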
Within a few years, most colleges and universities moved away from competitive ranking procedures to identify honor students and, instead, adopted the criterion-based Latin system, graduating students cum laude, magna cum laude, and summa cum laude. Most also altered their procedures for selecting a commencement speaker, using such means as student votes and appointments made by faculty members on the basis of not only grades but also involvement in service projects and participation in extracurricular activities.

More and more high schools today are moving away from competitive ranking systems and adopting criterion-based systems similar to those used in colleges and universities. Rigorous academic criteria are established for attaining the high honor categories, but no limit is set on the number of students who might attain that level of achievement. Schools that establish such policies generally find that student achievement rises as more students strive to attain the honor. In addition, students begin helping each other gain the honor because helping a classmate can actually help, rather than hinder, the helper’s chance of success. Instead of pitting students against each other, such a system unites students and teachers in efforts to master the curriculum and meet rigorous academic standards.

Recognizing excellence in academic performance is a vital aspect of any learning community. But such recognition need not be based on arbitrary criteria and deleterious competition. Instead, it can and should be based on clear models of excellence that exemplify the highest standards and goals for students. (See Guskey & Bailey, 2010.) Educators can then take pride in helping the largest number of students possible meet those rigorous criteria and high standards of excellence.

Give Honest, Accurate, and Meaningful Grades

The fourth step in effective reform of grading and reporting is to ensure honest, accurate, and meaningful grades for exceptional and struggling learners. Of all of the students in a school’s population, those who have disabilities or who are struggling learners have the most to gain from a standards-based approach.
For those students, intervention decisions depend on having clear and complete information on their performance. But moving to standards-based grading presents a serious challenge. By removing non-