Colorado Assessment Summit: Teacher Evaluation
1. Presenter - John Cronin, Ph.D.
Contacting us:
NWEA Main Number: 503-624-1951
E-mail: rebecca.moore@nwea.org
This PowerPoint presentation and recommended resources are
available at our Slideshare site:
http://www.slideshare.net/JohnCronin4/colorado-assessment-summitteachereval
Considerations when using tests for
teacher evaluation
2. Key Colorado requirements related to
testing
• Assessment constitutes 50% of the evaluation.
• Statewide summative assessments are used for the subjects in which they are
available. Districts will be on their own for other subjects.
• Use of the Colorado Growth Model with statewide assessment.
• A measure of individually attributed or collectively attributed student
growth.
• Local measure must be credible, valid (aligned), reliable, and inferences
from the measure must be supportable by evidence and logic.
• The law requires that the measures should support consistent inferences.
• A rating of ineffective or partially effective can lead to loss of non-probationary status.
• If a value-added model is used the model must be transparent enough to
permit external evaluation.
3. Unique characteristics of the
Colorado approach
• Student progress counts for 50% of the
evaluation.
• Teachers are evaluated on both a “catch up”
and a “keep up” metric (at least on TCAP).
• The Colorado Growth Model will be used to
evaluate progress (at least on TCAP).
4. A finding of effectiveness or ineffectiveness is
more defensible when it is arrived at by:
1. Two or more assessments of different designs.
2. Two or more models of different designs.
3. As many cases as possible.
It is unwise to choose tests or models for local
assessment in the hope that they will mimic the
state assessment.
5. If evaluators do not
differentiate their
ratings, then all
differentiation
comes from the test.
7. Results of Tennessee Teacher Evaluation
Pilot
[Chart: distribution of teachers across rating levels 1-5, comparing the value-added result with the observation result; vertical axis 0% to 60%.]
8. Results of Georgia Teacher Evaluation Pilot
[Chart: distribution of evaluator ratings across the categories Ineffective, Minimally Effective, Effective, and Highly Effective.]
9. Bill and Melinda Gates Foundation (2013, January). Ensuring Fair and Reliable Measures of Effective
Teaching: Culminating Findings from the MET Project's Three-Year Study

Reliability of evaluation weights in predicting the stability of student growth gains year to year

Evaluation weights | Reliability coefficient (relative to state-test value-added gain) | Proportion of variance explained
Model 1: state test 81%, student surveys 17%, classroom observations 2% | .51 | 26.0%
Model 2: state test 50%, student surveys 25%, classroom observations 25% | .66 | 43.5%
Model 3: state test 33%, student surveys 33%, classroom observations 33% | .76 | 57.7%
Model 4: classroom observations 50%, state test 25%, student surveys 25% | .75 | 56.2%
10. Bill and Melinda Gates Foundation (2013, January). Ensuring Fair and Reliable Measures of Effective
Teaching: Culminating Findings from the MET Project's Three-Year Study

Reliability of a variety of teacher observation implementations

Observation implementation | Reliability coefficient (relative to state-test value-added gain) | Proportion of variance explained
Principal – 1 | .51 | 26.0%
Principal – 2 | .58 | 33.6%
Principal and other administrator | .67 | 44.9%
Principal and three short observations by peer observers | .67 | 44.9%
Two principal observations and two peer observations | .66 | 43.6%
Two principal observations and two different peer observers | .69 | 47.6%
Two principal observations, one peer observation, and three short observations by peers | .72 | 51.8%
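Across both MET tables, the proportion-of-variance column is, to rounding, the square of the reliability coefficient (e.g., .51² ≈ 26.0%, .76² ≈ 57.7%), the standard relationship between a correlation and the variance it accounts for. A minimal sketch of that check (the abbreviated model labels are mine, not the study's):

```python
# Reliability coefficients from the MET weighting-model table; the
# variance explained in the table matches r squared for each row.
reliabilities = {
    "Model 1 (state test 81%)":    0.51,
    "Model 2 (state test 50%)":    0.66,
    "Model 3 (equal thirds)":      0.76,
    "Model 4 (observations 50%)":  0.75,
}

for name, r in reliabilities.items():
    # r squared, expressed as a percentage of variance explained
    print(f"{name}: r = {r:.2f}, variance explained = {r * r:.1%}")
```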
11. How tests are used to evaluate teachers and principals

Testing → Metric (growth or gain score) → Analysis (value-added effect size and/or ranking) → Evaluation (performance rating)
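The chain from testing to rating can be sketched in code. This is a hypothetical illustration only (the function names, the norm values, and the rating cut points are invented, not the presenter's), showing how a raw score becomes a gain score, then a standardized effect size, then a performance band:

```python
# Hypothetical sketch of the testing -> metric -> analysis -> evaluation chain.
from statistics import mean

def gain_scores(pre, post):
    """Metric stage: a simple gain score per student."""
    return [b - a for a, b in zip(pre, post)]

def effect_size(gains, norm_mean, norm_sd):
    """Analysis stage: standardize the class mean gain against norms."""
    return (mean(gains) - norm_mean) / norm_sd

def rating(z):
    """Evaluation stage: translate the effect size into a rating band."""
    if z >= 0.5:
        return "highly effective"
    if z >= -0.5:
        return "effective"
    return "partially effective"

pre  = [190, 200, 210, 205]   # fall scale scores (illustrative)
post = [198, 206, 220, 214]   # spring scale scores (illustrative)
z = effect_size(gain_scores(pre, post), norm_mean=7.0, norm_sd=2.0)
print(rating(z))
```

Every arrow in the chain is a modeling choice (which metric, which norms, which cut points), which is why later slides stress that different models can produce different ratings for the same teacher.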
12. Issues in the use of growth measures
Instructional alignment
Tests used for teacher evaluation
must align to the teacher’s
instructional responsibilities.
13. Common problems with instructional
alignment
• Using school level math and reading
results in the evaluation of music,
art, and other specials teachers.
• Using general tests of a discipline
(reading, math, science) as a major
component of the evaluation of high
school teachers delivering specialized
courses.
14. Florida Teachers Sue Over Evaluation System
New York Times, April 17, 2013
Seven Florida teachers have brought a federal lawsuit to protest job evaluation
policies that tether individual performance ratings to the test scores of students
who are not even in their classes. The suit, which was filed Tuesday in
conjunction with three local affiliates of the National Education Association in
Federal District Court for the Northern District of Florida in Gainesville, says
Florida’s two-year-old evaluation system violates teachers’ rights of due process
and equal protection. Under a 2011 law, schools and districts must evaluate
teachers in part based on how much their students learn, as measured by
standardized tests. But since Florida, like most states, administers only math and
reading tests and only in selected grades, many teachers do not teach tested
subjects. One of the plaintiffs, a first-grade teacher, was rated on the
basis of test scores of students in a different school in her
district, and another, who teaches vocational classes to aspiring
health care workers, was rated based on test scores of students in
grades and subjects she had never taught. “This lawsuit highlights the
absurdity of the current evaluation system,” said Andy Ford, president of the
Florida Education Association.
16. Inconsistency occurs because of
• Differences in test design.
• Differences in testing conditions.
• Differences in the models being applied to
evaluate growth.
17. The reliability problem –
Inconsistency in testing conditions

[Diagram: a test-retest matrix crossing two tests (Test 1, Test 2) with two administration times (Time 1, Time 2).]
18. The reliability problem –
Inconsistency in testing conditions

[Diagram: the test-retest matrix (Test 1/Test 2 by Time 1/Time 2) repeated, showing the multiple comparisons possible across tests and administration times.]
19. The problem with spring-spring testing

[Timeline, 3/11 to 3/12: a spring-to-spring testing interval spans the end of Teacher 1's year, the summer, and Teacher 2's year.]
20. Characteristics of value-added metrics
• Value-added metrics are inherently NORMATIVE.
• If below average = partially effective, then half of an
average staff will be partially effective.
• Value-added metrics can’t measure progress of the
larger group over time.
• Extreme performance is more likely to have alternate
explanations.
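The normative point can be made concrete with a toy example (the scores are invented): because the metric ranks teachers against each other, about half of any group sits below its own midpoint, even in a year when every teacher's absolute result improves.

```python
# Minimal sketch with invented growth results: a normative metric
# always leaves about half the group "below average", regardless of
# how much the whole group improves.
def below_midpoint_share(scores):
    midpoint = sorted(scores)[len(scores) // 2]
    return sum(s < midpoint for s in scores) / len(scores)

year1 = [2.0, 3.0, 4.0, 5.0, 6.0, 7.0]   # hypothetical growth results
year2 = [s + 1.0 for s in year1]         # everyone improves a full point

print(below_midpoint_share(year1))  # 0.5
print(below_midpoint_share(year2))  # 0.5 -- half are still below average
```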
21. New York City
• Margins of error can be very large
• Increasing n doesn't always decrease the
margin of error
• The margin of error in math is typically smaller
than in reading
22. Los Angeles Unified
• Teachers can easily fall into multiple rating categories
• The choice of model can have a large impact
• Models affect English results more than math results
• Teachers do better in some subjects than
others
• More complex models don't necessarily favor
the teacher
23. “The findings indicate that these modeling
choices can significantly influence outcomes
for individual teachers, particularly those in
the tails of the performance distribution who
are most likely to be targeted by high-stakes
policies.”
Ballou, D., Mokher, C. and Cavalluzzo, L. (2012) Using Value-Added Assessment for Personnel
Decisions: How Omitted Variables and Model Specification Influence Teachers’ Outcomes.
Instability at the tails of the
distribution

[Chart: value-added estimates under different model specifications for two LA Times teachers, illustrating instability at the tails.]
24. “Significant evidence of bias plagued the value-added model
estimated for the Los Angeles Times in 2010, including significant
patterns of racial disparities in teacher ratings both by the race of
the student served and by the race of the teachers (see
Green, Baker and Oluwole, 2012). These model biases raise the
possibility that Title VII disparate impact claims might also be filed
by teachers dismissed on the basis of their value-added estimates.
Additional analyses of the data, including richer models using
additional variables mitigated substantial portions of the bias in the
LA Times models (Briggs & Domingue, 2010).”
Baker, B. (2012, April 28). If it’s not valid, reliability doesn’t
matter so much! More on VAM-ing & SGP-ing
Teacher Dismissal.
Possible racial bias in models
26. Issues with the Colorado Growth
Model
• When applied to MAP it discards the
advantages of a cross-grade scale and robust
growth norms.
• It is a descriptive and not a causal model.
• As currently applied it does not control for
factors outside the teacher’s influence that
may affect student growth.
27. A brief commentary on the Colorado Growth
Model
Its limitations
•It does not support inference.
•It does not take advantage of the
useful characteristics of a vertical
scale.
•It uses only prior scores and past
testing history to evaluate growth.
28. A brief commentary on the Colorado Growth
Model
Other limitations
•The model can’t be used for cross-state comparisons.
• The model is problematic for
assessing long-term trends.
30. Translating ranked data to ratings -
principles
• There is no “science” per se around translating a
ranking to a rating. If you call a bottom-40% teacher
ineffective, that is a judgment.
• The rating process can be politicized.
• The process is easy to over-engineer.
31. New York Rating System
• 60 points assigned from classroom observation
• 20 points assigned from state assessment
• 20 points assigned from local assessment
• A score of 64 or less is rated ineffective.
33. Cheating
Atlanta Public Schools
Crescendo Charter Schools
Philadelphia Public Schools
Washington DC Public Schools
Houston Independent School
District
Michigan Public Schools
34. Unintended Consequences?
• Many principals and teachers (including good ones)
will seek schools or teaching assignments that they
think will improve their results.
• Principals and teachers may game the system,
inadvertently or intentionally.
• Many teachers will seek opportunities to avoid
grades with standardized tests.
• Ranking metrics can discourage cooperation among
principals and teachers – finding ways to reward
teamwork and cooperation is important.
35. Case Study #1 - Mean value-added performance in mathematics by
school – fall to spring
[Chart: mean value-added performance in mathematics by school, fall to spring; vertical axis -8.00 to 6.00.]
36. Case Study #1 - Mean spring and fall test duration in minutes by
school
[Chart: mean spring-term and fall-term test duration in minutes by school; vertical axis 0 to 90 minutes.]
38. Case Study #2

Differences in fall-spring test durations
[Chart: mathematics – Spring < Fall: 15%; Spring = Fall: 25%; Spring > Fall: 60%.]

Differences in growth index score based on fall-spring test durations
[Chart: mathematics growth index by duration group (Spring < Fall, Spring = Fall, Spring > Fall); vertical axis 0.0 to 6.0.]
39. Case Study #2

Differences in spring-fall test durations
[Chart: Fall < Spring: 42%; Fall = Spring: 33%; Fall > Spring: 25%.]

Differences in raw growth by spring-fall test duration
[Chart: raw growth by duration group (Fall < Spring, Fall = Spring, Fall > Spring); vertical axis -5.0 to 0.0.]

How much of summer loss is really summer loss?
40. Case Study #2

Differences in fall-spring test duration (yellow-black) and
differences in growth index scores (green) by school
[Chart: growth index (0.0 to 10.0) and fall and spring test durations in minutes (0 to 200), by school.]
41. Negotiated goals – Student Learning
Objectives
• Negotiated goals (SLOs) are likely to be
necessary in some subjects.
• It is difficult to set fair and reasonable goals
for improvement absent norms or context.
• It is likely that some goals will be absurdly high
and others far too low.
42. Ways to evaluate the attainability of a goal
• Prior performance
• Performance of peers within the system
• Performance of a norming group
43. One approach to evaluating the attainment
of goals.
Students in La Brea Elementary School show
mathematics growth equivalent to only 2/3 of the
average for students in their grade.
Level 4 – (Aspirational) Students in La Brea Elementary School will
improve their mathematics growth to 1.5 times the average for their
grade.
Level 3 – (Proficient) Students in La Brea Elementary School will
improve their mathematics growth to the average for their grade.
Level 2 – (Marginal) Students in La Brea Elementary School will
improve their mathematics growth relative to last year.
Level 1 – (Unacceptable) Students in La Brea Elementary School
do not improve their mathematics growth relative to last year.
44. Is this goal attainable?
62% of students at John Glenn Elementary met or exceeded
proficiency in Reading/Literature last year. Their goal is to improve
their rate to 82% this year. Is the goal attainable?
[Histogram: Oregon schools' change in Reading/Literature proficiency from 2009-10 to 2010-11, among schools that started with 60% proficiency rates. Counts by change bin: > -30%: 362; > -20%: 351; > -10%: 291; > 0%: 173; > 10%: 73; > 20%: 14; > 30%: 3.]
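The histogram counts above answer the question directly. A minimal sketch (assuming the "> 20%" and "> 30%" bins hold the schools that gained at least the 20 points this goal requires):

```python
# Oregon school counts by one-year change in proficiency rate, taken
# from the histogram for schools starting near 60% proficiency.
counts = {"> -30%": 362, "> -20%": 351, "> -10%": 291, "> 0%": 173,
          "> 10%": 73, "> 20%": 14, "> 30%": 3}

total = sum(counts.values())                        # 1267 schools
gained_20_plus = counts["> 20%"] + counts["> 30%"]  # 17 schools

print(f"{gained_20_plus}/{total} = {gained_20_plus / total:.1%}")
```

Roughly 1 school in 75 made a gain of that size, which is the kind of base rate a goal-setting conversation should start from.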
45. Is this goal attainable and
rigorous?
45% of the students at La Brea Elementary showed average growth or
better last year. Their goal is to improve that rate to 50% this year. Is
their goal reasonable?
[Chart: percent of students with average or better annual growth in the Repus school district, by school; vertical axis 0% to 100%.]
46. The selection of metrics matters
Students at LaBrea Elementary School
will show growth equivalent to 150% of
grade level.
Students at Etsaw Middle School will
show growth equivalent to 150% of grade
level.
48. Percent of a year’s growth in
mathematics
[Chart: percent of a year's growth in mathematics by grade (grades 2-9); vertical axis 0% to 200%.]
49. Assessing the difficult to measure
• Encourage use of performance assessment and rubrics.
• Encourage outside scoring
– Use of peers in other buildings, professionals in the field,
contest judges
• Make use of resources
– Music educator, art educator, vocational professional
associations
– Available models – AP art portfolio.
– Use your intermediate agency
– Work across buildings
• Make use of classroom observation.
50. Possible legal issues
• Title VII of the Civil Rights Act of 1964 –
Disparate impact of sanctions on a protected
group.
• State statutes that provide tenure and other
related protections to teachers.
• Challenges to a finding of “incompetence”
stemming from the growth or value-added
data.
51. Recommendations
• Embrace the formative advantages of growth
measurement as well as the summative.
• Create comprehensive evaluation systems with
multiple measures of teacher effectiveness (Rand,
2010)
• Select measures as carefully as value-added models.
• Use multiple years of student achievement data.
• Understand the issues and the tradeoffs.
52. Presenter - John Cronin, Ph.D.
Thank you for attending this event