This document summarizes Westat's evaluation of Ohio's Teacher Incentive Fund (OTIF) programs. It provides details on the goals and funding rounds of the OTIF from 2006 to 2012. It then outlines Westat's evaluation approach, which includes analyzing program documents, surveying teachers and administrators, conducting case studies, and assessing student achievement data. Previous findings from OTIF1 showed small or no effects on student test scores and mixed stakeholder responses. Recommendations focused on improving communication, incentives, and sustainability. The evaluation aims to identify best practices from Ohio's varied performance pay models.
1. Strategic Compensation and Evaluation of the Ohio Teacher Incentive Fund
Keith MacAllum, Westat
OERC June 28, 2012
2. The Teacher Incentive Fund
• Initial Round (2006) OTIF1
16 grants, $240M
• Second Round (2007)
18 grants, $237M
• Third Round (2010) OTIF3
62 grants, $400M
• Fourth Round (July 2012)
30 grants, from $500K to $12M each
3. Two Rounds of Ohio TIF
• OTIF1
2006-2011
Four largest urban districts
Two used the national TAP model; two used home-grown models
• OTIF3
2010-2015
24 districts, mostly rural
All using home-grown models
4. Major Research Questions
• Do incentives lead to increased student
achievement?
• How do teachers and school systems respond?
• How best to measure teacher effectiveness?
• What are the implementation and logistical
barriers?
• How can compensation reform be made systemic and
sustainable?
Recruitment, retention, promotion, dismissal, evaluation
5. Objectives of the Current OTIF Evaluation
• Program Implementation & Lessons Learned
• Teacher Effectiveness and Behavior –
subjectively perceived and objectively
measured
• Administrative Behavior and School/LEA
Processes
• Student Outcomes
• Sustainability
6. Features of OTIF Evaluation
• Formative and summative
• Mixed methods (quantitative & qualitative)
• Identify and disseminate generalizable best
practices from over 20 customized
performance-based compensation models
• Coordinate with ODE, Battelle for Kids, and
Mathematica on data collection activities
7. National Evaluation of TIF
• The national evaluation of the third round of TIF is
being conducted by Mathematica Policy Research
• All OTIF districts will complete annual surveys
in the fall of 2011, 2013, and 2014
• Only Cincinnati is participating fully in the
national evaluation
• Westat has excluded Cincinnati from most of its
evaluation activities
8. OTIF Evaluation – Data Collection
• Reviews of program documents and
administrative data
• Surveys of teachers, principals, and
coordinators
• Case studies of individual schools, consisting of
interviews with stakeholders
• Analyses of state, district, and school level
data, such as test scores
9. Data Collection – Reviews of Program Documents and Administrative Data
• Program documents will include:
District compensation models
Communication plans and materials
Professional development resources
Program status and monitoring reports
• Purposes of document reviews:
Develop understanding of OTIF
Inform the development of data collection instruments
Track changes to program models over time
• Administrative data include student demographics,
teacher characteristics, and program expenditures
10. Data Collection – Surveys
• Teacher surveys: a strategic sample of 9 districts
(30 schools) in spring 2012 and spring 2014
District size
Grade levels
Program components
• Principal surveys: all OTIF schools (excluding
Cincinnati) in spring 2013 and spring 2015
• OTIF Coordinator surveys each year, with
timing to be determined, probably fall
11. Data Collection – Surveys
• Common core of questions, with some
customization based on local model
• Topics include:
Perceptions of the program and school conditions
Engagement in activities, new roles, and PD
Use of value-added measures
Changes in teacher effectiveness and behavior
Changes in administrative behavior and school
processes
12. Data Collection – Case Studies
• Strategically selected sample of six districts
District size
Grade levels involved
Geographical location
Program components
• Annual visits to 12 schools within those districts
• Initial visits took place in spring 2012
• Subsequent visits likely during the spring semester
13. Data Collection – Case Studies
• Interviews with teachers, principals, and
district and union staff
• Topics will include:
Perceived strengths and weaknesses of
local models
Communication
Perceived changes in behavior, processes
Lessons learned
14. Data Collection – Student Outcomes
• Value-added data to measure the impact
of incentives on teacher effectiveness
• Ohio has contracted with BFK to obtain
student and teacher data and calculate
scores for individual teachers
• Westat will coordinate with BFK and ODE
to obtain value-added data
15. Findings from OTIF1
• Student Achievement
No statistical relationship between OTIF participation and
OAT reading and math scores in the TAP districts
In non-TAP districts, a small but statistically significant
effect on reading achievement was observed, but no
effect on math scores
Most of this effect was attributable to reading score
increases in one district
• Toledo distributed the most incentives, in both
number and total value
16. Findings from OTIF1
• Stakeholder Response and Implementation
Support was relatively high from the outset
Teachers valued recognition over financial incentive
Teachers unlikely to respond positively to incentive
criteria they perceive to be outside of their control, of
inadequate value, or unrealistic to achieve
Most perceived improvement in collaboration
Some principals never engaged their staff or actively
moved to adopt the program model
Time delay before payouts seen as problematic
17. What Accounts for Findings?
• Lack of awareness and knowledge
• Dollar value of incentives not perceived as
meaningful
• Site coordinators not empowered to make
decisions
• Initiative not perceived to be sustainable
• Cost share provision not pursued
• Contextual factors
• “We’re already working as hard as we can.”
18. Recommendations from OTIF1
• Ensure effective, ongoing communication
• Offer professional development on using value-added
data and other teacher evaluation data
• Apply meaningful, appropriate, achievable criteria
• Build into district personnel policy, not layer on
• Connect with larger educational reforms
• Plan for absorbing costs or modifying compensation
structure to make program sustainable
19. Questions
• Westat evaluation team
Keith MacAllum, Project Director
keithmacallum@westat.com
John Wells, Survey Manager
johnwells@westat.com
Liam Ristow, Case Study Manager
liamristow@westat.com