Program Evaluation Basics - Center for Nonprofit Success slides
1. Program Evaluation:
Using evaluation data to set direction,
expand impact, and maintain
accountability
October 21, 2014
Presented by:
Isaac D. Castillo
Director of Data and Evaluation
DC Promise Neighborhood Initiative
Twitter: @isaac_outcomes
2. Today’s Agenda
Why should you care about program
evaluation?
What is program evaluation?
How can your organization successfully
conduct program evaluation work?
3. Why Should You Care About
Program Evaluation?
LAYC domestic violence story
LeapOfReason.org
First Do No Harm…Then Do More Good
New domestic violence program
component designed to teach three
things:
o Partner violence is not an OK expression of love
o Partner violence is not OK in Latino culture
o There are safe ways to get out of violent
relationships
4. Why Should You Care About
Program Evaluation?
No human being is perfect.
Staff will make mistakes
Organizations will make mistakes
Services will be delivered poorly
Despite the best of intentions, some people
will be harmed.
How do you know you are not harming
people with your services?
5. What is Program Evaluation?
Process to determine if your program /
intervention / approach is effective.
Need to define what ‘success’ is for your
program.
Program evaluation does NOT need to be
done by specialists or outsiders – but those
people do add credibility and rigor (in most
cases)
6. The Basics of Program
Evaluation –
An Example
The concept of dieting – if you understand
dieting, you understand the basics of program
evaluation.
What is the goal of dieting (how do you
define dieting ‘success’)?
How do you know if your diet ‘works’?
7. Data and Dieting
Person weighs 200 pounds (90 kilograms)
• Does that data point alone tell us anything?
• Context Matters – what if person is 4 feet tall and 10 years old?
• Timing Matters – is this at beginning, end, or middle of diet?
8. Could Be About More Than
Weight
• Other things that could be measured:
• Body Mass Index (BMI)
• Physical fitness
• Blood measures (cholesterol levels)
• Own perceptions of health / feeling
• Appearance / muscle tone
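As a concrete illustration of the first measure above, here is a minimal Python sketch of the standard BMI calculation (weight in kilograms divided by height in meters squared); the weight and height values are hypothetical.

def body_mass_index(weight_kg, height_m):
    """Standard BMI formula: weight (kg) divided by height (m) squared."""
    return weight_kg / (height_m ** 2)

# Hypothetical example: 90 kg (about 200 pounds) at 1.75 m tall.
print(round(body_mass_index(90, 1.75), 1))  # 29.4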
9. Five Evaluation Concepts to
be Covered
Types and timing of evaluation
Who or what will you evaluate, and how will
they be selected?
Quantitative, Qualitative, and Mixed Methods
approaches
How detailed or rigorous does it need to be?
Who does the work – internal or external?
10. Types of Evaluation
What do I mean by ‘type’?
Really it is about timing - when do you
collect data?
What will you compare your data to?
Much of this discussion relies on:
o Costs
o Availability of potential comparison data
o What you are trying to learn
11. Traditional (Time Series)
Most common type of program evaluation.
Looking to see if things have changed over time.
Compare the situation before the program with the
situation after the program.
Before Program → Program Delivered → After Program
Must measure same things, in same ways, at both
points in time.
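A minimal Python sketch of this before/after logic, using hypothetical test scores measured the same way at both points in time:

# Hypothetical scores for the same five participants,
# measured with the same instrument before and after the program.
before = [55, 60, 48, 72, 65]
after = [63, 66, 50, 78, 70]

# Average change from "before program" to "after program".
changes = [post - pre for pre, post in zip(before, after)]
print(sum(changes) / len(changes))  # 5.4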
12. Comparison Group
A time series study that compares to another group
(that does not receive programming).
Program group: Before Program → Program Delivered → After Program
Comparison group: Before Program → No (or minimal) programming → After Program
More rigorous, but more challenging.
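A minimal Python sketch of the comparison-group idea, with hypothetical scores: compute the before-to-after change in each group, then compare the two changes.

def average_change(before, after):
    """Average before-to-after change for one group."""
    return sum(post - pre for pre, post in zip(before, after)) / len(before)

# Hypothetical scores: one group receives the program, the other does not.
program_gain = average_change(before=[50, 62, 58], after=[60, 70, 66])
comparison_gain = average_change(before=[52, 60, 57], after=[54, 61, 59])

# The gap between the two changes is the estimated program effect.
print(program_gain - comparison_gain)  # roughly 7 points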
13. Who / What Will You
Evaluate?
Need to define the population that will be
evaluated.
Need to define ‘success measures’
(outcomes) – what are you trying to achieve?
Once these questions are answered, then
need to consider which participants will be
part of the evaluation (and maybe who gets
programming).
14. In Time Series, This is Simple
Usually you just serve and evaluate those who
enroll in the program:
Self-selection → Before Program → Program Delivered → After Program
First come, first served is frequently used if there
are too many potential participants.
15. Comparison Groups Are More
Complicated
• Can select by randomizing participants into
groups:
Random selection → Before Program → Program Delivered → After Program
Random selection → Before Program → No (or minimal) programming → After Program
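A minimal Python sketch of randomizing participants into the two groups; the roster names are hypothetical.

import random

# Hypothetical roster of potential participants.
participants = ["Ana", "Ben", "Carla", "Diego", "Eve", "Frank"]

# Shuffle, then split in half: one group gets programming, one does not.
random.shuffle(participants)
midpoint = len(participants) // 2
program_group = participants[:midpoint]
comparison_group = participants[midpoint:]

print("Program group:", program_group)
print("Comparison group:", comparison_group)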
16. Compare across high/low
dosage
• Can use self-selection:
Self-selected high attendance: Before Program → Program Delivered → After Program
Self-selected low attendance: Before Program → No (or minimal) programming → After Program
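A minimal Python sketch of the dosage comparison, assuming a hypothetical attendance cutoff that separates the self-selected high- and low-attendance groups.

# Hypothetical records: (sessions attended, score before, score after).
records = [(20, 50, 62), (18, 55, 64), (3, 52, 54), (5, 48, 51)]

HIGH_ATTENDANCE = 10  # hypothetical cutoff separating high from low dosage

high_gains = [post - pre for sessions, pre, post in records if sessions >= HIGH_ATTENDANCE]
low_gains = [post - pre for sessions, pre, post in records if sessions < HIGH_ATTENDANCE]

print("High-attendance average gain:", sum(high_gains) / len(high_gains))  # 10.5
print("Low-attendance average gain:", sum(low_gains) / len(low_gains))     # 2.5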
17. Quantitative, Qualitative, and
Mixed Methods Evaluation
Quantitative = primarily numerical information.
Qualitative = primarily non-numerical (descriptive) information.
One is not better than the other – they are just different
types of information. Both can be high quality, and
both can be poor.
Most modern program evaluation is Mixed
Methods – both quantitative and qualitative to
varying degrees.
18. What does Mixed Methods
Evaluation Look Like?
Quantitative
•Survey responses (that can be
quantified)
•Numerical data (test scores,
report cards, medical data,
etc.)
•External data (collected by
others)
Qualitative
•Interviews with participants
•Focus groups with
participants
•Interviews with staff
•Interviews or focus groups
with key stakeholders
•Process / fidelity study
•Open ended survey responses
19. Where Does This Get Tricky?
Need to focus on outcomes – changes in
knowledge, attitudes, behavior or conditions.
Satisfaction surveys do not equal evaluation.
Just because someone liked the program does not
mean the program led to successful outcomes.
Some things can be either qualitative or
quantitative depending on who you ask.
20. How Detailed or Rigorous Does
the Evaluation Need to Be?
What do you want to do with the results?
Prove to yourself the program works?
Use the results to market/fundraise?
Publish the results through your own materials?
Publish the results in peer-reviewed journals?
How ‘certain’ do you want to be about the results?
Are you fine with some doubt?
Will you be comfortable answering concerns and criticisms?
Are you willing to live with negative results?
21. Using Evaluation Data to
Inform Change
Changes in program type / delivery
(this parenting program isn’t working, time for a different program)
Changes in dosage
(classes are offered once a month, increase classes to once a week)
Changes in measurement tools or approaches
(this survey question is flawed, let’s find a better one)
Changes in staff training
(staff do not seem to know how to deliver this program – time for
training)
Changes in organizational culture
(no one is taking this approach seriously – time for larger
conversation)
22. Communicating Evaluation
Results
Be honest about your data and the limitations of your
method.
What you omit is as telling as what you communicate.
When communicating negative / ‘bad’ results, follow
this formula:
Finding / Result + Theory + New Solution
Use different formats to communicate the information.
Celebrate the successes, and identify areas for
improvement.
No need to share everything in detail – but have it ready
if someone requests it.
23. Balancing Cost and Rigor
Program evaluation does have a resource
cost – but so does everything else.
Simple internal evaluation / performance
management can be done at low cost.
However, the larger picture requires more
rigorous (and more expensive) evaluation.
Start small and focus on 2-3 outcomes – then
expand over time.
24. Internal vs. External
Evaluation
Who does design of evaluation?
Who does selection / creation of data
collection tools?
Who does actual data collection?
Who does the analysis of data?
Who creates the reports / charts /
publications?
25. Resources - Books
The Nonprofit Outcomes Toolbox
By Robert M. Penna
Handbook of Practical Program Evaluation
By Joseph S. Wholey, Harry P. Hatry, and Kathryn E.
Newcomer - editors
Performance Measurement: Getting Results
By Harry P. Hatry
26. Resources - Articles
First Do No Harm…Then Do More Good
By Isaac Castillo
http://tinyurl.com/isaacLOR
Good Stories Aren’t Enough
By Martha A. Miles
http://tinyurl.com/milesgoodstories
Yes We Can! Performance Management in
Nonprofit Human Services
By David E.K. Hunter
http://tinyurl.com/hunteryeswecan
27. Resources - Links
PerformWell
www.performwell.org
Performance Management and Evaluation:
Two Sides of the Same Coin
By Isaac Castillo and Ann Emery
https://www.youtube.com/watch?v=nC7AG8XxrI4
Leap of Reason
http://www.leapofreason.org
28. Isaac’s Contact Information
Isaac D. Castillo
Director of Data and Evaluation
DC Promise Neighborhood Initiative
On Twitter: @Isaac_outcomes
Email: Isaac.Castillo@dcpni.org
October 21, 2014