7. Educational tradition → Educational research
…focused on teaching → …focused on learning
…focused on inputs → …focused on outputs
…tradition-driven → …data-driven
…teacher as content authority → …teacher as content guide
…teacher responsible for cause > effect → …teacher iterates towards effect
…teacher is the sole proprietor of a private practice → …teacher is a member of a Professional Learning Community
…"best practices" → …context sensitive
8. New methods of measuring impact
(for teachers and researchers)
1. Action-based research
2. Developmental evaluation
3. Design-based research
4. Control groups, A/B tests
5. SAMR
10. In medical terms, "efficacy" refers to the ability of a product to provide a beneficial effect.
16. From social metrics to learning metrics
• The learning benefit of your product is not self-evident
• Social metrics are often proxies for learning outcomes
• Social metrics should be used to make hypotheses about learning outcomes
17. From social metrics to learning metrics
Example: you have a product that teaches math online with a parent-teacher bridge component. If you can increase parent-teacher communication by 20 min/week, you can hypothesize that student achievement will increase, based on "Parental Involvement and Student Achievement: A Meta-Analysis" (Jeynes, 2005).
Your social metric + existing learning research = learning hypothesis
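The arithmetic behind such a hypothesis can be sketched in a few lines. This is a hypothetical back-of-envelope model, not the analysis in Jeynes (2005): the effect-size-per-hour figure below is an illustrative assumption you would replace with a number taken from the actual research.

```python
# Hypothetical sketch: projecting a learning hypothesis from a social metric.
# The 0.3 SD-per-weekly-hour figure is made up for illustration; it is NOT
# taken from Jeynes (2005). Replace it with the study's actual effect size.

def hypothesized_gain(extra_contact_min_per_week: float,
                      effect_size_per_hour: float) -> float:
    """Project a hypothesized achievement gain (in standard deviations)
    from additional parent-teacher contact time."""
    return (extra_contact_min_per_week / 60.0) * effect_size_per_hour

# Example: +20 min/week of parent-teacher contact, assumed 0.3 SD per weekly hour.
gain = hypothesized_gain(20, 0.3)
print(f"Hypothesized gain: {gain:.2f} SD")
```

The point is not the specific number but the shape of the reasoning: social metric in, a hedged learning hypothesis out, which the later metrics boxes then try to confirm or refute.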
26. From Good Intentions to Real Impact
5 Levels of Efficacy Method
Level 1: Why might the product have an impact on learning? (Evidence: hypothesis)
Level 2: Data shows some change amongst users (Evidence: anecdotes; surveys)
Level 3: Data shows the change is because of the product (Evidence: control groups)
Level 4: Independent evaluation/explanation of the learning impact (Evidence: independent researchers)
Level 5: Ability to replicate/scale the learning impact (Evidence: replication studies)
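The five-level ladder is easy to encode as a lookup table, which is handy if you want to tag each product claim with the strongest evidence currently behind it. A minimal sketch (the wording is condensed from the slide; the function name is my own):

```python
# Minimal sketch of the 5-level efficacy ladder as a lookup table.
# Keys are levels; values are (claim, evidence required to make the claim).
EFFICACY_LEVELS = {
    1: ("Why might the product impact learning?", "Hypothesis"),
    2: ("Data shows some change amongst users", "Anecdotal evidence; surveys"),
    3: ("Data shows the change is because of the product", "Control groups"),
    4: ("Independent evaluation/explanation of impact", "Independent researchers"),
    5: ("Ability to replicate/scale the impact", "Replication studies"),
}

def evidence_required(level: int) -> str:
    """Return the kind of evidence needed to claim a given efficacy level."""
    return EFFICACY_LEVELS[level][1]

print(evidence_required(3))  # prints "Control groups"
```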
28. END USER: Who are the students who use the product?
- Age
- Grade
- Gender
- Special needs
- Socioeconomic
- Culture
TECHNOLOGY: What hi-tech/low-tech do the students use?
- Hand-outs
- iPad
- Lab equipment
- Sports equipment
SCHEDULE: How often do the students interact with the product?
- Once/week
- 30 min/day
- On demand
- Beginning of each course
- Home use
LEARNING PROPOSITION: What learning gains do you expect to see?
- Increase in marks
- 21st C skills
- Complex thinking
- EQAO scores
THEORY: What learning theory predicts these learning gains?
- Scientific studies
- Theory search
- Existing products
- Benchmarks
METRICS: What metrics will you collect to prove the learning gain?
- Marks
- Self-assessment
- Engagement
- Attendance
- Iterate on hypothesis, unanticipated metrics
PARTNERS: What partners do you need to execute?
- Teachers
- Researchers
- Lead students
- Principals
- Superintendents
LEARNING GAINS: (After implementation) What learning gains resulted from the product's use?
- Final metrics, confirmation of hypothesis
LEARNING OBSTACLES: (After implementation) Did students get stuck anywhere? Were there any unpredictable negative side effects to the product's use?
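The canvas above is essentially a nine-box checklist, and it can be represented as one. A hypothetical sketch, with field names of my own choosing mapped from the slide's box titles, that reports which boxes are still blank before a pilot:

```python
# Hypothetical sketch: the nine canvas boxes as a checklist object.
# Field names are my own mapping of the slide's box titles.
from dataclasses import dataclass

@dataclass
class LearningCanvas:
    end_user: str = ""              # who are the students?
    technology: str = ""            # what hi-tech/low-tech do they use?
    schedule: str = ""              # how often do they interact?
    learning_proposition: str = ""  # what gains do you expect?
    theory: str = ""                # what theory predicts the gains?
    metrics: str = ""               # what will you collect as proof?
    partners: str = ""              # who do you need to execute?
    learning_gains: str = ""        # (after) what gains resulted?
    learning_obstacles: str = ""    # (after) where did students get stuck?

    def missing(self) -> list[str]:
        """Return the names of boxes still left blank."""
        return [name for name, value in self.__dict__.items() if not value]

canvas = LearningCanvas(end_user="Grade 8, mixed ability", technology="iPad app")
print(canvas.missing())  # the seven boxes not yet filled in
```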
When we talk about the increased venture capital interest in edtech companies, it is common to look at graphs like this.
But if we zoom out another 10 years into the past, we see that the current investment interest is not unprecedented. What happened in 2000 to sour investors?
The most common answer is the dot-com crash of 1999/2000, but another factor is that, for all the increased investment in edtech and education (at the Federal level in the US), there were no appreciable gains in learning. (NB: whether learning can be gauged clearly by standardized tests is another question. For now, suffice it to say that the metrics the education establishment chose to measure did not improve with increased investment. The ROI was negative.)
Larry Cuban discusses some of the misguided attempts to take education hi-tech in his book Oversold and Underused.
Using data to track educational outputs is fairly new. Until recently, teachers taught according to tradition, not data.
Recently, a new wave of "data-driven" teachers has been giving up the role of content expert and focusing instead on guiding students onto learning pathways that work.
Teachers are being given new tools with which to measure their impact, including A/B tests and control groups lifted straight from medicine and science.
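The control-group idea behind Level 3 can be illustrated with a standard two-proportion z-test: did the group using the product pass at a higher rate than a comparable control group, beyond what chance would explain? A stdlib-only sketch with made-up numbers:

```python
# Sketch of the control-group comparison: a two-proportion z-test on
# pass rates for a product group vs. a control group. Counts are invented.
from math import sqrt, erf

def two_proportion_z(success_a: int, n_a: int,
                     success_b: int, n_b: int) -> float:
    """Z statistic for H0: both groups have the same underlying pass rate."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

def p_value(z: float) -> float:
    """Two-sided p-value from the standard normal CDF."""
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# 120/200 (60%) passed with the product vs. 90/200 (45%) in the control group.
z = two_proportion_z(120, 200, 90, 200)
print(z, p_value(z))
```

A small p-value here supports (but, per the slide's own ladder, does not yet independently verify) the claim that the change is because of the product.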
The SAMR model moves upwards to increased level of technological integration. This is a tool for teachers, NOT for startups.
A tweet from #SXSWedu 2014 from the Pearson Efficacy team
http://efficacy.pearson.com
https://iris.thegiin.org. These are social metrics, not necessarily learning metrics
https://iris.thegiin.org. Many of these metrics are just “vanity metrics” and do not necessarily denote impact.
Where do you go to find out what works?
Much like we find comparables in the valuation/funding space, we can find comparables in the learning space.
John Hattie's list of "what works" ranks influences by effect size, from 0 (no effect) upward; an effect size measures the strength of association with outcomes, not proof of causation.