This document discusses ways to demonstrate the value of learning initiatives. It notes that learning does not inherently have value, but rather gets value when it leads to improved performance and behavior. The document critiques traditional models like Kirkpatrick that focus on formal classroom training rather than informal learning. It suggests assessing the impact of initiatives like informal, social, and continuous learning by considering stakeholders' expectations and how the initiatives help achieve business goals. Tables then discuss examples of initiatives and ways to visualize evidence of their value in a holistic, adaptive manner.
3. “How can we better demonstrate
the value of learning initiatives?”
5. Short term → Long term
This session is also in itself an experiment to drive the impact of a learning activity.
homocompetens.com
6. Individual exercise (short term → long term)
• What's your learning mojo?
• Write down a few learning initiatives you want to demonstrate value for.
7. In my experience way more energy goes into
discussing evaluation than doing it. (Tom Gram)
12. Kirkpatrick critiques
"... then why don't we just do it?"
"... it dates from another age."
What if it is not a formal classroom event?
• Ownership
• Cost
• Complexity
15. ROI critiques
"... how valid is that number really for learning?"
"... who will pay for the calculation?"
Are our CEOs only looking for ROI numbers anyway?
• Intangible
• Indirect
• Time span
• Multiple goals
16. The dominant model in corporate learning measurement is
the 4 level Kirkpatrick evaluation model, or a variation on it.
Half of the model usually gets done because we can, the
upper two levels mostly remain on the to do list. That is
actually the half model that really counts. Besides the
pragmatic issues of time, money and ownership to apply
the whole model, the model itself is a child of a time when
we thought training was learning and learning was a formal
event. Since then learning professionals
have recognized the other 80% of learning
(informal, social). What if learning is also in the flow and
process and connections rather than in a piece of content
or event? What about learning by reflecting on experience
and by doing? What if our aim is to build and model the
learning ecosystem (high impact culture, workscape, ...)
rather than training programs?
17. Table talks
• Hello, my name is...
• Hello, my learning initiative is...
• Hello, this is how we demonstrate its
value at present (and the good and
bad of that)
18. Why do we measure impact? For whom?
Who are our stakeholders? What do they expect?
What are the consequences of not proving impact?
How do you know the most and least impactful training?
19. “Learning doesn't HAVE value. Learning GETS
value. It gets value through performance
and behavior, and always within context.”
(Source: You should say this.)
21. Table talks
• What does this bring to my
learning initiatives?
• Any lightbulbs?
32. How do you assess whether your
informal learning, social
learning, continuous
learning, performance support
initiatives have the desired impact or
achieve the desired results?
43. What is the end game?
What measurements exist that you can leverage?
How do other business services demonstrate value?
Can we better steer the 'inflow' of our trainings?
48. Table talks
• A brave new world...
• Take 1 example per table to work
around
• Report out to the group
• That will be our 'cook book'
49. What is your 'world view' on learning?
How would you visualise the evidence?
Is the new model actionable?
Working backwards, holistic and adaptive.
50. Bad news!
You haven't learned anything from this session yet.
Editor's notes
Kirkpatrick is the creator of the Kirkpatrick Four Levels™, the world-wide standard for evaluating the effectiveness of training programs. He created the model in 1954 as the subject of his Ph.D. dissertation at the University of Wisconsin. The elegantly simple model has withstood the test of time with over 50 years of application.
My 2 cents: The model itself goes from the training intervention through performance into outcomes. The outcomes are split between individual and organisational ones. It's a pattern that we'll see again. There is a lot we could potentially measure, more than the ol' Kirky suggests. But should we measure all of it? What is the criterion for deciding whether or not to put our time and money into a specific measurement?
My 2 cents: I find this a good example of where you evaluate a total program -instead of a single intervention- working towards what really matters. Of course, for sales training that is an easy one: it is all about sales figures, and those are already carefully tracked and fully quantified. It's not always as clear cut with the business outcomes of other training programs. Obviously, this entire approach is embedded in the world view of cause and effect. The causal chain is however mapped out for a specific program's audience and targets. My thought is how stable these causal chains are in the agile and unpredictable network age. Can you realistically always map it out and assume it will keep holding? Does it need frequent updating or continuous validation? The stages of the chain go from individual up to organisational performance. Does it really make sense to take individual performance into account these days? I know it is a deeply embedded world view in the western part of the world to track individual performance, if only for the performance review and bonus. But when you really think about it, close to nothing gets accomplished by an individual anymore. Why would a corporation care about anything but the performance of the project team? You can always go down from the team performance metrics and point out the low or high performers within the group for bonus reasons, but the focal point for measurement should move up to the team/organisational level. That is what counts, and it is NOT the sum of the individual contributions.
I like the statistical approach with the 'control group' that did not get the training. This really points out exactly what the value of the program is. But of course a corporation is not a lab environment where you can just administer placebo training and real training to make the statistical validation, while controlling all other variables as the scientific approach prescribes. Compliance training for example (let's not go into the discussion of whether those have actual learning goals) is not something you give to some and not to others. But in the cases where it is possible and makes sense, such a comparison between those who did vs those who didn't is a strong piece of evidence.
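The 'did vs didn't' comparison described above can be sketched with a simple permutation test in plain Python, no stats library needed. All names and numbers here are hypothetical illustrations, not data from any real program:

```python
import random
import statistics

def permutation_test(treated, control, n_iter=5000, seed=42):
    """Estimate how likely the observed mean difference between the
    trained and untrained groups would be under pure chance (two-sided)."""
    rng = random.Random(seed)
    observed = statistics.mean(treated) - statistics.mean(control)
    pooled = list(treated) + list(control)
    k = len(treated)
    extreme = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)  # random relabeling of who was 'trained'
        diff = statistics.mean(pooled[:k]) - statistics.mean(pooled[k:])
        if abs(diff) >= abs(observed):
            extreme += 1
    return observed, extreme / n_iter

# Hypothetical quarterly sales (k$) for reps who took the program vs not.
trained   = [112, 98, 123, 105, 118, 130, 101, 115]
untrained = [ 95, 88, 102,  91, 110,  97,  85,  99]

lift, p = permutation_test(trained, untrained)
print(f"observed lift: {lift:.1f} k$, p ~ {p:.3f}")
```

A low p suggests the difference is unlikely to be chance, but as the note says, this is only as strong as your ability to control the other variables.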
"The thing that I thought was interesting was how the maturity of your measurement strategy is basically a function of how much your learning organization has moved towards performance consulting. How can you measure business impact if your planning and gap analysis isn't close to the business?"
My 2 cents: I'm going to repeat a thought of Hans here when he saw this framework and how a popular metrics solution (Metrics That Matter) implements it: "The thing that I thought was interesting was how the maturity of your measurement strategy is basically a function of how much your learning organization has moved towards performance consulting. How can you measure business impact if your planning and gap analysis isn't close to the business?" Speaking of Metrics That Matter, that system and most others rely on surveys. Surveys tell us what people think, or what they want to say the answer is. But if you can look at hard data instead of opinions, isn't that a better way? What I like in Bersin's approach is how they work with a holistic 'culture' and then break that down into proven differentiating elements and practices. The previous models all built up from the interventions towards organisational benefits, but this approach starts with the performance ecosystem as I called it, or workscapes as the Internet Time Alliance calls it. It's not about the training programs, it is about getting the ecosystem right and making sure learning will happen when, where and by whom needed, to get the work done. If you use research similar to Bersin's, you are going for common differentiators. Will that let you stand out from the crowd? How do you redo such a study for your own context, or is it not that different? My same thought as on the previous model stands on the overemphasis on individual performance, as the only performance that counts is that of the team/organisation.
Your 2 cents:
From the report and the book Simply Complexity: A Clear Guide to Complexity Theory by Neil Johnson, the outlines of a complex adaptive system are:
• The system contains a collection of many interacting objects or 'agents'.
• The behavior of these objects is affected by memory or 'feedback'.
• The objects can adapt their strategies according to their history.
• The system is typically 'open'.
• The system appears to be 'alive'.
• The system exhibits emergent phenomena that are generally surprising and may be extreme.
• The emergent phenomena typically arise in the absence of any sort of 'invisible hand' or central controller.
• The system shows a complicated mix of ordered and disordered behavior.
My 2 cents: I like the suggestion to look upon learning as a complex adaptive system rather than a cause/effect chain. But as long as we don't have practical ways to track impact in such a paradigm, this thought will remain in the category of 'seems right'. I do need to brush up my statistical skills, for the future of learning measurement in ecosystems or complex adaptive systems will be much more about isolating KPIs and finding statistical correlations between competence building activities and business results than about calculating the average of a satisfaction survey. The impact we expect, the questions we ask and the metrics we use to answer those will indeed depend on a 'world view' on learning. Event? Process? Flow? Ecosystem (e.g. complex adaptive system)? Network? The good thing is that those world views will have more impact on the operational side, but in a business the ultimate metric is very clear (hint: you can buy stuff with it).
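The note's point about 'finding statistical correlations between competence building activities and business results' can be illustrated with a plain Pearson correlation. A minimal sketch with hypothetical numbers, not a claim about any real program:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical monthly data: hours of competence-building activity per team
# vs. that team's business result (e.g. closed deals).
activity_hours = [5, 12, 8, 20, 15, 3, 18, 10]
closed_deals   = [4,  9, 7, 14, 12, 3, 13,  8]

r = pearson(activity_hours, closed_deals)
print(f"correlation r = {r:.2f}")
```

Correlation is of course not causation, which is exactly why the control-group style evidence discussed earlier still matters.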
My 2 cents: Learning and development that is not transferred into a valuable afterlife is fun and interesting at best, but without business value. The training afterlife is not so much about the transfer into your competencies, but the ultimate transfer into valuable performance or valuable behavior. You read it right: not just any performance or behavior will do, but the kind people appreciate with money. I like the dashboard approach, and how that shows action. I also like the reminder approach, as development is indeed a continuous process. I never used this particular system, but I can only hope that I can set these reminders myself. Otherwise, it's just employer spamming... I want to have a say in how reminders are set, as it is my learning. The model still focuses heavily on content and formal approaches.
My 2 cents: A business intelligence suite is not only about learning but integrates the different talent management areas (selection, performance, workforce, rewards, succession, etc.) and other business areas (if you buy the other packages in the suite). Indeed, looking at learning on its own is so last century... I like the approach of first asking the questions, and then looking at the metrics to answer those. It makes the metrics matter and action oriented. When I look at the questions and metrics typically in the 'learning' part of these suites, I get the feeling they are all operational metrics, and don't stretch enough into real business impact.
Step 1. Identify targeted business goals and impact expectations.
Step 2. Survey a large representative sample of all participants in a program to identify high impact and low impact cases.
Step 3. Analyze the survey data to identify a small group of successful participants and a small group of unsuccessful participants.
Step 4. Conduct in-depth interviews with the two selected groups to document the nature and business value of their application of learning, and to identify the performance factors that supported learning application and the obstacles that prevented it.
Step 5. Document and disseminate the story, report impact, applaud successes, and use the data to educate managers and the organization.
My 2 cents: I like the simple and fast nature of the method, as we know that simple to apply models have more chance of getting done. In the end, performance or 'getting the job done' applies to measurement and evaluation as well. It reminds me of the charts of the IBM Global Sales School above: did a particular something (like a learning program, or membership in a community, or the fact you blog, or anything - this method is not limited to formal events or even to learning alone) add to performance or not? You indeed do not need to track the entire population for that. Small samples of success vs failure can do that job. This model also goes for qualitative evidence gathering rather than hard figures. Any type of evidence counts, as long as it is evidence that relates to performance. It does not have to be an ROI figure.
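Step 3 of the method above - splitting survey respondents into a small success group and a small non-success group for interviews - is simple to sketch. All names and scores here are hypothetical:

```python
def select_success_cases(survey, k=2):
    """From survey scores (participant -> self-reported impact, 1-5),
    pick the k highest and k lowest scorers as interview candidates."""
    ranked = sorted(survey.items(), key=lambda kv: kv[1], reverse=True)
    high = [name for name, _ in ranked[:k]]   # likely success cases
    low  = [name for name, _ in ranked[-k:]]  # likely non-success cases
    return high, low

# Hypothetical survey responses from a program's participants.
survey = {"Ann": 5, "Ben": 2, "Cho": 4, "Dee": 1, "Eli": 3, "Fay": 5}

high_impact, low_impact = select_success_cases(survey)
print("interview (success):", high_impact)
print("interview (non-success):", low_impact)
```

The interviews, not the scores, produce the actual evidence; the sampling just keeps the effort small.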
"For me, the whole point of learning & development initiatives is to support performance - so regardless of the mode or approach taken to learn something, the assessment of its impact ultimately rests on the performance stats (whatever they may be)"
Kasper Spiro suggests 'output learning'. To repeat his words: "In a nutshell it works like this. You encounter a problem in the workspace, then you set your learning objectives (that lead to tackling the negative effect of the problem), determine the requirements that set the boundaries for that solution, and then the worker/learner is free to solve his problem any way he wants, as long as he stays within the boundaries set by the requirements." He also lists the following techniques for 'how' to do it: logs, surveys, web analytics, user generated input.
My 2 cents: The blogosphere clearly goes for performance and behavior as the ultimate goal, and maybe the only one to worry about. That is the same for formal, informal and besides-learning.
My 2 cents: I like Clark's focus on action and embedding that up front. What else do we measure for if not to act upon it? Building on that thought, you'd need a SWAT team that gets called into action when an alarm bell rings. (Maybe using Issue Based Consulting techniques.) In a fast, flat, small, spiky and blurry world, any impact framework needs a built-in continuous tuning process to see if it is still aligned to what matters, if the alert levels have changed, if better indicators have emerged, etc.
My 2 cents: This kind of analysis tackles the fuzziness of 'relationships' within a business process and as such adds insights that can be used for action. I think this or a similar method will be a valuable tool in a social business. As what really matters is team performance rather than individual performance, we need insights into the collaboration and essential social interactions. Tools like Social Network Analysis and Value Network Analysis complete the process picture. Inge has listed for example a few learning applications of SNA, and I can see learning roles (including mentors or peers) in the value network map as well. As such, you get a holistic view of where learning roles play in the bigger picture and what deliverables they pass or receive. This model also recognises the many intangibles. SNA and VNA maps are strong visuals. A picture says a few thousand words. It does not spit out numbers or thresholds.
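Although the note stresses that SNA maps are visuals rather than numbers, the bookkeeping behind them is simple. A minimal degree-centrality sketch over a hypothetical 'who asks whom for help' edge list (dedicated SNA libraries offer much richer measures):

```python
from collections import defaultdict

def degree_centrality(edges):
    """Degree centrality from an undirected edge list: for each person,
    the fraction of the other people they interact with directly."""
    neighbours = defaultdict(set)
    for a, b in edges:
        neighbours[a].add(b)
        neighbours[b].add(a)
    n = len(neighbours)
    return {p: len(nb) / (n - 1) for p, nb in neighbours.items()}

# Hypothetical help-seeking interactions on a team.
edges = [("Ann", "Ben"), ("Ann", "Cho"), ("Ann", "Dee"), ("Ben", "Cho")]

for person, score in sorted(degree_centrality(edges).items()):
    print(person, round(score, 2))
```

Here 'Ann' would show up as the hub - e.g. a mentor or go-to peer - which is exactly the kind of learning role the note suggests looking for in the map.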
How likely are you to recommend this to a friend?
Why do we need to assess?
Learning Analytics: from eyeballs to ...
Also consider anecdotes... via Twitter etc. How do you capture that?
Fail fast, fail cheap, iterate, multiple forms, etc.