Assistant Professor (Human Resource) & Academic Coordinator
21 Aug 2014 • 26,080 views
Evaluation of Training Program
Evaluation is a planned process which provides specific information about a selected session or program for the purpose of determining value or supporting decision making.
2. Introduction
Training requires time, energy and money. Therefore an organization needs to know whether the investment made in training is being used effectively and is worth the effort. Top management is concerned with evaluation as a process by which the effectiveness of the organization's programs and operating procedures is demonstrated. Supervisors are more concerned with the "results" of training, as measured by changes in workers' on-the-job performance.
3. Evaluation is a planned process which provides specific information about a selected session or program for the purpose of determining value or supporting decision making. Applied to training, evaluation is concerned with providing information on the effectiveness of the training activity to decision-makers who will make decisions based on that information. Various models have been developed to describe the role of evaluation in the training process. It is important that evaluation be a planned, systematic effort built in from the start of the training process.
4. "The reason for evaluating is to determine the effectiveness of a training program." (Kirkpatrick, 1994)
5. 1. "To justify the existence of the training department by showing how it contributes to the organization's objectives and goals."
6. 2. “To decide whether to continue
or discontinue training
programs.”
7. 3. “To gain information on how to
improve future training programs.”
(Kirkpatrick, 1994)
8. Ascertaining Reactions Of the
Participants
As the programme ends, it is important to ascertain the participants' views and observations about its various components, both in terms of the contents and the training process. The evaluation can either focus on a session-by-session review or on the programme as a whole. The format that the trainers adopt for eliciting the participants' end-of-programme appraisal ranges from the relatively simple "highlight the best and worst aspects" to a detailed response through a properly devised questionnaire or instrument.
9. Areas for Ascertaining Reactions of the
Participants
An important decision that the training team takes
prior to the programme is about the specific
elements it would like to include in the evaluation
exercise.
Objectives
•How far the programme objectives are realistic
•Clarity
•Individual understanding and response
•How far the training objectives are realized
10. Trainers' Performance
•Effectiveness of the presentations
•Skill of the trainers in using the training methods
•Trainers' expertise in responding to the questions of the participants
•Whether the trainers' attitude was supportive and helpful
•Their relationship with the training group
Training Methods
•Their appropriateness to the programme and the modules
•The extent to which they facilitated learning.
11. Training Group
•Its size and composition
•Selection procedure
•Level of the group and quality of its participation.
Time Schedule
•Duration of the programme
•Allocation of time to each module
•Whether the daily schedule was hectic
•Sequencing of the topics
•General flow and momentum of the programme
12. Training Facilities
•Suitability of the seating arrangement
•General ambience
•Appropriateness and quality of the equipment used
•General learning environment at the venue.
Physical Arrangements
•Appropriateness of the accommodation
•Arrangement for food etc.
Training Support Materials
•Usefulness and quality of the materials
•Their timely distribution.
13. Sources for Ascertaining Participants’
Reactions
•Structured Questionnaire
•Tests
•Open Forum
•Personal interviews
•Program committee meetings for ongoing
evaluation of the contents and process of the
programme.
•Observations of the trainers
•Comments of the participants
14. Purposes and Uses of Evaluation
The definition of evaluation implies two purposes:
1.Making decisions about improvements to
be made in the training program itself.
2.Making decisions about the value of the
training program in terms of whether to
continue to conduct the program.
15. Specifically, evaluation can be used:
a) To determine whether the training program is accomplishing its assigned objectives, and whether they were the "right" objectives.
b) To identify the strengths and weaknesses of training activities.
c) To determine the cost/benefit ratio of the training program.
d) To establish a database which organization leaders can use to demonstrate the productivity and efficiency of their operational procedures.
e) To establish a database which can assist organization managers in making decisions.
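Point (c), the cost/benefit ratio, is simple enough to express directly. A minimal sketch in Python, with hypothetical cost and benefit figures:

```python
def cost_benefit_ratio(total_benefits, total_costs):
    """Ratio of measured training benefits to training costs.

    A ratio above 1.0 means the program returned more value than it cost."""
    if total_costs <= 0:
        raise ValueError("total_costs must be positive")
    return total_benefits / total_costs

# Hypothetical figures: $48,000 in measured benefits against $30,000 in costs.
print(f"Cost/benefit ratio: {cost_benefit_ratio(48_000, 30_000):.2f}")  # 1.60
```

In practice the hard part is monetizing the benefits (reduced turnover, fewer grievances, output gains), not the arithmetic itself.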
16. Evaluation of Training
After the training program has been completed and
the current training record filled in, the performance
of trainees can be evaluated and the attention of
the training staff drawn towards specific action for
improvement. Thus evaluation of training should
help to:
•Isolate areas of difficulty and suggest strategies
for overcoming them.
•Modify unrealistic training targets in the light of
actual group performance
17. • Determine whether motivation of trainees is of
the required level e.g. financial incentives for
greater output during training.
• Compare initial selection to actual performance
rating
• Highlight causes of absenteeism and labour
turnover during training
• Evaluate and modify instruction for future
training programme
• Formalized management training programmes have a vital role to play in the development of management skills.
19. “What is quality training?”
“How do you measure it?”
“How do you improve it?”
20. Evaluation can be a four-pronged attempt:
1. Evaluation of reaction
2. Evaluation of learning
3. Evaluation of behavior
4. Evaluation of results
21. Training Expectation Measurement
Trainee feedback scores on their initial impression of the extent to which training met their expectations of learning, skills and knowledge.
Tool: Survey Form
Training Effectiveness Measurement
Post-training normalized feedback scores and their quarterly trends. Feedback from the trainee's manager on visible incremental changes in trainee skills, service parameters and on-job behavior.
Tool: Quarterly Follow-on Survey from Managers
Jayadeva de Silva
[Figure: four-tier trainee feedback model]
Tier-1 Expectations: student feedback
Tier-2 Improvement: pre-training vs post-training assessment
Tier-3 Effectiveness: post-training on-job behavior survey
Tier-4 Impact: improvement in business indicators, revenue figures or success parameters
Training Improvement Measurement
Assessment of the trainee's training exposure and expertise gained on the same set of tasks before and after training. Includes comparison of in-training and post-training test performance, if applicable.
Tool: Pre-training and Post-training Survey Forms; post-training performance tests.
Training Impact Measurement
Impact of training on improving revenues, enhancing business or other success factors driving the training needs.
Tool: Comparison of baseline data and quarterly business data; Impact Factor calculation sheets
22. Measuring Training Effectiveness and
Impact
I.Prior to training
•The number of people that say they need it during the
needs assessment process.
•The number of people that sign up for it.
II. At the end of training
•The number of people that attend the session.
•The number of people that paid and attended.
•Customer satisfaction at the end of training
•A measurable change in knowledge
•Ability to solve a “mock” problem at the end of training
• Willingness to try to use the skill at the end of training.
23. III. Delayed Impact (non-job)
•Customer satisfaction at X weeks after the end of
training
•Customer satisfaction at X weeks after the training
when customers know the actual costs of training
•Retention of knowledge at X weeks after the end of
training.
•Ability to solve a “mock” problem at X weeks after the
end of training.
•Willingness to try the skill at X weeks after the training.
•The three systems followed are: 1. 360-degree; 2. Performance system; 3. Janus Performance Management System
24. IV. On the job behavior change
•Trained individuals that self-report that they
changed their behavior , used the skill on the
job after the training.
•Trained individuals whose managers report that they changed their behavior or used the skill after training.
•Trained individuals that actually are
observed to change their behavior or use the
skill after the training.
25. V. On the job performance change
•Trained individuals who self-report that their actual job performance changed as a result of their changed behaviour/skill.
•Trained individuals whose managers report that their actual job performance changed as a result of their changed behaviour/skill.
•Trained individuals whose managers report that their job performance changed, either through improved performance appraisal scores or specific notations about the training on the performance appraisal forms.
26. • Trained individuals that have observable /
measurable improvement in their actual job
performance as a result of their changed
behaviour.
• The performance of employees that are
managed by individuals that went through the
training.
• Departmental performance in departments with X% of employees that went through training; ROI (return on training dollar spent).
27. (a) Other measures
•CEO / top management knowledge of/ approval of/
or satisfaction with the training program.
•Rank of training seminar in forced ranking by
managers of what factors contributed most to
productivity/ profitability improvement.
•Number of referrals to the training by those who
have previously attended the training
•Additional number of people who were trained by
those who have previously attended the training.
•Popularity of the program compared to others.
28. (b) What to evaluate
One way to approach the issue of what to evaluate
is to identify what kind of information is needed.
Dr. Kirkpatrick developed four levels of evaluation:
1. Reaction
2. Learning
3. Behaviour
4. Results
29. Feedback score collected from each trainee at the end of the class.
Record the general impression about the training on a numerical scale of 1-5 or similar.
The average score reflects how closely training met the expectations of the field and how closely it is aligned with the business needs identified earlier which drive the training.
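The averaging step described above can be sketched in a few lines of Python; the sample scores and the 1-5 scale are illustrative assumptions:

```python
def average_reaction(scores, scale_max=5):
    """Mean of per-trainee feedback scores plus the percentage of the scale achieved."""
    if not scores:
        raise ValueError("no scores collected")
    if any(not 1 <= s <= scale_max for s in scores):
        raise ValueError("score outside the rating scale")
    mean = sum(scores) / len(scores)
    return mean, mean / scale_max * 100

# Five hypothetical end-of-class scores on a 1-5 scale.
mean, pct = average_reaction([4, 5, 3, 4, 4])
print(f"Average reaction: {mean:.1f}/5 ({pct:.0f}% of scale)")  # 4.0/5 (80% of scale)
```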
30. Reaction may best be defined as "how well the trainees liked a particular training program."
Reactions are typically measured at the end
of training.
31. 1. Design a questionnaire based on information
obtained during the need assessment phase.
2. Design the instrument so that the response can
be tabulated and quantified.
3. To obtain more honest opinions provide for the
anonymity of the participants.
4. Provide space for opinions about items that are
not covered in the questionnaire.
5. Pretest the questionnaire on a sample of participants to determine its completeness.
33. FEEDBACK FORM
Please take a few minutes to fill out this feedback form. Your feedback will help us strengthen the course delivery.
Course Title -
Name of the facilitator -
You may provide feedback on the program on the following criteria by putting a (✓) mark in the space provided.
4 – Exceeds Expectation; 3 – Meets Expectation; 2 – Needs Improvement; 1 – Unsatisfactory
4  3  2  1
Subject Knowledge
Presentation Style
Communication
Examples, Cases, Simulation & Exercises
Relevance
Supporting Materials
Overall Evaluation
1. What did you like most in the workshop?
2. What did you dislike in the workshop?
3. What other changes would you suggest in case the same course is conducted in future?
Learning & Development
Human Resources
34. Typically 'happy sheets'
Feedback forms based on subjective personal
reaction to the training experience
Verbal reaction which can be noted and analyzed
Post-training surveys or questionnaires
Subsequent verbal or written reports given by
delegates to managers back at their jobs
35. Collect pre-training and post-training data on the trainee's skills before and after the training. A well-drafted skill-based survey can be used.
Compare improvements in exposure or expertise gained by the trainee by matching the post-training survey against the pre-training survey.
Conduct in-training skill tests as part of the qualification criteria to measure the improvement in the learned skills.
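A minimal sketch of the pre-/post-survey comparison, assuming each skill is rated on the same numeric scale in both surveys (the skill names and ratings below are hypothetical):

```python
def skill_gains(pre, post):
    """Per-skill improvement between pre-training and post-training survey scores."""
    return {skill: post[skill] - pre[skill] for skill in pre}

pre_survey  = {"communication": 2, "negotiation": 3, "reporting": 4}
post_survey = {"communication": 4, "negotiation": 4, "reporting": 4}
print(skill_gains(pre_survey, post_survey))
# {'communication': 2, 'negotiation': 1, 'reporting': 0}
```

A gain of zero flags skills the programme did not move, which is exactly the "area of difficulty" signal the deck mentions earlier.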
36. 2. Learning
The trainer is concerned with measuring the learning of principles, facts, techniques and attitudes that were specified as training objectives. There are many different measures of learning performance, including paper-and-pencil tests, learning curves, etc. The objectives determine the choice of the most appropriate measure. There are several guideposts used in establishing a procedure for measuring the amount of learning that takes place.
37. 1. The learning of each participant should be measured so that quantitative results can be determined.
2. A before-and-after approach should be used so that learning can be related to the program.
3. Learning should be measured on an objective basis.
4. A group not receiving training should be compared with the group receiving training.
5. Where possible, the evaluation results should be analyzed statistically.
Where principles and facts are taught rather than skills, it is more difficult to evaluate learning.
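Guideposts 2, 4 and 5 can be combined into a rough calculation: the trained group's mean score gain net of the control group's gain. The test scores below are invented, and a real analysis would also apply a significance test:

```python
def mean(xs):
    return sum(xs) / len(xs)

def net_gain(trained_pre, trained_post, control_pre, control_post):
    """Mean learning gain of the trained group minus the control group's drift."""
    return (mean(trained_post) - mean(trained_pre)) - (mean(control_post) - mean(control_pre))

# Hypothetical before-and-after test scores for both groups.
trained_pre, trained_post = [52, 60, 55, 58], [70, 75, 68, 74]
control_pre, control_post = [54, 57, 53, 59], [56, 58, 54, 60]
gain = net_gain(trained_pre, trained_post, control_pre, control_post)
print(f"Net gain attributable to training: {gain:.2f} points")  # 14.25 points
```

Subtracting the control group's change guards against attributing general drift (practice effects, seasonal factors) to the training itself.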
38. "What principles, facts, and techniques were understood and absorbed by the participants?"
What the trainees know or can do can
be measured during and at the end of
training
41. WHAT?
What knowledge was acquired?
What skills were developed or enhanced?
What attitudes were changed?
HOW?
Tests before and after the training
Interview or observation can be used before and after
training.
Measurement and analysis is possible and easy on a group
scale
Reliable, clear scoring and measurements need to be
established
42. Collect quarterly feedback from the trainee's manager on observed improvement in the trainee's on-job performance, behavior or skills after attending training.
Analyze the trends in key on-job performance parameters or indicators, as seen in the survey, with respect to previous quarters.
Convert the delta into normalized scores to indicate the value created by training in the work efficiency of the trainee.
Ensure continual measurement of effectiveness at regular intervals to assess the long-term value of the training.
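The delta-to-normalized-score step might be sketched as follows; the parameter names, value ranges and rounding are illustrative assumptions, not part of the original material:

```python
def normalized_scores(previous_q, current_q, ranges):
    """Convert raw quarter-on-quarter deltas into comparable normalized scores.

    `ranges` maps each parameter to its (min, max) possible value, so deltas
    from differently-scaled parameters land on a common 0-centred scale."""
    scores = {}
    for param, prev in previous_q.items():
        lo, hi = ranges[param]
        scores[param] = round((current_q[param] - prev) / (hi - lo), 4)
    return scores

prev_q = {"calls_resolved": 60, "quality_audit": 3.2}
curr_q = {"calls_resolved": 72, "quality_audit": 3.8}
ranges = {"calls_resolved": (0, 100), "quality_audit": (1, 5)}
print(normalized_scores(prev_q, curr_q, ranges))
# {'calls_resolved': 0.12, 'quality_audit': 0.15}
```

Dividing by each parameter's range is what lets a throughput metric and a 1-5 audit score be trended on the same chart.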
43. 3. Behaviour
The term is used in reference to the measurement
of job performance. There are several guideposts
in evaluating training programs in terms of
behaviour changes.
1. A systematic appraisal should be made of on-the-job performance on a before-and-after basis.
2. The appraisal of performance should be made by one or more of the following groups:
a. The person receiving the training. b. The person's supervisor. c. The person's subordinates. d. The person's peers.
44. 3.A statistical analysis should be made to
compare the performance before and after and
to relate changes to the training program.
4.The post training appraisal should be made
three months or more after the training so that
trainees have an opportunity to put into practice
what they have learnt.
5.A group not receiving the training should be
used for comparison.
45. Changes in on-the-job behavior: behavior changes are acquired in training and then transfer (or don't transfer) to the workplace.
What skills did the learner develop, that is, what new information is the learner using on the job?
48. What?
Whether the trainee is able to transfer the learning
to the work environment
New learning is demonstrated
Whether the trainee is motivated
How?
Self-assessment can be useful, using carefully designed criteria and measurements.
Cooperation and skill of observers, typically line managers, are important factors, and difficult to control.
Use of focus groups
49. Measure quarterly the business indicators of the trainee's job, or service parameters, based on the nature of the job. Business indicators could be collected based on the individual job or the group responsible for the said function.
Ideally, business indicator data from before the training should be used as the baseline.
Record the business parameters or governing service parameters on a quarterly basis.
Calculate a qualitative or quantitative impact factor based on the normalized delta.
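One plausible way to compute such an impact factor (not a formula given in the original) is the average percentage change of a business indicator against its pre-training baseline:

```python
def impact_factor(baseline, quarterly_values):
    """Average percentage change of an indicator versus its pre-training baseline."""
    if baseline == 0:
        raise ValueError("baseline must be non-zero")
    deltas = [(q - baseline) / baseline for q in quarterly_values]
    return sum(deltas) / len(deltas) * 100

# Hypothetical sales index: baseline before training, then three quarters after.
print(f"Impact factor: {impact_factor(200, [210, 224, 232]):+.1f}%")  # +11.0%
```

Using the pre-training figure as the denominator keeps the factor comparable across indicators of very different absolute sizes.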
50. 4. Results
Evaluations at this level are used to relate the results of the training program to organizational objectives. Some of the results that could be examined are costs, turnover, grievances and morale. Where the objectives of the training program are closely tied to specific organizational objectives, effort should be made to show a link between the training and the changes called for in the organizational objective.
51. Reduction of costs;
Reduction of turnover and absenteeism;
Reduction of grievances;
Increase in quality and quantity of production;
Improved morale which, it is hoped, will lead to some of the previously stated results.
These factors are also measurable in the workplace.
54. “The Four Levels represent a
sequence of ways to evaluate
(training) programs….As you
move from one level to the next,
the process becomes more
difficult and time-consuming,
but it also provides more
valuable information.”
(Kirkpatrick, 1994)
55. Realize an opportunity to use the
behavioral changes.
Make the decision to use the
behavioral changes.
Decide whether or not to continue
using the behavioral changes.
56. Desire to change
Knowledge of what to do and how
to do it
Work in the right climate
Reward for (positive) change
57. The Process of Evaluation: Measuring Change
One way of visualizing a process of evaluation is to set the four levels of evaluation against the backdrop of a training program. Evaluation should start before the training program is designed. The next step is to plan the evaluation: How is the data to be gathered? Who will perform the evaluation? What techniques will be used? When will the evaluation be conducted? The evaluation design needs to be based on the objectives of the program and the criteria to be measured. Once the plan is completed and the training program begins, the data collection phase is initiated.
58. Validity and Reliability
Validity is the degree to which an evaluation technique or instrument measures what it was intended to measure. It is a measure of accuracy.
Reliability is the degree to which evaluation techniques and instruments measure a given characteristic consistently. In order to get a long-term view of a training program's effectiveness, an instrument must measure reliably for each session of the training program.
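For a test-retest view of reliability, scores from two administrations of the same instrument can be correlated; a coefficient near 1.0 suggests consistent measurement. This is a standard Pearson correlation, shown here with hypothetical session scores:

```python
def pearson_r(x, y):
    """Pearson correlation between two sets of paired scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Two administrations of the same instrument to the same group (hypothetical).
session_1 = [3.8, 4.2, 3.5, 4.6, 4.0]
session_2 = [3.9, 4.1, 3.6, 4.5, 4.1]
print(f"Test-retest reliability: {pearson_r(session_1, session_2):.2f}")
```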
59. Types of Evaluation Techniques and
Instruments
The process of Training Evaluation
A training programme, like the design, development and manufacture of a product, passes through several stages.
1. Pre-Training Evaluation
This is carried out prior to the course and should cover an analysis of the expectations of the trainee and his superior officer. The existing level of knowledge and skills is assessed. It is carried out through discussions, workshops, etc.
60. 2. Input and Delivery Evaluation
This is done concurrently with the training; for a long-term management programme it may be done individually or in groups. Each topic or module is evaluated in terms of its content, presentation, relevance and applicability. Using a questionnaire soon after the course, invariably on the last day, the participants' reaction is obtained, which generally elicits information about the course inputs and impressions about the course.
61. 3.Post Training Evaluation
The objective of training is to enhance individual effectiveness, which helps in improving organizational performance. Thus the process of post-evaluation must have reference to a job improvement plan. This cannot be done while the person is in training; he needs to go back to his job and demonstrate his newly acquired knowledge and skills. After a lapse of 6-12 months, the results obtained are used for evaluating the applicability of training.
62. Transfer of Learning
Individual Gains
To evaluate the effect of training on the individual after 6-12 months, the trained persons stated the significant areas where individual development had taken place. They were: 1. State-of-the-art knowledge of the subject; 2. Development of analytical skills; 3. Refined interpersonal skills; 4. Innovation/creativity; 5. Management of stress.
63. Transfer of learning to the organization
To evaluate whether in-house training has resulted in enhanced organizational performance, trainees were requested to indicate the resultant benefits to public enterprises. The non-tangible benefits are: 1. Better management practices; 2. Improved problem solving; 3. Job improvement plans; 4. Better systems.
There is general evidence that systematic in-house training has contributed to individual growth in public enterprises (PEs), while the same level of organizational improvement has not taken place because of organizational climate, PE systems and lack of motivation in PEs.
64. Alternate approaches in post-course evaluation
New approaches to post-course evaluation are being explored. One such recent development is monitoring the progress of the participant through post-course project work. In this approach, participants, at the time of nomination to the course and in consultation with their departmental heads, identify a problem of practical value to the organization. During the programme, participants discuss, reformulate and identify approaches to its solution in the light of the knowledge and skill inputs in the training sessions. In this way they relate the learning to work situations in an innovative manner.
65. Evaluation of in-house training institutes
In order to make training a continuous activity addressing the specific training needs of the organization, public enterprises have established a large number of in-house training institutes/centres.
There are five factors considered important for evaluating an institute, with a method of assigning a point rating to each.
a) Faculty Resources: The number of qualified faculty members on a full-time basis at the centre and their years of experience in training are considered.
66. b) Infrastructure: This covers all infrastructure facilities other than the library, such as classrooms, syndicate rooms, hostel, recreation facilities, etc.
c) Library: This is the "brain" of any training institution/centre. Relevant books, journals, cases, audio, video, etc.
d) In-house Research/Consultancy: The in-house centres suffer from under-exposure to research, development and consultancy work. Such research/consultancy may be either internal or external.
e) Financial Support: The annual budget allocation and unit cost per training are items under this factor.
67. Return on Impact (RoI) is a new approach which measures the difference training has created on those governing business, service or revenue parameters which drive training needs.
68. increased output
reduced absenteeism and
tardiness
reduced cost of new hires
reduced turnover
increased number of employee
suggestions
climate survey data (morale and
attitudes)
69. sales volume
average sale size
add-on sales
close-to-call ratio
ratio of new accounts to
old accounts
number of items per order
70. accuracy of orders
size of orders
number of transactions per day
adherence to credit procedures
number of lost customers
amount of repeat business
number of referrals
number of complaints
Editor's Notes
The end results after an evaluation are hopefully positive results for both upper management and the program coordinators.
2. Pilot courses may be implemented to see if the participants have the necessary knowledge, or skills, or behavioral changes to make the program work.
3. Kirkpatrick uses eight factors on how to improve the effectiveness of a training program. These eight factors closely follow the Ten Factors of Developing a Training Program. This is a feedback statement spinning off of the Ten Factors.
These are questions asked by HRD coordinators on training performance and the beginning criteria and the expectations of the resulting training program.
Business training operations need quantitative measures as well as qualitative measures. A happy medium between these two criteria is an ideal position to fully understand the training needs and to fulfill its development.
Quantitative - the research methodology where the investigator's “values, interpretations, feelings, and musings have no place in the positivist’s view of the scientific inquiry.” (Borg and Gall, 1989)
cont.
With Reaction and Learning, evaluation should be immediate. But evaluating change in Behavior involves some decision-making.
All of these levels are important. However, in later examples of this model, you shall see where large corporations have taken the Kirkpatrick Model and used all of it, only part of it, and still some reversed the order of the levels.
The employee may -
Like the new behavior and continue using it.
Not like the new behavior and return to doing things the “old way”.
Like the change, but be restrained by outside forces that prevent his continuing to use it.
The employee must want to make the change.
The training must provide the what and the how.
The employee must return to a work environment that allows and/or encourages the change.
There should be rewards -
Intrinsic - inner feelings of pride and achievement.
Extrinsic - such as pay increases or praise.