2. What is evaluation?
Evaluation describes how to assess the nature,
impact and value of an activity through the
systematic collection, analysis and interpretation
of information with a view to making an informed
decision.
Evaluation involves three activities:
• Outlining clear purposes
• Gathering evidence
• Making judgments
Evaluation is part of development rather than
apart from it.
3. CURRICULUM EVALUATION
Intra-curricular evaluation
Teacher evaluation of students
Student evaluation of teachers
Materials evaluation
Verification of methods
Evaluation of tests and examinations
Checking the learning outcomes in the
field
Curriculum review/ improvement/ change/
modification
System revision
4. Curriculum Evaluation
Curriculum evaluation, broadly conceived, is a
stock-taking process.
A curriculum may be structured in many ways. For
instance,
-it may be based on an assembly of courses that
are deemed necessary to meet certain job
requirements;
- it can be formed from the basics of a particular
discipline in a faculty or department;
- it can be designed to meet the needs of a
professional or technical programme,
- or it can be developed based on a systematic
5. APPROACHES TO CURRICULUM
EVALUATION
Goal-based
Determining whether pre-stated goals of
educational or training programs were met.
Goal-free
Uncovering and documenting what outcomes
were occurring in educational or training
programs, without regard to whether they
were the intended program goals.
Responsive (contingent on unforeseen events)
Comparing what was intended for instruction
with what was actually observed.
6. These approaches are based on the classical
curriculum evaluation models as presented by
Stufflebeam and Shinkfield (1990):
The decision-making approach
Collecting information about educational or
training programs for the purpose of
decision-making.
The accreditation approach
Forming professional judgments about the
processes used within education or training
programs.
7. World views about Evaluation
Melrose (1996) grouped existing
models into three paradigms, or world views,
about evaluation.
These are:
i. The functional paradigm;
ii. The transactional paradigm; and
iii. The critical paradigm of evaluation.
Why we need a model for curriculum evaluation:
To provide a conceptual framework for
designing a particular evaluation,
depending on the specific purpose of the
evaluation.
8. Models of Curriculum
Evaluation
Concept of a model
Theory: explains a process (Why?)
Model: describes a process (How?)
A model is a representation of reality presented
with a degree of structure and order.
10. Types of Models
Conceptual models describe what is meant by
the concept.
Procedural models describe how to perform a
task.
Mathematical models describe the relationship
between the various elements of a situation.
11. Models of Curriculum
Evaluation
1. Alade’s Six Models
2. Olaitan’s Four
Models
3. Tyler’s Model
4. CIPP Model
5. Stake’s Model
6. Roger’s Model
7. Scriven’s Model
8. Stenhouse’s
Research Model
9. Stufflebeam CIPP
Model
10. Kirkpatrick's Model
12. 1. ALADE’S SIX MODELS
From another perspective, Lawton (1980)
cited in Alade (2006) classified models of
curriculum evaluation into six, viz:
1. The Classical Model,
2. Research and Development Model,
3. Illumination Model,
4. Briefing Decision-Makers Model,
5. Teacher as Researcher (Professional)
Model,
13. 2. OLAITAN'S FOUR MODELS
In respect of vocational-technical education
evaluation, Olaitan (1996) identified the
following evaluation models, which have been
employed by a good number of researchers.
They include:
• the Illumination Model,
• the Goal-Free Model,
• the Context (C), Input (I), Process (P),
Product (P) (CIPP) Model, and
• the Transactional Model.
These have been found reliable as guides for
collecting evaluative data in curriculum
evaluation.
14. 3. TYLER'S MODEL (1949)
Key emphasis:
Instructional objectives
Purpose:
To measure students' progress towards
objectives
Method:
1. Specify instructional objectives
2. Collect performance data
3. Compare performance data with the
objectives/standards specified
Limitations:
Ignores process
Not useful for diagnosing why a
curriculum has failed
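Tyler's three method steps lend themselves to a simple sketch. Everything below is hypothetical: the objectives, the percentage standards, and the class scores are invented for illustration and are not part of Tyler's model.

```python
# Step 1: specify instructional objectives, each with a target standard (%).
# (Objective names and thresholds are made up for this example.)
objectives = {"reading comprehension": 70, "essay writing": 60}

# Step 2: collect performance data (illustrative class average scores, %).
performance = {"reading comprehension": 78, "essay writing": 55}

# Step 3: compare performance data with the specified standards.
def compare(objectives, performance):
    report = {}
    for objective, standard in objectives.items():
        score = performance.get(objective, 0)
        report[objective] = "met" if score >= standard else "not met"
    return report

print(compare(objectives, performance))
# {'reading comprehension': 'met', 'essay writing': 'not met'}
```

Note that the sketch mirrors the model's limitation: it reports only whether each standard was met, and says nothing about *why* an objective was missed.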
15. Tyler’s Planning Model
Objectives
Selecting learning
experiences
Organizing learning
experiences
Evaluation of
Students
Performance
What educational goals should
the school seek to attain?
How can learning experiences be
selected which are likely to be
useful in attaining these
objectives?
How can learning experiences be
organized for effective
instruction?
How can the effectiveness of
learning experiences be
evaluated?
16. 4. CIPP MODEL (1971)
The CIPP model of evaluation concentrates on:
Context of the programme
Input into the programme
Process within the programme
Product of the programme
17. Context Evaluation
Objective:
To define the context,
identify the population,
assess needs,
diagnose problems.
Method:
By comparing the actual and the intended inputs
and outputs
Relation to decision making:
For deciding upon the settings to be served
For changes needed in planning
Context covers the needs of industry and society,
future technological developments, and the
mobility of the students.
18. Input Evaluation
Objective:
Identify and assess system capabilities
Alternative strategies
Implementation design
Method:
Analyzing resources, solution strategies,
procedural designs for relevance, feasibility and
economy.
Relation to Decision Making:
Selecting sources
Structuring activities
Basis for judging implementation
19. Process Evaluation
Objective: To identify defects in the
procedural design or its implementation
Method:
monitoring,
describing the process,
interacting,
observing
Relation to decision making: For implementing
and refining the programme design and
procedure, for effective process control.
20. Product Evaluation
Objective: To relate outcome information to
objectives and to context, input, and process
information
Method: Comparing measurements with standards
and interpreting the outcome
Relation to decision making: For deciding to
continue, terminate, modify, build, or refocus a
change activity.
21. Limitation’s of CIIP Model
1.Over values efficiency
2.Undervalues students aims
CIPP approach recommends…
• Multiple observers and informants
• Mining existing information
• Multiple procedures for gathering
data; cross- check qualitative and
quantitative
• Independent review by stakeholders
and outside groups
22. 5. STAKE'S MODEL (1969)
An antecedent is any condition existing
prior to teaching and learning which
may relate to outcomes.
Transactions are the countless
encounters of students with teacher,
student with student, author with
reader, parent with counselor.
Outcomes include measurements of the
impact of instruction on learners and
others.
23. ANTECEDENTS - Conditions existing prior to
curriculum evaluation:
Students' interests or prior learning
Learning environment in the institution
Traditions and values of the institution
TRANSACTIONS - Interactions that occur
between:
teachers and students
students and students
students and curricular materials
students and the educational environment
TRANSACTIONS = the process of implementation
24. OUTCOMES
• Learning outcomes
• Impact of curriculum implementation on
students, teachers, administrators, and
the community
• Immediate outcomes vs long-range outcomes
Three sets of data:
1. Antecedents - conditions existing before
implementation
2. Transactions - activities occurring during
implementation
3. Outcomes - results after implementation
Together, the three sets describe the program
fully.
25. STAKE'S MODEL
Key emphasis:
Description and judgment of data
Purpose: To report the ways different people
see the curriculum; the focus is on responsive
evaluation, which:
1. Responds to audience needs for information
2. Orients more toward program activities than
results
3. Presents all audience viewpoints
(multi-perspective)
Limitations:
26. 6. ROGER KAUFMAN'S MODEL
Needs assessment asks:
Where are we now? Where are we to be?
The gap between the two is the discrepancy:
the difference between current status and
desired status.
Discrepancies should be identified in terms of
products of actual behaviors (ends),
not in terms of processes (means).
Deduction: the drawing of a particular truth
from a general, antecedently known rule
(rule → examples).
Induction: rising from particular truths to a
general truth.
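The discrepancy idea can be shown with toy numbers. The indicator and both figures below are invented; a real needs assessment would use validated data on ends (results), not means (processes).

```python
# Needs assessment as discrepancy analysis: desired status minus current
# status, stated in terms of measurable ends. All figures are hypothetical.
desired_status = {"graduates employed within a year (%)": 90}
current_status = {"graduates employed within a year (%)": 62}

discrepancies = {
    outcome: desired_status[outcome] - current_status.get(outcome, 0)
    for outcome in desired_status
}

print(discrepancies)
# {'graduates employed within a year (%)': 28}
```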
27. 7. SCRIVEN'S GOAL-FREE EVALUATION
(1973)
Proponent: Michael Scriven
Introduced the terms 'formative' and
'summative'
Broadened the perspective of evaluation
The evaluator should not know the educational
program's goals, in order not to be
influenced by them
The evaluator is therefore totally independent
The evaluator is free to look at processes and
procedures, outcomes, and unanticipated
effects
28. Roles of curriculum evaluation:
Scriven differentiates between two
major roles of curriculum evaluation: the
“formative” and the “summative”.
“Formative evaluation” takes place during the
development of the programme.
“Summative evaluation” takes place at its
conclusion.
Methodology:
The field is open to the hunter, but Scriven
did have a ‘lethal’ checklist of criteria for
judging any aspect of the curriculum.
29. 8. STENHOUSE'S RESEARCH MODEL
(1970s)
Evaluation as part of curriculum
development
A continuous cycle of formative evaluation and
curriculum improvement at the school level
The relationship between curriculum developer
and evaluator is central:
The curriculum developer offers solutions
The evaluator is the practical man who tempers
enthusiasm with judgment
The developer is the investigator; the teacher
pursues autonomous professional self-development
through self-study
30. 9. STUFFLEBEAM'S CIPP MODEL
The CIPP model of curriculum development is a
process for developing the curriculum; the CIPP
model of curriculum evaluation is the process of
assessing the effectiveness of the developed and
implemented curriculum.
31. Context
Planning decisions:
What needs are to be addressed?
What objectives should be defined for the
program?
Input
Structuring decisions:
What resources are available?
What alternative strategies should be
considered?
Which plan has the best potential?
32. Process
Implementing decisions:
How well is the plan being implemented?
What are the barriers?
What revisions are needed?
Product
Recycling decisions:
What results were obtained?
Were needs reduced?
What should be done with the program?
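The four CIPP stages and the decision types they inform can be condensed into a small lookup table. The Python structure and the `decisions_for` helper are illustrative only, not part of the model itself.

```python
# Each CIPP stage maps to the kind of decision it supports and to its
# guiding questions (drawn from the stage descriptions above).
CIPP = {
    "Context": ("planning decisions",
                ["What needs are to be addressed?",
                 "What objectives should be defined?"]),
    "Input":   ("structuring decisions",
                ["What resources are available?",
                 "Which plan has the best potential?"]),
    "Process": ("implementing decisions",
                ["How well is the plan being implemented?",
                 "What revisions are needed?"]),
    "Product": ("recycling decisions",
                ["What results were obtained?",
                 "Were needs reduced?"]),
}

def decisions_for(stage):
    """Summarize which decisions a given evaluation stage informs."""
    kind, questions = CIPP[stage]
    return f"{stage} evaluation informs {kind}"

print(decisions_for("Process"))
# Process evaluation informs implementing decisions
```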
33. 10. Kirkpatrick's Four Levels of
Evaluation
Assessing training effectiveness often entails
using the four-level model developed by Donald
Kirkpatrick (1994). In this model, each
successive evaluation level is built on
information provided by the lower levels.
34. Level 1 - Reaction
Evaluation at this level measures how
participants in a training program react to it.
This type of evaluation is often called a
“smile sheet.”
Level 2 - Learning
Assessing at this level moves the evaluation
beyond learner satisfaction and attempts to
assess the extent to which students have
advanced in skills, knowledge, or attitude.
35. Level 3 Evaluation - Transfer
This level measures the transfer that has
occurred in learners' behavior due to the
training program.
Are the newly acquired skills, knowledge, or
attitudes being used in the everyday
environment of the learner?
Level 4 Evaluation - Results
This level measures the success of the
program in terms that managers and
executives can understand: increased
production, improved quality, decreased
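The four levels can also be written as an ordered structure. The `evaluate_up_to` helper is a hypothetical illustration of the point that each level builds on the ones below it; the question wordings are paraphrased from the slides.

```python
# Kirkpatrick's four levels in order, each with its guiding question.
LEVELS = [
    (1, "Reaction", "How did participants react to the training?"),
    (2, "Learning", "What skills, knowledge, or attitudes were gained?"),
    (3, "Transfer", "Is the learning being used in the everyday environment?"),
    (4, "Results", "What organisational results followed?"),
]

def evaluate_up_to(level):
    """Return the questions for every level up to and including `level`,
    reflecting that higher levels build on the lower ones."""
    return [question for n, _, question in LEVELS if n <= level]

print(evaluate_up_to(1))
# ['How did participants react to the training?']
```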