CURRICULUM DEVELOPMENT CYCLE
The curriculum development cycle has three stages: curriculum design, curriculum implementation,
and curriculum validation.
Curriculum Design involves the identification of the learning process and events intended to achieve the
competencies. Learning objectives, contents, strategies and methods, modes of training, methods of
assessment and training resources are carefully planned during the process.
Curriculum Implementation is “putting into action” the various components stipulated in the
competency-based curriculum (CBC). Competency-based training is planned and facilitated by certified
trainers based on the CBC.
Curriculum validation involves evaluation of the curriculum using the following: content validation by a
panel of experts; analysis of the results of learning through pre-test/post-test comparison; and program
evaluation using the Context-Input-Process-Product (CIPP) method of research.
Through these processes, the impact of the curriculum may also be tested.
Based on the results of the curriculum validation, feedback is collected, conclusions are drawn,
and recommendations for revision are suggested.
These data are inputs to the next cycle of curriculum review/revision, implementation, and
validation.
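The pre-test/post-test analysis mentioned above can be sketched as a simple computation of mean score improvement and average normalized gain. This is a minimal illustration: the trainee scores are hypothetical, the normalized-gain metric is one common choice rather than a prescribed method, and a real validation team would likely supplement it with a formal statistical test.

```python
from statistics import mean

def learning_gain(pre_scores, post_scores, max_score=100):
    """Return (mean improvement, mean normalized gain) for paired
    pre-test and post-test scores. Normalized gain for one trainee
    is (post - pre) / (max_score - pre)."""
    if len(pre_scores) != len(post_scores):
        raise ValueError("pre and post score lists must be paired")
    improvements = [post - pre for pre, post in zip(pre_scores, post_scores)]
    gains = [
        (post - pre) / (max_score - pre)
        for pre, post in zip(pre_scores, post_scores)
        if pre < max_score  # skip trainees already at the ceiling
    ]
    return mean(improvements), mean(gains)

# Hypothetical scores for five trainees before and after training
pre = [40, 55, 60, 35, 50]
post = [70, 75, 80, 65, 72]

mean_improvement, mean_gain = learning_gain(pre, post)
print(f"Mean improvement: {mean_improvement:.1f} points")
print(f"Mean normalized gain: {mean_gain:.2f}")
```

A positive mean gain across an adequately sized sample is the kind of empirical evidence the validation team would feed into conclusions about the curriculum's effectiveness.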
The CIPP Evaluation Model: A Summary
A great evaluation approach is Daniel Stufflebeam’s CIPP evaluation model (Fitzpatrick, Sanders
& Worthen, 2011; Mertens & Wilson, 2012; Stufflebeam, 2003; Zhang, Zeller, Griffith, Metcalf, Williams,
Shea & Misulis, 2011). In this decision-oriented approach, program evaluation is defined as the
“systematic collection of information about the activities, characteristics, and outcomes of programs to
make judgments about the program, improve program effectiveness, and/or inform decisions about
future programming.” (Patton, 1997, p. 23). The CIPP evaluation model (see figure 1) is a framework for
guiding evaluations of programs, projects, personnel, products, institutions, and evaluation systems
(Stufflebeam, 2003).
Figure 1: Components of Stufflebeam’s (2003) CIPP Model.
Designed to assist administrators in making informed decisions, CIPP is a popular evaluation
approach in educational settings (Fitzpatrick et al., 2011; Zhang et al., 2011). This approach, developed in
the late 1960s, seeks to improve and achieve accountability in educational programming through a
“learning-by-doing” approach (Zhang et al., 2011). Its core concepts are context, input, process, and
product evaluation, intended not to prove, but rather to improve, the program itself
(Stufflebeam, 2003). An evaluation following the CIPP model may include a context, input, process, or
product evaluation, or a combination of these elements (Stufflebeam, 2003).
The context evaluation stage of the CIPP Model creates the big picture of where both the
program and evaluation fit (Mertens & Wilson, 2012). This stage assists in decision-making related to
planning, and enables the evaluator to identify the needs, assets, and resources of a community in order
to provide programming that will be beneficial (Fitzpatrick et al., 2011; Mertens & Wilson, 2012). Context
evaluation also identifies the political climate that could influence the success of the program (Mertens &
Wilson, 2012). To achieve this, the evaluator compiles and assesses background information, and
interviews program leaders and stakeholders. Key stakeholders in the evaluation are identified. In
addition, program goals are assessed, and data reporting on the program environment is collected. Data
collection can use multiple formats. These include both formative and summative measures, such as
environmental analysis of existing documents, program profiling, case study interviews, and stakeholder
interviews (Mertens & Wilson, 2012). Throughout this process, continual dialogue with the client to
provide updates is integral.
To complement context evaluation, input evaluation can be completed. In this stage,
information is collected regarding the mission, goals, and plan of the program. Its purpose is to assess the
program’s strategy, merit and work plan against research, the responsiveness of the program to client
needs, and alternative strategies offered in similar programs (Mertens & Wilson, 2012). The intent of this
stage is to choose an appropriate strategy to implement to resolve the program problem (Fitzpatrick et
al., 2011).
In addition to context evaluation and input evaluation, reviewing program quality is a key element of
CIPP. Process evaluation investigates the quality of the program’s implementation. In this stage, program
activities are monitored, documented and assessed by the evaluator (Fitzpatrick et al., 2011; Mertens &
Wilson, 2012). Primary objectives of this stage are to provide feedback regarding the extent to which
planned activities are carried out, guide staff on how to modify and improve the program plan, and assess
the degree to which participants can carry out their roles (Stufflebeam, 2003).
The final component to CIPP, product evaluation, assesses the positive and negative effects the
program had on its target audience (Mertens & Wilson, 2012), assessing both the intended and
unintended outcomes (Stufflebeam, 2003). Both short-term and long-term outcomes are judged. During
this stage, judgments of stakeholders and relevant experts are analyzed, considering outcomes that affect
the group, subgroups, and individuals. Applying a combination of methodological techniques ensures that all
outcomes are noted and assists in verifying evaluation findings (Mertens & Wilson, 2012; Stufflebeam,
2003).
Purpose and Methods of Curriculum Validation
Introduction
Curriculum validation is the process of making value judgments about the merit or worth of a part
or the whole of a curriculum. The nature of a curriculum validation often depends on its policy makers
and other stakeholders (administrators, teachers/trainers, students/trainees, industry experts, parents
and communities), who use its findings to inform future action. A newly developed curriculum should be
pilot tested in order to be validated. Piloting is a broad term that can be used in the context of
curriculum validation, although it occurs at a relatively early stage of the curriculum development and
curriculum change process.
Purpose of Curriculum Validation
Curriculum validation is conducted to evaluate the components of the curriculum design, to improve its
various dimensions, and to gather recommendations for the next curriculum revision. The learning
outcomes of all trainees are assessed, and conclusions are drawn on the design and implementation of
the curriculum. At the same time, the impact of competency-based training for a specific qualification
is measured against various parameters of the graduates' performance in industry.
Based on the result of the curriculum evaluation, feedback is provided to the next cycle of the
curriculum review/revision, implementation and validation.
Curriculum validation may be:
• Formative, which may be carried out during the process of curriculum development; and
• Summative, which may be carried out after offering the curriculum once or twice.
Methods of Curriculum Validation
In recent decades there has been a growing demand for empirical data to justify a new curriculum
prior to wide-scale implementation. The demand has arisen, in part, from the high financial cost of
curriculum development and implementation. It is important that empirical evidence is gathered to
demonstrate the quality of a curriculum and to test its practicality and utility in a “real world” setting.
Piloting in this sense is a dimension of curriculum validation.
Lewey has identified three phases of curriculum “tryout”. Each phase will adopt successively more
formal validation methods in order to provide more reliable findings:
Laboratory tryout: The first phase may begin as formative validation very early in the curriculum
development process in what is sometimes described as “laboratory tryouts”. Here elements of the
curriculum may be tested with individuals or small groups. Responses of learners are observed and
modifications to the curriculum materials may be suggested.
Pilot tryout: A “pilot tryout” may begin in a school setting as soon as a complete, albeit preliminary,
version of the curriculum is available. Curriculum development team members may take the role of the
teacher. The purpose of this phase is to identify whether it is possible to implement the curriculum,
whether changes are needed, and what conditions are required to ensure success.
Field tryout: When a revised version is completed based on the findings of the pilot tryout, “field tryouts”
may be conducted by teachers in their classrooms without the direct involvement of the development
team. This exercise attempts to establish whether the program may be used without the ongoing support
of the team and to demonstrate the merits of the program to potential users.
The Curriculum Validation Process
Introduction
The process of validating a competency-based curriculum starts with constituting a validation team.
The CBC validation team comprises curriculum experts, industry representatives and expert trainers.
The members are selected/nominated on the basis of their:
• Competence;
• Expertise;
• Experience;
• Commitment; and the
• Ability to spare time for the validation.
The CBC Validation Process
The CBC validation team conducts the validation as per a predefined plan.
1. Generally, the validation process begins by examining the CBC documents. The team gathers
information from trainers, trainees, administrators and industry representatives, reviews relevant
data from the institution, and conducts focus group discussions to draw conclusions on significant
and key issues. The validation team adopts an unbiased approach.
2. The team draws conclusions on the basis of data, facts and reliable information, not on the views
of a few individuals. It collects adequate evidence and information from a sufficiently large sample
to draw conclusions and make recommendations.
3. The documented results are analyzed and, if found acceptable, the comments and
recommendations are considered for the purpose of improving the training curriculum.
4. After validation, the curriculum should be finalized and submitted to the appropriate personnel in
your training institution. It can be submitted to the program manager, head of department,
senior teacher, training supervisor, apprenticeship/traineeship supervisor, training
coordinator/manager, or the HR manager.