2. Background
In a formative evaluation, evidence of an instructional program's worth is gathered
for use in making decisions about how to revise the program while it is being
developed. It is called "formative" evaluation because the instruction is still in its
developmental stages and is not yet "grown up". The idea is to find out whether your newly
developed course works at teaching the objectives you need to teach to the learners
who need to learn them, before you present it to your target audience. In any given
formative evaluation, you can find out how to make your instruction more:
Effective
Efficient
Interesting/Motivating
Usable
Acceptable
You do this by carrying out procedures that will provide you with evidence
of the effectiveness of your instruction. The emphasis is on collecting data and
revising the instruction.
3. Objective
Describe the purposes for and various stages of formative evaluation of
instructor-developed materials, instructor-selected materials, and instructor-presented
instruction.
Describe the instruments used in a formative evaluation.
Develop an appropriate formative evaluation plan and construct instruments for a set of
instructional materials or an instructor presentation.
Collect data according to a formative evaluation plan for a given set of instructional
materials or instructor presentation.
4. Formative Evaluation
Definition
The collection of data and information during the development of
instruction that can be used to improve the effectiveness of the instruction.
Purpose
To obtain data that can be used to revise the instruction to make it more
efficient and effective.
6. ONE-TO-ONE EVALUATION
PURPOSE: IDENTIFY AND REMOVE ERRORS IN INSTRUCTION
CRITERIA
CLARITY
IMPACT
FEASIBILITY
SELECTING LEARNERS
DATA COLLECTION
OUTCOMES
7. CRITERIA
During the development of the instructional strategy and the instruction itself, designers and
developers make a myriad of translations and decisions that link the content, learners,
instructional format, and instructional setting. The one-to-one trials provide designers with their
first glimpse of the viability of these links and translations from the learners' perspective. The
three main criteria and the decisions designers will make during the evaluation are as follows:
1. Clarity: Is the message, or what is being presented, clear to individual target learners?
2. Impact: What is the impact of the instruction on individual learners' attitudes and achievement
of the objectives and goals?
3. Feasibility: How feasible is the instruction given the available resources (time/context)?
8. LEARNER SELECTION
NOT AN EXPERIMENT
NO RANDOM SELECTION
LEARNERS SHOULD REPRESENT A WIDE VARIETY BUT IN A SMALL
GROUP
EVALUATE
9. DATA
1: CLEAR BASIC MESSAGE
VOCAB, SENTENCE COMPLEXITY, STRUCTURE
2: LINKS
WORKS FOR LEARNER, EXAMPLES
3: PROCEDURES
TYPE OF INSTRUCTION, VARIATION; PROCEDURES MAY CHANGE IF NOT APPROPRIATE
10. STEPS
The first step in a one-to-one evaluation is to explain to the learner that a new set of instructional
materials has been designed and that you would like his or her reaction to them. You should say that
any mistakes that learners might make are probably due to deficiencies in the material, not their own.
Encourage the learners to be relaxed and to talk about the materials.
You should have the learners not only go through the instructional materials but also take the
test(s) provided with the materials.
11. QUESTIONNAIRES
HELPS YOU SPOT MISTAKES
LETS YOU KNOW WHY THEY MADE CERTAIN CHOICES
ALLOWS THE EVALUATION TO BE BASED ON THEIR OPINION AS WELL
12. INTERPRETING DATA
The information on the clarity of instruction, impact on learner, and
feasibility of instruction needs to be summarized and focused.
Particular aspects of the instruction found to be weak can then be
reconsidered in order to plan revisions likely to improve the instruction
for similar learners.
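As a loose illustration of "summarized and focused," the ratings gathered on the three one-to-one criteria can be averaged across learners and the weak criteria flagged for revision. This is only a sketch: the rating scale (1-5), the 3.5 cutoff, and the sample responses are all invented for the example.

```python
# Hypothetical sketch: summarize one-to-one trial ratings on the three
# criteria (clarity, impact, feasibility) and flag weak aspects for revision.
# The 1-5 scale, the 3.5 threshold, and the sample data are assumptions.

def summarize_ratings(responses, threshold=3.5):
    """Average each criterion across learners; flag those below threshold."""
    summary = {}
    for criterion in ("clarity", "impact", "feasibility"):
        scores = [r[criterion] for r in responses]
        summary[criterion] = sum(scores) / len(scores)
    weak = [c for c, avg in summary.items() if avg < threshold]
    return summary, weak

# One dict per participating learner
responses = [
    {"clarity": 4, "impact": 3, "feasibility": 5},
    {"clarity": 2, "impact": 4, "feasibility": 4},
    {"clarity": 3, "impact": 3, "feasibility": 5},
]
summary, weak = summarize_ratings(responses)
print(summary)  # per-criterion averages
print(weak)     # criteria that look weak and warrant revision
```

A real evaluation would pair these numbers with the learners' comments, since the averages show where the instruction is weak but not why.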
13. OUTCOMES
The outcomes of one-to-one trials are instruction that
(1) contains appropriate vocabulary, language complexity, examples, and
illustrations for the participating learner;
(2) either yields reasonable learner attitudes and achievement or is
revised with the objective of improving learner attitudes or performance
during subsequent trials; and
(3) appears feasible for use with the available learners, resources, and
setting. The instruction can be refined further using small-group trials.
14. SMALL GROUP
Purposes
To determine the effectiveness of changes made following the one-to-one evaluation.
To identify any remaining learning problems that learners may have.
To determine whether learners can use the instruction without interacting
with the instructor.
15. EVALUATION
To Determine Weaknesses in the Instruction
Focusing the design only on the goals and objectives of the instruction would be too
limited.
Data on learners’ achievement of goals and objectives would be insufficient, though
important, because these data will only provide information about where errors occur
rather than why they occur.
17. DESIGN REVIEW
Does the instructional goal match the problem identified in the needs assessment?
Does the learner & environmental analysis match the audience?
Does the task analysis include all the prerequisite skills?
Are the test items reliable and valid, and do they match the objectives?
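One common way to check the reliability part of the question above is an internal-consistency statistic such as Cronbach's alpha: α = k/(k−1) · (1 − Σ item variance / total-score variance). The sketch below computes it from scratch; the item scores are invented, and a real check would use data from actual learners.

```python
# Hypothetical sketch: Cronbach's alpha as a quick internal-consistency
# check for a set of test items. The 0/1 scores below are made up.

def variance(xs):
    """Population variance of a list of numbers."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(item_scores):
    """item_scores: one inner list of learner scores per test item."""
    k = len(item_scores)
    item_var = sum(variance(item) for item in item_scores)
    totals = [sum(scores) for scores in zip(*item_scores)]  # per-learner totals
    return (k / (k - 1)) * (1 - item_var / variance(totals))

# Four items scored by five learners (0 = wrong, 1 = right)
items = [
    [1, 1, 0, 1, 0],
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
    [1, 1, 1, 1, 0],
]
print(round(cronbach_alpha(items), 3))
```

Values near 1 suggest the items measure a common skill consistently; low or negative values suggest some items should be revised or matched more closely to the objectives.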
18. REVIEW
Is the content accurate & up-to-date?
Does it present a consistent perspective?
Are examples, practice exercises, & feedback realistic & accurate?
Is the pedagogy consistent with current instructional theory?
Is the instruction appropriate to the audience?
19. SUMMARY
Formative evaluation of instructional materials is conducted to determine the effectiveness of the
materials and to revise them in areas where they are ineffective. Formative evaluations should be
conducted on newly developed materials as well as existing materials that are selected based on
the instructional strategy. Evaluations are necessary for both mediated and instructor-presented
materials.