TEST DEVELOPMENT AND EVALUATION (6462)
DEVELOPMENT OF SUPPLY TEST ITEMS
Department of Secondary Teacher Education
ALLAMA IQBAL OPEN UNIVERSITY, ISLAMABAD
OBJECTIVES OF THE UNIT
After studying this unit, students will be able to:
1. highlight the role of assessment in the teaching and learning process;
2. discuss factors affecting the selection of subjective and objective types of
questions for classroom tests;
3. describe the various types of reporting test scores;
4. define an objective and an outcome;
5. list the different assessment techniques and explain their role in the education
system.
4.1 DETERMINING THE BEHAVIOURS TO BE ASSESSED
i. Define the problem behaviour.
ii. Devise a plan to collect data.
• Indirect method
• Direct method
iii. Compare and analyse the data.
• Setting events
• Antecedents or triggering events
• The target behaviour
• Consequences that maintain the behaviour
iv. Formulate the hypothesis: the behaviour serves
• to get something, or
• to avoid or escape something.
v. Develop and implement a behaviour intervention plan (BIP).
• Modifying the physical environment
• Adjusting the curriculum or instructional strategy
• Changing the antecedents or consequences
• Finally, teaching a more acceptable replacement behaviour
vi. Monitor the plan.
• Collect data on student progress
• Review and evaluate the behaviour goals
• Determine whether to continue or modify the BIP
4.2 DEVELOPING NORMS OF THE TEST
Norms – the comparison of a student’s test score to the scores of a reference group of
students. Norms that can be followed with confidence yield a sound comparison. On the
other hand, if the norms of a verbal ability test were based on a reference group of
native English speakers, the score of an examinee for whom English is a second
language could not fairly be compared to the norms established by the test.
A test norm is a set of scaled data describing the performance of a large number of
people on that test. Test norms are typically summarised by two statistics:
• Mean
• Standard deviation
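Given a norm group’s mean and standard deviation, an individual score can be placed relative to the reference group as a z-score. A minimal sketch (the norm values 50 and 10 below are illustrative assumptions, not from any published norm table):

```python
def z_score(raw_score: float, norm_mean: float, norm_sd: float) -> float:
    """Express a raw score as standard deviations above or below the norm-group mean."""
    return (raw_score - norm_mean) / norm_sd

# Example: the reference group has mean 50 and SD 10; an examinee scores 65.
z = z_score(65, 50, 10)
print(z)  # 1.5 -> the examinee is 1.5 SDs above the reference group
```

A positive z-score means the examinee performed above the norm group; a negative one, below it — which is exactly the comparison norms are meant to support.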
4.3 PLANNING THE TEST
1. Review the curriculum
2. Review the textbook or learning material
3. Check compatibility between curriculum and textbook (weightage: outlines vs. textbook)
4. Decide the categories/types of test items (MCQs, SAQs, ETQs) and the test type (NRT or CRT)
5. Decide the weightage of different test items and cognitive abilities
6. Draw a table of specification, also called test specification or grid specification
7. Develop questions according to the table of specification (TOS/GS)
8. Review and improve the questions
9. Pilot the test and analyse the items:
• Difficulty level (0 to 1; values between 0.27 and 0.84 are acceptable)
• Discrimination index (−1 to +1; negative values indicate a flawed item; 0.5 and above is desirable)
• Power of distractors (whether each incorrect option is functioning or too weak)
10. Finalize the test
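The two piloting statistics above can be computed directly from student responses. A minimal sketch, assuming 0/1 scored responses to one item plus each examinee’s total test score (the sample data are invented for illustration; the 27% grouping fraction is a common convention):

```python
def item_difficulty(item_responses):
    """Proportion of examinees answering the item correctly (0 = hardest, 1 = easiest)."""
    return sum(item_responses) / len(item_responses)

def discrimination_index(item_responses, total_scores, fraction=0.27):
    """Proportion correct in the top score group minus that in the bottom group.
    A negative value means low scorers did better on the item than high scorers."""
    n = max(1, round(len(total_scores) * fraction))
    order = sorted(range(len(total_scores)), key=lambda i: total_scores[i])
    low, high = order[:n], order[-n:]
    p_high = sum(item_responses[i] for i in high) / n
    p_low = sum(item_responses[i] for i in low) / n
    return p_high - p_low

# Invented pilot data: one item's 0/1 responses and the examinees' total scores.
responses = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0]
totals = [90, 85, 80, 75, 70, 60, 55, 50, 40, 30]
print(round(item_difficulty(responses), 2))               # 0.5
print(round(discrimination_index(responses, totals), 2))  # 0.67
```

Here the item sits comfortably inside the 0.27–0.84 difficulty band and discriminates well, so it would survive piloting under the criteria listed above.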
4.4 ENSURING CONTENT VALIDITY (COURSE COVERAGE, CONCEPT
COVERAGE AND LEARNING OUTCOMES COVERAGE)
How is content validity measured?
Content validity is related to face validity but differs in how it is evaluated. Face validity rests on
personal judgment, such as asking participants whether they thought a test was well constructed and
useful. Content validity is judged more systematically, typically by having subject-matter experts check
the test items against the course content, concepts, and learning outcomes they are meant to cover.
4.5 CONSTRUCTING A TABLE OF SPECIFICATION BASED ON
BLOOM’S TAXONOMY
i. Alignment of student learning outcomes (SLOs) with Bloom’s Taxonomy
ii. Development of the table of specification
• Select a topic
• Identify the sub-topics to be tested
• Identify the levels of the taxonomy most appropriate for assessing the content
• Review the types of skills that can be tested under each level
• Consider the number of items on the entire test and the time available
• Based on the time spent on, and importance of, each sub-topic, decide the number of items
for each sub-topic
• Distribute the questions across the cognitive levels
iii. Usage of the table of specification
iv. Preparing the test blueprint or table of specifications
v. Table of specifications using Bloom’s Revised Taxonomy
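The allocation steps above can be sketched as a small calculation: items are distributed to each (sub-topic, cognitive level) cell in proportion to the sub-topic’s weight and the desired split across levels. All topic names, weights, and level shares below are illustrative assumptions, not prescribed values:

```python
def build_tos(total_items, topic_weights, level_weights):
    """Allocate test items to each (sub-topic, cognitive level) cell in
    proportion to topic weight x level weight, rounded to whole items."""
    return {
        topic: {level: round(total_items * tw * lw)
                for level, lw in level_weights.items()}
        for topic, tw in topic_weights.items()
    }

# Hypothetical weights: share of instructional time per sub-topic, and the
# desired distribution across three levels of the revised taxonomy.
topics = {"Fractions": 0.5, "Decimals": 0.3, "Percentages": 0.2}
levels = {"Remember": 0.4, "Understand": 0.4, "Apply": 0.2}

tos = build_tos(40, topics, levels)
print(tos["Fractions"])  # {'Remember': 8, 'Understand': 8, 'Apply': 4}
```

Because of rounding, the cell counts should be checked against the intended total and adjusted by hand before the blueprint is finalized.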
4.6 TEST SPECIFICATIONS
Item specifications:
Item specifications describe the items, prompts or tasks, and any other material, such as texts, diagrams,
and charts, used as stimuli.
Presentation model:
The presentation model provides information about how the items and tasks are presented to the test takers.
Assembly model:
The assembly model helps the test developer combine test items and tasks into a test format.
Delivery model:
The delivery model tells how the actual test is delivered. It includes information on test administration, test
security/confidentiality, and time constraints.
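One way to see how the four models fit together is to record a test specification as structured data. A minimal sketch; every field name and value here is an illustrative assumption, not a standard schema:

```python
from dataclasses import dataclass

@dataclass
class TestSpecification:
    item_specs: list    # items, prompts/tasks, and stimulus material (texts, charts)
    presentation: dict  # how items are shown to test takers (mode, ordering)
    assembly: dict      # rules for combining items into a test form
    delivery: dict      # administration, security, and timing constraints

spec = TestSpecification(
    item_specs=[{"id": "Q1", "type": "MCQ", "stimulus": "diagram"}],
    presentation={"mode": "paper", "order": "fixed"},
    assembly={"sections": ["MCQ", "SAQ"], "items_per_section": [20, 5]},
    delivery={"time_limit_minutes": 90, "secure_until": "exam day"},
)
print(spec.delivery["time_limit_minutes"])  # 90
```

Keeping the four models in one record makes it easy to check that, for example, the assembly rules and the delivery time limit were decided together rather than in isolation.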
4.7 WRITING TEST ITEMS (SUPPLY AND SELECTION TYPES) BASED
ON TABLE OF SPECIFICATION
Selection-type test items can be designed to measure a variety of learning outcomes, from simple to
complex. They tend to be favoured in achievement testing because they offer:
• Greater control over the type of response students can make
• Broader sampling of achievement
• Quicker and more objective scoring
Dr. Hina Jalal
hinansari23@gmail.com