1. K __ __ W __ E __ __ E - Student's mastery of substantive subject matter; test items should only require students to recall or retrieve from memory.
KNOWLEDGE
2. __ R __ D U __ T __ - Ability to create achievement-related products such as written reports or a designed lesson plan.
PRODUCTS
3. V __ L I __ __ T __ - The extent to which the test measures what it intends to measure.
VALIDITY
4. R __ __ I A __ I __ I __ Y - The reliability of an assessment method refers to its consistency; the term is synonymous with dependability or stability.
RELIABILITY
5. __ E __ S O __ I __ G - The ability to use knowledge to reason and solve problems; it involves justifying or finding solutions to a problem.
REASONING
CLARITY OF LEARNING TARGETS
1.1. CATEGORIES OF LEARNING TARGETS
1.2. COGNITIVE TARGETS
1.3. SKILLS, COMPETENCIES AND ABILITIES TARGETS
1.4. PRODUCTS, OUTPUTS AND PROJECTS TARGETS
CLARITY OF LEARNING TARGETS
Assessment can be made precise, accurate and dependable only if what is to be achieved is clearly stated and feasible. To this end, we consider learning targets involving knowledge, reasoning, skills, products and effects. Learning targets need to be stated in behavioral terms, or terms that denote something observable through the behavior of the student.
CATEGORIES OF LEARNING TARGETS
1. Knowledge - Student's mastery of substantive subject matter. Test items should only require students to recall or retrieve from memory.
2. Reasoning - The ability to use knowledge to reason and solve problems. Involves justifying or finding solutions to a problem.
3. Skills - Ability of the student to demonstrate achievement-related skills.
4. Products - Ability to create achievement-related products such as written reports or a designed lesson plan.
5. Affective/disposition - Student's attainment of affective traits such as attitudes, values, interests and self-efficacy.
COGNITIVE TARGETS
As early as the 1950s, Bloom (1956) proposed a hierarchy of educational objectives at the cognitive level. These are:
COGNITIVE TARGETS
Level 1. KNOWLEDGE refers to the acquisition of facts, concepts and theories. Knowledge of historical facts, like the date of the EDSA revolution or the discovery of the Philippines, and of scientific concepts, like the scientific name of milkfish or the chemical symbol of argon, all fall under knowledge.
Knowledge forms the foundation of all other cognitive objectives, for without knowledge it is not possible to move up to the next higher level of thinking skills in the hierarchy of educational objectives.
COGNITIVE TARGETS
Level 2. COMPREHENSION refers to the same concept as "understanding." It is a step higher than mere acquisition of facts and involves a cognition or awareness of the interrelationships of facts and concepts.
Level 3. APPLICATION refers to the transfer of knowledge from one field of study to another, or from one concept to another concept in the same discipline.
COGNITIVE TARGETS
Level 4. ANALYSIS refers to the breaking down of a concept or idea into its components and explaining the concept as a composition of these components.
Level 5. SYNTHESIS is the opposite of analysis and entails putting the components together in order to summarize the concept.
Level 6. EVALUATION AND REASONING refers to valuing and judgment, or assessing the "worth" of a concept or principle.
SKILLS, COMPETENCIES AND ABILITIES TARGETS
Skills refer to specific activities or tasks that a student can proficiently do, e.g. skills in coloring or language skills. Related competencies characterize a student's ability (DACUM, 2000).
It is important to recognize a student's ability so that the program of study can be designed to optimize his/her innate abilities.
SKILLS, COMPETENCIES AND ABILITIES TARGETS
Abilities can be roughly categorized into cognitive, psychomotor and affective abilities. For instance, the ability to work well with others and to be trusted by every classmate (affective ability) is an indication that the student can most likely succeed in work that requires leadership abilities. On the other hand, other students are better at doing things alone, like programming and web designing (cognitive ability), and would therefore be good at highly technical, individualized work.
PRODUCTS, OUTPUTS AND PROJECTS TARGETS
Products, outputs and projects are tangible and concrete evidence of a student's ability. A clear target for products and projects needs to specify the level of workmanship of such projects, e.g. expert-level, skilled-level or novice-level outputs. For instance, an expert output may be characterized by the indicator "at most two imperfections noted," while a skilled-level output may be characterized by the indicator "at most four (4) imperfections noted," etc.
ASSESSMENT METHODS
1. Objective - Tests with a single correct answer, e.g. multiple-choice, true/false or matching items.
2. Essay - You are tested on how vast your knowledge and understanding of the subject are. Essay test responses can be either restricted or extended.
3. Performance-based - Presentation, technical report, project, athletics, demonstration.
4. Oral question - Oral exam, conference, interviews.
5. Observation - Informal or formal observation.
6. Self-report - Uses surveys or questionnaires in which students select a response themselves. It involves asking learners about their feelings, attitudes and beliefs.
BALANCE
Balanced assessment allows educators to link assessments through clearly defined learning targets, provide multiple sources of evidence to support decision-making, and document progress over time.
An assessment is balanced if it considers all domains of learning and as many intelligences as possible.
FAIRNESS
A fair assessment provides an opportunity for all students to demonstrate achievement:
1. Students have knowledge of the learning targets and assessment.
2. Students possess the prerequisite knowledge and skills.
3. Students are given equal opportunity to learn.
4. Students are free from biased assessment tasks and procedures.
COMMUNICATION
1. Assessment targets and standards should be communicated.
2. Assessment results should be communicated to their intended users.
3. Assessment results should be communicated to students through direct interaction.
POSITIVE CONSEQUENCES
1. Assessment should have a positive consequence for students; that is, it should motivate them to learn.
2. Assessment should have a positive consequence for teachers; that is, it should help improve the effectiveness of their instruction.
ETHICS IN ASSESSMENT
The term "ethics" refers to questions of right and wrong. When teachers think about ethics, they need to ask themselves whether it is right to assess a specific kind of knowledge or investigate a certain question. Are there some aspects of the teaching-learning situation that should not be assessed? Here are some situations in which assessment may not be called for:
• Requiring students to answer a checklist of their sexual fantasies;
• Asking elementary pupils to answer sensitive questions without the consent of their parents;
• Testing the mental abilities of pupils using an instrument whose validity and reliability are unknown.
ETHICS IN ASSESSMENT
1. Teachers should free students from the harmful consequences of misuse or overuse of various assessment procedures, such as embarrassing students and violating students' right to confidentiality.
2. Teachers should be guided by the laws and policies that affect their classroom assessment.
3. Administrators and teachers should understand how to appropriately use standardized measures of student achievement when gauging teaching effectiveness.
PRACTICALITY AND EFFICIENCY
Another quality of a good assessment procedure is practicality and efficiency. An assessment procedure should be practical in the sense that the teacher is familiar with it, it does not require too much time, and it is in fact implementable. A complex assessment procedure tends to be difficult to score and interpret, resulting in frequent misdiagnosis or too long a feedback period, which may render the test inefficient.
PRACTICALITY AND EFFICIENCY
1. Teacher familiarity with the method
2. Time required
3. Complexity of administration
4. Ease in scoring
5. Ease of interpretation
VALIDITY
Validity, in recent years, has been defined as referring to the appropriateness, correctness, meaningfulness and usefulness of the specific conclusions that a teacher reaches regarding the teaching-learning situation.
Validity is the degree to which an instrument measures what it intends to measure. It is a characteristic of a test that pertains to the appropriateness of the inferences, uses, and results of the test or any data-gathering method. It is considered the most important criterion of a good assessment instrument.
VALIDITY
Let us explore the various ways of establishing validity:
1. Face validity is established by examining the physical appearance of the test instrument.
2. Content/curricular validity is established by ensuring that the test objectives match the lesson objectives; in other words, the lesson objectives are reflected in the test items. A table of specifications ensures that the appropriate learning targets (the lessons discussed in class) are the ones assessed in the test.
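A table of specifications can be sketched as a simple mapping from lesson objectives to planned test items, so that item coverage can be checked against instructional emphasis. The objective names, hours and item counts below are hypothetical, for illustration only.

```python
# Hypothetical table of specifications: each lesson objective mapped to
# (hours of instruction, number of test items planned for it).
table_of_specs = {
    "Recall key dates":         (2, 10),
    "Explain cause and effect": (3, 15),
    "Apply concepts to cases":  (5, 25),
}

total_hours = sum(hours for hours, _ in table_of_specs.values())
total_items = sum(items for _, items in table_of_specs.values())

# Content validity check: each objective's share of items should roughly
# match its share of instructional time.
for objective, (hours, items) in table_of_specs.items():
    time_share = hours / total_hours
    item_share = items / total_items
    print(f"{objective}: time {time_share:.0%}, items {item_share:.0%}")
```

In this invented example the item shares mirror the time shares exactly; in practice a mismatch would flag objectives that are over- or under-assessed.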
VALIDITY
3. Criterion-related validity is established statistically: the set of scores produced by the measuring instrument is correlated with the scores obtained on another external predictor or measure. It provides validity by relating an assessment to some valued measure (criterion) that can either estimate current performance (concurrent validity) or predict future performance (predictive validity).
a. Predictive validity is established by correlating sets of scores obtained from two measures given at a longer time interval, in order to describe the future performance of an individual.
b. Concurrent validity is established by correlating sets of scores obtained from two measures given concurrently, in order to describe the present status of an individual.
VALIDITY
4. Construct-related validity determines whether an assessment is a meaningful measure of an unobservable trait or characteristic such as intelligence, reading comprehension, honesty, motivation, attitude, learning style or anxiety. It is established by statistically comparing the psychological factors that affect the scores in a test. There are two ways in which construct-related validity is established: convergent validity and divergent validity.
a. Convergent validity is established if an instrument correlates with a measure of a similar trait other than the one it intends to measure. For example, a mathematics anxiety test may be correlated with an attitudinal test.
b. Divergent validity is established if an instrument describes only the intended trait and not other traits. For example, a critical thinking test may not be correlated with a language ability test.
RELIABILITY
Reliability, on the other hand, is defined as the instrument's consistency.
The reliability of an assessment method refers to its consistency; the term is synonymous with dependability or stability. Reliability of a test may also mean the consistency of test results when the same test is administered at two different times. This is the test-retest method of estimating reliability; the estimate of test reliability is then given by the correlation of the two sets of results.
RELIABILITY
WHAT AFFECTS THE RELIABILITY OF A TEST?
• Inconsistency of the scorer as a result of subjective scoring;
• Incidental or accidental exclusion of some materials from the test, resulting in a limited sample;
• Changes in the individual examinee and his or her instability during the examination;
• The testing environment.
Thank you!
Report of Group 1 - Date: March 6, 2023
Alexis S. Cadorna
Angeline S. Cayanan
Carla Mae Cajade
Claire R. Lung-ayan
Cliffort Pua
Erica Mae T. Madduma
Frennylen C. Asuncion
James Buscayno