Principles of Student Assessment
6. Prof. Dr. V. Sathyanarayanan MBBS, MD
SRM MCH & RC
*PRINCIPLES OF STUDENT ASSESSMENT
7. *
*AT THE END OF THIS SESSION, PARTICIPANTS WILL BE ABLE TO
1. Name the types of assessment
2. Compare formative and summative assessment
3. Explain the characteristics of an assessment tool
4. Identify the steps in student assessment
5. Enumerate the steps in the choice of an assessment tool
6. Express enthusiasm to apply the principles of student assessment in everyday teaching practice
7. Prepare an appropriate assessment tool
8. *
*The chief goal is to bring about desirable change in
Knowledge,
Skills, and
Attitudes
in the learner
16. *
*What is assessment?
*Why assess?
*Parts of assessment
*Types
*Characteristics
*Assessment tools
*Steps
*Limitations
*Summary
17. *
*The process of determining whether predetermined educational objectives have been achieved…
19. *
*Assessment and evaluation are often used interchangeably
*However, for our purposes…
*Assessment describes the measurement of learner outcomes
*Evaluation describes the measurement of course/program outcomes
21. *
*Provides baseline data
*Provides summative and formative feedback
*“Drives learning”
*Allows individual progress to be measured
*Encourages “student” reflection
*Assures the public that providers are competent
*Licensure/credentialing requirements
29. *The two parts of assessment:
1. OBJECTIVE MEASUREMENT: marks, rank, percentile
2. VALUE JUDGEMENT: regarding the desirability of the measured result; used to revisit the student learning outcomes (SLOs) and the teaching-learning (T-L) process and decide on changes
35. Crucial Distinction
Assessment OF Learning (Summative):
How much have students learned
as of a particular point in time?
Assessment FOR Learning (Formative):
How can we use assessment to help students learn more?
36. *Formative assessment:
*Done for planning teaching-learning (T-L)
*For student development
*Done at the classroom level
*To provide feedback for students and teachers
*Can help to modify T-L methods
*Done before the final test of competence
*Criterion referenced
*Summative assessment:
*Done for scoring/grading or pass/fail decisions
*Done at a wider level (school, college, university, national)
*Used to find out competence before certification
*Criterion referenced
37. *“When the cook tastes the soup, that’s formative assessment;
38. *when the customer tastes the soup, that’s summative assessment.” (Brookhart, 1999)
42. •To attain “competent performance”, the basic Knowledge, Skills & Attitudes (KSAs) are required
•Competence is the application of specific KSAs
•Performance is the “translation of competence into action”
55. *
*VALIDITY
*The most important characteristic of an assessment tool
*Refers to the results of the test and the choice of the test instrument
*Ask: “What is the learning outcome to be measured?”
*A matter of degree (less valid or more valid)
*A concept used to evaluate many facets of clinical competency
59. *
*RELIABILITY
*A measure of the reproducibility of the results of a test
*Measured as the correlation between two sets of scores obtained when the test is repeated after an interval (test-retest method)
*Or when the test is split into two halves and the results are compared (split-half method); both are sketched below
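A minimal Python sketch of the two methods just named, with hypothetical marks; nothing here comes from the slides except the two procedures (test-retest, and split-half with the Spearman-Brown correction).

```python
# Reliability as correlation: test-retest and split-half methods.

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Test-retest: the same students sit the same test after an interval.
first_sitting  = [62, 71, 55, 80, 68, 74]   # hypothetical marks
second_sitting = [65, 69, 58, 83, 66, 76]
print("test-retest r =", round(pearson_r(first_sitting, second_sitting), 2))

# Split-half: correlate the two half-test totals, then apply the
# Spearman-Brown correction to estimate full-length reliability.
odd_items  = [30, 36, 27, 41, 33, 38]       # hypothetical half totals
even_items = [32, 35, 28, 39, 35, 36]
r_half = pearson_r(odd_items, even_items)
print("split-half reliability =", round(2 * r_half / (1 + r_half), 2))
```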
61. *
*Reliability can be improved by:
1. Increasing the length of the test to an optimum level (quantified below)
2. Modifying difficulty and discrimination levels
3. Maintaining constant conditions
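A standard psychometric result, not stated on the slide but behind measure 1, is the Spearman-Brown prophecy formula: if a test with reliability $r$ is lengthened by a factor $n$ (with comparable items), the predicted reliability is

$$r_{\text{new}} = \frac{n\,r}{1 + (n-1)\,r}$$

For example, doubling a test with reliability 0.60 predicts $\frac{2 \times 0.60}{1 + 0.60} = 0.75$: lengthening helps, but with diminishing returns.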
63. *
*RELEVANCE
*Assessment must be appropriate in the context of the needs
*The relevance should be obvious to both the teacher and the student
67. *
*OBJECTIVITY
*The degree to which the assessment adheres to specific criteria
*Measures to increase the objectivity of scoring:
1. Structuring the questions
2. Preparing model answers
3. Agreement on a marking scheme
4. Independent assessment by more than one examiner (agreement sketched below)
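A minimal sketch (hypothetical gradings, plain Python) of how measure 4 can be checked: Cohen's kappa scores the agreement between two independent examiners beyond what chance alone would produce.

```python
# Cohen's kappa for two examiners grading the same candidates.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    # Chance agreement: both raters happen to give the same grade.
    expected = sum(ca[g] * cb[g] for g in ca) / (n * n)
    return (observed - expected) / (1 - expected)

examiner_1 = ["pass", "pass", "fail", "pass", "fail", "pass"]
examiner_2 = ["pass", "fail", "fail", "pass", "fail", "pass"]
print("kappa =", round(cohens_kappa(examiner_1, examiner_2), 2))  # 0.67
```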
68. *
*FEASIBILITY
*The degree to which the process of assessment is practical and possible to implement in the circumstances:
*Time
*Expertise
*Cost
76. *Assessment of ‘Knows’ and ‘Knows How’:
1. Long Essay Questions (LEQ)
2. Short Answer Questions (SAQ)
3. Multiple Choice Questions (MCQ)
4. Extended Matching Items (EMI)
5. Key Features Test (KF)
6. Oral Examination/Viva
77. *Assessment of ‘Shows How’:
1. Long Case
2. Short Case
3. Objective Structured Clinical Examination (OSCE)
4. Objective Structured Practical Examination (OSPE)
78. *Assessment of ‘Does’
1. Mini Clinical Evaluation Exercise (Mini-CEX)
2. Direct Observation of Procedural Skills (DOPS)
3. Clinical Work Sampling (CWS)
4. Checklist
5. 360-Degree Evaluation
6. Logbook
7. Portfolio
79. *
*Most commonly, long answer or essay questions for theory (assessment of knowledge)
*Long case or short case for clinical examination
*Oral examination for the assessment of practical skills
*OSCE and OSPE increase reliability
*Log book or diary for attitude assessment
80. *
*Low-stake examinations → High-stake examinations:
*Long essay question → Multiple short answer questions
*Traditional long case → Multi-station OSCE
82. *Steps in student assessment (see the sketch after this list):
1. Define learning objectives
2. Provide T-L experiences
3. Select a measuring instrument
4. Administer the test
5. Decide on a marking scheme
6. Score the test
7. Analyse the results
8. Make a final decision
9. If the result is not right, choose an alternative method
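A minimal Python sketch of the cycle above; every name here is illustrative, not from the slides. The key point is step 9: an unsatisfactory analysis sends you back to step 3 to pick another instrument.

```python
# Steps 3-9 as a loop: try each candidate instrument until the
# analysis of the results is acceptable (all names are hypothetical).

def assessment_cycle(objectives, instruments, analyse):
    for administer in instruments:        # step 3 (and step 9 fallback)
        scores = administer(objectives)   # steps 4-6: give and score the test
        if analyse(scores):               # step 7: analyse the results
            return scores                 # step 8: make a final decision
    raise RuntimeError("no suitable instrument; revisit the objectives")

# Hypothetical usage: accept the instrument if the mean mark is plausible.
mcq_paper = lambda objectives: [55, 62, 48, 70]
sensible = lambda scores: 30 <= sum(scores) / len(scores) <= 85
print(assessment_cycle(["recall drug classes"], [mcq_paper], sensible))
```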
85. KNOWLEDGE – CAN STUDENTS RECALL INFORMATION?
Who, What, Where, When, How
Which one
How much
Name
Describe
Label
Define
List
Memorise
Reproduce
Literal questions
Recall
86. COMPREHENSION – CAN STUDENTS EXPLAIN IDEAS?
Explain
What are they saying
Describe in your own words
Explain what is happening
Inferential questions
Give an example
Summarise
State in 5 words
What would go better
Explain what is meant
Select the definition
What restriction would you add
Read the graph/table
Translate
This represents
Outline
Condense this paragraph
Locate
What part doesn’t fit
Match
90. *
*“Shows How”
*“demonstration of skills in a controlled setting” (Scalese, 2008)
*Education in these methods includes simulation-based education (SBE).
*OSCE, OSPE, Simulations, log books, portfolios
*Technical skills
*Includes higher level thinking
*“Does”
*Moves from simulated environment to the real life setting
92. *
*When choosing the assessment instrument,
the following should be answered:
*Is it valid?
*Is it reliable?
*Is it feasible?
94. *
*Are we measuring what we are supposed to be measuring?
*Use the appropriate instrument for the knowledge, skill, or attitude you are testing
*The major types of validity should be considered (content, predictive, and face)
95. *
*Does the test consistently measure what it is supposed to be measuring?
*Types of reliability:
*Inter-rater (consistency over raters)
*Test-retest (consistency over time)
*Internal consistency (over different items/forms; sketched below)
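A minimal sketch of the usual internal-consistency estimate, Cronbach's alpha, with hypothetical item scores; the slide names the concept, while the statistic and data here are illustrative.

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    """items: one list per test item, scores in the same student order."""
    k = len(items)
    totals = [sum(student) for student in zip(*items)]
    return (k / (k - 1)) * (1 - sum(variance(i) for i in items) / variance(totals))

# Four items answered by five students (hypothetical 0-5 marks).
items = [[3, 4, 2, 5, 4],
         [2, 4, 3, 5, 3],
         [3, 5, 2, 4, 4],
         [2, 3, 2, 5, 3]]
print("alpha =", round(cronbach_alpha(items), 2))  # 0.92 here
```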
96. *
*Is the administration of the assessment instrument feasible in terms of time and resources?
*Time to construct?
*Time to score?
*Ease of interpreting the score/producing results?
*Practical given staffing/organization?
*Quality of feedback?
*Learner takeaway?
*Does it motivate the learner?
97. *
*Number of students to be assessed
*Time available for the assessment
*Number of staff available
*Resources/equipment available
*Special accommodations
99. *The correlation between the performance of students in examinations and their performance on the job is less than ideal…
104. Critical questions in assessment
1.WHY are we doing the assessment?
2.WHAT are we assessing?
3.HOW are we assessing it?
4.HOW WELL is the assessment working?
105. 1. WHY are we doing the assessment?
What is its purpose?
Formative?
Summative?
106. 2. WHAT are we testing?
Elements of competence:
Knowledge: factual; applied (clinical reasoning)
Skills: communication; clinical
Attitudes: professional behaviour
Tomorrow’s Doctors, GMC 2003
107. 3. HOW are we doing the assessment? Test formats (restated as a lookup table below)
Knows: factual tests: SBAs (MCQs)
Knows how: (clinical) context-based tests: SBAs, EMQs, SAQs
Shows how: performance assessment in vitro: OSCEs
Does: performance assessment in vivo: video, WBA e.g. mini-CEX, DOPS
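A tiny restatement of the slide's mapping as a Python lookup table; the dict structure is illustrative, while the content is taken from the slide.

```python
# Miller's levels mapped to test formats, as listed on the slide.
TEST_FORMATS = {
    "knows":     "factual tests: SBAs (MCQs)",
    "knows how": "(clinical) context-based tests: SBAs, EMQs, SAQs",
    "shows how": "performance assessment in vitro: OSCEs",
    "does":      "performance assessment in vivo: video, WBA (e.g. mini-CEX, DOPS)",
}
print(TEST_FORMATS["shows how"])  # -> performance assessment in vitro: OSCEs
```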
108. 4. HOW WELL is the assessment working?
Evaluation of assessment systems
•Is it valid?
•Is it reliable?
•Is it doing what it is supposed to be doing?
•To answer these questions, we have to consider the characteristics of assessment instruments
109. Principles of Assessment
There is no perfect assessment: compromise is always required
The compromise depends on the context of the assessment
The quality of assessments is a matter of the integral assessment programme rather than of the individual instruments
111. *“No single assessment method can provide all the data required for judgment of anything so complex as the delivery of professional services by a successful physician.” (George Miller, 1990)
113. *
*AT THE END OF THIS SESSION, PARTICIPANTS WILL BE ABLE TO
1. Name the types of assessment
2. Compare formative and summative assessment
3. Explain the characteristics of an assessment tool
4. Identify the steps in student assessment
5. Enumerate the steps in the choice of an assessment tool
6. Express enthusiasm to apply the principles of student assessment in everyday teaching practice
7. Prepare an appropriate assessment tool
115. *
*Three questions for choosing an appropriate assessment tool (see the sketch after this list):
*What is the basic purpose of the assessment?
(formative or summative)
*What is the predominant domain involved?
(cognitive, psychomotor, affective)
*How many students are to be assessed?
(feasibility of the test)
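An illustrative Python sketch of the three questions as a selection function; the mapping is a simplification for demonstration, not a rule from the slides.

```python
# Choose a tool from: purpose (formative/summative), predominant
# domain (cognitive/psychomotor/affective), and cohort size.

def choose_tool(purpose, domain, n_students):
    if domain == "cognitive":
        # Large cohorts favour machine-scorable formats (feasibility).
        return "MCQs/EMIs" if n_students > 100 else "SAQs with model answers"
    if domain == "psychomotor":
        return ("multi-station OSCE/OSPE" if purpose == "summative"
                else "DOPS with structured feedback")
    return "logbook/portfolio with 360-degree evaluation"  # affective domain

print(choose_tool("summative", "cognitive", 250))  # -> MCQs/EMIs
```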