2. Take a few minutes to journal on the topic: What do you know about using classroom assessment? What puzzles you? How can you explore this topic today?
4. Use of assessment: Happens? To what degree?
   Decisions about curriculum alignment
   Decisions about students' prior knowledge
   Decisions about how long to teach something
   Decisions about effectiveness of instruction
11. What does it look like?
    Valid and reliable shooting
    Unreliable and invalid shooting
    Reliable but invalid shooting
12. Valid? Reliable? Both? Neither? Mertler, Craig A. (2003). Classroom Assessment: A Practical Guide for Educators. Los Angeles, CA: Pyrczak Publishing.
14. To Sum Up: When selecting and implementing assessments to augment state and classroom formative assessment, ask yourself these important questions!
Editor's Notes
Are we assessing for the sake of assessment, or are we assessing purposefully and thoughtfully, in a manner that makes the time invested worth it?
Use your data teams to help you prioritize your instructional goals. Do you have power or priority standards established? How do you determine your core standards? If this work is done alone, you run into the problem of differing priorities and interpretations. Make it part of a collaborative effort toward a vertical continuum and congruence across grades.
S-17 Just an organizer for you. As we look at the four suggested uses of classroom assessment, think about whether these elements are part of your classroom practice. If you are an administrator, how aware are you of these practices occurring in your classrooms?
Note that the post-test-only model doesn't factor in students' pre-instructional status, which makes it difficult to determine whether instruction or other factors impacted student learning. Pretesting, then comparing post-test results to pretest results, allows a teacher to determine his or her instructional impact on student learning; the student becomes his or her own "control" in this model.

Where high student mobility is an issue, analyze the cohort of students who were both pre- and post-tested when evaluating your own impact. Evaluate post-test scores for all students to gauge student mastery of the content, but use the cohort pre-to-post comparison to help you understand your instructional impact.

This can also be a time saver, because it wraps back to what we said about assessing prior learning. You may already be doing this, but how intentional are you? How are you using the data teams/PLCs to help you do this work? Helps determine instructional impact. Students can chart their own progress. Recommendation 2, IES Practice Guide.
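The cohort logic above can be sketched in code: compute gains only for the matched pre/post cohort to estimate instructional impact, but use every post-test score to gauge mastery. A minimal sketch; the student labels, score values, and cut score are hypothetical illustration data.

```python
def cohort_gains(pre_scores, post_scores):
    """Return gains only for students with BOTH a pre and a post score.

    With high student mobility, some students have only one score;
    restricting to the matched cohort isolates instructional impact,
    since each student serves as his or her own control.
    """
    cohort = pre_scores.keys() & post_scores.keys()
    return {s: post_scores[s] - pre_scores[s] for s in cohort}


def mastery_rate(post_scores, cut_score):
    """Use ALL post-test scores to gauge mastery of the content."""
    passed = sum(1 for score in post_scores.values() if score >= cut_score)
    return passed / len(post_scores)


pre = {"A": 40, "B": 55, "C": 62}            # student "D" enrolled after the pretest
post = {"A": 72, "B": 80, "C": 85, "D": 90}

gains = cohort_gains(pre, post)              # matched cohort only: {"A": 32, "B": 25, "C": 23}
avg_gain = sum(gains.values()) / len(gains)  # instructional impact on the cohort
rate = mastery_rate(post, 75)                # all students: 3 of 4 at or above the cut score
```

Keeping the two calculations separate mirrors the note: mastery is a question about all students, while impact is a question about the students you actually taught from start to finish.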
Discuss and share out. Have someone record a list popcorned out by participants.
So how do you do it? “Flexible, en route test-guided instructional scheduling can allow your students to move on to fascinating application activities or delve more deeply into other content areas.” page 12.
Have participants read the benchmark, testing tactics, and instructional implications on page 24 of the book. See page 23 for the set. Mention this is an oversimplification for illustration only.

This is going to lead to the discussion of test-triggered instruction, where teachers get an idea of how the content standard is operationalized by reflecting on the test item and considering the cognitive demand of the task, as well as the skills needed to successfully answer the item. This should lead into a discussion about generalizable skill mastery. Teaching to the test results in a narrow focus on the specific tasks presented in the items. Teaching toward test-represented targets, however, considers several different ways the content standard is tested.

The follow-up is to determine the skills, knowledge, subskills, prior knowledge, and cognitive demand of the task, and to use this information to build instruction and classroom assessment. When teachers use diverse methods of assessing a content standard, they are seeking to get a fix on students' generalizable mastery. The more diverse the assessment techniques, the stronger the inferences you can draw about the cognitive demand your assessments are placing on your students. Once you have an idea of the cognitive demand of the task and your students' readiness, you can use diverse, assessment-grounded instructional methods to build generalizability of the skills and knowledge.
S-18 of supplemental materials. Meet with a clock buddy to discuss your 3 big ideas. Switch to the next clock buddy for further sharing, to make sure people who did the homework get a chance to be with someone else who did the homework.
What constitutes validity in creating an exam? In selecting one? Look for alignment. The curriculum is so large that you can't hit it all, so prioritize: instruction is targeted at the most important subsets of the curriculum, the power standards. The assessment samples the content you taught.
S-19 Put this in the supplemental materials and refer to it now for the team exercise. What do validity and reliability look like? Have a handout of this for them to work on individually; then they discuss with a partner; then discuss the next slide. The targets represent what you are trying to measure: a standard, a curricular goal, etc. The green dashes are the scores on an assessment designed to measure the target. Think of these as four different tests of the same standard. A group of students with the same ability level takes each of the four exams; the green dashes represent their scores on each of the four tests. What does each one represent in terms of validity and reliability?
Validity and reliability of scores or test results determine the extent to which you can make meaningful interpretations or inferences, as well as the degree of confidence you can place in those interpretations. This is particularly important when using scores to predict future performance.
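The target metaphor can be made concrete with a small sketch: for a group of equal-ability students, reliability shows up as a tight cluster of scores, and validity as that cluster sitting on the target. The score values and tolerance thresholds below are hypothetical, chosen only to illustrate the four target pictures.

```python
from statistics import mean, pstdev


def describe(scores, target, spread_tol=3, bias_tol=3):
    """Classify a set of scores on the same standard as (reliable, valid).

    reliable: the scores cluster tightly (small spread).
    valid: the cluster actually sits on the target; by convention,
    an unreliable measure cannot yield valid scores.
    """
    reliable = pstdev(scores) <= spread_tol
    on_target = abs(mean(scores) - target) <= bias_tol
    valid = reliable and on_target
    return reliable, valid


target = 80
describe([79, 80, 81, 80], target)  # tight cluster on target: valid and reliable
describe([64, 65, 66, 65], target)  # tight cluster off target: reliable but invalid
describe([60, 95, 70, 92], target)  # scattered scores: unreliable and invalid
```

The three calls correspond to the three "shooting" patterns on the slide: consistent and on target, consistent but off target, and inconsistent.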
These questions sum up the assessment framework process we've been engaged in for the day. These questions should be part of your consideration when acquiring or developing assessment systems. DIBELS: the idea of form effects, because the scales aren't built for progress monitoring or for adjusting for skill development over time; parallel forms are not available, or are insufficient in number, to avoid form effects.