By Bowen Kerins
A wide-ranging team worked together to develop the Illustrative Mathematics Grades 6-8 Math curriculum. As Assessment Lead, it was my responsibility to write and curate a Shared Understandings document about assessment, which we used throughout the writing process, and I thought you might be interested to read some of its key features.
This quote drives a lot of the ideas about assessment:
“You want students to get the question right for the right reasons and get the question wrong for the right reasons.” – Sendhil Revuluri
The statement above is particularly true for multiple-choice items, and it marks a shift from the way I used to write items as a high school teacher. If there were a likely “sign error”, I would include it as a distractor, because surely some students would make that mistake. But this is the wrong reason to be wrong: the item is meant to test a particular standard, and distractors should have good reasons for being selected that are relevant to the standard(s) being addressed.
- In general, assessment items should be targeted and short.
This is particularly true for application problems, which frequently include sentences or whole paragraphs that are irrelevant to the task at hand.
- Items must stand in isolation, never using the result of another item. These “double whammy” items penalize students who make an error or skip items. Each part of an extended response item must not depend on answering a previous part correctly; each part should be answerable even if all previous parts have been skipped, to give students every possible opportunity to show proficiency. When a “double whammy” seems unavoidable, think about what information would put a student in the same position as one who answered the first part correctly. Typically, a “restart” of the item with a different name, object, equation, or example of the same context avoids the “double whammy”. For example, if part (a) asks students to write an equation modeling a context and part (b) asks them to solve it, part (b) can instead supply a fresh equation from a parallel context to solve.
Specifically, never ask students to use their work from part (a) to do part (b): if they could not solve part (a), they have no way to demonstrate the skill intended by part (b).
- Items must be method-agnostic whenever possible. Avoid “Use [method] to solve [problem]”, because this can force students into a method or representation that runs counter to their preferred approach.
Just write “Solve [problem]”.
- Assessments as a whole should reflect a varied depth of knowledge, including items that would be rated as DOK 1, 2, or 3 on Webb’s Depth of Knowledge chart. In general, an assessment should be about 40% DOK 1, 40% DOK 2, and 20% DOK 3 (on a 10-item test, roughly four, four, and two items). The most common error is including too few DOK 1 items.
It’s okay to include a few fastballs on a test!
- A student who has mastered the target skill should ideally be able to answer a multiple-choice item without looking at the options, then find their answer among them. In some cases it is necessary to have the student discriminate among the options, but if this can be avoided, do so. For example, “Which of these points is in Quadrant II?” can be improved by asking “Which quadrant is (-3,4) in?”: a student who knows the convention can reason directly that a negative x-coordinate and a positive y-coordinate put the point in Quadrant II, no options needed.
And this is the biggest one for multiple-choice:
- Think carefully about the logic a student might use to respond to the item, and whether there are significant, relevant conceptual errors a student could make and still arrive at the correct response. Pick correct responses accordingly, or use distractors to catch these errors. For example, consider “Which of these fractions is largest? 1/3, 1/4, 2/7, 3/8”. It seems fine, but a student whose process is “a fraction with a larger numerator is larger” will select 3/8 and be correct for the wrong reason. Here, 2/5 would be a better correct answer (replacing 2/7); if 3/8 must be the correct response, include at least one distractor with a larger numerator.
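For reference, a quick decimal check of the fractions in this example makes the issue concrete:

$$\tfrac{1}{4} = 0.25 \;<\; \tfrac{2}{7} \approx 0.286 \;<\; \tfrac{1}{3} \approx 0.333 \;<\; \tfrac{3}{8} = 0.375 \;<\; \tfrac{2}{5} = 0.4$$

Among the original options, 3/8 is both the largest fraction and the one with the largest numerator, so the flawed heuristic is rewarded; with 2/5 in the set, the largest fraction no longer has the largest numerator, and the heuristic leads to a wrong answer.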
There is a lot more to say, but hopefully this gives you a sense of the depth of thought the Illustrative Mathematics team put into these materials. Go here for access to the materials. Thanks for reading!