Taking a quiz, a student is presented with the question below:
Unfortunately, the student does not know the answer, so they leave the box blank and move on to the next question. The student probably hasn’t achieved much, but at least there’s no harm done.
Let’s say the student sees the question below instead:
In this case, the student doesn’t know, so they give it a good think, and pick the third option:
The third choice here is wrong, but unfortunately picking it isn’t the same as leaving it blank like before. Whereas before there was an “ah well, no harm done” element, in the case of MCQs there is potential for a negative effect. This is because when it comes to multiple choice, there are two* concerns that do not exist with short answer questions:
Concern 1: Students seeing a wrong answer increases the likelihood that later they will repeat that wrong answer. This would work through something called the illusory truth effect — which is “the tendency to believe false information to be correct after repeated exposure”.
Concern 2: When students arrive at a wrong choice through a logical process of reasoning (e.g. “geo means Earth and centric means to go round, so geocentric is about Earth going round something”), that wrong answer is more likely to be selected in subsequent questioning sessions. This may be because the wrong answer becomes “learnt” in the same way that anything we think hard about becomes learnt.
These concerns are legitimate, but thankfully as far as the evidence base is concerned they are easily mitigated. Top memory researchers Roediger and Butler write that:
“Although the potential for negative effects from multiple-choice tests is a real problem, the good news is that there is a simple solution: provide students with feedback. If feedback is provided after a multiple-choice test, the negative effects are completely nullified.”
Which is, on the face of it, very good news, because Carousel automatically provides students with feedback on their answers:
Problem solved, then. Student gets it wrong. Student sees feedback. Student knows their answer was wrong.
Ever the skeptic, though, I worry about what students actually do with all these things. Are students carefully reading through every response and looking at what they got right or wrong? Or are they just mashing the “next” button in a bid to get through it, pressing “submit” at the end and then going off to play video games?
I suspect that if left to their own devices, students would do the latter. This means that even though feedback is being sent, it is not being received.
If the feedback is not being properly received, we are led right back to our central problem: how do we prevent students from “learning” the wrong answer?
I think there are three complementary routes to circumventing this problem:
1. Don’t over-rely on MCQ quizzes: as we said in the second post in this series, you might ordinarily want your students’ first quizzes to be MCQs, but you should be aiming to get them onto short answer quizzes as soon as they are ready. This prevents students becoming overly familiar with false information (see Concern 1 above).
2. Use your review wisely: when reviewing student responses to an MCQ quiz, you can broadly separate class answers into three piles**:
A) Questions which most students got right:
In these questions, you will potentially want to deal individually with the students who got it wrong, but there is no great need to spend a long time going over this particular question.
B) Questions with a broad range of responses:
If every student knew the answer, you would have 100% at A. If every student were guessing, you would have roughly 25% on each of the four options. If your spread of responses is closer to “25% on each option” than it is to “100% at A”, it’s likely you have lots of students guessing. And if lots of students are guessing, we can surmise that they simply don’t know the material. Guessing doesn’t really involve a process of thinking hard, so we aren’t that worried about Concern 2 above. In this case, note that students just don’t know the content, and simply reteach it.
C) Questions with a big spike at a particular wrong answer:
This is a bit more worrying. We’re quite far away from the even distribution we would get if students were guessing, and it looks like students have deliberately plumped for one wrong answer in particular. In this case, that wrong answer is a classic misconception: for thousands of years people thought that the Sun moves across the sky because it circles the Earth, and in this case students have thought about the question, applied some logical reasoning to it, and come to the wrong answer. This lands us smack in Concern 2 territory: that students have thought hard about the wrong answer.
In this case, your feedback response is absolutely crucial. You must make sure that students hear very clearly that answer C is wrong, and you should explain why it is wrong. A reteach of the “right” answer isn’t enough: you must unpick and unravel the misconception, and make sure this feedback lands in your students’ heads via a follow-up Check for Understanding, an independent practice session and a subsequent quiz.
3. Even more pronounced use of the Whole Class Feedback report: let’s say this quiz was due on a Wednesday. At the start of your Wednesday lesson with this class, display the first part of the Feedback report:
Have students answer these three questions in the same way that they would answer a normal Whiteboard quiz. You can then use their responses to establish if students followed the “mash button and ignore Carousel’s feedback” route. If they did take this route, they will get these questions very wrong in class. If they did not take this route, and instead spent time and care over their quiz feedback, then they are more likely to get it right in class. What you do with that information is of course up to you, but it allows you to verbally emphasise the importance of “not leaving your screen until you know the right answer” and build a culture over time whereby it is more likely that students will take the quiz feedback seriously.
Routes 1 and 2 are remedial: they are saying “given that students won’t read the feedback properly, we need to do x, y and z.” This route is anticipatory: it is saying “how can I set things up so that when students do the next quiz they read the feedback properly?” Over the long term, a route like this will be a lot more powerful, and will help you build a positive culture around quizzing, retrieval and feedback. The quiz they just did is in the past, but using the Feedback Report well might improve your chances in the next quiz.
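For readers who like to see the logic spelt out, the three piles in Route 2 amount to a simple decision rule over the response counts for each question. Below is a minimal Python sketch of that rule; the function name and the two thresholds (70% correct for pile A, a 50% spike on one wrong option for pile C) are my own illustrative assumptions, not anything Carousel itself computes:

```python
def classify_responses(counts, correct, high=0.7, spike=0.5):
    """Sort one question's responses into the three piles.

    counts  -- responses per option, e.g. [3, 2, 14, 1]
    correct -- index of the right option
    high    -- share of correct answers counting as "most got it right"
    spike   -- share on one wrong option counting as a misconception
    (Both thresholds are illustrative guesses; tune them to your classes.)
    """
    total = sum(counts)
    shares = [c / total for c in counts]
    # Pile A: the correct option dominates.
    if shares[correct] >= high:
        return "A: mostly right"
    # Pile C: one *wrong* option has attracted a big spike.
    if max(s for i, s in enumerate(shares) if i != correct) >= spike:
        return "C: misconception spike"
    # Pile B: responses are spread out, closer to the 25%-each guessing pattern.
    return "B: broad spread (likely guessing)"
```

So `classify_responses([3, 2, 14, 1], correct=0)` lands in pile C: most of the class has converged on one wrong option, which is the misconception case that needs the careful corrective feedback described above.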
In sum: there is very little in education that is a universal good with no potential to do harm. Smart teaching is about anticipating and mitigating that harm. In the case of MCQs, we need to anticipate the negative effects of students “getting it wrong” and work as hard as we can to ensure they receive timely and pointed corrective feedback.
*I do have another concern: broadly, memory gets really messy when it comes to things which make us look good. We tend to romanticise the past in order to build our self image and I’m worried that someone would pick the wrong answer in a MCQ (e.g. “photosynthesis”), then later when doing it again remember that they picked photosynthesis, and assume they were right last time (as part of romanticising the past). I couldn’t find any research that directly tested this, so I didn’t put it in the main body of the blog — it’s just a theory.
**Here I am using a demo quiz, which only has around 10 responses per question. The more responses you have, the greater the reliability of your inferences. The main point is to look for these patterns of response. Please don’t haggle with my sample size.
☞ TO ENJOY A FREE TRIAL OF CAROUSEL GOLD OR TO PURCHASE A CAROUSEL GOLD SUBSCRIPTION (just £60 p.a.), LOG IN TO YOUR ACCOUNT HERE AND GO TO THE ‘MY ACCOUNT’ SECTION.
☞ TO FIND OUT ABOUT PRICES FOR CAROUSEL PLATINUM, CLICK HERE.
☞ TO LEARN MORE ABOUT CAROUSEL’S GOLD AND PLATINUM TIERS, CLICK HERE.