Sarah is about to set a quiz for her students. She opens a Question Bank, and sees that up until this point her students have covered about 250 questions. She only wants to choose 20 for her quiz, so she starts to think about three variables:
- Prior performance: she wants to choose questions that students haven’t done so well on before. Obviously, there are diminishing returns from her students continually practising things they are already good at, so she wants to make sure to target gaps in their knowledge.
- Times asked: there are some questions that she may only have asked once or twice. So even if scores were good in the past, she might want to ask that question again just to be on the safe side.
- Time since it was last asked: there could be a question that students were asked six months ago, then again a couple of weeks later and then again a couple of weeks after that. They might have done really well on it then, but that was still five months ago. Memory fades, and it fades fast, so Sarah wants to make sure that students see the question again soon.
Hopefully the descriptions above show that these three variables are both complementary and in tension. They add a certain complexity to Sarah’s decision making, but it’s a welcome complexity because it shows that she is being deliberate about her question selection, and thinking hard about what’s right for her class. So she cycles through her Question Bank and she picks this question because of that reason, another one because of the other reason, and a third because of a mixture, and so on and so forth.
There is, of course, a bit of a snag: is what Sarah is doing actually possible? Can a teacher retain and store all of that information for each question in a bank that could be hundreds of questions long? How can any normal human remember all that for all their classes and all their subjects? Do they even have time to go through a bank like that when setting work?
I imagine this is beyond most of us; I certainly can't do what Sarah does. So I've relied on Carousel's randomiser: I ask enough questions that over time I get a good spread. I've accepted that even though it might not be perfect and some questions might slip through the cracks, it's the best available option.
At least it was, until now. Because this week we have released Carousel's game-changing new feature: C-Scores.
This feature has a very simple ethos:
‘To support teachers in choosing the right questions at the right time to guarantee better learning’
In practice, this means Carousel does the heavy lifting in terms of number crunching, and then puts the information you need at your fingertips.
The process starts with you setting a quiz, which your students complete. Carousel records their scores on each question and the date each question was set. You set another quiz, and the same thing happens. Over and over, we look at and record performance on each question and its timing.
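In other words, for each question and each class, Carousel ends up with a small history of when it was asked and how the class did. A rough sketch of what such a record could look like (the class name, fields, and method here are invented for illustration; Carousel's actual data model isn't public):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class QuestionHistory:
    """One question's track record for one class (illustrative shape only)."""
    question_id: str
    # Each attempt: (date the quiz was set, class average score on this question)
    attempts: list[tuple[date, float]] = field(default_factory=list)

    def record(self, when: date, avg_score: float) -> None:
        """Log one more outing of this question."""
        self.attempts.append((when, avg_score))

    @property
    def times_asked(self) -> int:
        return len(self.attempts)
```

Each new quiz simply appends another entry, so the three variables Sarah cares about (past scores, how often a question has been asked, and when it was last asked) are all recoverable from this one list.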
Sarah can then access and utilise that information via C-Scores. First, she starts a new quiz or Whiteboard quiz. She goes to select her questions and notices a brand new drop-down:
She chooses a class, and suddenly sees a row of beautiful orange bars:
She scrolls down and sees that some of the bars are bigger than others:
And when she hovers over the bars, she sees question-level information:
When students answered this question, they did really well on it — a score of 85% isn’t to be sniffed at. Carousel gives it 1 point because of that. But it’s been ages since they were asked it, and they were only ever asked it once. Because of this, the question gets more points on those two metrics. So overall, it has 9 points, which we call its C-Score.
She looks at the next question and sees:
This one has a much lower C-Score, because students know it really well. It’s been asked loads of times, and fairly recently.
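To make the arithmetic concrete, here is one way a score like this could be assembled in Python. The point bands below are invented purely so the two worked examples above come out as described; Carousel's real formula may weight the three variables quite differently:

```python
def c_score_sketch(avg_score: float, times_asked: int, days_since_last: int) -> int:
    """Illustrative only: combine the three variables into a single
    'worth asking again' score. Higher = more worth re-asking.
    The point bands are assumptions, not Carousel's actual formula."""
    points = 0
    # Prior performance: weaker past scores earn more points.
    if avg_score >= 0.8:
        points += 1
    elif avg_score >= 0.5:
        points += 3
    else:
        points += 5
    # Times asked: rarely asked questions earn more points.
    if times_asked <= 1:
        points += 4
    elif times_asked <= 3:
        points += 2
    else:
        points += 1
    # Recency: the longer since it was last asked, the more points.
    if days_since_last >= 90:
        points += 4
    elif days_since_last >= 30:
        points += 2
    else:
        points += 1
    return points

# First question: 85% average, asked once, months ago.
c_score_sketch(0.85, 1, 180)  # → 9
# Second question: known well, asked often, asked recently.
c_score_sketch(0.95, 6, 7)    # → 3
```

The key design idea is that the three metrics pull independently: a question can earn a low score on performance but still surface near the top because it is rarely asked or hasn't been seen in months.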
She picks the first question to go in her quiz, and Carousel starts to calculate an average C-Score for the quiz:
Sarah adds the second question too, and the average score changes:
She decides she wants to make the quiz harder, so she uses the sort and filter function to find questions with high C-Scores:
She chooses a selection of these questions, gets an overall C-Score of 11, and displays her Whiteboard quiz:
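The quiz-level figure Sarah sees is just an average over her chosen questions, and the sort-and-filter step is an ordinary sort. A minimal sketch, with made-up question IDs and C-Scores:

```python
# Hypothetical question bank entries: (question_id, c_score)
bank = [("Q1", 9), ("Q2", 3), ("Q3", 12), ("Q4", 11), ("Q5", 6)]

# Sort by C-Score, highest first, then keep only the "hard" questions
# (the threshold of 9 is an arbitrary choice for this example).
hard = [q for q in sorted(bank, key=lambda q: q[1], reverse=True) if q[1] >= 9]

# The quiz's overall C-Score is the mean across its chosen questions.
quiz_avg = sum(score for _, score in hard) / len(hard)
```

Raising or lowering the threshold is how a teacher would tune the quiz's difficulty: a higher cutoff keeps only the questions the class most needs to revisit, pushing the average C-Score up.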
Sarah knows that this quiz is going to be hard for the students, so she might warn them or discuss that with them afterwards. No computer can do that for her — her students trust her, respect her, and want to do well for her — not for the computer:
Put your hands up if you found that quiz quite tricky. Yeah, I thought you might! That’s because I only chose questions that you’ve struggled with in the past or ones that I haven’t asked you enough for them to really be a part of your memory. But I’m so proud of you all for trying them, and we need to realise that this is how we get better: if we only do the easy things or the things we already know, we won’t learn anything. So it might be hard in the short term right now, but it’ll be worth it over time.
Since we launched Carousel, we’ve maintained two explicit goals:
- Help students learn better
- Reduce teacher workload
There’s no doubt that C-Score is another extremely powerful lever for meeting both of these aims. It gives students a quizzing experience that is more responsive to their knowledge as it grows, helping to close knowledge gaps and prevent memory fade. It also makes teachers’ lives easier by using an enormous database of past performance to inform their decisions.
We aren’t stopping with C-Score. Over the coming months, we are rolling out our Responsive Quizzing strategy: three phases of making Carousel more responsive to student knowledge as it grows over time. C-Score is phase 1.
Phase 2 is an upgrade to the student flashcard experience. Flashcard decks will adapt and respond to students as they work through them, and allow the students to communicate with Carousel about the questions they are doing well on and the ones they still need more support with.
Phase 3 will introduce a new type of teacher-set quiz that will use C-Score at a student rather than class level, and thereby be uniquely tailored to individual students and their knowledge as it grows over time.
It’s an exciting time for us at Carousel and we can’t wait to bring you more!
Postscript: when developing C-Scores, we took feedback from a huge range of our community of teachers, as well as world-leading researchers in the field of memory and retrieval practice. We would love to hear how you are finding them; if you have any questions or ideas for us, get in touch here.
A special thank you to Professor Andrew Butler, Dr Sam Sims and Dr Efrat Furst who have been extremely generous with their time and supportive of our endeavours in Responsive Quizzing.