Student retention (preventing students from quitting or dropping out) is a major concern for online learning. So why do students drop out in the first place? One reason is a lack of engagement. Indeed, some of our friends over at edX previously found that students tend to stop watching lecture videos that go beyond 4 or 5 minutes.
Our team has already done an incredible job of making online lessons more engaging. In a previous post, we showed how our highly interactive lessons increased student attention spans to as much as 20 minutes. So the engagement problem has largely been solved.
But while we've reduced dropout rates, a fair number of students still drop out of our interactive lessons. So we asked ourselves: what's going on here, and what can we do about it?
Now imagine you're working your way through a step-by-step, interactive lesson. You know what you're doing, getting every question right, but suddenly you hit a snag. You just can't seem to get this next question. You could ask for a hint, or you could try a few different approaches, but sometimes you'll just quit out of frustration.
With an in-person tutor, this wouldn't happen. A live tutor can diagnose exactly where your confusion is and help you get "unstuck." So our hypothesis is this: students drop out when they're told their answer is wrong without immediate, useful feedback.
If this is true, then we should see a direct correlation between how often a student is given specific feedback (either they got the correct answer, or we address their wrong answer with a separate branch in the lesson) and how often the student sticks with the lesson rather than dropping out. So we combed through the data: millions of students' answers to thousands of questions across hundreds of lessons (so yes, that's a lot of data). Here's what we found:
As we expected, there is a very strong correlation between how often students are given specific feedback on their answers and how often they stick around for the next part of the lesson. (For you stat-heads out there, the Pearson correlation for this graph was r = 0.572.)
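For anyone who wants to see the mechanics, here's a minimal sketch of how a correlation like this can be computed, assuming per-question aggregates of feedback rate and retention rate. The numbers and variable names below are illustrative, not our actual data:

```python
# A minimal sketch of the retention analysis. Each entry is a hypothetical
# per-question aggregate: the fraction of students who received specific
# feedback, and the fraction who continued to the next part of the lesson.
import numpy as np
from scipy.stats import pearsonr

feedback_rate = np.array([0.45, 0.62, 0.71, 0.80, 0.88, 0.93, 0.97])
retention_rate = np.array([0.90, 0.92, 0.93, 0.95, 0.96, 0.97, 0.98])

# Pearson correlation between feedback frequency and retention.
r, p_value = pearsonr(feedback_rate, retention_rate)
print(f"Pearson r = {r:.3f} (p = {p_value:.3g})")
```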
This suggests an obvious way to improve retention: give students more (and better) feedback! Now that's always been our goal here at School Yourself. We regularly update our lessons so that common wrong answers are met with specific, helpful feedback. For example, in the latest version of our lesson on point-slope form, when you enter a slope with an extra minus sign or you accidentally flip the fraction (common mistakes we saw students making), you now proceed down a new path in the lesson that specifically addresses your error. Here's how the lesson map has evolved (note the change from a linear sequence to a more branched structure):
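To make that branching concrete, here's a toy sketch of how wrong-answer routing might look for the point-slope question above. The branch names and matching rules are invented for illustration; this isn't our actual lesson engine:

```python
# A toy sketch of wrong-answer branching for a slope question.
from fractions import Fraction

def route(correct: Fraction, answer: Fraction) -> str:
    """Pick the next branch of the lesson based on the student's answer."""
    if answer == correct:
        return "main_path"                  # continue the lesson as usual
    if answer == -correct:
        return "sign_error_branch"          # extra minus sign on the slope
    if correct != 0 and answer == 1 / correct:
        return "flipped_fraction_branch"    # rise and run swapped
    return "generic_hint"                   # no targeted branch yet

# A point-slope question whose slope is 2/3:
print(route(Fraction(2, 3), Fraction(-2, 3)))  # sign_error_branch
print(route(Fraction(2, 3), Fraction(3, 2)))   # flipped_fraction_branch
```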
Now if we were to keep doing this, adding new branches that accept every wrong answer students give and provide specific feedback, it could take us forever (and we don't have that kind of time!). So the strategy we've adopted is to address the common wrong answers, prioritizing the questions where students frequently make the same mistakes.
So how much work would it be to provide specific feedback to 90% of students on each question? (According to the first graph, that would mean at least 97% retention on each question, which would be outstanding.) The answer turns out to depend on the question type. Here are the results for checkbox questions and for numerical responses (in which a student can enter any fraction or decimal):
It turns out that for most questions, students give only a handful of answers. For both of these question types, on average, more than 90% of students give either the correct answer or one of the two most common wrong answers. So if our team addresses the top two wrong answers, we should achieve 97% retention on each question. It takes our team about a day to make a lesson, and it would only take another day to update it so it offers specific feedback to more than 90% of students.
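Here's a minimal sketch of that coverage calculation, assuming we have per-question answer counts (the counts below are made up):

```python
# What fraction of students do we cover with the correct answer plus the
# top-k most common wrong answers?
from collections import Counter

def coverage(answer_counts: Counter, correct: str, top_k_wrong: int = 2) -> float:
    """Fraction of students covered by the correct answer plus the
    top_k_wrong most common wrong answers."""
    total = sum(answer_counts.values())
    covered = answer_counts.get(correct, 0)
    wrong = Counter({a: n for a, n in answer_counts.items() if a != correct})
    covered += sum(n for _, n in wrong.most_common(top_k_wrong))
    return covered / total

# Hypothetical answer counts for one numerical-response question:
answers = Counter({"2/3": 640, "-2/3": 180, "3/2": 120, "1/3": 40, "0.5": 20})
print(f"{coverage(answers, '2/3'):.0%}")  # 94% covered by three branches
```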
So we'll continue to update our lessons, making them increasingly adaptive over time. And we won't have to pull any all-nighters to get it done!
How often do students guess their way through?
When designing online courses like AlgebraX and GeometryX, a common cause for concern is that students can just "guess their way through" lessons and assessments. They're either totally lost or not paying much attention, but they can still advance and get credit by brute-forcing their way through.
This can be very problematic for the student (who is grinding away without really learning) and the course (which loses credibility when this type of student behavior is common). So here at School Yourself, we've been asking ourselves: 1) How often does this happen in our courses, and 2) What can we do about it?
To figure out how often students guess their way through lessons, our team developed a formula we call a student's guess index. It's a number between 0 and 1, where a greater number means the student is probably guessing. A student's guess index takes into account the question type (guessing on a multiple-choice question is handled differently than on an algebraic question), the number of attempts, what students do between attempts, and a few other factors. Over time (and with lots of student data), we have found that a guess index of 0.4 or higher means a student probably guessed their way through a lesson. An index of 0.2 or 0.3 means they might have struggled through parts of the lesson, but they worked throughout without resorting to guessing. And an index of 0 or 0.1 means they proceeded confidently through the lesson.
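We won't spell out the exact formula here, but as a rough illustration, here's a toy score built from the same kinds of ingredients (attempt counts and what happens between attempts). The weights, inputs, and thresholds in this sketch are invented; our real guess index is more involved:

```python
# A toy guess-index-style score, for illustration only.
def guess_index(attempts: int, n_choices: int, seconds_between: float) -> float:
    """Toy score between 0 and 1; higher means the answer pattern
    looks more like guessing."""
    # Many attempts relative to the number of options suggests guessing.
    attempt_score = min(1.0, max(0, attempts - 1) / max(1, n_choices - 1))
    # Rapid-fire resubmissions look like guessing; long pauses look like work.
    if seconds_between < 3:
        speed_score = 1.0
    elif seconds_between < 10:
        speed_score = 0.5
    else:
        speed_score = 0.0
    return round(0.6 * attempt_score + 0.4 * speed_score, 1)

# Burning through all four choices in rapid succession:
print(guess_index(attempts=4, n_choices=4, seconds_between=2))   # 1.0
# One retry after a minute of reworking the problem:
print(guess_index(attempts=2, n_choices=4, seconds_between=60))  # 0.2
```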
So here's the distribution for the most recent 10,000 completed lessons:
Students appear to guess their way through lessons about 2% of the time. And we can do the same analysis for reviews (assessments):
Students seem to guess their way through reviews less than 1% of the time. It's a little comforting to know that students guess less on reviews than they do in lessons, since the reviews are what hold them accountable for their knowledge.
One or two guessing students out of a hundred may not sound too shabby. Indeed, these numbers are likely so low because 1) we prefer asking open-ended questions (with interactive graphs, algebraic expressions, numerical entry, or short answers) where possible, to ensure students understand what they're doing, and 2) our lessons are built around continuous, step-by-step user interaction, where the questions never get too far ahead of the students. But even though relatively few students are guessing, these are the very students who need the most help! So now that we can precisely measure guessing behavior, what can we do to mitigate or prevent it?
First, it's helpful to know whether students are guessing at similar rates across all lessons and reviews (indicating a problem with the overall course design, question types, etc.), or if the guessing behavior is concentrated among certain lessons.
Half of the lessons exhibited zero guessing behavior, and the guessing that does occur is concentrated in a handful of lessons, like Ambiguous case, Multiples, and Unsolvable equations. This is great news, because we can focus our attention on these lessons, see where students are guessing the most, and update them (adding new branches that provide feedback for common wrong answers).
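As a sketch of how guessing can be localized this way, here's how one might group guess indices by lesson and rank lessons by their share of likely-guessed completions. The records below are illustrative, not our actual numbers:

```python
# Rank lessons by how often completions cross the 0.4 guess-index threshold.
from collections import defaultdict
from statistics import mean

# (lesson, guess_index) pairs, one per completed lesson; values invented.
records = [
    ("Ambiguous case", 0.5), ("Ambiguous case", 0.4),
    ("Multiples", 0.4), ("Unsolvable equations", 0.6),
    ("Point-slope form", 0.0), ("Point-slope form", 0.1),
]

by_lesson = defaultdict(list)
for lesson, gi in records:
    by_lesson[lesson].append(gi)

ranked = sorted(by_lesson.items(),
                key=lambda kv: -mean(s >= 0.4 for s in kv[1]))
for lesson, scores in ranked:
    share = mean(s >= 0.4 for s in scores)
    print(f"{lesson}: {share:.0%} likely guessed ({len(scores)} completions)")
```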
When it comes to reviews, students don't guess as much:
The majority (69%) of reviews had zero guessing. And once again, it's a handful of reviews that are the most conducive to guessing. So that's where we'll focus our efforts.
So while students guessing their way through lessons doesn't seem to be a major problem for our content, it's something we can mitigate by focusing our efforts on the most guess-prone lessons and reviews.