Our team has already done an incredible job of making online lessons more engaging. In a previous post, we showed how our highly interactive lessons increased student attention spans to up to 20 minutes. So the engagement problem has largely been solved.

But while we've reduced dropout rates, a fair number of students still drop out of our interactive lessons. So we asked ourselves: what's going on here, and what can we do about it?

Now imagine you're working your way through a step-by-step, interactive lesson. You know what you're doing, getting every question right, but suddenly you hit a snag. You just can't seem to get this next question. Now you *could* ask for a hint, or you *could* try a few different approaches, but sometimes you'll just quit out of frustration.

With an in-person tutor, this wouldn't happen. A live tutor can diagnose exactly where your confusion is and help you get "unstuck." So our hypothesis is this:

**students drop out when they're told their answer is wrong without immediate, useful feedback.**

If this is true, then we should see a direct correlation between how often a student is given **specific feedback** (either they got the correct answer, or we address their wrong answer with a separate branch in the lesson) and how often the student sticks with the lesson and does **not drop out**. So we combed the data from millions of students' answers to the thousands of questions in our hundreds of lessons (so yes, that's a lot of data), and here's what we found:

As we expected, there is a very strong correlation between how often students are given specific feedback on their answers and how often they'll stick around for the next part of the lesson. (For you stat-heads out there, the Pearson correlation for this graph was *r* = 0.572.)
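For readers who want to see what that statistic measures, here's a minimal sketch of computing a Pearson correlation between per-question feedback rates and retention rates. The data below is made up for illustration; it is not our actual dataset.

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mean_x, mean_y = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

# Hypothetical per-question data: fraction of students who received
# specific feedback, and fraction who continued to the next lesson part.
feedback_rate = [0.45, 0.60, 0.72, 0.80, 0.88, 0.95]
retention_rate = [0.90, 0.91, 0.94, 0.93, 0.96, 0.98]

print(round(pearson_r(feedback_rate, retention_rate), 3))
```

A value of *r* near 1 means the two rates rise together almost perfectly; our observed 0.572 is a strong but noisier relationship, as you'd expect from real student data.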

This suggests an obvious way to improve retention: give students more (and better) feedback! Now that's always been our goal here at School Yourself. We regularly update our lessons so that common wrong answers are met with specific, helpful feedback. For example, in the latest version of our lesson on point-slope form, when you enter a slope with an extra minus sign or you accidentally flip the fraction (common mistakes we saw students making), you now proceed down a new path in the lesson that specifically addresses your error. Here's how the lesson map has evolved (note the change from a linear sequence to a more branched structure):
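To make the "branched structure" concrete, here's a minimal sketch of how such a lesson map might be represented: each question routes recognized answers (the correct one, plus common wrong ones) to their own follow-up branches, with a generic fallback for everything else. All node names and the exact data layout here are hypothetical, not our production format.

```python
# A question node maps each recognized answer to its own branch.
# Slope between (1, 2) and (3, 8) is (8 - 2) / (3 - 1) = 3.
lesson = {
    "q_slope": {
        "prompt": "Find the slope between (1, 2) and (3, 8).",
        "branches": {
            "3": "next_step",        # correct answer: continue the lesson
            "-3": "fix_sign",        # extra minus sign: sign-error branch
            "1/3": "fix_flip",       # flipped fraction: reciprocal branch
        },
        "fallback": "generic_hint",  # any other answer: generic feedback
    },
}

def next_node(question_id, answer):
    """Route a student's answer to the appropriate lesson branch."""
    q = lesson[question_id]
    return q["branches"].get(answer, q["fallback"])

print(next_node("q_slope", "-3"))  # prints fix_sign
```

Adding feedback for a new common wrong answer is then just one more entry in `branches`, which is what keeps the per-lesson update cost low.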

Now if we were to keep doing this, adding new branches to lessons so that we accept *all* the wrong answers students give and provide specific feedback, it could take us forever (and we don't have that kind of time!). So the strategy we've adopted is to address the **common wrong answers**, prioritizing questions where many students make the same mistakes.

So how much work would it be to provide specific feedback to 90% of students on each question? (According to the first graph, that level of feedback should translate to at least 97% retention for each question, which would be outstanding.) The answer turns out to depend on the question type. Here are the results for two question types: checkboxes, and numerical response (in which a student can enter any fraction or decimal):

It turns out that for most questions, students give only a handful of distinct answers. For both of these question types, **on average, more than 90% of students give either the correct answer or one of the two most common wrong answers**. So if our team addresses the top two wrong answers for each question, we should achieve 97% retention per question. It takes our team about a day to build a lesson, and it would take only about another day to update it so that it offers specific feedback to more than 90% of students.
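The coverage calculation behind that claim is simple to sketch: tally the wrong answers for a question, take the two most frequent, and count what fraction of all students gave either the correct answer or one of those two. The answer log below is invented for illustration; the real analysis runs over millions of recorded answers.

```python
from collections import Counter

# Hypothetical answer log for one numerical-response question.
answers = (["3"] * 70                                  # correct
           + ["-3"] * 15                               # most common wrong answer
           + ["1/3"] * 8                               # second most common
           + ["6", "2", "0.5", "9", "1", "4", "5"])    # long tail

correct = "3"
wrong_counts = Counter(a for a in answers if a != correct)
top_two = [ans for ans, _ in wrong_counts.most_common(2)]

# Fraction of students covered by the correct answer plus the top-2 branches.
covered = sum(1 for a in answers if a == correct or a in top_two)
coverage = covered / len(answers)
print(top_two, round(coverage, 2))  # prints ['-3', '1/3'] 0.93
```

In this made-up example, branching on just two wrong answers covers 93% of students, which is the kind of payoff that makes the top-two strategy worth the extra day of work.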

**So we'll continue to update our lessons, making them increasingly adaptive over time. And we won't have to pull any all-nighters to get it done!**