Learning analytics: What to do when your programme data lies
You know what they say, 'you can’t argue with cold, hard facts'. Well, what if your ‘facts’ aren’t all they appear to be?
“Every single person that signed up for our core programme finished it and we have achieved 10/10 learner satisfaction.” Anaïs (not her real name) beams through the webcam at me as we discuss learner feedback. This is her very first foray into converting existing classroom workshop materials into self-paced online learning units. She is ecstatic that the programme has been so well received. “Our director is really happy with the success and wants me to build more programmes like this.”
An eLearning developer achieving a perfect score on their very first attempt - colour me intrigued. “Everyone who took the course completed it and scored it 10/10, that’s really impressive,” I respond. “Could you share the reports with me?”
“Of course.” Anaïs shares the report on her screen.
She’s not wrong about the 100 per cent completion and the 10/10 scores, but she’s not right in her assertions about the success. I’m not here to rain on anyone’s parade, far from it. Organisations often engage me to look at their end-to-end L&D set-up when they are trying to grow, develop and improve.
How can you possibly improve if you don’t have an accurate picture of your starting point? Finding the truth involves using a critical eye to unearth and explore anything that just doesn’t sound right. So, let’s explore this.
First of all, let’s explore the learner satisfaction ratings. These were captured with an optional post-course survey. Unfortunately, I’ve been around L&D long enough to know that there is nothing more suspicious than 100 per cent satisfaction.
Think about it: we are all different people who bring our own thoughts, ideas and expectations to the world we live in, and, as such, we will experience things in different ways. Take your all-time favourite film, for example: search for reviews and I can guarantee you will find someone who didn’t enjoy it as much as you did. Creating a programme that meets every learner’s expectations perfectly is nigh-on impossible.
With any evaluation I want to know who responded and also who did not. According to the completion report, around 315 people had participated in the programme, but only 20 had chosen to complete the evaluation - a response rate of roughly six per cent. So, it is accurate to say that 100 per cent of the respondents awarded the programme 10/10, but not 100 per cent of the participants.
Next, I want to know who the respondents were. Of the 20 names on the list, I recognised six as fellow L&D colleagues in the organisation, three of whom had been involved in building the programme itself. These guys will often give you positive feedback - no matter what. They have a vested interest in being nice to you and your programme, but this doesn’t help you.
So, in total, of the 20 evaluations, we only had 14 from verified learners. This sample size is far too small to extract any meaningful data from. We need to take action to engage more of the non-respondents to get their views before we can draw any conclusions about the quality of the programme from the learner’s point of view.
To get better data we need to find ways of making the evaluation more accessible, user friendly, time friendly and ultimately more appealing to complete. That should increase the number of respondents and generate a larger data set which we can then scrutinise.
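The arithmetic behind this sanity check is simple enough to script. Here is a minimal sketch, using the figures from the anecdote above (the variable names and structure are my own, not from any particular reporting tool), that separates the headline claim from what the data actually supports:

```python
# Sanity-checking a "100 per cent satisfaction" claim.
# Figures taken from the article's example; names are illustrative.

participants = 315        # people who completed the programme
responses = 20            # optional post-course surveys returned
internal_reviewers = 6    # L&D colleagues among the respondents

# The survey's response rate, not the satisfaction rate
response_rate = responses / participants

# Responses from verified learners, excluding colleagues with a vested interest
verified_responses = responses - internal_reviewers

print(f"Response rate: {response_rate:.1%}")                # 6.3%
print(f"Verified learner responses: {verified_responses}")  # 14
```

Fourteen genuine voices out of 315 participants is the real starting point - a very different picture from "everyone rated it 10/10".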
“They must have thought the training was good because they all spent seven hours completing it,” Anaïs tells me. Learners complete programmes that they are motivated to spend their time on. That motivation can come in many different guises. For example, if you need the training to keep your licence to practise, if you have paid for the training personally or if your manager is mandating that you get the training done. You can probably see where this is going.
Quite simply, if there is more stick than carrot, your completion metrics mean very little. Learners will endure hours of terrible learning design, unforgivable user interface errors and graphical disasters if their job or their money is on the line.
It is, therefore, perfectly reasonable to expect any compliance programme or programme backed by the CEO or other senior figure to achieve close to 100 per cent completion with very little effort on your part.
If the programme was self-elected, free (or at least free at point of access) and optional you would find a lot of value in analysing how many people signed up, how many of those started the programme, at what point non-completions dropped out and how many people finished the programme.
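That kind of funnel analysis can be sketched in a few lines. The stage names and numbers below are invented for illustration - the point is simply to see where learners drop out between signing up and finishing, which is far more revealing than a single completion percentage:

```python
# A simple drop-off funnel for an optional, self-elected programme.
# Stage names and counts are hypothetical, for illustration only.

funnel = {
    "signed_up": 500,
    "started": 410,
    "completed_unit_1": 350,
    "completed_unit_2": 260,
    "finished": 240,
}

# Walk adjacent stages and report how many learners were lost at each step
stages = list(funnel.items())
for (prev_stage, prev_n), (stage, n) in zip(stages, stages[1:]):
    drop = prev_n - n
    print(f"{prev_stage} -> {stage}: lost {drop} ({drop / prev_n:.0%})")
```

A spike in drop-outs at one particular unit points you at exactly the content that needs attention, rather than leaving you to guess from an overall figure.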
Interpreting data more accurately
Despite the title of this article, programme data does not in fact lie, it is our interpretation and presentation of it that is often misleading. We don’t intend to misrepresent our success, but it’s easier to accept 100 per cent results than to question them. It is also far easier to analyse the information you can see (the data) than the information you can’t see (the gaps in your data).
That’s not to say that we should throw away the 100 per cent completions or evaluation reports from the 14 people who gave us their time and their views. Instead, we should see them through their respective lenses and look for multiple sources of data to build a more complete picture of our learners’ experiences and our programme successes.
Interested in this topic? Read Learn, data, action: How to make your learning data actionable.
Harri Candy is an Online Learning Specialist at ELK Online. She focuses on helping organisations tackle online learning challenges such as material design and delivery; engagement from stakeholders through to end users; and effective evaluation metrics.