Learning evaluation: Research methods
There is a wide range of research methods to choose from, each with its own advantages and disadvantages. Here we describe a number of methods that can be used to evaluate learning, and discuss how to select the appropriate method for your evaluation.
Choosing a research method
- What sort of information are you trying to collect? eg is it quantitative or qualitative?
- Who has the information? eg is it held as administrative data, the experience of learners, or the opinions of line managers?
- What is the timescale? eg how much time is there to plan and organise the research? How much time do participants have to participate in the evaluation?
- What resources and facilities do you have access to? eg do you have contact details for participants? Do you have a venue for face-to-face research?
- What research skills are needed? eg do you have access to the skills needed to carry out the research?
- How will you analyse the data? eg do you have the skills to analyse it? Do you need any special software or equipment?
Questionnaires, surveys and feedback forms
Care must be taken in designing a questionnaire. If it is too long, or there are too many open questions, response rates will be low or forms will be left incomplete. Questions need to be clear, minimising scope for misunderstanding or differing interpretation. Research has shown that the order in which questions are asked can 'contaminate' other questions, so thought should be given to the grouping and order of questions. When sample sizes are large there may be scope for testing different versions of a questionnaire to capture these effects, but this will not be possible in smaller samples.
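Where the sample is large enough to test different versions of a questionnaire, each respondent needs to be assigned a version in a systematic way so that order effects can be compared across versions. A minimal sketch of one way to do this is below; the questions and the simple even/odd assignment rule are illustrative assumptions, not a recommended design.

```python
# Hypothetical questions for illustration; in practice these would come
# from your own questionnaire design.
QUESTIONS = [
    "How relevant was the course to your role?",
    "How will you apply what you learned?",
    "How would you rate the trainer?",
]

def questionnaire_version(participant_id, n_versions=2):
    """Deterministically assign each participant a questionnaire version.

    Version 0 keeps the original question order; version 1 reverses it,
    as a simple stand-in for a deliberately re-ordered questionnaire.
    Comparing responses across versions helps reveal order effects.
    """
    version = participant_id % n_versions
    if version == 0:
        return QUESTIONS
    return list(reversed(QUESTIONS))
```

Because the assignment depends only on the participant ID, the same participant always sees the same version, which keeps the comparison clean.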
Focus groups
Focus groups are commonly used to gather data relating to the application of learning in the workplace, and the impact of learning on organisational objectives.
Interviews
The drawback of interviewing is the time and resources required. This makes it more appropriate for in-depth evaluations with smaller numbers, for use with a sub-sample of learners to dig deeper alongside a broader method, or for consultation with stakeholders such as line managers or senior management.
- Sample size: It is not necessary, or even desirable, to involve every single recipient of the training in the evaluation. Conclusions can be drawn about the population based on a properly selected sample of participants. The sample size will depend on the population size, how diverse the population is, the size of the effect that you expect to measure, and on which method you are using. For quantitative methods you will need a much larger sample than for qualitative research. It is important to ensure that the sample is representative of the population of learners and that it adequately covers the diversity of participants if you are to draw conclusions from it.
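As a rough illustration of how population size and precision interact, the standard formula for the sample size needed to estimate a proportion (with a finite population correction) can be sketched as follows. The 95% confidence level, 5% margin of error and population figure are illustrative assumptions, not recommendations for any particular evaluation.

```python
import math

def required_sample_size(population, margin_of_error=0.05, z=1.96, p=0.5):
    """Estimate the sample size needed to measure a proportion.

    z=1.96 corresponds to a 95% confidence level; p=0.5 is the most
    conservative assumption about the proportion being measured.
    """
    # Cochran's formula for an effectively infinite population
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    # Finite population correction for smaller populations of learners
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

# A training programme with 500 learners needs far fewer than 500 responses
print(required_sample_size(500))
```

Note how the required sample grows much more slowly than the population: a programme twenty times larger does not need twenty times the responses.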
- Causality: Establishing causality means demonstrating that the training contributed to the impact being measured. Causality should definitely be considered in your research design, although the degree to which you measure it will depend on the purpose of your evaluation. The scientific approach to causality is to include a control group within your research: a group of learners who are randomly selected not to participate in the training. This, however, can be costly and time-consuming, and is not appropriate for all settings.
Causality can also be argued through a well-evidenced impact model, through measuring other factors contributing to impact, or through application of advanced statistical techniques. It is important to consider that training is only one factor contributing to organisational outcomes, and it relies on and interacts with other factors such as management support and peer/informal learning. Isolating the 'pure' training effect may not really be helpful, but there does need to be clear evidence that the training has contributed.
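Where a control group is feasible, the key requirement is that assignment to training or control is random, not based on who volunteers or who a manager nominates. A minimal sketch of such an assignment, using hypothetical learner identifiers, might look like this:

```python
import random

def assign_control_group(learners, control_fraction=0.5, seed=42):
    """Randomly split learners into a training group and a control group.

    Fixing the random seed makes the assignment reproducible and
    auditable, which matters when the evaluation is reviewed later.
    """
    pool = list(learners)
    random.Random(seed).shuffle(pool)
    cut = int(len(pool) * control_fraction)
    return {"control": pool[:cut], "training": pool[cut:]}

# Twenty hypothetical learners split evenly between the two groups
groups = assign_control_group(["learner_%02d" % i for i in range(1, 21)])
print(len(groups["control"]), len(groups["training"]))
```

The control fraction need not be 50%; a smaller control group may be more palatable where withholding training is sensitive, at some cost in statistical power.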
- Ethics: Research ethics are important in undertaking all types of research, and learning evaluation is no exception. You need to ensure that participants are informed about their participation in the evaluation, and understand how it will be used. The research methods should not cause any undue distress to participants, and should not waste their time. Participation should be confidential and anonymous to encourage participants to be frank and honest, and you need to consider how you will honour this commitment. Research materials should be stored securely, and identifying materials such as questionnaires or audio recordings safely destroyed after the evaluation is complete.
While selecting an appropriate approach and model for your learning evaluation is important, there is no substitute for robust research methods. The methods you choose should be integral to the evaluation planning, and you need to ensure that you have the skills in place to both gather and analyse whatever form(s) of data you decide to collect. However, remember that research methods do not have to be frightening; there is a range of resources and help available to ensure that the delivery of your evaluation is as good as the planning.
Kenneth Fee and Dr Alasdair Rutherford are the founding directors of learning evaluation firm Airthrey Ltd. Ken is a career learning and development professional, whose latest book, 101 Learning & Development Tools, deals with evaluation among other topics. Alasdair is an evaluation and econometrics specialist, and a Research Fellow at the University of Stirling.