
Learning evaluation: Research methods

21st May 2012
Kenneth Fee and Dr Alasdair Rutherford talk the community through the various research methods of evaluation.
With so much choice of tools and approaches to the evaluation of learning, the research methods used to gather data can often be overlooked. But if you are to gather robust evidence of the impact of your training you need to both select the right methods and implement them well.

There is a wide range of research methods to choose from, each with their own advantages and disadvantages. Here we describe a number of methods that can be used to evaluate learning, and discuss how to select the appropriate method for your evaluation.

Choosing a research method

Research methods should be considered in the evaluation planning stages, as they need to fit together with the research questions you are asking, the timing of the evaluation, the participants and the audience. You need to think about:
  • What sort of information are you trying to collect? eg is it quantitative or qualitative?
  • Who has the information? eg is it held as administrative data, the experience of learners, or the opinions of line managers?
  • What is the timescale? eg how much time is there to plan and organise the research? How much time do participants have to participate in the evaluation?
  • What resources and facilities do you have access to? eg do you have contact details for participants? Do you have a venue for face-to-face research?
  • Research skills - do you have access to the skills needed to carry out the research?
  • Data analysis - how will you analyse the data? Do you have the skills to analyse it? Do you need any special software or equipment?
Too often evaluators say "I'm going to use a focus group to evaluate my training course - what should I ask?" It should really be the other way round - the questions you want to answer will determine which research method is right for you.

Questionnaires, surveys and feedback forms

Questionnaires provide a relatively quick and easy way to collect the views of learners, and the 'happy sheet' is certainly the most common evaluation research method. In paper form they can provide instant reaction feedback from learners at the end of a session, while electronically they provide a way to reach a large number of learners across workplaces. They can be used to collect both quantitative data through scales or multiple choice questions, and qualitative data through open questions.

Care must be taken in designing a questionnaire. If it is too long, or there are too many open questions, response rates will be low or forms will be left incomplete. Questions need to be clear, minimising scope for misunderstanding or differing interpretation. Research has shown that the order in which questions are asked can 'contaminate' other questions, so thought should be given to the grouping and order of questions. When sample sizes are large there may be scope for testing different versions of a questionnaire to capture these effects, but this will not be possible in smaller samples.
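To make the quantitative side concrete, here is a minimal sketch (in Python, using made-up ratings) of how responses to a 1-5 scale question might be summarised. The question wording and the 'top-two-box' measure are illustrative assumptions, not prescriptions:

```python
from statistics import mean, median

# Hypothetical responses to a 1-5 rating question
# ("The course met my objectives")
ratings = [4, 5, 3, 4, 4, 2, 5, 4, 3, 4]

summary = {
    "n": len(ratings),
    "mean": round(mean(ratings), 2),
    "median": median(ratings),
    # Share of respondents rating 4 or 5 ("top-two-box" score)
    "top_two_box": round(sum(1 for r in ratings if r >= 4) / len(ratings), 2),
}
print(summary)
```

Even a simple summary like this makes it easier to compare cohorts or sessions; open-question responses still need qualitative analysis alongside it.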

Focus groups

A focus group is a session with a small group of people to discuss a specific topic or topics. The group usually has a facilitator to encourage participation and discussion, and to keep the group on topic. The ideal size for a focus group is around four to 10 participants, but larger groups are possible.

Focus groups are commonly used to gather data relating to the application of learning in the workplace, and the impact of learning on organisational objectives.


Interviews

Interviews provide a great way to gather in-depth evaluation information. They are particularly suited to gathering data on the experience of putting learning into practice following training, and to investigating impacts on business objectives. They are also ideal for use with wider stakeholders, such as line managers or senior management. Interviews can be conducted face-to-face or by telephone. Telephone interviews can be more time-efficient and easier to fit around work schedules, but they are better suited to more structured interviews. A face-to-face interview allows the interviewer to react to subtler cues such as body language, making a less structured interview using a topic guide possible.

The drawback of interviewing is the time and resource required.  This makes it more appropriate for in-depth evaluations with smaller numbers, for use with a sub-sample of learners to dig deeper together with a broader method, or for consultation with stakeholders such as line managers or senior management.

Participant observation/shadowing

This is the most intrusive of the research methods discussed here, and will not be suitable for all settings. However, in situations where good performance is hard to quantify it can be a helpful way to observe the application of learning in action. The aim is for the evaluator to experience the work situations where learning can be applied, in the same way that the subjects under study experience them. It is important that participants understand the purpose of the researcher's presence, and that it is the learning that is being evaluated, not the person. Whilst observing and experiencing as a participant, the researcher must retain a level of objectivity in order to understand, analyse and explain the situation under study.

Critical issues

In designing your evaluation plan there are a number of research issues that you need to consider:
  1. Sample size It is not necessary or desirable to involve every single recipient of the training in the evaluation. Conclusions can be drawn about the population based on a properly selected sample of participants. The sample size will depend on the population size, how diverse the population is, the size of the effect that you expect to measure, and on which method you are using. For quantitative methods you will need a much larger sample than for qualitative research. It is important to ensure that the sample is representative of the population of learners and that it adequately covers the diversity of participants if you are to draw conclusions from it.
  2. Causality Establishing causality means demonstrating that the training contributed to the impact being measured. The role of causality should definitely be considered in your research design, although the degree to which you measure it will depend on the purpose of your evaluation. The scientific approach to causality is to include a control group within your research: a group of learners who are randomly selected not to participate in the training. This, however, can be costly and time-consuming, and is not appropriate for all settings.
    Causality can also be argued through a well-evidenced impact model, through measuring other factors contributing to impact, or through application of advanced statistical techniques. It is important to consider that training is only one factor contributing to organisational outcomes, and it relies on and interacts with other factors such as management support and peer/informal learning. Isolating the 'pure' training effect may not really be helpful, but there does need to be clear evidence that the training has contributed.
  3. Ethics Research ethics are important in undertaking all types of research, and learning evaluation is no exception. You need to ensure that participants are informed about their participation in the evaluation, and understand how it will be used. The research methods should not cause any undue distress to participants, and should not waste their time. Participation should be confidential and anonymous to encourage participants to be frank and honest, and you need to consider how you will honour this commitment. Research materials should be stored securely, and identifying materials such as questionnaires or audio recordings securely destroyed after the evaluation is complete.
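On sample size, a common rule-of-thumb calculation for surveys is Cochran's formula for estimating a proportion, with a finite-population correction. The Python sketch below is illustrative only: the 95% confidence level (z = 1.96), 5% margin of error and conservative proportion of 0.5 are assumed defaults, and a real design should also allow for expected response rates and the diversity issues noted above:

```python
import math

def sample_size(population, margin_of_error=0.05, confidence_z=1.96, proportion=0.5):
    """Approximate sample size for estimating a proportion, using
    Cochran's formula with a finite-population correction.
    proportion=0.5 is the most conservative assumption."""
    n0 = (confidence_z ** 2) * proportion * (1 - proportion) / margin_of_error ** 2
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

# e.g. a training programme with 500 learners
print(sample_size(500))
```

Note how the required sample does not shrink in proportion to the population: a programme of 100 learners still needs a sample of around 80 at these settings, which is one reason small-cohort evaluations often favour qualitative methods.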

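Where a control group is feasible, the random selection itself is straightforward. Here is a minimal sketch in Python with hypothetical learner IDs; the 30% control fraction and the fixed seed are assumptions for illustration:

```python
import random

def assign_groups(learner_ids, control_fraction=0.3, seed=42):
    """Randomly split learners into a treatment group (trained now) and a
    control group (trained later), so outcomes can be compared."""
    rng = random.Random(seed)   # fixed seed so the split is reproducible
    shuffled = list(learner_ids)
    rng.shuffle(shuffled)
    n_control = round(len(shuffled) * control_fraction)
    return {"control": shuffled[:n_control], "treatment": shuffled[n_control:]}

groups = assign_groups([f"learner_{i:03d}" for i in range(20)])
print(len(groups["control"]), len(groups["treatment"]))
```

Delaying rather than denying training for the control group ("trained later") is one practical way to soften the ethical and operational objections to withholding training.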

We have given just a brief overview of some of the most common methods to help you in designing your learning evaluation, and have outlined some of the research issues you will encounter. By answering the questions posed at the start, you should be able to identify which methods are appropriate for your evaluation needs. It will often be the case that more than one method is required to pin down the impacts you are trying to measure. You may also mix quantitative and qualitative methods; for example, by running a survey on a large sample of learners to provide a broad picture of impact, and then following up with focus groups on a sub-sample to explore impacts in more detail.

While selecting an appropriate approach and model for your learning evaluation is important, there is no substitute for robust research methods. The methods you choose should be integral to the evaluation planning, and you need to ensure that you have the skills in place to both gather and analyse whatever form(s) of data you decide to collect. However, remember that research methods do not have to be frightening; there is a range of resources and help out there to ensure that the delivery of your evaluation is as good as the planning.

This article first appeared on the Airthrey website.

Kenneth Fee and Dr Alasdair Rutherford are the founding directors of learning evaluation firm Airthrey Ltd. Ken is a career learning and development professional, whose latest book, 101 Learning & Development Tools, deals with evaluation among other topics. Alasdair is an evaluation and econometrics specialist, and a Research Fellow at the University of Stirling.

