Employee feedback: how to design shorter 360 surveys without losing impact
Attention spans for surveys are short and you may be under pressure to make yours shorter – but how can you keep your survey within the ‘golden ten minute’ mark without losing valuable insights?
Gathering a range of views about you and your performance can be extremely enlightening but it does take time. You might want to use an anonymous online survey tool to automate the process and ensure all your reviewers feel safe to speak out, but this doesn’t necessarily save time. A 360 survey can take anything between a few minutes and an hour – especially if you have lots to say – so what is the best length for a 360?
It’s a reality that general tolerance and attention span for surveys is not increasing while, at the same time, expectations of surveys are going up. A survey needs to look good, open with a single click, look like it takes no more than ten minutes and be easy and straightforward to complete. Important conclusions may be drawn from the outputs of these surveys, so it is important to get the absolute best value from those golden ten minutes.
Eight key considerations
The best approach to 360 survey design is to aim for as short as possible, whilst still delivering your objectives. These are the eight factors to consider in clarifying how short you can safely go:
1) Is it fit for purpose?
Your 360 needs to achieve your primary objective. If your 360 is about introducing a new competency or values model then it doesn’t necessarily need to go deep and can indeed be very short, but if it is to be used for detailed diagnostics of your top leaders then you are likely to need more detail and a more robust approach.
2) Do your survey questions differentiate?
Check that your questions are ‘working’ for you by reviewing the average and variance of ratings per question. If a question almost always scores the top rating, it is probably not adding much value. If a question has a very low average rating it may be too hard for people to achieve and needs tweaking. If the variance is very low (i.e. most reviewers give the same rating) then, again, it is probably not adding value. A ‘good’ differentiating question is one with a relatively low mean and a high variance.
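This item-level check is easy to automate if your survey tool exports raw ratings. The sketch below is a minimal illustration in Python, with made-up question names and ratings on a one-to-five scale; the 4.5 ceiling and 0.5 variance thresholds are assumptions you would tune against your own data.

```python
from statistics import mean, pvariance

def flag_question(scores, ceiling=4.5, var_floor=0.5):
    """Flag a question whose ratings hug the top of the scale or
    barely vary - either way it is not differentiating."""
    m, v = mean(scores), pvariance(scores)
    return "review" if m > ceiling or v < var_floor else "ok"

# Hypothetical ratings (1-5) gathered for two questions
ratings = {
    "Communicates clearly": [5, 5, 5, 5, 4],   # ceiling effect
    "Delegates effectively": [2, 4, 3, 5, 1],  # good spread
}
for question, scores in ratings.items():
    print(question, flag_question(scores))
```

Questions flagged ‘review’ are candidates for rewording or removal, which is exactly where completion time can be saved.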
3) Are the questions clear and simple?
Aim for plain English with simple construction and as few words as possible – ideally four to eight words per question and definitely no more than 12. The length of your questions adds to completion time just as the number of questions does. Complex, conceptually worded questions add time, irritation and confusion for your reviewers.
When writing your survey, always consider the most junior reviewer and those for whom English is not a first language, and consider offering other languages if appropriate. If your reviewers fail to understand your questions you can guarantee the resulting data will be unhelpful at best, and misleading at worst. Avoid negatives in your questions as they usually confuse.
4) Have you done a robust assessment of your dimensions?
If you are summarising data at competency or value level then you need to ensure your questions combine to form a robust and accurate assessment of these dimensions. Just one rogue question within a competency grouping can mislead your participants. For example, a question about timekeeping sitting within a ‘personal impact’ competency alongside others about visibility, networking and so on may well lead to a confusing picture – unless timekeeping ratings closely correlate with the others.
Understanding the validity of your tool (and therefore the robustness of your measures) requires a review of the groupings of questions and their connection with the dimension itself as well as the overlap between the dimensions. As a guide, you generally need four to eight questions per dimension for a robust survey.
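One simple way to review those groupings is a corrected item-total correlation: for each question, correlate its ratings with the average of the other questions in the same dimension. The sketch below is a minimal illustration using plain Pearson correlation and hypothetical data; a question whose correlation with its group is low or negative is a candidate ‘rogue’ question of the kind described above.

```python
from math import sqrt
from statistics import mean

def pearson(x, y):
    """Plain Pearson correlation between two equal-length rating lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def item_total_correlation(item, rest):
    """Correlate one question's ratings with the mean of the other
    questions in its dimension, reviewer by reviewer."""
    rest_means = [mean(scores) for scores in zip(*rest)]
    return pearson(item, rest_means)
```

A full validity review would go further (for example, checking overlap between dimensions), but this single number per question is often enough to spot the outlier in a grouping.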
5) Have you included a rating scale to differentiate?
Your rating scale needs to work for your people and your purpose. It may need to match a scale used elsewhere in your organisation, or it might need to be deliberately different. 360 data is positively skewed whichever scale you choose, but you should aim for a distribution that sits as far below the top of the scale as possible, so you can be sure you are differentiating between those who are good and those who are not so good.
The goal is to use a rating scale that is clear, logical and understandable by all. Keep it simple and it will feel easier and quicker for your reviewers. A straightforward one to five rating scale (poor to excellent) can work surprisingly well. You may be inclined towards a frequency scale instead as it feels less judgemental, but this can lead to confusion: are you asking reviewers to rate how often people do things or how well they do them? In addition, an ‘N/A’ option is always advisable so that reviewers with no evidence can respond accurately and meaningfully.
6) Have you offered an opportunity to clarify and give examples?
Open text commentary is extremely useful as it allows the reviewers to say more should they feel the need without adding to the mandatory completion time. Clarity on why specific ratings have been given can be invaluable; examples can be like gold dust and, overall, the reviewers feel heard.
7) Have you ranked assessments?
If you want to guarantee every participant is clear on their strengths and development needs – however positive, negative or bland their data may be – then you can build in a ranked assessment without adding much time to the whole process. This works well with competency headings when the model has between seven and 16 competencies: ask each reviewer, for instance, to choose the two competencies that are strongest and the one that is weakest. The aggregate analysis of this data will write your strategic training plan. The time-to-value ratio for this data is high!
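Aggregating these picks is straightforward to automate. A minimal sketch (Python, with hypothetical competency names): tally ‘strongest’ picks against ‘weakest’ picks so the net score surfaces organisation-wide strengths and development priorities.

```python
from collections import Counter

def net_scores(strongest_picks, weakest_picks):
    """Net ranked-assessment score per competency: count of
    'strongest' picks minus count of 'weakest' picks."""
    net = Counter(strongest_picks)
    net.subtract(weakest_picks)
    return net

# Hypothetical picks pooled across all reviewers
strongest = ["Communication", "Communication", "Delivery"]
weakest = ["Delivery", "Planning"]
print(net_scores(strongest, weakest).most_common())
```

Competencies with strongly negative net scores are the ones your strategic training plan should address first.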
8) Have you considered the relevance of the content?
If the survey questions sound and feel relevant to your participants, then the reviewers will not resist and the process will flow quickly for them. It is therefore important to keep your survey current, to gather recent behavioural examples to inform your questions and to pilot before rolling out widely. Another tip is to give participants the opportunity to add a personal question of their own so the survey feels genuinely useful and pertinent.
This may seem like quite a long list but, if each of these factors is considered in your survey design, you can be sure you have the shortest survey to achieve your objectives accurately and robustly.
If your survey still draws the ‘it’s taking too long’ complaint then monitor the actual time reviewers are taking. You might then usefully go back to your senior leaders and ask: ‘is 12 minutes of feedback per person per year too much to ask of your managers?’ If so, perhaps it’s time to take on a different project altogether.
Elva Ainsworth was born into a family of people-watchers and has cultivated a real love of spotting patterns in people. This combination led her to a career in HR after a psychology degree at Bristol University. In HR she enjoyed implementing what were then brand-new psychometric tools, as well as designing culture change and personal development programmes.