Nick Lindsay

Elemental CoSec

Director

Reassessing unconscious bias

No matter how much equality, inclusion and unconscious bias training you run, there is one area that may be letting you down. Almost invisible and taken for granted, your internal processes and computer algorithms may not be as even-handed as you would like to think.

Trusting that algorithms are unbiased is an easy assumption to make, both on the part of the business and its customers. So much so that one 2021 study [1] concluded that “algorithms appear less discriminatory than humans” before adding that this makes “people (potentially erroneously) more comfortable with their use.”

It’s that ‘potentially erroneously’ comment which is of concern. At the end of the day, processes and algorithms are generally designed by people, and people can carry unconscious biases. And whilst groups of individuals in an IT or other department may generate less bias than an individual making decisions alone, the potential for bias is still real.

That’s one reason why Ofcom, together with the Competition and Markets Authority, the Financial Conduct Authority, and the Information Commissioner’s Office, has launched a review [2] into whether more should be done to regulate or audit company algorithms. Together, the digital regulators are concerned that algorithmic systems can pose significant risks, introducing or amplifying harmful biases that “lead to discriminatory decisions or unfair outcomes that reinforce inequalities.” This, the regulators add, can “be used to mislead consumers and distort competition.”

Training biased algorithms with unrepresentative data

It’s not just consumers who can be adversely affected by unconscious bias. A 2020 study [3] carried out a systematic review of ‘discrimination and fairness by algorithmic decision-making in the context of HR recruitment and HR development.’ This review made the interesting point that bias might not only be written into algorithms themselves; if the algorithms are then trained on inaccurate or unrepresentative input data, this too could lead to biased decision-making and outcomes.
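
To make that mechanism concrete, here is a minimal sketch in Python. Everything in it is hypothetical and invented for illustration (the ‘skill’ feature, the group labels, the historical hiring bar): it simply shows how a model trained on biased historical hiring decisions learns to reproduce the bias, even when the decision rule is never written down anywhere.

```python
# Hypothetical illustration: a model trained on biased historical
# hiring decisions learns to reproduce that bias.
import random
from sklearn.linear_model import LogisticRegression

random.seed(0)

def make_candidate():
    group = random.choice(["A", "B"])   # protected attribute
    skill = random.uniform(0, 1)        # genuinely job-relevant
    # Historical (biased) decision: group B faced a higher bar.
    bar = 0.5 if group == "A" else 0.8
    hired = int(skill > bar)
    return group, skill, hired

history = [make_candidate() for _ in range(5000)]

# Note: even without the group label as an explicit input, a proxy
# feature correlated with group (a postcode score, say) can smuggle
# the same bias in. Here we include it directly to keep things short.
X = [[skill, 1.0 if group == "A" else 0.0] for group, skill, _ in history]
y = [hired for _, _, hired in history]

model = LogisticRegression().fit(X, y)

# Score two equally skilled candidates who differ only by group.
p_a = model.predict_proba([[0.7, 1.0]])[0][1]
p_b = model.predict_proba([[0.7, 0.0]])[0][1]
print(f"P(hire | skill=0.7, group A) = {p_a:.2f}")
print(f"P(hire | skill=0.7, group B) = {p_b:.2f}")
```

The two candidates are identical on the only job-relevant measure, yet the model, faithfully fitted to its history, scores them differently.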

There is a very real danger here that biased hiring and promotion decisions made by an algorithm could start to skew the balance of the workforce, creating a self-perpetuating negative spiral. And whilst it may be easy to say that a vigilant HR team would counter this outcome, there have been plenty of real-world ‘computer says no’ examples in recent years which have demonstrated all too well how computer output can hold sway over common sense.

In a bizarre way, assuming that your computer systems and processes are not biased could be the blind spot in your own unconscious bias training. Perhaps it is time to check a few results and see whether your own systems could do with a bit of equality retraining.
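
One simple way to start ‘checking a few results’ is to compare selection rates across groups. The sketch below uses invented figures and applies the widely cited ‘four-fifths rule’ (flag any group whose selection rate falls below 80% of the highest group’s rate) purely as a rough red-flag heuristic, not a legal test.

```python
# Minimal audit sketch with hypothetical figures: compare selection
# rates across groups and flag possible adverse impact using the
# "four-fifths rule" as a rough heuristic.
outcomes = {
    # group: (applicants, selected) -- illustrative numbers only
    "group A": (200, 60),
    "group B": (180, 27),
}

rates = {g: sel / apps for g, (apps, sel) in outcomes.items()}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, "
          f"impact ratio {ratio:.2f} [{flag}]")
```

A flagged ratio does not prove bias on its own, but it tells you exactly where to look more closely, whether the decisions were made by a committee or by a model.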

[1] https://journals.sagepub.com/doi/10.1177/01461672211016187

[2] https://www.ofcom.org.uk/news-centre/2022/uks-digital-watchdogs-take-a-closer-look-at-algorithms-as-plans-set-out-for-year-ahead

[3] https://link.springer.com/article/10.1007/s40685-020-00134-w
