AI bias needs tackling


Earlier this year, new research issued in collaboration with the World Economic Forum found that a third of organisations had faced a direct business impact as a result of artificial intelligence (AI) bias.

DataRobot’s ‘State of AI Bias Report’ looked at how the risk of AI bias can impact organisations, and how business leaders can and should manage and mitigate that risk. The company brings together subject-matter experts, data scientists and ethicists to build more transparent and explainable AI for businesses.

The report was based on conversations with more than 350 organisations from a variety of industries. It revealed that the majority of companies have deep concerns around the risk of bias in AI, with over 80 per cent suggesting that government regulation might be necessary to prevent it.

Today, AI is an essential technology for many businesses, helping them to grow and to drive operational efficiency. However, it is fair to say that many are struggling to implement AI effectively and fairly at scale.

When it comes to the issue of bias, how do you define it? For those working in this space, AI bias is an anomaly in the output of machine learning algorithms, caused either by prejudiced assumptions made during the algorithm development process or by prejudices in the training data.
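To make that definition concrete, here is a minimal, hypothetical sketch (not drawn from the report) of how prejudice in historical training data propagates into a model’s output; the loan dataset, column names and figures are all invented.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical historical loan decisions in which group B applicants
# were approved less often than group A applicants at the same income.
data = pd.DataFrame({
    "income":   [30, 45, 60, 75, 30, 45, 60, 75],
    "group_b":  [0, 0, 0, 0, 1, 1, 1, 1],    # 1 = belongs to group B
    "approved": [1, 1, 1, 1, 0, 0, 1, 1],    # labels carry the prejudice
})

model = LogisticRegression().fit(data[["income", "group_b"]], data["approved"])

# Two applicants with identical incomes, differing only by group:
# the model reproduces the historical disparity it was trained on.
applicants = pd.DataFrame({"income": [45, 45], "group_b": [0, 1]})
print(model.predict_proba(applicants)[:, 1])  # group B gets a lower score
```

Nothing in the algorithm itself is prejudiced here; the bias arrives entirely through the labels, which is why the quality of training data recurs throughout the report.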

There are cognitive biases, unconscious errors in thinking that affect an individual’s judgements and decisions, which arise from the brain’s attempt to simplify the processing of information about the world. To date, more than 180 human biases have been defined and classified by psychologists. It is quite possible, inevitable even, for cognitive biases to seep into machine learning algorithms, either through designers unknowingly introducing them or through the use of training data sets that include those biases.

A lack of complete data can also cause bias, especially if that data is derived from a specific group and does not represent the broader population.

Business risk

Bias is not without financial or reputational risk. According to the State of AI Bias report, one in three (36%) organisations have experienced challenges or direct business impacts due to an occurrence of AI bias in their algorithms. These impacts included lost revenue, lost customers, lost employees, legal action and damaged brand reputation.

So, while AI has the potential to deliver tremendous value to businesses, it is also proving problematic when it comes to accurately representing entire populations.

But while concern around AI bias has risen, what it actually means to be fair in decision-making is an incredibly complex question.

According to DataRobot, AI needs to be both trusted and explainable, and to be seen to be fair and unbiased.

“DataRobot’s research shows that the line of what is and is not ethical when it comes to AI solutions has been too blurry for too long,” said Kay Firth-Butterfield, Head of AI and Machine Learning, World Economic Forum. “The CIOs, IT directors and managers, data scientists, and development leads polled in this research clearly understand and appreciate the gravity and impact at play when it comes to AI and ethics.”

Yet, while many organisations say they want to eliminate bias from their algorithms, they are struggling to do so.

Over half (54%) of technology leaders said that they are very or extremely concerned about bias, seeing it as having a negative impact in the form of eroded customer trust, compromised brand reputation, increased regulatory scrutiny and a loss of employee trust. When asked about the types of inadvertent discrimination that had been identified, almost a third said these included gender, age and racial discrimination.

The research did find that 77 per cent of organisations had an AI bias or algorithm test in place prior to discovering forms of bias, but many accepted that they needed to re-evaluate those tests and conceded that there were still real challenges when it comes to eliminating bias.

Understanding bias requires an understanding of how a specific AI decision was reached, and of the patterns that exist between input values and the decisions a model makes.
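One common way to surface those patterns, offered here as a hedged sketch rather than anything prescribed by the report, is permutation importance: shuffle each input feature in turn and see how much the model’s performance degrades. The toy data and column names below are invented.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Invented data: which inputs do the model's decisions depend on?
X = pd.DataFrame({
    "income":  [30, 45, 60, 75, 30, 45, 60, 75],
    "group_b": [0, 0, 0, 0, 1, 1, 1, 1],
})
y = [1, 1, 1, 1, 0, 0, 1, 1]

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffling a feature breaks its relationship with the decision; the
# bigger the resulting drop in score, the more the decision relies on
# that feature. A protected attribute (or a proxy for it) ranking
# highly here is a red flag worth investigating.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, mean in zip(X.columns, result.importances_mean):
    print(f"{name}: {mean:.3f}")
```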

According to data experts, bias will only be removed by developing trustworthy algorithms and determining what data is used to train AI.

“The market for responsible AI solutions will double in 2022,” said Forrester VP and Principal Analyst Brandon Purcell in his report Predictions 2022: Artificial Intelligence. “Responsible AI solutions offer a range of capabilities that help companies turn AI principles such as fairness and transparency into consistent practices. Demand for these solutions will likely double next year as interest extends into all enterprises using AI for critical business operations.”

According to Ted Kwartler, VP of Trusted AI at DataRobot, “The core challenge to eliminate bias is understanding why algorithms arrive at certain decisions in the first place. Organisations need guidance when it comes to navigating AI bias and the complex issues attached. There has been progress, including the EU proposed AI principles and regulations, but there’s still more to be done to ensure models are fair, trusted and explainable.”

Starting to fix the problem

So how do you go about fixing algorithms and the data that’s used?

IT professionals point to AI platform features as a way to better detect bias, with “guardrails” that automatically detect bias in datasets cited as an important feature when choosing an AI platform.

According to the report, almost all those questioned said that “platforms with standardised workflow and automated bias detection features can reduce instances of human bias and error.”
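The report does not describe how such guardrails are implemented, but a minimal sketch of one common automated check, the demographic parity gap between groups in a model’s outputs, might look like this (the function, threshold and column names are all hypothetical):

```python
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, outcome: str, group: str) -> float:
    """Spread between the highest and lowest positive-outcome rates
    across the categories in `group`; 0.0 means perfect parity."""
    rates = df.groupby(group)[outcome].mean()
    return float(rates.max() - rates.min())

# Hypothetical model decisions with a hypothetical sensitive attribute.
decisions = pd.DataFrame({
    "approved": [1, 1, 0, 1, 0, 0, 1, 0],
    "gender":   ["m", "m", "m", "m", "f", "f", "f", "f"],
})

gap = demographic_parity_gap(decisions, "approved", "gender")
print(f"demographic parity gap: {gap:.2f}")   # 0.50 on this toy data
if gap > 0.2:  # the threshold is a policy choice, not a universal rule
    print("guardrail tripped: review this model before deployment")
```

Real platforms layer many such checks, on labels, predictions and input distributions, behind a standardised workflow, which is what makes them harder for individual human error to bypass.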

Many of those questioned also said that there was a role for government regulation, like the EU’s newly proposed AI principles, in helping organisations mitigate AI bias. In fact, 81% of respondents think government regulation would be helpful in defining and preventing AI bias. However, respondents also expressed concerns about regulation, with almost half worried that increased AI regulation would raise their company’s costs and make AI adoption more difficult.

Without regulation, though, a third were worried that AI would have harmful effects on protected classes of people.

In addressing the issue of bias there is a need first to examine the training dataset (is it representative and large enough to prevent common biases such as sampling bias?) and then to monitor the model over time, since algorithms will change as they learn or as new training data is added.
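As a hedged illustration of that first step, a representativeness check can be as simple as comparing group shares in the training data against a reference population; the age bands and figures below are invented.

```python
import pandas as pd

# Share of each (invented) age band in the training data versus the
# reference population the deployed model is meant to serve.
training_share = pd.Series({"18-34": 0.70, "35-54": 0.25, "55+": 0.05})
population_share = pd.Series({"18-34": 0.30, "35-54": 0.40, "55+": 0.30})

# Ratios far from 1.0 flag over- or under-sampled groups.
ratio = (training_share / population_share).round(2)
print(ratio)  # 18-34: 2.33, 35-54: 0.62, 55+: 0.17

under_sampled = ratio[ratio < 0.5].index.tolist()
if under_sampled:
    print("under-represented groups:", under_sampled)  # ['55+'] here
```

The same comparison, rerun on each batch of new training data, is one simple way to monitor the model’s inputs over time.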

There is also a need for a debiasing strategy: a portfolio of technical, operational and organisational actions that can help identify potential sources of bias and reveal any traits that could affect the accuracy of the model.

In terms of operational strategies, companies are being urged to improve the collection of data and to use an auditor to monitor what data is collected and how.

Critically, the process needs to be transparent, and human-driven processes need to be improved. In the course of building AI models, companies can identify biases and use this knowledge to understand their causes. Through training, improved process design and cultural change, companies will be able to actively reduce bias.

But at the end of the day, eliminating bias will require a multidisciplinary strategy. Ethicists, social scientists and domain experts will all be needed, and companies will have to look at employing such specialists in their AI projects.

Crucially, diversity in the AI community will help to identify and address biases. Those who experience discrimination in real life will be the first to notice bias issues, so maintaining a diverse AI team can help mitigate unwanted AI biases.

DataRobot’s report found that 70 per cent of organisations were conducting data quality checks to avoid AI bias, while half were training employees on AI bias, hiring an AI bias or ethics expert and/or measuring AI decision-making factors.

Technology leaders are also evaluating the third-party systems they use, with over 80 per cent requiring their suppliers to provide evidence that their systems are not biased.

The repercussions of AI bias are significant, and companies have a lot to lose by failing to address it. Organisations need to be responsible and ethical when leveraging AI but also need to have the resources that will ensure the success of such efforts.

Leaders must ensure the products they use to identify and prevent bias have guardrails, and they must educate employees on what types of data to use and when, as well as create guidelines for the entire company to adhere to.

With this holistic approach, biases in AI algorithms can be diminished, but human involvement in AI systems will remain essential. By using AI experts who understand both sides of the human-AI coin, organisations will be able to ensure that AI is free from human flaws, and humans are free from AI biases.