Natalie Cramp, CEO of data science company Profusion, discusses how bias found in AI-driven initiatives is a symptom of larger problems.

Over the last few years we’ve heard many stories of AI-driven systems being inherently discriminatory. From Amazon’s sexist recruitment algorithms to Nazi-sympathising chatbots, there is growing evidence that all is not quite right within the world of AI. The recent BLM protests have placed renewed focus on issues of diversity and inclusion. This has naturally led to fears that, unless something is done, there could be a wave of AI tech that perpetuates inequality. However, identifying the issue is a far cry from solving or even understanding the problem. The reality is that tackling bias in AI is a much more complex and difficult challenge than most people realise.

The first thing to understand is that AI – or, more accurately, machine learning – is not a sentient being. We are many decades away from that level of AI. What is currently used is essentially a series of algorithms, designed by data scientists, that analyse datasets and ‘learn’ as they go to become more accurate. The outputs of AI are a reflection of the data it ingests. Therefore, if the data itself is biased, then so too is the AI.
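
To make this concrete, here is a minimal Python sketch – entirely synthetic and hypothetical, using scikit-learn purely for illustration – of how a model trained on biased historical hiring decisions simply reproduces that bias:

```python
# A minimal, hypothetical sketch: the bias lives in the historical
# labels, not in the algorithm itself. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Candidate skill scores, drawn identically for two groups.
skill = rng.normal(50, 10, n)
group = rng.integers(0, 2, n)  # 0 or 1, standing in for a protected attribute

# Historical "hired" labels: group 1 was held to a higher bar.
hired = (skill > 50 + 5 * group).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# The model "learns" the discrimination and applies it to new candidates.
test = np.array([[55, 0], [55, 1]])  # identical skill, different group
print(model.predict_proba(test)[:, 1])  # group 1 receives a lower score
```

The algorithm has done exactly what it was asked to do: it has faithfully modelled the patterns in the data, including the discriminatory ones.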

So how do we make the data less biased? Sadly, there is no quick fix. Data will always carry some form of bias. Even if you were to erase potentially morally objectionable variables, such as gender and race, from a dataset so that an AI algorithm cannot use them as predictors, other variables in the data are likely to correlate with those factors. The algorithm will then make predictions based on these proxy data points and end up producing answers that are influenced by factors people may find deeply uncomfortable.
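
A hedged illustration of this proxy problem, again with entirely made-up data: even after the protected attribute is removed, a correlated feature – here an invented ‘postcode’ signal – lets the bias straight back in:

```python
# A hypothetical sketch of proxy variables. The protected attribute is
# dropped from training, but a correlated feature carries it back in.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000

group = rng.integers(0, 2, n)               # protected attribute
postcode = group + rng.normal(0, 0.3, n)    # strongly correlated proxy
skill = rng.normal(50, 10, n)

# Biased historical labels, as before: group 1 held to a higher bar.
hired = (skill > 50 + 5 * group).astype(int)

# Train WITHOUT the protected attribute -- only skill and postcode.
X = np.column_stack([skill, postcode])
model = LogisticRegression().fit(X, hired)

# Otherwise-identical candidates are still scored differently, because
# postcode predicts group almost perfectly.
test = np.array([[55, 0.0], [55, 1.0]])  # same skill, proxy differs
print(model.predict_proba(test)[:, 1])
```

Simply deleting the sensitive column, in other words, does not delete the sensitive information.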

This is not to say that AI is in any way a lost cause. We just need to think about the problem in both the short and the long term. We can start by recognising that AI is at the start of its journey and approaching its results with a degree of scepticism. Just because an output was produced by sophisticated algorithms does not mean it should override human judgement. Knowing that the way an AI behaves could be discriminatory will enable those who act on its outputs to tread with caution.

In some cases we can also lessen the chance of sampling bias. Organisations often rely solely on their own customer data to fuel their algorithms. That data is naturally limited to a representation of the organisation’s existing customer base. In certain circumstances this sampling bias can lead to groups of people being misrepresented and subsequent analysis – including the outputs of an AI algorithm – becoming discriminatory. By supplementing the data with information collected from more diverse sources, a more representative sample of the population at large can be gathered and, as a result, the chance of biased outputs reduced.
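
As a rough sketch – with entirely invented numbers – of how a skewed customer sample distorts an estimate, and how supplementing it with a broader, representative sample pulls the result back towards the truth:

```python
# A hypothetical illustration of sampling bias. A company's own customer
# data over-represents one group, so statistics built on it skew; adding
# a representative external sample moves the estimate back towards the
# population. All figures are invented for illustration.
import numpy as np

rng = np.random.default_rng(2)

# True population: a 50/50 split between two groups with different
# average spend (120 vs 80), so the true mean spend is 100.
def sample_population(n, p_group1):
    group = rng.random(n) < p_group1
    return np.where(group, 120, 80) + rng.normal(0, 10, n)

customer_data = sample_population(1000, p_group1=0.9)  # skewed sample
external_data = sample_population(1000, p_group1=0.5)  # representative

print(customer_data.mean())  # ~116: a biased estimate of the true 100
print(np.concatenate([customer_data, external_data]).mean())  # closer to 100
```

The same logic applies to a model trained on that data: the more representative the sample it sees, the less it misrepresents the groups it under-sampled.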

Next, we can look at the people who create the algorithms that underpin AI tools. As is the case in most parts of the tech sector, data science is dominated by white men. Through no fault of their own, they can inadvertently design tools or create models that are inherently discriminatory. This is simply down to a lack of different perspectives and experiences. By diversifying data teams we can bring in more views that will help to identify applications of AI that could be problematic before they leave the drawing board.

Of course, both these approaches will only help us go so far in eradicating biased AI. Long term, AI will only be free from these problems if society itself is less discriminatory. AI is, after all, a reflection of the data and the data is a representation of society. We cannot have ‘moral’ AI until society itself is fair and just.

