AI represents one of the biggest opportunities for economic growth, now and in the future.

It can be used to augment human performance and capabilities, bolster online and digital security, develop unique and customised products and services, and create completely new opportunities we would never have conceived of on our own.

But AI is an opportunity that is open to everyone, meaning that if we underuse it, we may get left behind and lose opportunities to competitors, on both a micro- and macro-economic scale. It’s a balancing act, though, because there is also the risk of overuse or misuse, which can feed fears, misinformation, misplaced concerns or excessive reactions, leading us as a society to use AI technologies below their full potential.

KTN is a UK organisation dedicated to facilitating collaboration between industry and academia to accelerate R&D and innovation across a range of sectors. In AI, KTN’s mission has been to accelerate the adoption of AI in the UK public and private sectors. This mission, led by Dr Caroline Chibelushi, one of the UK’s leading experts in AI ethics, involves creating partnerships between the suppliers and consumers of AI. Through these partnerships, a general sense of fear, a lack of trust and concerns about ethical issues in the deployment of AI technologies and services have been uncovered.

Research commissioned by Microsoft (2019) confirms that AI adoption in the UK is exceptionally slow, no doubt due to these concerns, and as such the UK risks compromising its competitiveness. According to one estimate, if the current rate of adoption continues, the UK economy could miss out on £315 billion by 2035 (UKTech).

But are these fears unfounded? More than 180 human biases have been defined and classified, and although one of AI’s biggest strengths is removing human error from simple processes, AI systems are also built by humans, and we have unknowingly exported many of our biases into them. As the old IT saying goes: garbage in, garbage out.

A recent example of this phenomenon is the reckless judgements made by the 2020 A-level results algorithm. The algorithm was programmed to use a ranking measure; however, ranking measures are not robust, nor are they recommended by statisticians. In this case, not only was the processing flawed to start with, but testing also found the accuracy of the model to be low (50–60%) due to other data problems. Yet the algorithm was allowed to generate results, which proved to be biased against pupils from disadvantaged areas, who were disproportionately the hardest hit (NS Tech).
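To see why rank-based allocation troubles statisticians, consider the deliberately simplified sketch below (not Ofqual’s actual model), in which each pupil’s grade is determined entirely by their teacher-assigned rank and the school’s historical grade distribution:

```python
# A deliberately simplified sketch of rank-based grade allocation, NOT
# Ofqual's actual model: each pupil's grade is fixed by their teacher-assigned
# rank and the school's historical grade distribution alone.
import numpy as np

def allocate_grades(ranked_pupils, historical_distribution, grades):
    """ranked_pupils: names ordered best-first by teacher ranking.
    historical_distribution: fraction of past pupils awarded each grade
    at this school. grades: grade labels, best-first."""
    n = len(ranked_pupils)
    counts = np.round(np.array(historical_distribution) * n).astype(int)
    allocated, i = {}, 0
    for grade, count in zip(grades, counts):
        for pupil in ranked_pupils[i:i + count]:
            allocated[pupil] = grade
        i += count
    for pupil in ranked_pupils[i:]:  # rounding remainder gets the lowest grade
        allocated[pupil] = grades[-1]
    return allocated

# A school that historically awarded no A grades can never produce one.
print(allocate_grades(["Amira", "Ben", "Cal", "Dee"],
                      [0.0, 0.5, 0.5], ["A", "B", "C"]))
```

Under a scheme like this, a strong pupil at a school that historically awarded no top grades can never receive one, whatever their individual attainment.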

Another way we unknowingly build bias into AI systems is through training data. Many AI systems are currently trained on images from ImageNet; however, two thirds of the images in ImageNet come from the Western world (USA, England, Spain, Italy, Australia) (Shankar et al., 2017). Yet these AI tools are more often than not applied to and for people of other races and cultures, both within the Western world itself and across the rest of the world.
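A first step towards catching this kind of skew is simply auditing where the training data comes from. Here is a minimal sketch, assuming each image record carries a (hypothetical) `country` metadata field:

```python
# A minimal sketch of a geographic representation audit; the `country`
# field and the country codes below are illustrative assumptions.
from collections import Counter

def geographic_skew(records, western=("US", "GB", "ES", "IT", "AU")):
    counts = Counter(r["country"] for r in records)
    total = sum(counts.values())
    western_share = sum(counts[c] for c in western) / total
    return counts, western_share

records = [{"country": "US"}, {"country": "GB"}, {"country": "IN"},
           {"country": "US"}, {"country": "NG"}, {"country": "IT"}]
counts, share = geographic_skew(records)
print(f"Western share of training images: {share:.0%}")  # 67%
```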

Likewise, AI models and algorithms have been widely adopted in a variety of decision-making scenarios, such as criminal justice, traffic control, financial loans and medical diagnosis. This proliferation of AI-based automatic decision-making systems introduces potential risks in many areas, including safety and fairness.

Bias in AI systems is a by-product of cognitive biases in humans. To stop it, AI approaches require detailed ethical investigation to understand their positive and negative impacts on people and society, just as much as their commercial benefits and efficiency gains.

So, in short, no, the fears are not unfounded. With AI so fundamental to our everyday lives, we need to ensure we uncover and eliminate every bias we possibly can, and the way to do that is through inclusion at every step. Safe and secure development and adoption of AI may reduce the fear, increase trust and accelerate adoption, because responsible and explainable AI will allow users to understand why and how AI algorithms reached their conclusions.
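Explainability need not be exotic. Even a simple technique such as permutation importance, sketched below under the assumption that the model is exposed as a plain prediction function, gives users a first answer to “which inputs drove this decision?”:

```python
# A minimal sketch of permutation importance, one simple explainability
# technique: how much does accuracy drop when each feature is shuffled?
# `model` is assumed to be a callable returning class predictions.
import numpy as np

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    rng = np.random.default_rng(seed)
    baseline = (model(X) == y).mean()
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # destroy feature j's information
            drops.append(baseline - (model(Xp) == y).mean())
        importances.append(float(np.mean(drops)))
    return importances  # larger drop = feature mattered more
```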

What AI needs is a framework that helps to identify tools that are biased and unsafe. This framework would extract information about the team that formulated the idea, the developers, the data used to train the system, where and how the system was tested, the audiences it was tested on, and whether information about these processes is transparent and available.
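One possible shape for such a framework’s record, in the spirit of a “model card”, is sketched below; every field name is illustrative rather than an existing standard:

```python
# A sketch of one possible assurance record for an AI system; all field
# names are hypothetical, chosen to mirror the framework described above.
from dataclasses import dataclass

@dataclass
class AIAssuranceRecord:
    idea_team: list[str]              # who formulated the idea
    developers: list[str]             # who built the system
    training_data_sources: list[str]  # where the training data came from
    bias_checks_performed: list[str]  # e.g. demographic representation audits
    test_locations: list[str]         # where and how the system was tested
    test_audiences: list[str]         # who the system was tested on
    publicly_documented: bool         # is all of the above transparent?

    def red_flags(self) -> list[str]:
        flags = []
        if not self.bias_checks_performed:
            flags.append("no documented bias validation")
        if len(set(self.test_audiences)) < 2:
            flags.append("tested on a narrow audience")
        if not self.publicly_documented:
            flags.append("process not transparent to consumers")
        return flags
```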

But for this framework to be a success, it needs a level of rigour. For that we can look to the drug discovery process.

For a drug to be allowed onto the market, it goes through preclinical trials, animal trials and human clinical trials. AI will be so critical to our lives going forward, and will have so much power and potential to do damage, that it should go through a similar set of tests before reaching the market.

In this context, preclinical trials would be the use of adversarial models, which are currently proving capable of reducing bias in AI models. Animal trials would be an investigation into how the data was collected, the type of data, the level of awareness of the types of bias that could be contained within the data, the level of inclusivity of the data, and whether it has been validated for biases.
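As an illustration of the “preclinical” stage, here is a minimal sketch of one common adversarial debiasing scheme, assuming a binary task label y and a binary protected attribute z; the architecture and names are illustrative, not a prescribed implementation:

```python
# A minimal sketch of adversarial debiasing: an adversary tries to recover
# the protected attribute z from the predictor's output, and the predictor
# is trained to defeat it. Shapes and hyperparameters are illustrative.
import torch
import torch.nn as nn

predictor = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
adversary = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
opt_p = torch.optim.Adam(predictor.parameters(), lr=1e-3)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
lam = 1.0  # how strongly fairness is traded against accuracy

def train_step(x, y, z):
    # 1) Adversary learns to recover z from the predictor's output.
    z_hat = adversary(predictor(x).detach())
    loss_a = bce(z_hat, z)
    opt_a.zero_grad(); loss_a.backward(); opt_a.step()

    # 2) Predictor minimises task loss while *maximising* the adversary's
    #    loss, so its outputs carry as little information about z as possible.
    y_hat = predictor(x)
    loss_p = bce(y_hat, y) - lam * bce(adversary(y_hat), z)
    opt_p.zero_grad(); loss_p.backward(); opt_p.step()

x = torch.randn(64, 10)                   # features
y = torch.randint(0, 2, (64, 1)).float()  # task labels
z = torch.randint(0, 2, (64, 1)).float()  # protected attribute
for _ in range(100):
    train_step(x, y, z)
```

If the adversary cannot beat chance at recovering z after training, the predictor’s outputs carry little information about the protected attribute.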

The human clinical trial stage is literally that: human involvement, i.e. the teams involved in the ideation, development, validation and testing of the AI system. The results of these three stages should be transparent, published and available for the AI consumer to examine before purchasing the AI system.

Compared to the US, China and Germany, UK government investment in AI is low. However, we lead in research and innovation, and therefore have a clear opportunity to become a world leader in the development and adoption of explainable AI. If we treat AI development with the rigour it deserves, the same rigour we expect from the pharmaceutical industry, we will be able to reduce fear, bias and unsafe AI. We will build trust and enable the smooth and fast adoption of AI in the UK.

Critically, AI is capable of reducing human bias in society, but only if we stop humans from exporting their biases to AI systems.

About the author

Dr Caroline Chibelushi takes a vision and makes it a reality through sound strategy development.

She intuitively sees the threads of opportunity that wind through an organisation, brings them together into a coherent whole, helps others extend their thinking, and drives innovation into business for competitive advantage. Her contributions to AI include leading an initiative to increase the number of women in AI. She is the founder and executive chair of the UK Association for AI, which promotes responsible, ethical and sustainable AI.

