AI-powered tools and machine learning are rapidly becoming the hot topics of 2023, with headlines around generative AI such as ChatGPT waking the wider public up to the potential of the technology even as businesses race to leverage AI to improve efficiency, writes Giulia Ometto, Management Consultant at Bip xTech.

But this gold rush is not without its issues. One problem in particular is the unintended bias that can be present – and often well hidden – within the AI tools we are coming to rely on. Gender bias is one facet of this and, if not addressed, could have a significant negative impact on our progress toward gender equality.

So, where does this bias come from, how can we address it, and is there a way AI can contribute to gender equality if used correctly?

The newborn savant and the case for ethical AI development

When speaking about machine learning (ML) algorithms, we often liken them to newborn children who have been given many books to read but do not yet know how to assimilate them. The problem is one of context.

This, however, raises the question of the ethical responsibilities of the creators, because the ethical context underpinning society is something that children largely learn through osmosis from parents, family and wider society. For an ML algorithm, its developers and researchers are its ‘parents’ and ‘family’, and these developers therefore need to prioritise ethical considerations in AI development just as parents do. Bias in AI used in recruitment, HR or even policing is a worrying prospect and needs to be addressed.

However, ethical AI development is eminently achievable with the right mindset. It involves conducting thorough bias assessments to identify and mitigate potential biases during the development process. The AI community should also strive for transparency by documenting the limitations and potential biases of AI systems because, as has been amply demonstrated, AI is not naturally fair or gender-neutral.
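As a concrete illustration of what a basic bias assessment might involve, the sketch below (a simplified example rather than a prescribed method; the column names, data and 0.8 threshold are assumptions) compares the selection rates of a hypothetical hiring model across gender groups:

```python
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, pred_col: str) -> pd.Series:
    """Share of positive predictions (e.g. shortlisted candidates) per group."""
    return df.groupby(group_col)[pred_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Ratio of the lowest to the highest selection rate.
    Values well below ~0.8 (the 'four-fifths' rule of thumb) are often
    treated as a signal worth investigating, not a definitive verdict."""
    return rates.min() / rates.max()

# Hypothetical model output: 1 = shortlisted, 0 = rejected
preds = pd.DataFrame({
    "gender":      ["F", "F", "F", "F", "M", "M", "M", "M"],
    "shortlisted": [0,   1,   0,   0,   1,   1,   1,   0],
})

rates = selection_rates(preds, "gender", "shortlisted")
print(rates)                          # selection rate per gender
print(disparate_impact_ratio(rates))  # flags a gap if well below 0.8
```

Checks like this are cheap to run at every stage of development; the harder work is deciding which fairness measure matters for a given use case and what to do when it is violated.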

At the same time, businesses should promote diversity within AI development teams. Teams that include people of different genders, backgrounds and experiences are better placed to identify and address biases effectively.

Strong foundations – why diverse and representative data is essential

A diverse and representative team involved in AI development is essential to ethical AI, but equally important is using diverse and representative data to train AI models. Gender bias in AI often arises from biased or incomplete training data, as an AI trained on biased data will naturally inherit this bias.

Ensuring a wide range of perspectives in the training data is key to fair and equitable algorithms.
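To make that concrete, one simple first check (an assumed, illustrative approach rather than anything prescribed here) is to compare group proportions in the training data against a reference population and flag groups that fall short:

```python
from collections import Counter

def representation_gaps(labels, reference, tolerance=0.05):
    """Compare group shares in the training data with a reference
    distribution and return groups under-represented by more than `tolerance`."""
    counts = Counter(labels)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference.items():
        observed = counts.get(group, 0) / total
        if expected - observed > tolerance:
            gaps[group] = {"expected": expected, "observed": round(observed, 3)}
    return gaps

# Hypothetical training sample vs. an assumed 50/50 reference population
training_genders = ["M"] * 700 + ["F"] * 300
print(representation_gaps(training_genders, {"M": 0.5, "F": 0.5}))
# -> {'F': {'expected': 0.5, 'observed': 0.3}}
```

Representation alone does not guarantee fairness, but a gap like the one above is an early warning that the model will see far more examples of one group than another.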

On the subject of fairness in data, the Harvard Business Review reports that “ML is teaching us that fairness is not simple but complex and that it is not an absolute but a matter of trade-offs.” In a way, ML is confronting us with issues and questions that we have left unanswered for too long already.

We are now asking what fairness means and how it should be pursued in each situation, with a new level of precision and a new vocabulary, and we are being spurred to experiment with adjustments to find the best way to optimise a system for the values we care about. Improving training data to remove bias delivers more than just ethical AI; it leads to more ethical approaches to research as a whole.
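One way to see what “a matter of trade-offs” means in practice is the toy sketch below, with entirely made-up scores, labels and thresholds: shifting the decision threshold separately for each group narrows the gap in selection rates, but typically at some cost to overall accuracy, and the “right” balance depends on which values we choose to optimise for.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model scores and true outcomes for two groups (purely illustrative)
scores_a = rng.normal(0.60, 0.15, 500)
scores_b = rng.normal(0.50, 0.15, 500)
labels_a = (scores_a + rng.normal(0, 0.1, 500)) > 0.55
labels_b = (scores_b + rng.normal(0, 0.1, 500)) > 0.55

def evaluate(threshold_a, threshold_b):
    """Return overall accuracy and the gap in selection rates between groups."""
    pred_a, pred_b = scores_a > threshold_a, scores_b > threshold_b
    accuracy = np.mean(np.concatenate([pred_a == labels_a, pred_b == labels_b]))
    parity_gap = abs(pred_a.mean() - pred_b.mean())
    return round(float(accuracy), 3), round(float(parity_gap), 3)

# One shared threshold vs. group-specific thresholds: watch the trade-off shift
print("same threshold:", evaluate(0.55, 0.55))
print("adjusted:      ", evaluate(0.58, 0.52))
```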

This new awareness is a big step forward, making us more scrupulous in our work and potentially having a positive impact on society more broadly.

From challenge to opportunity – how AI can advance gender equality

Used correctly, however, ML can be a powerful tool to advance gender equality. For example, AI-powered tools can aid in detecting and addressing gender-based violence, supporting gender-sensitive healthcare, and promoting equal opportunities in education and employment. By consciously designing AI systems with gender equality goals in mind, we can leverage technology to create positive social change.

Governments and regulatory bodies play a crucial role in this process, establishing guidelines and standards that address gender bias and promote fair and equitable AI systems. Regulation should include provisions for transparency, accountability, and audits of AI systems to ensure compliance with ethical principles.

Gender bias in AI is a multi-faceted problem and, as such, will require a multi-dimensional approach involving data, development practices, user feedback and regulatory measures to be adequately addressed.

However, it must be recognised that gender bias is just one of the biases AI can exhibit, and the same scrutiny needs to apply to all the other biases that hinder progress towards a more diverse and inclusive society. If ethical AI development becomes the norm, the benefits to society as well as business could be significant.