
Article by Heather Dawe, Head of Data at UST.

Democratising AI is the term for the ways in which we seek to ensure that the development and delivery of Artificial Intelligence (AI) are available to all.

In today’s technology-driven market this is a common sales pitch for data science and AI platforms, some of which strive to make the development of AI accessible to people other than experienced data scientists, making it easier, faster and cheaper for businesses to leverage and benefit from the implementation of AI.

While speed, ease and cost are important arguments for democratising AI, I would argue that they are not the most important, and that they carry their own risk: there is also a quality angle that must be considered. While it is straightforward to develop a machine learning model using a democratic AI platform, it can be very difficult for the person developing it to understand whether the model is fit for purpose. Assessing the quality of a machine learning model can quickly become highly technical and nuanced, requiring a trained data scientist.
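To illustrate why a single headline metric can mislead a citizen data scientist, here is a minimal sketch using made-up numbers: on an imbalanced problem (say, fraud detection where only 5% of cases are fraud), a "model" that simply predicts the majority class every time scores an impressive-looking accuracy while catching nothing at all.

```python
# Illustrative sketch with invented data: 1,000 cases, only 50 positives (5%).
labels = [1] * 50 + [0] * 950  # 1 = fraud, 0 = legitimate

# A naive "model" that always predicts the majority class (never fraud).
predictions = [0] * len(labels)

# Accuracy looks excellent...
accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)

# ...but recall on the minority class reveals the model is useless.
true_positives = sum(p == 1 and y == 1 for p, y in zip(predictions, labels))
recall = true_positives / sum(labels)  # fraction of actual fraud caught

print(f"accuracy: {accuracy:.0%}")  # 95% -- looks fit for purpose
print(f"recall:   {recall:.0%}")    # 0% -- catches no fraud whatsoever
```

Spotting this kind of pitfall, and knowing which of the many evaluation metrics matter for a given use case, is exactly the nuance that typically requires a trained data scientist.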

Machine learning models that are not fit for purpose do not just mean poor-quality AI services; they can lead to adverse outcomes such as inaccurate recommendations, wrong decisions or prejudiced behaviours. If such poor quality leads to Amazon recommending me a fly-fishing book when I have absolutely no interest in fly-fishing, that is not much of a problem. However, if the outcome is a wrong clinical decision or seemingly racist behaviour by a social media platform, the impact can be severe. While I am taking this argument to extremes to illustrate the risks, it is important to reflect on the relative impact of poor-quality AI. There are times when a lower-quality AI service matters less than others, and in those cases the cost savings facilitated by democratic AI platforms can be highly beneficial.

In my view, democratising AI goes beyond enabling citizen data scientists to develop machine learning models and AI using technology platforms. It is also very important to democratise further by ensuring diversity among the data scientists who develop AI.

The effectiveness of AI can be significantly impacted by bias. One source of bias in AI is the data that machine learning models are trained on. While it is very important to be aware of these biases and try to eliminate them from the data, they can be hard to control because they reflect the biases and prejudices that exist in society.
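One simple, widely used check for this kind of bias is to compare a model's positive outcomes across demographic groups (often called demographic parity). The sketch below uses entirely made-up data and hypothetical group labels, purely to show the shape of the check:

```python
from collections import defaultdict

# Invented example: (demographic group, was the application approved?)
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

approved = defaultdict(int)
total = defaultdict(int)
for group, ok in outcomes:
    total[group] += 1
    approved[group] += ok

# Approval rate per group.
rates = {g: approved[g] / total[g] for g in total}
print(rates)  # group_a is approved 75% of the time, group_b only 25%

# The gap between the best- and worst-treated groups.
disparity = max(rates.values()) - min(rates.values())
print(f"demographic parity gap: {disparity:.0%}")
```

A large gap like this does not prove the model is unfair on its own, but it is a strong signal to investigate whether the training data encodes the societal biases described above.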

Another source of bias is the developers themselves – the data scientists. Given that the global data scientist workforce is currently predominantly male, this is an acknowledged issue. And it goes beyond gender. One of the key elements in ensuring the AI we develop is fair for all is actively training and recruiting a data science workforce that reflects the equality in society we are striving for. Sometimes this will be through positive discrimination: helping those with less access to high-quality education, which itself discriminates, and actively seeking to ensure that under-represented groups are present within data science communities.