Agata Nowakowska, Area Vice President EMEA at Skillsoft

A couple of years ago, AI seemed the ideal solution for remedying the temporary lapses in judgement, unforced errors and gut-instinct impulsiveness that are part and parcel of the human condition.

As AI adoption accelerated, it seemed as though high-stakes decisions were increasingly being delegated to AI systems. Suddenly, algorithms were determining everything from someone’s suitability for a job role to whether they’d be offered a university place or have an application for credit accepted.

Before long, however, a growing awareness of bias in AI systems began to raise some disquieting concerns. The resulting soul searching led to heated debates about whether organisations using AI systems were trading fairness for consistency, or compromising social justice in their pursuit of streamlined efficiency.

Suddenly, it seemed like we had all fallen out of love with AI.

The problem with technology bias

AI systems are versatile, accurate, reliable, autonomic (self-correcting), fast and affordable, which is why some 64% of today’s businesses now depend on them for productivity growth. But in the rush to take advantage of the benefits this technology confers, organisations have learned the hard way that depending exclusively on AI systems is a risky business proposition if bias isn’t kept in check.

The problem is that AI applications can be just as unfair, prejudiced or discriminatory as the humans who create them, an issue not helped by the fact that the development community remains predominantly composed of white males. And when AI systems make mistakes, the scale and scope at which they operate mean the consequences affect a significant number of people.

Awareness is growing that the machine learning (ML) used to train AI systems represents a key entry point for bias. For example, the data sets selected for ML training can create an echo chamber that amplifies bias. Similarly, historical data used to train AI systems will reflect the prevalent thinking and cultural mores of an era.
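To make that concrete, here is a minimal Python sketch using entirely synthetic data: a classifier trained on historically skewed hiring decisions simply learns to reproduce the skew. The feature names and numbers are invented for illustration only, not drawn from any real system.

```python
# Illustrative only: synthetic data, hypothetical features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# One protected attribute (0/1) and one genuinely job-relevant skill score.
group = rng.integers(0, 2, size=n)
skill = rng.normal(size=n)

# Historical labels: the same skill threshold for everyone, but group 1 was
# hired far less often regardless of skill -- the bias we do not want learned.
hired = ((skill > 0) & ((group == 0) | (rng.random(n) < 0.3))).astype(int)

X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, hired)
preds = model.predict(X)

for g in (0, 1):
    print(f"group {g}: predicted hire rate = {preds[group == g].mean():.2f}")
# The gap between the two rates is the historical bias, carried forward.
```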

With experience comes wisdom

AI systems have proved highly successful at tackling a variety of complex workplace and public safety challenges – whether that is handling hazardous situations using AI-guided robots to fight fires, disable bombs or clean up chemical spills, or, more recently, helping millions of people access digital banking services during the coronavirus pandemic.

To successfully harness the potential of AI, however, organisations will need to ensure that their AI systems do not repeat the mistakes of the past. In other words, they must apply the lessons learned about the disruptive impact of bias to achieve fairer and more equitable outcomes for all.

For example, Amazon was forced to ditch an automated AI recruitment screening tool after discovering, back in 2015, that it favoured men for technical jobs and penalised women. The in-house programme had been developed using data accumulated from CVs submitted over the previous decade, which reflected the dominance of men across the tech industry. The firm now uses a much watered-down version of the recruiting engine to help with rudimentary chores like culling duplicate candidate profiles from databases.

Restoring trust in algorithms and AI systems: the top steps to take

Delivering on the promise of AI starts with creating fairness metrics and measuring them at each step of the technology development process: design, coding, testing, feedback, analysis, reporting and risk mitigation.
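As an illustration of what such a metric might look like in practice, the short Python sketch below computes demographic parity difference, the gap in selection rates between two groups. The function and variable names are illustrative, not taken from any particular toolkit.

```python
# Minimal sketch of one common fairness metric: demographic parity difference.
import numpy as np

def demographic_parity_difference(predictions, protected_attribute):
    """Absolute gap in selection rate between groups (0 means parity)."""
    preds = np.asarray(predictions)
    groups = np.asarray(protected_attribute)
    rates = [preds[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

# Example: selection decisions (1 = selected) for applicants from two groups.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 0, 0]
group     = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
print(demographic_parity_difference(decisions, group))  # 0.6 vs 0.2 -> 0.4
```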

This should include creating design models that test AI systems and challenge results, using approaches like counterfactual testing to ensure that outcomes can be repeated and explained. Performing side-by-side AI and human testing, with independent third-party judges challenging the accuracy of results and any bias they may contain, will also be crucial.
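As a rough illustration of the counterfactual idea, the sketch below flips only a protected attribute and measures how often the model’s decision changes. The model, feature layout and synthetic data are assumptions made for the example, not a reference to any specific product or toolkit.

```python
# Hedged sketch of counterfactual testing on a toy model and synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

def counterfactual_flip_rate(model, X, protected_col):
    """Share of cases whose prediction changes when only the protected
    attribute is flipped (0 <-> 1). A fair model keeps this near zero."""
    X = np.asarray(X, dtype=float)
    X_cf = X.copy()
    X_cf[:, protected_col] = 1 - X_cf[:, protected_col]
    return float((model.predict(X) != model.predict(X_cf)).mean())

# Tiny demonstration (column 0 is the protected attribute).
rng = np.random.default_rng(1)
X = np.column_stack([rng.integers(0, 2, 1000), rng.normal(size=1000)])
y = (X[:, 1] + 0.8 * X[:, 0] > 0).astype(int)   # labels leak the attribute
model = LogisticRegression().fit(X, y)
print(counterfactual_flip_rate(model, X, 0))     # noticeably above zero here
```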

Re-aligning cultural thinking across the organisation is another mission-critical task. Alongside educating employees that driving out bias is everyone’s mandate, diversifying the organisation’s software development community will help mitigate the ‘group-think’ mentality that introduces bias into AI systems.

Falling in love with AI – again

Realising the opportunities offered by AI means that the way systems are developed, deployed and used must be carefully managed to prevent the perpetuation of human or societal biases. That includes thinking carefully about the fairness of any underlying data attributes, ensuring everyone has access to the tools and processes needed to counter unfair bias, and boosting the diversity of the AI community. On occasion, that may mean crowd-sourcing opinions from the widest possible range of interested participants to address unconscious bias and assure broad acceptance and uptake.

Understanding how bias in data works is a critical first step to controlling bias in AI systems, which is why some forward-thinking organisations are using new tools to tackle it. For example, LinkedIn developed and open-sourced LIFT (the LinkedIn Fairness Toolkit) to identify bias in job search algorithms, and has now joined forces with IBM and Accenture to build toolkits that combat bias in business. Similarly, an app that enables rapid genetic testing of wastewater for traces of COVID-19 is an example of an innovative AI system that can detect a coronavirus hotspot without any community bias. Once COVID-19 is detected, hospitals and first responders can gear up for an increased caseload.

With the right tools and processes, and the determination to make fairness a design characteristic of every aspect of algorithm and AI system development, there’s every indication that the love affair with AI is set to flourish once again.

About the author

Agata Nowakowska is Area Vice President EMEA at Skillsoft, where she leads field operations, including enterprise, small and mid-market, and channel sales/strategic alliances across Europe, the Middle East and Africa.

