
Regardless of how we feel about Artificial Intelligence (AI), it’s already an established part of many businesses, and is growing rapidly.

Like many technologies, AI is not inherently “good” or “bad”; its effects reflect the intentions and actions of people who create and use it.

Here are three reasons to be optimistic about its future, along with why each should be tempered with caution.

AI Means More Emphasis on Data in Decision-making

AI works by examining large quantities of data, and applying mathematical and statistical rules to make decisions – like approving a loan – or evaluate the likelihood of certain events – like a customer enjoying a movie. The use of AI encourages objectivity in how those decisions and evaluations are made.
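As a toy illustration of the kind of "mathematical and statistical rule" involved (not taken from the article; every feature name, weight and threshold below is invented), a loan decision can be reduced to a weighted score over applicant data:

```python
# Toy sketch of a statistical decision rule for loan approval.
# All feature names, weights and the threshold are invented for illustration;
# in a real system the weights would be fitted from historical data.

def loan_score(income, debt, years_employed):
    # Combine applicant data into a single score.
    return 0.5 * (income / 10_000) - 0.8 * (debt / 10_000) + 0.3 * years_employed

def approve(applicant):
    # The "decision" is just a comparison of the score against a threshold.
    return loan_score(**applicant) >= 2.0

applicant = {"income": 60_000, "debt": 10_000, "years_employed": 4}
print(approve(applicant))  # True: score 3.4 clears the 2.0 threshold
```

The point is not the specific numbers but the shape: the data chosen, the weights, and the threshold together *are* the decision, which is what makes it inspectable.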

For example, HR responsibilities such as recruitment and promotions generally require some subjectivity. Getting decisions right can be hard, and tougher still if there are perceptions of unfairness against individuals or groups.

Using AI here means incorporating data from across (and possibly outside) an organisation to add objectivity to such decisions. AI can also demonstrate that decisions are blind to race and gender data. Such decisions will always involve human judgement and subjectivity; AI can increase the role of objective data and ensure any human influence remains transparent.

BUT . . . 

AI uses data, but people decide what that data should be and how it’s used. AI can’t by itself stop people using inappropriate data, or using relevant data inappropriately. That may be deliberate or – hopefully more likely – the result of human error.

For example, when creating an AI recruitment system for a company with a predominantly male workforce, some AI techniques will automatically reflect this skew. The result will be a recruitment system biased towards male applicants, unless the data is treated with techniques that correct for the skew.
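This skew can be seen in even the most naive model. In the sketch below (all numbers invented; a simple hire-rate-by-group estimate stands in for real AI techniques), a model built from a mostly male hiring history simply reproduces that history:

```python
# Toy sketch: a naive model built from a skewed hiring history.
# The data is invented: past hires in this company were mostly male.
past_hires = [{"gender": "M"}] * 70 + [{"gender": "F"}] * 10
past_rejections = [{"gender": "M"}] * 30 + [{"gender": "F"}] * 30

def naive_hire_rate(gender):
    # Estimate "probability of being hired" purely from historical frequency.
    hires = sum(1 for p in past_hires if p["gender"] == gender)
    rejections = sum(1 for p in past_rejections if p["gender"] == gender)
    return hires / (hires + rejections)

print(naive_hire_rate("M"))  # 0.7
print(naive_hire_rate("F"))  # 0.25
```

Nothing in the code is malicious: the bias comes entirely from the historical data it was given, which is why that data must be examined and treated before use.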

So, AI’s reliance on data can enable great steps forward in accuracy and fairness of business decisions, but can also achieve the opposite if not used with care, skill and good intentions.

AI Forces Rules to Be Established Up-front

A common word in AI conversations is “algorithm”: a set of mathematical, logical and statistical calculations that process data in an AI system to achieve a complicated result that previously only humans could manage. An algorithm is the AI designer’s interpretation of what a business wants the AI system to achieve.

However, AI doesn’t decide how a given business problem is solved – that’s the responsibility of AI designers and business people. The business decides what needs to be achieved, and describes how a human would do that in the form of business rules, usually complex and sometimes subjective. Introducing AI forces clarity on how choices and assessments are made to achieve a result.

The skill of the AI designer is choosing and configuring appropriate algorithms, selecting the data to use, and specifying how to use it. This can include powerful techniques to deal with subjectivity and ambiguity.

For example, HR will tell the AI designer how they evaluate candidates at interview and decide whether to hire, reject or further assess them. The AI designer then creates an algorithm that makes the same decision using data about these and past candidates. This is tested and adjusted until it performs at least as well as humans alone.
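The "tested and adjusted until it works at least as well as humans" step can be sketched very simply. Here (scores and labels invented for illustration) a single decision threshold is tuned so that the automated rule agrees with past human hire/reject decisions as often as possible:

```python
# Toy sketch: tune a decision rule to match past human decisions.
# Each pair is an invented candidate score and the human decision made.
past = [(3.1, "hire"), (2.7, "hire"), (2.9, "hire"),
        (1.2, "reject"), (1.8, "reject"), (2.2, "reject")]

def agreement(threshold):
    # Fraction of past cases where the rule matches the human decision.
    correct = sum(1 for score, label in past
                  if ("hire" if score >= threshold else "reject") == label)
    return correct / len(past)

# Try candidate thresholds and keep the one that best matches the humans.
best = max((t / 10 for t in range(10, 40)), key=agreement)
print(agreement(best))  # 1.0
```

Real systems tune far more than one number, but the principle is the same: the algorithm is adjusted against recorded human decisions until agreement is good enough.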

AI provides consistency in how complex processes and evaluations are performed. This can lead to better results and the ability to understand clearly how decisions were made. We expect this from regular business systems, but AI extends this to activities previously only done by people.

BUT . . .

The main problem, of course, arises when business rules are inappropriately translated into algorithms, generally through human error in the business knowledge supplied or in the AI design.

Another challenge is that algorithms can be too complex for individual decisions to be checked retrospectively. “Transparent AI” is an AI design approach that ensures it is always possible to understand how an individual AI decision was reached.

AI Shines a Light on Ethics, Fairness and Accountability

The third reason to be optimistic about AI in business is the flip side of the biggest concerns about it: ethics, fairness and accountability. In a world of #BLM, fake news and data privacy issues, many concerns about AI are symptoms of wider problems facing many businesses.

But if a business wants to reap the rewards of AI, it needs to be ready to answer questions about ethics, fairness and accountability sooner rather than later.

To introduce successful AI that grows revenues and reduces costs, a business will need to address issues around bias in its data, fairness in its business rules and other such considerations.

The other issue AI raises is governance: who in an organisation is responsible for the problems around ethics and fairness that may be uncovered, and how do those problems get addressed? As these questions concern technology and data, they may initially land with IT departments and CIOs.

But AI is likely to ask questions of businesses that go further.

BUT . . .

Businesses – especially big, successful ones – have thrived and survived since commerce first appeared without necessarily treating ethics and fairness as importantly as profit and growth.

Many do treat them as important, and over the years there have been many examples of once-acceptable business practices that are no longer tolerated.

AI, and the way it uses data, is a catalyst for discussion of some big issues around fairness in business. But whether the discussions lead to change is down to factors way beyond a piece of technology.

About the author

Was Rahman is an expert in the ethics of artificial intelligence, the CEO of AI Prescience and the author of AI and Machine Learning. See more at

WeAreTechWomen covers the latest female centric news stories from around the world, focusing on women in technology, careers and current affairs.
