
How AI can help create a more inclusive recruitment process


Article by Jen Shorten, Board Member at Clu

Organisations are facing record lows for talent retention. This issue largely stems from the fact that the focus on employee experience and success has been next to non-existent in most companies until very recently.

But this isn’t the only cause of the Great Resignation. Around 80% of employees don’t actually have the right skills for their jobs, which leads to poor motivation, productivity, and job satisfaction. Layer that against a reported recruitment success rate of just under 50%, and a recent Checkster study finding that 78% of people lie on their CVs and cover letters when applying for roles, and the evidence suggests that the current recruitment process is no longer accurate, fit for purpose, or supportive of talent retention.

Many sectors have proven that AI adds demonstrable value and can deliver exceptional results, so why isn’t the world of recruitment following suit in more effective ways? We have some thoughts.

AI in recruitment currently focuses on increasing the breadth of applications received and automating CV sifting using keyword-based algorithms. But this is not where AI can make its most significant advancements; those lie in the way we unearth talent, understand it, and set it up for success in our organisations.

The traditional process of scanning and filtering out CVs is rife with bias and anchors towards more of the same, instead of finding something different. According to one intensive academic study, minority applicants who ‘whitened’ their CVs were more than twice as likely to receive calls for interviews, and it did not matter whether the organisations they applied to claimed to value diversity or not.

AI can play a critical role in the democratisation of the job market and mitigating these entrenched biases.

It can be used to safeguard the inclusion of candidates from non-traditional backgrounds throughout the hiring process, without requiring them to hide or erase their identity along the way. AI can help us understand candidates better, seeing them for what they can do rather than judging their relevancy for a role solely on what they have or haven’t done previously. It can also improve the way candidates and organisations find one another, improving the accuracy, experience and outcomes of recruitment for everyone.

Platforms such as Clu will be pivotal in these advancements. To reduce bias, Clu removes CVs from the hiring process and helps organisations inclusively source and assess each candidate in consistent and transparent ways, creating a more level playing field. Over our years of research into building high-performing and diverse workforces, we have found that the key to incremental improvement is implementing hiring strategies that focus on both soft and technical skills, and that encourage collaboration between team members in the hiring process.

We believe it is important that AI is anchored to auditable, diverse and actionable data. In our context, this ensures that recommendations to candidates and organisations are equitable and ethical, and it makes it easier to mitigate and challenge unconscious and implicit bias. Our algorithm captures significantly more data sets across demographics, skills and target remuneration than other solutions in the rec-tech market. We can use these new data points to let organisations track their performance in safeguarding holistic diversity and inclusion over time, something a traditional CV-sifting algorithm could never do.

But AI driving better performance is only one part of the puzzle; it must also help with learning to be truly effective. We spoke to over 1,000 hiring managers during our R&D phase, and far too many couldn’t confidently dissect the soft and technical skills needed for a role. AI can not only inform how skills are matched to fill gaps and enhance teams; it can also help hiring and recruitment managers create more attainable and actionable skills profiles to match candidates against.

Learning from the way people identify themselves, demographically and cognitively, can help elevate those who are systemically overlooked and excluded by standard recruitment, both to fill the critical skills gaps currently facing organisations and to better fulfil their potential.

We can improve the accuracy of recruitment by understanding more about what people from certain locations, with certain career and experience histories, are most interested in. We can see where organisations are over-indexing in their own cultures and start recommending how they can better balance themselves to create the more holistic, innovative, and productive environments that indirectly impact recruitment and talent attraction.

An example that brings this to life: around the turn of the millennium, Harvard economics and law fellow Michael Rosenbaum shared a new idea with the White House. He argued that underserved, urban populations were an untapped source of talent for software development. The White House dismissed his idea, so he went on to found Catalyte, which specialises in getting non-traditional talent into coding jobs and has gone on to have huge success.

We want to inspire policy makers and highlight how technology offers hope and positive conversation changes. We want to show that talent shines through in unexpected places when you look at what people can do.

About the author

Jen has worked in software development for over 20 years, specialising in large-scale data integration and IT modernisation projects. She is on the Board of Clu, a category-defining inclusive recruitment software company. The cloud-based tool enriches every touchpoint of the hiring process, helping organisations hire the talent they want and build their teams.

The irony of artificial intelligence: Addressing inequality in AI


There is a cruel irony in the world of AI: its representation of gender identity, race and ethnicity, and sexual orientation is itself ‘artificial’. It doesn’t reflect society, and this leads to unbalanced outcomes for the very people the industry is intended to serve.

Diversity in the workplace is critical in providing a wide range of perspectives and lived experiences for the design and implementation of an AI system and removing bias from the equation.

Take women, for example: the percentage of female AI PhD graduates and tenure-track computer science (CS) faculty has remained low for more than a decade. Female graduates of AI PhD programs in North America have accounted for less than 18% of all PhD graduates on average, according to an annual survey from the Computing Research Association (CRA). Furthermore, women make up just 26% of data and AI positions in the workforce, according to a 2020 World Economic Forum report.

Then you look at race, and the picture is even more concerning: just 2.4% of PhD graduates in the same survey were African American, compared to 45% white. Groups such as Women in Machine Learning and Black in AI have gone some way to reducing the gap, but much more work is needed in this area to encourage representation across the board. Moreover, these statistics do not cover people with learning disabilities, hidden disabilities or low incomes. We are only scratching the surface of inequality.

This problem lies beyond the remit of traditional recruitment; it starts with early STEM education that shows young women and girls how AI roles can impact their lives. It’s also the case that girls are dissuaded from STEM careers by the false belief that they don’t excel in those subjects.

The responsibility lies with women already in the field who can mentor and inspire the next generation of AI leaders. Companies are recognising that they need diversity embedded deep within their organisations to really achieve great things. Those women already in the industry need to stand up and be counted.

The problem is that women struggle to gain credibility and feel the need to ‘earn their stripes’ compared to their male counterparts. It also needs to be recognised that AI professionals do not all need to come from a computer science background; mathematical, ethical and business heads are required too.

From the words of those who have been there:

“Don’t be afraid to venture into an unfamiliar discipline to maximize your opportunity for impact.”

“However, doing so effectively requires engaging with collaborators from other disciplines openly, constructively, and with respect: one needs to be willing to ask naive questions in order to learn.” (Daphne Koller, CEO and Founder, Insitro)

As well as encouraging women to enter the world of AI, there is also much work to be done on retention of those staff. “Many women feel they are not treated the same as men in AI, and it is driving many of them out. Over half (58 percent) of all our women respondents said they left an employer due to different treatment of men and women.” (Deloitte). Pay and career path are the main areas where women receive unfavourable treatment compared to their male counterparts; that can be easily rectified. More education is needed on the career paths available too.

Typically, emerging professions hire from closed networks before they become mainstream. AI is at that stage. So, what’s the solution? We need to encourage women from non-tech backgrounds to enter the world of AI as well as encouraging girls into STEM subjects. To be truly diverse, the workforce needs to contain that blend of skills.

Artificial intelligence (AI) has become embedded in everyday life around the world, touching how we work, play, purchase and communicate. Whilst we agree that AI is largely technical, it also involves politics, labour, culture and capital. To understand AI operationalised, or in context, one needs to ask: What part of society is being improved by AI? For whom is this improvement being done? Who decides the remit of the AI that is rolled out to society?

Who is leading the way through this maze? Kate Crawford’s book ‘Atlas of AI’ gives an insightful account of how this sector is developing, whilst Deloitte has formed an academy to address inequality in AI (We and AI). There are also trailblazers such as Kay Firth-Butterfield, Beena Ammanath and Joy Buolamwini who are inspiring the next generation through their work. It is these people we should look to for the answer to ‘what next?’.

It’s imperative that we get the next steps right in this field, that we encourage girls into AI careers, that we welcome a range of skills and that those who have succeeded show others the way. It’s critical that diversity becomes embedded in all organisations so that they can truly serve the population they are intended for, otherwise we are building a false world for the future.

About the author

After working in Regulatory Compliance and Governance in the banking sector for the past 20 years, Sandra Mottoh is now also focusing her social enterprise ‘AI White Box’ on identifying the compliance gaps in the emerging AI sector. As a Black woman, she is also passionately campaigning to help more women enter the world of AI, particularly those from financially challenged and ethnic minority backgrounds.

Beyond bias: Is it time to fall in love with AI systems again?

Agata Nowakowska, Area Vice President EMEA at Skillsoft

A couple of years ago, AI seemed the ideal solution for remedying those temporary lapses in good judgement, unforced errors and gut instinct impulsiveness that are part and parcel of the human condition.

As AI adoption accelerated, it seemed as though high stakes decisions were increasingly being delegated to AI systems. Suddenly, AI algorithms were determining everything from someone’s suitability for a job role, to whether or not they’d be selected for a university course, or if their application for credit was accepted.

Before long, however, a growing awareness of bias in AI systems began to raise some disquieting concerns. The resulting soul searching led to heated debates about whether organisations using AI systems were actually trading fairness for consistency, or compromising social justice in their pursuit of streamlined efficiencies.

Suddenly, it seemed like we had all fallen out of love with AI.

The problem with technology bias

AI systems are versatile, accurate, reliable, autonomic (self-correcting), fast and affordable, which is why some 64% of today’s businesses now depend on them for productivity growth. But in the rush to take advantage of the benefits this technology confers, organisations have learned the hard way that it’s a risky business proposition to depend exclusively on AI systems if bias isn’t checked.

The problem is that AI applications can be just as unfair, prejudiced, or discriminatory as the humans who create them, an issue not helped by the fact that the development community is still, by and large, predominantly composed of white men. And when AI systems make mistakes, the scale and scope of their operation means the consequences impact a significant number of people.

Awareness is growing that the machine learning (ML) used to train AI systems represents a key entry point for bias. For example, the data sets selected for ML training can create an echo chamber that amplifies bias. Similarly, historical data used to train AI systems will reflect the prevalent thinking and cultural mores of an era.
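As an illustration of what auditing that entry point might look like in practice, here is a minimal, hypothetical sketch (the records and field names are invented for illustration) that measures how positive outcomes in historical training data differ by group, before any model is trained on it:

```python
from collections import Counter

def positive_rate_by_group(records, group_key, label_key):
    """Return the share of positive labels within each demographic group.

    A large gap between groups in historical data is a warning sign that
    a model trained on that data will reproduce the same imbalance.
    """
    totals, positives = Counter(), Counter()
    for row in records:
        group = row[group_key]
        totals[group] += 1
        if row[label_key]:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical historical hiring records: 'hired' reflects past decisions.
history = [
    {"group": "A", "hired": True},  {"group": "A", "hired": True},
    {"group": "A", "hired": False},
    {"group": "B", "hired": True},  {"group": "B", "hired": False},
    {"group": "B", "hired": False},
]
rates = positive_rate_by_group(history, "group", "hired")
# Group A was hired at twice the rate of group B — worth investigating
# before this data is fed to any learning system.
```

A check like this costs a few lines yet surfaces exactly the kind of echo-chamber effect the paragraph above describes.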

With experience comes wisdom

AI systems have proved highly successful at tackling a variety of complex workplace and public safety challenges – whether that is handling hazardous situations using AI-guided robots to fight fires, disable bombs or clean up chemical skills. A more recent example is helping millions of people access digital banking services during the coronavirus pandemic.

To successfully harness the potential of AI, however, organisations will need to ensure that their AI systems do not repeat the mistakes of the past. In other words, they must apply the lessons learned about the disruptive impact of bias to achieve fairer and more equitable outcomes for all.

For example, back in 2015 Amazon was forced to ditch an automated AI recruitment screening tool that favoured men for technical jobs and penalised women. The in-house programme had been developed using data accumulated from CVs submitted over the past decade, which reflected the dominance of men across the tech industry. The firm now uses a much watered-down version of the recruiting engine to help with some rudimentary chores like culling duplicate candidate profiles from databases.

Restoring trust in algorithms and AI systems: the top steps to take

Delivering on the promise of AI starts with the creation of fairness metrics and measuring fairness at each step of the technology development process: design, coding, testing, feedback, analysis, reporting and risk mitigation.

This should include creating design models that test AI systems and challenge results, using approaches like counterfactual testing to ensure that outcomes can be repeated and explained. Performing side-by-side AI and human testing, using third party external judges to challenge the accuracy and possible results biases will also be crucial.
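As a rough sketch of what such metrics could look like in code (the toy models and field names below are invented for illustration, not taken from any vendor’s toolkit), a demographic-parity gap and a simple counterfactual test can each be computed in a handful of lines:

```python
def demographic_parity_gap(predictions, groups):
    """Largest gap in positive-outcome rate between groups (0 = parity)."""
    rates = {}
    for g in set(groups):
        outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return max(rates.values()) - min(rates.values())

def counterfactual_flip_rate(model, candidates, attr, value_a, value_b):
    """Share of candidates whose outcome changes when only the protected
    attribute is swapped; any value above zero needs explaining."""
    flips = sum(
        model({**c, attr: value_a}) != model({**c, attr: value_b})
        for c in candidates
    )
    return flips / len(candidates)

def fair(candidate):      # decides on the test score alone
    return candidate["score"] > 5

def biased(candidate):    # improperly also looks at gender
    return candidate["score"] > 5 or candidate["gender"] == "m"

pool = [{"score": s, "gender": g} for s in (3, 7) for g in ("m", "f")]
# The fair model never flips when gender is swapped; the biased one does,
# and the flip rate quantifies exactly how often.
```

Metrics like these are what make “challenging results” a repeatable test rather than a one-off review.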

Re-aligning cultural thinking across the organisation is another mission-critical task. Alongside educating employees that driving out bias is everyone’s mandate, diversifying the organisation’s software development community will help guard against the ‘group-think’ mentality that introduces bias into AI systems.

Falling in love with AI – again

Realising the opportunities offered by AI means that the way systems are developed, deployed, and used must be carefully managed to prevent the perpetuation of human or societal biases. That includes thinking carefully about the fairness of any underlying data attributes used, ensuring everyone has access to the tools and processes needed to counter unfair bias, and boosting the diversity of the AI community. On occasion that may include crowd-sourcing opinions from the widest number of interested participants to address unconscious bias and assure mass acceptance and uptake.

Understanding how bias in data works is a critical first step to controlling bias in AI systems, which is why some forward-thinking organisations are utilising new tools to tackle it. For example, LinkedIn is using LiFT, an open-source toolkit, to identify bias in job search algorithms, and has joined forces with IBM and Accenture to build toolkits that combat bias in business. Similarly, an app that enables rapid DNA testing of wastewater for COVID-19 is an example of an innovative AI system that can detect a coronavirus hotspot without any community bias. Once COVID-19 is detected, hospitals and first responders can gear up for an increased caseload.

Armed with the right tools, processes and determination to ensure fairness is a design characteristic built into every aspect of algorithm and AI system development, there’s every indication that the love affair with AI is set to flourish once again.

About the author

Agata Nowakowska is Area Vice President EMEA at Skillsoft, where she leads field operations, including enterprise and small and mid-market sales, as well as channel sales and strategic alliances across Europe, the Middle East and Africa.

‘What does it take for computers to understand human language?’ with Anna Grüebler & Mia Chang, AWS

Listen to the latest She Talks Tech podcast on 'What does it take for computers to understand human language?' with Anna Grüebler & Mia Chang, AWS


This episode is the fourth of an AWS special series of the She Talks Tech podcast.

The objective of these podcasts is to demonstrate how Cloud technology is helping transform many industries like Retail, Financial Services or even Sports. But we also want to hear from the women behind these stories who are enabling these transformations to understand what they do day to day and how they got into working in technology.

In this episode, Anna, Artificial Intelligence Specialist Solutions Architect, AWS, and Mia, Machine Learning Specialist Solutions Architect, AWS, will share with you their story about Natural Language Processing using Artificial intelligence.


‘She Talks Tech’ brings you stories, lessons and tips from some of the most inspirational women (and men!) in tech.

From robotics and drones, to fintech, neurodiversity and coronavirus apps; these incredible speakers are opening up to give us the latest information on tech in 2021.

Vanessa Valleley OBE, founder of WeAreTheCity and WeAreTechWomen, brings you this latest resource to help you rise to the top of the tech industry. Women make up just 17 per cent of the tech industry in the UK, and we want to inspire that to change.

WeAreTechWomen are delighted to bring this very inspiring first series to wherever you normally listen to podcasts!

So subscribe, rate the podcast and give it a 5-star review – and keep listening every Wednesday morning for a new episode of ‘She Talks Tech’.

Produced by Pineapple Audio Production.

Discover more from our She Talks Tech podcast.


How the 4-day week, AI, and Adult-retraining will transform the workplace

The confluence of a global event and the awakening by many people to the existence of technologies has created a paradigm shift for employers and employees alike.

Prior to the COVID-19 pandemic, virtual technologies like Zoom and Microsoft Teams had barely penetrated many workforces let alone the general public consciousness.

It seems that everything now is different. In many ways, of course, lives, careers, and industries have fundamentally changed in the wake of the global pandemic. However, much as awareness of already existing technologies changed, employers have figured out something that some employees, and many researchers, have known for more than a decade.

People are just as productive while working from home as they are at work. Call it managerial enlightenment. Call it managerial self-interested enlightenment – less rent to pay for the office space! Call it an opportunity to rebalance work-life balance. Call it organizational and human evolution. All probably carry some truth.

In many societies, compressed work weeks or four-day work weeks are already the norm. Economic research provides evidence that working shorter work weeks does not reduce employee performance or overall economic productivity. In fact, there is reason to believe that well-rested and healthy employees actually perform better than over-worked, stressed employees.

It is unfortunate that it took a global pandemic for many employers to learn these lessons, but it is important nonetheless that they have learned these lessons.

Despite the lack of awareness many of us had of the ongoing Fourth Industrial Revolution, that revolution has been underway for decades and will accelerate in the coming decade. Automation through machines such as robots, artificial intelligence, and machine learning has existed, in some cases, for over half a century. It is the more rapid advances in artificial intelligence (the human programming of machines to respond to situations or events in ways that humans would) and in machine learning (where machines no longer need human programmers but learn on their own) that will accelerate trends which seem to have only appeared in the wake of the global pandemic.

Highly repetitive work with standard physical movements that requires little autonomous decision-making will continue to see displacement due to technology. Jobs and industries that now dominate service-based economies will contract, if not outright disappear. Companies will no longer need dozens of accountants or financial analysts when a data scientist who creates algorithms, plus just a handful of accountants or financial analysts, can do the same amount of work with fewer mistakes. Marketing, human resources, supply chain management, and most other job categories found in modern companies will shrink.

Even the retail sector will see a technological transformation. Fast-food or cafeteria-style restaurants will operate with very little human work, as machines will take orders, prepare and cook meals, deliver orders, take payments, and clean tables, utensils, dishes, and glassware. Pharmacies will use automated robots and kiosk-style ordering terminals to serve customers. Convenience stores will use robots to stock shelves and kiosks for checkout. While this sounds like some far-off, futuristic scenario, consider that these types of retail settings already exist in countries like China.

Even call centers, once the exemplar of outsourcing to foreign countries, will be replaced by technology. When you visit a website and a dialogue box appears at the bottom of your screen, that is an artificial intelligence bot. By the end of this decade, virtual bots will have conversations with you on the telephone. The development of hologram bots means that you might not even know that the person you FaceTime with is actually an artificial intelligence bot.

All of this might scare you and make you wonder if your job will even exist in ten years. It might not. So what can you do to ensure that you have a long, vibrant career?

First, adopt a lifelong learning mindset. Recall that automation occurs for jobs that have highly repetitive functions and little independent decision-making. Seek out opportunities to up-skill, which might mean returning to formal education settings or seeking out certificate programs. Second, learn to thrive through failure. Yes, learn to fail. Find safe opportunities to try new things at which you will fail. Take cooking lessons. Try to build a piece of furniture. Attempt to paint. Put yourself into uncomfortable situations and learn to adapt. You will find that your ability to problem solve and take calculated risks will benefit you across your life. Finally, get healthy. The next decade will feel stressful. Your ability to cope with stress will benefit you. Exercise, sleep well, take your holidays, recharge your mental and physical batteries, and try your best to take breaks from your smartphone – which creates stress by even being on your person.

Do these steps guarantee a successful career in the Fourth Industrial Revolution? No. However, they will help you adapt to a rapidly changing future.

About the author

Anthony Wheeler is Dean of Business Administration, Professor of Management at Widener University and co-author with M. Ronald Buckley of HR without people? Industrial evolution in the age of automation, AI and Machine Learning.

‘The Art of the Possible with AI in business and how to get into the industry’ with Sarah Burnett, Emergence Partners - She Talks Tech podcast

Listen to our latest She Talks Tech podcast on 'The Art of the Possible with AI in business and how to get into the industry' with Sarah Burnett, Emergence Partners


Today we hear from a renowned industry analyst, Sarah Burnett, Non-Executive Director at Emergence Partners, BCS book author and AI Accelerator founder.

In this episode of She Talks Tech, Sarah discusses how businesses are opening up new possibilities with AI and gives an overview of the skills landscape to help budding enthusiasts break into this expanding field of technology.

With conversations on solutions that AI can provide for businesses and resources for AI skills development, you’re in for a real treat.

If you want to find out more about Sarah – you can connect with her on LinkedIn.





Priorities for the National AI Strategy | BCS, The Chartered Institute for IT

As we recover from the pandemic and adapt to being outside of the EU, the UK needs to harness the power of digital technologies to deliver step-changes in resilience, productivity, innovation, and sustainable growth across the private and public sectors.

Artificial Intelligence (AI) will be at the heart of driving this transformation, provided we have the right national strategy in place, building on and amplifying the government’s earlier National Data Strategy and taking account of the current AI Council Roadmap.

The AI strategy will need to consider a number of complex and interconnected issues. These include how AI is going to be used to deliver the right societal outcomes for UK citizens, particularly how it enables the UK to be more inclusive, accelerates sustainable decarbonisation and improves prosperity for all.



Explaining AI: It doesn’t have to be rocket science


By Teodora Gavrilut, COO at Creatopy

As a society, we’re brilliant at curbing industries before they get out of control and cause a problem. Look at tobacco and asbestos. Both nipped in the bud.

Except of course, we didn’t get ahead of those problems. And while there are constant discussions around how we will ensure AI is controllable, regulated and used to build a better world rather than a worse one, I doubt these will be any more fruitful than discussions and campaigns in other scenarios. The EU has taken steps towards regulation, but of course this has no impact on the wider global landscape, though it could certainly put some boundaries in for EU businesses and developers.

More serious, perhaps, will be consumer fears around AI once people realise what it does. Advertisers were happy for consumers to stay in the dark about how their data was used, and now that Apple is asking consumers for permission to carry on, they’re saying no.

However, AI is now a reality: it is already everywhere, and in future it will be in every part of our lives, from home to work to our favourite local takeaway. As individuals and businesses we can ensure that we are not part of the problems facing AI in some very simple ways.


Communicating clearly what AI does, and investing in ‘explainable AI’, is a key part of creating a sustainable AI-driven future. Explainable AI is a programme able to reach complicated conclusions from data and execute actions, learning from past experience and evolving to become more efficient and useful, all while being able to show how it got there. Previously, you might feed some data into an AI and something would come out, with no way to tell how it arrived at the conclusion. Recognising that this creates opportunities for prejudice and mistakes, and, from a developer’s point of view, lots of complicated reprogramming, many businesses are developing AI that is transparent about how it arrives at any given conclusion.

It makes sense both from a transparency and trust point of view, and from a simple logistics one: if an AI goes wrong, we want to be able to see where that happened. Otherwise, disasters like the Microsoft chatbot that became racist after just 16 hours of learning conversational skills from Twitter are not only inevitable, but hard to rectify within the existing software.

The point of AI is to make lots of small decisions for us and execute those actions in order to maximise outcomes. We must therefore always be able to understand why and how an AI reached any point in its programming.
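One hypothetical way to honour that principle is to prefer models whose decisions decompose into inspectable parts. The sketch below (the weights, feature names and threshold are all invented for illustration) scores an applicant with a transparent linear model and reports exactly how much each feature contributed to the outcome:

```python
def explain_linear_score(weights, features, threshold=0.0):
    """Score an input with a linear model and return the decision together
    with each feature's signed contribution, so the path from inputs to
    output is fully auditable."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    decision = "accept" if score >= threshold else "reject"
    return decision, score, contributions

# Hypothetical weights and applicant, for illustration only.
weights = {"years_experience": 0.5, "skills_match": 1.2, "test_score": 0.8}
applicant = {"years_experience": 4, "skills_match": 0.9, "test_score": 0.7}
decision, score, why = explain_linear_score(weights, applicant, threshold=2.0)
# 'why' lists exactly how much each feature pushed the score up or down,
# so a contested decision can be traced back to specific inputs.
```

Deep models need heavier machinery to produce comparable explanations, but the contract is the same: every decision should come with an account of how it was reached.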


AI drives value for both businesses and customers. Take Google Ads - this technology has enabled advertisers to target audiences rather than keywords via complex understanding of how individuals use Google. Not only do consumers googling for answers find the responses they need, and products or services relevant to their query, but businesses see far better returns on search advertising spend.

Our technology at Creatopy can automate, scale and test creative and enhance it for optimisation of results. This means that consumers see creative they’re more likely to enjoy, and businesses enjoy greater ROI on their campaigns - something especially crucial for businesses looking to achieve maximum goals within a set budget in an increasingly content-driven landscape.

AI is one of our greatest innovations within computing. However, as businesses we must ensure we take our responsibilities with such advanced technology seriously. By both explaining to customers how it helps them, and ensuring we can explain it ourselves, we can mitigate against concerns and AI errors as we move into a machine-led future.

About the author

Teodora Gavriluț is the Chief Operating Officer of Creatopy. With a solid marketing background of over 15 years, she handles the company’s internal affairs. By combining analytical thinking with creative processes, Teodora believes she’s fortunate to have built a career out of her love for technology and passion for marketing.


WeAreTechWomen covers the latest female centric news stories from around the world, focusing on women in technology, careers and current affairs. You can find all the latest gender news here

Don’t forget, you can also follow us via our social media channels for the latest up-to-date gender news. Click to follow us on Twitter, Facebook and YouTube

Artificial intelligence. Human head outline with circuit board inside, AI

The economic case for building trust in AI

AI represents one of the biggest opportunities in terms of economic growth, both now and in the future.

It can be used to augment human performance and capabilities, bolster online and digital security, develop unique and customised products and services, and create completely new opportunities we would never have conceived of on our own.

But AI is an opportunity that is open to everyone, meaning that if we underuse it, we may get left behind and lose opportunities to competitors, on both a micro- and macro-economic scale. It’s a balancing act, though, because there is also the risk of overuse or misuse, which can feed fears, misinformation, misplaced concerns or excessive reaction, leading us as a society to use AI technologies below their full potential.

KTN is a UK organisation dedicated to facilitating collaboration between industry and academia to accelerate R&D and innovation in a range of sectors. In AI, KTN’s mission has been to accelerate the adoption of AI in the UK public and private sectors. This mission, led by Dr Caroline Chibelushi, one of the UK’s leading experts in AI ethics, involves creating partnerships between the suppliers and consumers of AI. Through these partnerships, a general sense of fear, a lack of trust, and concerns about the ethical issues around deploying AI technologies and services have been uncovered.

Research commissioned by Microsoft (2019) confirms that AI adoption in the UK is exceptionally slow, no doubt due to these concerns, and as such the UK is at risk of compromising its competitiveness. According to one estimate, if this current rate of adoption continues, the UK economy is at risk of missing out on £315 billion by 2035 (UKTech).

But are these fears unfounded? More than 180 human biases have been defined and classified, and although one of AI’s biggest strengths is removing human error from simple processes, AI systems were also built by humans - and we have exported many of our biases to today’s AI systems completely unknowingly. As the old IT saying goes: Garbage In, Garbage Out.

A great and recent example of this phenomenon is the reckless judgements made by the 2020 A-level results algorithm. The algorithm was programmed to use a ranking measure, yet ranking measures are not robust, nor are they recommended by statisticians. In this case, not only was the processing flawed from the start, but testing also found the accuracy of the model to be low (50% - 60%) due to other data problems. Yet the algorithm was allowed to generate results, which proved to be biased against pupils from disadvantaged areas, who were hit disproportionately hard (NS Tech).

Another way we unknowingly build bias into our AI systems is through training data. Many AI systems are currently trained on images from ImageNet, yet two thirds of the images in ImageNet come from the western world (USA, England, Spain, Italy, Australia) (Shankar et al., 2017). These AI tools are nonetheless applied, more often than not, to people of different races and cultures - both within the western world itself and across the rest of the world.
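A toy calculation, with figures invented purely for illustration, shows how a single headline accuracy number can hide exactly this kind of skew when one group dominates the evaluation data:

```python
# Invented figures: 1 = a correct prediction, 0 = an error. Western images
# dominate the evaluation set, as they do in ImageNet.
def accuracy(results):
    return sum(results) / len(results)

western = [1] * 90 + [0] * 10    # 90% accuracy on 100 western images
non_western = [1] * 5 + [0] * 5  # 50% accuracy on 10 other images

print(round(accuracy(western + non_western), 2))  # 0.86 - looks healthy
print(accuracy(non_western))                      # 0.5 - the hidden failure
```

Reporting accuracy per group, rather than in aggregate, is one simple way to surface the problem before a tool is deployed.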

Likewise, AI models and algorithms have been widely adopted in a variety of decision-making scenarios, such as criminal justice, traffic control, financial loans, and medical diagnosis. This emerging proliferation of AI-based automatic decision-making systems is introducing potential risks in many aspects, including safety and fairness.

The bias in AI systems is a by-product of cognitive biases in humans. To stop it, AI approaches require detailed ethical investigation to understand their positive and negative impacts on people and society, just as much as their commercial benefits and efficiency gains.

So, in short, no, the fears are not unfounded. With AI so fundamental to our everyday lives, we need to ensure we uncover and eliminate every bias we possibly can, and the way to do that is through inclusion at every step. Safe and secure development and adoption of AI may reduce the fear, increase trust and accelerate adoption, because responsible and explainable AI allows users to understand why and how the algorithms reached their conclusions.

What AI needs is a framework that helps to identify tools that are biased and unsafe. This framework would extract information about the team which formulated the idea, the developers, the data used to train the system, where and how the system was tested, the audiences it was tested on, and whether information about these processes is transparent and available.
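One way to picture such a framework is as a structured record that must be complete before a system ships. The sketch below is hypothetical - the field names are my own, not a published standard - but it captures the questions the framework asks:

```python
from dataclasses import dataclass, asdict

@dataclass
class AIAuditRecord:
    # Fields mirror the framework's questions about provenance and testing
    idea_team: list
    developers: list
    training_data_sources: list
    test_locations: list
    audiences_tested: list
    publicly_documented: bool = False

    def missing_fields(self):
        # Any empty field flags an undocumented part of the process
        return [name for name, value in asdict(self).items() if not value]

record = AIAuditRecord(
    idea_team=["product owner"], developers=["ml team"],
    training_data_sources=[], test_locations=["UK"], audiences_tested=[],
)
print(record.missing_fields())
# ['training_data_sources', 'audiences_tested', 'publicly_documented']
```

A buyer could demand that `missing_fields()` come back empty - the same gatekeeping role that trial results play in drug approval.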

But for this framework to be a success, it needs a level of rigour. For that we can look to the drug discovery process.

For a drug to be allowed onto the market, it goes through preclinical trials, animal trials and human clinical trials. AI will be so critical to our lives going forward, and will have so much power and potential to do damage, that it should go through a similar set of tests before getting to market.

In this context, preclinical trials would be the use of adversarial models, which are currently proving capable of reducing bias in AI models. Animal trials would be an investigation into how the data was collected, the type of data, the level of awareness of the types of biases that could be contained within it, how inclusive the data is, and whether it has been validated for biases.

The human clinical trial stage is literally that: human involvement, i.e. the teams involved in ideating, developing, validating and testing the AI system. The results of all three stages should be transparent, published and available for the AI consumer to examine before purchasing the system.

Compared to the US, China and Germany, UK government investment in AI is low. However, we lead in research and innovation, and therefore have a clear opportunity to be a world leader in the development and adoption of explainable AI. If we treat AI development with the rigour it deserves - the same rigour we expect from the pharmaceutical industry - we will be able to reduce fear, bias and unsafe AI. We will develop trust and allow smooth, fast adoption of AI in the UK.

Critically, AI is capable of reducing human bias in society, but only if we stop humans from exporting their biases to AI systems.

About the author

Dr. Caroline Chibelushi takes a vision and makes it reality through sound strategy development.

She intuitively sees the threads of opportunity that wind through an organization, brings them together into a coherent whole, helps others extend their thinking, and drives innovation into business for competitive advantage. Her contribution in AI includes leading an initiative to increase the number of women in AI. She is a founder and executive chair for UK Association for AI which promotes responsible, ethical and sustainable AI.


Will AI destroy the future of work?

As a society, we have always resisted and questioned any form of change at first.

We refused to believe that the Earth wasn’t flat, and that same instinct has percolated down to our rejection of modern advancements in technology for fear of societal change or unemployment. At every juncture, we have learnt one thing: change is inevitable and, in most cases, it brings significant progress to the world we live in.

Coined the 4th Industrial Revolution by the World Economic Forum, the present moment has come a long way: from machines, to rapid digitisation, and ultimately the age of the knowledge worker.

Contrary to popular belief, this rapid shift in the future of work will not be the end of the road for the human workforce. We are now observing a shift in requirements, concerning the human workforce. Gone are the days when jobs required a worker to just pick up an object and leave it elsewhere on the factory floor.

In the case of call centres, an increasing number of repetitive queries are being handled by AI. Book-keeping, manufacturing quality control and something even as complex as risk assessment have all been automated with the help of AI.

What does this mean for us and the future of work? We must bear in mind that in the aforementioned cases, all the AI does is remove repetitive and mundane tasks from an employee's scope of work. This pushes employees to use their time and effort for thinking, creating and innovating.

However, you don't have to take my word for it on blind faith. Enterprise Bot provides AI-powered solutions that enable businesses to respond to customer queries in real time. Over 60% of the inbound queries that flow into a contact centre are handled by our AI virtual assistant. While this automation often causes discomfort among the workforce and friction between workers and a company's automation goals, the point of view gradually evolves.

It was observed that over the course of three months, employees began to warm up to the idea, as it helped them do away with repetitive tasks and use their time for more pressing demands. Employee morale has surged, as they are now able to do much more than just initiate a password reset or process a payment refund. By being able to create a bigger impact in the workplace, employees have managed to grow and transition to elevated roles. The bots are capable of training themselves without human intervention, which empowers employees to think outside the box, as opposed to doing the same thing they've done for 20 straight years while expecting different results.

In the interest of moving forward, let's consider another hypothetical situation. Imagine yourself as a graduate, fresh out of college, who managed to land a job with an organisation. On joining, you need to be onboarded and trained. However, you haven't been given all the right tools and access codes yet. Your mail to the HR department has gone unanswered, as the executive is busy with other things. Your inbox is flooded with dozens of training manuals or videos and you are expected to be up to speed by the end of the month during the team Zoom call.

For organisations looking to push the boundaries of a favourable employee experience, the situation looks very different. EVA (Employee Virtual Assistant) is engineered to recognise you as a new employee and initiate the next few steps automatically. Skills are gauged and tools are provided before you even clock in on 'Day 1'. Furthermore, virtual coffee sessions are embedded into your and your teammates' calendars to help break the ice. EVA also functions as an intelligent virtual HR, which answers queries and guides you to the right person within the organisation in times of need.

Just like every major industrial transformation, as in the case of Henry Ford creating the first assembly line to manufacture automobiles or NASA placing the Hubble telescope in space, it all boils down to one simple fact: Placing the right technology and tools in the hands of your employees for efficiency and improved output.

Hence, it's safe to say that I'm excited about the future that awaits us all. By being able to excel in our work and focus on the areas where we make a difference, we can imagine a world where we no longer have to compile the same Excel sheets or complete the same task day in and day out. Instead, with hyper-automation (AI-powered automation), we can envision a dynamic work environment that opens up new paradigms of innovation and success.

About the author

Pranay Jain is the co-founder and CEO of Enterprise Bot. Originally from Pune, India, he studied business at Bentley University in the United States and developed a strong interest in information technology during his internship. His work with AI caught the attention of British multinational Barclays, and the bank invited him and his team - his now-wife and Enterprise Bot co-founder Ravina Mutha, and co-founder Sandeep Jayasankar - to a hackathon in Mumbai, tasking them with creating an effective chatbot for the bank. From there, he continued to grow the business and, in order to expand into Europe, joined F10 in Switzerland.

While the original plan had been to spend just five weeks in Switzerland, Pranay decided to relocate to Zurich and target European Union clients.

Pranay approaches Enterprise Bot’s AI software from the perspective of an outsider. Because he doesn’t code, he aims to make sure that the technology they provide is user-friendly for business clients who are unfamiliar with the most technical aspects of IT - “no-code” and “low-code” simplification. His outlook on simplifying processes is what has made Enterprise Bot’s software approachable and appealing to business clients.
