How technology can enhance diversity and inclusion


By Marina Ruggieri, IEEE fellow and professor of telecommunications at University of Roma “Tor Vergata”

If I were a painter, I would consider a canvas as a neutral means to transfer my ideas and emotions into a painting.

When we discuss the neutrality of technology, we are referring to the idea that the technology is the canvas, and technologists and scientists are the painters. We have the role, competence, and responsibility to make the canvas become artwork.

A blank canvas

The beauty of technology is its intrinsic neutrality. Technology has huge potential to either benefit or damage people and the environment, and the teams working on it have the opportunity to shape it so that it delivers benefit. This is a fascinating opportunity, open to all across a broad breadth of diversity and inclusiveness. The more diverse and inclusive the technology team and the application developers are, the more beneficial the result will be. New technologies that are fair and unbiased are the best ally when it comes to designing an attractive and lasting future for humans and the planet.

The power of AI

One example of neutral technology is artificial intelligence (AI). This particular technology often generates mixed feelings, and many individuals strongly distrust it. What worries a lot of people is the prospect of an uncontrolled evolution of the algorithms that could harm humans. The troubles caused to the protagonist of the film “2001: A Space Odyssey” by a superintelligent computer are hard to forget, for people of all generations.

AI algorithms need to be evaluated in the most objective way – and what is more objective than a truly diverse and inclusive team of developers? Diversity and inclusiveness could be a strong guideline for algorithm evaluation from both the performance and ethical viewpoints. AI is going to be increasingly pervasive and, if properly developed and tested, is destined to become an extremely beneficial pillar for the sustainability of the planet. AI is just one of many technology frameworks where diversity and inclusiveness can improve the results, create a powerful osmosis between means and goals, and produce a natural outcome.

Collaboration is key

A deep trust in technology and its neutrality is very important to appreciate the role AI can play in creating an even environment. For example, when daily activities in either professional or social domains are widely supported by the neutrality of a key technology such as AI, diversity and inclusion can be more easily guaranteed. Neutral technology is the “guardian” of even opportunities, which can contribute to various domains in the most diverse way. Only an unbalanced trust in technology could result in a lack of diversity and inclusion.

As humans, we are intrinsically non-linear, and our unconscious bias is aligned with natural behaviour. The rational approach of AI-based algorithms is an effective means to balance the human non-linear trait in various application domains, such as recruiting procedures. The best outcome is teamwork between humans and AI, as this combines rational and non-linear behaviour. The rational, data-driven approach identifies the shortlist of solutions to a given task or issue, while the non-linear contribution helps identify the spike often associated with an ingenious solution.

Any technology that can extract knowledge from data and allow the proper use of that knowledge is an ally to diversity and inclusion. Going forward, we can expect technologies with broad coverage and highly reliable speed and latency to be utilised within the super-connected infrastructure.

About the author

Marina Ruggieri is an IEEE fellow and Full Professor of Telecommunications Engineering at the University of Roma “Tor Vergata”. She is co-founder and Chair of the Steering Board of the interdisciplinary Center for Teleinfrastructures (CTIF) at the University of Roma “Tor Vergata”. The Center focuses on the use of the Information and Communications Technology (ICT) for vertical applications (health, energy, cultural heritage, economics, law) by integrating terrestrial, air and space communications, computing, positioning and sensing.

The irony of artificial intelligence: Addressing inequality in AI


There is a cruel irony in the world of AI: the representation of gender identity, race, ethnicity and sexual orientation is itself ‘artificial’. It doesn’t reflect society, and this leads to unbalanced outcomes for the people the industry is intended to serve.

Diversity in the workplace is critical in providing a wide range of perspectives and lived experiences for the design and implementation of an AI system and removing bias from the equation.

Take women for example: the percentages of female AI PhD graduates and of tenure-track computer science (CS) faculty have remained low for more than a decade. Female graduates of AI PhD programs in North America have accounted for less than 18% of all PhD graduates on average, according to an annual survey from the Computing Research Association (CRA). Furthermore, women make up just 26% of data and AI positions in the workforce, according to a 2020 World Economic Forum report.

Then you look at race and the picture is even more concerning: just 2.4% of PhD graduates in the same survey were African American, compared with 45% white. Groups such as Women in Machine Learning and Black in AI have gone some way towards reducing the gap, but much more work is needed to encourage representation across the board. Moreover, these statistics do not cover those with learning disabilities, hidden disabilities or low incomes. We are just scratching the surface of inequality.

This problem lies beyond the remit of traditional recruitment; it starts with early STEM education that shows young women and girls how AI roles can impact their lives. It’s also the case that girls are dissuaded from STEM careers by the false belief that they don’t excel in those subjects.

The responsibility lies with women already in the field who can mentor and inspire the next generation of AI leaders. Companies are recognising that they need diversity embedded deep within their organisations to really achieve great things. Those women already in the industry need to stand up and be counted.

The problem is that women struggle to gain credibility and feel the need to ‘earn their stripes’ in a way their male counterparts do not. It also needs to be recognised that AI professionals do not all need to come from a computer science background; mathematical, ethical and business heads are required too.

From the words of those who have been there:

“Don’t be afraid to venture into an unfamiliar discipline to maximize your opportunity for impact. However, doing so effectively requires engaging with collaborators from other disciplines openly, constructively, and with respect: one needs to be willing to ask naive questions in order to learn.” – Daphne Koller, CEO and founder of Insitro.

As well as encouraging women to enter the world of AI, there is also much work to be done on retention of those staff. “Many women feel they are not treated the same as men in AI, and it is driving many of them out. Over half (58 percent) of all our women respondents said they left an employer due to different treatment of men and women.” (Deloitte). Pay and career path are the main areas where women receive unfavourable treatment compared to their male counterparts; that can be easily rectified. More education is needed on the career paths available too.

Typically, emerging professions hire from closed networks before they become mainstream. AI is at that stage. So, what’s the solution? We need to encourage women from non-tech backgrounds to enter the world of AI as well as encouraging girls into STEM subjects. To be truly diverse, the workforce needs to contain that blend of skills.

Artificial intelligence (AI) has become embedded in everyday life around the world, touching how we work, play, purchase and communicate. Whilst we agree that AI is largely technical, it also includes politics, labour, culture and capital. To understand AI operationalised, or in context, one needs to ask: what part of society is being improved by AI? For whom is this improvement being done? Who decides the remit of the AI that is rolled out to society?

Who is leading the way through this maze? Kate Crawford’s book ‘Atlas of AI’ gives an insightful account of how this sector is developing, whilst Deloitte has formed an academy to address inequality in AI (We and AI). There are also trailblazers such as Kay Firth-Butterfield, Beena Ammanath and Joy Buolamwini, who are inspiring the next generation through their work. It is these people we should look to for the answer to ‘what next?’.

It’s imperative that we get the next steps right in this field, that we encourage girls into AI careers, that we welcome a range of skills and that those who have succeeded show others the way. It’s critical that diversity becomes embedded in all organisations so that they can truly serve the population they are intended for, otherwise we are building a false world for the future.

About the author

After working in regulatory compliance and governance in the banking sector for the past 20 years, Sandra Mottoh is now also focusing her social enterprise ‘AI White Box’ on identifying the compliance gaps in the emerging AI sector. As a Black woman, she is also passionately campaigning to help more women enter the world of AI, particularly those from financially challenged and ethnic minority backgrounds.

Recommended Event: 13/06/22-15/06/22: CogX Festival


Be part of the world’s biggest and most inclusive Festival of AI, Blockchain, the Metaverse and all the latest Transformational Technologies.

CogX are excited to offer members of WeAreTechWomen a free pass to this year’s CogX.

This year’s festival takes place 13th-15th June in King’s Cross, London. Across the festival we are showcasing 200+ speakers, including Steven Bartlett, host of Europe’s biggest podcast ‘The Diary of a CEO’; Tami Bhaumik, Vice President of Civility and Partnerships at Roblox; and Pushmeet Kohli, Head of AI for Science at DeepMind, to name but a few.

Our good friends at CogX have given us a free code which allows you to claim a Standard Pass for the 3 days for free (RRP £695).

You can claim your free pass by selecting the standard festival pass here and entering your code: STANDARDWTW


How AI can help create a more inclusive recruitment process


Article by Jen Shorten, Board Member at Clu

Organisations are facing record lows for talent retention. This issue largely stems from the fact that the focus on employee experience and success has been next to non-existent in most companies until very recently.

But this isn’t the only cause of the great resignation. Around 80% of employees don’t actually have the right skills for their jobs, which leads to poor motivation, productivity and job satisfaction. Layer that against a reported recruitment success rate of just under 50%, and a recent Checkster study finding that 78% of people lie on their CVs and cover letters when applying for roles, and the evidence suggests the current recruitment process is no longer accurate, fit for purpose or supportive of talent retention.

Many sectors have proven that AI adds demonstrable value and can deliver exceptional results, so why isn’t the world of recruitment following suit in more effective ways? We have some thoughts.

AI in recruitment currently focuses on increasing the breadth of applications received and automating CV sifting using keyword-based algorithms. But AI could make far more significant advancements, particularly in the way that we unearth talent, understand it, and set it up for success in our organisations.
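To make concrete why keyword-based sifting is such a blunt instrument, here is a minimal sketch of the approach in Python. The keywords, CV text and scoring scheme are invented for illustration and are not taken from any real product:

```python
# Minimal sketch of keyword-based CV sifting: score a CV by the fraction
# of role keywords that appear anywhere in its text. Keywords are invented.
ROLE_KEYWORDS = {"python", "sql", "machine learning", "stakeholder"}

def keyword_score(cv_text: str, keywords=ROLE_KEYWORDS) -> float:
    """Fraction of role keywords found as substrings of the CV text."""
    text = cv_text.lower()
    hits = sum(1 for kw in keywords if kw in text)
    return hits / len(keywords)

cv = "Built machine learning pipelines in Python; presented to stakeholders."
score = keyword_score(cv)  # 3 of 4 keywords matched -> 0.75
```

A candidate who wrote “ML” instead of “machine learning” would score lower for purely lexical reasons, which is exactly the kind of arbitrary filtering that anchors towards more of the same.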

The traditional process of scanning and filtering out CVs is rife with bias and anchors towards more of the same, instead of finding something different. According to one intensive academic study, minority applicants who ‘whitened’ their CVs were more than twice as likely to receive calls for interviews, and it did not matter whether the organisations they applied to claimed to value diversity or not.

AI can play a critical role in the democratisation of the job market and mitigating these entrenched biases.

It can be used to safeguard the inclusion of candidates from non-traditional backgrounds throughout the hiring process without needing to hide or erase their identity in the process. AI can help us understand candidates better and see them for what they can do, not just what they have or haven’t done previously to determine their relevancy for a role. It can also improve and enhance the way candidates and organisations find one another, improving the accuracy, experience and outcomes of recruitment for everyone.

Platforms such as Clu will be pivotal in these advancements. To reduce bias, Clu removes CVs from the hiring process and helps organisations inclusively source and assess each candidate in consistent, transparent ways, creating a more level playing field. Over our years of research into building high-performing, diverse workforces, we have found that hiring strategies which focus on both soft and technical skills, and which encourage collaboration between team members in the hiring process, are the key to incremental improvement.

We believe it is important that AI is anchored to auditable, diverse and actionable data. Within our context, this ensures that recommendations to candidates and organisations are equitable, ethical and make it easier to mitigate and challenge unconscious and implicit bias. Our algorithm captures significantly more data sets across demographics, skills and target remuneration than other solutions in the Rec-tech market. We can use these new data points to allow organisations to track their performance in safeguarding holistic diversity and inclusion over time. Something a traditional CV-sifting algorithm could never do.

But AI driving better performance is only one part of the puzzle; it must also help with learning to be truly effective. We have spoken to over 1,000 hiring managers during our R&D phase, and far too many couldn’t confidently dissect the soft and technical skills needed for a role. We can use AI not only to inform how skills can be matched to fill gaps and enhance teams, but also to help hiring and recruitment managers create more attainable and actionable skills profiles to match candidates against.

Learning from the way people identify themselves, demographically and cognitively, can help elevate those who are systemically overlooked and excluded by standard recruitment, so that they not only fill the critical skills gaps currently facing organisations but also better fulfil their potential.

We can improve the accuracy of recruitment by understanding more about what people from certain locations, with certain career and experience histories, are most interested in. We can see where organisations are over-indexing in their own cultures and start recommending how they can better balance themselves to create the more holistic, innovative and productive environments that indirectly impact recruitment and talent attraction.

An example that brings this to life is the story of Michael Rosenbaum, a Harvard economics and law fellow who pitched a new idea to the White House around the turn of the millennium. He argued that underserved urban populations were an untapped source of talent for software development. The White House dismissed his idea, so he went on to found Catalyte, which specialises in getting non-traditional talent into coding jobs and has gone on to have huge success.

We want to inspire policy makers and highlight how technology offers hope and positive conversation changes. We want to show that talent shines through in unexpected places when you look at what people can do.

About the author

Jen has worked in software development for over 20 years, specialising in large-scale data integration and IT modernisation projects. She is on the board of Clu, a category-defining inclusive recruitment software company. This cloud-based tool enriches every touchpoint of the hiring process, helping organisations hire the talent they want and build their teams.


What are AI-driven hiring assessments and how do they work?


By Dr Gema Ruiz de Huydobro, IO Psychology consultant at HireVue

As anyone who has gone through it recently will well know, looking for a new job is practically full-time work in itself.

Every application requires a significant time investment to tailor your CV and cover letter before completing any specific requirements for the company in question (such as a multiple-choice questionnaire or aptitude test). If you’re then invited to an initial interview, you will need to spend even more time preparing for a short conversation, which too often provides limited opportunity to showcase your full potential.

Meanwhile, organisations continue to drown in endless piles of CVs and struggle to differentiate the deluge of applications. For instance, a financial services company opening new banking centers internationally has been receiving nearly 100,000 job applications each month for well over a year. Such high volumes of applications have led many companies to invest in both on-demand video interviewing and pre-hire assessment tests driven by artificial intelligence (AI). This helps both recruiters and candidates save time and begins to democratise the hiring process by offering all candidates an equal opportunity to be considered for the role. However, if you’re invited to a video interview or AI-driven assessment for the first time, it’s perfectly natural to feel a little apprehensive about how it will work.

Is there really anything to be nervous about?

The role of AI in recruitment

AI in recruitment typically involves machine-learning algorithms which analyse your answers to questions and provide insights to help hiring managers make more informed decisions at an early stage in the interview process. Rather than submitting a CV and cover letter, you may be invited to complete a short video interview and/or games-based assessment to apply for the role. We’ll explain these in more detail later.

Following your assessment, the AI algorithm (also called an assessment model) helps the recruiter to make a more informed decision by evaluating your submission and measuring data points which are scientifically proven to be predictive of successful performance in the specific job role for which you’re applying. A pool of candidates, ranked by their fit for the role, is presented to the recruiter, who then reviews the recommended shortlist, and decides which to progress to the next round.
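As an illustration only (not HireVue’s actual model), the scoring-and-ranking step described above might look like the following sketch, with invented feature names, weights and candidates:

```python
# Hypothetical sketch of an assessment model: each candidate gets a
# weighted score over job-relevant features, then the pool is ranked
# by fit and a shortlist is surfaced to the recruiter.
from dataclasses import dataclass, field

@dataclass
class Candidate:
    name: str
    features: dict = field(default_factory=dict)

# Invented weights standing in for "data points predictive of success".
WEIGHTS = {"problem_solving": 0.5, "communication": 0.3, "domain_knowledge": 0.2}

def fit_score(c: Candidate) -> float:
    """Weighted sum of job-relevant feature scores (each in 0..1)."""
    return sum(WEIGHTS[f] * c.features.get(f, 0.0) for f in WEIGHTS)

def shortlist(pool, k=2):
    """Rank the pool by fit and return the top k for recruiter review."""
    return sorted(pool, key=fit_score, reverse=True)[:k]

pool = [
    Candidate("A", {"problem_solving": 0.9, "communication": 0.6, "domain_knowledge": 0.4}),
    Candidate("B", {"problem_solving": 0.5, "communication": 0.9, "domain_knowledge": 0.9}),
    Candidate("C", {"problem_solving": 0.3, "communication": 0.4, "domain_knowledge": 0.5}),
]
top = shortlist(pool)  # ranked shortlist presented to the recruiter
```

Note that the model only ranks; as the article says, the recruiter reviews the recommended shortlist and decides who progresses.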

Sounding straightforward so far? Now let’s look at how video and games-based assessments work in more detail…

Video interviews

If you’re invited to take an AI-powered video interview, you will likely receive instructions via email with a link to enter the interview, which you can complete at a time and place convenient to you, from either a computer or smartphone. Most AI-powered video interviews take 20 to 30 minutes to complete. It’s important to note that this video interview may only be the first step in your interviewing process, as those who are successful are very likely to meet one or more people face-to-face later in the process.

You should expect a format similar to a traditional interview, in which you are asked a series of questions. The questions will be relevant to success in the role you are applying for, and every candidate will be asked the same set of questions. This creates a much fairer process for all candidates and helps to minimise bias.

While it’s natural for most people to feel a little self-conscious on camera, keep in mind that you’re unlikely to lose out on the job simply because you don’t smile enough, don’t make enough eye contact, or blink too much. When building assessments, only data features related to success in the role are leveraged. Physical appearance and other demographic data that have nothing to do with the role are not considered – on the contrary, assessments should always be tested for adverse impact to ensure nobody is disadvantaged in this regard.
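The adverse-impact testing mentioned here can be illustrated with the “four-fifths rule” used in US selection guidelines: a group is flagged if its selection rate falls below 80% of the highest group’s rate. The group names and counts below are fabricated for the sketch:

```python
# Sketch of an adverse-impact check (the "four-fifths rule"): any group
# whose selection rate is below 80% of the best-performing group's rate
# is flagged for review. Group labels and counts are invented.
def selection_rates(outcomes):
    """outcomes: {group: (selected, total)} -> {group: selection rate}"""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def adverse_impact(outcomes, threshold=0.8):
    """Return {group: True if flagged for adverse impact}."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: r / top < threshold for g, r in rates.items()}

outcomes = {"group_a": (40, 100), "group_b": (25, 100)}
flags = adverse_impact(outcomes)  # group_b flagged: 0.25/0.40 = 0.625 < 0.8
```

A flagged group does not prove the assessment is biased, but it is the trigger for the deeper review the article describes.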

Game-based assessments 

Games are another popular part of AI-powered assessments, as they are scientifically proven to measure cognitive skills, including problem-solving and working memory, as well as job-relevant personality traits. Their accuracy is similar to (and often higher than) that of longer, more repetitive psychometric tests.

Again, you will receive an email with a link to enter the assessment, and it can be completed on your smartphone from any location and typically takes just 15 minutes. Safe to say, a game-based assessment is typically more fun than a traditional psychometric test containing hundreds of fill-in-the-circle questions!

Game-based assessments will also be tailored to the role you’re applying for. For example, both entry- and mid-level jobs require cognitive skills, but a manager may need to demonstrate more sophisticated organisational and problem-solving skills.

Preparing for success

Regardless of the type of interview, preparation is key. If you’re invited to a video interview with an AI assessment, take the time to practise potential interview questions, or take advantage of the practice tests offered with most games-based assessments. This will ensure you aren’t taken by surprise and can showcase your full potential.

It’s also a good idea to create a calm environment where you won’t be disturbed. These types of interviews provide an opportunity to choose a time and location that suits you, so you won’t need to worry about taking time off work, the bus being late or getting lost en route!

Finally, take a deep breath and remember that the premise of this technology is to give everyone an equal opportunity to be recognised as a great candidate for a job, regardless of background, gender or race. Given the increased awareness of the importance of hiring impartially, businesses have more need than ever to ensure they’re reflecting this in the interview process. Good luck!

About the author

Dr Gema Ruiz de Huydobro is an accomplished business psychologist with over ten years’ experience in both academic and business fields. In her current role as I-O Psychology Consultant at HireVue, Gema is responsible for designing scientifically validated pre-hire assessments to enable organisations to identify high-quality candidates while minimising bias in the selection process.


Understanding the UK AI labour market: 2020


This report presents findings from research into the UK Artificial Intelligence (AI) labour market, carried out by Ipsos MORI, in association with Perspective Economics, Warwick University, and the Queen’s University Belfast, on behalf of the Department for Digital, Culture, Media & Sport (DCMS).

The research aimed to create a set of recommendations on policy areas that the government and industry should focus on, to bridge skills gaps in the sector. It involved:

  • A survey of 118 firms and public sector organisations, including those whose core business was developing AI-led products or services, and others in wider sectors developing or using AI tools, technologies or techniques to improve their products, services or internal processes;
  • A total of 50 in-depth interviews with firms, public sector organisations, recruitment agencies, employees and aspiring employees, universities and course providers;
  • Analysis of AI job postings on the Burning Glass Technologies database; and
  • A roundtable discussion with stakeholders from across government, the private and public sector to validate the findings.


Beyond bias: Is it time to fall in love with AI systems again?

Agata Nowakowska, Area Vice President EMEA at Skillsoft

A couple of years ago, AI seemed the ideal solution for remedying those temporary lapses in good judgement, unforced errors and gut instinct impulsiveness that are part and parcel of the human condition.

As AI adoption accelerated, it seemed as though high stakes decisions were increasingly being delegated to AI systems. Suddenly, AI algorithms were determining everything from someone’s suitability for a job role, to whether or not they’d be selected for a university course, or if their application for credit was accepted.

Before long, however, a growing awareness of bias in AI systems began to raise some disquieting concerns. The resulting soul-searching led to heated debates about whether organisations using AI systems were actually trading fairness for consistency, or compromising social justice in their pursuit of streamlined efficiencies.

Suddenly, it seemed like we had all fallen out of love with AI.

The problem with technology bias

AI systems are versatile, accurate, reliable, autonomic (self-correcting), fast and affordable, which is why some 64% of today’s businesses now depend on them for productivity growth. But in the rush to take advantage of the benefits this technology confers, organisations have learned the hard way that it’s a risky business proposition to depend exclusively on AI systems if bias isn’t checked.

The problem is that AI applications can be just as unfair, prejudiced, or discriminatory as the humans who create them – an issue not helped by the fact that the development community is still, by and large, predominantly composed of white males. And when AI systems make mistakes, the scale and scope of their operation means the consequences impact a significant number of people.

Awareness is growing that the machine learning (ML) used to train AI systems represents a key entry point for bias. For example, the data sets selected for ML training can create an echo chamber that amplifies bias. Similarly, historical data used to train AI systems will reflect the prevalent thinking and cultural mores of an era.
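A toy example makes the echo-chamber effect concrete: a model that simply learns per-group hire rates from skewed historical records will faithfully reproduce the skew. The records below are fabricated for the sketch:

```python
# Toy illustration of how historical training data carries bias forward.
# Past hiring favoured group "X"; a model that learns the historical
# hire rate per group reproduces that skew in its future predictions.
from collections import defaultdict

history = [
    {"group": "X", "hired": 1}, {"group": "X", "hired": 1},
    {"group": "X", "hired": 1}, {"group": "X", "hired": 0},
    {"group": "Y", "hired": 1}, {"group": "Y", "hired": 0},
    {"group": "Y", "hired": 0}, {"group": "Y", "hired": 0},
]

def learned_hire_rate(records):
    """Per-group hire rate a naive model would learn from the records."""
    totals, hires = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        hires[r["group"]] += r["hired"]
    return {g: hires[g] / totals[g] for g in totals}

rates = learned_hire_rate(history)  # {"X": 0.75, "Y": 0.25}
```

Nothing in the algorithm is prejudiced; the 3:1 disparity comes entirely from the data it was shown, which is exactly why dataset selection is a key entry point for bias.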

With experience comes wisdom

AI systems have proved highly successful at tackling a variety of complex workplace and public safety challenges, such as handling hazardous situations with AI-guided robots that fight fires, disable bombs or clean up chemical spills. A more recent example is helping millions of people access digital banking services during the coronavirus pandemic.

To successfully harness the potential of AI, however, organisations will need to ensure that their AI systems do not repeat the mistakes of the past. In other words, applying the lessons learned about the disruptive impact of bias to achieve fairer and more equitable outcomes for all.

For example, back in 2015 Amazon was forced to ditch an automated AI recruitment screening tool that favoured men for technical jobs and penalised women. The in-house programme had been developed using data accumulated from CVs submitted over the past decade, which reflected the dominance of men across the tech industry. The firm now uses a much watered-down version of the recruiting engine to help with some rudimentary chores like culling duplicate candidate profiles from databases.

One Tech World Virtual Conference 2022

01 APRIL 2022

Book your place now at what is becoming the largest virtual conference for women in technology in 2022


Restoring trust in algorithms and AI systems: the top steps to take

Delivering on the promise of AI starts with the creation of fairness metrics and measuring fairness at each step of the technology development process: design, coding, testing, feedback, analysis, reporting and risk mitigation.

This should include creating design models that test AI systems and challenge results, using approaches like counterfactual testing to ensure that outcomes can be repeated and explained. Performing side-by-side AI and human testing, and using third-party external judges to challenge accuracy and possible result biases, will also be crucial.
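Counterfactual testing can be sketched very simply: vary a protected attribute on an otherwise identical applicant and check that the decision does not change. The stand-in model and attribute names here are invented for illustration:

```python
# Sketch of a counterfactual test: flip a protected attribute on an
# otherwise identical applicant and verify the decision is unchanged.
def model(applicant: dict) -> bool:
    # Stand-in decision model: this one ignores the protected attribute,
    # so it should pass the test. A real model under audit might not.
    return applicant["skills_score"] >= 0.6

def counterfactual_consistent(applicant: dict, attribute: str, values) -> bool:
    """True if the decision is identical across all values of `attribute`."""
    decisions = set()
    for v in values:
        variant = {**applicant, attribute: v}  # same applicant, attribute flipped
        decisions.add(model(variant))
    return len(decisions) == 1

applicant = {"skills_score": 0.7, "gender": "F"}
ok = counterfactual_consistent(applicant, "gender", ["F", "M", "X"])
```

A model that fails this check is using the protected attribute (directly or via a proxy) to decide, which is precisely what the fairness metrics above are meant to surface.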

Re-aligning cultural thinking across the organisation is another mission-critical task. Alongside educating employees that driving out bias is everyone’s mandate, diversifying the organisation’s software development community will mitigate against the ‘group-think’ mentality that introduces bias into AI systems.

Falling in love with AI – again

Realising the opportunities offered by AI means that the way systems are developed, deployed, and used must be carefully managed to prevent the perpetuation of human or societal biases. That includes thinking carefully about the fairness of any underlying data attributes used, ensuring everyone has access to the tools and processes needed to counter unfair bias, and boosting the diversity of the AI community. On occasion that may include crowd-sourcing opinions from the widest number of interested participants to address unconscious bias and assure mass acceptance and uptake.

Understanding how bias in data works is a critical first step to controlling bias in AI systems. This is why some forward-thinking organisations are utilising new tools to tackle bias. For example, LinkedIn is using LIFT, an open-source toolkit, to identify bias in job search algorithms. It has now joined forces with IBM and Accenture to build toolkits that combat bias in business. Similarly, an app that enables rapid testing of wastewater for COVID-19 is an example of an innovative AI system that can detect a coronavirus hotspot without any community bias. Once COVID-19 is detected, hospitals and first responders can gear up for an increased caseload.

Armed with the right tools, processes and determination to ensure fairness is a design characteristic built into every aspect of algorithm and AI system development, there’s every indication that the love affair with AI is set to flourish once again.

About the author

Agata Nowakowska is Area Vice President EMEA at Skillsoft, where she leads the field operations, to include enterprise and small & mid-market, as well as channel sales/strategic alliances across Europe, Middle East and Africa.


International Day of Women & Girls in Science: Diversity in will equal fairness out


Louise Lunn, Vice President, Global Analytics Delivery, FICO discusses the critical need for diversity in the people behind data analytics on International Day of Women and Girls in Science

The United Nations International Day of Women and Girls in Science throws a spotlight on achieving full and equal access and participation for women and girls in science, citing the importance of this goal in global development. The UN has highlighted that over the past decades the global community has made great strides in inspiring and engaging women and girls in science, yet there is still much work to be done.

This is the case in financial services as much as many other sectors. One critical area is artificial intelligence (AI) and how it affects financial decisioning.

There’s no contesting the far-reaching growth of AI. From loan applications to fraud prevention, AI and machine learning are entrenched in our lives and have a say in the important decisions we make, as well as those made for us. To make fair and accurate assessments, AI software needs to reflect the people it scrutinises, and the best way to achieve this is to have a diverse team at work.

Of course, gone are the days of gender discrimination in financial decisions – it is mandated that risk cannot be measured based on gender. But to achieve the equality that is expected of financial services providers, it is crucial to make it easier for girls and women to enter the sector and further their careers, because one of the real challenges in AI is fighting the bias that can be coded into the models themselves.

All AI models are trained on datasets, and these datasets frequently have a level of bias coded into them. In fact, FICO Chief Analytics Officer Scott Zoldi says, “All data is biased.” It’s up to the data scientists to correct for this, and that is why it is so important to achieve more diverse teams building AI.

Recognising that we need diversity in innovation and teams is the first step. In many cases, AI learns from data generated by human actions. Left unchecked by data scientists, algorithms can mimic our biases, conscious or not. However, we can mitigate those biases by including people across race, gender, sexual orientation, age, and economic conditions to challenge our own thinking. By bringing in people with different thoughts and approaches to our own, analytics teams will see a quick improvement in their code.
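Once a team has spotted an imbalance, one concrete mitigation is to reweight the training data so that an over-represented group no longer dominates what the model learns. The sketch below is a generic illustration of that idea (the helper name and toy data are assumptions, not part of any specific toolkit):

```python
# Illustrative sketch: weight each training sample inversely to its
# group's frequency, so every group contributes equally overall.

from collections import Counter

def group_weights(groups):
    """Return one weight per sample; each group's weights sum to n / n_groups."""
    counts = Counter(groups)
    n, n_groups = len(groups), len(counts)
    # A sample in group g gets weight n / (n_groups * counts[g]).
    return [n / (n_groups * counts[g]) for g in groups]

# Toy data: group A is four times as common as group B.
groups = ["A"] * 80 + ["B"] * 20
weights = group_weights(groups)

print(sum(w for g, w in zip(groups, weights) if g == "A"))  # 50.0
print(sum(w for g, w in zip(groups, weights) if g == "B"))  # 50.0
```

Most machine-learning libraries accept per-sample weights at training time, so a correction like this can be applied without changing the model itself.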

For any girl or woman thinking about data science as a career route, the opportunities are immense. Data scientists are a new breed of analytical expert, responsible for collecting, analysing, and interpreting extremely large amounts of data. The role is an offshoot of several traditional technical roles, combining business domain expertise with the skills of mathematicians, scientists, statisticians, and computer professionals, all of which fit into the disciplines of a data scientist.

The insights that data scientists uncover should be used to drive business decisions and take actions intended to achieve business goals. While executives are smart individuals, they may not be well-versed in all the tools, techniques, and algorithms available to a data scientist (e.g., statistical analysis, machine learning, artificial intelligence, and so on). Part of the data scientist’s role is to translate business needs into algorithms.

The magic is also in the data scientist’s ability to deliver the results in an understandable, compelling, and insightful way, while using appropriate language and jargon level for their audience. In addition, results should always be related back to the business goals that spawned the project in the first place.

I would argue that if you accomplish diversity in your teams, you’ll make improved AI because your teams will be better at spotting bias and correcting for it. Different backgrounds drive more creative thinking, and more diverse teams tend to improve a company’s ability to solve problems. That’s just as true in data science as it is in other fields.

About the author

Louise Lunn leads FICO’s Global Analytics Delivery organisation. Based in the UK, Louise oversees teams of data scientists worldwide who develop custom analytics solutions and exploratory analytics projects for the world’s top banks, as well as retailers, telecommunications firms, insurance companies and other businesses.

'What does it take for computers to understand human language?' with Anna Grüebler & Mia Chang, AWS

Listen to the latest She Talks Tech podcast on 'What does it take for computers to understand human language?' with Anna Grüebler & Mia Chang, AWS

This episode is the fourth of an AWS special series of the She Talks Tech podcast.

The objective of these podcasts is to demonstrate how Cloud technology is helping transform many industries, like retail, financial services and even sports. But we also want to hear from the women behind these stories who are enabling these transformations, to understand what they do day to day and how they got into working in technology.

In this episode, Anna, Artificial Intelligence Specialist Solutions Architect at AWS, and Mia, Machine Learning Specialist Solutions Architect at AWS, share their story about natural language processing using artificial intelligence.


‘She Talks Tech’ brings you stories, lessons and tips from some of the most inspirational women (and men!) in tech.

From robotics and drones, to fintech, neurodiversity and coronavirus apps; these incredible speakers are opening up to give us the latest information on tech in 2021.

Vanessa Valleley OBE, founder of WeAreTheCity and WeAreTechWomen, brings you this latest resource to help you rise to the top of the tech industry. Women make up just 17 per cent of the tech industry in the UK, and we want to inspire that to change.

WeAreTechWomen are delighted to bring this very inspiring first series to wherever you normally listen to podcasts!

So subscribe, rate the podcast and give it a 5-star review – and keep listening every Wednesday morning for a new episode of ‘She Talks Tech’.

Produced by Pineapple Audio Production.

How the four-day week, AI and adult retraining will transform the workplace

The confluence of a global event and the awakening of many people to technologies that already existed has created a paradigm shift for employers and employees alike.

Prior to the COVID-19 pandemic, virtual technologies like Zoom and Microsoft Teams had barely penetrated many workforces let alone the general public consciousness.

Everything now seems different. In many ways, of course, lives, careers, and industries have fundamentally changed in the wake of the global pandemic. But much as with that shift in awareness of already existing technologies, employers have figured out something that some employees, and many researchers, have known for more than a decade.

People are just as productive while working from home as they are at work. Call it managerial enlightenment. Call it managerial self-interested enlightenment – less rent to pay for the office space! Call it an opportunity to rebalance work-life balance. Call it organizational and human evolution. All probably carry some truth.

In many societies, compressed work weeks or four-day work weeks are already the norm. Economic research provides evidence that working shorter work weeks does not reduce employee performance or overall economic productivity. In fact, there is reason to believe that well-rested and healthy employees actually perform better than over-worked, stressed employees.

It is unfortunate that it took a global pandemic for many employers to learn these lessons, but it is important nonetheless that they have learned these lessons.

Despite the lack of awareness that many of us had toward the ongoing Fourth Industrial Revolution, that revolution has been underway for decades and will accelerate in the coming decade. Automation through machines such as robots, artificial intelligence, and machine learning has existed, in some cases, for over half a century. It is the more rapid advances in artificial intelligence – the human programming of machines to respond to situations or events in ways that humans would – and in machine learning, where machines no longer need human programmers but learn on their own, that will accelerate trends that only seem to have appeared in the wake of a once-in-a-century global pandemic.

Highly repetitive work with standard physical movements that requires little autonomous decision-making will continue to see displacement due to technology. Jobs and industries that now dominate service-based economies will contract, if not outright disappear. Companies will no longer need dozens of accountants or financial analysts when a data scientist who creates algorithms and just a handful of accountants or financial analysts can do the same amount of work with fewer mistakes. Marketing, human resources, supply chain management, and most other job categories found in modern companies will shrink.

Even the retail sector will see a technological transformation. Fast-food or cafeteria-style restaurants will operate with very little human work, as machines will take orders, prepare and cook meals, deliver orders, take payments, and clean tables, utensils, dishes, and glassware. Pharmacies will use automated robots and kiosk-style ordering terminals to serve customers. Convenience stores will use robots to stock shelves and kiosks for checkout. While this sounds like some far-off, futuristic scenario, consider that these types of retail settings already exist in countries like China.

Even call centers, once the exemplar of outsourcing to foreign countries, will be replaced by technology. When you visit a website and a dialogue box appears at the bottom of your screen, that is typically an artificial intelligence bot. By the end of this decade, virtual bots will have conversations with you on the telephone. The development of hologram bots means that you might not even know that the person you FaceTime with is actually an artificial intelligence bot.

All of this might scare you and make you wonder if your job will even exist in ten years. It might not. So what can you do to ensure that you have a long, vibrant career?

First, adopt a lifelong learning mindset. Recall that automation occurs for jobs that have highly repetitive functions and little independent decision-making. Seek out opportunities to up-skill, which might mean returning to formal education settings or seeking out certificate programs. Second, learn to thrive through failure. Yes, learn to fail. Find safe opportunities to try new things at which you will fail. Take cooking lessons. Try to build a piece of furniture. Attempt to paint. Put yourself into uncomfortable situations and learn to adapt. You will find that your ability to problem solve and take calculated risks will benefit you across your life. Finally, get healthy. The next decade will feel stressful. Your ability to cope with stress will benefit you. Exercise, sleep well, take your holidays, recharge your mental and physical batteries, and try your best to take breaks from your smartphone – which creates stress by even being on your person.

Do these steps guarantee a successful career as the Fourth Industrial Revolution unfolds? No. However, they will help you adapt to a rapidly changing future.

About the author

Anthony Wheeler is Dean of Business Administration and Professor of Management at Widener University, and co-author, with M. Ronald Buckley, of HR without people? Industrial evolution in the age of automation, AI and Machine Learning.