
How can we break the biometric bias for gender equality?

Article by Cindy White, CMO at Mitek

Today’s expectation is that technology solutions are unbiased. Sexist AI algorithms and facial recognition technologies need to be a thing of the past.

Even the most advanced technologies today lack the intellect to be deliberately biased, so let’s not “feed the beast”.

Although women comprise half of the population and the majority of the world is composed of people of color, the development of biometrics technology has long been the province of white men, a situation that has lent itself to egregious bias.

For example, a 2019 investigation by The New York Times discovered that one widely used facial-recognition data set was estimated to be more than 75% male and more than 80% white. While much progress has been made in reducing bias in facial recognition technology, we’re still not there yet.

Digital access is a daily requirement and enables financial transactions, retail convenience, education, healthcare, and even dating. How can we better understand the challenges and work as a community to offer alternative digital solutions?

Defining biometric bias

Biometric systems analyse the physiological or behavioural traits of an individual for the purposes of identity verification and authentication. This is commonly done through facial recognition or fingerprint analysis, both of which rely on machine learning.

The problem of bias arises when the dataset used to train a biometric system’s machine learning models lacks equal representation of all archetypes. Biometric bias can be defined as a system performing inconsistently across groups, failing to reflect the demographic make-up of society.
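To make that inconsistency concrete, here is a minimal sketch of how a single acceptance threshold, calibrated on the majority group in the training data, can produce very different error rates for an underrepresented group. All scores, group labels, and the threshold below are made-up illustrative numbers, not measurements from any real system.

```python
# Illustrative sketch (made-up numbers): one acceptance threshold,
# tuned on the majority group, produces very different error rates
# for an underrepresented group.

def false_reject_rate(genuine_scores, threshold):
    """Fraction of genuine users whose match score falls below the threshold."""
    rejected = sum(1 for s in genuine_scores if s < threshold)
    return rejected / len(genuine_scores)

# Hypothetical match scores in [0, 1]; higher means a more confident match.
# Group A dominates the training data, so the model scores it well.
group_a_genuine = [0.70 + 0.01 * i for i in range(21)]  # 0.70 .. 0.90
# Group B is underrepresented, so the same model scores it lower.
group_b_genuine = [0.40 + 0.01 * i for i in range(21)]  # 0.40 .. 0.60

threshold = 0.65  # a single global threshold, calibrated on group A

frr_a = false_reject_rate(group_a_genuine, threshold)
frr_b = false_reject_rate(group_b_genuine, threshold)
print(f"Group A false rejection rate: {frr_a:.0%}")  # 0%
print(f"Group B false rejection rate: {frr_b:.0%}")  # 100%
```

The model is not “deciding” to exclude anyone: the same rule applied to differently-scored groups locks one of them out.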

Questioning the design process

It’s important to note that biometric systems themselves are not biased: they make no independent, intelligent decisions based on human values. Bias and inequality in biometric technologies are caused by a lack of diverse demographic data, and by bugs and inconsistencies in the algorithms.


Inclusion means equal access, but we aren’t there yet. We still have a long way to go, even with some of the world’s most widely adopted technologies. Collectively we have a responsibility to ensure digital identity technologies are truly inclusive. That means not misrepresenting the underrepresented through racist and sexist facial recognition and artificial intelligence (AI) algorithms, or apps and mobile devices that don’t consider women and people of colour.

Crackdown on ethical AI guidelines

Determining ‘what is right’ goes beyond creating accuracy benchmarks. We also need to create ethical guidelines. The UK recently launched its 10 Year National AI Strategy, while the EU is currently working through the proposal of the EU AI Act. However, we need to do more than theoretically talk about AI and its implications.

AI ethical guidelines would serve to solidify the rights and freedoms of individuals using, or subject to, data-driven biometric technologies. Until we define what is and is not an ethical use of biometric technology, no metric or benchmark exists to gauge the quality of that technology.

Putting these practices in place will be a step forward for gender equality. To be successful long-term, technology firms should prioritise digital access for everyone, including women. To start, they should look at their own workforces: the more women influencing these tools, the better gender bias will be tackled.

About the author

An international marketing executive with extensive experience across B2B and B2C, Cindy White has a proven record of innovation and leadership and a passion for building brands and product marketing. As Mitek Chief Marketing Officer, Cindy leads the company’s global marketing, brand, communications, product marketing, customer acquisition and partner programs.

Before joining Mitek, Cindy was Vice President of Marketing at FICO, where she developed a deep interest and expertise in fraud prevention. Previously, she was Director of Worldwide SMB Marketing for Microsoft, leading a global team chartered with the roll out of Office 365 and supporting the success of more than 85 million customers worldwide.



When will facial recognition understand the trans and non-binary communities?


Article by Cindy White, CMO, Mitek Systems

For those who identify as transgender or non-binary, human-computer interfaces are almost never built with them in mind.

Facial recognition software is a booming industry, set to be worth USD 8.5 billion by 2025. It’s found in everything from banking apps to airport security. However, instead of making life easier, these systems reinforce existing biases and discriminate against these groups.

To create a more fair and equal playing field in how artificial intelligence (AI) technologies impact our lives, we must listen, learn and act. As we wave goodbye to Pride Month, it’s high time to put our collective education on equality into practice in a world where digital access for all is a daily requirement.

Equal is equal. Why should some identities work better than others?

A lack of algorithmic inclusivity in technological design has generated digital exclusion through unintentional bias often learned by biometric systems.

Typically, facial recognition software determines a user’s gender simply by scanning their face and assigning a male or female label, based on previous data analysed and learned by the machine. Superficial features such as the amount of makeup on the face, or the shape of the jawline and cheekbones, put a person’s gender into a binary category. As a result, these systems are unable to properly identify non-binary and trans people and match them to their official identity with ease. This is exacerbated by the fact that one third of those who have transitioned do not have updated IDs or records. As it stands, most facial recognition tech can’t accommodate these differences.

This is not an issue isolated to one brand of technology: several researchers have shown how ineffective facial recognition technology is generally in recognising non-binary and transgender people, including Os Keyes’ paper on automatic gender recognition (AGR) systems.

The inability to identify people within these groups has consequences in the real world. It can lead to people being mistreated, unable to get approved for financial products and services or facing issues with the government or police due to misidentification. People who aren’t represented lose the ability to be acknowledged and fight for their freedoms and rights.

Inclusion starts with research and development

The key to solving this begins with noting that biometrics are not within themselves biased; they are not making any decisions based on human values. Bias and inequality in biometric technologies are caused by a lack of diverse demographic data, bugs, and inconsistencies in the algorithms. For example, if the training data primarily includes information related to just one demographic, the learning models will disproportionately focus on the characteristics of that demographic.
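The mechanism behind that disproportionate focus can be shown with a toy calculation: when one demographic supplies 95% of the training samples, any parameter a model fits to the pooled data sits close to that group’s characteristics. The feature values below are invented purely for illustration.

```python
# Illustrative sketch: when 95% of training samples come from one
# demographic, a fitted parameter (here simply the mean feature value)
# tracks the majority group. All values are invented for illustration.

majority_samples = [0.8] * 95  # hypothetical feature values, group 1
minority_samples = [0.2] * 5   # hypothetical feature values, group 2

pooled_mean = sum(majority_samples + minority_samples) / 100
print(round(pooled_mean, 2))  # 0.77 -- close to 0.8, far from 0.2
```

Real models fit far more than a single mean, but the pull toward the over-represented group works the same way at every parameter.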

To build equitable access for all communities using biometrics, we need to think through which biometric traits are appropriate for proving identity, and which layers of biometrics are most appropriate for proving the person is real. Organisations must think beyond the current framework.

Though technology cannot fully solve biases against minority groups, there are opportunities to create new technologies that may help address some of this discrimination. The key is to ensure that trans and non-binary people are involved in the design process. Researchers from the Universities of Michigan and Illinois conducted design sessions with 21 members of the community. They detailed four types of trans-inclusive technologies: those for changing bodies, changing appearances and/or gender expressions, safety, and finding resources. The result: centering trans people in the design process made the designs more inclusive.

Let’s listen

Success starts with listening before acting. As society strives for equality, safety, and fairness for everyone, it’s crucial that those who will benefit most from the technology changes are at the heart of the design process.

About the author

Passionate about leadership, with more than 25 years of experience in technology, Cindy has developed a reputation for building high-performing teams in complex and fast-paced environments. A native of South Africa, Cindy holds a degree in communications from the Chartered Institute of Marketing Management in Johannesburg.


WeAreTechWomen covers the latest female-centric news stories from around the world, focusing on women in technology, careers and current affairs. You can find all the latest gender news here.

Don’t forget, you can also follow us via our social media channels for the latest up-to-date gender news. Click to follow us on Twitter, Facebook and YouTube



Why the future of identity should lead with diversity

Article by Cindy White, CMO, Mitek

Our digital identity holds a lot of power. We use it today to prove who we are to our banks, insurers, entertainment providers, healthcare providers, the government and more.

Our digital identities will be at the heart of post-pandemic economic revival, driving adoption of the sprawling sharing economy, and improving safety and security in this digital revolution. But the value of what digital identities can offer goes far beyond this.

Making digital identities mainstream will make it possible to open up a new world of inclusion for billions of people – in education, healthcare, and especially in banking. While the possibilities are huge, there is one major issue to contend with: half of the planet doesn’t currently have access to the internet and modern services. This digital exclusion also means that we are all missing out on potential innovation that comes with inclusivity, as well as the obvious economic and financial benefits.

That said, mobile technologies possess great power to accelerate change. The technology industry holds the responsibility to build and offer technologies that can connect digitally underserved communities. Creating technologies has an inherently moral dimension, to “shape how the people using them can realise their potential, identities, relationships, and goals”, according to the World Economic Forum.

Putting the onus on product innovation

We’ve all heard the expression, “With great power comes great responsibility”. Technology firms have both.

Today’s expectation is that technology solutions are unbiased. Race is probably the most widely discussed issue around this, and one which must be urgently addressed. Facial recognition systems are under scrutiny and in some cases have already been proven to be racist.

In one study, researchers at the National Institute of Standards and Technology (NIST) found that facial recognition algorithms "falsely identified” African American and Asian faces 10 to 100 times more than Caucasian faces. But algorithms and technologies aren’t capable of intrinsic bias, so the responsibility to be inclusive lies with the technology leaders driving this innovation.
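The kind of differential NIST reports can be sketched as a per-group false match rate comparison under one global threshold. The impostor scores, group labels, and threshold below are hypothetical illustrative values, not NIST data.

```python
# Illustrative sketch of the differential NIST measures: the ratio of
# false match rates (FMR) between demographic groups at one fixed
# threshold. Scores, groups, and threshold are hypothetical.

def false_match_rate(impostor_scores, threshold):
    """Fraction of impostor comparisons wrongly accepted as matches."""
    accepted = sum(1 for s in impostor_scores if s >= threshold)
    return accepted / len(impostor_scores)

# Hypothetical impostor similarity scores from a face matcher.
impostor_scores_by_group = {
    "group_1": [0.05, 0.10, 0.12, 0.20, 0.31, 0.08, 0.15, 0.11, 0.09, 0.14],
    "group_2": [0.35, 0.42, 0.28, 0.51, 0.33, 0.18, 0.47, 0.39, 0.26, 0.44],
}
threshold = 0.30  # one global acceptance threshold for everyone

fmr = {group: false_match_rate(scores, threshold)
       for group, scores in impostor_scores_by_group.items()}
print(fmr)                                        # {'group_1': 0.1, 'group_2': 0.7}
print(f"{fmr['group_2'] / fmr['group_1']:.0f}x")  # 7x differential
```

Audits of real systems compute exactly this kind of per-group metric; a system that looks accurate in aggregate can still show a large differential once results are broken down by demographic.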

Similarly, physical disability is another long-battled topic. As critical services move online, the need for accessibility only grows. Biometric authentication methods may provide the bridge for secure access to such services, but there is no one-size-fits-all approach to making them accessible for all.

For instance, biometric technologies need to consider how consumers interact with them – such as how to position the device, or how long users have to input their data. Likewise, voice biometrics may be a good solution for people with a visual impairment but won’t work for those with a hearing impairment. Across the board, instructions on how to complete verification checks need to be understood by everyone – no matter who they are.

The why behind diversity

Inclusion means equal access, but we aren’t there yet. We still have a long way to go, even with some of the world’s most widely adopted technologies.

Getting diversity wrong could be extremely damaging. Collectively we have a responsibility to ensure digital identity technologies are truly inclusive and don’t exclude or misrepresent the underrepresented, through racist and sexist facial recognition and artificial intelligence (AI) algorithms, or apps and mobile devices that don’t consider disabilities.

Change always starts from within. We need to put diversity at the centre of creating and building technology solutions.

When it comes to diversity in identity technologies, there is also an urgent business need. Gartner recently found that the overwhelming majority of companies see minimising bias and discrimination as a key driver behind their selection of identity technologies. With adoption of these technologies rising rapidly since the onset of the COVID-19 pandemic, and identity verification from home becoming the norm, there is no better time to act.

Transparency creates trust

What’s clear is this: Input from a diverse workforce leads to diverse output.

In the technology sector, and specifically in the identity space, trust is everything. Open disclosure on status and intentions will set organisations apart. Companies that consistently report on the outcomes of their diversity initiatives, and even their gaps and failures, will win through trust.

Identity verification is all about enabling trust in the digital economy. This means we need to be transparent in explaining, for example, how we are using AI and machine learning to protect consumers, while at the same time meeting consumer demand for convenience, accuracy, and quality. Being open about how this works and why it’s important will be crucial.

Looking to the future

Putting these practices in place isn’t a diversity issue – it’s a business issue. To be successful long-term, technology firms should be prioritising access for everyone.

For identity technology providers in particular, this means solutions need to enable inclusive access – free from bias and discrimination, whether geographic or biometric. We can create technology solutions that grant digital access to everyone, and work with governments and policymakers to make equal access a reality.

In ten years, we’ll look back to now as a pivotal moment for identity technologies.

Diversity is the key to being a success story.


