WeAreTechWomen grabbed a quick five minutes with Jing Huang, Senior Director of Engineering at Momentive, to learn more about machine learning, AI, bias and how Employee Resource Groups can improve outcomes. 

Why do algorithms become biased and how can leaders amplify the perspectives fed into them? 

On a basic level, artificial intelligence is a reflection of the real world. If the world we live in is unjust or unfair, algorithms will reflect those realities, resulting in unfair decisions, reinforced stereotypes, and perpetuated discrimination. There are three crucial sources of algorithmic bias to take into account: 1) the algorithm itself, 2) the training data, and 3) the humans involved. Taken together, these factors can produce substantial bias in algorithms that leaders and companies should acknowledge, understand, and take action to prevent.

For artificial intelligence and machine learning algorithms specifically, the effect of training data bias is profound. Training data can be unbalanced due to human-generated bias, data collection errors, or sampling bias. Algorithms become biased if the data they are trained on reflects societal biases and inequalities.
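To make the "unbalanced training data" point concrete, here is a minimal sketch of one way a team might flag imbalance before training. The labels and threshold are hypothetical, not from the interview; real audits would go much further.

```python
from collections import Counter

def imbalance_ratio(labels):
    """Ratio of the most common to the least common class in a label set.

    A ratio far above 1.0 suggests the training data is unbalanced and
    may push a model toward the majority class.
    """
    counts = Counter(labels)
    return max(counts.values()) / min(counts.values())

# Hypothetical label set: 90 "approved" vs 10 "denied" loan decisions.
labels = ["approved"] * 90 + ["denied"] * 10
print(imbalance_ratio(labels))  # 9.0
```

A check like this only catches one narrow kind of bias, which is the broader point: the numbers can look fine while human-generated or sampling bias still hides in how the labels were produced.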

There can be no ‘one size fits all’ approach for leaders to ensure their algorithms are as unbiased as possible, but the first step companies can take is to acknowledge that bias exists in AI and commit to addressing it. This starts with internally aligning on what healthy machine learning means and recognising that diversity and inclusion are essential components of responsible AI development.

Organisational leaders can amplify the perspectives of data fed into algorithms by actively seeking diverse perspectives, building diversified teams, and acquiring various data sources.

How does diverse leadership lead to healthy machine learning?

First, it’s important to define what healthy machine learning means. If we define it as machine learning that reflects our human values of fairness, responsibility, and trustworthiness, then representation in influential roles is important to ensure those values are amplified and biases are addressed.

By involving individuals with different perspectives and experiences in developing AI systems, companies can ensure that gender bias is identified and addressed early in the process. Diverse leadership can foster a culture of inclusion and accountability, where employees feel comfortable speaking up about potential bias and can work together to create ethical AI systems.

How can Employee Resource Groups improve outcomes within AI?

AI algorithms are the product of the algorithm’s logic, the training data, and the humans who code, train, evaluate, and use them. A crucial responsibility of AI leaders is to ensure that diverse perspectives are incorporated into each stage of the algorithm’s creation. As the field of AI continues to advance, leaders must ensure that best practices are maintained by establishing baseline criteria for identifying potential biases in datasets and conducting regular reviews of rules and processes.
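One example of a "baseline criterion" such a review might use is a demographic parity check: comparing how often a model produces a positive outcome for different groups. The sketch below uses invented group names and outputs purely for illustration; it is not a description of Momentive's process.

```python
def positive_rate(outcomes):
    """Fraction of positive (1) outcomes in a list of 0/1 decisions."""
    return sum(outcomes) / len(outcomes)

def parity_gap(outcomes_by_group):
    """Largest difference in positive-outcome rates between any two groups."""
    rates = [positive_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model outputs (1 = recommended), split by demographic group.
outcomes = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],  # 75% recommended
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # 25% recommended
}
print(parity_gap(outcomes))  # 0.5
```

A gap this large would trigger a closer look at the data and the model. Parity of outcomes is only one fairness notion among several, which is exactly why diverse reviewers are needed to decide which criteria matter for a given product.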

Employee Resource Groups (ERGs) can infuse multiple perspectives within AI by leveraging diverse experiences and perspectives. ERGs can offer feedback on how AI technologies might impact various communities and identities, identify biases in AI algorithms and models, promote diversity and inclusion within AI development teams, and develop culturally sensitive AI.

By including ERGs in the AI development process, organisations can create technologies that are more ethical, inclusive, and fair, while also benefiting from the unique insights and perspectives of their employees. This approach can lead to better AI outcomes, increased trust in technology, and a more equitable society.

About Jing Huang – Senior Director of Engineering, Machine Learning at Momentive 

Jing Huang is Senior Director of Engineering, Machine Learning at Momentive (maker of SurveyMonkey). She leads the machine learning engineering team, with the vision to empower every product and business function with machine learning. Previously, she was an entrepreneur who devoted her time to building mobile-first solutions and data products for non-tech industries. She also worked at Cisco Systems for six years, where her contributions ranged from security to cloud management to big data infrastructure.