September 11, 2020, ainerd

Let’s Shine a Light on Gender-Biased AI Systems.

As AI systems vault onto the market, it will be absolutely imperative to build trust in these digital systems. That will require detecting bias in black-box models in advance, measured against a standard scoring engine. Some companies have already built products for this: CognitiveScale, for example, partnered with the not-for-profit AI Global to develop the first AI Trust Index (ATX), which its Cortex Certifai product uses to shine a light on bias in AI models. Truly amazing technology.
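To make that concrete, here is a minimal sketch, in Python with hypothetical column names, of the kind of group-fairness check such a scoring engine might run over a black-box model’s outputs. It is not Certifai’s or ATX’s actual method, just one standard metric (the disparate impact ratio) applied to predictions:

```python
import pandas as pd

def disparate_impact(preds: pd.DataFrame,
                     group_col: str = "gender",
                     outcome_col: str = "approved") -> float:
    """Ratio of favorable-outcome rates between the worst- and
    best-treated groups. The common 'four-fifths rule' flags a
    model when this ratio falls below 0.8."""
    rates = preds.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# Hypothetical black-box model outputs: one row per applicant.
preds = pd.DataFrame({
    "gender":   ["F", "F", "F", "F", "M", "M", "M", "M"],
    "approved": [ 0,   1,   0,   1,   1,   1,   0,   1 ],
})

ratio = disparate_impact(preds)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.67
if ratio < 0.8:
    print("Potential gender bias flagged (four-fifths rule).")
```

The appeal of a check like this is that it needs only the model’s inputs and outputs, which is exactly what makes auditing a black box feasible.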

Now let’s look at a few points worth considering:

As artificial intelligence (AI) technology gains power, many groups are taking steps to ensure the security and integrity of their systems. Biased AI systems are likely to become an increasingly common problem as time goes on, and many of these problems relate to gender bias.

Artificial intelligence may give us the tools to fight discrimination, but we still have some difficult questions to ask. If AI and automation continue to ignore and leave behind women’s experiences, everyone will be worse off. If action is not taken now, AI is likely to widen the gender gap in the future, and that is bad for everyone.

States and businesses should consider the intersectional dimensions of gender discrimination; despite the best intentions, their responses will otherwise lag behind the use of AI and automation to achieve gender equality. Data feeding algorithms and AI/automation should be disaggregated by gender; otherwise these technological tools will not be informed by women’s experiences and could instead internalize existing gender bias against women. Companies involved in AI or automation should adopt gender-equity principles to overcome inherent gender bias. Rather than treating gender as a problem to be addressed, standards should take account of gender perspectives.
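As a toy illustration of what gender-disaggregated evaluation looks like in practice, the sketch below (plain Python, invented data) scores a model separately for each gender group instead of reporting a single aggregate; the overall number can look healthy while one group’s accuracy lags badly:

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Per-group accuracy instead of one aggregate number.
    `records` is an iterable of (group, y_true, y_pred) tuples."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, y_true, y_pred in records:
        total[group] += 1
        correct[group] += int(y_true == y_pred)
    return {g: correct[g] / total[g] for g in total}

# Hypothetical evaluation set: 75% aggregate accuracy hides the gap.
records = [
    ("M", 1, 1), ("M", 0, 0), ("M", 1, 1), ("M", 0, 0),
    ("F", 1, 0), ("F", 0, 0), ("F", 1, 1), ("F", 0, 1),
]

for group, acc in sorted(accuracy_by_group(records).items()):
    print(f"{group}: {acc:.0%}")  # F: 50%, M: 100%
```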

The fact that AI systems learn from data does not guarantee that their outputs will be free from human prejudice and discrimination. Artificial intelligence can inherit, or even reinforce, the prejudices of its creators, who are often unaware of their own biases, and it can be trained on biased data.

AI systems have also been shown to be better at identifying men than women: the errors lean against women, and that cumulative bias gets baked into both the data and the algorithms trained on it.

A growing number of AI standards integrate gender perspectives, and Google AI has developed its own guidelines suggesting ways to avoid gender bias and other well-documented factors that can lead to AI bias. While gender diversity is mentioned only once, Google’s policies explicitly refer to integrating a gender perspective.

When artificial intelligence is deliberately built and managed by humans with bias in mind, it can help eliminate prejudice at the source of recruitment and hiring. AI tools that automate specific recruitment tasks, such as structuring job interviews, can enhance the experience of potential employees and increase diversity by stripping out gender bias.
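As a deliberately simple sketch of the “eliminating bias at the source” idea, the Python snippet below redacts explicitly gendered tokens from a résumé before a screening model sees it. The token list and helper are hypothetical, and, as the next paragraph warns, stripping explicit markers is nowhere near sufficient on its own, since names and other proxies still leak gender:

```python
import re

# Hypothetical token list; real tools must also handle names,
# hobbies, and other indirect proxies for gender.
GENDERED_TOKENS = re.compile(
    r"\b(he|she|him|her|his|hers|mr|mrs|ms|male|female)\b",
    re.IGNORECASE,
)

def blind(resume_text: str) -> str:
    """Redact explicitly gendered tokens so a screening model
    cannot condition on them directly."""
    return GENDERED_TOKENS.sub("[REDACTED]", resume_text)

print(blind("Ms. Jane Doe led her team of 12 engineers."))
# -> "[REDACTED]. Jane Doe led [REDACTED] team of 12 engineers."
```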

But, as artificial intelligence experts warn, AI itself can complicate the very systems aimed at reducing prejudice: there is a risk that AI will exacerbate the problem of gender bias in its attempt to solve it.

While robotization and automation of jobs will affect both men and women, gender bias is likely to persist and to disproportionately affect women. For example, if the AI used to screen potential applicants encodes its data scientists’ gender bias, the resulting workplace could end up exclusively male.
