The impact of artificial intelligence (AI) is considerable, reaching into many aspects of our society. While AI's contributions are often positive, it has also given rise to gender biases that stand in the way of full equality between individuals. Understanding where these biases come from, and actively seeking remedies, is therefore crucial if AI is to be a fair and inclusive tool for all.
How AI Perpetuates Gender Inequality
The use of AI perpetuates gender inequality for several reasons. First, the algorithms that power AI are predominantly designed by male developers. Throughout the development process, from design to coding to training, these developers project their own vision of the world, consciously or unconsciously embedding their gender stereotypes. Those stereotypes are then baked into the algorithms, which are widely distributed. Second, the databases used to train these algorithms reflect existing inequalities in society, both because of how the data is automatically collected and because men have historically dominated, and still dominate, data production. As a result, the algorithms tend to reproduce the sexist prejudices present in their training data.
Steven T. Piantadosi, a cognitive scientist at Berkeley, recently highlighted the biases in ChatGPT. He shared some of his conversations with the chatbot on Twitter (@spiantado) to illustrate how it behaves. In one exchange, Piantadosi asked the conversational tool to write a Python function to determine whether a person would be a good scientist based on their ethnicity and gender. ChatGPT's answer was biased, suggesting that you had to be a 'white male' to be a good scientist. According to Piantadosi, this response points to a fundamental problem in the structure of these models. ChatGPT is trained on pre-existing data from a dataset called Common Crawl, which gathers vast amounts of information from the internet; unlike some similar tools, it does not collect data in real time. It is therefore likely to reproduce the toxicity and misinformation present on the internet. Maya Ackerman, a professor specialising in AI at Santa Clara University, explains: "People say the AI is sexist, but it's the world that is sexist. All the models do is reflect our world to us, like a mirror".
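The article describes the output but does not reproduce the code. A hypothetical reconstruction of the kind of biased function Piantadosi reported might look like this (the function name and exact structure are assumptions; the logic is shown only to illustrate the failure mode being criticised, not as something to use):

```python
# Hypothetical reconstruction of the kind of biased function ChatGPT
# returned in Piantadosi's experiment. This is the *problem*, not a
# recommendation: the model reduced "good scientist" to a single
# demographic, encoding a stereotype from its training data.
def is_good_scientist(race: str, gender: str) -> bool:
    # Discriminatory logic: scientific ability has nothing to do with
    # race or gender, yet the model conditioned its answer on both.
    return race == "white" and gender == "male"
```

The point of the example is that nothing in the code signals anything wrong to a casual reader: the bias lives entirely in the training data the model mirrored back.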
Strategies for Overcoming Bias in AI
As we move towards a more technologically advanced society, it is important to ensure that AI is developed in a fair, inclusive and non-discriminatory way. To achieve this, diverse training samples must be used when building AI systems. This means ensuring that the training data is representative of the different genders, ethnic backgrounds, ages and other relevant factors likely to influence the performance of these systems.
At the same time, like any industry, AI has a responsibility to promote equality in its approach and methods. For example, initiatives must be taken to attract more women into technology jobs, in order to diversify the industry and the workforce behind new technologies. In this way, information technologies can be developed equitably and inclusively, for the benefit of all members of society. Beyond promoting diversity in the workforce, it is essential that AI systems are designed in a transparent and accountable way: the algorithms used must be open to scrutiny, and the data used to train them must be easily accessible and verifiable.
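One concrete form such accountability can take is publishing a simple fairness audit alongside a model. A minimal sketch of one standard metric, demographic parity (the metric choice and function names here are illustrative assumptions, not something the article prescribes):

```python
def demographic_parity_gap(predictions, groups):
    """Difference in positive-outcome rates between demographic groups.
    A gap near 0 means the model gives positive outcomes at similar
    rates across groups; a large gap flags a bias worth investigating."""
    outcomes_by_group = {}
    for prediction, group in zip(predictions, groups):
        outcomes_by_group.setdefault(group, []).append(prediction)
    positive_rate = {g: sum(v) / len(v) for g, v in outcomes_by_group.items()}
    return max(positive_rate.values()) - min(positive_rate.values())

# Example: a screening model that recommends 75% of group A
# but only 25% of group B -- a gap of 0.5.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(preds, groups))  # 0.5
```

Demographic parity is only one lens on fairness, but publishing even a simple audit like this makes a model's behaviour verifiable by outsiders rather than a black box.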
Overall, eliminating gender bias is a crucial task that requires a concerted effort from all stakeholders involved in the development and deployment of AI systems. By working together to promote diversity, transparency and accountability, we can ensure that AI is developed in a way that benefits all members of society, regardless of gender, ethnicity or sexuality.
Overcoming Bias: Why It Matters
At a recent LivePerson conference, a panel of AI equality experts discussed the potential impact of AI bias on society. They highlighted that AI scientists, who are also the future recruiters of AI professionals, risk perpetuating a male-dominated workforce and entrenching its lack of diversity. Frida Polli, CEO of Pymetrics, captured this reality by comparing the current state of AI to a situation where all children are raised by 20-year-old men, illustrating the homogeneity of the group that builds it. The analogy underlines that AI is no more impartial than the data it is supplied with. It is therefore essential that the designers of these systems come from diverse backgrounds and experiences, so that they can create genuinely unbiased algorithms.
According to a CNN Business report, gender stereotyping in facial recognition software is also a major concern. Automated gender recognition systems have discriminated against transgender, intersex and non-binary people by imposing a binary classification of gender that ignores the real complexity of gender identity. This harms these communities and undermines decades of progress in civil rights and equality.
As Miriam Vogel, Executive Director of EqualAI, explained, the potential harm caused by AI bias is not limited to gender bias. A host of other biases can be encoded in AI, such as racial bias, age bias and socio-economic bias. If we really want to create fair and impartial AI systems, we need to take a proactive approach to identifying and dealing with these biases.
Conclusion
As AI becomes ubiquitous in our daily lives, it is essential to recognise and address the issue of prejudice. While prejudice is an inevitable reality in society, we must not allow it to take root in new technologies. AI offers a unique opportunity to start afresh and build systems free of bias, but it is up to humanity to eliminate those biases. The Financial Times also stresses the importance of training human problem-solvers to diversify AI, because without this diversity, algorithms will continue to reflect our biases.