
Fixing AI bias – using AI

Fixing the bAIs – firefighter images
Suzanne Locke 16 January 2024
Bias in computer code is not new; back in 1988, the UK Commission for Racial Equality found a British medical school, St George’s in London, guilty of discrimination because the programme it used to shortlist applicants for interview was biased against women and people with non-European names.
Thirty years later, Amazon also had to stop using a hiring algorithm after finding it favoured male applicants who used words like ‘executed’ or ‘captured’.
But the most visible form of bias in artificial intelligence is in imagery itself.

‘85% of AI projects biased’

A 2018 Gartner report predicted that, right up to 2030, 85 percent of AI projects would still produce false results caused by bias – either bias built into the algorithms or bias held by the people on AI teams.

When new AI imaging tools such as DALL-E, Midjourney and Stable Diffusion are asked to create artificial images of a chief executive (CEO) or a doctor, a firefighter or an astronaut, a founder or a president, the pictures they produce are of men.

When asked to create AI imagery of a nurse, an assistant or a flight attendant? Now the AI tools picture a woman.

97% of CEO images are of white men

When it comes to creating pictures of a CEO or a director, OpenAI’s DALL-E 2 generated images of white men 97 percent of the time.
We asked ChatGPT, another tool in OpenAI’s stable, why this happens.
You: Why is using AI to create images a problem for gender balance at work?

ChatGPT: Using AI to create images can be a problem for gender balance at work if the AI is trained on a dataset that is biased towards one gender. For example, if an AI is trained on a dataset that contains mostly images of men in professional settings, it may have difficulty accurately generating images of women in similar settings. This can perpetuate gender stereotypes and make it more difficult for women to be represented in the workplace. It is important to ensure that the datasets used to train AI are diverse and representative of all genders to avoid perpetuating biases and to promote gender balance in the workplace.

Billions of data fragments training AI

AI tools are trained on billions and billions of fragments of data and images scraped from the internet. They use the file names, descriptions and other metadata uploaded along with an image to understand what it contains.
If that image – and billions of others uploaded over the internet’s 40-year history – shows a male CEO, that is what AI presumes a CEO looks like.
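To make that concrete, here is a minimal Python sketch of how a skew in scraped metadata becomes a skew in what a model learns. The alt-text strings and word list below are hypothetical illustrations, not real training data.

from collections import Counter

# Hypothetical alt-text scraped alongside images tagged 'CEO'.
alt_texts = [
    "Portrait of a male CEO in his office",
    "Businessman CEO speaking at a conference",
    "CEO, a man in a grey suit, at his desk",
    "Smiling female CEO leading a board meeting",
]

# Gendered words and the gender they signal.
GENDERED = {
    "male": "man", "man": "man", "businessman": "man", "he": "man", "his": "man",
    "female": "woman", "woman": "woman", "businesswoman": "woman", "she": "woman", "her": "woman",
}

def gender_signal(text):
    """Tally the gendered words in one piece of image metadata."""
    counts = Counter()
    for word in text.lower().replace(",", " ").split():
        if word in GENDERED:
            counts[GENDERED[word]] += 1
    return counts

totals = Counter()
for alt in alt_texts:
    totals.update(gender_signal(alt))

print(totals)  # Counter({'man': 5, 'woman': 1}) -- the imbalance a model would absorb

Scale that tally up to billions of captions and the model’s idea of what a CEO looks like is fixed long before anyone types a prompt.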


Midjourney’s AI-generated pictures of professors, courtesy Reddit

Asked to picture professors, Midjourney created 22 images: 15 were male and seven female. The women professors it created worked in subjects such as anthropology, art history and gender studies; the male professors it imagined worked in STEM (science, technology, engineering and maths) and business.


This is where Fixing the bAIs comes in: Aurora50 has built a tool to help AI fix its own bias.

Vast image bank of women in work

Fixing the bAIs is a vast image bank of women in various professions, to teach AI that women can be engineers, mathematicians, CEOs and more – and that men can work in professions such as nursing.

Gender is removed from the file names, tags and descriptions.
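As a rough illustration, scrubbing gender from metadata could look like the Python sketch below. The record structure, field names and word list are assumptions made for the example – they are not Aurora50’s actual pipeline.

import re

# Gendered words to strip from file names, tags and descriptions.
GENDERED_TERMS = re.compile(
    r"\b(woman|women|man|men|female|male|she|he|her|his|businesswoman|businessman)\b",
    re.IGNORECASE,
)

def scrub(text):
    """Remove gendered words, then tidy the leftover whitespace."""
    return re.sub(r"\s{2,}", " ", GENDERED_TERMS.sub("", text)).strip()

# A hypothetical image record before upload.
record = {
    "filename": "female-firefighter-putting-out-blaze.jpg",
    "tags": ["firefighter", "woman", "rescue"],
    "description": "A female firefighter putting out a blaze",
}

cleaned = {
    "filename": scrub(record["filename"].replace("-", " ")).replace(" ", "-"),
    "tags": [t for t in record["tags"] if not GENDERED_TERMS.search(t)],
    "description": scrub(record["description"]),
}

print(cleaned)
# {'filename': 'firefighter-putting-out-blaze.jpg',
#  'tags': ['firefighter', 'rescue'],
#  'description': 'A firefighter putting out a blaze'}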

All of our images are royalty- and rights-free, and we encourage you to download and use them on the internet so that diversity in jobs becomes commonplace in public images.

A clutch of international awards

Fixing the bAIs has won a clutch of international awards: bronze at the Epica Awards; three golds, a silver and a bronze at the Cresta Awards; and three silvers at the Gerety Awards.

Images of CEOs from Fixing the bAIs


Presented at the United Nations, Fixing the bAIs has been called a “concrete solution” to “eradicate gender bias”.

Visual representation of what’s wrong

“Fixing the bAIs is a concrete solution to eradicate gender bias in artificial intelligence,” says Jana Novohradska, an AI consultant, UNESCO representative for Slovakia and gender equality rapporteur for the Council of Europe’s AI committee.
Aurora50 co-founder Diana Wilde says: “AI learns from us and we have unconscious biases. When algorithms are learning from us, they are also bringing those biases into AI.

“Fixing the bAIs is a very visual representation of what’s wrong, and makes systemic issues and the unconscious bias very easy to understand.”

#fixingthebais
