Associate professor Elisabeth Wetzer says AI tends to treat men and women differently. Photo: Petter Bjørklund/SFI Visual Intelligence.

Researcher: This makes AI perform worse for women

Artificial intelligence may treat men and women differently. How does this happen? AI researcher Elisabeth Wetzer explains the biases surrounding AI technology.

By Petter Bjørklund, Communication Advisor at SFI Visual Intelligence

From health-promoting technologies to personal assistants, there is little doubt that artificial intelligence (AI) can help us in many different ways. But does it help everyone equally?

“AI has a tendency to treat men and women differently,” says Elisabeth Wetzer, associate professor at the UiT Machine Learning Group and SFI Visual Intelligence.

She is an AI expert and describes this as a significant challenge with today’s AI technology. Systems that favor job applications by men, grant lower credit limits to women, and recognize fewer female faces are only a handful of examples she mentions.

What makes an AI algorithm become “biased”? In other words, what makes it act in ways that favor or discriminate against groups such as men or women?

AI can reinforce biases in data

AI is trained on enormous amounts of data. Chatbots like ChatGPT, DeepSeek, and Elon Musk’s Grok are based on millions of pictures, videos, and texts from the internet. These “big data” are essential for an AI system to perform a given task.

But data are historical. This means they can reflect prejudices and outdated stereotypes from the past, for example pertaining to gender.

“If you look through a set of data from the last decade, you will quickly find groups who have been discriminated against based on their gender, sexuality, or skin color. Since AI is made to find patterns and correlations in data, there is a risk that the systems may pick up and reinforce biases from the dataset,” Wetzer says.

The consequences of this can be significant, especially for marginalized and underrepresented groups.

“Let us say you have a credit score system designed to determine the size of the loan someone should be granted. If the system is based on salary statistics from the past sixty years, it will pick up that there is a significant wage gap between women and men,” she explains.

“The system will then assume that women are less economically responsible than men, and that women are less suited to be granted a loan. This means that it has learned a skewed and incorrect connection between gender and income.”
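The mechanism Wetzer describes can be illustrated with a small, entirely hypothetical simulation. Assume (as in her example) that historical salary data contain a wage gap, and that a naive credit model simply approves loans above a salary threshold. All numbers below are invented for illustration, not real statistics:

```python
import random

random.seed(0)
n = 10_000

# Hypothetical historical salaries with a simulated wage gap:
# men average 50,000, women average 43,000 (invented numbers).
salaries_men = [random.gauss(50_000, 8_000) for _ in range(n)]
salaries_women = [random.gauss(43_000, 8_000) for _ in range(n)]

# A naive "credit model": approve anyone above a fixed salary threshold.
THRESHOLD = 48_000
rate_men = sum(s > THRESHOLD for s in salaries_men) / n
rate_women = sum(s > THRESHOLD for s in salaries_women) / n

print(f"approval rate, men:   {rate_men:.2f}")
print(f"approval rate, women: {rate_women:.2f}")
```

Note that this toy model never looks at gender directly. Its decisions still differ sharply between men and women, because the wage gap is baked into the historical data it relies on.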

Should AI be gender-blind?

If there is a risk of AI misusing gender information to treat people differently, does this mean that AI systems should be developed to be gender-blind?

It depends on what the AI system is designed for, Wetzer responds. In some cases, information about gender can be an important factor for the system to take into account.

“For example, some diseases occur more frequently in women than men. For those cases, you do not want an AI to consciously ignore the person’s gender when detecting such diseases,” Wetzer says.

“If the information is relevant to the AI’s task, it is important that gender is not ignored. But an algorithm should never use this information to determine how suited someone is to be granted a loan,” she adds.

It is not always easy to develop systems that leave gender out of consideration. This is because AI is adept at detecting gender-related signals that the developers may not have realized existed in the data. For example, a recruiting tool from Amazon learned to penalize job applications that mentioned universities associated with women.

Lacking representation in data

In today’s global society, it is important that all people are represented equally. This also applies to the data that AI systems are built on. Who is, and who is not, represented in the data has a significant impact on whom the technology performs better or worse for.

“If a specific group of people is underrepresented in the data compared to other groups, the system will perform worse on that particular group,” Wetzer explains.

If AI is trained on images of male professors, it will assume that the profession is reserved for men. AI developers need to be conscious of the representation in the dataset, especially when developing systems that make decisions affecting people’s health and well-being.

"For example, an AI cancer diagnostics system may have been created in a developed country that can afford to do so. People will assume that it will perform well for everyone, but some marginalized groups may not ever have been part of the training data. The system will most likely not work well on those people."

“Woke” AI

But the pursuit of equal representation in AI models and data can also go too far. Last year, Gemini, Google’s AI-based image generator, was accused of being “woke” when it generated images of German soldiers from 1943 with African and Asian appearances.

“If you ask an AI about how something was in Germany at a particular point in time, it would be wrong of it to assume that the population was more diverse than it actually was. You can clearly see that it has actively tried to make a more diverse set and produced something which does not make much sense,” Wetzer says.

Skewed gender balance

Biases in AI do not stem from training data alone. Only 30 percent of today’s global AI workforce are women, meaning that the systems are often developed by men. This can significantly impact how such systems are designed.

“There are a lot of things to consider when developing AI, for example which training data, neural network, and parameters to use. These decisions are made by someone, and today’s workforce is not particularly diverse,” she says.

If AI is developed by only a single group, there is a risk that the system will be based on how that particular group understands, experiences, and interprets the world. This is rarely a conscious choice and usually happens without the developers themselves being aware of it.

“Several studies show that technology is shaped by those who create it. This means that there is a chance that a single group may forget to include others’ perspectives and experiences around gender discrimination and racism.”

A need for female role models

The AI workforce and academia need to reflect the diversity of society, she says. An increased focus on diversity among AI developers and researchers is crucial for developing AI technology that works well for everyone.

“It is absolutely essential to include different perspectives and experiences in the development of these solutions”, Wetzer emphasizes.

She believes that the field will become more diverse. However, several measures are still needed to motivate girls and women to study, research, and develop AI.

“We must continue to spark interest in STEM sciences from an early age and put STEM careers on the map for girls. We also need strong role models who can inspire them to study and work with AI. This means we need to shed more light on female researchers and their contributions to the field."

AI regulation is necessary

Last year, the AI Act, the world’s first AI law, was passed in the EU. It imposes strict requirements for the responsible development and use of AI in Europe and Norway. The entire legislation is set to be implemented in Norway by 2026.

The lack of such guidelines can contribute to reinforcing social and economic inequalities among different groups of people, such as those based on gender. Wetzer is positive about the AI Act and sees it as an important step toward developing safer and fairer AI.

“I believe it will provide thorough guidelines on how AI systems should be developed and tested before being implemented, similarly to how medications are tested. There are clear guidelines and multiple stages that must be followed before the drugs can be used and sold, and the same should apply to AI”, Wetzer says.

“The regulation will encourage developers and researchers to consider how AI systems should be designed according to fundamental ethical principles. It is important that AI systems serve more than just corporate interests”, she concludes.

