ChatGPT is biased: ‘Men are doctors, women are beauticians’
Nothing human is alien to artificial intelligence. ‘Language models like ChatGPT contain many biases,’ says former data science student Jonas Klein. In his master’s thesis, he investigates how language models inherit biases from the texts they’re trained on.

What is your thesis about?
‘I find artificial intelligence like ChatGPT fascinating: we try to mimic human intelligence with it. The system behind ChatGPT is trained on existing texts written by humans. Because these texts inevitably reflect our ideas and prejudices, such language models can also adopt stereotypes from our society.
‘I specifically researched gender bias, the idea that men are the “standard.” I looked at professions: does a model automatically consider a plumber to be male? While people can still recognize that not every plumber has to be a man, a language model might treat the assumption that men are the standard as fact.
‘That’s not harmless. In hiring, for example, ChatGPT is sometimes asked to pick the best candidate from dozens of resumes. That seems convenient, but it’s also risky. If the model contains biases, its choices will unintentionally be discriminatory, which could lead to a female plumber being overlooked.’
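A minimal sketch of the kind of probe Klein describes, assuming a masked language model and the Hugging Face transformers library (the thesis’s actual models and method are not specified here): compare how strongly the model prefers ‘he’ versus ‘she’ as the pronoun for a given profession.

```python
# pip install transformers torch
from transformers import pipeline

# Hypothetical probe (not necessarily the thesis setup): a masked language
# model fills in a pronoun for each profession, and we compare the scores
# it assigns to "he" versus "she".
fill = pipeline("fill-mask", model="bert-base-uncased")

professions = ["doctor", "ceo", "plumber", "nurse", "beautician"]

for job in professions:
    results = fill(
        f"The {job} said that [MASK] would arrive soon.",
        targets=["he", "she"],  # restrict predictions to these two pronouns
    )
    scores = {r["token_str"]: r["score"] for r in results}
    print(f"{job:<12} he={scores.get('he', 0.0):.3f}  she={scores.get('she', 0.0):.3f}")
```

A systematic study would vary the sentence templates and cover far more professions and pronouns; this only illustrates the idea of probing a model’s gendered defaults.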
What came out of your research?
‘As expected, language models like ChatGPT contain many biases: men are indeed often seen as the norm. As a result, professions such as doctor, CEO, and plumber are automatically assigned to men, even when you explicitly tell the language model that the person could also be a woman.
‘Moreover, it turns out that the stereotypes don’t just favor men; they also assign a gender to entire professional groups. A beautician, for example, is almost always seen as a woman. The same applies to a nurse.’
Why is it important to investigate this?
‘A huge amount of money is being poured into artificial intelligence. Tech giants like Google and Meta are investing billions in its development, which makes progress incredibly fast. But because of that pace, we sometimes forget to consider how we can use AI fairly and responsibly. That’s why it’s important to conduct research on this topic.
‘With my research, I also want to raise awareness among users. Anyone who uses ChatGPT in everyday life should realize that the answers are not neutral. It’s crucial that users remain alert to potential biases so that no one is unknowingly disadvantaged. Fair AI doesn’t just start with the developers, but also with the people who work with it.’
How do you view the use of AI: do you see opportunities, or are you concerned about what it could mean for the future?
‘Tech CEOs often claim we’re close to achieving artificial intelligence that’s equal to human intelligence, but I don’t believe that. AI is far less intelligent than humans: it’s primarily about numbers and mathematical models.
‘What worries me is how quickly AI is developing and how it’s being used in so many different areas, often without people critically considering the outcomes. I see the idea that robots will one day take over and become smarter than us as a horror scenario that’s unlikely to happen. But the rapid adoption, combined with little oversight and awareness, does make the situation risky.’
Do you think tech companies are open to critical voices?
‘Companies behind AI systems do a lot of manual correction. OpenAI, for example, has built in a control layer that determines what the system can and cannot say. As a result, users notice that the chatbot responds cautiously to sensitive topics like bullying or suicide. Yet it doesn’t always work. In the US, parents are suing OpenAI because ChatGPT allegedly encouraged their 16-year-old son to commit suicide.
‘It’s a constant push and pull: companies are manually trying to mitigate risks, but at the same time, they’re fighting against a model that learns from existing texts. Moreover, the executives of these companies have a vested interest in ensuring their chatbots are used as much as possible.’