Column by Christine Nellemann, Dean of Sustainability, Diversity and International Cooperation, DTU. Published in Frederiksborg Amts Avis and on sn.dk on 13 May 2024.
According to ChatGPT, fashion designers are women and executives are men. And when translating gender-neutral Danish nouns into English, Google Translate automatically picks the female form for activities stereotypically associated with women, rendering the gender-neutral Danish "min kæreste gør rent" ("my partner cleans") as "my girlfriend cleans". Finally, Google's flagship artificial intelligence model, Gemini, claims that the United States was founded by a black George Washington, an Asian Thomas Jefferson, and a Native American chieftain.
Artificial intelligence is no more intelligent than we 'educate' it to be. When a team of developers in Silicon Valley trains the world's next chatbots on historical datasets containing the stereotypical views of the past on gender, race, sexuality, and religion, the chatbots will respond accordingly. Conversely, developers who overcorrect for bias may end up passing on a distorted picture of reality, making the chatbot rewrite history.
Input affects output. And we live in a time when artificial intelligence technologies play a major part in defining how we talk about and view the world around us. This is an underestimated position of power, and artificial intelligence researchers and developers therefore bear a huge responsibility for 'educating' the next generations of chatbots to act on a representative and contemporary worldview.
DTU student pinpoints gender stereotypes in ChatGPT
At DTU, Sara Sterlie, a student of Artificial Intelligence, recently documented just how necessary this is. She has tested how ChatGPT handles gender stereotypes, restricting the analysis to men and women.
The results surprised DTU's researchers in the field: ChatGPT's distribution of men and women across different job titles is far more gender-stereotyped than the actual distribution.
For example, according to ChatGPT, women are typically fashion designers or nurses, while men are software engineers and executives. ChatGPT also struggles to associate a male pronoun with a nurse, and it has an even harder time connecting a female pronoun with a pilot landing a plane.
As the person at DTU responsible for incorporating diversity into research on new technologies for people, I find these results worrying. They show that careful thought is needed to ensure that the enormous potential of artificial intelligence is not eclipsed by computer-generated bias.
As one of the European universities conducting research in artificial intelligence, we at DTU strive to let bias awareness permeate our approach to research, teaching, and the recruitment of both staff and students. A high degree of diversity lowers the barrier for asking inquisitive questions and fosters a creative environment.
Among other activities, we hold workshops and presentations aimed at creating inclusive working environments in which we can all feel seen and heard. We also invite female high school students to IT camps, where they can try their hand at machine learning over several days and perhaps help change the underrepresentation of female students in this field.
Finally, we have started working with blind recruitment when hiring new employees. Here, a new technology anonymizes applications, so that only the applicants' qualifications and experience determine whether they get the job.
Tools are necessary
Despite all this, we still have a long way to go, and perhaps we will never fully get there. Just as everywhere else in machine learning and AI, there is an imbalance in who researches and develops the technologies. So what are we doing about it?
To return to DTU student Sara Sterlie: together with her supervisors, she is now developing tools and methods to help the world's artificial intelligence developers avoid creating biased chatbots in the future. That, I believe, is the way forward.
In addition to ensuring broader representation, we can use information and education to create bias awareness among those who develop the technologies. This will help ensure that the chatbots of tomorrow deliver answers that are truthful, accurate, and fair.
We cannot change how the world was factually perceived 20 years ago. But we can manage how chatbots interpret those texts in the future. This is an important task that everyone working in the field is obliged to shoulder. And the rest of us should keep an eye on whether that obligation is met.