Feminist AI: Prioritising Social Good Over Performance

How to overcome inequalities in and through AI

Feminist AI is not just a trend; it represents a fundamental shift in how we can think about technology. Unlike utopian views on AI, which assume technology can solve complex social issues in a neutral way, Feminist AI recognises that all AI systems are shaped by human choices, which can embed bias and reflect existing power dynamics in society. Instead of aiming for an illusory objectivity of data, Feminist AI embraces subjectivity and contextual understanding as strengths.

There’s a tendency to think that Feminist AI is about women only. While women are among the historically marginalised groups, this approach to AI goes beyond gender, tackling broader inequalities to foster social empowerment, inclusion, and equality through AI research and practice.

In this interview, Sara Colombo and Geert-Jan Houben talk about Feminist AI and what it can mean for people and society.

Who benefits from Feminist AI?

“Feminist AI is an interdisciplinary approach to AI design and development grounded in feminist principles and theories. These principles help address various forms of marginalisation, such as gender bias, racial discrimination, disability bias, and other systemic inequalities affecting diverse social groups. Feminist theories offer frameworks for developing AI in a way that is more reflective, critical, and inclusive,” explains Sara Colombo, Assistant Professor at the Faculty of Industrial Design Engineering and co-director of the Convergence Feminist Generative AI Lab.

“People often respond to Feminist AI with interest and curiosity, as the connection between feminism and AI is not immediately obvious. It’s always important to clarify that while Feminist AI includes discussions on women and gender issues, it also addresses broader themes such as race, disability, age, colonialism, and, more broadly, the distribution of power in society. We examine how AI can either reinforce or challenge these structures.”

Is AI objective?

“As a data modeller myself, I see AI as an artefact that assumes a certain picture of the world. That picture is encoded in data, and it is therefore by definition inaccurate and vulnerable to all kinds of effects. So it is not the AI per se that is inaccurate; it is the data’s representation of the real world that is inaccurate. That is why it is interesting to look at theories that help us consider and overcome these inaccuracies,” explains Geert-Jan Houben, Pro Vice Rector Magnificus AI, Data and Digitalisation.

“A common misconception is that working with data means working with an objective entity, but this assumption is extremely risky, because data is never truly neutral. When we treat data and AI as purely objective, we overlook how our own perspectives, backgrounds, and biases shape the choices we make in collecting, selecting, and using data. We fail to consider our own positionalities, as well as systemic inequalities, and how they are reflected in the datasets we design and build,” mentions Sara. “This is especially critical in AI, where models built on biased or non-representative data can perpetuate and even amplify existing forms of discrimination.”

There are always assumptions and abstractions at play, because you never truly get the total truth. In designing a product, we somehow need to strike a balance between how accurate the representations used in the product can be and what we want to do with the product.

Geert-Jan Houben

How can we improve fairness in AI?

“Bias in training data is a major challenge for the development of fair and just AI systems,” stresses Sara. “Another critical challenge is the lack of diverse perspectives in AI design, development, and governance.”

Sara elaborates further: “It is crucial to develop AI solutions with the inclusion and representation of the diverse groups they will impact. Yet, I’ve seen developer teams working on AI-driven healthcare solutions make critical decisions without any real understanding of the diverse groups of patients – their needs, values, lived experiences. Unfortunately, current AI development processes do little to encourage AI engineers to question whether this understanding is relevant or necessary. Feminist AI seeks to tackle these issues by adopting participatory design, promoting algorithmic transparency, and conducting more inclusive ethical audits. Together, these elements contribute to the development of fairer and more equitable AI systems.”
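The kind of audit Sara describes can be made concrete with a small, hypothetical sketch (not part of the interview or of any of the projects mentioned). The idea: a model trained mostly on data from one group can look accurate in aggregate while performing noticeably worse for an under-represented group, and this only becomes visible when error rates are reported per group rather than as a single figure. All data, groups, and numbers below are invented for illustration.

```python
# Minimal sketch of a disaggregated (per-group) audit of a classifier.
# The dataset and group labels are synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Generate hypothetical samples whose feature/label relationship depends on the group."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > shift).astype(int)
    return X, y

# Group "A" is heavily over-represented in the training data; group "B" is not.
X_a, y_a = make_group(n=5000, shift=0.0)
X_b, y_b = make_group(n=200, shift=1.0)

X = np.vstack([X_a, X_b])
y = np.concatenate([y_a, y_b])
group = np.array(["A"] * len(y_a) + ["B"] * len(y_b))

model = LogisticRegression().fit(X, y)
pred = model.predict(X)

# A single aggregate number hides how the model behaves for each group.
print(f"overall accuracy: {np.mean(pred == y):.3f}")

# Disaggregated metrics make the gap visible.
for g in ["A", "B"]:
    mask = group == g
    acc = np.mean(pred[mask] == y[mask])
    fpr = np.mean(pred[mask][y[mask] == 0] == 1)  # false-positive rate
    fnr = np.mean(pred[mask][y[mask] == 1] == 0)  # false-negative rate
    print(f"group {g}: accuracy={acc:.3f}, FPR={fpr:.3f}, FNR={fnr:.3f}")
```

Reporting metrics per group is only one ingredient of the broader participatory and transparent process described above, but it shows why evaluation choices, like the data itself, are never neutral.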

We need to narrow the gap between AI engineers and the diverse groups of people that are affected by AI systems.

Sara Colombo

What would be the result of Feminist AI?

“As designers and engineers, we must recognise that our worldviews shape the systems we create. We also need to be aware of the power structures and systemic inequalities embedded in society. Acknowledging them, bringing them into discussion, and understanding their impact on AI development is essential – an approach that Feminist AI supports and fosters,” stresses Sara. “Moreover, Feminist AI is developed through participatory and inclusive methods, ensuring diverse representation at every stage. By adopting a feminist lens, AI systems can become more equitable, just, and accountable, adhering to ethical principles that prioritise human and societal well-being over profit-driven automation.”

Sara gives a few promising examples of Feminist AI in practice: “One example is the High-risk EU AI Act Toolkit (HEAT), which draws upon feminist principles to guide developers of high-risk AI systems in complying with the European AI Act. Another example is the Algorithmic Justice League, founded by Joy Buolamwini, which advocates for fairer AI policies. Another instance is SOF+IA, a chatbot powered by generative AI and developed with feminist approaches, aimed at engaging in conversations and addressing instances of violence and digital harassment that women frequently encounter on social media platforms.”

We need to decolonise AI by questioning Western-centric assumptions in algorithmic development.

Sara Colombo

What is the benefit of researching Feminist AI in a Convergence lab?

“As a multidisciplinary lab, we are constantly learning from each other and cross-pollinating our research. We also have the opportunity to make a bigger impact by reaching diverse audiences, raising awareness on this topic, and engaging a broader community, including societal stakeholders and the public,” explains Sara. “The Feminist Generative AI Lab fosters interdisciplinary collaborations between AI researchers, developers, social scientists, and ethicists to address AI’s societal impacts. Working in a Convergence Lab, we can collaborate to address shared challenges – such as the need for power redistribution in AI development and governance – while bringing our own expertise and perspectives.”

Geert-Jan agrees: “This is a very nice example of the benefits of Convergence, and in particular the Convergence on AI, Data and Digitalisation, where we bring researchers from different disciplines together, for optimal impact for society.”
