A photo of Sheeza Shah at the previous in-person Funders' Forum. Sheeza is a Brown woman wearing glasses and a hijab. She is standing next to a screen with the words "Tech justice: dismantling oppression, building futures". The profiles of a white woman and a Brown man are visible, looking on.

AI can increase injustice, or drive liberation. Our latest Funders' Forum brought together not-for-profits and funders to explore who AI is harming, and what funders can do to support justice-driven AI.

On 4 March our online Funders' Forum examined how AI is failing minoritised communities. It also explored the role funders can play in addressing and mitigating AI’s harmful impacts.

The event had 2 guiding values:

  • curiosity: learning with, and from, academics and communities across the country who are working on AI and building evidence of its impact
  • collectivism: asking how we can use these people's ideas to build community-owned solutions, and what the future of AI is for not-for-profits

Setting the context 

Sheeza Shah, one of Catalyst’s Executive Directors, outlined some of AI’s negative impacts:

  • increasing the digital skills gap
  • replacing labour
  • supporting warfare
  • perpetuating misinformation and misrepresentation
  • perpetuating existing inequity and injustice.

But AI can be used positively. For example, to: 

  • increase accessibility and inclusivity
  • empower underserved communities
  • liberate people to focus on work that matters
  • improve scientific research and healthcare
  • support new paradigms and perspectives.

We need to learn how to develop AI in responsible ways.

Ways that include people from minoritised communities, and contribute to justice and liberation.

Examples:

AI disproportionately harms minoritised communities

Shaf Choudry (Seen on Screen Institute and The Riz Test) spoke about the human cost of data collection for AI, and about AI’s environmental cost.

OpenAI’s exploitation of Kenyan workers

In 2022, people in Kenya were employed by Sama, a data training company, on behalf of OpenAI. The Kenyan workers’ role was to correct data that had been wrongly labelled, and to moderate graphic text and images. At a time when OpenAI was worth 200 million dollars, these workers were on short-term contracts and paid less than 2 dollars an hour. After a Time magazine investigation into the workers’ pay and conditions, Sama ended its contract with OpenAI. The Kenyan workers lost their jobs. They had no access to support to help them deal with the harmful and damaging images they’d seen and content they’d read.

Data centre growth at the expense of community health

AI has created a huge increase in demand for data centres (there are 22 in Chile; 16 of them have been approved since 2012). The centres have large cooling systems to manage the heat generated by AI processing units. These systems use 25 million litres of filtered water each year. This has raised concerns about water scarcity and sustainability in regions with limited water resources.

But because data centres are seen as lucrative sources of income, countries are willing to approve them, often at the expense of their citizens. In 2022, the Greater London Authority imposed a ban on new housing in Hillingdon, Ealing and Hounslow because data centres were competing with residents for water and electricity. And in Vietnam, the government asked locals to use less water because data centres needed it.

Using a range of knowledge to create AI for public good

In her presentation, Ramla Anshur (Public Interest AI) spoke about the AI industry’s structures, values and norms. She contrasted them with the main features of tech driven by a pluriversal perspective.

Dominant business model and AI

The dominant business model for AI is underpinned by capitalism, imperialism and white supremacy culture. It:

  • takes a one-size-fits-all approach to problem solving
  • causes a huge amount of socio-environmental damage
  • ignores the human labour that’s involved in collecting data.

And it creates AI that’s harmful. One example is Project Nimbus, a collaboration between Google and Amazon which provides AI cloud computing to the Israeli government.

Alternative model: a pluriversal perspective on AI

An alternative to this is tech designed from a different perspective: that of the pluriverse. Pluriversal tech is technology that embraces multiple ways of knowing, being, and designing, instead of imposing a single, dominant worldview. It challenges the idea of universal, one-size-fits-all technological solutions, and supports diverse cultural, ecological, and social perspectives. A pluriversal perspective leads to tech that’s co-designed, co-produced, and co-owned with communities.

For a more equitable alternative AI future, everyone involved in shaping and creating tech needs to work in ways that: 

  • are pluriversal: using diverse types of knowledge
  • prioritise data sovereignty: enabling communities to control their data, and avoiding the privatisation of that data
  • are eco-conscious and small: prioritising environmental well-being and using fewer resources
  • use co-production: creating tech alongside the communities that will be using it 
  • result in AI that’s collectively owned.

Developing a Feminist AI tool

Chayn is a global nonprofit that supports survivors of gender-based violence. Hera Hussain, the organisation’s founder and CEO, spoke about the feminist AI tool that Chayn has created.

Chayn asks:

  • how can tech be reclaimed as a public tool? 
  • how can the internet be made safe and fun? 
  • how can people have safe online lives?

And why should the face of AI be overwhelmingly white and male?

Advokit: AI that supports survivors

The organisation’s latest work on AI tools, Advokit, builds on Chayn’s resource on how to build a case without a lawyer. Advokit is the result of Chayn’s conversations with people in its community. It enables survivors of gender-based violence to generate self-advocacy letters more easily. For example, legal requests to remove non-consensual images posted online. The letters can be sent to the police or tech companies. 

Chayn applied its trauma-informed design principles to Advokit. Two of these principles are privacy and agency. So Advokit doesn’t store any personal data, and survivors can use its outputs however they want. Chayn has built a prototype, run a co-design workshop and is testing Advokit. It hopes to launch the tool in May 2025.
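To make the privacy and agency principles more concrete, here is a minimal, hypothetical sketch. It is not Chayn’s actual implementation, and the function and field names are assumptions: it simply shows one way a letter-generation tool can hold a survivor’s details in memory only, hand the draft back to them, and keep nothing.

```python
# Hypothetical sketch of a privacy-first letter generator (not Chayn's code).
# Personal details live only in memory for the duration of the call;
# nothing is logged or written to disk, and the survivor keeps the output.
from dataclasses import dataclass
from string import Template

LETTER_TEMPLATE = Template(
    "To whom it may concern,\n\n"
    "I am writing to request the removal of images of me that were posted "
    "to $platform without my consent on $date. The content can be found at: $url\n\n"
    "Under your policies on non-consensual intimate imagery, I ask that you "
    "remove this content and confirm in writing once you have done so.\n\n"
    "Sincerely,\n$name"
)

@dataclass
class TakedownRequest:
    name: str        # held in memory only, never persisted
    platform: str
    date: str
    url: str

def draft_takedown_letter(request: TakedownRequest) -> str:
    """Return a draft letter for the survivor to review, edit and send.

    The request is not stored anywhere: no database, no log file.
    The survivor decides what to do with the output (agency).
    """
    return LETTER_TEMPLATE.substitute(
        name=request.name,
        platform=request.platform,
        date=request.date,
        url=request.url,
    )

if __name__ == "__main__":
    letter = draft_takedown_letter(
        TakedownRequest(name="A. Survivor", platform="ExampleSite",
                        date="1 January 2025", url="https://example.com/post/123")
    )
    print(letter)  # the only copy of the output is the one handed to the user
```

A real tool would add an AI drafting step and safety checks, but the design choice the sketch illustrates is the same: the tool produces an output and forgets the input.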

Hera explained that Chayn needs more funding so it can add more languages and voice-to-text to Advokit. But identifying funders who support AI projects is challenging.

Thanks to attendees

Thanks to everyone who attended the forum:

  • Hera Hussain – Chayn
  • Ramla Anshur – Public Interest AI
  • Shaf Choudry – The Riz Test/ Seen on Screen Institute
  • Timothy Cheng – The Social Investment Consultancy
  • Otis Thomas – T.A.P. Project C.I.C.
  • Kirsty Gillan-Thomas –  Paul Hamlyn Foundation
  • Lahari Parchuri
  • Andy Curtis – Paul Hamlyn Foundation
  • Nichola Blackmore – The National Lottery Community Fund
  • Jo Morfee
  • Arfah Farooq – Muslim Tech Fest/Muslamic Makers
  • Grace Perry – London Funders 
  • Viv Ahmun

Some links shared by speakers and attendees
