A sketch of hundreds of stick figures. In the centre is a small group with the number 1 in red. Around them are more with the number 10 in red. Around them are many, many more with the number 100 in red.

We asked the Catalyst community: “How do you think AI will contribute (or is contributing) to injustice and inequity in the world?”

This is the first of 2 articles. Article 2 will share views on how AI can contribute to a fairer, more just world. 

It’s understandable if you feel concerned about the rise of AI software. 

It’s developing faster than any software in history, fuelled by its own intelligence and the ‘technofeudal’ race to use data in more and more novel ways. Regulation can’t keep up.

It has the potential to do jobs people currently get paid for, to repeat bias and discrimination, and to be used to scale harmful actions - whether on social media or in physical warfare. These things are already happening, by the way.

None of us are in control of it. But we have power together. So we’ve crowdsourced this article to bring people’s views together. This can help us develop a story that has a better chance of serving a just and regenerative future for all life. 

AI is already contributing to injustice

Here are some examples:

But is AI the problem?

Is AI responsible? Here are some views on where the problem lies.

AI isn’t the problem, the business models behind it are the problem

"As with all tech, the main problem is the business models behind it, what their incentives are and whose interests they're serving. Under the current system, AI will continue to be another tool for profit-making, extraction, oppression and control.”

“The rapid pace at which AI is being implemented into research, innovation and development sidelines ethical considerations - see Viral Justice by Ruha Benjamin.”

AI isn’t the problem, the socio-economic-political and cultural forces influencing its development are the problem

What will the socio-economic-political and cultural forces around this set of technologies contribute to? Will it be freedom and justice or will it be oppression and destruction?

“It's not techno-determinist to recognise that there are powerful forces driving the development, adoption and use of AI, from many different perspectives and with many different motivations. There are many ways in which AI will contribute to injustice and inequity.” 

“Developed countries (white majority) have more access to technology (development). This shifts any AI database's configuration.” 

"AI is heavily based on databases. Because these databases are run and most of the time generated by privileged people, AI reads biased information. AI-generated content ends up using elitist vocabulary or only white people’s views.”

AI isn’t the problem, ‘technofeudalism’ is the problem

The widening access to incredibly powerful tools is also leading to a smaller number of commercial organisations holding huge power, wealth and influence.

“AI will exacerbate and accelerate the trends already happening in society, whatever they are - which includes inequity, where it exists. Yanis Varoufakis makes a great case for how we are now living in an age of 'technofeudalism', where a small number of big tech companies essentially control the world.” 

“If left to a small number of corporations who are currently battling for market domination, AI will contribute to injustice and inequity by reinforcing - and amplifying - existing systemic dynamics."

“Data is the new oil etc etc. it's the power imbalances that are the issue there, as ever, rather than the tech itself.”

AI isn’t the problem, biased input is the problem

Hidden biases within training data and algorithms will reinforce & exacerbate existing inequalities and power dynamics.

“If it's built on biased information, all it will do is continue to perpetuate that bias, making it even worse."

“The hidden biases within training data and algorithms reinforce & exacerbate existing inequalities and power dynamics. And make them even more opaque. This makes it harder for organisations to challenge/question the decisions made by systems they can't interrogate or understand.” 

“Technology amplifies and proliferates biases, often rapidly, before we can catch it. One prominent example of this is facial recognition software.”

Example: facial recognition software: "Facial recognition software uses AI to analyse images of faces and create a facial signature, which is a map of a person's facial features. The software then compares the facial signature to a database of images to determine if there is a match. When it was first released, this software did not recognise skin tones that were not white. It was widely adopted by law enforcement agencies in the US. This led to the misidentification of many individuals.” 

How AI in the context of ‘technofeudalism’ could create more injustice

Perpetuate existing inequity and injustice

“AI is nothing other than a tool to reproduce society's settings. Injustices and inequalities are part of this.”

“If there aren't restrictions and regulations, the lack of diversity and white supremacy will continue to reign in the tech world, and AI will be a tool to help perpetuate this. Especially since this is a tool that requires critical thinking to assess the results, and more and more people are having less of this.”

“Looking at systemic change, and what it takes to genuinely shift ways of being, working, communicating and looking at the outputs of various AI tools I have used, I think there is more likelihood of the current systems being reinforced through using AI.” 

“Unequal access to this technology widens gaps perpetuated by capitalism and imperialism.”

“Designing new ways of working with AI without designing with communities and civic organisations reinvents the existing power in systems.”

“Data consumption is wrecking the environment in quite an alarming way and no-one seems to want to talk about it.” 

Example: crime modelling: “AI-based pre-crime models of monitoring will reinforce injustices in how countries police their populations, leading to more police brutality targeted at racialised and marginalised groups.”

Increasing the digital skills gap

"I think access and use of AI tools will accelerate the digital skills gap and digital accessibility gap that already exists. Those already literate and confident in digital will more easily transition to using AI tools, increasing inequity.” 

“AI skills have all the usual representation problems of most tech. There's still a huge gender divide in computer science grads the world over, for a start.”

“My worry is that AI will enlarge the digital divide between small and large charities. In the latest Charity Digital Skills Report we saw that whilst half of small charities (53%) are using AI tools, this is much less in comparison to three quarters (78%) of large charities.” 

Replacing labour

“AI can replace labour in ways that will see mass unemployment. But this really is a choice of global governance rather than AI as many don't need to be doing the labour they currently are doing.”

“In the worst case, AI will be used in the short term to add shareholder value. It will also replace jobs, damage local economies, automate already poor service delivery and dramatically increase carbon in the atmosphere.”

By supporting warfare

“AI's role in weapons manufacturing makes militarised imperialism stronger and quicker. We can see this in the creation of more lethal weapons used against Palestinians in Palestine. UK examples of this include BAE Systems.” 

Example: scaling warfare: “The Israeli military uses an AI-assisted system called Lavender to identify targets in Gaza. Lavender AI is what comes to mind when I think about AI's contribution to current injustices.”

Perpetuating misinformation and misrepresentation

"AI is trained on info that's already available on the internet. The problem is that the info that already exists on the internet is biased and incomplete. It excludes a lot of voices and experiences from people who don't have access to the internet (like older generations or people living at locations with no internet).”

“If we are talking about the content it creates, I think it will be extracted from mainstream views and might neglect the views of less influential/powerful entities/vulnerable communities, keeping in mind people who own these technologies are the most powerful and their knowledge is the mainstream knowledge that is respected.”

Example: AI chatbots: “AI Chatbots also have a huge part to play in the spread of misinformation on the internet, something as gentle as image searching 'baby peacock' or as huge as its use in the influence on elections across the globe.” 

How might AI contribute to justice and fairness in the world?

Nothing is certain. The future remains unmanifest. Read the network's views on the positive potential of AI.

---

Thanks to the 12 community members who contributed their views.

Image credit: "Participation Inequality 700 70 7" by ChristopherA is licensed under CC BY 2.0.

