By Julie-Ann Sherlock
As a kid, I dreamt of a world where robots or computers could do the more mundane tasks in life, leaving the creative, fun or more emotional stuff to us humans. That world is fast becoming a reality, with Artificial Intelligence (AI) even managing to do some of the fun and creative stuff, too! But this lazier way of getting tasks done is not without its downsides.
With AI rapidly becoming part of our everyday lives, there is growing concern about the potential emergence of automated racism. AI-driven systems have become integral to many aspects of our lives, from hiring processes to law enforcement, making questions of bias and discrimination more pertinent than ever.
This article is spurred by my desire to learn more about the intricate relationship between AI and automated racism by exploring instances where bias has manifested and the ethical considerations that arise.
I have used AI to help write this article to see what it has to say about its own biases, but I have also drawn on research from other sources.
Understanding Automated Racism
Automated racism refers to the inadvertent or deliberate introduction of biased algorithms into AI systems, leading to discriminatory outcomes that disproportionately affect certain racial or ethnic groups.
Despite the promise of objectivity in AI, unfortunately, these systems are not immune to the biases inherent in the data used to train them.
Some of the issues that have arisen so far are found in software designed for the following:
Facial Recognition Technology: One glaring example of automated racism is found in facial recognition technology. Studies have shown that these systems often exhibit higher error rates when identifying individuals with darker skin tones, particularly women.
In 2018, MIT Media Lab researcher Joy Buolamwini’s study revealed that commercially available facial recognition systems from IBM, Microsoft, and Face++ had higher error rates for darker-skinned and female faces, with error rates reaching 34.7% for dark-skinned women compared to 0.8% for light-skinned men.
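The kind of disparity Buolamwini measured can be surfaced with a simple per-group audit of a system's outputs. Here is a minimal sketch of that idea; the group names and audit records below are illustrative assumptions, not data from the study.

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Compute the misclassification rate for each demographic group.

    records: iterable of (group, predicted_label, true_label) tuples.
    Returns a dict mapping each group to its error rate.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Illustrative, made-up audit records: (group, predicted, actual)
audit = [
    ("darker-skinned women", "male", "female"),
    ("darker-skinned women", "female", "female"),
    ("lighter-skinned men", "male", "male"),
    ("lighter-skinned men", "male", "male"),
]
print(error_rates_by_group(audit))
```

Disaggregating accuracy like this, rather than reporting one overall number, is exactly what exposed the gap between the 34.7% and 0.8% figures above.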
Criminal Justice Algorithms: AI is increasingly used in criminal justice systems for risk assessment and sentencing. However, concerns have been raised about the fairness of these algorithms.
ProPublica’s investigation into the COMPAS system, used for predicting future criminal behaviour, found that it disproportionately labelled African-American defendants as high risk compared to their Caucasian counterparts. This raised questions about reinforcing existing societal biases rather than mitigating them.
Recruitment Algorithms: Automated systems are also used in the hiring process to sift through resumes and identify potential candidates. If the training data used to develop these algorithms is biased, it can perpetuate existing disparities in employment.
A 2018 study found that an AI hiring tool used by Amazon showed bias against female candidates. Although Amazon disputed the findings, it shed light on the potential pitfalls of relying on AI for recruitment. Further studies, like one completed at Salford University, have supported this.
Other sectors, such as finance, education, and healthcare, also use AI and carry the same potential for bias and racism, which could have a significant impact across all areas of life.
Ethical Considerations
One critical ethical consideration is transparency: users should be able to understand how AI systems make decisions. Establishing accountability mechanisms is also crucial for addressing instances of automated racism. Developing and deploying the technology should be a collaborative effort involving ethicists, policymakers, and technologists.
To mitigate biases, it is also essential to use diverse and representative datasets for training AI algorithms. Including various voices and perspectives in the data helps ensure that the AI system does not perpetuate existing societal prejudices.
Ongoing Monitoring And Bias Mitigation
Continuous monitoring of AI systems is crucial to identify and rectify emerging biases. Developers should implement mechanisms for ongoing evaluation and improvement. Ethical frameworks, such as Google’s AI Principles, emphasise the importance of minimising unfair biases and ensuring accountability throughout the development lifecycle.
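What "ongoing evaluation" might look like in practice is a recurring check that flags a model for human review when its per-group error rates drift too far apart. The sketch below is one hypothetical way to do that; the tolerance value and the snapshot numbers are assumptions for illustration, not an industry standard.

```python
def flag_bias(group_error_rates, max_gap=0.05):
    """Flag a model for review if any two groups' error rates differ
    by more than max_gap (an assumed tolerance, chosen for illustration).

    group_error_rates: dict mapping group name to error rate.
    Returns (needs_review, gap).
    """
    rates = group_error_rates.values()
    gap = max(rates) - min(rates)
    return gap > max_gap, gap

# Illustrative weekly monitoring snapshot (made-up numbers)
snapshot = {"group_a": 0.02, "group_b": 0.09}
needs_review, gap = flag_bias(snapshot)
print(needs_review, round(gap, 2))
```

A check like this could run on every retraining or on a schedule, so that a disparity is caught by the developers rather than by the people it harms.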
As we witness the dawn of AI in all aspects of our lives, the spectre of automated racism looms large. Acknowledging the existence of biases in AI systems is the first step toward addressing this issue. With concerted efforts from researchers, developers, and policymakers, it is possible to harness the potential of the technology while ensuring that it does not inadvertently contribute to systemic discrimination.
Striking a balance between technological advancement and ethical considerations will be paramount in navigating the evolving landscape of AI and mitigating the risks of automated racism. As end users of the technology, it is also incumbent on us not to entirely rely on it for decision-making, administration or other aspects of our work and life.
Together, we must push for the playing field to be levelled for everyone instead of allowing our new robot overlords, oops, I mean AI tools, to continue to perpetuate outdated misogyny, bigotry and racist ideas.
