This artist is creating a “belligerent algorithm” to expose AI bias

January 07, 2021


The algorithms underpinning artificial intelligence (AI) systems have increasingly been shown to reflect the biases of their designers. Artist and game maker A.M. Darke is creating a system based on their own prejudices to highlight the problem of AI bias, calling for programmers to be held accountable for algorithms that govern everything from credit ratings to criminal convictions.

The art project has been commissioned by the Open Data Institute (ODI), which has made Darke its research and development artist-in-residence. Darke is writing an algorithm that is overtly biased against the demographic predominantly designing the algorithms influencing our lives: white men. The ongoing project seeks to flip the usual narrative, in which the inherent biases of this demographic are unwittingly reflected in the AI systems they build.

Such algorithms have led to real-world consequences for marginalised groups. In January 2020, an African American man was wrongfully arrested after a facial recognition system falsely matched his photo with security footage of a shoplifter. In 2018, Amazon ditched a recruitment algorithm that discriminated against women because it had been trained on datasets of CVs predominantly submitted by men.

“I’m often trying to bridge the gap between how a dominant culture views black culture and make this connection and use that to interrogate a certain kind of politics of oppression,” Darke said, speaking at the ODI’s recent Data Futures event. “I think with this commission it was different because I was talking to people who already were aware of the problems.”

Darke aims to use their art to make the audience feel complicit and responsible, in the hope that they are spurred to take action. Rather than focusing on who is harmed by the misuse of data and over-reliance on algorithms, they wanted to highlight the people who help build the oppressive systems and structures that lead to AI bias.

“In my work, I try to avoid taking marginalised experiences and then serving them on a platter for a more privileged audience,” said Darke, who is assistant professor of digital arts and new media, and critical race and ethnic studies at UC Santa Cruz.

“We tend to feel a certain amount of guilt and then we feel good about ourselves for feeling bad, and then we don’t do anything. So highlighting people who are building these systems in a ‘just following orders’ way, not necessarily these large tech icons, who we think of as all-powerful, but the engineers and people working in content moderation, people working on policy, just everyday people who are building these structures that are deeply harmful.”
