This can’t end well. A recent New York Times article that explained how fast fashion brands are trying to avoid making insensitive and offensive material revealed that mega-retailer Zara has a new plan for combating racism, thievery and cultural appropriation in its designs. To avoid future gaffes, Zara told the Times it will rely on an algorithm to "scan designs for insensitive or offensive features."
But this strategy for achieving increased sensitivity is faulty and troubling.
In recent years, Zara has been accused of appropriating designs from non-European cultures, like a Somali Baati dress and a South Asian lungi skirt, as well as selling outright offensive items like a white supremacist “Pepe the Frog” skirt and a striped top that looked suspiciously like a concentration camp uniform, complete with a yellow Star of David.
Zara isn’t alone in weathering such scandals, or seeking solutions when they happen. A few weeks ago, H&M experienced a backlash after it dressed a young black model in a sweatshirt with the phrase “coolest monkey in the jungle” printed across the front. In response to criticism, H&M hired a new global leader for diversity and inclusiveness.
Fast fashion is literally fast: stores like Zara can turn designs into in-store products in just a few weeks. But limited oversight of such a massive volume of product lets major mistakes slip through. “When you have two hours to approve a line versus two months, things go unnoticed,” Adheer Bahulkar, a retail expert, told the Times.
Zara’s choice to rely on an algorithm to scan for potentially offensive content is a step in the right direction, but algorithms aren’t always reliable: remember when Google Photos labeled African American users “gorillas”? Or when a Nikon camera insisted that Asian subjects of photos were blinking? AI technology does not have the best track record, and has a pattern of reinforcing existing biases.
Kate Crawford, an AI researcher at Microsoft, and Meredith Whittaker, a researcher at Google, told the MIT Technology Review that bias may exist in algorithmic products in all sectors. “It’s still early days for understanding algorithmic bias,” they said. “Just this year we’ve seen more systems that have issues, and these are just the ones that have been investigated.”
Cathy O’Neil, a mathematician and the author of Weapons of Math Destruction, similarly told the MIT Technology Review, “Algorithms replace human processes, but they’re not held to the same standards. People trust them too much.”
The decision to rely on algorithms to prevent another PR gaffe seems like an inexpensive way to address a problem that actual human beings should be responsible for. As Blavity pointed out, Zara would be better off promising to increase diversity in its design and production teams. The company did hire a team of diversity officers in 2016 after multiple embarrassing, offensive products were released, and Zara now requires new employees to go through inclusion training (perhaps in response to reports that salespeople discriminated against non-white customers). But to truly respond to accusations of insensitivity, the company must go beyond an algorithm and a mandatory one-time training for workers: it must seek to employ an inclusive and diverse upper-level staff, and disrupt the all-white, all-European makeup of its board of directors.