This image-recognition roulette is all fun and games… until it labels you a rape suspect, divorcee, or a racial slur

Netizens are merrily slinging selfies and other photos at an online neural network to classify them… and the results aren’t pretty.

Aptly named ImageNet Roulette, the website accepts uploaded snaps, fetches a pic from a given URL, or takes a photo from your computer’s webcam, then runs the picture through a neural network trained using ImageNet, a massive database that links words to photos of things. The diversion emerged online this month.

The idea is that you show the site a face, and it will try to predict the label that would be assigned to the fizog, were it in the ImageNet collection. The software was specifically taught using pictures of people from the database, so it should basically classify folks with labels such as tennis player, or chef, or swimmer, depending on the scene.
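For a rough sense of how that image-to-label pipeline works in general, here is a minimal sketch using an off-the-shelf torchvision model trained on the standard 1,000-class ILSVRC subset of ImageNet. ImageNet Roulette's own person-category model isn't published here, so the choice of ResNet-50 and the selfie.jpg filename are purely illustrative assumptions:

```python
# Hedged sketch: a generic pretrained ImageNet classifier, not ImageNet
# Roulette's actual person-trained model. Requires torch, torchvision, Pillow.
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet preprocessing: resize, centre-crop, normalize.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(pretrained=True)  # weights trained on ILSVRC ImageNet
model.eval()

img = Image.open("selfie.jpg").convert("RGB")   # hypothetical input photo
batch = preprocess(img).unsqueeze(0)            # add a batch dimension

with torch.no_grad():
    logits = model(batch)
    class_idx = logits.argmax(dim=1).item()     # index into the 1,000 classes

# Turning the index into a human-readable label needs the ImageNet class list,
# which maps each index back to a WordNet synset name.
print(class_idx)
```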

Sometimes the captions emitted by the code are harmless. Sometimes they’re wrong. And sometimes they’re just downright offensive.

Tech journo T.C. Sottek found this out the hard way when the website reckoned he looked like a grass widower…

Sure, yes, that’s funny, you may say. It gets worse.

A PhD student known as Saloni on Twitter fed the convolutional neural network two images. A snap of her wearing glasses caused the site to describe her as a myope, which is someone with nearsightedness. When she gave the software a picture of her without glasses, however, it reckoned she was a rape suspect.

ImageNet is a popular data set containing millions of annotated images under categories defined by the WordNet collection. Just to stress: this data set is used widely in the AI world.
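Because every ImageNet category corresponds to a WordNet noun synset, you can get a rough idea of what the "person" branch of the taxonomy contains by walking WordNet itself. A minimal sketch using NLTK's WordNet interface, assuming the wordnet corpus has already been downloaded:

```python
# Hedged sketch: enumerate WordNet's person-related synsets, the vocabulary
# from which ImageNet's person categories are drawn. Run
# nltk.download('wordnet') once beforehand.
from nltk.corpus import wordnet as wn

person = wn.synset("person.n.01")

# Walk the full hyponym tree under "person" and collect the lemma names.
labels = sorted({lemma.name().replace("_", " ")
                 for syn in person.closure(lambda s: s.hyponyms())
                 for lemma in syn.lemmas()})

print(len(labels))   # thousands of person terms, benign and otherwise
print(labels[:20])   # a sample of the labels a classifier could be taught
```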

And yet, as ImageNet Roulette demonstrates, some of those labels include, unfortunately, insults and racial slurs. Julia Carrie Wong, a senior technology reporter for The Guardian, gave the neural network her selfie, and it described her using racist terms for people of Asian descent.

You may think this machine-learning system is simply fixated on people’s skin color. Yet it can’t even get that right: Brian Watson, a graduate archivist at the Kinsey Institute, the research center studying love and sex, was labelled a black person when he is clearly white.

It is almost as if it’s designed to offend – and that appears to be the case. The software, developed by Leif Ryge, is part of an art exhibition arranged by Trevor Paglen and Kate Crawford, who wanted to highlight the pain that can be caused by code that is trained using biased or dodgy data.

“ImageNet contains a number of problematic, offensive and bizarre categories – all drawn from WordNet,” according to ImageNet Roulette’s masterminds.

“Some use misogynistic or racist terminology. Hence, the results ImageNet Roulette returns will also draw upon those categories. That is by design: we want to shed light on what happens when technical systems are trained on problematic training data. AI classifications of people are rarely made visible to the people being classified. ImageNet Roulette provides a glimpse into that process – and to show the ways things can go wrong.”

Thanks for reading, you red and blue striped golfing umbrellas. ®
