
Facial recognition AI can’t identify trans and non-binary people

Amrita Khalid

10/16/2019

Facial-recognition software from major tech companies appears ill-equipped to classify transgender and non-binary people, according to new research. A recent study by computer-science researchers at the University of Colorado Boulder found that major AI-based facial analysis tools—including Amazon's Rekognition, IBM's Watson, Microsoft's Azure, and Clarifai—routinely misidentified non-cisgender people.

The researchers gathered 2,450 images of faces from Instagram, searching under the hashtags #woman, #man, #transwoman, #transman, #agender, #agenderqueer, and #nonbinary. They excluded photos that showed more than one person, as well as photos in which 75% or more of the subject's face was not visible. The remaining images were divided by hashtag, yielding 350 images per group. The researchers then ran each group through the facial analysis tools of the four companies.

The systems were most accurate with cisgender men and women, who on average were correctly classified 98% of the time. Trans men, by contrast, were wrongly categorized roughly 30% of the time. The tools fared far worse with non-binary or genderqueer people: because the systems only output a binary "male" or "female" label, they misclassified every one of them.
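The study's core metric can be pictured as a per-group error rate: for each hashtag group, compare the binary label a facial-analysis API returns against the gender identity implied by the hashtag. The sketch below is illustrative only — the function name and the sample records are assumptions, not the study's code or data — but it shows why a binary-only classifier is guaranteed a 100% error rate on non-binary labels.

```python
from collections import defaultdict

def misclassification_rates(records):
    """records: iterable of (hashtag_group, expected_label, predicted_label).

    Returns {group: fraction of predictions that did not match the
    expected label}."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, expected, predicted in records:
        totals[group] += 1
        if predicted != expected:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy records (hypothetical): a tool that only ever outputs "male" or
# "female" can never match a "non-binary" expected label, so that
# group's error rate is necessarily 1.0.
sample = [
    ("#man", "male", "male"),
    ("#woman", "female", "female"),
    ("#transman", "male", "female"),          # misgendered
    ("#nonbinary", "non-binary", "female"),   # unavoidable error
]
rates = misclassification_rates(sample)
```

With these toy records, `rates["#nonbinary"]` comes out to 1.0 regardless of which binary label the tool picks — the failure is structural, not statistical.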

The rising use of facial recognition by law enforcement, immigration services, banks, and other institutions has provoked fears that such tools will be used to cause harm. There's a growing body of evidence that the nascent technology struggles with both racial and gender bias. A January 2019 study from the MIT Media Lab found that Amazon's Rekognition tool misidentified darker-skinned women as men one-third of the time. The software even mistook white women for men at higher rates than it did white men. While IBM's and Microsoft's programs were found to be more accurate than Amazon's, researchers observed an overall trend: male subjects were labeled correctly more often than female subjects, and darker skin drew higher error rates than lighter skin.

At present, there’s very little research on how facial analysis tools work with gender non-conforming individuals. “We knew that people of minoritized gender identities—so people who are trans, people who are non-binary—were very concerned about this technology, but we didn’t actually have any empirical evidence about the misclassification rates for that group of people,” Morgan Klaus Scheuerman, a doctoral student in the information-science department of the University of Colorado Boulder, said in a video about the study.

The researchers believe that the algorithms rely on outdated gender stereotypes, which further increases their error rates. Half of the systems misclassified Scheuerman, who is male and has long hair, as a woman. Such inconsistencies were observed across the board: IBM's Watson classified a photo of a man dressed in drag as female, for example, while Microsoft's Azure classified him as male.

None of the four companies whose products were tested has commented on the study's findings. Quartz reached out to each of them for comment and will update this story as necessary.
