
Here’s what’s really going on with that study saying AI can detect your sexual orientation


Last week, scientists made headlines around the world when news broke of an artificial intelligence (AI) that had been trained to determine people’s sexual orientation from facial images more accurately than humans.

According to the study, when looking at photos this neural network could correctly distinguish between gay and heterosexual men 81 percent of the time (and between gay and heterosexual women 74 percent of the time). It didn't take long before news of the findings provoked an uproar.

 

On Friday, sex and gender diversity groups GLAAD and the Human Rights Campaign (HRC) issued a joint statement decrying what they called “dangerous and flawed research that could cause harm to LGBTQ people around the world”.

The AI, which was trained by researchers from Stanford University on more than 35,000 public images of men and women sourced from an American dating site, used a predictive model called logistic regression to classify their sexual orientation (which users had also made public on the site) based on their facial features.

This included fixed features, such as the shape of a person’s nose, as well as transient features, such as grooming style.
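In broad strokes, the approach the study describes boils down to reducing each face to a numeric feature vector and fitting a logistic regression classifier over those vectors. The sketch below illustrates that general technique with scikit-learn; the feature values and labels are random placeholders standing in for facial measurements, not the study's actual data or model:

```python
# Sketch: logistic regression over facial-feature vectors.
# The features here are synthetic placeholders for the kind of numeric
# descriptors a face-recognition network might extract (nose shape,
# grooming cues, etc.) -- they are NOT the study's real inputs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# 200 synthetic "faces", each reduced to a 5-number feature vector,
# with made-up binary labels for the two classes the study assumed.
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# For a new feature vector, the model outputs a class probability
# rather than a hard yes/no answer.
new_face = rng.normal(size=(1, 5))
prob = model.predict_proba(new_face)[0, 1]
print(f"Predicted probability of class 1: {prob:.2f}")
```

Note that logistic regression only finds whatever statistical pattern separates the two labels in the training data, which is central to the critics' point below: the pattern it learns is a property of the sample, not proof of what the labels mean.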

In the researchers' testing on a separate set of images the algorithm hadn't seen before, the neural network outperformed human judges attempting the same task; the human judges scored 61 percent for men and 54 percent for women.

When the algorithm was presented with five facial images of each person, it became even more accurate, the researchers claimed, getting it right 91 percent of the time with men and 83 percent with women.
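The accuracy boost from seeing five photos per person is consistent with a standard trick: average the per-image probabilities so that the noise of any single shot matters less. A minimal, hypothetical illustration (the probabilities below are invented, and this is not the authors' actual aggregation code):

```python
import numpy as np

# Hypothetical per-image probabilities for one person across five photos.
# Averaging them yields a steadier estimate than any single image.
per_image_probs = np.array([0.62, 0.71, 0.55, 0.68, 0.74])
combined = per_image_probs.mean()
prediction = combined > 0.5
print(f"Combined probability: {combined:.2f} -> class {int(prediction)}")
# Combined probability: 0.66 -> class 1
```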

It's worth pointing out that the sample of images the researchers used had some definite limitations. For starters, they're all profile shots taken from a dating site, so they're not exactly typical images of the individuals involved, and the study only included images of white people aged between 18 and 40.

 

Taking these kinds of factors into account, GLAAD's Chief Digital Officer Jim Halloran says any claim that this AI can determine people's sexual identity is grossly flawed.

“Technology cannot identify someone’s sexual orientation. What their technology can recognise is a pattern that found a small subset of out white gay and lesbian people on dating sites who look similar,” Halloran says.

“This research isn’t science or news, but it’s a description of beauty standards on dating sites that ignores huge segments of the LGBTQ community, including people of colour, transgender people, older individuals, and other LGBTQ people who don’t want to post photos on dating sites.”

GLAAD and the HRC further pointed out that the study, which has not yet been published, didn’t verify people’s information, assumed only two sexual orientations and didn’t include data on bisexual individuals, and wasn’t peer-reviewed.

But more damningly, the organisations said that this kind of “junk science” could even be threatening in the wrong hands.

“This is dangerously bad information that will likely be taken out of context,” explains HRC’s Director of Public Education and Research, Ashland Johnson.

 

“Imagine for a moment the potential consequences if this flawed research were used to support a brutal regime’s efforts to identify and/or persecute people they believed to be gay.”

In an explanatory note accompanying their paper – which the researchers claim has in fact been peer-reviewed and is due to be published in the Journal of Personality and Social Psychology – the authors acknowledge the limitations with regard to their sample, but maintain that they “did not build a privacy-invading tool”.

“We studied existing technologies, already widely used by companies and governments, to see whether they present a risk to the privacy of LGBTQ individuals,” they explain in the note.

In a separate response to GLAAD and the HRC’s denunciation of their study, the researchers have also hit back, writing “[it] really saddens us that the LGBTQ rights groups, HRC and GLAAD, who strived for so many years to protect the rights of the oppressed, are now engaged in a smear campaign”.

Regardless, the media controversy may have damaged the study's publication prospects.

Shinobu Kitayama, an editor with the Journal of Personality and Social Psychology, has just disclosed that the paper is now being re-examined in an ethical review, which means it could be weeks before we know the status of the study.

We’ll just have to wait until the results of the review are known to see what the next chapter is for this provocative research – but it’s unlikely the controversy will disappear any time soon.

The findings, for now at least, are due to be reported in an upcoming edition of the Journal of Personality and Social Psychology.

 


