Stanford University researchers are responding to national LGBTQ groups that have called their study on using artificial intelligence to determine sexual orientation “dangerous and flawed.”
In their report released this month, “Deep neural networks can detect sexual orientation from faces,” Michal Kosinski, an assistant professor of organizational behavior at Stanford, and Yilun Wang, who studies computer science at the school, extracted features from more than 35,000 facial images posted to a dating website and fed them into a logistic regression model trained to classify sexual orientation. (The pair established people’s orientation from the gender of the partners they were seeking.)
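The classification step the report describes — facial features fed into a logistic regression — can be sketched roughly as follows. This is an illustrative toy only: the feature vectors, labels, and dimensions below are synthetic stand-ins generated at random, not the study’s data or its actual feature extractor.

```python
# Illustrative sketch: logistic regression over per-face feature vectors.
# All data here is synthetic; nothing is drawn from the study itself.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each face image is summarized by a 128-dimensional feature vector.
n, dim = 1000, 128
X = rng.normal(size=(n, dim))
# Synthetic binary labels correlated with the first feature, plus noise.
y = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The model yields a probability per image; thresholding gives a binary
# prediction, and accuracy is the fraction of held-out cases predicted
# correctly — the same kind of figure the researchers report.
acc = clf.score(X_test, y_test)
print(f"held-out accuracy: {acc:.2f}")
```

The reported 81 percent and 74 percent figures are accuracies of exactly this kind: the share of held-out image pairs or cases the classifier gets right.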
“Given a single facial image, a classifier could correctly distinguish between gay and heterosexual men in 81 percent of cases, and in 74 percent of cases for women,” Kosinski and Wang said in a summary of their findings.
In a Sept. 8 statement, the national groups GLAAD and Human Rights Campaign blasted the research, adding that it “could cause harm to LGBTQ people around the world.”
Kosinski and Wang responded through a statement of their own, saying the groups “do a great disservice to the LGBTQ community by dismissing our results outright without properly assessing the science behind it, and hurt the mission of the great organizations that they represent.”
In their research, the pair said, “Human judges achieved much lower accuracy” than the algorithms, with “61 percent for men and 54 percent for women.”
When five facial images for each person were used, “the accuracy of the algorithm increased to 91 percent and 83 percent, respectively,” they said.
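That gain from multiple images is consistent with simple score aggregation: if each photo yields a noisy probability estimate, averaging the per-image scores before thresholding reduces the noise. A minimal sketch of one plausible aggregation — not necessarily the study’s exact method, and with per-image scores invented for illustration:

```python
import numpy as np

# Hypothetical per-image probabilities from a classifier for one person;
# these five numbers are made up for illustration.
per_image_scores = np.array([0.62, 0.71, 0.55, 0.68, 0.74])

# Aggregate by averaging the scores, then threshold at 0.5.
mean_score = per_image_scores.mean()
prediction = int(mean_score > 0.5)
print(f"mean score: {mean_score:.2f}, prediction: {prediction}")
```

Individual photos vary in pose, lighting, and expression; averaging over five of them dampens that per-image noise, which is why accuracy rises.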
They added that among other findings, “Composite faces” that were examined “suggest that gay men had larger foreheads than heterosexual men, while lesbians had smaller foreheads than heterosexual women.”
Kosinski and Wang weren’t available for comment for this story, but in their statement, they acknowledged: “Given that companies and governments are increasingly using computer vision algorithms to detect people’s intimate traits, our findings expose a threat to the privacy and safety of gay men and women.”
Among other concerns, GLAAD and HRC said that the research hadn’t been peer reviewed and that information from the online profiles hadn’t been independently verified.
“This research isn’t science or news, but it’s a description of beauty standards on dating sites that ignores huge segments of the LGBTQ community, including people of color, transgender people, older individuals, and other LGBTQ people who don’t want to post photos on dating sites,” Jim Halloran, GLAAD’s chief digital officer, said in a statement. “At a time where minority groups are being targeted, these reckless findings could serve as a weapon to harm both heterosexuals who are inaccurately outed, as well as gay and lesbian people who are in situations where coming out is dangerous.”
In response to such criticism, Kosinski and Wang wrote that the study “was peer reviewed and accepted for publication in the Journal of Personality and Social Psychology, the leading academic journal in psychology. … In addition, before it was sent for a formal peer review, the manuscript was reviewed by over a dozen experts in the fields of sexuality, psychology, and artificial intelligence.”
They also addressed GLAAD and HRC’s complaint about information not being independently verified.
“We put much effort into ascertaining that our data was as valid as possible, and there are no reasons to believe that there are gross inaccuracies,” the researchers said. “Our approach was no different than in other similar studies. More than a dozen scholars who have reviewed this work did not see any issues in how we handled those variables.”
In a call with local media outlet Bay Area Reporter, Jessica Stern, executive director of OutRight Action International, said she wasn’t aware of any governments using artificial intelligence algorithms like the ones in the study, but “governments around the world use arbitrary standards to target LGBTI people already” and use social media and dating sites “to trap people.”
Stern, who hadn’t read the Stanford study, said, “I don’t want to see any more tools put in the hands of governments that could use so-called irrefutable evidence to target LGBTI people.”
Kosinski and Wang indicated they’re as concerned as anyone else that “widely available tools can be used to detect sexual orientation from images of people’s faces,” which “may already” be taking place.
“Our findings could be wrong,” they said. “In fact, despite evidence to the contrary, we hope that we are wrong. … If our findings are wrong, we merely raised a false alarm. However, if our results are correct, GLAAD and HRC representatives’ knee-jerk dismissal of the scientific findings puts at risk the very people for whom their organizations strive to advocate.”
This article was originally published Sept. 19, 2017, in the Bay Area Reporter.