Study: YouTube Discriminates Against Videos Tagged ‘Gay’ or ‘Lesbian’
It’s only the most recent time the platform has been accused of inadvertent anti-LGBTQ+ bias.
October 01 2019 12:31 PM EST
New research has shed light on YouTube's mysterious rating process, showing that the company's artificial intelligence appears to target LGBTQ+-related terms, hiding the videos that use them from search results and subscription feeds and blocking them from monetization.
Although the company doesn't have a predetermined list of banned words, an investigation suggests that YouTube instead uses bots that have learned to compile their own lists of keywords that contribute to videos being "hidden" on the site. The research was conducted by a group that includes Sealow, CEO of research firm Ocelot AI; Andrew, creator of YouTube Analyzed; and the Nerd City YouTuber, who goes by Een.
The team tested more than 10,000 terms from various dictionary lists, uploading videos of between one and two seconds, each tagged with a different set of words, to probe YouTube's automatic rating system. They then compiled an Excel spreadsheet listing the words found to be safe and those that triggered an "advertiser-unfriendly" flag.
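The researchers haven't published their tooling, but the loop they describe, uploading one short clip per term and recording whether it gets flagged, could be scripted roughly as in the sketch below. The helpers upload_test_video and get_monetization_status are hypothetical stand-ins for whatever upload and status-checking tools the team actually used, not real YouTube API calls (the public YouTube Data API does not expose monetization status).

```python
import csv

def upload_test_video(term: str) -> str:
    """Hypothetical: upload a 1-2 second clip whose title/tags contain only `term`; return its video ID."""
    raise NotImplementedError

def get_monetization_status(video_id: str) -> str:
    """Hypothetical: return 'monetized' or 'limited' (the yellow 'advertiser-unfriendly' icon)."""
    raise NotImplementedError

def run_wordlist_test(terms: list[str], out_path: str = "results.csv") -> None:
    # One row per tested term: the term and the rating it received.
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["term", "status"])
        for term in terms:
            video_id = upload_test_video(term)
            writer.writerow([term, get_monetization_status(video_id)])

# Example: run_wordlist_test(["happy", "gay", "lesbian"])
```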
According to their tests, YouTube's artificial intelligence tended to hide or demonetize videos with metadata including words like "gay" and "lesbian," but if those terms were changed to "happy," the videos were left alone.
In response, tech website The Verge reached out to YouTube for comment. A spokesperson for the platform said that the site doesn't use a list of LGBTQ+ related words to trigger demonetization, but that's not what's at issue. It's generally accepted that there's no list of blocked words. Instead, the problem appears to be artificial intelligence that's learned to discriminate.
"We're proud of the incredible LGBTQ+ voices on our platform and take concerns like these very seriously," the spokesperson toldThe Verge. "Sometimes our systems get it wrong, which is why we've encouraged creators to appeal. Successful appeals ensure that our systems are updated to get better and better."
But that's been the company line for years now: The system simply needs more time to improve. In a video posted earlier this year, CEO Susan Wojcicki claimed there shouldn't be any automatic demonetization and said the company works "incredibly hard to make sure that when our machines learn something," the decisions the algorithm makes "are fair."
There's been no estimate for when the algorithm will stop discriminating against LGBTQ+ content.
YouTube also says that manual review of blocked videos will result in an improved algorithm. But the researchers point out that manual reviewers are often low-paid and may reside in countries where homosexuality is illegal, so their standards may skew toward agreeing with the company's artificial intelligence.
"This is not a matter of LGBTQ+ personalities being demonetized for something that everyone else would also be demonetized for, such as sex or tragedy," Sealow says in a video outlining his findings. "This is LGBTQ+ terminology like 'gay' and 'lesbian' being the sole reason a video is demonetized despite the context."
YouTubers have responded to the faulty rating system in a variety of ways. Some use special code to avoid running afoul of the algorithm. Others have filed suit against the company, alleging that it unfairly discriminates against LGBTQ+ content. (As Out previously reported, the same law firm representing those YouTubers also represents a conservative group suing the company for discrimination.)
The researchers made it clear that they don't believe the company intentionally discriminates against LGBTQ+ content. Instead, they say it's a problem caused by overreliance on algorithms that are allowed to discriminate.