For those of us living in less-than-tolerant real-world communities, digital spaces tend to be a safer alternative for finding connection and enjoying an ever-growing set of queer-inclusive media. These include movies and podcasts, as well as an expanding vocabulary that seeks to reflect lived experiences among LGBTQ+ people.
But the truth is that the internet provides only a relative level of safety, especially when it comes to social media platforms.
In a recent Anti-Defamation League survey on LGBTQ+ safety on social media, 66% of LGBTQ+ respondents reported experiencing hate-based online harassment — a far cry from the 38% of non-LGBTQ+ respondents who reported the same. Just over half of LGBTQ+ respondents also reported severe forms of harassment, including stalking, doxing, physical threats, and sexual harassment.
These findings are echoed by another study, this one conducted by GLAAD, which focuses on the policies and practices of five major social media platforms: Instagram, Facebook, Twitter, YouTube, and TikTok. The study highlights what too many of us have had to learn the hard way: social media platforms still aren't doing enough to keep their LGBTQ+ users safe.
Now in its second year, GLAAD’s Social Media Safety Index (SMSI) provides a Platform Scorecard developed alongside Goodwin Simon Strategic Research, a public opinion research firm, and Ranking Digital Rights, an independent research program on the internet and human rights. The report also has an advisory committee composed of researchers, reporters, and queer media personalities.
Across 12 research indicators that relate to LGBTQ+ safety, privacy, and freedom of expression, all five platforms flunked, with the highest score being 48 out of a possible 100.
LGBTQ+ Safety on Social Media: Key Findings on Each Platform
The SMSI report found that social media platforms still have much to do in terms of content moderation and enforcement — both in curbing the spread of anti-LGBTQ+ hateful content and in ending the over-moderation of LGBTQ+ users. Their algorithms also tend to be harmful and polarizing, a problem compounded by a lack of transparency and accountability across these platforms.
These problems are even worse for LGBTQ+ folks who belong to other marginalized communities, such as people with disabilities, women, people of color, and members of historically marginalized faiths.
Overall, the report found that the five major social media platforms are failing — both in the study scorecard and in protecting the LGBTQ+ community.
Instagram
Score: 48 out of 100
The leader of the pack with a mere 48 out of 100, Instagram has a comprehensive protected-groups policy that shields users from threats, violence, hate speech, and harassment on the basis of one's sexual orientation, gender identity, and expression (or SOGIE).
It also provides some information on how users can opt out of being shown content based on the SOGIE they state on their profiles or the SOGIE the platform has inferred for them. (Ideally, being shown content based on one’s SOGIE should be on an opt-in basis only.)
Meta, which owns both Instagram and Facebook, also disclosed information related to its Civil Rights team, which works to advise the company’s internal policy and products teams on the impact of company products and services on marginalized users.
Where Instagram falls short is its lack of transparency in a few key areas.
First, the platform does not have a policy that protects users against deadnaming and misgendering.
Moreover, Instagram does not disclose what users can do to control how the company collects and infers information about their SOGIE, personal information that forms the basis of targeted advertising practices.
Lastly, though the platform allows users to add pronouns to their profiles, the feature isn't available to everyone. And for users who can, the company provides only limited options for controlling who may see their pronouns.
Facebook
Score: 46 out of 100
Like its sister company Instagram, Facebook also has a comprehensive policy on threats, violence, hate speech, and harassment on the basis of protected characteristics, which includes one’s SOGIE. It also has a clear prohibition on advertising content that poses harm to or promotes discrimination against LGBTQ+ people.
The platform also provides users only limited information on how to control the content they see based on the SOGIE they have disclosed or that the platform has inferred about them.
Facebook shares many of Instagram's weaknesses, too. There is no policy against deadnaming and misgendering, no disclosure of whether the company permits detailed ad targeting based on a user's gender identity, and no disclosure of how a user can control whether (and how) the platform collects and infers information related to their SOGIE.
Moreover, Facebook is not very transparent about how it enforces the policies designed to protect LGBTQ+ individuals from harm, disclosing only limited information on what it does with content and accounts found to violate them.
Twitter
Score: 45 out of 100
One of only two platforms that ban deadnaming and misgendering, Twitter also got points on its scorecard for its policy protecting LGBTQ+ users from attacks based on their SOGIE as well as its policy prohibiting targeted advertising based on one’s SOGIE.
The platform also prohibits harmful or discriminatory advertising content, which includes conversion therapy. The policy states that Twitter prohibits “content that promotes claims or services attempting to change a person’s sexual orientation, gender identity or gender expression.”
Twitter has also made a public commitment to ensure diversity in its workforce.
However, the company does not give users options to control how it collects information on their disclosed or inferred SOGIE. By default, Twitter also uses this information to recommend content; users have to turn this off manually.
The platform also does not disclose information on the training of its content moderators. Though it has disclosed that it works with LGBTQ+ organizations and rights-based groups, the company offers no formal training in LGBTQ+ user protection and has no policy lead on LGBTQ+ issues.
YouTube
Score: 45 out of 100
Tied with Twitter with a score of 45, YouTube has a policy designed to protect its users from hate speech, cyberbullying, and harassment based on one’s SOGIE — though the report finds that the enforcement of this policy is often lacking.
The study found that YouTube offers limited transparency when it comes to user control over how the company processes information related to their SOGIE. It also discloses little information on how users can control the types of content recommended to them.
On the plus side, YouTube prohibits targeted advertising based on one's SOGIE, as well as advertising content that may be discriminatory or harmful to LGBTQ+ individuals.
But the company is not very transparent about how it removes, filters, and demonetizes LGBTQ+ content and creators — an issue its official disclosures and transparency reports do not address at all.
TikTok
Score: 43 out of 100
Alongside Twitter, TikTok is one of just two platforms studied with a policy against deadnaming and misgendering. Plus, it's the only one that explains how it detects violations of this policy. As of 2022, the company has also banned pro-conversion-therapy content and allows users to add gender pronouns to their profiles.
Another strong point for TikTok is its comprehensive policy designed to protect LGBTQ+ users from other kinds of attacks, threats, and violence on its platform. It also discloses information regarding training programs designed to educate and support content moderators on how to attend to the needs of vulnerable users.
Despite these wins, the platform lacks transparency in other key areas. For instance, it does not disclose how users can control their own data, including how the company collects information related to their SOGIE. There is also limited information on how users can control the content the platform's algorithm recommends to them based on their disclosed or inferred SOGIE.
TikTok also doesn't ban targeted advertising based on a user's SOGIE. Instead, it defers to local laws on the matter, which aren't exactly known to be widespread or comprehensive.
Though the company claims to engage with LGBTQ+ groups and organizations, it has not disclosed information with regard to how it plans to diversify its workforce, conduct formal training for all employees on LGBTQ+ safety, or appoint an LGBTQ+ policy lead.
Policy Versus Reality
GLAAD’s SMSI Report focuses on policy disclosures, but it’s important to remember that what companies say they do doesn’t always match what they actually do.
For instance, the report points out that even though Facebook scored well in expressing commitment to LGBTQ+ safety, the platform does not actually have any policies for protecting trans people from misgendering and deadnaming. Plus, if platform policies for LGBTQ+ safety are not comprehensive enough, forms of hate speech that aren’t explicitly cited in these policies can still be allowed.
Moreover, GLAAD also reports that platforms often fail to enforce community guidelines for LGBTQ+ safety. Describing the situation as “gravely concerning,” the report reads, “Too often, when reports are filed on content that clearly violates these guidelines, GLAAD researchers and advisors are informed that no enforcement action will be taken.”
Consider what happened when Elliot Page was misgendered and deadnamed on Twitter — one of just two platforms with a clear policy against these practices — by Jordan Peterson, a well-known conservative figure. That tweet was taken down.
Soon after, Dave Rubin, another conservative commentator, tweeted in Peterson's defense and tagged Elon Musk, who had yet to officially buy Twitter at the time, while also deadnaming Page. This was another clear violation of the policy, so his tweet was taken down as well.
He went on to post that he had deleted the original offending tweet, but deadnamed Page yet again. That third tweet is still up on the platform today. And despite whining about a lack of free speech, both Rubin and Peterson are still enthusiastically exercising theirs on Twitter.
At the time, Musk had expressed sympathy for the two. He had responded to a tweet calling his attention to the issue by saying, “Yeah, they’re going way too far in squashing dissenting opinions.” The dissenting opinion in question, apparently, was Peterson deadnaming Page and calling doctors performing gender-affirming transition-related surgery “criminal physicians.”
This is not to say, of course, that policies and guidelines are powerless. They still matter, because companies can be held to them. The point is that these policies, and the protocols for enforcing them, still need to improve.
And for the sake of LGBTQ+ safety, we absolutely need them.
Beyond the platforms themselves, LGBTQ+ people are also facing a wave of anti-LGBTQ+ bills and laws. Many of these target transgender people and trans youth, with politicians attacking vulnerable groups in the hopes of stoking fear and weaponizing hate. These legislative attacks also compound the harms experienced by LGBTQ+ folks who happen to be non-white, non-male, and/or non-Christian.
These bills and laws have real-world impact, and so now more than ever, social media platforms have the moral responsibility to do what they can to ensure LGBTQ+ safety.
Taking Care Online
GLAAD believes that online platforms should be safe for everyone and that social media has the power to encourage empathy and change hearts and minds.
“From an LGBTQ perspective,” explains Sarah Kate Ellis, GLAAD President and CEO, “it is not enough for companies to post a rainbow during a Pride month marketing campaign or use LGBTQ creators to make their brands seem diverse and inclusive, while failing to stand up for us and protect us in real-world ways.”
Ellis reiterates that it’s important for social media platforms to actually join the movement, and not just market it. To do that, the likes of Instagram, Facebook, Twitter, YouTube, and TikTok have to make the commitment to prioritize LGBTQ+ safety over profit.
If you’re currently a victim of online hate or harassment, PEN America has a comprehensive field manual that can help you out.