Rules Enforcement
About this report
Insights into how and when we enforce our policies, and reports of potential violations.
01. Latest Data: Accounts Actioned
02. Overview
X's purpose is to serve the public conversation. We welcome people sharing their unique points of view on X, but some behaviors discourage others from expressing themselves or place people at risk of harm. Our Rules exist to help ensure that all people can participate in the public conversation freely and safely, and they include specific policies that explain the types of content and behavior that are prohibited.
This section covers the latest data about instances where we've taken enforcement actions under Our Rules, either requiring the removal of specific posts or suspending accounts. These metrics are referred to as accounts actioned, accounts suspended, and content removed. More details about our range of enforcement options are available in our Help Center.
Impressions
We continue to explore ways to share more context and details about how we enforce Our Rules. As such, we are introducing a new metric – impressions – for enforcement actions where we required the removal of specific posts. Impressions capture the number of views a post received prior to removal.
From July 1, 2021 through December 31, 2021, X required users to remove 4M posts that violated Our Rules. Of the posts removed, 71% received fewer than 100 impressions prior to removal, with an additional 21% receiving between 100 and 1,000 impressions. Only 8% of removed posts had more than 1,000 impressions. In total, impressions on these violative posts accounted for less than 0.1% of all impressions for all posts during that time period.
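As an illustration of how such a breakdown can be produced, the sketch below buckets removed posts by their recorded view counts and reports the share in each bucket. This is a minimal, hypothetical example (the view counts and function names are invented for illustration), not a description of the reporting pipeline itself.

```python
from collections import Counter

def impressions_bucket(views: int) -> str:
    """Assign a removed post to an impressions bucket by its view count."""
    if views < 100:
        return "<100"
    if views <= 1_000:
        return "100-1,000"
    return ">1,000"

def bucket_shares(view_counts: list[int]) -> dict[str, float]:
    """Return the percentage of removed posts falling in each impressions bucket."""
    counts = Counter(impressions_bucket(v) for v in view_counts)
    total = len(view_counts)
    return {bucket: 100 * n / total for bucket, n in counts.items()}

# Hypothetical view counts recorded for posts prior to removal.
print(bucket_shares([12, 0, 250, 40_000, 87, 990]))
```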
03. Analysis
Big picture
We have a global team that manages enforcement of Our Rules with 24/7 coverage of most supported languages on X. Our goal is to apply Our Rules objectively and consistently. Enforcement actions are taken on content that is determined to violate Our Rules.
We are committed to providing due process and to ensuring that enforcement of Our Rules is fair, unbiased, proportional, and respectful of human rights, guided by the spirit of the Santa Clara Principles on Transparency and Accountability in Content Moderation and other multi-stakeholder processes. We will continue to invest in expanding the information available about how we do so in future reports.
Safety
The "Safety" section of the Our Rules covers violence, terrorism/violent extremism, child sexual exploitation, abuse/harassment, hateful conduct, promoting suicide or self-harm, sensitive media (including graphic violence and adult content), and illegal or certain regulated goods or services. More information about each policy can be found in the Our Rules.
Terrorism/violent extremism
Our Rules prohibit the promotion of terrorism and violent extremism. We suspended 33,693 unique accounts for violations of this policy during this reporting period. Of those accounts, 92% were proactively identified and actioned. Our current methods of surfacing potentially violating content for review include leveraging the shared industry hash database supported by the Global Internet Forum to Counter Terrorism (GIFCT).
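Hash-sharing works by comparing fingerprints of newly uploaded media against a database of hashes of known violative content. The sketch below shows the general idea only, using a plain SHA-256 digest and an in-memory set of hypothetical hashes; it does not reflect GIFCT's actual interface or the matching systems X uses in production, which would typically rely on perceptual hashes so that re-encoded copies still match.

```python
import hashlib

# Hypothetical hex digests standing in for entries from a shared industry hash database.
known_hashes = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # SHA-256 of b"test"
}

def fingerprint(media_bytes: bytes) -> str:
    """Fingerprint a media file (a perceptual hash would be used in practice)."""
    return hashlib.sha256(media_bytes).hexdigest()

def flag_for_review(media_bytes: bytes) -> bool:
    """Surface the upload for review if its fingerprint matches a known entry."""
    return fingerprint(media_bytes) in known_hashes

print(flag_for_review(b"test"))  # True: the example digest is in the set above
```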
Child sexual exploitation
We do not tolerate child sexual exploitation on X. When we are made aware of child sexual exploitation media, including links to images of or content promoting child exploitation, the material is removed from the site without further notice and reported to the National Center for Missing & Exploited Children ("NCMEC"). People can report content that appears to violate Our Rules regarding child sexual exploitation via our web form.
We suspended 596,997 unique accounts during this reporting period – a 32% increase since our previous report. Of these, 91% were identified proactively by employing internal proprietary tools and industry hash-sharing initiatives. These tools and initiatives support our efforts to surface potentially violative content for further review and, if appropriate, removal.
Abuse/Harassment
Under our Abusive Behavior policy, we prohibit content that harasses or intimidates, or is otherwise intended to shame or degrade others. We took action on 940,679 accounts during this reporting period. This is a 10% decrease from our last report and is in line with an 11% decrease in accounts reported under this policy during this period.
Violence
Our policies prohibit sharing content that threatens violence against an individual or a group of people. We also prohibit the glorification of violence. 41,386 accounts were suspended and we took action on 70,229 unique pieces of content during this reporting period.
Hateful conduct
We expanded our Hateful Conduct policy in December 2021 to prohibit dehumanizing speech on the basis of gender, gender identity and sexual orientation. During this period 104,565 accounts were suspended under this policy, representing a 22% decrease in account suspensions since our last report.
Promoting suicide or self-harm
We prohibit content that promotes, or otherwise encourages, suicide or self-harm. During this reporting period there was a substantial increase in the volume of accounts suspended (18%) and content removed (23%) under this policy. 408,143 accounts were actioned in total. We attribute these changes to our continued investment in identifying violative content at scale.
Sensitive media, including graphic violence and adult content
We removed a total of 1.1M unique pieces of content under our Sensitive Media policy during this period, a 31% decrease since our last report.
Illegal or certain regulated goods or services
Due to continued refinement of enforcement guidelines, we saw a 37% increase in accounts suspended under this policy, representing a total of 119,508 accounts.
Privacy
The "Privacy" section of the Our Rules covers private information and non-consensual nudity. More information about each policy can be found in the Our Rules.
Private information
We expanded our private information policy in late November 2021 to prohibit sharing media of private individuals without the permission of those depicted. 34,181 accounts and 62,537 unique pieces of content were actioned under this policy.
Authenticity
The "Authenticity" section of the Our Rules covers platform manipulation and spam, civic integrity, impersonation, synthetic and manipulated media, and copyright and trademark. We have standalone report pages for platform manipulation and spam, copyright, and trademark, and cover civic integrity and impersonation enforcement actions in this section.[1] More information about each policy can be found in the Our Rules.
Civic Integrity
During this reporting period, the number of accounts actioned under our Civic Integrity policy decreased due to the small number of major national elections in the United States.
Impersonation
This reporting period, we actioned 181,644 accounts and suspended 169,396 accounts, a 16% and 15% decrease respectively, for violations of the impersonation policy. This decrease is in line with a similar 15% decrease in accounts reported during this period.
COVID-19 misleading information
In March 2021, we introduced a five-strike system to address repeated violations of our COVID-19 misinformation policy. After the fifth strike, the user is eligible for suspension under the policy. Since launching the strike system, we have invested in and increased our proactive detection efforts to surface and mitigate the harm related to COVID-19 misinformation. We suspended 1,376 accounts, an increase of 123%, for violations of the COVID-19 misinformation policy during this reporting period.
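A minimal sketch of how a per-account strike counter of this kind might behave is shown below. Only the fifth-strike suspension threshold comes from the report; the intermediate outcomes and all identifiers are illustrative assumptions.

```python
from collections import defaultdict

# Running strike count per account (illustrative in-memory store).
strikes: dict[str, int] = defaultdict(int)

def record_violation(account_id: str) -> str:
    """Add a strike for a COVID-19 misinformation violation and return the outcome.
    Only the fifth-strike suspension eligibility is stated in the report;
    the earlier outcomes here are hypothetical."""
    strikes[account_id] += 1
    count = strikes[account_id]
    if count >= 5:
        return "eligible for suspension"
    if count >= 2:
        return "temporary account restriction"  # illustrative intermediate action
    return "content removed or labeled"         # illustrative first-strike action

for _ in range(5):
    outcome = record_violation("account_123")
print(outcome)  # "eligible for suspension" after the fifth strike
```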
01. Latest Data: Accounts Reported
02. Overview
Insights into accounts reported for violations of Our Rules.
03. Analysis
Big picture
Reported content is assessed to determine whether it violates any aspect of Our Rules, independent of its initial report category. For example, content reported under our private information policy may be found to violate – and be actioned under – our hateful conduct policies. We may also determine that reported content does not violate the Rules at all.
The policy categories in this section do not map cleanly to the ones in the Accounts Actioned section above. This is because people typically report content for possible violations of Our Rules through our Help Center or in-app reporting, and the categories available there do not correspond one-to-one with the policies under which we ultimately take action.
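One way to picture that independence, as a hedged sketch rather than a description of internal tooling: the category a reporter selects feeds the Accounts Reported metric, while the assessment itself considers every policy. The policy names and checks below are invented for illustration.

```python
# Hypothetical policy checks; each returns True if the content violates that policy.
POLICY_CHECKS = {
    "private_information": lambda text: "home address" in text,
    "hateful_conduct": lambda text: "slur" in text,
}

def assess_report(text: str, reported_under: str) -> list[str]:
    """Return the policies actually violated. The reporter's chosen category
    (reported_under) is recorded for reporting metrics but does not constrain
    which policies the content is assessed against."""
    return [policy for policy, check in POLICY_CHECKS.items() if check(text)]

# A report filed under "private_information" may be actioned under hateful conduct.
print(assess_report("this post contains a slur", reported_under="private_information"))
```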
Footnotes
Accounts Actioned
To provide meaningful metrics, we de-duplicate accounts which were actioned multiple times for the same policy violation. This means that if we took action on a post or account under multiple policies, the account would be counted separately under each policy. However, if we took action on a post or account multiple times under the same policy (for example, we may have placed an account in read-only mode temporarily and then later also required media or profile edits on the basis of the same violation), the account would be counted once under the relevant policy.
Accounts Reported
To provide meaningful metrics, we de-duplicate accounts which were reported multiple times (whether multiple users reported an account for the same potential violation, or whether multiple users reported the same account for different potential violations). For the purposes of these metrics, we similarly de-duplicate reports of specific posts. This means that even if we received reports about multiple posts by a single account, we only counted these reports towards the "accounts reported" metric once.
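A compact sketch of the two de-duplication rules above: accounts actioned are counted once per distinct account-and-policy pair, while accounts reported are counted once per distinct account regardless of how many reporters or posts were involved. The data shapes and identifiers are assumptions for illustration; the actual pipelines are not public.

```python
def accounts_actioned_by_policy(actions: list[tuple[str, str]]) -> dict[str, int]:
    """actions: (account_id, policy) pairs, one per enforcement action taken.
    An account actioned several times under one policy counts once for that
    policy, but counts again under each additional policy."""
    unique_pairs = set(actions)
    counts: dict[str, int] = {}
    for _, policy in sorted(unique_pairs):
        counts[policy] = counts.get(policy, 0) + 1
    return counts

def accounts_reported(reports: list[tuple[str, str, str]]) -> int:
    """reports: (reporter_id, reported_account_id, post_id) tuples.
    Multiple reporters and multiple reported posts collapse to one reported account."""
    return len({account for _, account, _ in reports})

actions = [("a1", "abuse"), ("a1", "abuse"), ("a1", "hateful_conduct")]
print(accounts_actioned_by_policy(actions))  # {'abuse': 1, 'hateful_conduct': 1}

reports = [("r1", "a2", "p1"), ("r2", "a2", "p1"), ("r3", "a2", "p2")]
print(accounts_reported(reports))  # 1
```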