DSA Transparency Report - April 2025

Introduction

This report covers the content moderation activities of X’s international entity, X Internet Unlimited Company (“XIUC”) (formerly known as Twitter International Unlimited Company (“TIUC”)), under the Digital Services Act (DSA) during the reporting period 1 October 2024 to 31 March 2025.

In this report, “notices” as defined in the DSA may be referred to as “user reports” or “reports”.

Description of our Content Moderation Practices

Our content moderation systems are designed and tailored to mitigate systemic risks without unnecessarily restricting the use of our service or fundamental rights, especially freedom of expression. Content moderation activities are anchored in principled policies and leverage a diverse set of interventions to ensure that our actions are reasonable, proportionate and effective. Our content moderation systems blend automated and human review, paired with a robust appeals system that enables our users to quickly raise potential moderation anomalies or mistakes.

Policies 

X's purpose is to serve the public conversation. Violence, harassment, and other similar types of behaviour discourage people from expressing themselves, and ultimately diminish the value of global public conversation. Our Rules are designed to ensure all people can participate in the public conversation freely and safely.

X has policies protecting user safety as well as platform and account integrity. The X Rules and policies are publicly accessible on our Help Center, and we strive to write them in an easily understandable way. We also keep our Help Center up to date whenever we modify our Rules.

For the purposes of the summary tables below, the X policy titles in use at the start of the reporting period have been retained, even if they changed throughout the period.

Enforcement 

When determining whether to take enforcement action, we may consider a number of factors, including (but not limited to) the context and severity of the behaviour in question.

When we take enforcement action, we may do so on a specific piece of content (e.g., an individual post or Direct Message), on an account, or on a combination of the two. In most cases, we take action because the behaviour violates the X Rules.

To enforce our Rules, we use a combination of machine learning and human review. Our systems surface content to human moderators, who use important context to make decisions about potential violations. This work is led by an international, cross-functional team with 24-hour coverage and the ability to cover multiple languages. We also have a complaints process for any potential errors that may occur.

To ensure that our human reviewers are prepared to perform their duties, we provide them with a robust support system. Each human reviewer goes through extensive training and refreshers, is provided with a suite of tools that enables them to do their job effectively, and has a range of wellness initiatives available to them. For further information on our human review resources, see the section titled “Human resources dedicated to Content Moderation”.

Reporting violations

X strives to provide an environment where people can feel free to express themselves. If abusive behaviour happens, we want to make it easy for people to report it to us. EU users can also report any violation of our Rules or their local laws, no matter where such violations appear.

Transparency

We always aim to exercise moderation with transparency. Where our systems or teams take action against content or an account for violating our Rules, or in response to a valid and properly scoped request from an authorised entity in a given country, we strive to provide context to users. Our Help Center article explains the notices that users may encounter following actions taken. We promptly notify affected users about legal requests to withhold content, including a copy of the original request, unless we are legally prohibited from doing so. We have also updated our global transparency centre to cover a broader array of our transparency efforts.

Content Moderation Governance Structure

Own Initiative Content Moderation Activities

X employs a combination of heuristics and machine learning algorithms to automatically detect content that we believe violates the X Rules and policies enforced on our platform. We use combinations of natural language processing models, image processing models and other sophisticated machine learning methods to detect potentially violative content. These models vary in complexity and in the outputs they produce. For example, the model used to detect abuse on the platform is trained on abuse violations detected in the past. Content flagged by these machine learning models is either reviewed by human content reviewers before an action is taken or, in some cases, automatically actioned based on the historical accuracy of the model’s output.

Heuristics are common patterns of behaviour, text, or keywords that may be typical of a certain category of violations. They are typically used to enable X to react quickly to new forms of violations that emerge on the platform, and content detected by heuristics is proactively flagged for review by human agents before an action is taken.
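As an illustration of how these signals can be combined, the sketch below shows a simplified routing decision of the kind described above. The thresholds, patterns and function names are hypothetical, not X’s actual systems.

```python
# Hypothetical sketch of heuristic + model-score routing; thresholds and
# patterns are illustrative only, not production values.

AUTO_ACTION_THRESHOLD = 0.98   # assumed: only historically accurate models auto-action
HUMAN_REVIEW_THRESHOLD = 0.60  # assumed: lower-confidence detections go to reviewers

HEURISTIC_PATTERNS = ["example-scam-phrase"]  # placeholder keyword heuristic


def route(post_text: str, model_score: float) -> str:
    """Return how a piece of flagged content would be handled."""
    # Heuristic hits are queued for human review, enabling a fast
    # reaction to new forms of violations.
    if any(p in post_text.lower() for p in HEURISTIC_PATTERNS):
        return "human_review"
    # High-confidence model output may be actioned automatically.
    if model_score >= AUTO_ACTION_THRESHOLD:
        return "auto_action"
    # Mid-confidence output is surfaced to human moderators.
    if model_score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"
    return "no_action"
```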

Testing, Evaluation, and Iteration

Automated enforcements under the X Rules and policies undergo rigorous testing before being applied to the live product. Both machine learning and heuristic models are trained and/or validated on thousands of data points and labels (e.g., violative or non-violative), including labels generated by trained human content moderators. For example, inputs to content-related models can include the text of the post itself, the images attached to the post, and other characteristics. Training data for the models comes from cases reviewed by our content moderators, random samples, and various other samples of content from the platform.

Use of Human Moderation

Before any given algorithm is launched to the platform, we verify its detection of policy-violating content or behaviour by drawing a statistically significant test sample and performing item-by-item human review. Reviewers have expertise in the applicable policies and are trained by our Policy teams to ensure the reliability of their decisions. Human review helps us confirm that these automations achieve an acceptable level of precision, and sizing helps us understand what to expect once the automations are launched.
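The precision check described above can be sketched as follows, assuming reviewers label each sampled detection as violative or non-violative; the function name and labels are illustrative.

```python
def estimated_precision(review_verdicts: list[str]) -> float:
    """Estimate an automation's precision from item-by-item human review
    of a random sample of the content it flagged."""
    true_positives = sum(1 for v in review_verdicts if v == "violative")
    return true_positives / len(review_verdicts)


# e.g. if 9 of 10 sampled detections are confirmed violative,
# the estimated precision is 0.9
```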

In addition, humans proactively conduct manual content reviews for potential policy violations. We conduct proactive sweeps for certain high-priority categories of potentially violative content both periodically and during major events, such as elections. Content moderators also proactively review content flagged by heuristic and machine learning models for potential violations of other policies, including our adult content, violent content, child sexual exploitation (CSE) and violent and hateful entities policies.

Once reviewers have confirmed that the detection meets an acceptable standard of accuracy, we consider the automation to be ready for launch. Once launched, automations are monitored dynamically for ongoing performance and health. If we detect anomalies in performance (for instance, significant spikes or dips against the volume we established during sizing, or significant changes in user complaint/overturn rates), our Engineering (including Data Science) teams - with support from other functions - revisit the automation to diagnose any potential problems and adjust the automations as appropriate.
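A minimal sketch of this kind of volume monitoring, assuming a baseline established during sizing and a hypothetical tolerance band:

```python
def volume_anomalous(observed_volume: float, baseline_volume: float,
                     tolerance: float = 0.5) -> bool:
    """Flag a significant spike or dip against the sized baseline.

    `tolerance` is a hypothetical fraction (50% here); a real system
    would tune this per automation and likely track complaint/overturn
    rates alongside raw volume.
    """
    return abs(observed_volume - baseline_volume) > tolerance * baseline_volume
```

An automation whose daily volume doubles against its baseline would be surfaced for diagnosis, while small fluctuations would not.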

Automated Moderation Activity Examples

The vast majority of accounts suspended for the promotion of terrorism or CSE are proactively flagged by a combination of technology and other purpose-built internal proprietary tools. When we remove CSE content with these automated systems, we immediately report it to the National Center for Missing and Exploited Children (NCMEC), which makes reports available to the appropriate law enforcement agencies around the world to facilitate investigations and prosecutions.

Our current methods deploy a range of internal tools and third-party solutions that utilise industry-standard hash libraries (e.g., PhotoDNA) to ensure known CSAM is caught before any user reports are filed. We leverage hashes provided by NCMEC and industry partners, and we scan media uploaded to X for matches against hashes of known CSAM sourced from NGOs, law enforcement and other platforms. We also have the ability to block keywords and phrases from Trending and to block search results for certain terms that are known to be associated with CSAM.
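The hash-matching flow can be sketched as below. Production systems use perceptual hashes such as PhotoDNA, which also match near-duplicates; this illustration uses a cryptographic hash purely to show the lookup logic, and the hash set contents are placeholders.

```python
import hashlib

# Placeholder hash set; in practice these would be hashes of known
# prohibited media provided by NCMEC and industry partners.
known_hashes: set[str] = set()


def matches_known_media(media_bytes: bytes, hash_set: set[str]) -> bool:
    """Check uploaded media against a library of known hashes."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    return digest in hash_set
```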

We commit to continuing to invest in technology that improves our capability to detect and remove, for instance, terrorist and violent extremist content online before it can cause user harms, including the extension or development of digital fingerprinting and AI-based technology solutions. Our participation in multi-stakeholder communities, such as the Christchurch Call to Action, Global Internet Forum to Counter Terrorism and EU Internet Forum (EUIF), helps to identify emerging trends in how terrorists and violent extremists are using the internet to promote their content and exploit online platforms.

You can learn more about our commitment to eradicating CSE and terrorist content, and the actions we’ve taken here. Our continued investment in proprietary technology is steadily reducing the burden on people to report this content to us.

Scaled Investigations

These moderation activities are supplemented by scaled human investigations into the tactics, techniques and procedures that bad actors use to circumvent our Rules and policies. These investigations may leverage signals and behaviours identifiable on our platform, as well as off-platform information, to identify large-scale and/or technically sophisticated evasions of our detection and enforcement activities. For example, through these investigations, we are able to detect coordinated activity intended to manipulate our platform and artificially amplify the reach of certain accounts or their content.  

Indications of Accuracy for Content Moderation

An indication of the rate of error of the automated and human means used in enforcing X Rules and policies is given by the number of Content Removal Complaints (appeals) received and the number of those complaints that resulted in reversal of our enforcement decision (successful appeals), broken down by remediation type and by country.
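As a worked illustration of this indicator (the numbers are hypothetical, not figures from this report):

```python
def appeal_reversal_rate(appeals_received: int, successful_appeals: int) -> float:
    """Share of appealed enforcement decisions that were reversed -
    one indication of the moderation error rate."""
    if appeals_received == 0:
        return 0.0
    return successful_appeals / appeals_received


# Hypothetical example: 200 appeals with 25 reversals gives 0.125 (12.5%)
```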

Closing Statement on Content Moderation Activities

In summary, our content moderation systems blend principled policies, automated and human review, and a robust appeals process to mitigate systemic risks in a manner that is reasonable, proportionate and effective, without unnecessarily restricting the use of our service or fundamental rights, especially freedom of expression.

Human resources dedicated to Content Moderation

Today, we have 1,486 people working in content moderation. Our teams work on both initial reports and complaints about initial decisions across the world (and are not specifically designated to work only on EU matters).

Linguistics Expertise of our Content Moderation Team

X’s scaled operations team possesses a variety of skills, experiences, and tools that allow them to effectively review and take action on reports across all of our Rules and policies. X has analysed which languages are most commonly found in reports reviewed by our content moderators, and has hired content moderation specialists who have professional proficiency in these commonly spoken languages. The following table is a summary of the number of people in our content moderation team who possess professional proficiency in the languages that are most commonly contained in reported content in the EU on our platform:

Primary Language    People
Bulgarian                1
English              1,307
French                  63
German                  63
Italian                  1
Polish                   1
Portuguese              17
Spanish                 33
Total                1,486

In addition to primary language support, we also have people supporting additional languages. The following is a summary of secondary EU language support:

Secondary Language    People
Bulgarian                  1
Romanian                   1
French                    73
German                    64
Greek                      1
Irish                      1
Italian                    3
Latvian                    1
Polish                     1
Portuguese                22
Spanish                   53
Total                    221

Please note that the numbers included in the secondary language support table are not separate or distinct from the numbers included in the primary language support data. Additionally, English is not indicated as a secondary language category in the table above, since all agents with a different primary language also speak English.

Qualifications of our Content Moderation Team

Content Moderation Team Qualifications

Years in Current Role    Headcount
0 to 1                         602
1 to 2                         157
2 to 3                         213
3 to 4                         243
4 to 5                          87
5 to 6                          47
6 to 7                          53
7 or more                       84

The above table includes all moderators who support EU member state languages as of March 2025. The content moderation team collectively provides linguistic capacity in multiple languages. In situations where we need additional language support, we use translation services and/or machine translation tools to investigate and address challenges in additional languages. Additionally, content moderators leverage playbooks of colloquial terms and phrases, which are continually updated to reflect the various EU languages spoken within the region and emerging trends.

Moderators are recruited using a standard job description that includes a language requirement: the candidate should be able to demonstrate written and spoken fluency in the language and have at least one year of work experience for entry-level positions. In the interview and application process, each agent candidate must meet certain linguistic standards to be considered “language qualified”. This determination is made through multiple tests (e.g., written and oral) of the candidate’s respective language, to determine their proficiency level. Candidates must also meet the educational and background requirements in order to be considered, as well as demonstrate an understanding of current events for the country or region of content moderation they will support.

Organisation, Team Resources, Expertise, Training and Support of our Team that Reviews and Responds to Reports of Illegal Content

Description of the team

X has built a specialised team made up of individuals who have received specific training in order to assess and take action on illegal content that X becomes aware of via reports or other processes on our own initiative. This team consists of different tier groups, with higher tiers consisting of more senior, or more specialised, individuals.

When handling a report of illegal content or a complaint against a previous decision, content and senior content reviewers first assess the content under X’s Rules and policies. If no violation of X’s Rules and policies warranting a global removal of the content is found, the content moderators will assess the content for potential illegality under local laws. If more detailed investigation is required, content moderators can escalate reports to experienced policy and/or legal request specialists who have also undergone in-depth training and/or have language expertise in the respective case’s language. These individuals take appropriate action after carefully reviewing the report and/or complaint in close detail.

In cases where the specialist team cannot reach a final decision or action on a case regarding the potential illegality of the reported content, the report is discussed with in-house legal counsel. Everyone involved in this process works closely together, with daily exchanges through meetings and other channels, to ensure the timely and accurate handling of reports. Additionally, when a case warrants in-house legal counsel review, the lessons learned and actions taken on that case are disseminated to all relevant content moderators, to ensure consistency of review and an understanding of best practices should a similar case be encountered in the future.

Furthermore, all teams involved in resolving these reports closely collaborate with a variety of other policy teams at X who focus on X Rules and policies. This cross-team effort is particularly important in the aftermath of tragic events, such as violent attacks, to ensure alignment, swift and consistent review, and the same potential remediation actions if the content is found violative.

Content moderators are supported by team leads, subject matter experts, quality auditors and trainers. We hire people with diverse backgrounds in fields such as law, political science, psychology, communications, sociology and cultural studies, and languages.

Training and support of persons processing legal requests

All team members, i.e. all employees hired by X as well as vendor partners working on these reports, are trained and retrained regularly on our tools, processes, Rules and policies, including special sessions on cultural and historical context. When initially joining the team at X, each individual follows an onboarding program and receives individual mentoring during this period, as well as thereafter through our Quality Assurance (QA) program and through in-house and external counsel (for internal employees).

All team members have direct access to robust training and workflow documentation for the entirety of their employment, and are able to seek guidance at any time from trainers, leads, and internal specialist legal and policy teams as outlined above, as well as managerial support.

Updates about significant current events or Rules and policy changes are shared with all content reviewers in real time, to give guidance and facilitate balanced and informed decision making. In the case of Rules and policy changes, all training materials and related documentation are updated. Calibration sessions are carried out frequently during the reporting period. These sessions aim to increase collective understanding and focus on the needs of the content reviewers in their day-to-day work, by allowing content moderators to ask questions and discuss aspects of recently reviewed cases, X’s Rules and policies, and/or local laws.

The entire team also participates in obligatory X Rules and policies refresher training as the need arises or whenever Rules and policies are updated. These trainings are delivered by the relevant policy specialists who were directly involved in the development of the Rules and policy change. For these sessions we also employ the “train the trainer” method to ensure timely training delivery to the whole team across all of the shifts. All team members use the same training materials to ensure consistency.

QA is a critical measure for the business, helping ensure that we deliver a consistent service at the desired level of quality to our key stakeholders, both externally and internally, as it pertains to our case work. We have a dedicated QA team within our vendor team to help us identify areas of opportunity for training and to detect potential defects in our workflows or Rules and policies. The QA specialists perform quality checks of reports to ensure that content is actioned appropriately.

The standards and procedures within the QA team ensure that the team’s work is assessed consistently, objectively, efficiently and transparently. In case of any misalignments, additional training is scheduled to ensure the team understands the issues and can handle reports accurately.

In addition, given the nature and sensitivity of their work, the entire team has access to online resources and regular onsite group and individual sessions related to resilience and well-being. These are provided by mental health professionals. Content reviewers also participate in resilience, self-care, and vicarious trauma training as part of our mandatory wellness plan during the reporting period.

Training and Support provided to those Persons performing Content Moderation Activities for our XIUC Terms of Service and Rules

Training is a critical component of how X maintains the health and safety of the public conversation, by enabling content moderators to accurately and efficiently moderate content posted on our platform. Training at X aims to improve content moderators’ enforcement performance and quality scores by enhancing their understanding and application of the X Rules through robust training and quality programs and continuous monitoring of quality scores.

Training Process

There is a robust training program and system in place for every workflow to provide content moderators with the work skills and job knowledge required for processing user cases. All content moderators must be trained in their assigned workflows. These focus areas ensure that content moderators are set up for success before and during the content moderation lifecycle, which includes the stages described in the following sections.

Training Analysis and Design

Before commencing design work on any content moderator program or resource, a rigorous learner analysis is conducted in close collaboration with training specialists and quality analysts to identify performance gaps and learning needs. Each program is designed with key stakeholder engagement and alignment. The design objective is to adhere to visual and learning design principles to maximise learning outcomes and ensure that agents can perform their tasks with accuracy and efficiency.

X’s training programs and resources are designed based on needs, and a variety of modalities are employed to diversify the content moderators’ learning experience, including classroom training, onboarding, up-skilling, refresher sessions, and new launch/update roll-outs.

Classroom Training

Classroom training is delivered either virtually or face-to-face by expert trainers, and can include a range of activities.

Onboarding and Ramp Up

When content moderators successfully complete their classroom training program, they undergo an onboarding period. The onboarding phase includes case study by observation, demonstration, and hands-on training on live cases. Onboarding activities include shadowing, guided case work, question-and-answer sessions with their trainer, coaching, and feedback sessions. Quality audits are conducted for each onboarding content moderator, and moderators are coached on any mis-action spotted in their quality scores the same day the case was reviewed. Trainers conduct a needs assessment for each onboarding content moderator and prepare refresher training accordingly. After the onboarding period, performance is evaluated on an ongoing basis with the QA team to identify gaps and address potential problem areas. There is a continuous feedback loop with quality analysts across the different workflows to identify challenges and opportunities to improve materials and address performance gaps.

Up-Skilling

When a content moderator needs to be upskilled, they receive training on a specific workflow within the same pillar in which they currently work. The training includes a classroom training phase and an onboarding phase, as described above.

Refresher Sessions

Refresher sessions take place when a content moderator has previously been trained and has access to all the necessary tools, but needs a review of some or all topics. This may happen for content moderators who have been on prolonged leave, who have transferred temporarily to another content moderation policy workflow, or who have recurring errors in their quality scores. After a needs assessment, trainers are able to pinpoint what the content moderator needs and prepare a session targeting those needs and gaps.

New Launch/Update Roll-Outs

There are also processes that require new and/or specific product training and certification. These new launches and updates are identified by X and the knowledge is transferred to the content moderators.

Remediation Plans

There are remediation plans in place to support content moderators who do not pass the training or onboarding phase, or are not meeting quality requirements.

Relevant Data for the Reporting Period

Member State Orders to Act Against Illegal Content

Removal Orders Received - 1 October to 31 March

Illegal or harmful speech: France 1, Spain 3
Unsafe and illegal products: 11

Removal Orders Median Handle Time (Hours) - 1 October to 31 March

Illegal or harmful speech: France 1.2, Spain 5.6
Unsafe and illegal products: 2.3

Removal Orders Median Time To Acknowledge Receipt

X provides an automated acknowledgement of receipt of removal orders submitted by law enforcement through our Legal Request submission portal. As a consequence of this immediate acknowledgement of receipt, the median time was zero hours.

Important Notes about Removal Orders:

Information Requests Received - 1 October to 31 March

Illegal Content Category

Austria

Belgium

Denmark

Estonia

Finland

France

Germany

Greece

Hungary

Ireland

Italy

Luxembourg

Netherlands

Poland

Portugal

Romania

Slovenia

Spain

Data protection & privacy violations

1

9

9

3

1

1

8

Illegal or harmful speech

17

11

117

5229

17

2

25

10

47

2

1

77

Intellectual property infringements

3

5

1

Issue Unknown

5

2

1

1

2

4

Negative effects on civic discourse or elections

3

23

1

Non-consensual behavior

7

7

4

Pornography or sexualized content

4

64

1

1

1

Protection of minors

1

10

75

1

1

3

2

2

2

9

Risk for public security

16

81

2

1822

262

3

1

1

10

13

1

2

78

Scams and fraud

5

4

57

120

4

1

2

3

1

2

10

2

56

Scope of platform service

1

Self-harm

2

1

2

Unsafe and illegal products

1

5

1

1

1

Violence

6

4

1

5

164

305

5

14

14

33

13

1

2

31

Information Request Median Handle Time (Hours) - 1 October to 31 March

Illegal Content Category

Austria

Belgium

Denmark

Estonia

Finland

France

Germany

Greece

Hungary

Ireland

Italy

Luxembourg

Netherlands

Poland

Portugal

Romania

Slovenia

Spain

Data protection & privacy violations

0.4

217.7

189.3

46.0

0.1

49.3

218.8

Illegal or harmful speech

380.6

115.9

137.8

139.7

361.1

159.3

217.1

442.6

23.2

2.7

74.3

96.6

Intellectual property infringements

0.6

1.6

451.5

Issue Unknown

529.2

27.4

45.5

20.6

34.6

0.8

Negative effects on civic discourse or elections

51.9

74.4

0.4

Non-consensual behavior

149.5

77.7

461.5

Pornography or sexualized content

243.4

71.5

368.6

404.5

367.0

Protection of minors

1.4

4.8

2.6

4.2

70.3

35.9

201.6

6.7

22.0

2.1

Risk for public security

121.5

142.8

83.7

53.9

99.7

18.4

44.0

144.3

22.1

1.9

482.7

457.0

83.5

Scams and fraud

28.8

145.0

139.4

78.1

116.3

28.6

1.5

30.0

27.2

62.7

46.6

104.9

203.7

Scope of platform service

104.6

Self-harm

83.2

0.1

233.6

Unsafe and illegal products

382.3

96.0

4.9

342.8

459.2

Violence

288.9

431.4

0.6

76.3

166.5

90.8

44.2

46.6

136.2

106.1

4.5

68.0

86.7

263.4

Information Request Median Time To Acknowledge Receipt

X provides an automated acknowledgement of receipt of information requests submitted by law enforcement through our Legal Request submission portal. As a consequence of this immediate acknowledgement of receipt, the median time is zero.

Important Notes about Information Requests:

Illegal Content Notices

Illegal Content Notices Received - 1 October to 31 March

Notices received per reason code, by country:

Animal welfare: Austria 107, Belgium 44, Bulgaria 9, Croatia 4, Cyprus 3, Czechia 16, Denmark 19, EU 921, Estonia 2, Finland 15, France 412, Germany 300, Greece 9, Hungary 18, Ireland 19, Italy 66, Latvia 8, Lithuania 3, Luxembourg 3, Malta 1, Netherlands 95, Poland 75, Portugal 31, Romania 8, Slovakia 4, Slovenia 1, Spain 352, Sweden 28

Data protection & privacy violations: Austria 170, Belgium 350, Bulgaria 61, Croatia 80, Cyprus 58, Czechia 205, Denmark 160, EU 3057, Estonia 43, Finland 69, France 2987, Germany 2427, Greece 237, Hungary 65, Ireland 218, Italy 514, Latvia 35, Lithuania 27, Luxembourg 24, Malta 6, Netherlands 832, Poland 813, Portugal 277, Romania 116, Slovakia 11, Slovenia 24, Spain 2021, Sweden 157

Illegal or harmful speech: Austria 1987, Belgium 1302, Bulgaria 569, Croatia 166, Cyprus 173, Czechia 2581, Denmark 398, EU 47107, Estonia 161, Finland 599, France 25983, Germany 28849, Greece 589, Hungary 144, Ireland 1881, Italy 4004, Latvia 209, Lithuania 162, Luxembourg 103, Malta 15, Netherlands 2236, Poland 3404, Portugal 1400, Romania 901, Slovakia 96, Slovenia 83, Spain 22854, Sweden 771

Intellectual Property Infringements: Austria 117, Belgium 19, Bulgaria 26, Croatia 27, Cyprus 28, Czechia 132, Denmark 110, EU 0, Estonia 8, Finland 135, France 2362, Germany 4265, Greece 75, Hungary 18, Ireland 4648, Italy 220, Latvia 16, Lithuania 181, Luxembourg 9, Malta 2, Netherlands 617, Poland 1627, Portugal 1434, Romania 227, Slovakia 2, Slovenia 11, Spain 4446, Sweden 1352

Negative effects on civic discourse or elections (values as reported; per-country attribution not recoverable): 111, 102, 34, 20, 16, 129, 67, 1470, 22, 54, 1079, 4559, 48, 37, 174, 177, 21, 18, 10, 219, 601, 79, 598, 12, 4, 408, 44

Non-consensual behavior: Austria 31, Belgium 91, Bulgaria 27, Croatia 12, Cyprus 20, Czechia 54, Denmark 69, EU 3650, Estonia 21, Finland 47, France 1423, Germany 599, Greece 41, Hungary 51, Ireland 99, Italy 133, Latvia 6, Lithuania 21, Luxembourg 6, Malta 3, Netherlands 259, Poland 260, Portugal 54, Romania 14, Slovakia 10, Slovenia 2, Spain 705, Sweden 188

Pornography or sexualized content: Austria 197, Belgium 359, Bulgaria 192, Croatia 24, Cyprus 46, Czechia 270, Denmark 211, EU 6112, Estonia 94, Finland 122, France 6355, Germany 2422, Greece 148, Hungary 272, Ireland 208, Italy 1001, Latvia 30, Lithuania 54, Luxembourg 33, Malta 6, Netherlands 452, Poland 740, Portugal 281, Romania 220, Slovakia 28, Slovenia 16, Spain 1806, Sweden 214

Protection of minors: Austria 108, Belgium 217, Bulgaria 22, Croatia 20, Cyprus 8, Czechia 71, Denmark 114, EU 3235, Estonia 130, Finland 801, France 2725, Germany 6695, Greece 102, Hungary 89, Ireland 214, Italy 352, Latvia 9, Lithuania 57, Luxembourg 11, Malta 14, Netherlands 1792, Poland 1432, Portugal 124, Romania 43, Slovakia 39, Slovenia 11, Spain 9093, Sweden 149

Risk for public security: Austria 101, Belgium 157, Bulgaria 28, Croatia 35, Cyprus 14, Czechia 155, Denmark 126, EU 1583, Estonia 35, Finland 75, France 2340, Germany 2558, Greece 64, Hungary 21, Ireland 120, Italy 201, Latvia 51, Lithuania 21, Luxembourg 9, Malta 4, Netherlands 217, Poland 394, Portugal 101, Romania 263, Slovakia 24, Slovenia 8, Spain 654, Sweden 107

Scams and fraud: Austria 385, Belgium 704, Bulgaria 113, Croatia 116, Cyprus 64, Czechia 498, Denmark 358, EU 6048, Estonia 153, Finland 278, France 6477, Germany 5149, Greece 294, Hungary 319, Ireland 1111, Italy 1325, Latvia 135, Lithuania 73, Luxembourg 55, Malta 20, Netherlands 1422, Poland 1051, Portugal 757, Romania 488, Slovakia 54, Slovenia 64, Spain 4198, Sweden 511

Scope of platform service (values as reported; per-country attribution not recoverable): 6, 8, 3, 8, 9, 12, 683, 17, 12, 185, 122, 11, 7, 10, 26, 6, 3, 27, 24, 9, 11, 3, 1, 115, 5

Self-harm (values as reported; per-country attribution not recoverable): 13, 26, 5, 4, 29, 7, 707, 3, 22, 289, 270, 12, 22, 21, 52, 2, 3, 5, 54, 73, 17, 4, 6, 2, 251, 85

Unsafe and illegal products: Austria 45, Belgium 73, Bulgaria 31, Croatia 4, Cyprus 12, Czechia 77, Denmark 97, EU 1418, Estonia 14, Finland 27, France 1403, Germany 790, Greece 12, Hungary 25, Ireland 46, Italy 131, Latvia 28, Lithuania 9, Luxembourg 3, Malta 3, Netherlands 157, Poland 135, Portugal 65, Romania 35, Slovakia 10, Slovenia 8, Spain 457, Sweden 44

Violence: Austria 233, Belgium 265, Bulgaria 97, Croatia 97, Cyprus 54, Czechia 263, Denmark 112, EU 11386, Estonia 79, Finland 747, France 4126, Germany 5141, Greece 141, Hungary 150, Ireland 299, Italy 798, Latvia 75, Lithuania 47, Luxembourg 60, Malta 30, Netherlands 435, Poland 512, Portugal 238, Romania 209, Slovakia 59, Slovenia 51, Spain 3471, Sweden 292

Actions Taken on Illegal Content Notices - 1 October to 31 March

Closure Type

Action Type

Grounds for Action

Reason Code

Austria

Belgium

Bulgaria

Croatia

Cyprus

Czechia

Denmark

EU

Estonia

Finland

France

Germany

Greece

Hungary

Ireland

Italy

Latvia

Lithuania

Luxembourg

Malta

Netherlands

Poland

Portugal

Romania

Slovakia

Slovenia

Spain

Sweden

Automated Means

No Violation Found

Terms of Service and/or X’s Rules or Policies

Animal welfare

1

1

1

2

Terms of Service and/or X’s Rules or Policies

Data protection & privacy violations

1

2

1

4

9

14

6

4

5

3

2

3

4

1

15

1

Terms of Service and/or X’s Rules or Policies

Illegal or harmful speech

4

3

1

2

13

39

1

4

1

2

1

7

2

1

5

4

2

1

27

Terms of Service and/or X’s Rules or Policies

Negative effects on civic discourse or elections

3

3

1

1

1

1

Terms of Service and/or X’s Rules or Policies

Non-consensual behavior

1

1

3

4

2

1

2

1

1

4

Terms of Service and/or X’s Rules or Policies

Pornography or sexualized content

4

2

1

1

12

64

11

12

5

2

9

1

1

12

11

5

1

1

23

3

Terms of Service and/or X’s Rules or Policies

Protection of minors

10

8

1

1

1

9

17

344

3

270

7

9

13

45

1

5

2

487

456

8

4

8

1367

8

Terms of Service and/or X’s Rules or Policies

Risk for public security

1

1

6

9

3

4

4

1

1

3

2

2

5

5

2

7

4

Terms of Service and/or X’s Rules or Policies

Scams and fraud

1

1

2

1

2

1

4

Terms of Service and/or X’s Rules or Policies

Scope of platform service

2

3

3

2

3

1

1

2

1

1

7

Terms of Service and/or

X’s Rules or Policies

Self-harm

1

1

1

1

1

1

1

Terms of Service and/or

X’s Rules or Policies

Unsafe and illegal products

1

1

Terms of Service and/or

X’s Rules or Policies

Violence

13

1

1

1

1

1

Manual Closure

Content removed globally following illegal content notice

Basis of Law and/or

Local Laws

Animal welfare

3

1

4

Basis of Law and/or

Local Laws

Data protection & privacy violations

1

1

1

1

6

5

1

2

1

5

Basis of Law and/or

Local Laws

Illegal or harmful speech

2

8

1

38

14

1

5

5

2

24

Basis of Law and/or

Local Laws

Non-consensual behavior

1

4

1

1

6

4

2

1

1

Basis of Law and/or

Local Laws

Pornography or sexualized content

14

1

1

1

2

73

3

2

26

35

11

5

34

3

14

21

10

4

3

54

3

Basis of Law and/or

Local Laws

Protection of minors

10

9

1

2

75

4

33

11

14

7

34

Basis of Law and/or

Local Laws

Risk for public security

8

1

5

4

Basis of Law and/or

Local Laws

Scams and fraud

3

8

1

2

22

2

1

19

4

1

2

3

10

1

1

10

3

1

20

Basis of Law and/or

Local Laws

Scope of platform service

1

Basis of Law and/or

Local Laws

Unsafe and illegal products

1

Basis of Law and/or

Local Laws

Violence

4

4

8

4

6

11

Manual Closure

Country Withheld Content

Basis of Law and/or

Local Laws

Animal welfare

2

2

1

1

1

402

13

31

1

2

2

1

3

2

1

6

2

Basis of Law and/or

Local Laws

Data protection & privacy violations

20

41

8

8

19

53

23

482

8

12

252

367

40

26

33

67

5

3

7

138

93

59

33

4

4

298

50

Basis of Law and/or

Local Laws

Illegal or harmful speech

1036

452

180

54

28

1174

128

19993

36

194

6570

14216

133

32

807

1239

55

51

22

3

722

1102

564

311

31

26

8889

256

Basis of Law and/or

Local Laws

Negative effects on civic discourse or elections

18

6

2

3

3

19

6

162

1

53

1129

3

1

63

25

3

1

28

74

5

30

3

51

6

Basis of Law and/or

Local Laws

Non-consensual behavior

3

29

2

1

1

12

10

511

5

3

173

162

8

21

18

12

9

63

71

7

3

3

1

144

33

Basis of Law and/or

Local Laws

Pornography or sexualized content

11

24

46

6

12

47

18

3217

39

35

1656

954

48

95

11

52

6

20

16

2

48

306

17

88

12

2

135

76

Basis of Law and/or

Local Laws

Protection of minors

8

10

5

2

12

4

445

21

21

163

582

10

28

13

14

1

7

1

1

34

56

10

9

5

212

49

Basis of Law and/or

Local Laws

Risk for public security

25

17

4

4

2

22

6

326

6

6

217

567

12

3

19

27

5

3

1

29

52

15

15

4

3

67

19

Basis of Law and/or

Local Laws

Scams and fraud

59

106

27

15

16

172

74

1787

25

76

241

1532

39

55

484

179

14

15

15

3

314

191

89

56

6

10

553

67

Basis of Law and/or

Local Laws

Scope of platform service

1

1

1

84

3

7

8

1

1

7

3

1

2

2

2

7

2

Basis of Law and/or

Local Laws

Self-harm

4

2

2

92

1

16

29

1

3

3

6

1

2

5

2

1

17

16

Basis of Law and/or

Local Laws

Unsafe and illegal products

8

8

2

4

30

3

483

1

1

253

275

8

5

7

21

12

1

28

23

12

6

3

61

8

Basis of Law and/or

Local Laws

Violence

50

32

14

16

1

39

12

2802

8

171

547

962

14

27

40

158

10

3

1

1

54

119

42

27

4

1

705

46

Manual Closure

Content removed globally following illegal content notice

Terms of Service and/or X’s Rules or Policies

Intellectual Property Infringements

58

7

9

2

6

66

2

5

96

996

1022

53

5

18

168

8

83

1

180

930

406

178

1

9

1550

12

Manual Closure

Global content deletion based on a violation of XIUC Terms

Terms of Service and/or

X’s Rules or Policies

Animal welfare

1

5

2

1

5

69

2

3

25

82

1

3

3

5

13

14

3

3

1

26

5

Terms of Service and/or

X’s Rules or Policies

Data protection & privacy violations

10

29

2

22

1

3

14

181

11

8

215

315

4

10

12

17

2

1

76

30

11

3

2

122

6

Terms of Service and/or

X’s Rules or Policies

Illegal or harmful speech

45

35

17

5

59

12

1340

9

16

708

759

14

3

29

52

6

1

65

112

35

23

1

5

295

34

Terms of Service and/or

X’s Rules or Policies

Negative effects on civic discourse or elections

1

1

2

9

1

24

32

1

2

1

1

6

1

Terms of Service and/or

X’s Rules or Policies

Non-consensual behavior

1

7

2

8

75

2

4

55

59

6

4

6

9

1

1

10

3

6

1

37

2

Terms of Service and/or

X’s Rules or Policies

Pornography or sexualized content

31

33

8

4

11

23

615

14

22

395

470

6

12

25

62

6

7

2

51

66

19

14

3

1

175

32

Terms of Service and/or

X’s Rules or Policies

Protection of minors

35

62

5

7

1

15

26

1225

54

345

916

4498

18

24

69

114

1

25

2

7

730

576

29

12

17

4

3788

34

Terms of Service and/or

X’s Rules or Policies

Risk for public security

4

7

4

1

6

53

74

8

194

360

3

1

5

5

1

1

2

1

14

27

11

7

1

41

9

Terms of Service and/or

X’s Rules or Policies

Scams and fraud

3

1

27

3

36

9

4

1

1

2

5

2

1

29

Terms of Service and/or

X’s Rules or Policies

Scope of platform service

1

1

1

6

2

43

12

1

2

1

3

1

2

1

5

1

Terms of Service and/or

X’s Rules or Policies

Self-harm

1

5

2

57

1

28

31

1

1

2

4

1

5

10

1

1

2

22

5

Terms of Service and/or

X’s Rules or Policies

Unsafe and illegal products

3

5

1

1

1

11

34

108

2

123

114

8

6

1

2

6

1

24

5

Terms of Service and/or

X’s Rules or Policies

Violence

32

24

3

7

34

7

1526

2

129

361

762

11

38

29

64

1

2

64

80

41

24

1

2

383

15

Manual Closure

No Violation Found

Terms of Service and/or

X’s Rules or Policies

Animal welfare

101

37

6

4

2

14

13

432

12

178

171

8

14

14

58

7

3

3

1

78

59

26

4

3

1

313

21

Terms of Service and/or

X’s Rules or Policies

Data protection & privacy violations

128

272

47

47

35

140

107

2313

18

44

1687

1671

170

26

162

397

29

23

15

5

600

679

199

78

6

18

1542

98

Terms of Service and/or

X’s Rules or Policies

Illegal or harmful speech

871

789

351

101

132

1279

238

24730

110

370

13885

13352

427

107

1008

2577

141

108

78

11

1376

2057

782

536

62

50

13186

461

Terms of Service and/or

X’s Rules or Policies

Negative effects on civic discourse or elections

91

91

32

16

12

106

59

1274

20

53

685

3300

40

35

103

147

18

17

9

188

512

71

546

9

4

344

36

Terms of Service and/or

X’s Rules or Policies

Non-consensual behavior

26

48

25

8

18

42

39

3031

12

38

666

358

26

26

60

111

5

12

5

3

171

173

40

10

6

1

498

151

Terms of Service and/or

X’s Rules or Policies

Pornography or sexualized content

148

277

110

11

33

209

150

2028

22

45

1168

915

83

152

156

822

14

23

14

4

313

321

225

112

10

9

1355

95

Terms of Service and/or

X’s Rules or Policies

Protection of minors

42

123

10

10

6

31

58

1078

47

145

688

1365

66

25

106

154

5

17

7

4

478

307

76

17

9

7

3403

55

Terms of Service and/or

X’s Rules or Policies

Risk for public security

65

132

24

26

10

121

56

1119

25

55

1480

1574

44

14

94

162

45

17

6

1

170

302

69

231

19

5

517

70

Terms of Service and/or

X’s Rules or Policies

Scams and fraud

311

575

80

97

47

312

275

4058

114

188

1397

3455

237

254

603

1118

120

53

37

17

1059

802

655

411

43

49

3494

438

Terms of Service and/or

X’s Rules or Policies

Scope of platform service

6

7

2

7

8

7

544

14

5

70

100

6

7

9

17

2

1

21

21

5

8

2

93

2

Terms of Service and/or

X’s Rules or Policies

Self-harm

7

16

3

3

24

4

505

1

20

149

172

7

14

16

31

1

1

2

39

44

13

4

4

172

59

Terms of Service and/or

X’s Rules or Policies

Unsafe and illegal products

33

58

28

3

7

36

53

706

12

24

651

393

4

20

30

101

13

7

3

2

119

107

53

29

7

8

359

31

Terms of Service and/or

X’s Rules or Policies

Violence

145

197

80

74

48

180

86

6871

63

414

2294

3321

114

82

219

557

62

42

54

25

305

295

149

143

53

48

2314

226

Manual Closure

Offer of help in case of self-harm and suicide concern based on XIUC Terms of Service and Rules

Terms of Service and/or

X’s Rules or Policies

Illegal or harmful speech

2

2

1

1

Terms of Service and/or

X’s Rules or Policies

Pornography or sexualized content

1

Terms of Service and/or

X’s Rules or Policies

Protection of minors

1

2

Terms of Service and/or

X’s Rules or Policies

Risk for public security

1

1

Terms of Service and/or

X’s Rules or Policies

Scams and fraud

1

1

Terms of Service and/or

X’s Rules or Policies

Self-harm

1

1

1

2

39

1

32

2

3

11

2

4

11

2

35

5

Terms of Service and/or

X’s Rules or Policies

Violence

1

Reports of Illegal Content Median Handle Time (Hours) - 1 October to 31 March

Reason Code

Austria

Belgium

Bulgaria

Croatia

Cyprus

Czechia

Denmark

Estonia

EU

Finland

France

Germany

Greece

Hungary

Ireland

Italy

Latvia

Lithuania

Luxembourg

Malta

Netherlands

Poland

Portugal

Romania

Slovakia

Slovenia

Spain

Sweden

Animal welfare

4.4

7.4

6.1

4.7

2.5

3.8

3.2

10.1

1.6

2.3

16.7

1.8

8.4

3.1

1.2

3.3

2.3

2.9

0.3

13.5

8.6

5.2

5.1

2.6

5.9

2.7

10.7

2.3

Data protection & privacy violations

3.2

3.5

4.6

1.9

4.3

3.5

5.7

4.4

4.5

4.1

6.9

3.3

3.7

1.4

2.7

4.8

10.5

12.5

2.1

0.9

3.0

9.3

6.5

3.1

8.8

0.7

3.5

4.9

Illegal or harmful speech

1.8

2.3

3.5

1.9

2.0

1.9

3.7

1.5

2.6

2.5

5.6

1.5

2.7

2.9

2.9

1.8

1.2

1.5

3.0

6.1

3.0

2.2

3.9

1.8

1.6

1.1

1.9

2.9

Intellectual Property Infringements

6.8

17.6

38.0

46.6

28.2

13.8

46.0

15.0

14.7

28.8

36.5

15.4

47.1

38.5

4.6

6.8

33.7

45.0

37.1

34.3

10.0

41.0

10.5

27.6

14.6

27.7

40.2

Negative effects on civic discourse or elections

1.7

1.4

1.2

1.3

2.3

1.8

2.1

1.3

1.3

0.9

9.7

1.2

1.7

2.9

1.0

1.3

2.4

2.0

1.2

2.6

1.3

3.5

1.2

1.6

1.9

2.4

1.1

Non-consensual behavior

6.3

2.8

1.1

9.5

10.6

5.5

8.4

6.4

2.6

11.2

8.3

3.5

11.0

1.0

8.1

1.9

9.0

20.6

5.7

1.3

2.8

3.9

2.2

2.5

0.2

4.1

2.5

1.6

Pornography or sexualized content

3.4

2.8

9.9

2.1

2.9

4.7

4.6

2.7

3.0

4.8

5.0

4.5

7.6

1.9

3.7

2.4

3.0

3.9

2.1

4.6

3.1

2.3

4.7

2.0

10.9

7.5

2.9

3.7

Protection of minors

1.9

3.8

1.1

0.8

1.0

2.8

2.3

2.0

2.5

2.7

4.6

3.0

2.8

1.8

4.9

2.3

0.5

5.9

2.2

4.3

2.2

2.1

5.6

3.8

2.0

9.0

2.2

2.3

Risk for public security

2.6

1.9

3.2

9.1

5.0

3.1

1.6

4.8

3.2

1.6

7.6

2.1

2.2

2.1

2.1

2.0

2.9

1.0

2.0

5.8

2.1

1.9

3.0

1.5

4.0

1.1

2.2

3.5

Scams and fraud

6.3

2.9

2.2

3.6

1.9

1.6

3.4

3.8

3.0

3.9

10.9

1.6

6.5

3.3

1.6

4.6

9.8

11.5

4.3

14.1

3.1

3.8

7.4

3.4

7.4

4.6

2.1

9.2

Scope of platform service

6.4

5.9

7.9

11.8

17.5

2.8

2.8

1.1

7.9

8.3

7.0

1.6

0.3

5.0

0.6

11.5

0.5

3.3

4.1

1.7

1.5

0.5

3.6

2.2

10.6

Self-harm

5.9

2.6

0.3

7.1

8.2

13.2

1.9

5.0

9.1

7.9

2.5

1.6

1.6

4.5

2.5

1.1

9.1

1.1

2.4

1.9

7.5

1.9

0.9

7.0

5.6

5.2

Unsafe and illegal products

0.5

2.0

8.6

0.7

6.7

2.0

3.4

0.6

5.5

1.6

7.1

1.9

0.2

0.6

4.0

3.0

3.0

4.2

9.0

0.2

1.6

4.2

2.1

11.2

0.7

12.9

2.4

8.6

Violence

2.3

2.1

1.8

0.7

0.4

2.6

2.6

0.8

2.9

5.5

7.4

2.2

1.5

0.8

2.9

2.1

0.6

0.4

0.5

0.4

2.5

2.6

2.0

1.7

0.8

1.0

2.9

1.8
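The median handle times above are presumably the median elapsed time, in hours, between receipt of a report and its closure. A minimal sketch under that assumption (the function name and data shape are illustrative, not taken from the report):

```python
from datetime import datetime
from statistics import median

def median_handle_time_hours(report_pairs):
    """Median hours between report receipt and closure.

    report_pairs: iterable of (received_at, closed_at) datetime pairs.
    (Hypothetical helper; the report does not specify its computation.)
    """
    hours = [(closed - received).total_seconds() / 3600.0
             for received, closed in report_pairs]
    return round(median(hours), 1)

# Illustrative example: two reports handled in 2h and 5h -> median 3.5h.
pairs = [
    (datetime(2024, 10, 1, 9, 0), datetime(2024, 10, 1, 11, 0)),
    (datetime(2024, 10, 2, 9, 0), datetime(2024, 10, 2, 14, 0)),
]
print(median_handle_time_hours(pairs))  # -> 3.5
```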

Own Initiative Enforcements

RESTRICTED REACH LABELS DATA

Restricted Reach Labels - 1 October to 31 March

Detection Method

Enforcement

Policy

Austria

Belgium

Bulgaria

Croatia

Cyprus

Czechia

Denmark

Estonia

Finland

France

Germany

Greece

Hungary

Ireland

Italy

Latvia

Lithuania

Luxembourg

Malta

Netherlands

Poland

Portugal

Romania

Slovakia

Slovenia

Spain

Sweden

Own Initiative

Automated Means

Hateful Conduct

5307

9522

3591

3787

1162

5016

6068

1515

6548

38349

43667

5082

3624

16447

12352

1426

2171

1156

520

28814

23199

6933

8726

1871

2386

29228

15916

Own Initiative

Manual Review

Abuse & Harassment

1

2

1

1

2

1

3

3

1

4

1

6

1

4

2

Own Initiative

Manual Review

Hateful Conduct

41

78

15

28

7

22

52

7

85

150

204

52

25

210

83

4

12

8

3

290

77

33

39

12

19

87

155

Own Initiative

Manual Review

Violent Speech

3

3

2

4

3

2

2

3

16

16

3

3

12

10

1

33

10

4

9

2

2

11

9

User Report

Manual Review

Abuse & Harassment

151

238

90

57

27

99

168

17

165

1289

1379

257

221

322

476

17

52

44

10

1065

596

264

217

22

68

1053

530

User Report

Manual Review

Hateful Conduct

1388

3794

804

867

309

1194

2664

208

1924

12946

13067

1754

912

4860

5052

322

427

359

195

12726

7413

3518

2582

253

855

9937

4588

User Report

Manual Review

Violent Speech

641

770

228

165

94

463

496

87

669

5074

10170

419

218

1062

1801

79

181

101

26

4765

1831

665

517

110

137

2857

1451

ACTIONS TAKEN ON CONTENT FOR XIUC TERMS OF SERVICE AND RULES VIOLATIONS

XIUC Terms of Service and Rules Content Removal Actions - 1 October to 31 March

Detection Method

Enforcement

Policy

Austria

Belgium

Bulgaria

Croatia

Cyprus

Czechia

Denmark

Estonia

Finland

France

Germany

Greece

Hungary

Ireland

Italy

Latvia

Lithuania

Luxembourg

Malta

Netherlands

Poland

Portugal

Romania

Slovakia

Slovenia

Spain

Sweden

Own Initiative

Automated Means

Abuse & Harassment

1

38

1

4

3

1

3

Own Initiative

Automated Means

Child Sexual Exploitation

1

4

1

1

1

1

1

2

20

17

1

4

1

5

1

15

9

2

2

1

13

6

Own Initiative

Automated Means

Hateful Conduct

1

Own Initiative

Automated Means

Non-Consensual Nudity

5

7

7

4

3

10

6

1

4

113

101

18

12

5

39

2

40

24

26

23

4

64

12

Own Initiative

Automated Means

Other

12

42

10

2

2

18

18

2

24

222

215

26

10

39

72

4

6

3

1

162

50

28

21

9

12

185

52

Own Initiative

Automated Means

Perpetrators of Violent Attacks

1

8

1

2

1

Own Initiative

Automated Means

Private Information & media

17

18

3

4

2

4

1

58

278

1

2

2

124

1

42

12

1

12

3

4

23

13

Own Initiative

Automated Means

Sensitive Media

577

913

583

405

151

752

388

111

441

9010

7732

727

935

915

4603

166

189

105

78

3523

2278

1052

1150

269

123

5017

1121

Own Initiative

Automated Means

Violent Speech

1050

2621

590

669

237

982

1221

267

1243

26224

9533

947

709

2834

3084

240

448

202

122

5584

4336

1757

1915

329

311

19975

2804

Own Initiative

Manual Review

Abuse & Harassment

1

Own Initiative

Manual Review

Sensitive Media

1

1

Own Initiative

Manual Review

Suicide & Self Harm

1

1

1

Own Initiative

Manual Review

Violent Speech

3

3

1

1

2

2

3

9

1

1

2

7

7

1

4

3

User Report

Manual Review

Abuse & Harassment

884

1057

576

376

204

764

606

217

950

18176

37160

1004

494

800

6055

347

331

238

56

7507

3958

1037

1364

133

175

6121

1519

User Report

Manual Review

Child Sexual Exploitation

2

10

1

1

1

1

10

12

2

1

1

2

1

26

2

4

16

10

User Report

Manual Review

Deceased Individuals

4

2

1

1

1

9

2

6

2

59

62

2

1

8

31

1

1

46

39

6

3

33

16

User Report

Manual Review

Distribution of Hacked Materials

3

1

1

1

1

1

User Report

Manual Review

Hateful Conduct

34

75

34

19

10

50

34

16

41

766

389

34

21

64

99

14

9

4

2

186

203

96

58

8

13

295

73

User Report

Manual Review

Illegal or certain regulated goods and services

308

233

302

177

95

409

251

86

325

21482

22760

302

281

211

4431

141

183

85

58

3816

1290

431

685

76

99

2275

620

User Report

Manual Review

Intellectual property infringements

1

1

User Report

Manual Review

Misleading & Deceptive Identities

5

2

2

2

User Report

Manual Review

Non-Consensual Nudity

160

228

197

57

41

146

91

12

93

3322

2295

98

105

146

623

75

57

34

9

1503

676

166

334

58

38

756

242

User Report

Manual Review

Paid Partnerships (Monetization policy)

10

3

2

1

User Report

Manual Review

Perpetrators of Violent Attacks

1

1

1

3

6

3

1

2

4

3

1

1

User Report

Manual Review

Private Information & media

44

78

17

13

16

29

19

6

60

766

635

26

26

92

133

16

3

7

1

370

178

90

85

10

15

606

147

User Report

Manual Review

Sensitive Media

209

436

135

98

59

285

163

129

153

3729

2882

329

253

401

1231

53

67

18

16

1454

1039

360

529

93

63

1900

402

User Report

Manual Review

Suicide & Self Harm

139

158

60

39

25

142

95

34

137

1074

2402

153

87

210

630

16

55

15

6

590

1198

187

166

23

24

1257

314

User Report

Manual Review

Violent & Hateful Entities

1

1

2

1

1

5

1

6

User Report

Manual Review

Violent Speech

1285

1910

595

502

140

1274

834

200

1128

16108

16947

1230

526

2067

6629

243

409

209

62

6794

8830

1980

1466

338

271

10576

2573

ACTIONS TAKEN ON ACCOUNTS FOR XIUC TERMS OF SERVICE AND RULES VIOLATIONS

XIUC Terms of Service and Rules Account Suspensions - 1 October to 31 March

Detection Method

Enforcement

Policy

Austria

Belgium

Bulgaria

Croatia

Cyprus

Czechia

Denmark

Estonia

Finland

France

Germany

Greece

Hungary

Ireland

Italy

Latvia

Lithuania

Luxembourg

Malta

Netherlands

Poland

Portugal

Romania

Slovakia

Slovenia

Spain

Sweden

Own Initiative

Automated Means

Abuse & Harassment

2

3

1

Own Initiative

Automated Means

Ban Evasion

10

6

1

7

1

1

1

1

57

95

16

3

7

12

1

3

2

29

19

3

12

26

4

Own Initiative

Automated Means

CWC for various countries for illegal activity

1

27

3

1

4

1

2

2

3

9

Own Initiative

Automated Means

Child Sexual Exploitation

1533

2161

963

1003

343

1894

1708

570

1194

32941

29569

1191

1575

1819

6710

615

655

283

155

10511

10467

1711

2765

635

372

9558

3190

Own Initiative

Automated Means

Financial Scam

3

5

4

3

1

1

1

68

133

3

2

2

17

3

1

10

4

3

5

1

1

17

2

Own Initiative

Automated Means

Help with my compromised account

1

1

1

Own Initiative

Automated Means

Illegal or certain regulated goods and services

11

8

8

8

2

25

4

7

33

333

310

16

8

12

79

11

5

1

202

67

11

28

5

1

54

15

Own Initiative

Automated Means

Misleading & Deceptive Identities

359

519

319

166

135

415

293

83

1636

8405

5740

469

272

506

2021

850

155

100

56

3542

2590

544

699

106

102

3225

649

Own Initiative

Automated Means

Non-Consensual Nudity

1

1

2

1

Own Initiative

Automated Means

Other

34

22

25

9

7

23

12

1

6

2286

2348

8

8

21

112

8

4

2

4

132

110

31

46

6

14

914

24

Own Initiative

Automated Means

Perpetrators of Violent Attacks

5

3

1

6

6

4

4

17

52

79

1

5

29

21

3

19

1

38

46

36

8

69

32

Own Initiative

Automated Means

Platform Manipulation & Spam

768977

887929

587549

533245

234342

1127520

591260

324718

970573

9026603

10550111

832342

768735

532892

6928641

999895

499288

106809

191696

4415694

4723010

980498

1708144

318744

307200

4275658

1284839

Own Initiative

Automated Means

Sensitive Media

64

50

31

231

116

108

6

59

23

804

2946

298

330

10

638

96

9

1

3

108

2193

277

105

20

373

215

125

Own Initiative

Automated Means

Unknown

197

199

139

305

46

323

94

129

235

4191

1861

313

176

245

1732

173

42

10

40

3231

1292

184

361

114

136

625

587

Own Initiative

Automated Means

Violent & Hateful Entities

91

75

71

13

18

40

29

12

40

812

915

28

22

31

95

10

10

16

3

752

177

30

145

2

2

103

150

Own Initiative

Automated Means

Violent Speech

1

1

1

1

1

Own Initiative

Manual Review

Child Sexual Exploitation

37

54

30

13

2

47

48

22

35

1701

1382

31

89

71

107

37

20

17

2

299

144

46

73

17

11

301

85

User Report

Manual Review

Abuse & Harassment

550

634

567

420

272

869

424

245

676

16226

52551

827

515

664

7291

410

321

149

79

5531

2657

956

1199

155

239

4336

1264

User Report

Manual Review

Ban Evasion

7

7

1

2

2

1

1

69

52

7

1

5

5

1

1

29

8

6

4

34

4

User Report

Manual Review

CWC for various countries for illegal activity

1

7

1

1

1

3

4

User Report

Manual Review

Child Sexual Exploitation

10

26

12

2

1

9

15

11

407

242

7

5

24

53

1

12

1

108

46

16

19

10

68

26

User Report

Manual Review

Deceased Individuals

1

1

1

1

3

User Report

Manual Review

Distribution of Hacked Materials

1

User Report

Manual Review

Financial Scam

1

1

4

4

1

1

1

1

2

1

3

1

User Report

Manual Review

Hateful Conduct

9

16

8

2

3

8

8

4

8

217

107

9

3

19

24

6

7

2

34

38

22

13

3

5

70

15

User Report

Manual Review

Help with my compromised account

3

User Report

Manual Review

Illegal or certain regulated goods and services

288

323

546

224

149

924

221

112

378

16681

17173

296

249

279

3564

190

157

71

55

4012

1568

388

597

119

104

1923

532

User Report

Manual Review

Intellectual property infringements

4

19

2

1

1

7

7

1

6

323

87

13

5

8

59

5

4

1

2

57

48

41

20

1

4

114

28

User Report

Manual Review

Misleading & Deceptive Identities

1001

697

513

314

361

1109

459

237

347

7525

5814

1046

577

240

2712

307

398

21

28

3004

2109

640

2106

94

399

3419

1316

User Report

Manual Review

Non-Consensual Nudity

34

69

88

15

12

55

30

6

34

1187

692

28

39

53

233

28

18

12

2

512

245

61

139

19

9

279

103

User Report

Manual Review

Other

28

26

9

9

2

17

27

9

14

231

199

13

17

17

69

4

8

1

144

65

24

31

7

6

106

26

User Report

Manual Review

Perpetrators of Violent Attacks

3

2

1

3

4

5

4

17

32

2

12

12

3

6

1

13

23

13

1

1

14

6

User Report

Manual Review

Platform Manipulation & Spam

325

321

200

332

106

570

224

117

200

8326

4618

297

210

602

2161

139

110

44

34

2514

1151

410

640

89

84

1818

783

User Report

Manual Review

Private Information & media

2

8

2

1

2

4

55

46

4

1

8

9

24

18

7

4

2

41

5

User Report

Manual Review

Sensitive Media

1

1

2

3

3

59

22

3

1

2

6

1

9

4

1

2

2

1

5

6

User Report

Manual Review

Suicide & Self Harm

8

7

1

2

5

2

1

5

18

39

2

1

10

17

1

3

18

25

2

6

1

29

16

User Report

Manual Review

Unknown

1

2

2

1

1

User Report

Manual Review

Username Squatting

1

1

1

1

1

User Report

Manual Review

Violent & Hateful Entities

27

21

12

2

13

8

12

5

17

185

193

12

7

10

49

4

6

4

1

170

32

7

39

1

39

40

User Report

Manual Review

Violent Speech

131

204

69

71

21

146

97

24

124

1996

1303

103

77

258

537

18

56

28

13

695

722

201

202

35

34

1220

310

Overall Figures

COMPLAINTS OF ACTIONS TAKEN FOR ILLEGAL CONTENT RECEIVED - 1 October to 31 March

Country | Complaints Received | Overturned Appeals | Median Time to Respond (Hours)
Austria | 50 | 7 | 0.6
Belgium | 24 | 3 | 1.7
Bulgaria | 53 | 6 | 11.2
Croatia | 3 | 0 | 1.8
Cyprus | 3 | 0 | 4.6
Czechia | 128 | 23 | 0.8
Denmark | 4 | 1 | 3.5
Estonia | 2 | 0 | 2.2
Finland | 4 | 0 | 0.9
France | 122 | 12 | 1.1
Germany | 204 | 66 | 1.2
Greece | 24 | 6 | 1.5
Hungary | 26 | 4 | 1.0
Ireland | 48 | 8 | 2.4
Italy | 66 | 14 | 0.6
Latvia | 2 | 1 | 0.5
Lithuania | 1 | 0 | 2.3
Netherlands | 127 | 14 | 1.4
Poland | 94 | 11 | 0.6
Portugal | 43 | 12 | 3.6
Romania | 41 | 9 | 2.3
Slovakia | 2 | 0 | 0.4
Slovenia | 1 | 0 | 0.5
Spain | 848 | 165 | 0.7
Sweden | 32 | 1 | 2.4
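The overturn figures above can be read as a per-country reversal share. A minimal sketch, assuming that share is simply overturned appeals divided by complaints received (an assumption; the report does not define a derived rate here), using Austria's figures from the table (50 received, 7 overturned):

```python
def overturn_share(overturned: int, received: int) -> float:
    """Share of complaints whose contested action was overturned, in percent.

    Hypothetical derived metric: overturned appeals / complaints received.
    """
    if received == 0:
        return 0.0
    return round(100.0 * overturned / received, 1)

# Austria in the table above: 50 complaints received, 7 overturned.
print(overturn_share(7, 50))  # -> 14.0
```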

COMPLAINTS OF ACTIONS TAKEN FOR XIUC TERMS OF SERVICE AND RULES VIOLATIONS RECEIVED - 1 October to 31 March

Appeal Category

Metric

Austria

Belgium

Bulgaria

Croatia

Cyprus

Czechia

Denmark

Estonia

Finland

France

Germany

Greece

Hungary

Ireland

Italy

Latvia

Lithuania

Luxembourg

Malta

Netherlands

Poland

Portugal

Romania

Slovakia

Slovenia

Spain

Sweden

Content Action Complaints

Country | Complaints Received | Overturned Appeals | Median Time to Respond (Hours)
Austria | 457 | 43 | 2.3
Belgium | 651 | 79 | 2.8
Bulgaria | 176 | 20 | 10.2
Croatia | 132 | 13 | 4.0
Cyprus | 86 | 3 | 381.5
Czechia | 273 | 29 | 1.1
Denmark | 258 | 30 | 11.2
Estonia | 70 | 4 | 3.1
Finland | 274 | 18 | 162.0
France | 5439 | 667 | 2.1
Germany | 5159 | 449 | 13.7
Greece | 342 | 31 | 230.3
Hungary | 198 | 11 | 364.0
Ireland | 695 | 106 | 0.5
Italy | 1714 | 67 | 398.1
Latvia | 84 | 11 | 246.7
Lithuania | 113 | 12 | 0.5
Luxembourg | 82 | 3 | 217.3
Malta | 61 | 5 | 183.8
Netherlands | 2355 | 171 | 366.5
Poland | 835 | 78 | 3.8
Portugal | 600 | 58 | 1.4
Romania | 408 | 35 | 2.8
Slovakia | 75 | 4 | 347.7
Slovenia | 85 | 8 | 3.0
Spain | 4134 | 569 | 0.8
Sweden | 603 | 44 | 227.7

Live Feature Action Complaints

Complaints Received

6

2

10

4

3

24

174

209

9

13

8

6

4

2

1

140

47

14

18

2

3

105

36

Live Feature Action Complaints

Overturned Appeals

0

0

0

0

0

0

16

13

1

3

1

0

2

0

0

7

3

3

1

0

0

16

1

Live Feature Action Complaints

Median Time to Respond (Hours)

116.5

22.2

23.5

217.7

23.2

23.9

23.8

23.9

12.7

24.0

48.3

13.6

16.6

55.1

23.6

23.9

21.9

23.8

23.5

22.2

19.0

23.8

23.9

Sensitive Media Action Complaints

Complaints Received

14

58

31

10

7

37

65

22

34

306

275

39

15

28

136

15

8

1

187

111

47

45

11

1

150

89

Sensitive Media Action Complaints

Overturned Appeals

7

46

2

6

5

28

48

19

27

181

175

27

9

22

86

12

4

1

126

71

32

37

4

1

112

53

Sensitive Media Action Complaints

Median Time to Respond (Hours)

0.9

1.4

674.5

0.8

1.2

1.6

0.7

0.3

2.2

1.4

1.3

2.3

3.1

2.1

1.0

0.8

0.3

2.6

1.2

1.0

2.4

0.9

0.1

1.0

1.8

1.4

Account Suspension Complaints

Country | Complaints Received | Overturned Appeals | Median Time to Respond (Hours)
Austria | 2149 | 236 | 4.2
Belgium | 3184 | 365 | 2.5
Bulgaria | 1347 | 138 | 4.1
Croatia | 603 | 75 | 6.3
Cyprus | 422 | 47 | 2.8
Czechia | 2034 | 241 | 3.9
Denmark | 1730 | 193 | 3.2
Estonia | 583 | 68 | 6.1
Finland | 4186 | 461 | 37.1
France | 28919 | 3404 | 3.5
Germany | 38332 | 4261 | 4.7
Greece | 1857 | 220 | 2.4
Hungary | 1806 | 196 | 1.1
Ireland | 2317 | 313 | 3.1
Italy | 9546 | 906 | 2.8
Latvia | 738 | 80 | 14.7
Lithuania | 863 | 102 | 8.2
Luxembourg | 381 | 37 | 8.2
Malta | 133 | 17 | 2.3
Netherlands | 23597 | 2821 | 6.5
Poland | 9317 | 1041 | 9.4
Portugal | 3190 | 352 | 2.3
Romania | 3711 | 428 | 6.4
Slovakia | 713 | 99 | 1.5
Slovenia | 389 | 42 | 6.1
Spain | 15854 | 1877 | 1.2
Sweden | 4523 | 614 | 6.0

Restricted Reach Complaints

Country | Complaints Received | Overturned Appeals | Median Time to Respond (Hours)
Austria | 235 | 71 | 1.4
Belgium | 316 | 93 | 0.8
Bulgaria | 134 | 42 | 0.5
Croatia | 120 | 33 | 0.7
Cyprus | 30 | 10 | 1.8
Czechia | 251 | 72 | 0.7
Denmark | 256 | 84 | 1.3
Estonia | 87 | 33 | 0.8
Finland | 277 | 82 | 1.1
France | 1578 | 499 | 1.2
Germany | 2307 | 757 | 1.1
Greece | 220 | 66 | 0.8
Hungary | 119 | 39 | 1.0
Ireland | 848 | 296 | 1.2
Italy | 640 | 223 | 1.5
Latvia | 49 | 16 | 1.6
Lithuania | 72 | 26 | 0.5
Luxembourg | 19 | 6 | 0.7
Malta | 23 | 8 | 1.2
Netherlands | 1649 | 558 | 1.0
Poland | 753 | 227 | 1.0
Portugal | 252 | 73 | 1.3
Romania | 322 | 94 | 0.6
Slovakia | 61 | 16 | 0.8
Slovenia | 72 | 22 | 0.7
Spain | 1478 | 493 | 1.3
Sweden | 661 | 218 | 0.9

INDICATORS OF ACCURACY FOR CONTENT MODERATION

VISIBILITY FILTERING INDICATORS

Metric

Enforcement

Policy

Bulgarian

Czech

Danish

Dutch

English

Estonian

Finnish

French

German

Greek

Hungarian

Italian

Latvian

Lithuanian

Polish

Portuguese

Romanian

Slovenian

Spanish

Swedish

Appeal Rate

Automated Means

Hateful Conduct

0.0%

7.7%

3.2%

8.9%

2.0%

6.9%

6.2%

2.0%

7.3%

0.0%

0.4%

6.3%

5.5%

0.0%

5.4%

3.5%

1.2%

8.0%

2.5%

14.0%

Manual Review

Abuse & Harassment

28.6%

23.1%

0.0%

7.0%

2.4%

0.0%

0.0%

9.0%

12.0%

7.7%

0.0%

12.4%

0.0%

0.0%

3.4%

3.0%

8.6%

0.0%

5.7%

2.3%

Manual Review

Hateful Conduct

2.7%

2.2%

0.9%

2.1%

1.1%

0.9%

2.4%

1.1%

3.0%

0.7%

0.0%

3.0%

0.0%

0.0%

0.8%

1.2%

1.8%

0.9%

1.9%

1.5%

Manual Review

Violent Speech

0.0%

0.6%

0.0%

0.6%

0.9%

0.0%

0.0%

0.8%

1.3%

0.0%

0.0%

1.4%

0.0%

0.0%

1.5%

1.2%

1.2%

0.0%

1.5%

4.0%

Note: Blank cells indicate that there was no enforcement. Cells containing a '0.0%' value indicate that there were no successful appeals or overturns.

Metric

Enforcement

Policy

Bulgarian

Czech

Danish

Dutch

English

Estonian

Finnish

French

German

Greek

Hungarian

Italian

Latvian

Lithuanian

Polish

Portuguese

Romanian

Slovenian

Spanish

Swedish

Overturn Rate

Automated Means

Hateful Conduct

46.7%

19.3%

59.5%

40.4%

64.3%

27.3%

31.3%

50.4%

0.0%

68.0%

33.3%

52.4%

36.4%

33.3%

50.0%

39.7%

72.2%

Manual Review

Abuse & Harassment

100.0%

100.0%

80.0%

63.4%

90.1%

113.3%

80.0%

76.9%

53.8%

33.3%

80.0%

75.9%

100.0%

Manual Review

Hateful Conduct

0.0%

0.0%

100.0%

28.2%

36.9%

0.0%

44.4%

32.7%

38.5%

0.0%

31.9%

16.1%

33.3%

66.7%

0.0%

25.3%

0.0%

Manual Review

Violent Speech

0.0%

50.0%

14.5%

17.4%

12.3%

18.2%

33.3%

33.3%

0.0%

19.2%

16.7%

Note: Blank cells indicate that there was no enforcement. Cells containing a '0.0%' value indicate that there were no successful appeals or overturns.
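Assuming the appeal rate is appeals filed as a share of enforcements and the overturn rate is successful appeals as a share of appeals filed (an interpretation consistent with the table notes, not an explicit definition from the report), the two indicators multiply into a rough estimate of the share of enforcements ultimately reversed. A minimal sketch under those assumptions, with illustrative figures:

```python
def reversed_share(appeal_rate: float, overturn_rate: float) -> float:
    """Estimated share of enforcements eventually reversed on appeal.

    appeal_rate: fraction of enforcements that were appealed (e.g. 0.05).
    overturn_rate: fraction of those appeals that succeeded (e.g. 0.40).
    (Hypothetical derived metric, not defined in the report.)
    """
    return appeal_rate * overturn_rate

# Illustrative: a 5% appeal rate with a 40% overturn rate implies
# roughly 2% of enforcements were reversed.
print(round(reversed_share(0.05, 0.40), 3))  # -> 0.02
```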

INDICATORS OF ACCURACY FOR CONTENT REMOVAL

Metric

Enforcement

Policy

Bulgarian

Croatian

Czech

Danish

Dutch

English

Estonian

Finnish

French

German

Greek

Hungarian

Irish

Italian

Latvian

Polish

Portuguese

Romanian

Slovak

Slovenian

Spanish

Swedish

Appeal Rate

Automated Means

Sensitive Media

0.0%

0.4%

0.0%

0.0%

0.0%

0.0%

Non-Consensual Nudity

0.0%

0.0%

0.0%

0.0%

0.0%

0.4%

0.0%

0.0%

0.0%

11.1%

0.0%

3.7%

0.0%

22.2%

0.0%

2.6%

0.0%

Abuse & Harassment

0.0%

0.0%

0.0%

0.0%

0.0%

0.0%

Private Information & media

0.0%

0.0%

2.2%

2.9%

4.6%

1.2%

0.0%

0.0%

0.0%

Perpetrators of Violent Attacks

0.0%

0.0%

Violent Speech

0.0%

0.5%

1.6%

1.1%

1.6%

4.4%

0.0%

1.5%

5.0%

6.4%

1.7%

1.3%

0.0%

1.0%

0.0%

1.1%

4.3%

0.4%

1.4%

5.5%

2.2%

Child Sexual Exploitation

0.0%

0.0%

0.0%

0.0%

0.0%

0.0%

0.0%

0.0%

0.0%

0.0%

0.0%

Other

0.0%

0.0%

0.0%

0.0%

0.0%

0.0%

0.0%

1.0%

0.0%

0.0%

0.0%

0.0%

0.0%

0.0%

0.0%

0.0%

0.0%

Manual Review

Deceased Individuals

0.0%

0.0%

0.9%

0.0%

0.0%

0.0%

0.0%

0.0%

0.0%

6.3%

0.0%

Suicide & Self Harm

0.0%

0.0%

0.0%

0.0%

2.9%

5.0%

0.0%

6.4%

11.3%

2.3%

0.0%

1.0%

0.0%

0.8%

5.5%

0.0%

0.0%

0.0%

6.0%

0.0%

Sensitive Media

0.0%

0.0%

0.0%

0.0%

3.1%

2.9%

0.0%

4.6%

8.0%

0.0%

0.0%

0.4%

0.0%

0.4%

6.9%

0.0%

0.0%

3.7%

0.0%

Hateful Conduct

0.0%

0.0%

0.0%

0.0%

Abuse & Harassment

0.0%

2.3%

1.0%

0.0%

3.1%

0.8%

1.2%

6.8%

8.6%

0.0%

0.0%

0.0%

0.6%

0.0%

1.0%

8.8%

0.0%

0.0%

0.0%

7.7%

0.0%

Non-Consensual Nudity

0.0%

0.0%

0.0%

0.0%

0.7%

0.8%

6.3%

1.8%

2.1%

0.0%

0.0%

0.4%

0.0%

0.8%

0.0%

0.0%

1.6%

4.3%

Private Information & media

0.0%

33.3%

0.0%

16.7%

1.1%

3.7%

0.0%

3.4%

10.3%

0.0%

0.0%

0.0%

2.6%

5.9%

0.0%

8.3%

5.6%

Perpetrators of Violent Attacks

0.0%

0.0%

0.0%

0.0%

0.0%

0.0%

Violent Speech

3.5%

0.0%

1.4%

1.2%

0.5%

3.5%

1.2%

4.9%

6.1%

2.1%

0.7%

0.0%

0.6%

0.0%

0.7%

4.6%

0.0%

0.0%

5.1%

1.3%

Distribution of Hacked Materials

0.0%

50.0%

0.0%

Illegal or certain regulated goods and services

0.0%

0.0%

0.0%

0.6%

2.4%

0.0%

0.0%

0.0%

0.0%

0.0%

0.0%

0.0%

0.0%

Intellectual property infringements

0.0%

0.0%

Child Sexual Exploitation

0.0%

0.0%

0.0%

0.0%

0.0%

0.0%

0.0%

0.0%

0.0%

Note: Blank cells indicate that there was no enforcement. Cells containing a '0.0%' value indicate that there were no successful appeals or overturns.

Metric

Enforcement

Policy

Bulgarian

Croatian

Czech

Danish

Dutch

English

Estonian

Finnish

French

German

Greek

Hungarian

Irish

Italian

Latvian

Polish

Portuguese

Romanian

Slovak

Slovenian

Spanish

Swedish

Overturn Rate

Automated Means

Sensitive Media

0.0%

Non-Consensual Nudity

100.0%

100.0%

0.0%

100.0%

0.0%

Abuse & Harassment

Private Information & media

0.0%

100.0%

57.1%

0.0%

Perpetrators of Violent Attacks

Violent Speech

0.0%

75.0%

33.3%

43.5%

32.0%

40.0%

31.8%

35.5%

66.7%

50.0%

30.0%

42.9%

29.6%

0.0%

0.0%

32.6%

33.3%

Child Sexual Exploitation

Other

100.0%

Manual Review

Deceased Individuals

100.0%

0.0%

Suicide & Self Harm

25.0%

13.4%

27.3%

12.7%

0.0%

0.0%

0.0%

0.0%

19.6%

Sensitive Media

0.0%

12.6%

6.7%

5.3%

0.0%

0.0%

0.0%

5.3%

Hateful Conduct

Abuse & Harassment

0.0%

0.0%

50.0%

7.4%

0.0%

13.8%

8.5%

50.0%

20.0%

7.1%

19.7%

Non-Consensual Nudity

0.0%

23.6%

0.0%

7.7%

22.2%

0.0%

50.0%

50.0%

100.0%

Private Information & media

0.0%

0.0%

100.0%

12.7%

0.0%

13.6%

0.0%

50.0%

14.7%

0.0%

Perpetrators of Violent Attacks

Violent Speech

25.0%

14.3%

100.0%

55.6%

14.7%

66.7%

17.8%

16.2%

50.0%

0.0%

19.0%

24.1%

7.0%

18.0%

14.3%

Distribution of Hacked Materials

0.0%

Illegal or certain regulated goods and services

25.0%

0.0%

0.0%

Intellectual property infringements

Child Sexual Exploitation

Note: Blank cells indicate that there was no enforcement. Cells containing ‘0.0%’ indicate that there were no successful appeals or overturns.

INDICATORS OF ACCURACY FOR SUSPENSIONS

Metric

Enforcement

Policy

Bulgarian

Croatian

Czech

Danish

Dutch

English

Estonian

Finnish

French

German

Greek

Hungarian

Irish

Italian

Latvian

Lithuanian

Polish

Portuguese

Romanian

Slovak

Slovenian

Spanish

Swedish

Appeal Rate

Automated Means

Abuse & Harassment

20.0%

0.0%

Automated Means

Ban Evasion

50.0%

15.0%

6.7%

17.4%

50.0%

0.0%

0.0%

0.0%

Automated Means

Child Sexual Exploitation

39.7%

39.7%

45.2%

33.1%

47.6%

8.1%

27.3%

46.3%

52.5%

57.8%

54.7%

63.4%

22.9%

45.0%

40.6%

48.6%

58.2%

57.7%

36.0%

Automated Means

CWC for various countries for illegal activity

0.0%

0.0%

0.0%

0.0%

Automated Means

Financial Scam

50.0%

0.4%

0.0%

25.0%

50.0%

0.0%

50.0%

Automated Means

Help with my compromised account

0.0%

Automated Means

Illegal or certain regulated goods and services

0.0%

0.0%

0.0%

100.0%

0.0%

0.0%

0.0%

0.0%

Automated Means

Misleading & Deceptive Identities

0.0%

0.0%

0.0%

0.0%

0.0%

0.0%

0.0%

0.2%

0.0%

0.0%

0.0%

0.0%

0.0%

0.0%

0.0%

0.0%

0.0%

0.0%

0.0%

0.1%

1.7%

Automated Means

Non-Consensual Nudity

0.0%

0.0%

Automated Means

Other

0.0%

0.0%

0.0%

0.0%

11.8%

1.0%

0.0%

9.0%

0.0%

0.0%

9.1%

50.0%

0.0%

14.3%

Automated Means

Perpetrators of Violent Attacks

23.8%

100.0%

40.0%

25.0%

50.0%

0.0%

25.0%

42.1%

Automated Means

Platform Manipulation & Spam

2.9%

2.8%

0.7%

0.5%

0.8%

0.1%

0.0%

1.3%

1.4%

1.3%

1.9%

1.2%

0.0%

0.7%

0.2%

0.0%

1.5%

1.3%

1.7%

2.0%

0.0%

1.4%

0.7%

Automated Means

Sensitive Media

0.0%

0.0%

0.0%

0.0%

0.0%

0.0%

0.0%

0.0%

0.0%

0.0%

0.0%

0.0%

0.0%

0.0%

2.8%

Automated Means

Violent & Hateful Entities

28.6%

7.4%

0.0%

22.0%

24.4%

100.0%

60.0%

0.0%

37.5%

9.5%

41.2%

Automated Means

Violent Speech

100.0%

0.0%

100.0%

Manual Review

Abuse & Harassment

70.0%

27.3%

50.0%

46.2%

50.8%

1.4%

51.7%

44.1%

50.7%

50.0%

30.0%

100.0%

43.0%

53.4%

50.0%

43.8%

33.3%

49.6%

54.8%

Manual Review

Ban Evasion

0.0%

0.0%

25.3%

47.1%

14.3%

30.0%

50.0%

42.9%

Manual Review

Child Sexual Exploitation

0.0%

0.0%

0.0%

0.0%

18.8%

12.3%

100.0%

26.7%

38.8%

40.0%

36.4%

32.1%

25.0%

9.5%

40.0%

133.3%

33.0%

40.0%

Manual Review

CWC for various countries for illegal activity

0.0%

0.0%

0.0%

Manual Review

Deceased Individuals

0.0%

50.0%

Manual Review

Distribution of Hacked Materials

0.0%

Manual Review

Financial Scam

0.0%

0.0%

0.0%

0.0%

0.0%

Manual Review

Hateful Conduct

75.0%

0.0%

53.6%

100.0%

61.3%

57.1%

25.0%

0.0%

50.0%

66.7%

100.0%

75.0%

100.0%

Manual Review

Help with my compromised account

0.0%

Manual Review

Illegal or certain regulated goods and services

0.0%

0.0%

0.0%

0.2%

0.0%

28.6%

16.7%

0.0%

20.0%

6.8%

8.7%

0.0%

14.3%

7.5%

0.0%

Manual Review

Intellectual property infringements

0.0%

0.0%

0.0%

40.0%

47.2%

0.0%

198.2%

60.0%

257.1%

0.0%

229.0%

83.3%

156.3%

0.0%

246.6%

0.0%

Manual Review

Misleading & Deceptive Identities

0.0%

0.0%

0.0%

0.0%

0.0%

0.1%

0.0%

0.0%

0.7%

1.9%

0.0%

0.0%

0.0%

0.0%

0.0%

0.0%

1.0%

0.0%

0.0%

0.0%

0.7%

0.0%

Manual Review

Non-Consensual Nudity

57.1%

25.0%

44.4%

33.3%

37.3%

16.1%

33.3%

37.9%

43.1%

16.7%

20.0%

33.3%

15.1%

11.1%

14.3%

50.0%

30.5%

22.2%

Manual Review

Other

0.0%

0.0%

0.0%

4.8%

11.4%

0.0%

1.9%

50.0%

33.3%

0.0%

5.3%

5.6%

0.0%

0.0%

2.4%

0.0%

Manual Review

Perpetrators of Violent Attacks

0.0%

25.0%

100.0%

66.7%

14.3%

100.0%

50.0%

50.0%

42.9%

Manual Review

Platform Manipulation & Spam

40.0%

25.0%

22.7%

37.5%

19.2%

5.7%

7.1%

22.1%

24.9%

9.5%

27.3%

14.8%

0.0%

29.6%

31.5%

19.2%

85.7%

20.4%

8.7%

Manual Review

Private Information & media

25.0%

25.2%

30.8%

55.6%

0.0%

0.0%

50.0%

0.0%

50.0%

58.3%

Manual Review

Sensitive Media

6.4%

0.0%

10.0%

42.9%

0.0%

0.0%

0.0%

0.0%

0.0%

Manual Review

Suicide & Self Harm

100.0%

100.0%

100.0%

47.9%

37.5%

78.6%

58.3%

40.0%

100.0%

0.0%

61.1%

50.0%

Manual Review

Username Squatting

0.0%

0.0%

0.0%

Manual Review

Violent & Hateful Entities

133.3%

66.7%

26.1%

50.0%

16.7%

33.3%

50.0%

0.0%

72.2%

66.7%

100.0%

0.0%

50.0%

87.5%

Manual Review

Violent Speech

57.1%

29.4%

47.1%

63.6%

64.3%

43.2%

40.7%

48.6%

54.1%

54.5%

28.0%

54.8%

0.0%

42.0%

57.3%

60.0%

50.0%

55.6%

53.8%

Note: Blank cells indicate that there was no enforcement. Cells containing ‘0.0%’ indicate that there were no successful appeals or overturns.

Metric

Enforcement

Policy

Bulgarian

Croatian

Czech

Danish

Dutch

English

Estonian

Finnish

French

German

Greek

Hungarian

Irish

Italian

Latvian

Lithuanian

Polish

Portuguese

Romanian

Slovak

Slovenian

Spanish

Swedish

Overturn Rate

Automated Means

Abuse & Harassment

0.0%

Automated Means

Ban Evasion

100.0%

0.0%

0.0%

25.0%

0.0%

Automated Means

Child Sexual Exploitation

3.2%

6.9%

2.9%

2.1%

17.2%

21.5%

14.3%

11.2%

7.0%

1.7%

3.7%

9.8%

0.0%

10.0%

37.1%

0.7%

154.4%

8.2%

0.0%

Automated Means

CWC for various countries for illegal activity

Automated Means

Financial Scam

0.0%

0.0%

0.0%

0.0%

0.0%

Automated Means

Help with my compromised account

Automated Means

Illegal or certain regulated goods and services

0.0%

Automated Means

Misleading & Deceptive Identities

80.0%

100.0%

100.0%

50.0%

Automated Means

Non-Consensual Nudity

Automated Means

Other

0.0%

3.4%

12.5%

0.0%

0.0%

0.0%

Automated Means

Perpetrators of Violent Attacks

38.2%

0.0%

0.0%

50.0%

0.0%

100.0%

37.5%

Automated Means

Platform Manipulation & Spam

12.8%

13.8%

19.4%

25.0%

15.0%

14.1%

31.9%

16.6%

13.5%

14.5%

20.7%

13.6%

0.0%

17.2%

13.5%

18.3%

3.1%

13.5%

26.7%

Automated Means

Sensitive Media

50.0%

0.0%

Automated Means

Violent & Hateful Entities

50.0%

9.9%

0.0%

20.0%

0.0%

0.0%

0.0%

0.0%

42.9%

Automated Means

Violent Speech

0.0%

0.0%

Manual Review

Abuse & Harassment

14.3%

0.0%

0.0%

16.7%

6.5%

5.5%

0.0%

4.9%

2.6%

10.3%

11.1%

0.0%

5.0%

3.7%

6.8%

7.1%

0.0%

9.7%

11.8%

Manual Review

Ban Evasion

2.3%

0.0%

0.0%

0.0%

0.0%

0.0%

Manual Review

Child Sexual Exploitation

0.0%

4.5%

33.3%

0.0%

0.0%

0.0%

25.0%

0.0%

16.7%

0.0%

0.0%

0.0%

0.0%

100.0%

Manual Review

CWC for various countries for illegal activity

Manual Review

Deceased Individuals

0.0%

Manual Review

Distribution of Hacked Materials

Manual Review

Financial Scam

Manual Review

Hateful Conduct

66.7%

90.1%

100.0%

66.7%

110.0%

100.0%

28.6%

100.0%

75.0%

86.7%

100.0%

Manual Review

Help with my compromised account

Manual Review

Illegal or certain regulated goods and services

2.6%

0.0%

0.0%

0.0%

0.0%

0.0%

0.0%

0.0%

Manual Review

Intellectual property infringements

0.0%

5.9%

4.1%

0.0%

5.6%

2.8%

0.0%

8.0%

2.8%

Manual Review

Misleading & Deceptive Identities

10.3%

0.0%

0.0%

100.0%

0.0%

Manual Review

Non-Consensual Nudity

0.0%

0.0%

0.0%

0.0%

5.3%

7.1%

0.0%

2.5%

4.0%

0.0%

0.0%

0.0%

25.0%

0.0%

0.0%

0.0%

5.6%

0.0%

Manual Review

Other

100.0%

7.2%

0.0%

0.0%

0.0%

0.0%

0.0%

0.0%

Manual Review

Perpetrators of Violent Attacks

0.0%

0.0%

0.0%

0.0%

0.0%

0.0%

0.0%

0.0%

Manual Review

Platform Manipulation & Spam

25.0%

0.0%

40.0%

0.0%

15.0%

17.1%

0.0%

21.4%

30.6%

0.0%

16.7%

30.8%

8.1%

0.0%

60.0%

0.0%

13.1%

50.0%

Manual Review

Private Information & media

0.0%

54.5%

25.0%

40.0%

100.0%

0.0%

28.6%

Manual Review

Sensitive Media

16.7%

100.0%

0.0%

Manual Review

Suicide & Self Harm

0.0%

0.0%

0.0%

14.9%

33.3%

9.1%

28.6%

25.0%

0.0%

27.3%

0.0%

Manual Review

Username Squatting

Manual Review

Violent & Hateful Entities

25.0%

0.0%

6.3%

0.0%

0.0%

0.0%

0.0%

0.0%

0.0%

0.0%

0.0%

28.6%

Manual Review

Violent Speech

25.0%

20.0%

29.2%

21.4%

22.7%

30.7%

18.2%

29.7%

20.5%

50.0%

42.9%

16.7%

23.3%

29.4%

50.0%

20.0%

31.2%

34.3%

Note: Blank cells indicate that there was no enforcement. Cells containing ‘0.0%’ indicate that there were no successful appeals or overturns.
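The report does not spell out the exact formulas behind the appeal-rate and overturn-rate tables. As a hedged sketch only, a common convention for such metrics is shown below; the function names and the illustrative figures are hypothetical, not taken from the tables. Under this assumed convention, appeal rates above 100% (which appear in some cells) are possible when appeals relate to enforcement actions taken before the reporting period began.

```python
# Assumed (not report-confirmed) definitions of the rate metrics:
#   appeal rate   = appeals received / enforcement actions taken in the period
#   overturn rate = successful appeals / appeals decided

def appeal_rate(appeals_received: int, actions_taken: int) -> float:
    """Share of enforcement actions that were appealed, as a percentage."""
    return 100.0 * appeals_received / actions_taken

def overturn_rate(successful_appeals: int, appeals_decided: int) -> float:
    """Share of decided appeals that reversed the original action."""
    return 100.0 * successful_appeals / appeals_decided

# Illustrative, hypothetical figures (not drawn from the tables above):
print(round(appeal_rate(35, 100), 1))   # 35.0
print(round(overturn_rate(7, 35), 1))   # 20.0
```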

Art. 24.2: Average Monthly Active Recipients - 1 October 2024 to 31 March 2025

Country Name | Logged In Users | Logged Out Users | Total
Austria | 820,508 | 527,375 | 1,347,884
Belgium | 1,395,023 | 871,258 | 2,266,281
Bulgaria | 419,838 | 244,604 | 664,442
Cyprus | 182,679 | 93,858 | 276,537
Czechia | 963,143 | 927,532 | 1,890,675
Germany | 10,490,768 | 5,107,639 | 15,598,407
Denmark | 773,601 | 330,220 | 1,103,821
Estonia | 186,397 | 97,151 | 283,549
Spain | 9,623,028 | 5,432,943 | 15,055,971
Finland | 1,504,663 | 698,343 | 2,203,007
France | 11,643,813 | 5,777,497 | 17,421,310
Greece | 868,983 | 767,844 | 1,636,827
Croatia | 319,177 | 369,862 | 689,040
Hungary | 701,480 | 471,939 | 1,173,419
Ireland | 1,382,790 | 812,254 | 2,195,045
Italy | 5,457,672 | 2,367,161 | 7,824,833
Lithuania | 264,246 | 148,601 | 412,847
Luxembourg | 110,635 | 61,402 | 172,038
Latvia | 264,294 | 143,346 | 407,641
Malta | 79,743 | 35,850 | 115,594
Netherlands | 4,737,950 | 2,736,329 | 7,474,280
Poland | 4,284,544 | 2,979,738 | 7,264,282
Portugal | 1,550,643 | 704,858 | 2,255,502
Romania | 1,086,831 | 538,532 | 1,625,363
Sweden | 1,729,830 | 812,559 | 2,542,389
Slovenia | 198,042 | 238,148 | 436,190
Slovakia | 273,202 | 219,924 | 493,126
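The average monthly active recipient figures above are internally consistent: for each country, the Total equals Logged In plus Logged Out users to within ±1, with the occasional one-unit difference presumably arising because the monthly averages are rounded separately before being summed. A minimal spot-check, using a few rows copied from the table:

```python
# Spot-check Total ≈ Logged In + Logged Out for the Art. 24.2 table.
# Figures copied from the report; ±1 differences are assumed to come from
# rounding the monthly averages independently of the total.
rows = {
    "Austria": (820_508, 527_375, 1_347_884),
    "Belgium": (1_395_023, 871_258, 2_266_281),
    "Germany": (10_490_768, 5_107_639, 15_598_407),
    "Estonia": (186_397, 97_151, 283_549),
}
for country, (logged_in, logged_out, total) in rows.items():
    assert abs((logged_in + logged_out) - total) <= 1, country
print("all sampled rows consistent to within ±1")
```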

Further Information on Suspensions

During the applicable reporting period, 1 October 2024 to 31 March 2025, we took zero actions for the provision of manifestly unfounded reports or complaints, and zero actions under a standalone category of manifestly illegal content. While manifestly illegal content is not a category under which we took action during the reporting period, we suspended 132,155 accounts for violating our Child Sexual Exploitation policy and 4,626 accounts for violating our Violent & Hateful Entities policy.

Disputes submitted to out-of-court dispute settlement bodies.

To date, zero disputes have been submitted to out-of-court dispute settlement bodies.


Reports received by trusted flaggers.

During the reporting period, we received 271 reports from trusted flaggers approved under Article 22 DSA. As soon as information about newly awarded Article 22 DSA trusted flaggers is published, we immediately enrol them in our trusted flaggers program via their email, username, and account, which ensures prioritisation of human review for their reports.