DSA Transparency Report - October 2025

Introduction

This report covers the content moderation activities of X’s international entity X Internet Unlimited Company (“XIUC”) (formerly known as Twitter International Unlimited Company (“TIUC”)) under the Digital Services Act (DSA) during the reporting period of 1 April 2025 to 30 June 2025.

We may refer to “notices” as defined in the DSA as “user reports” and “reports”.

Description of our Content Moderation Practices

Our content moderation systems are designed and tailored to mitigate systemic risks without unnecessarily restricting the use of our service or fundamental rights, especially freedom of expression. Content moderation activities are anchored in principled policies and leverage a diverse set of interventions to ensure that our actions are reasonable, proportionate and effective. Our content moderation systems blend automated and human review, paired with a robust appeals system that enables our users to quickly raise potential moderation anomalies or mistakes.

Policies 

X's purpose is to serve the public conversation. Violence, harassment, and other similar types of behaviour discourage people from expressing themselves, and ultimately diminish the value of global public conversation. Our Rules are designed to ensure all people can participate in the public conversation freely and safely.

X has policies protecting user safety as well as platform and account integrity. The X Rules and policies are publicly accessible on our Help Center, and we strive to write them in an easily understandable way. We also keep our Help Center updated whenever we modify our Rules.

For the purposes of the summary tables below, the X policy titles in use at the start of the reporting period have been retained, even if they changed throughout the period.

Enforcement 

When determining whether to take enforcement action, we may consider a number of factors, including (but not limited to) the context of the behaviour in question.

When we take enforcement actions, we may do so either on a specific piece of content (e.g., an individual post or Direct Message) or on an account, and we may employ a combination of these options. In most cases, we take these actions because the behaviour violates the X Rules.

To enforce our Rules, we use a combination of machine learning and human review. Our systems are able to surface content to human moderators who use important context to make decisions about potential violations. This work is led by an international, cross-functional team with 24-hour coverage and the ability to cover multiple languages. We also have a complaints process for any potential errors that may occur.

To ensure that our human reviewers are prepared to perform their duties, we provide them with a robust support system. Each human reviewer goes through extensive training and refreshers, is provided with a suite of tools that enables them to do their job effectively, and has access to a range of wellness initiatives. For further information on our human review resources, see the section titled “Human resources dedicated to Content Moderation”.

Reporting violations

X strives to provide an environment where people can feel free to express themselves. If abusive behaviour happens, we want to make it easy for people to report it to us. EU users can also report any violation of our Rules or their local laws, no matter where such violations appear.

Transparency

We always aim to exercise moderation with transparency. Where our systems or teams take action against content or an account for violating our Rules, or in response to a valid and properly scoped request from an authorised entity in a given country, we strive to provide context to users. Our Help Center article explains the notices that users may encounter following actions taken. We promptly notify affected users about legal requests to withhold content, including providing a copy of the original request, unless we are legally prohibited from doing so. We have also updated our global transparency center to cover a broader array of our transparency efforts.

Content Moderation Governance Structure

Own Initiative Content Moderation Activities

X employs a combination of heuristics and machine learning algorithms to automatically detect content that we believe violates the X Rules and policies enforced on our platform. We use combinations of natural language processing models, image processing models and other sophisticated machine learning methods to detect potentially violative content. These models vary in complexity and in the outputs they produce. For example, the model used to detect abuse on the platform is trained on abuse violations detected in the past. Content flagged by these machine learning models is either reviewed by human content reviewers before an action is taken or, in some cases, automatically actioned based on the historical accuracy of the model’s output. Heuristics are common patterns of behaviour, text, or keywords that may be typical of a certain category of violations; they are typically utilised to enable X to react quickly to new forms of violations that emerge on the platform. Content detected by heuristics may also be reviewed by human content reviewers before an action is taken.
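To make the routing concrete, here is a minimal illustrative sketch of how a heuristic pattern hit and a model score could be combined to decide between automatic action, human review, and no action. All names, patterns, and thresholds here are hypothetical; this is not X's production system.

```python
import re

# Hypothetical pattern rules standing in for the heuristics described above.
HEURISTIC_PATTERNS = [re.compile(p, re.IGNORECASE)
                      for p in (r"free crypto giveaway", r"buy \d+ followers")]

class DummyAbuseModel:
    """Stand-in for a trained abuse classifier; always returns a low score."""
    def predict_proba(self, text):
        return 0.10

def route_post(text, model):
    """Route a post to automatic action, human review, or no action."""
    heuristic_hit = any(p.search(text) for p in HEURISTIC_PATTERNS)
    score = model.predict_proba(text)   # assumed: probability the post is violative
    if score >= 0.98:                   # model historically precise enough to auto-action
        return "auto_action"
    if heuristic_hit or score >= 0.70:  # pattern hit or uncertain score -> human review
        return "human_review"
    return "no_action"

print(route_post("Free crypto giveaway!!!", DummyAbuseModel()))  # -> human_review
```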

Testing, Evaluation, and Iteration

Automated enforcements under the X Rules and policies undergo rigorous testing before being applied to the live product. Both machine learning and heuristic models are trained and/or validated on thousands of data points and labels (e.g., violative or non-violative), including those generated by trained human content moderators. For example, inputs to content-related models can include the text of the post itself, the images attached to the post, and other characteristics. Training data for the models comes from cases reviewed by our content moderators, random samples, and various other samples of content from the platform.
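As a rough illustration of how such a labelled dataset could be assembled from the sources described above, consider the sketch below; the function and the 80/20 train/validation split are hypothetical choices, not X's actual pipeline.

```python
import random

def build_training_set(moderator_cases, random_sample, other_samples, seed=0):
    """Pool labelled (features, label) pairs from moderator-reviewed cases,
    a random platform sample, and other targeted samples, then shuffle and
    split into train/validation sets (hypothetical 80/20 split)."""
    pooled = list(moderator_cases) + list(random_sample) + list(other_samples)
    random.Random(seed).shuffle(pooled)
    cut = int(0.8 * len(pooled))
    return pooled[:cut], pooled[cut:]

train, val = build_training_set(
    [("post text A", "violative")], [("post text B", "non-violative")], [])
```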

Use of Human Moderation

Before any given algorithm is launched to the platform, we verify its detection of policy-violating content or behaviour by drawing a statistically significant test sample and performing item-by-item human review. Reviewers have expertise in the applicable policies and are trained by our Policy teams to ensure the reliability of their decisions. Human review helps us confirm that these automations achieve an acceptable level of precision, and sizing helps us understand what volume to expect once the automations are launched.
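The precision check described here amounts to estimating, from an item-by-item human review of a random sample of detections, the share that are true positives. A minimal sketch, assuming a simple normal-approximation confidence interval and hypothetical sample figures:

```python
import math

def precision_with_ci(labels, z=1.96):
    """labels: 1 if the human reviewer confirmed the detection, else 0.
    Returns (precision, (low, high)) using a normal-approximation interval."""
    n = len(labels)
    p = sum(labels) / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, (max(0.0, p - half), min(1.0, p + half))

# Hypothetical review: 470 confirmed out of a 500-item sample -> 94% +/- ~2.1 points.
print(precision_with_ci([1] * 470 + [0] * 30))
```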

In addition, humans proactively conduct manual content reviews for potential policy violations. We conduct proactive sweeps for certain high-priority categories of potentially violative content both periodically and during major events, such as elections. Content moderators also proactively review content flagged by heuristic and machine learning models for potential violations of other policies, including our adult content, violent content, child sexual exploitation (CSE) and violent and hateful entities policies.

Once reviewers have confirmed that the detection meets an acceptable standard of accuracy, we consider the automation ready for launch. Once launched, automations are monitored dynamically for ongoing performance and health. If we detect anomalies in performance (for instance, significant spikes or dips against the volume we established during sizing, or significant changes in user complaint/overturn rates), our Engineering and Data Science teams, with support from other functions, revisit the automation to diagnose any potential problems and adjust it as appropriate.
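A minimal sketch of the kind of volume-anomaly check described above, flagging days whose enforcement volume spikes or dips beyond a tolerance around the sized baseline; the function name, baseline, and tolerance are hypothetical:

```python
def anomalous_days(daily_counts, baseline, tolerance=0.5):
    """Return indices of days whose enforcement volume deviates from the
    baseline established at sizing by more than `tolerance` (a fraction)."""
    return [i for i, c in enumerate(daily_counts)
            if abs(c - baseline) > tolerance * baseline]

print(anomalous_days([100, 104, 180, 97, 40], baseline=100))  # -> [2, 4]
```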

Automated Moderation Activity Examples

The vast majority of accounts suspended for the promotion of terrorism and CSE are proactively flagged by a combination of technology and other purpose-built internal proprietary tools. When we remove CSE content with these automated systems, we immediately report it to the National Center for Missing and Exploited Children (NCMEC). NCMEC makes reports available to the appropriate law enforcement agencies around the world to facilitate investigations and prosecutions.

Our current methods deploy a range of internal tools and third-party solutions that utilise industry-standard hash libraries (e.g., PhotoDNA) to ensure known CSAM is caught prior to any user reports being filed. We leverage the hashes provided by NCMEC and industry partners. We scan media uploaded to X for matches to hashes of known CSAM sourced from NGOs, law enforcement and other platforms. We also have the ability to block keywords and phrases from Trending and block search results for certain terms that are known to be associated with CSAM.
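Hash matching of this kind reduces, at its core, to checking an upload's digest against a set of known hashes. The sketch below uses a cryptographic hash (SHA-256) purely for illustration; perceptual hashing systems such as PhotoDNA also match visually similar, not just byte-identical, media. The hash value shown is a placeholder (the well-known SHA-256 of an empty file), not an entry from any real list.

```python
import hashlib

# Illustrative placeholder set; real hash lists come from NCMEC and industry partners.
KNOWN_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def matches_known_hash(media: bytes) -> bool:
    """Check an upload's digest against the known-hash set before any user report."""
    return hashlib.sha256(media).hexdigest() in KNOWN_HASHES

print(matches_known_hash(b""))  # the empty file matches the placeholder hash -> True
```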

We commit to continuing to invest in technology that improves our capability to detect and remove, for instance, terrorist and violent extremist content online before it can cause user harms, including the extension or development of digital fingerprinting and AI-based technology solutions. Our participation in multi-stakeholder communities, such as the Christchurch Call to Action, Global Internet Forum to Counter Terrorism and EU Internet Forum (EUIF), helps to identify emerging trends in how terrorists and violent extremists are using the internet to promote their content and exploit online platforms.

You can learn more about our commitment to eradicating CSE and terrorist content, and the actions we’ve taken here. Our continued investment in proprietary technology is steadily reducing the burden on people to report this content to us.

Scaled Investigations

These moderation activities are supplemented by scaled human investigations into the tactics, techniques and procedures that bad actors use to circumvent our Rules and policies. These investigations may leverage signals and behaviours identifiable on our platform, as well as off-platform information, to identify large-scale and/or technically sophisticated evasions of our detection and enforcement activities. For example, through these investigations, we are able to detect coordinated activity intended to manipulate our platform and artificially amplify the reach of certain accounts or their content.  
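As one hypothetical illustration of the behavioural signals such investigations can use, the sketch below flags bursts of identical text posted by many distinct accounts within a short window, a common fingerprint of coordinated amplification; the window and threshold are invented for the example.

```python
from collections import defaultdict

def coordinated_bursts(posts, window_secs=3600, min_accounts=20):
    """posts: iterable of (account_id, timestamp_secs, text). Return the
    (text, window) pairs posted by at least `min_accounts` distinct accounts
    inside one time window."""
    buckets = defaultdict(set)  # (text, window index) -> set of accounts
    for account, ts, text in posts:
        buckets[(text, int(ts // window_secs))].add(account)
    return {k for k, accounts in buckets.items() if len(accounts) >= min_accounts}
```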

Indications of Accuracy for Content Moderation

The possible rate of error of the automated and human means used in enforcing X Rules and policies is indicated by the number of Content Removal Complaints (appeals) received and the number of Content Removal Complaints that resulted in reversal of our enforcement decision (successful appeals), broken down by remediation type and by country.
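In other words, the headline error indicator is the overturn rate: successful appeals divided by appeals received. A one-line sketch, applied to one row of the complaints data later in this report:

```python
def overturn_rate(complaints_received, reversals):
    """Successful appeals divided by appeals received; None when there were no appeals."""
    return reversals / complaints_received if complaints_received else None

# Germany's illegal-content complaint figures from this report: 45 complaints, 8 overturns.
print(f"{overturn_rate(45, 8):.1%}")  # -> 17.8%
```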

Closing Statement on Content Moderation Activities

Our content moderation systems are designed and tailored to mitigate systemic risks without unnecessarily restricting the use of our service or fundamental rights, especially freedom of expression. Content moderation activities are anchored in principled policies and leverage a diverse set of interventions to ensure that our actions are reasonable, proportionate and effective. Our content moderation systems blend automated and human review, paired with a robust appeals system that enables our users to quickly raise potential moderation anomalies or mistakes.

Human resources dedicated to Content Moderation

Today, we have 1,352 people working in content moderation. Our teams work on both initial reports and complaints against initial decisions across the world (and are not specifically designated to work only on EU matters).

Linguistics Expertise of our Content Moderation Team

X’s scaled operations team possesses a variety of skills, experiences, and tools that allow them to effectively review and take action on reports across all of our Rules and policies. X has analysed which languages are most commonly found in reports reviewed by our content moderators, and has hired content moderation specialists who have professional proficiency in these commonly spoken languages. The following table is a summary of the number of people in our content moderation team who possess professional proficiency in the languages that are most commonly contained in reported content in the EU on our platform:

Primary Language | People
Bulgarian | 1
English | 1,197
French | 53
German | 55
Italian | 1
Polish | 1
Portuguese | 13
Spanish | 31
Total | 1,352

In addition to the primary language support, we also have people supporting additional languages. The following is the list of secondary EU language support:

Secondary Language | People
Bulgarian | 1
French | 62
German | 56
Greek | 1
Irish | 1
Italian | 1
Latvian | 1
Polish | 1
Portuguese | 17
Spanish | 44
Total | 185

Please note that the numbers included in the secondary language support table are not separate or distinct from the numbers included in the primary language support data. Additionally, English is not indicated as a secondary language category in the table above, since all agents with a different primary language capability also speak English.

Qualifications of our Content Moderation Team

Content Moderation Team Qualifications

Years in Current Role | Headcount
0 to 1 | 613
1 to 2 | 134
2 to 3 | 159
3 to 4 | 264
4 to 5 | 82
5 to 6 | 24
6 to 7 | 30
7 or more | 46

The above table includes all moderators who support EU member state languages as of March 2025. The content moderation team collectively provides linguistic capacity in multiple languages. In situations where we need additional language support, we use translation services and/or machine translation tools to investigate and address challenges in additional languages. Additionally, content moderators leverage playbooks of colloquial terms and phrases, which are continually updated to reflect the EU languages spoken within the region and emerging trends.

Moderators are recruited using a standard job description that includes a language requirement: candidates must be able to demonstrate written and spoken fluency in the language and, for entry-level positions, have at least one year of work experience. In the interview and application process, each candidate must meet certain linguistic standards to be considered “language qualified”. This determination is made through multiple tests (e.g., written and oral) of the candidate’s language to determine their proficiency level. Candidates must also meet the educational and background requirements to be considered, as well as demonstrate an understanding of current events in the country or region whose content they will support.

Organisation, Team Resources, Expertise, Training and Support of our Team that Reviews and Responds to Reports of Illegal Content

Description of the team

X has built a specialised team made up of individuals who have received specific training in order to assess and take action on illegal content that X becomes aware of via reports or other processes on our own initiative. This team consists of different tier groups, with higher tiers made up of more senior or more specialised individuals.

When handling a report of illegal content or a complaint against a previous decision, content and senior content reviewers first assess the content under X’s Rules and policies. If no violation warranting global removal of the content is found, the content moderators assess the content for potential illegality under local laws. If more detailed investigation is required, content moderators can escalate reports to experienced policy and/or legal request specialists who have undergone in-depth training and/or have expertise in the language of the case. These individuals take appropriate action after reviewing the report and/or complaint in close detail.

In cases where the specialist team cannot reach a final decision on the potential illegality of the reported content, the report is discussed with in-house legal counsel. Everyone involved in this process works closely together, with daily exchanges through meetings and other channels, to ensure the timely and accurate handling of reports. Additionally, when a case warrants in-house legal counsel review, the lessons learned and the actions taken on that case are disseminated to all relevant content moderators to ensure consistency of review and an understanding of best practices should a similar case be encountered in the future.

Furthermore, all teams involved in resolving these reports collaborate closely with a variety of other policy teams at X who focus on X Rules and policies. This cross-team effort is particularly important in the aftermath of tragic events, such as violent attacks, to ensure alignment, swift and consistent review, and uniform remediation actions where content is found violative.

Content moderators are supported by team leads, subject matter experts, quality auditors and trainers. We hire people with diverse backgrounds in fields such as law, political science, psychology, communications, sociology and cultural studies, and languages.

Training and support of persons processing legal requests

All team members, i.e. all employees hired by X as well as vendor partners working on these reports, are trained and retrained regularly on our tools, processes, Rules and policies, including special sessions on cultural and historical context. When joining the team at X, each individual follows an onboarding program and receives individual mentoring during this period, and thereafter through our Quality Assurance (QA) program and, for internal employees, through in-house and external counsel.

All team members have direct access to robust training and workflow documentation for the entirety of their employment, and are able to seek guidance at any time from trainers, leads, and internal specialist legal and policy teams as outlined above, as well as managerial support.

Updates about significant current events or Rules and policy changes are shared with all content reviewers in real time, to give guidance and facilitate balanced and informed decision making. In the case of Rules and policy changes, all training materials and related documentation is updated. Calibration sessions are carried out frequently during the reporting period. These sessions aim to increase collective understanding and focus on the needs of the content reviewers in their day-to-day work, by allowing content moderators to ask questions and discuss aspects of recently reviewed cases, X’s Rules and policies, and/or local laws.

The entire team also participates in obligatory X Rules and policies refresher training as the need arises or whenever Rules and policies are updated. These trainings are delivered by the relevant policy specialists who were directly involved in the development of the Rules and policy change. For these sessions we also employ the “train the trainer” method to ensure timely training delivery to the whole team across all of the shifts. All team members use the same training materials to ensure consistency.

QA is critical to ensuring that we deliver a consistent service at the desired level of quality to our key stakeholders, both external and internal, as it pertains to our case work. We have a dedicated QA team within our vendor team to help us identify training opportunities and detect potential defects in our workflows or Rules and policies. The QA specialists perform quality checks of reports to ensure that content is actioned appropriately.

The standards and procedures within the QA team ensure that quality is assessed equally, objectively, efficiently and transparently. In case of any misalignments, additional training is scheduled to ensure the team understands the issues and can handle reports accurately.

In addition, given the nature and sensitivity of their work, the entire team has access to online resources and regular onsite group and individual sessions related to resilience and well-being. These are provided by mental health professionals. Content reviewers also participate in resilience, self-care, and vicarious trauma training as part of our mandatory wellness plan during the reporting period.

Training and Support provided to those Persons performing Content Moderation Activities for our XIUC Terms of Service and Rules

Training is a critical component of how X maintains the health and safety of the public conversation, enabling content moderators to accurately and efficiently moderate content posted on our platform. Training at X aims to improve content moderators’ enforcement performance and quality scores by enhancing their understanding and application of X Rules through robust training and quality programs and continuous monitoring of quality scores.

Training Process

There is a robust training program and system in place for every workflow to provide content moderators with the work skills and job knowledge required for processing user cases. All content moderators must be trained in their assigned workflows, ensuring that they are set up for success before and during the content moderation lifecycle.

Training Analysis and Design

Before commencing design work on any content moderation program or resource, a rigorous learner analysis is conducted in close collaboration with training specialists and quality analysts to identify performance gaps and learning needs. Each program is designed with key stakeholder engagement and alignment. The design objective is to adhere to visual and learning design principles, maximising learning outcomes and ensuring that agents can perform their tasks with accuracy and efficiency.

X’s training programs and resources are designed based on needs, and a variety of modalities are employed to diversify the content moderators’ learning experience, including:

Classroom Training

Classroom training is delivered either virtually or face-to-face by expert trainers and can include a range of instructional activities.

Onboarding and Ramp Up

When content moderators successfully complete their classroom training program, they undergo an onboarding period. The onboarding phase includes case study by observation, demonstration and hands-on training on live cases. Onboarding activities include content moderator shadowing, guided case work, question-and-answer sessions with their trainer, coaching, feedback sessions, etc. Quality audits are conducted for each onboarding content moderator, and moderators are coached on any mis-action spotted in their quality scores on the same day that the case was reviewed. Trainers conduct a needs assessment for each onboarding content moderator and prepare refresher training accordingly. After the onboarding period, performance is evaluated on an ongoing basis with the QA team to identify gaps and address potential problem areas. There is a continuous feedback loop with quality analysts across the different workflows to identify challenges and opportunities to improve materials and address performance gaps.

Up-Skilling

When content moderators need to be upskilled, they receive training on a specific workflow within the same pillar in which they currently work. The training includes a classroom training phase and an onboarding phase, as specified above.

Refresher Sessions

Refresher sessions take place when a content moderator has previously been trained and has access to all the necessary tools, but needs a review of some or all topics. This may happen for content moderators who have been on prolonged leave, have transferred temporarily to another content moderation policy workflow, or have recurring errors in their quality scores. After a needs assessment, trainers can pinpoint what the content moderator needs and prepare a session targeting those needs and gaps.

New Launch/Update Roll-Outs

There are also processes that require new and/or specific product training and certification. These new launches and updates are identified by X and the knowledge is transferred to the content moderators.

Remediation Plans

There are remediation plans in place to support content moderators who do not pass the training or onboarding phase, or are not meeting quality requirements.

Relevant Data for the Reporting Period

Member States Orders to Act Against Illegal Content

Removal Orders Received - 1 April to 30 June

Illegal Content Category | France
Unsafe and illegal products | 1

Removal Orders Median Handle Time (Hours) - 1 April to 30 June

Illegal Content Category | France
Unsafe and illegal products | 3.6

Removal Orders Median Time To Acknowledge Receipt

X provides an automated acknowledgement of receipt of removal orders submitted by law enforcement through our Legal Request submission portal. As a consequence of this immediate acknowledgement of receipt, the median time was zero hours.

Important Notes about Removal Orders:

Information Requests Received - 1 April to 30 June

Illegal Content Category

Austria

Belgium

Denmark

Finland

France

Germany

Greece

Ireland

Italy

Malta

Netherlands

Poland

Portugal

Spain

Data protection & privacy violations

4

4

2

5

3

Illegal or harmful speech

3

2

1

56

2181

7

1

16

4

26

4

28

Intellectual property infringements

1

2

4

1

Negative effects on civic discourse or elections

11

Non-consensual behavior

4

10

Pornography or sexualized content

1

21

1

Protection of minors

4

52

2

3

2

8

Risk for public security

6

32

674

101

17

9

1

43

Scams and fraud

2

1

1

19

35

1

1

4

32

Self-harm

1

1

Unsafe and illegal products

1

1

1

Violence

4

4

4

87

123

3

3

8

43

12

1

18

Issue unknown

1

Information Request Median Handle Time (Hours) - 1 April to 30 June

Illegal Content Category

Austria

Belgium

Denmark

Finland

France

Germany

Greece

Ireland

Italy

Malta

Netherlands

Poland

Portugal

Spain

Data protection & privacy violations

96.74

74.115

103.525

134.89

125.17

Illegal or harmful speech

75.28

119.465

125.43

80.2

53.84

24.645

142.77

107.2

20.955

93.525

27.425

7.6

Intellectual property infringements

191.2

1.895

121.98

145.82

Negative effects on civic discourse or elections

26.12

Non-consensual behavior

20.36

147.465

Pornography or sexualized content

50.18

41.35

26.9

Protection of minors

11.925

1.295

0.475

25.35

12.4

Risk for public security

98.4

44.385

28.87

44.93

13.6

124.1

52.43

74.97

32.83

Scams and fraud

51.86

28.65

25.83

103.23

29.45

171.75

23.32

132.095

28.5

Self-harm

0.28

Unsafe and illegal products

146

20.87

7

Violence

46.88

25.39

22.41

44.52

46.85

81.47

97.42

37.295

100.83

25.35

50

54.995

Issue unknown

0.1

Information Request Median Time To Acknowledge Receipt

X provides an automated acknowledgement of receipt of information requests submitted by law enforcement through our Legal Request submission portal. As a consequence of this immediate acknowledgement of receipt, the median time was zero hours.

Important Notes about Information Requests:

Illegal Content Notices

Illegal Content Notices Received - 1 April to 30 June

Reason Code

Austria

Belgium

Bulgaria

Croatia

Cyprus

Czechia

Denmark

Estonia

European Union

Finland

France

Germany

Greece

Hungary

Ireland

Italy

Latvia

Lithuania

Luxembourg

Malta

Netherlands

Poland

Portugal

Romania

Slovakia

Slovenia

Spain

Sweden

Animal welfare

21

15

3

2

1

6

14

2

1,451

16

197

93

5

7

20

29

1

3

1

19

39

10

3

131

5

Data protection & privacy violations

126

146

49

32

15

82

63

15

1,969

40

1,870

1,445

100

28

157

414

9

11

13

3

800

514

112

97

11

11

1,804

68

Defamation/insult

218

275

108

25

20

257

112

28

11,109

184

7,008

5,080

268

35

391

2,293

24

20

22

2

1,098

1,659

432

395

29

14

4,860

185

Illegal or harmful speech

239

291

48

28

54

358

395

30

20,305

139

8,580

8,616

138

35

340

1,004

33

38

22

1

676

1,256

449

412

21

14

3,022

178

Intellectual Property Infringements

1

34

39

96

1

132

6

4

36

33

217

218

4

Negative effects on civic discourse or elections

54

43

17

6

16

72

20

9

547

20

419

1,070

13

21

25

127

12

14

3

167

2,321

66

554

5

4

173

28

Non-consensual behavior

9

33

5

9

6

28

20

4

1,511

17

794

405

26

12

71

60

2

8

6

2

190

109

32

26

7

4

277

28

Pornography or sexualized content

112

114

60

21

15

102

45

110

4,313

37

2,064

1,495

69

86

79

335

25

21

18

8

246

333

170

99

9

17

1,088

78

Protection of minors

67

129

14

5

16

31

70

89

1,896

302

1,571

4,846

25

52

146

239

6

15

8

9

1,395

506

89

40

11

4

8,557

52

Risk for public security

80

38

16

5

10

82

23

12

955

25

1,362

938

28

23

55

144

17

10

5

7

103

267

60

106

6

3

298

25

Scams and fraud

281

437

106

91

35

292

250

41

2,629

232

3,093

2,804

78

251

592

902

20

51

10

21

1,022

990

319

247

32

50

2,865

258

Scope of platform service

6

10

1

3

4

528

3

104

56

1

9

6

17

4

1

4

8

21

5

7

2

42

3

Self-harm

5

13

1

1

2

5

6

415

16

115

91

9

1

11

21

2

1

2

22

25

9

2

1

3

91

59

Unsafe and illegal products

44

37

5

3

5

48

30

2

364

22

595

262

12

14

26

98

6

9

2

71

93

19

15

1

1

234

11

Violence

69

92

26

24

9

119

50

14

5,311

1,930

2,280

1,506

55

18

220

479

18

12

15

3

265

477

125

97

19

6

1,543

63

Actions Taken on Illegal Content Notices - 1 April to 30 June

Closure Type

Action

Law

Reason Code

Austria

Belgium

Bulgaria

Croatia

Cyprus

Czechia

Denmark

Estonia

European Union

Finland

France

Germany

Greece

Hungary

Ireland

Italy

Latvia

Lithuania

Luxembourg

Malta

Netherlands

Poland

Portugal

Romania

Slovakia

Slovenia

Spain

Sweden

Automated Means

Country withheld Content

"Basis of Law and/or

Local Laws"

Illegal or harmful speech

1

Global content deletion based on a violation of TIUC Terms of Service and Rules

"Terms of Service and/or

X’s Rules or Policies"

Violence

1

No Violation Found

"Terms of Service and/or

X’s Rules or Policies"

Animal welfare

3

"Terms of Service and/or

X’s Rules or Policies"

Data protection & privacy violations

9

1

6

1

4

"Terms of Service and/or

X’s Rules or Policies"

Defamation/insult

1

3

60

2

"Terms of Service and/or

X’s Rules or Policies"

Illegal or harmful speech

1

10

3

1

"Terms of Service and/or

X’s Rules or Policies"

Negative effects on civic discourse or elections

1

"Terms of Service and/or

X’s Rules or Policies"

Non-consensual behavior

1

4

1

"Terms of Service and/or

X’s Rules or Policies"

Pornography or sexualized content

2

3

1

2

1

44

1

1

3

11

8

1

20

1

"Terms of Service and/or

X’s Rules or Policies"

Protection of minors

9

22

3

1

1

5

9

141

40

28

2

10

31

1

3

161

65

13

1

738

5

"Terms of Service and/or

X’s Rules or Policies"

Risk for public security

1

2

"Terms of Service and/or

X’s Rules or Policies"

Scams and fraud

3

1

2

1

11

"Terms of Service and/or

X’s Rules or Policies"

Scope of platform service

1

2

1

"Terms of Service and/or

X’s Rules or Policies"

Self-harm

8

"Terms of Service and/or

X’s Rules or Policies"

Unsafe and illegal products

2

1

2

"Terms of Service and/or

X’s Rules or Policies"

Violence

1

51

2

Manual Closure

Country withheld Content

"Basis of Law and/or

Local Laws"

Animal welfare

1

2

1

1,078

6

9

3

1

2

1

4

"Basis of Law and/or

Local Laws"

Data protection & privacy violations

7

23

11

12

1

4

9

6

346

3

186

264

18

9

28

48

1

2

1

150

104

25

22

2

1

419

8

"Basis of Law and/or

Local Laws"

Defamation/insult

69

103

45

6

5

113

36

9

5,339

58

2,772

2,412

57

6

139

691

6

4

2

374

617

161

124

6

3

1,855

62

"Basis of Law and/or

Local Laws"

Illegal or harmful speech

110

116

16

9

30

158

297

5

11,423

67

2,839

4,916

58

6

99

447

6

5

7

1

243

411

135

133

8

6

1,147

71

"Basis of Law and/or

Local Laws"

Negative effects on civic discourse or elections

13

10

1

2

12

1

57

2

28

361

2

9

17

2

1

32

56

5

10

26

6

"Basis of Law and/or

Local Laws"

Non-consensual behavior

3

7

7

1

1

7

425

4

109

110

5

18

16

3

1

2

50

21

6

7

1

74

10

"Basis of Law and/or

Local Laws"

Pornography or sexualized content

5

9

15

9

3

24

5

17

2,727

16

525

368

29

19

9

49

14

5

2

5

21

117

10

54

6

3

101

17

"Basis of Law and/or

Local Laws"

Protection of minors

8

6

1

3

8

5

21

355

8

140

343

7

20

21

12

1

4

1

29

43

3

6

2

135

13

"Basis of Law and/or

Local Laws"

Risk for public security

9

3

1

1

6

14

1

5

152

202

248

3

1

8

22

3

13

25

7

7

35

9

"Basis of Law and/or

Local Laws"

Scams and fraud

31

93

7

56

17

104

96

16

1,000

77

122

1,128

15

8

326

209

4

3

1

4

297

256

48

36

5

2

579

77

"Basis of Law and/or

Local Laws"

Scope of platform service

1

1

159

6

11

1

3

1

3

7

1

12

2

"Basis of Law and/or

Local Laws"

Self-harm

1

36

1

13

9

2

2

2

3

1

8

3

"Basis of Law and/or

Local Laws"

Unsafe and illegal products

8

6

1

1

2

11

3

102

11

90

87

3

4

17

1

22

11

2

3

1

38

5

"Basis of Law and/or

Local Laws"

Violence

13

16

4

3

1

32

8

6

1,329

1,744

407

386

13

6

68

126

3

3

3

1

48

75

21

26

5

322

15

Global content deletion based on a violation of TIUC Terms of Service and Rules

"Terms of Service and/or

X’s Rules or Policies"

Animal welfare

1

3

1

2

191

3

20

23

2

1

10

4

1

7

8

1

7

1

"Terms of Service and/or

X’s Rules or Policies"

Data protection & privacy violations

1

12

2

14

14

118

5

247

190

2

10

16

91

24

6

5

4

64

3

"Terms of Service and/or

X’s Rules or Policies"

Defamation/insult

4

8

4

8

9

111

1

315

104

5

4

21

18

14

13

11

43

6

"Terms of Service and/or

X’s Rules or Policies"

Illegal or harmful speech

6

13

2

2

22

13

525

5

548

321

8

2

8

25

1

37

51

20

4

1

59

7

"Terms of Service and/or

X’s Rules or Policies"

Intellectual Property Infringements

0

2

12

21

0

65

0

1

3

3

7

4

0

"Terms of Service and/or

X’s Rules or Policies"

Negative effects on civic discourse or elections

1

1

3

29

19

3

1

1

3

1

"Terms of Service and/or

X’s Rules or Policies"

Non-consensual behavior

3

2

33

2

36

26

2

16

4

1

1

5

2

2

1

20

"Terms of Service and/or

X’s Rules or Policies"

Pornography or sexualized content

9

15

4

2

8

6

8

316

5

240

778

1

10

9

44

1

1

1

1

52

39

14

11

2

162

5

"Terms of Service and/or

X’s Rules or Policies"

Protection of minors

22

64

6

2

8

21

28

708

154

587

3,732

9

6

51

72

1

4

3

3

665

209

27

9

5

3

3,739

17

"Terms of Service and/or

X’s Rules or Policies"

Risk for public security

2

5

9

52

1

134

74

2

3

1

14

8

18

2

"Terms of Service and/or

X’s Rules or Policies"

Scams and fraud

3

2

13

20

4

1

1

1

48

2

"Terms of Service and/or

X’s Rules or Policies"

Scope of platform service

2

6

15

4

1

"Terms of Service and/or

X’s Rules or Policies"

Self-harm

2

1

1

1

1

48

6

12

1

1

3

5

4

1

1

13

2

"Terms of Service and/or

X’s Rules or Policies"

Unsafe and illegal products

2

17

9

24

1

199

38

1

16

3

4

1

3

1

"Terms of Service and/or

X’s Rules or Policies"

Violence

9

13

2

3

4

29

4

3

827

64

275

240

5

1

46

54

3

3

47

68

25

16

3

1

185

1

No Violation Found

"Terms of Service and/or

X’s Rules or Policies"

Animal welfare

20

11

3

1

1

6

10

1

175

7

92

67

3

5

10

25

1

2

1

10

30

9

3

120

4

"Terms of Service and/or

X’s Rules or Policies"

Data protection & privacy violations

109

111

36

20

14

64

39

9

1,498

31

1,083

991

80

19

119

346

8

8

12

3

558

386

81

70

9

6

1,320

57

"Terms of Service and/or

X’s Rules or Policies"

Defamation/insult

145

163

59

19

15

136

64

19

5,583

125

3,097

2,555

203

29

248

1,574

18

16

20

2

706

1,027

257

259

23

11

2,925

115

"Terms of Service and/or

X’s Rules or Policies"

Illegal or harmful speech

122

162

32

16

22

178

84

25

8,308

67

4,462

3,362

72

27

232

530

26

32

15

394

791

294

274

13

7

1,807

100

"Terms of Service and/or

X’s Rules or Policies"

Negative effects on civic discourse or elections

40

33

16

4

16

60

18

9

484

18

282

689

13

19

16

106

9

14

2

133

2,262

60

543

5

4

147

22

"Terms of Service and/or

X’s Rules or Policies"

Non-consensual behavior

6

23

5

2

5

27

11

3

1,045

11

412

268

19

12

37

39

1

5

4

135

86

23

18

7

3

182

18

"Terms of Service and/or

X’s Rules or Policies"

Pornography or sexualized content

96

86

41

9

12

68

32

85

1,208

16

369

347

38

57

55

231

10

15

14

2

171

169

146

33

3

12

804

55

"Terms of Service and/or

X’s Rules or Policies"

Protection of minors

28

37

4

4

11

14

39

31

689

96

328

729

9

24

63

124

3

7

4

3

540

189

46

25

3

1

3,938

17

"Terms of Service and/or

X’s Rules or Policies"

Risk for public security

69

34

15

4

4

63

13

7

749

24

849

614

24

22

45

119

14

10

5

7

89

228

45

99

6

3

244

14

"Terms of Service and/or

X’s Rules or Policies"

Scams and fraud

250

341

90

35

17

187

152

25

1,605

155

649

1,663

62

243

263

689

16

47

9

17

725

734

270

210

27

48

2,220

179

"Terms of Service and/or

X’s Rules or Policies"

Scope of platform service

5

10

1

3

361

3

61

41

1

9

5

14

4

1

2

5

14

4

7

2

29

1

"Terms of Service and/or

X’s Rules or Policies"

Self-harm

3

11

1

1

1

2

5

282

15

59

61

6

1

8

16

2

1

2

15

15

6

1

1

3

60

53

"Terms of Service and/or

X’s Rules or Policies"

Unsafe and illegal products

36

29

4

2

3

20

18

2

234

10

182

137

9

14

21

65

5

9

2

45

77

17

11

1

191

5

"Terms of Service and/or

X’s Rules or Policies"

Violence

47

63

19

18

4

58

38

5

3,093

113

1,347

877

37

11

106

297

15

6

9

2

169

332

79

55

11

5

1,023

47

Offer of help in case of self-harm and suicide concern based on TIUC Terms of Service and Rules

"Terms of Service and/or

X’s Rules or Policies"

Data protection & privacy violations

1

"Terms of Service and/or

X’s Rules or Policies"

Defamation/insult

1

"Terms of Service and/or

X’s Rules or Policies"

Illegal or harmful speech

1

"Terms of Service and/or

X’s Rules or Policies"

Non-consensual behavior

1

"Terms of Service and/or

X’s Rules or Policies"

Protection of minors

2

2

2

3

"Terms of Service and/or

X’s Rules or Policies"

Self-harm

1

1

41

2

9

2

3

1

10

"Terms of Service and/or

X’s Rules or Policies"

Unsafe and illegal products

1

"Terms of Service and/or

X’s Rules or Policies"

Violence

2

2

1

1

Reports of Illegal Content Median Handle Time (Hours) - 1 April to 30 June 

Reason Code

Austria

Belgium

Bulgaria

Croatia

Cyprus

Czechia

Denmark

Estonia

European Union

France

Germany

Greece

Hungary

Ireland

Italy

Latvia

Lithuania

Luxembourg

Malta

Netherlands

Poland

Portugal

Romania

Slovakia

Slovenia

Spain

Sweden

Animal welfare

3

7

1

0

1

0.5

2

42

1

4

4

2

11

1

2

10

10

14

1

5

6.5

8

3

4

Data protection & privacy violations

7

9

5

10.5

3

5

7.5

3

4

3

3

7

10.5

7

8

3

7.5

7

1

8

2

11

10

1

4

7

8.5

Defamation/insult

3

5

2

4

10

3

7

2

3

3

2

4.5

6

3

2

3

1

1

7

3

2

9

1

3

2

3

5

Illegal or harmful speech

3.5

2

2.5

1

2

2

2

3

2

1

1

3

3

2

2

1.5

3

2

2

2

2

7

2

1

1

3

2

Negative effects on civic discourse or elections

5.5

0

9

2

0

2.5

2

1

2

2

2

4

1

1

1

1.5

10

14

2

1

8.5

1

12

2

2

2

Non-consensual behavior

2

2

1

20

2.5

13.5

10

3

3

4

4

8

1

1

9

1.5

3

5.5

6

2

5

1

12

1

4.5

5

4

Pornography or sexualized content

3

1

3.5

2

0

2

2

3

2

2

6

9

5

2

3

3

8

0

4.5

2

2

3

6.5

9

2

2

2

Protection of minors

1

2

1

2

4

3

4

3

2

2

5

5

3.5

1

2.5

2

5.5

2

5

3

3

2

3.5

9.5

5.5

2

6

Risk for public security

1

3

1

7

5.5

1

2

1

1

1

3

1

1

1

2

1

1

2

1

2

1

2

1

0.5

0

2

9

Scams and fraud

4

1

6

10

3.5

1

2.5

2

2

2

2

1

7

1

4.5

9.5

13.5

12

0

2

2

9

4

2.5

11

4

1

Scope of platform service

14.5

20

4

6

1

2

3

4

91

19

0.5

2

14

0

0

0

5

15

1

5.5

2

13

Self-harm

1

7

5

6

0

6

3.5

6

1.5

2

2

2

2

6

8

12

3

5.5

4

1

9

12

0

8

9

Unsafe and illegal products

1

2

2

12

0

2

0

1

7

3

1

1.5

2.5

4

7

3.5

4

5.5

9.5

1

3

1

3

0

4

1

Violence

2

3

1

2

11

2

1

2.5

3

3

2

2

2.5

3

3

3

1.5

13

13

5.5

4

8

1

1

1.5

6

5

Own Initiative Enforcements

RESTRICTED REACH LABELS DATA

Restricted Reach Labels - 1 April to 30 June

Columns: (A) Own Initiative / Automated Means / Hateful Conduct; (B) User Report / Manual / Abuse & Harassment; (C) User Report / Manual / Hateful Conduct; (D) User Report / Manual / Violent Speech.

Member State | A | B | C | D
Austria | 3,593 | 204 | 164 | 76
Belgium | 6,213 | 255 | 368 | 134
Bulgaria | 2,394 | 128 | 140 | 59
Croatia | 3,022 | 60 | 116 | 53
Cyprus | 660 | 51 | 41 | 13
Czechia | 3,767 | 236 | 112 | 91
Denmark | 4,502 | 168 | 266 | 99
Estonia | 1,098 | 25 | 38 | 8
Finland | 5,414 | 85 | 247 | 90
France | 21,583 | 1,959 | 1,484 | 710
Germany | 30,925 | 2,046 | 1,665 | 724
Greece | 3,847 | 278 | 172 | 60
Hungary | 2,639 | 86 | 108 | 50
Ireland | 13,247 | 281 | 474 | 200
Italy | 9,925 | 823 | 552 | 306
Latvia | 1,040 | 16 | 42 | 17
Lithuania | 1,492 | 39 | 32 | 28
Luxembourg | 750 | 16 | 47 | 14
Malta | 434 | 3 | 12 | 6
Netherlands | 23,595 | 991 | 1,293 | 557
Poland | 18,238 | 1,046 | 1,155 | 352
Portugal | 5,108 | 213 | 196 | 92
Romania | 7,451 | 210 | 288 | 103
Slovakia | 1,337 | 34 | 42 | 25
Slovenia | 1,696 | 21 | 64 | 15
Spain | 17,249 | 2,087 | 1,123 | 403
Sweden | 11,434 | 359 | 528 | 235

ACTIONS TAKEN ON CONTENT FOR XIUC TERMS OF SERVICE AND RULES VIOLATIONS

XIUC Terms of Service and Rules Content Removal Actions - 1 April to 30 June

Detection

Enforcement

Main Policy

Austria

Belgium

Bulgaria

Croatia

Cyprus

Czechia

Denmark

Estonia

Finland

France

Germany

Greece

Hungary

Ireland

Italy

Latvia

Lithuania

Luxembourg

Malta

Netherlands

Poland

Portugal

Romania

Slovakia

Slovenia

Spain

Sweden

Own Initiative

Automated Means

Abuse & Harassment

2

4

4

1

1

1

11

63

3

3

5

21

1

1

100

Child Sexual Exploitation

1

3

3

1

1

1

14

3

1

2

5

1

1

7

2

2

4

2

Civic Integrity

3

Hateful Conduct

4

5

2

1

2

2

5

18

16

2

3

13

8

2

1

22

9

3

6

14

10

Non-Consensual Nudity

4

24

8

1

6

32

8

178

84

8

9

37

42

6

1

49

33

9

22

2

3

79

26

Other

8

24

1

30

17

2

24

251

127

27

7

48

60

4

7

1

132

100

17

17

21

1

310

33

Private Information & media

22

3

1

2

Sensitive Media

371

561

211

153

63

421

252

69

246

4,987

4,642

428

531

503

2,320

258

96

65

25

2,134

1,253

555

583

165

61

2,972

652

Violent Speech

278

684

149

173

58

203

300

64

286

7,047

2,396

254

135

706

763

46

98

52

24

1,483

928

364

394

75

70

4,785

700

Manual Enforced

Violent Speech

1

Child Sexual Exploitation

2

1

2

4

3

1

1

1

3

1

3

User Report

Manual Enforced

Abuse & Harassment

340

640

351

201

88

437

318

114

298

8,523

9,241

666

346

593

3,159

145

191

49

30

5,043

1,949

799

724

155

86

4,050

705

Child Sexual Exploitation

1

1

1

1

7

4

1

2

4

2

5

1

1

8

Civic Integrity

1

Deceased Individuals

2

4

1

2

1

10

23

15

1

8

9

12

10

5

1

4

2

Distribution of Hacked Materials

1

1

1

2

2

Hateful Conduct

31

43

20

16

4

20

29

9

47

452

265

33

20

83

59

6

3

3

135

141

42

37

4

19

214

54

Illegal or certain regulated goods and services

89

285

240

127

34

242

112

31

95

10,461

2,753

308

239

261

3,518

112

139

14

22

4,517

802

424

350

127

48

3,809

319

Intellectual property infringements

1

Misleading & Deceptive Identities

11

6

2

4

2

5

4

2

2

67

60

4

3

12

34

4

3

34

30

13

5

1

45

7

Non-Consensual Nudity

59

155

88

9

11

89

54

2

61

1,743

893

79

20

89

353

175

27

7

8

864

371

103

219

13

2

584

135

Perpetrators of Violent Attacks

1

3

1

2

2

2

2

2

13

2

1

1

1

Private Information & media

29

71

17

6

10

8

12

41

15

429

281

20

6

20

103

3

5

1

142

63

41

33

1

5

276

62

Sensitive Media

85

153

54

57

13

100

63

249

48

1,392

1,109

102

154

188

442

17

20

7

8

587

615

154

138

61

14

791

215

Suicide & Self Harm

70

94

58

49

5

66

78

20

72

796

1,212

87

44

123

431

19

29

5

7

358

558

153

125

21

30

786

226

Synthetic & Manipulated Media

1

Violent & Hateful Entities

4

2

1

1

Violent Speech

734

1,309

515

418

92

864

704

176

781

11,436

9,371

790

421

1,333

4,367

152

252

110

84

4,427

5,350

1,439

1,048

207

192

6,808

1,758

ACTIONS TAKEN ON ACCOUNTS FOR XIUC TERMS OF SERVICE AND RULES VIOLATIONS

XIUC Terms of Service and Rules Account Suspensions - 1 April to 30 June

Detection Method

Enforcement

Main Policy

Austria

Belgium

Bulgaria

Croatia

Cyprus

Czechia

Denmark

Estonia

Finland

France

Germany

Greece

Hungary

Ireland

Italy

Latvia

Lithuania

Luxembourg

Malta

Netherlands

Poland

Portugal

Romania

Slovakia

Slovenia

Spain

Sweden

Own Initiative

Automated Means

Abuse & Harassment

7

Ban Evasion

3

13

2

1

1

46

32

2

1

2

8

2

1

1

27

5

3

1

6

23

11

CWC for various countries for illegal activity

3

18

1

1

1

2

2

2

2

15

26

5

1

15

3

1

1

63

4

12

8

2

66

12

Child Sexual Exploitation

686

1,147

588

343

148

665

711

186

793

32,854

10,298

768

893

1,559

9,115

386

376

88

57

7,809

3,569

1,174

1,161

318

143

9,706

1,753

Illegal or certain regulated goods and services

27

26

34

30

4

30

17

7

28

674

1,045

41

24

285

415

9

19

1

4

560

140

27

48

9

3

262

129

Misleading & Deceptive Identities

235

471

152

108

61

172

245

66

252

5,180

4,610

267

216

424

1,841

119

106

32

22

1,687

1,356

402

384

69

65

2,903

513

Non-Consensual Nudity

3

1

1

3

2

1

Other

6

19

14

4

2

11

7

3

19

380

278

18

9

11

265

52

24

2

1

76

53

16

15

6

2

183

20

Perpetrators of Violent Attacks

2

20

23

15

3

1

3

9

32

53

5

31

23

5

27

54

7

11

2

50

18

Platform Manipulation & Spam

260,004

301,545

284,305

194,502

104,506

540,006

187,110

129,876

723,668

4,668,766

4,873,482

391,798

300,480

424,625

4,208,022

359,745

215,597

75,582

90,731

3,355,531

1,322,613

387,839

553,535

99,744

84,632

2,998,334

474,118

Sensitive Media

1

1

15

7

2

3

1

1

1

9

Violent & Hateful Entities

54

114

68

9

11

26

36

7

30

551

918

51

12

31

109

9

12

14

2

539

93

36

62

1

1

110

105

Manual Enforced - Proactive Detection

Child Sexual Exploitation

17

8

11

1

2

17

6

2

10

214

106

6

10

18

43

10

4

1

66

34

13

15

2

2

92

23

User Report

Manual Enforced

Abuse & Harassment

184

446

404

203

84

377

210

76

231

8,456

12,352

596

391

697

3,506

182

209

35

35

6,406

1,317

706

608

159

83

3,615

691

Ban Evasion

4

8

3

1

1

40

30

1

2

4

4

1

1

2

18

9

1

3

11

1

CWC for various countries for illegal activity

2

1

1

Child Sexual Exploitation

9

24

4

7

1

15

10

4

8

335

132

7

8

21

107

8

10

1

1

115

46

19

16

3

3

184

26

Financial Scam

1

2

Hateful Conduct

6

14

6

2

1

5

6

2

5

113

49

5

4

14

12

3

24

32

11

13

2

7

36

11

Help with my compromised account

1

1

Illegal or certain regulated goods and services

73

253

242

111

43

254

131

30

137

8,211

3,635

336

262

557

3,491

95

138

14

17

3,539

909

377

375

118

44

2,827

434

Intellectual property infringements

3

11

1

3

4

2

118

47

4

2

5

23

2

1

1

26

19

35

8

1

38

3

Misleading & Deceptive Identities

138

264

98

49

27

139

183

42

122

6,403

1,885

143

123

282

1,564

166

70

13

14

3,962

625

192

246

42

29

1,364

254

Non-Consensual Nudity

20

36

24

4

8

18

14

9

514

281

20

8

19

119

57

13

1

1

221

129

25

67

2

1

152

51

Other

2

2

2

2

3

4

2

2

59

22

3

2

13

2

1

19

11

5

5

1

22

4

Perpetrators of Violent Attacks

1

2

2

1

5

1

6

2

4

5

2

1

4

3

Platform Manipulation & Spam

117

145

98

70

25

109

80

34

127

1,923

1,969

103

88

235

761

73

46

9

9

946

471

132

175

32

35

1,220

193

Private Information & media

1

3

1

2

2

1

3

33

17

3

5

9

11

4

2

16

2

Sensitive Media

1

2

2

2

1

9

6

1

7

1

7

2

2

2

2

2

Suicide & Self Harm

1

2

2

2

1

3

2

12

26

3

3

6

10

2

1

10

9

2

1

1

1

17

4

Username Squatting

1

Violent & Hateful Entities

3

8

4

1

3

1

6

5

45

68

4

2

4

1

2

2

39

3

9

4

6

17

Violent Speech

42

88

32

33

7

39

41

8

41

719

404

34

31

107

190

13

12

7

4

252

255

79

87

15

9

467

127

Overall Figures

COMPLAINTS OF ACTIONS TAKEN FOR ILLEGAL CONTENT RECEIVED - 1 April to 30 June

Member State | Complaints | Overturns | Median Time to Respond (Hours)
Austria | 3 | 0 | 0.08
Belgium | 11 | 1 | 0.53
Bulgaria | 2 | 1 | 2.93
Czechia | 11 | 1 | 0.68
Denmark | 4 | 0 | 1.4
Estonia | 1 | 1 | 0.57
Finland | 4 | 1 | 0.28
France | 30 | 1 | 1.47
Germany | 45 | 8 | 1.24
Greece | 1 | 0 | 6.3
Hungary | 4 | 0 | 0.89
Ireland | 8 | 3 | 7.87
Italy | 16 | 4 | 0.46
Netherlands | 101 | 15 | 1.37
Poland | 42 | 4 | 0.51
Portugal | 7 | 1 | 0.38
Romania | 10 | 4 | 1.18
Spain | 112 | 20 | 1.2
Sweden | 24 | 0 | 1.38

COMPLAINTS OF ACTIONS TAKEN FOR TIUC TERMS OF SERVICE AND RULES VIOLATIONS RECEIVED - 1 April to 30 June

Account Suspension Complaints

Member State | Complaints Received | Overturned Appeals | Median Time to Resolve
Austria | 804 | 95 | 4.66
Belgium | 960 | 154 | 4.03
Bulgaria | 595 | 68 | 4.22
Croatia | 234 | 38 | 2.3
Cyprus | 191 | 16 | 35.45
Czechia | 830 | 111 | 7.58
Denmark | 662 | 102 | 4.35
Estonia | 173 | 34 | 4.43
Finland | 1,145 | 153 | 7.37
France | 10,029 | 1,348 | 3.73
Germany | 15,404 | 1,636 | 2.45
Greece | 678 | 85 | 2.64
Hungary | 383 | 67 | 4.07
Ireland | 1,022 | 140 | 5.43
Italy | 2,743 | 402 | 6.35
Latvia | 506 | 29 | 9.55
Lithuania | 744 | 50 | 6.93
Luxembourg | 124 | 19 | 5.38
Malta | 41 | 5 | 4
Netherlands | 8,997 | 1,063 | 5.03
Poland | 3,499 | 483 | 5.17
Portugal | 863 | 148 | 4.29
Romania | 1,296 | 149 | 4.48
Slovakia | 219 | 25 | 4.31
Slovenia | 154 | 14 | 3.37
Spain | 4,492 | 761 | 5.77
Sweden | 1,526 | 219 | 3.98

Content Action Complaints

Member State | Complaints Received | Overturned Appeals | Median Time to Resolve
Austria | 82 | 20 | 0.45
Belgium | 118 | 31 | 0.42
Bulgaria | 38 | 9 | 0.96
Croatia | 31 | 8 | 0.55
Cyprus | 17 | 2 | 0.1
Czechia | 82 | 13 | 0.62
Denmark | 55 | 22 | 0.5
Estonia | 18 | 5 | 0.58
Finland | 58 | 15 | 0.6
France | 1,105 | 357 | 0.75
Germany | 1,052 | 249 | 0.6
Greece | 68 | 14 | 0.43
Hungary | 20 | 13 | 0.98
Ireland | 175 | 60 | 0.22
Italy | 268 | 66 | 0.37
Latvia | 8 | 2 | 0.2
Lithuania | 24 | 6 | 0.2
Luxembourg | 11 | 4 | 0.2
Malta | 5 | 1 | 1.03
Netherlands | 476 | 122 | 0.53
Poland | 236 | 57 | 0.53
Portugal | 124 | 36 | 0.5
Romania | 76 | 24 | 0.43
Slovakia | 14 | 4 | 1.09
Slovenia | 23 | 4 | 0.03
Spain | 880 | 294 | 0.53
Sweden | 168 | 57 | 0.34

Restricted Reach Complaints

Member State | Complaints Received | Overturned Appeals | Median Time to Resolve
Austria | 221 | 78 | 2.68
Belgium | 243 | 90 | 3.93
Bulgaria | 111 | 57 | 6.13
Croatia | 98 | 33 | 0.86
Cyprus | 27 | 9 | 1.62
Czechia | 166 | 58 | 1.92
Denmark | 170 | 68 | 0.6
Estonia | 55 | 15 | 138.33
Finland | 206 | 90 | 0.98
France | 1,239 | 383 | 2.82
Germany | 2,218 | 619 | 11.24
Greece | 173 | 67 | 2.35
Hungary | 66 | 22 | 2.25
Ireland | 601 | 281 | 1.33
Italy | 561 | 226 | 1.88
Latvia | 47 | 19 | 1.93
Lithuania | 46 | 20 | 1.88
Luxembourg | 33 | 17 | 11.5
Malta | 10 | 8 | 64.88
Netherlands | 1,197 | 422 | 1.49
Poland | 638 | 199 | 2.58
Portugal | 200 | 83 | 2.03
Romania | 203 | 76 | 2.35
Slovakia | 37 | 20 | 2.74
Slovenia | 40 | 16 | 1.05
Spain | 1,271 | 519 | 4.4
Sweden | 379 | 132 | 0.84

Sensitive Media Action Complaints

Complaints Received

19

18

2

2

1

9

5

2

15

131

254

12

3

13

95

15

4

2

162

38

18

30

12

4

56

65

Sensitive Media Action Complaints

Overturned Appeals

13

14

1

1

0

4

2

1

7

64

185

3

2

10

68

15

3

0

121

18

11

23

5

1

38

34

Sensitive Media Action Complaints

Median Time to Resolve

0.15

0.5

0.08

0.13

0.73

0.8

0.13

0.51

0.33

0.1

0.16

0.06

0.2

0.23

0.18

0.28

0.85

2.47

0.16

0.29

0.24

0.13

0.58

0.21

0.07

0.05

INDICATORS OF ACCURACY FOR CONTENT MODERATION

VISIBILITY FILTERING INDICATORS

Metric: Appeal Rate

Columns: (A) Automated Means / Hateful Conduct; (B) Manual / Abuse & Harassment; (C) Manual / Hateful Conduct; (D) Manual / Violent Speech.

Member State | A | B | C | D
Austria | 2.20% | 1.47% | 1.83% | 2.63%
Belgium | 1.75% | 1.18% | 1.09% | 2.99%
Bulgaria | 1.55% | 0.00% | 0.00% | 1.69%
Croatia | 1.92% | 0.00% | 1.72% | 1.89%
Cyprus | 1.36% | 0.00% | 2.44% | 7.69%
Czechia | 1.78% | 1.27% | 2.68% | 1.10%
Denmark | 2.33% | 0.00% | 0.75% | 0.00%
Estonia | 1.46% | 0.00% | 2.63% | 0.00%
Finland | 1.51% | 2.35% | 1.62% | 1.11%
France | 2.38% | 0.56% | 1.62% | 0.56%
Germany | 1.92% | 0.54% | 1.32% | 1.93%
Greece | 1.82% | 1.08% | 2.33% | 3.33%
Hungary | 1.48% | 0.00% | 1.85% | 0.00%
Ireland | 2.44% | 1.42% | 1.48% | 1.50%
Italy | 1.73% | 0.49% | 2.36% | 1.31%
Latvia | 1.54% | 6.25% | 0.00% | 0.00%
Lithuania | 1.34% | 0.00% | 0.00% | 3.57%
Luxembourg | 0.80% | 0.00% | 2.13% | 0.00%
Malta | 1.38% | 0.00% | 0.00% | 0.00%
Netherlands | 1.88% | 0.50% | 1.39% | 1.44%
Poland | 1.46% | 0.29% | 0.87% | 0.85%
Portugal | 1.49% | 0.00% | 3.06% | 1.09%
Romania | 1.65% | 0.95% | 2.08% | 1.94%
Slovakia | 2.47% | 0.00% | 0.00% | 0.00%
Slovenia | 1.36% | 0.00% | 3.13% | 6.67%
Spain | 2.40% | 1.25% | 1.51% | 2.48%
Sweden | 2.29% | 0.28% | 1.70% | 1.70%

Note: Cells that are blank mean that there was no enforcement. For cells containing a ‘0.00%’ value, there were no cases of successful appeals or overturns.

Metric

Enforcement

Policy

Austria

Belgium

Bulgaria

Croatia

Cyprus

Czechia

Denmark

Estonia

Finland

France

Germany

Greece

Hungary

Ireland

Italy

Latvia

Lithuania

Luxembourg

Malta

Netherlands

Poland

Portugal

Romania

Slovakia

Slovenia

Spain

Sweden

Overturn Rate

Automated Means

Hateful Conduct

24.05%

32.11%

35.14%

29.31%

55.56%

31.34%

34.29%

31.25%

34.15%

28.02%

36.47%

31.43%

28.21%

33.13%

36.05%

50.00%

30.00%

16.67%

50.00%

36.49%

29.70%

26.32%

26.02%

30.30%

21.74%

30.43%

42.37%

Manual Review

Abuse & Harassment

0.00%

66.67%

33.33%

100.00%

54.55%

72.73%

100.00%

25.00%

50.00%

100.00%

40.00%

33.33%

0.00%

65.38%

100.00%

Manual Review

Hateful Conduct

0.00%

50.00%

0.00%

0.00%

33.33%

0.00%

100.00%

0.00%

25.00%

13.64%

0.00%

0.00%

57.14%

15.38%

0.00%

5.56%

20.00%

0.00%

0.00%

0.00%

35.29%

33.33%

Manual Review

Violent Speech

0.00%

25.00%

0.00%

100.00%

0.00%

0.00%

0.00%

25.00%

21.43%

0.00%

0.00%

25.00%

0.00%

0.00%

0.00%

0.00%

100.00%

0.00%

0.00%

25.00%

Note: Cells that are blank mean that there was no enforcement. For cells containing a ‘0.00%’ value, there were no cases of successful appeals or overturns.

INDICATORS OF ACCURACY FOR CONTENT REMOVAL

Appeal Rate

Columns (left to right, by language of the content): Bulgarian, Croatian, Czech, Danish, Dutch, English, Finnish, French, German, Greek, Hungarian, Irish, Italian, Latvian, Polish, Portuguese, Romanian, Slovak, Slovenian, Spanish, Swedish. Values are listed in column order; blank cells are omitted.

Auto Enforced:
- Abuse & Harassment (5 of 21 columns): 36.81% | 9.09% | 0.00% | 0.00% | 8.93%
- Child Sexual Exploitation (8 of 21 columns): 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00%
- Civic Integrity (1 of 21 columns): 0.00%
- Hateful Conduct (11 of 21 columns): 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00%
- Non-Consensual Nudity (17 of 21 columns): 0.00% | 0.00% | 25.00% | 11.11% | 2.69% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 12.50% | 0.00% | 0.00% | 0.00% | 9.38% | 0.00%
- Other (15 of 21 columns): 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00%
- Private Information & media (2 of 21 columns): 0.00% | 0.00%
- Sensitive Media (3 of 21 columns): 0.00% | 0.00% | 0.00%
- Violent Speech (all 21 columns): 0.00% | 2.94% | 4.35% | 0.00% | 3.13% | 4.99% | 0.00% | 5.23% | 5.99% | 3.70% | 0.00% | 0.00% | 2.16% | 0.00% | 3.69% | 3.47% | 0.00% | 0.00% | 0.00% | 6.43% | 4.61%

Manual Enforced:
- Abuse & Harassment (19 of 21 columns): 4.76% | 0.00% | 7.02% | 20.00% | 5.44% | 1.27% | 3.03% | 6.62% | 7.01% | 7.56% | 4.76% | 3.90% | 0.00% | 3.97% | 10.92% | 2.60% | 0.00% | 8.70% | 5.71%
- Child Sexual Exploitation (5 of 21 columns): 0.00% | 0.00% | 0.00% | 0.00% | 0.00%
- Civic Integrity (1 of 21 columns): 0.00%
- Deceased Individuals (7 of 21 columns): 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00%
- Distribution of Hacked Materials (3 of 21 columns): 0.00% | 0.00% | 100.00%
- Hateful Conduct (9 of 21 columns): 0.00% | 5.00% | 0.00% | 0.00% | 100.00% | 0.00% | 0.00% | 0.00% | 0.00%
- Illegal or certain regulated goods and services (15 of 21 columns): 0.00% | 0.00% | 0.00% | 0.00% | 0.01% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00%
- Intellectual property infringements (1 of 21 columns): 0.00%
- Non-Consensual Nudity (18 of 21 columns): 0.00% | 0.00% | 0.00% | 50.00% | 4.26% | 0.76% | 0.00% | 0.91% | 0.70% | 2.08% | 0.00% | 1.75% | 3.57% | 0.98% | 0.00% | 0.00% | 2.54% | 0.00%
- Perpetrators of Violent Attacks (6 of 21 columns): 0.00% | 4.55% | 0.00% | 0.00% | 0.00% | 0.00%
- Private Information & media (16 of 21 columns): 0.00% | 0.00% | 0.00% | 0.00% | 5.88% | 0.00% | 6.84% | 13.86% | 0.00% | 3.33% | 0.00% | 18.18% | 0.00% | 0.00% | 6.01% | 0.00%
- Sensitive Media (19 of 21 columns): 0.00% | 0.00% | 0.00% | 0.00% | 23.08% | 2.18% | 0.00% | 9.03% | 12.09% | 0.00% | 0.00% | 4.55% | 0.00% | 3.36% | 7.14% | 0.00% | 0.00% | 8.39% | 7.69%
- Suicide & Self Harm (19 of 21 columns): 0.00% | 0.00% | 0.00% | 0.00% | 7.89% | 6.23% | 5.88% | 7.20% | 15.67% | 3.70% | 0.00% | 2.31% | 0.00% | 1.54% | 10.91% | 0.00% | 0.00% | 7.10% | 0.00%
- Synthetic & Manipulated Media (1 of 21 columns): 0.00%
- Violent & Hateful Entities (1 of 21 columns): 0.00%
- Violent Speech (20 of 21 columns): 0.00% | 0.99% | 7.55% | 2.61% | 4.04% | 3.95% | 0.63% | 5.69% | 6.40% | 1.24% | 1.94% | 0.00% | 3.99% | 0.00% | 1.51% | 2.95% | 0.00% | 4.35% | 5.02% | 2.88%

Note: Cells that are blank mean that there was no enforcement. For cells containing a ‘0.00%’ value, there were no cases of successful appeals or overturns.

Overturn Rate

Columns (left to right, by language of the content): Bulgarian, Croatian, Czech, Danish, Dutch, English, Estonian, Finnish, French, German, Greek, Hungarian, Irish, Italian, Latvian, Polish, Portuguese, Romanian, Slovak, Slovenian, Spanish, Swedish. Values are listed in column order; blank cells are omitted.

Auto Enforced:
- Abuse & Harassment (3 of 22 columns): 45.28% | 0.00% | 20.00%
- Child Sexual Exploitation: all cells blank
- Civic Integrity: all cells blank
- Hateful Conduct: all cells blank
- Non-Consensual Nudity (5 of 22 columns): 0.00% | 0.00% | 83.33% | 0.00% | 33.33%
- Other: all cells blank
- Private Information & media: all cells blank
- Sensitive Media: all cells blank
- Violent Speech (12 of 22 columns): 100.00% | 100.00% | 81.82% | 60.46% | 71.64% | 47.06% | 50.00% | 80.00% | 55.56% | 60.00% | 59.61% | 71.43%

Manual Enforced:
- Abuse & Harassment (16 of 22 columns): 0.00% | 0.00% | 83.33% | 0.00% | 12.12% | 0.00% | 20.00% | 15.56% | 33.33% | 0.00% | 31.58% | 0.00% | 30.77% | 50.00% | 9.28% | 0.00%
- Child Sexual Exploitation: all cells blank
- Civic Integrity: all cells blank
- Deceased Individuals: all cells blank
- Distribution of Hacked Materials (1 of 22 columns): 0.00%
- Hateful Conduct (2 of 22 columns): 0.00% | 0.00%
- Illegal or certain regulated goods and services (1 of 22 columns): 0.00%
- Intellectual property infringements: all cells blank
- Non-Consensual Nudity (10 of 22 columns): 0.00% | 50.00% | 10.34% | 25.00% | 0.00% | 0.00% | 50.00% | 0.00% | 0.00% | 20.00%
- Perpetrators of Violent Attacks (1 of 22 columns): 0.00%
- Private Information & media (6 of 22 columns): 10.34% | 30.77% | 14.29% | 0.00% | 50.00% | 27.27%
- Sensitive Media (9 of 22 columns): 16.67% | 4.35% | 0.00% | 9.09% | 0.00% | 0.00% | 0.00% | 7.69% | 50.00%
- Suicide & Self Harm (10 of 22 columns): 50.00% | 38.69% | 0.00% | 48.28% | 20.25% | 100.00% | 20.00% | 0.00% | 50.00% | 25.00%
- Synthetic & Manipulated Media: all cells blank
- Violent & Hateful Entities: all cells blank
- Violent Speech (16 of 22 columns): 100.00% | 4.76% | 25.00% | 22.22% | 22.41% | 0.00% | 25.96% | 21.95% | 0.00% | 0.00% | 23.17% | 32.35% | 35.29% | 0.00% | 30.15% | 66.67%

Note: Cells that are blank mean that there was no enforcement. For cells containing a ‘0.00%’ value, there were no cases of successful appeals or overturns.

INDICATORS OF ACCURACY FOR SUSPENSIONS

Appeal Rate

Columns (left to right): Austria, Belgium, Bulgaria, Croatia, Cyprus, Czechia, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Ireland, Italy, Latvia, Lithuania, Luxembourg, Malta, Netherlands, Poland, Portugal, Romania, Slovakia, Slovenia, Spain, Sweden. Values are listed in column order; blank cells are omitted.

Auto Enforced:
- Abuse & Harassment: all cells blank
- Ban Evasion (9 of 27 columns): 7.69% | 50.00% | 21.74% | 28.13% | 25.00% | 100.00% | 20.00% | 20.00% | 8.70%
- Child Sexual Exploitation (all 27 columns): 17.88% | 21.26% | 12.44% | 14.83% | 24.32% | 22.04% | 20.31% | 19.19% | 13.76% | 5.33% | 19.03% | 23.08% | 25.73% | 13.49% | 7.12% | 7.75% | 14.29% | 23.26% | 13.56% | 13.69% | 33.94% | 17.38% | 20.31% | 21.77% | 18.75% | 11.95% | 18.15%
- CWC for various countries for illegal activity: all cells blank
- Illegal or certain regulated goods and services (1 of 27 columns): 0.30%
- Misleading & Deceptive Identities (4 of 27 columns): 0.40% | 0.09% | 0.05% | 0.10%
- Non-Consensual Nudity (1 of 27 columns): 33.33%
- Other (12 of 27 columns): 18.18% | 25.00% | 4.22% | 1.44% | 12.50% | 1.13% | 3.95% | 9.43% | 12.50% | 50.00% | 4.44% | 5.00%
- Perpetrators of Violent Attacks (18 of 27 columns): 50.00% | 5.26% | 26.09% | 35.71% | 33.33% | 55.56% | 29.03% | 9.43% | 40.00% | 26.67% | 34.78% | 40.00% | 22.22% | 26.42% | 28.57% | 9.09% | 16.00% | 11.11%
- Platform Manipulation & Spam (all 27 columns): 0.51% | 0.52% | 0.38% | 0.23% | 0.29% | 0.31% | 0.57% | 0.24% | 0.21% | 0.37% | 0.46% | 0.32% | 0.35% | 0.26% | 0.18% | 0.16% | 0.28% | 0.31% | 0.10% | 0.55% | 0.57% | 0.40% | 0.51% | 0.41% | 0.26% | 0.29% | 0.55%
- Sensitive Media (1 of 27 columns): 11.11%
- Violent & Hateful Entities (23 of 27 columns): 18.52% | 5.31% | 8.70% | 11.11% | 15.38% | 8.11% | 28.57% | 6.67% | 11.17% | 17.45% | 13.73% | 8.33% | 12.90% | 11.93% | 25.00% | 7.14% | 50.00% | 11.46% | 7.37% | 8.33% | 6.56% | 13.89% | 15.89%

Manual Enforced:
- Abuse & Harassment (26 of 27 columns): 14.13% | 9.64% | 3.72% | 3.45% | 7.14% | 4.50% | 10.10% | 6.58% | 7.76% | 3.73% | 2.49% | 8.53% | 1.78% | 5.19% | 3.17% | 2.20% | 3.83% | 11.43% | 3.17% | 8.57% | 3.82% | 8.54% | 2.52% | 2.41% | 4.97% | 8.21%
- Ban Evasion (12 of 27 columns): 75.00% | 37.50% | 33.33% | 32.50% | 30.00% | 50.00% | 50.00% | 50.00% | 27.78% | 11.11% | 100.00% | 45.45%
- Child Sexual Exploitation (26 of 27 columns): 53.85% | 61.29% | 60.00% | 87.50% | 33.33% | 37.50% | 56.25% | 50.00% | 44.44% | 19.93% | 54.43% | 38.46% | 38.89% | 53.85% | 19.73% | 27.78% | 35.71% | 100.00% | 54.49% | 47.44% | 38.71% | 48.39% | 80.00% | 20.00% | 32.12% | 62.50%
- CWC for various countries for illegal activity: all cells blank
- Financial Scam: all cells blank
- Hateful Conduct (20 of 27 columns): 50.00% | 57.14% | 100.00% | 100.00% | 50.00% | 16.67% | 46.90% | 48.98% | 40.00% | 25.00% | 42.86% | 41.67% | 37.50% | 37.50% | 36.36% | 38.46% | 50.00% | 14.29% | 27.78% | 45.45%
- Help with my compromised account: all cells blank
- Illegal or certain regulated goods and services (10 of 27 columns): 3.16% | 0.41% | 6.25% | 1.73% | 1.37% | 0.52% | 1.61% | 3.52% | 1.14% | 2.53%
- Intellectual property infringements: all cells blank
- Misleading & Deceptive Identities (2 of 27 columns): 0.42% | 0.28%
- Non-Consensual Nudity (20 of 27 columns): 20.00% | 30.56% | 8.00% | 50.00% | 37.50% | 5.56% | 7.14% | 11.11% | 20.19% | 24.47% | 25.00% | 21.05% | 10.92% | 7.69% | 14.03% | 18.75% | 28.00% | 13.43% | 10.67% | 13.73%
- Other (11 of 27 columns): 100.00% | 50.00% | 50.00% | 6.78% | 27.27% | 15.38% | 52.63% | 18.18% | 20.00% | 22.73% | 25.00%
- Perpetrators of Violent Attacks (4 of 27 columns): 100.00% | 50.00% | 25.00% | 33.33%
- Platform Manipulation & Spam (all 27 columns): 28.07% | 26.71% | 12.37% | 7.35% | 32.00% | 18.35% | 18.99% | 29.41% | 20.31% | 14.98% | 13.83% | 26.21% | 21.59% | 15.95% | 12.96% | 9.59% | 30.43% | 22.22% | 33.33% | 22.25% | 24.84% | 14.39% | 24.57% | 12.50% | 22.22% | 11.03% | 19.17%
- Private Information & media (11 of 27 columns): 66.67% | 100.00% | 66.67% | 8.82% | 17.65% | 40.00% | 22.22% | 27.27% | 50.00% | 18.75% | 50.00%
- Sensitive Media (3 of 27 columns): 100.00% | 33.33% | 14.29%
- Suicide & Self Harm (16 of 27 columns): 100.00% | 50.00% | 50.00% | 100.00% | 33.33% | 50.00% | 41.67% | 38.46% | 66.67% | 66.67% | 50.00% | 50.00% | 40.00% | 33.33% | 100.00% | 62.50%
- Username Squatting: all cells blank
- Violent & Hateful Entities (15 of 27 columns): 33.33% | 37.50% | 33.33% | 33.33% | 20.00% | 17.78% | 16.18% | 25.00% | 100.00% | 25.00% | 100.00% | 7.50% | 11.11% | 16.67% | 17.65%
- Violent Speech (all 27 columns): 30.95% | 51.14% | 37.50% | 36.36% | 50.00% | 45.00% | 56.10% | 62.50% | 63.41% | 56.25% | 56.54% | 52.94% | 51.61% | 57.01% | 52.38% | 38.46% | 41.67% | 57.14% | 50.00% | 58.33% | 56.20% | 45.68% | 59.77% | 28.57% | 44.44% | 57.54% | 51.56%

Note: Cells that are blank mean that there was no enforcement. For cells containing a ‘0.00%’ value, there were no cases of successful appeals or overturns.

Overturn Rate

Columns (left to right): Austria, Belgium, Bulgaria, Croatia, Cyprus, Czechia, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Ireland, Italy, Latvia, Lithuania, Luxembourg, Malta, Netherlands, Poland, Portugal, Romania, Slovakia, Slovenia, Spain, Sweden. Values are listed in column order; blank cells are omitted.

Auto Enforced:
- Abuse & Harassment: all cells blank
- Ban Evasion (9 of 27 columns): 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00%
- Child Sexual Exploitation (all 27 columns): 8.94% | 8.64% | 10.96% | 11.76% | 11.11% | 2.01% | 11.81% | 36.84% | 9.17% | 8.55% | 9.46% | 9.04% | 6.96% | 10.00% | 12.46% | 16.67% | 25.93% | 0.00% | 12.50% | 10.84% | 3.97% | 8.87% | 5.56% | 2.90% | 0.00% | 8.73% | 6.62%
- CWC for various countries for illegal activity: all cells blank
- Illegal or certain regulated goods and services (1 of 27 columns): 0.00%
- Misleading & Deceptive Identities (4 of 27 columns): 0.00% | 0.00% | 0.00% | 0.00%
- Non-Consensual Nudity (1 of 27 columns): 0.00%
- Other (12 of 27 columns): 0.00% | 0.00% | 12.50% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00%
- Perpetrators of Violent Attacks (18 of 27 columns): 0.00% | 100.00% | 16.67% | 20.00% | 0.00% | 0.00% | 44.44% | 0.00% | 50.00% | 25.00% | 0.00% | 50.00% | 50.00% | 14.29% | 50.00% | 0.00% | 50.00% | 0.00%
- Platform Manipulation & Spam (all 27 columns): 3.08% | 2.33% | 1.74% | 1.35% | 1.32% | 2.34% | 2.62% | 3.45% | 5.85% | 3.14% | 3.86% | 1.67% | 1.71% | 2.81% | 1.88% | 1.58% | 1.61% | 4.20% | 0.00% | 3.83% | 3.20% | 2.19% | 2.98% | 1.95% | 0.45% | 2.06% | 4.43%
- Sensitive Media (1 of 27 columns): 0.00%
- Violent & Hateful Entities (23 of 27 columns): 0.00% | 0.00% | 16.67% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 12.90% | 12.50% | 14.29% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 8.06% | 14.29% | 33.33% | 25.00% | 26.67% | 11.76%

Manual Enforced:
- Abuse & Harassment (26 of 27 columns): 19.23% | 4.65% | 6.67% | 0.00% | 0.00% | 0.00% | 4.76% | 0.00% | 0.00% | 6.35% | 6.31% | 1.96% | 14.29% | 8.33% | 8.11% | 0.00% | 0.00% | 0.00% | 5.91% | 7.08% | 3.70% | 1.92% | 0.00% | 50.00% | 7.26% | 3.51%
- Ban Evasion (12 of 27 columns): 0.00% | 0.00% | 0.00% | 15.38% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00%
- Child Sexual Exploitation (26 of 27 columns): 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 1.55% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 2.06% | 0.00% | 8.33% | 0.00% | 0.00% | 0.00% | 4.55% | 0.00%
- Financial Scam: all cells blank
- Hateful Conduct (20 of 27 columns): 0.00% | 12.50% | 16.67% | 0.00% | 100.00% | 0.00% | 9.43% | 29.17% | 0.00% | 0.00% | 16.67% | 20.00% | 22.22% | 8.33% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 40.00%
- Help with my compromised account: all cells blank
- Illegal or certain regulated goods and services (10 of 27 columns): 0.00% | 0.00% | 12.50% | 0.70% | 0.00% | 0.00% | 0.00% | 3.13% | 0.00% | 0.00%
- Intellectual property infringements: all cells blank
- Misleading & Deceptive Identities (2 of 27 columns): 0.00% | 0.00%
- Non-Consensual Nudity (20 of 27 columns): 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 2.88% | 5.80% | 20.00% | 0.00% | 0.00% | 0.00% | 9.68% | 8.33% | 0.00% | 0.00% | 0.00% | 0.00%
- Other (11 of 27 columns): 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00%
- Perpetrators of Violent Attacks (4 of 27 columns): 0.00% | 0.00% | 0.00% | 0.00%
- Platform Manipulation & Spam (all 27 columns): 0.00% | 2.56% | 0.00% | 0.00% | 0.00% | 10.00% | 6.67% | 0.00% | 3.85% | 2.09% | 1.82% | 0.00% | 0.00% | 2.70% | 3.06% | 0.00% | 14.29% | 0.00% | 0.00% | 2.38% | 1.71% | 0.00% | 0.00% | 0.00% | 0.00% | 1.49% | 2.70%
- Private Information & media (11 of 27 columns): 50.00% | 0.00% | 0.00% | 33.33% | 0.00% | 0.00% | 50.00% | 0.00% | 0.00% | 0.00% | 100.00%
- Sensitive Media (3 of 27 columns): 0.00% | 0.00% | 0.00%
- Suicide & Self Harm (16 of 27 columns): 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 40.00% | 10.00% | 0.00% | 25.00% | 40.00% | 0.00% | 25.00% | 0.00% | 0.00% | 40.00%
- Username Squatting: all cells blank
- Violent & Hateful Entities (15 of 27 columns): 0.00% | 0.00% | 100.00% | 0.00% | 0.00% | 25.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00%
- Violent Speech (all 27 columns): 38.46% | 28.89% | 0.00% | 58.33% | 33.33% | 50.00% | 8.70% | 0.00% | 42.31% | 46.17% | 33.19% | 27.78% | 37.50% | 45.90% | 29.29% | 20.00% | 40.00% | 50.00% | 0.00% | 41.50% | 35.17% | 29.73% | 28.85% | 25.00% | 25.00% | 43.07% | 37.88%

Note: Cells that are blank mean that there was no enforcement. For cells containing a ‘0.00%’ value, there were no cases of successful appeals or overturns.

Art. 24.2: Average Monthly Active Recipients, 1 April to 30 June 2025

| Country | Logged Out Users | Logged In Users | Total |
| --- | --- | --- | --- |
| Austria | 402,097 | 1,068,900 | 1,470,998 |
| Belgium | 711,034 | 1,853,664 | 2,564,698 |
| Bulgaria | 209,173 | 584,405 | 793,578 |
| Croatia | 312,204 | 563,217 | 875,420 |
| Cyprus | 72,723 | 215,430 | 288,153 |
| Czechia | 672,875 | 1,525,899 | 2,198,774 |
| Denmark | 272,137 | 869,677 | 1,141,815 |
| Estonia | 74,715 | 201,408 | 276,124 |
| Finland | 508,199 | 1,488,415 | 1,996,615 |
| France | 4,383,667 | 14,017,874 | 18,401,541 |
| Germany | 3,531,872 | 11,397,270 | 14,929,142 |
| Greece | 609,078 | 1,396,471 | 2,005,549 |
| Hungary | 343,191 | 931,319 | 1,274,509 |
| Ireland | 763,785 | 1,914,628 | 2,678,414 |
| Italy | 1,829,465 | 6,507,061 | 8,336,526 |
| Latvia | 98,072 | 262,100 | 360,172 |
| Lithuania | 117,964 | 313,764 | 431,729 |
| Luxembourg | 48,247 | 140,041 | 188,288 |
| Malta | 29,101 | 98,531 | 127,632 |
| Netherlands | 2,241,404 | 6,000,726 | 8,242,130 |
| Poland | 2,657,068 | 6,149,196 | 8,806,264 |
| Portugal | 535,877 | 1,845,028 | 2,380,906 |
| Romania | 395,672 | 1,268,554 | 1,664,226 |
| Slovakia | 160,939 | 380,635 | 541,575 |
| Slovenia | 194,339 | 343,703 | 538,041 |
| Spain | 4,138,000 | 12,546,695 | 16,684,695 |
| Sweden | 689,660 | 2,117,076 | 2,806,736 |
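In this table, Total is the sum of the Logged Out and Logged In columns; small discrepancies (Austria's total is one higher than the sum of its two components) suggest the per-country components are rounded from finer-grained averages. A quick consistency check in Python, using three rows transcribed from the table above and a small editorial tolerance for that rounding:

```python
# Consistency check for the Art. 24.2 table: Total ≈ Logged Out + Logged In.
# The ±2 tolerance is an editorial assumption to absorb apparent rounding
# (e.g. Austria's components sum to 1,470,997 against a reported 1,470,998).
amar = {
    "Austria": (402_097, 1_068_900, 1_470_998),
    "France": (4_383_667, 14_017_874, 18_401_541),
    "Sweden": (689_660, 2_117_076, 2_806_736),
}

for country, (logged_out, logged_in, total) in amar.items():
    gap = total - (logged_out + logged_in)
    assert abs(gap) <= 2, f"{country}: unexpected gap of {gap}"
    print(f"{country}: sum {logged_out + logged_in:,} vs reported {total:,}")
```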

Further Information on Suspensions

During the reporting period, 1 April 2025 to 30 June 2025, we took zero actions for the provision of manifestly unfounded reports or complaints, and zero actions under the category of manifestly illegal content. While manifestly illegal content is not a category under which we took action during the reporting period, we suspended 89,151 accounts for violating our Child Sexual Exploitation policy and 3,248 accounts for violating our Violent & Hateful Entities policy.

Out-of-court dispute settlement body disputes

To date, X has processed zero out-of-court dispute settlement body disputes.


Reports received by trusted flaggers

During the reporting period, we received 185 reports from trusted flaggers approved under Article 22 DSA. As soon as the information identifying newly awarded Article 22 DSA trusted flaggers is published, we enrol them in our trusted flaggers program via their email, username, and account, which ensures that their reports are prioritised for human review.
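Functionally, that enrolment amounts to tagging reports from recognised trusted flaggers so they are surfaced to human reviewers ahead of ordinary reports. A minimal sketch of that kind of two-tier routing follows; every identifier is hypothetical and nothing below describes X's actual systems:

```python
import heapq

# Hypothetical two-tier review queue: trusted-flagger reports (priority 0)
# are surfaced to human reviewers before ordinary user reports (priority 1).
TRUSTED_FLAGGERS = {"flagger@example.org"}  # enrolled by email/username/account

class ReviewQueue:
    def __init__(self) -> None:
        self._heap: list[tuple[int, int, str]] = []
        self._seq = 0  # insertion counter keeps FIFO order within a tier

    def submit(self, reporter_email: str, report: str) -> None:
        priority = 0 if reporter_email in TRUSTED_FLAGGERS else 1
        heapq.heappush(self._heap, (priority, self._seq, report))
        self._seq += 1

    def next_for_human_review(self) -> str:
        return heapq.heappop(self._heap)[2]

queue = ReviewQueue()
queue.submit("user@example.com", "ordinary report")
queue.submit("flagger@example.org", "trusted-flagger report")
print(queue.next_for_human_review())  # -> trusted-flagger report
```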