Opening Remarks

X was founded on a commitment to transparency. We also want people on X to feel they are able to freely express themselves, while also ensuring that conversations on X are safe, legal and unregretted. When you think about some of the world’s most powerful moments, movements, and memes, they prevailed because people had a place to express their ideas, challenge conventional norms, and demand better. That’s why free expression matters.

We also believe, and we’re proving, that free expression and platform safety can coexist. X is reflective of real conversations happening in the world, and that sometimes includes perspectives that may be offensive, controversial, and/or narrow-minded to others. While we welcome everyone to express themselves on X, we will not tolerate behaviour that harasses, threatens, dehumanises or uses fear to silence the voices of others. Our TIUC Terms of Service and Rules, which are continually reviewed and informed by feedback from the people who use X, help ensure everyone feels safe expressing themselves.

We are committed to fair, informative, responsive, and accountable enforcement. In the past, we too often got caught in a binary paradigm of whether to leave content up, or take it down.

To be clear, we do continue to remove dangerous and illegal content and accounts. X also responds to reports of illegal content and takes action on content that violates local laws. But what we’ve learned is that for other types of content, a range of reasonable, proportionate, and effective approaches, ones that also seek to balance fundamental rights, can be appropriate.

You can think about how we moderate on X in three buckets: content and accounts that remain, are restricted, and are removed.

  1. Remain: The overwhelming majority of content on X is healthy—meaning it does not violate our TIUC Terms of Service and Rules or our policies such as Hateful Conduct, Abuse & Harassment, and more. Keep in mind: just because a post doesn’t violate a policy, doesn’t mean everyone will like it.
  2. Restrict: This is where our new Freedom of Speech, Not Reach enforcement philosophy is used. For content that may be interpreted as potentially violating our policies—meaning it’s awful, but lawful—we restrict the reach of posts by making the content less discoverable, and we’re making this action more transparent to everyone. When we decide to restrict a piece of content, a restricted reach label is applied, the ability to engage with the content is taken away, and its reach is restricted to views occurring directly on the author's profile. Restricted reach labels are not in use for all policies: they initially applied only to Hateful Conduct, but we have since expanded them to our Abuse & Harassment, Civic Integrity, and Violent Speech policies. That said, restricting content—or even a whole account—is something we’ve done for a long time, and we have a range of enforcement options for the variety of use cases that we face every day. For example, we may also place an account in read-only mode, temporarily limiting its ability to post, Repost, or Like.
  3. Remove: If reported content is illegal, we withhold access to it in the respective jurisdictions. We also know that certain types of content, such as targeted violent threats, targeted harassment, or privacy violations, can be extremely harmful if not removed, so we either suspend the account outright or require that the content be deleted before the account can return to the platform.

We've made significant progress towards improving the safeguards that protect our users and our platform, but we know that this critical work will never be done. X is committed to ensuring the safety and health of the platform, and to fulfilling its obligations under the DSA, through our continued investment in human and automated protections.

This report covers the content moderation activities of X’s international entity, Twitter International Unlimited Company (TIUC), under the Digital Services Act (DSA) from August 28, 2023 to October 20, 2023.

We refer to “notices” as defined in the DSA as “user reports” and “reports”.

Description of our Content Moderation Practices

X's purpose is to serve the public conversation. Violence, harassment, and other similar types of behaviour discourage people from expressing themselves, and ultimately diminish the value of global public conversation. Our rules are designed to ensure all people can participate in the public conversation freely and safely.

X has policies protecting user safety as well as platform and account integrity. The X Rules and Policies are publicly accessible on our Help Center, we make sure they are written in an easily understandable way, and we update the Help Center whenever we modify our rules.

Additionally, you will find explanations in our Help Center on our policy development process and rules enforcement philosophy. Creating a new policy or making a policy change requires in-depth research around trends in online behaviour, developing clear external language that sets expectations around what’s allowed, and creating enforcement guidance for reviewers that can be scaled across millions of pieces of content and accounts. Our policies are dynamic, and we continually review them to ensure that they are up-to-date, necessary, and proportional.

We consider diverse perspectives around the changing nature of online speech, including how our Rules are applied and interpreted in different cultural and social contexts. We then test the proposed rule with samples of potentially violative content to measure the policy effectiveness, and once we determine it meets our expectations, we build and operationalise product changes to support the update. Finally, we train our global review teams, update the X Rules, and start enforcing the relevant policy.

While we aim to enable open discussion of differing opinions and viewpoints, we are committed to the objective, timely, and consistent enforcement of our rules. This approach allows many forms of speech to exist on our platform and, in particular, promotes counterspeech: speech that presents facts to correct misstatements or misperceptions, points out hypocrisy or contradictions, warns of offline or online consequences, denounces hateful or dangerous speech, or helps change minds and disarm.

Thus, context matters. When determining whether to take enforcement action, we may consider a number of factors, including (but not limited to) whether:

When we take enforcement actions, we may do so either on a specific piece of content (e.g., an individual post or Direct Message) or on an account, and we may employ a combination of these options. In most cases, we take these actions because the behaviour violates the X Rules.

X strives to provide an environment where people can feel free to express themselves. If abusive behaviour happens, we want to make it easy for people to report it to us. EU users can also report any violation of our rules or their local laws, no matter where such violations appear, and we’ve recently improved our reporting flow to make it easier to use in several key ways. It now takes fewer steps to report most content, with extra steps only when they help us take the right action. Reporters now see clearer choices that map directly to our policies and how they’re communicated externally. We’ve also included new options that were previously only available at help.x.com.

EXERCISE OF MODERATION

To enforce our rules, we use a combination of machine learning and human review. Our systems surface content to human moderators, who use important context to make decisions about potential rule violations. This work is led by an international, cross-functional team with 24-hour coverage and the ability to work in multiple languages. We also have a complaints process for any potential errors that may occur.

Examples of actions we may take:

To ensure that our human reviewers are prepared to perform their duties, we provide them with a robust support system: each reviewer goes through extensive training and refreshers, is equipped with a suite of tools that enables them to do their job effectively, and has access to a range of wellness initiatives. For further information on our human review resources, see the section titled “Human resources dedicated to Content Moderation”.

We always aim to exercise moderation with transparency. Where our systems or teams take action against content or an account for violating our rules, or in response to a valid and properly scoped request from an authorised entity in a given country, we strive to provide context to users. Our Help Center article explains the notices that users may encounter following actions taken. We will also promptly notify affected users about legal requests to withhold content, including providing a copy of the original request, unless we are legally prohibited from doing so.

COOPERATION WITH PUBLIC AUTHORITIES

Cooperation with law enforcement authorities within the EU is crucial to X. We work closely with law enforcement, and we do our best to assist them in identifying users whose content may be in violation of local laws. Law enforcement authorities and agencies can find dedicated guidelines on our Help Center and can reach out to X using a dedicated form.

TIUC is headquartered in Dublin, Ireland, and processes law enforcement requests relating to users who live in the EU. We receive and respond to requests related to user data from EU law enforcement agencies and judicial authorities wherever there is a valid legal process. We have existing processes in place, including a dedicated online portal for law enforcement, and expert teams with global coverage across all timezones that review and respond to reports in diverse languages.

Law enforcement can use our dedicated portal to submit their legal demands and can request the following information:

Our Own Initiative Content Moderation Activities

AUTOMATED CONTENT MODERATION

X employs a combination of heuristics and machine learning algorithms to automatically detect content that violates the X Rules and policies enforced on our platform.

MACHINE LEARNING MODELS

We use combinations of natural language processing models, image processing models, and other sophisticated machine learning methods to detect potentially violative content. These models vary in complexity and in the outputs they produce. For example, the model used to detect abuse on the platform is trained on abuse violations detected in the past. Content flagged by these machine learning models is either reviewed by human content reviewers before an action is taken or, in some cases, automatically actioned based on model output.
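To make that routing concrete, here is a minimal sketch, assuming a hypothetical `score_abuse` model and illustrative thresholds (none of these names or values come from X's systems), of how a classifier score can either trigger an automatic action or queue a post for human review:

```python
# A minimal sketch (not X's production system) of how a model score might
# route content: high-confidence detections are actioned automatically,
# mid-range scores are queued for human review, and the rest pass through.
# The thresholds and the score_abuse model are illustrative assumptions.
from dataclasses import dataclass

AUTO_ACTION_THRESHOLD = 0.98   # hypothetical: act without review
HUMAN_REVIEW_THRESHOLD = 0.60  # hypothetical: surface to a moderator

@dataclass
class Post:
    post_id: str
    text: str

def score_abuse(post: Post) -> float:
    """Stand-in for an NLP model trained on past abuse violations."""
    return 0.0  # placeholder score

def route(post: Post) -> str:
    score = score_abuse(post)
    if score >= AUTO_ACTION_THRESHOLD:
        return "auto_action"      # actioned directly on model output
    if score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"     # flagged for a trained reviewer
    return "no_action"
```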

HEURISTIC MODELS

Heuristics are common patterns of text or keywords that may be typical of a certain category of violations, and they are typically utilised to enable X to react quickly to new forms of violations that emerge on the platform. These heuristics are used to flag content for review by human agents and to prioritise the order in which such content is reviewed; pieces of content detected by heuristics may also get reviewed by human content reviewers before an action is taken.
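As an illustration only, the sketch below shows the general shape of such a system: hypothetical regex rules (not X's actual patterns or categories) flag content, and matched items are ordered for review by priority:

```python
# Illustrative sketch of a keyword/pattern heuristic: each rule maps a
# regex to a violation category and a review priority, and matched posts
# are sorted so higher-priority flags are reviewed first. The patterns
# and categories here are placeholders, not X's actual rules.
import re
from typing import NamedTuple

class Heuristic(NamedTuple):
    pattern: re.Pattern
    category: str
    priority: int  # lower number = reviewed sooner

HEURISTICS = [
    Heuristic(re.compile(r"buy followers", re.I), "platform_manipulation", 2),
    Heuristic(re.compile(r"example-scam-phrase", re.I), "financial_scam", 1),
]

def flag(text: str) -> list[Heuristic]:
    """Return all heuristics a piece of text trips."""
    return [h for h in HEURISTICS if h.pattern.search(text)]

def review_order(texts: list[str]) -> list[tuple[int, str]]:
    """Queue flagged texts by the best (lowest) priority they matched."""
    flagged = [(min(h.priority for h in hits), t)
               for t in texts if (hits := flag(t))]
    return sorted(flagged)
```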

TESTING, EVALUATION, AND ITERATION

Automated enforcements under the X Rules and policies undergo rigorous testing before being applied to the live product. Both machine learning and heuristic models are trained and/or validated on thousands of data points and labels (e.g., violative or non-violative) generated by trained human content reviewers. For example, inputs to content-related models can include the text within the post itself, the images attached to the post, and other characteristics. Training data for the models comes from cases reviewed by our content moderators, from random samples, and from various other samples of content from the platform.

Once reviewers have confirmed that the detection meets an acceptable standard of accuracy, we consider the automation to be ready for launch. Once launched, automations are monitored dynamically for ongoing performance and health. If we detect anomalies in performance (for instance, significant spikes or dips against the volume we established during sizing, or significant changes in user complaint/overturn rates), our Engineering (including Data Science) and Policy teams revisit the automation to diagnose any potential problems and adjust the automations as appropriate.
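The sketch below illustrates the general idea of monitoring action volumes against a sizing baseline; the 3-sigma rule and the sample numbers are assumptions for illustration, not X's actual monitoring thresholds:

```python
# A simplified sketch of the kind of monitoring described above: compare
# each day's automated-action volume against the baseline established
# during sizing and flag large spikes or dips. The 3-sigma rule and the
# sample numbers are assumptions for illustration.
from statistics import mean, stdev

def find_anomalies(daily_volumes: list[int], baseline: list[int],
                   n_sigma: float = 3.0) -> list[int]:
    """Return indices of days whose volume deviates > n_sigma from baseline."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [i for i, v in enumerate(daily_volumes)
            if abs(v - mu) > n_sigma * sigma]

baseline = [1040, 980, 1010, 995, 1025, 990, 1005]   # volumes from sizing
observed = [1000, 1030, 2400, 985]                   # day 2 is a spike
print(find_anomalies(observed, baseline))            # -> [2]
```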

USE OF HUMAN MODERATION

Before any given algorithm is launched to the platform, we verify its detection of policy-violating content or behaviour by drawing a statistically significant test sample and performing item-by-item human review. Reviewers have expertise in the applicable policies and are trained by our Policy teams to ensure the reliability of their decisions. During this testing phase, we also calculate the expected volume of moderation actions a given automation is likely to perform in order to set a baseline against which we can monitor for anomalies in the future (called “sizing”). Human review helps us confirm that these automations achieve an acceptable level of precision, and sizing helps us understand what to expect once the automations are launched.
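As a hedged illustration of this launch check, the sketch below estimates precision from a human-reviewed sample using a simple normal-approximation confidence interval; the sample size and the 0.90 acceptance bar are assumptions, not X's actual standards:

```python
# Hedged sketch of the launch check described above: human reviewers label
# a random sample of the automation's detections, and precision is
# estimated with a normal-approximation confidence interval. Sample size
# and the acceptance bar are illustrative assumptions.
from math import sqrt

def precision_estimate(true_positives: int, sample_size: int,
                       z: float = 1.96) -> tuple[float, float, float]:
    """Point estimate and ~95% CI for precision from a reviewed sample."""
    p = true_positives / sample_size
    half_width = z * sqrt(p * (1 - p) / sample_size)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

p, lo, hi = precision_estimate(true_positives=470, sample_size=500)
print(f"precision ~ {p:.2f}, 95% CI [{lo:.3f}, {hi:.3f}]")
# Launch only if the lower bound clears the acceptance bar (assumed 0.90).
assert lo >= 0.90
```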

In addition, humans proactively conduct manual content reviews for potential policy violations. We conduct proactive sweeps for certain high-priority categories of potentially violative content both periodically and during major events, such as elections. Agents also proactively review content flagged by heuristic and machine learning models for potential violations of other policies, including our sensitive media, child sexual exploitation (CSE) and violent and hateful entities policies.

AUTOMATED MODERATION ACTIVITY EXAMPLES

The vast majority of accounts that are suspended for the promotion of terrorism or CSE are proactively flagged by a combination of technology and other purpose-built internal proprietary tools.

When we remove CSE content, we immediately report it to the National Center for Missing and Exploited Children (NCMEC). NCMEC makes reports available to the appropriate law enforcement agencies around the world to facilitate investigations and prosecutions.

Our current methods for surfacing potentially violative terrorist content for review include leveraging the shared industry hash database supported by the Global Internet Forum to Counter Terrorism (GIFCT), deploying a range of internal tools, and utilising industry hash-sharing technology (e.g., PhotoDNA), all prior to any reports being filed. We commit to continuing to invest in technology that improves our capability to detect and remove, for instance, terrorist and violent extremist content online, including the extension or development of digital fingerprinting and AI-based technology solutions. Our participation in multi-stakeholder communities, such as the Christchurch Call to Action, the Global Internet Forum to Counter Terrorism, and the EU Internet Forum (EUIF), helps us identify emerging trends in how terrorists and violent extremists use the Internet to promote their content and exploit online platforms.
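A simplified sketch of the hash-matching idea follows. Real systems such as PhotoDNA and the GIFCT database use perceptual hashes that survive re-encoding and cropping; this illustration substitutes exact SHA-256 digests purely to keep the example self-contained:

```python
# Simplified sketch of industry hash matching: known terrorist/violent
# extremist media is represented by digests in a shared database, and new
# uploads are checked against it before any user report. Real systems
# (e.g., PhotoDNA, GIFCT hashes) use perceptual hashes that tolerate
# re-encoding; this sketch uses exact SHA-256 digests for illustration.
import hashlib

shared_hash_db = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}  # hypothetical entries contributed by industry partners

def digest(media_bytes: bytes) -> str:
    return hashlib.sha256(media_bytes).hexdigest()

def matches_known_content(media_bytes: bytes) -> bool:
    """True if the upload's digest is in the shared database."""
    return digest(media_bytes) in shared_hash_db

print(matches_known_content(b"test"))  # -> True: sha256(b"test") is seeded above
```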

You can learn more about our commitment to eradicating CSE and terrorist content, and the actions we’ve taken here. Our continued investment in proprietary technology is steadily reducing the burden on people to report this content to us.

SCALED INVESTIGATIONS

These moderation activities are supplemented by scaled human investigations into the tactics, techniques and procedures that bad actors use to circumvent our rules and policies. These investigations may leverage signals and behaviours identifiable on our platform, as well as off-platform information, to identify large-scale and/or technically sophisticated evasions of our detection and enforcement activities. For example, through these investigations, we are able to detect coordinated activity intended to manipulate our platform and artificially amplify the reach of certain accounts or their content.  
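One simple coordination signal, sketched below under stated assumptions (the text-fingerprinting scheme and the account threshold are illustrative, not X's actual detection logic), is many distinct accounts posting near-identical text:

```python
# Illustrative sketch (not X's actual tooling) of one coordination signal:
# many distinct accounts posting near-identical text. The fingerprinting
# (normalised-text hash) and the threshold are assumptions.
from collections import defaultdict
import hashlib

def fingerprint(text: str) -> str:
    """Hash of whitespace/case-normalised text, so trivial edits collide."""
    return hashlib.sha256(" ".join(text.lower().split()).encode()).hexdigest()

def coordinated_clusters(posts: list[tuple[str, str]],
                         min_accounts: int = 5) -> list[set[str]]:
    """posts: (account_id, text). Return account sets sharing one fingerprint."""
    by_fp = defaultdict(set)
    for account_id, text in posts:
        by_fp[fingerprint(text)].add(account_id)
    return [accounts for accounts in by_fp.values()
            if len(accounts) >= min_accounts]
```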

CLOSING STATEMENT ON CONTENT MODERATION ACTIVITIES

Our content moderation systems are designed and tailored to mitigate systemic risks without unnecessarily restricting the use of our service or fundamental rights, especially freedom of expression. Content moderation activities are anchored in principled policies and leverage a diverse set of interventions to ensure that our actions are reasonable, proportionate, and effective. Our content moderation systems blend automated and human review, paired with a robust appeals system that enables our users to quickly raise potential moderation anomalies or mistakes.

Enforcement Activity Summary Data

RESTRICTED REACH LABELS DATA: FREEDOM OF SPEECH, NOT REACH

Our mission at X is to promote and protect the public conversation. We believe X users have the right to express their opinion and ideas without fear of censorship. We also believe it is our responsibility to keep users on our platform safe from content that violates our rules.

These beliefs are the foundation of Freedom of Speech, Not Reach, our freedom-of-expression-based enforcement philosophy, which means, where appropriate, restricting the reach of posts that are classified as potentially meeting our threshold for enforcement under our Hateful Conduct, Abuse & Harassment, Civic Integrity, and Violent Speech policies. Please note these policies carry a range of enforcement actions, such as removal, suspension, and restricted reach.

Restricting the reach of posts, also known as visibility filtering, is one of our existing enforcement actions that allows us to move beyond the binary “leave up versus take down” approach to content moderation. Posts with these labels will be made less discoverable on the platform. This can include:

Additionally, these labels bring transparency to this enforcement action by displaying which policy the post potentially violates to both the author and other users on X, and communicating that the post’s visibility is limited. Authors can submit a complaint on the label if they think we incorrectly limited their post’s visibility.

RESTRICTED REACH LABELS DATA

Restricted Reach Labels - Aug 28 to Oct 20

| Country | Own Initiative, Automated Means: Hateful Conduct | Own Initiative, Manual Review: Abuse & Harassment | Own Initiative, Manual Review: Hateful Conduct | Own Initiative, Manual Review: Violent Speech | User Report, Manual Review: Abuse & Harassment | User Report, Manual Review: Hateful Conduct | User Report, Manual Review: Violent Speech | Grand Total |
|---|---:|---:|---:|---:|---:|---:|---:|---:|
| Austria | 1,118 | 1 | 31 | 0 | 87 | 95 | 13 | 1,345 |
| Belgium | 2,137 | 1 | 10 | 0 | 244 | 251 | 35 | 2,678 |
| Bulgaria | 653 | 0 | 11 | 1 | 73 | 52 | 12 | 802 |
| Croatia | 759 | 1 | 17 | 0 | 42 | 65 | 9 | 893 |
| Cyprus | 251 | 0 | 14 | 0 | 24 | 28 | 6 | 323 |
| Czechia | 1,333 | 0 | 15 | 0 | 159 | 144 | 22 | 1,673 |
| Denmark | 1,311 | 0 | 61 | 0 | 99 | 90 | 23 | 1,584 |
| Estonia | 281 | 0 | 1 | 0 | 38 | 17 | 0 | 337 |
| Finland | 1,389 | 0 | 22 | 0 | 75 | 139 | 19 | 1,644 |
| France | 11,279 | 1 | 56 | 1 | 2,093 | 2,313 | 321 | 16,064 |
| Germany | 9,913 | 10 | 127 | 2 | 1,069 | 1,046 | 261 | 12,428 |
| Greece | 1,015 | 0 | 14 | 0 | 194 | 97 | 22 | 1,342 |
| Hungary | 707 | 0 | 4 | 1 | 87 | 115 | 8 | 922 |
| Ireland | 3,760 | 0 | 62 | 1 | 223 | 248 | 29 | 4,323 |
| Italy | 2,631 | 0 | 26 | 0 | 827 | 526 | 79 | 4,089 |
| Latvia | 320 | 0 | 2 | 0 | 53 | 54 | 4 | 433 |
| Lithuania | 461 | 0 | 2 | 1 | 40 | 85 | 15 | 604 |
| Luxembourg | 190 | 0 | 2 | 0 | 16 | 18 | 1 | 227 |
| Malta | 121 | 0 | 0 | 0 | 1 | 9 | 3 | 134 |
| Netherlands | 6,711 | 2 | 169 | 2 | 672 | 727 | 173 | 8,456 |
| Poland | 5,263 | 6 | 407 | 4 | 866 | 803 | 79 | 7,428 |
| Portugal | 1,528 | 0 | 12 | 0 | 299 | 300 | 56 | 2,195 |
| Romania | 1,944 | 1 | 30 | 0 | 221 | 145 | 22 | 2,363 |
| Slovakia | 417 | 0 | 2 | 0 | 29 | 32 | 2 | 482 |
| Slovenia | 423 | 1 | 10 | 0 | 19 | 38 | 9 | 500 |
| Spain | 9,706 | 1 | 39 | 0 | 1,868 | 1,429 | 131 | 13,174 |
| Sweden | 3,827 | 2 | 82 | 2 | 204 | 301 | 69 | 4,487 |
| Grand Total | 69,448 | 27 | 1,228 | 15 | 9,622 | 9,167 | 1,423 | 90,930 |

Important Note: The table lists actions of visibility filtering on content potentially violative of our rules in accordance with our Freedom of Speech, Not Reach enforcement philosophy. We did not apply any visibility filtering based on illegal content.

ACTIONS TAKEN ON CONTENT FOR TIUC TERMS OF SERVICE AND RULES VIOLATIONS

TIUC Terms of Service and Rules Content Removal Actions - Aug 28 to Oct 20*

Detection Method

Enforcement Process

Policy

Austria

Belgium

Bulgaria

Croatia

Cyprus

Czechia

Denmark

Estonia

Finland

France

Germany

Greece

Hungary

Ireland

Italy

Latvia

Lithuania

Luxembourg

Malta

Netherlands

Poland

Portugal

Romania

Slovakia

Slovenia

Spain

Sweden

Grand Total

User Report

Manual Review

Abuse & Harassment

78

197

83

54

27

123

72

17

85

4,291

1,088

142

49

89

560

348

197

32

4

1,077

976

226

730

17

21

1,214

148

11,945

Child Sexual Exploitation

1

3

1

1

0

0

0

1

0

10

16

0

0

0

2

1

1

0

1

5

15

0

4

0

0

7

0

69

Counterfeit

0

0

0

0

0

0

2

0

0

15

1

0

0

0

1

0

0

0

0

9

0

0

0

0

0

15

0

43

Deceased Individuals

1

4

0

0

0

0

0

0

0

13

9

1

0

1

6

0

0

0

0

2

3

0

2

0

0

6

2

50

Hateful Conduct

3

15

0

1

0

1

0

0

4

130

34

5

2

9

14

0

0

0

0

11

33

3

11

2

0

22

6

306

Illegal or Certain Regulated Goods and Services

2

6

50

22

1

20

1

2

17

1,420

225

13

3

7

94

101

139

0

0

342

214

8

59

8

0

184

2

2,940

Misleading & Deceptive Identities

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

Non-Consensual Nudity

0

16

31

1

0

10

2

2

5

132

134

49

7

10

34

6

3

1

0

232

128

1

93

0

1

144

30

1,072

Perpetrators of Violent Attacks

1

0

0

0

0

0

0

0

0

4

0

0

0

1

0

0

0

0

0

4

0

1

0

0

0

2

0

13

Private Information & Media

2

6

1

0

0

3

1

0

1

90

39

0

0

7

8

10

0

0

8

25

26

6

1

0

1

36

2

273

Sensitive Media

26

44

7

5

5

19

6

7

18

360

301

23

16

121

119

8

5

3

0

123

89

38

17

9

4

256

58

1,687

Suicide & Self Harm

12

28

8

5

2

12

22

4

17

177

189

20

10

20

124

5

10

7

0

122

171

117

21

3

5

215

99

1,425

Synthetic & Manipulated Media

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

1

0

0

0

0

0

0

0

1

Violent Speech

73

170

25

31

11

72

70

9

51

1,669

1,143

65

55

93

455

26

29

9

2

582

487

329

80

21

20

529

202

6,308

Own Initiative

Automated Means

Abuse & Harassment

3

5

0

0

2

1

3

0

0

19

47

1

1

5

5

0

0

0

0

7

2

3

0

0

0

6

3

113

Hateful Conduct

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

9

0

0

0

2

0

11

Non-Consensual Nudity

2

1

0

0

0

108

19

0

2

201

26

0

44

103

30

26

0

1

0

7

3

4

7

0

0

0

27

611

Other

1

1

0

0

0

0

0

0

0

3

3

0

0

1

0

0

0

0

0

1

0

0

0

0

0

0

1

11

Perpetrators of Violent Attacks

0

0

0

0

0

0

0

0

0

0

1

1

0

0

2

0

0

0

0

2

0

0

1

0

0

0

1

8

Private Information & Media

0

1

0

0

0

1

0

0

0

2

1

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

1

6

Sensitive Media

0

0

0

0

0

0

0

0

0

1

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

1

Violent Speech

224

564

167

109

44

202

260

51

204

6,041

1,986

216

155

592

575

54

100

48

27

1,160

825

346

371

70

61

4,118

597

19,167

Manual Review

Abuse & Harassment

3

2

3

1

0

0

2

0

0

3

7

1

0

0

0

0

3

0

0

0

15

1

7

0

0

1

1

50

Hateful Conduct

0

0

0

0

0

0

0

0

0

0

1

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

1

Illegal or Certain Regulated Goods and Services

0

0

1

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

1

0

3

0

0

0

0

5

Non-Consensual Nudity

0

0

0

0

0

0

0

0

0

0

0

0

0

0

1

0

0

0

0

1

1

0

0

0

0

0

0

3

Private Information & Media

3

1

0

2

0

4

8

0

0

9

14

0

6

5

2

16

0

0

1

4

3

1

2

0

0

1

1

83

Sensitive Media

140

336

42

41

41

182

106

13

112

1,683

1,866

63

24

96

330

3

69

30

2

1,107

291

70

127

15

51

977

421

8,238

Suicide & Self Harm

5

0

0

1

0

4

2

0

3

8

18

0

1

2

5

2

2

0

0

9

20

4

2

1

0

7

4

100

Violent Speech

3

3

1

0

0

0

0

0

2

7

11

1

1

2

2

0

4

0

0

17

12

2

2

0

0

1

3

74

Grand Total

583

1403

420

274

133

762

576

106

521

16,288

7160

601

374

1,164

2,369

606

562

131

45

4,850

3,315

1,169

1,540

146

164

7,743

1,609

54,614

ACTIONS TAKEN ON ACCOUNTS FOR TIUC TERMS OF SERVICE AND RULES VIOLATIONS

TIUC Terms of Service and Rules Account Suspensions - Aug 28 to Oct 20

Detection Method

Enforcement Process

Policy

Austria

Belgium

Bulgaria

Croatia

Cyprus

Czechia

Denmark

Estonia

Finland

France

Germany

Greece

Hungary

Ireland

Italy

Latvia

Lithuania

Luxembourg

Malta

Netherlands

Poland

Portugal

Romania

Slovakia

Slovenia

Spain

Sweden

Grand Total

User Report

Manual Review

Abuse & Harassment

86

70

47

15

32

73

81

8

36

2,475

726

38

27

63

199

312

266

12

3

887

669

68

162

8

3

486

43

6,895

Ban Evasion

1

0

1

0

0

1

2

0

12

26

14

2

0

1

1

3

2

0

1

6

3

1

0

0

1

5

4

87

Child Sexual Exploitation

194

237

372

122

55

661

138

36

121

4,157

2,150

164

270

1,823

695

557

581

84

63

2,387

2,543

634

1,768

258

32

875

1,566

22,543

Copyright Repeated Infringer 

2

7

2

1

1

6

5

0

4

116

50

3

2

9

41

5

3

1

0

22

41

20

5

1

1

91

8

447

Counterfeit

4

1

3

0

0

5

1

0

3

60

21

1

0

0

5

8

7

1

1

30

13

0

13

0

0

25

2

204

Deceased Individuals

0

0

0

0

0

0

1

0

0

0

0

0

0

0

0

0

0

0

0

0

1

0

1

0

0

0

0

3

Distribution of Hacked Materials

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

1

0

0

0

0

0

0

1

Financial Scam

3

0

7

1

2

2

0

0

2

48

24

3

0

1

13

1

0

1

0

4

20

0

5

0

0

9

4

150

Hateful Conduct

1

12

1

4

0

5

4

2

7

79

59

4

3

5

14

8

2

0

0

22

41

10

8

1

2

38

11

343

Illegal or Certain Regulated Goods and Services

27

24

54

21

9

56

34

7

24

1,241

416

27

26

43

95

167

253

6

2

341

479

21

89

6

6

187

9

3,670

Misleading & Deceptive Identities

13

29

24

11

9

38

14

4

7

274

213

28

16

112

106

12

16

9

5

164

180

38

70

10

3

208

47

1,660

Non-Consensual Nudity

5

13

9

7

2

8

5

1

4

73

108

16

8

16

26

3

11

0

0

70

76

6

29

2

3

57

13

571

Other

30

27

12

5

5

29

20

2

8

4,744

634

14

17

31

88

67

55

7

3

160

263

23

65

2

0

98

36

6,445

Perpetrators of Violent Attacks

0

1

0

1

0

3

0

3

6

13

12

2

1

6

8

0

0

1

1

6

16

2

1

0

0

18

4

105

Platform Manipulation & Spam

1,861

2,299

2,423

1,216

430

3,623

1,233

194

729

139,102

28,537

1,806

2,671

5,080

209,019

2,494

6,304

923

319

7,201

16,176

1,954

2,592

675

263

7,521

2,483

449,128

Private Information & Media

0

0

1

1

0

0

1

0

0

3

7

1

0

2

2

1

0

0

0

4

3

1

2

0

0

5

0

34

Sensitive Media

2

0

2

0

0

2

1

1

0

15

27

2

3

1

5

1

0

0

0

8

6

1

3

0

0

4

1

85

Suicide & Self Harm

2

4

0

0

0

2

5

0

4

18

17

2

1

3

10

0

1

0

0

8

12

3

4

1

0

13

2

112

Trademark

0

0

0

0

1

0

0

0

0

3

4

1

1

0

2

0

1

0

0

0

2

0

2

0

0

3

0

20

Username Squatting

0

1

0

0

0

2

2

0

0

5

4

1

1

0

3

0

0

0

0

3

0

1

2

1

0

4

2

32

Violent & Hateful Entities

29

38

8

3

18

17

39

4

38

350

537

108

12

12

402

4

5

42

2

325

166

20

42

1

0

110

170

2,502

Violent Speech

228

441

104

100

23

222

312

36

204

4,375

2,859

192

151

360

1,109

58

114

45

20

1,403

1,678

655

258

56

65

1,605

592

17,265

Own Initiative

Automated Means

Child Sexual Exploitation

351

559

372

135

140

698

253

58

265

5,708

8,985

205

432

793

1,327

365

542

591

102

5,989

5,108

627

1,118

256

61

1,711

1,083

37,834

Financial Scam

1

3

7

1

0

6

1

0

0

41

31

0

2

9

7

4

9

3

0

104

30

8

27

1

0

8

2

305

Illegal or Certain Regulated Goods and Services

0

2

1

0

1

3

4

0

0

53

30

0

0

4

3

1

1

5

0

43

41

2

18

0

0

2

0

214

Other

3

7

6

4

0

3

1

0

1

51

26

2

6

0

18

9

4

1

1

31

40

2

10

0

1

32

2

261

Perpetrators of Violent Attacks

0

0

0

1

0

1

6

0

4

8

5

0

0

1

3

0

0

0

0

4

10

7

3

1

0

14

0

68

Platform Manipulation & Spam

11,954

26,623

39,771

15,529

2,465

33,141

33,991

3,790

11,395

261,976

186,543

24,423

34,708

18,735

228,091

20,273

30,171

3,583

1,665

76,471

138,144

32,099

53,887

8,391

5,737

121,340

23,487

1,448,383

Violent & Hateful Entities

8

9

1

1

0

3

5

0

5

73

96

2

1

1

11

1

2

1

0

70

33

3

17

0

1

21

11

376

Manual Review

Abuse & Harassment

1

0

0

1

0

2

0

0

0

17

9

2

0

3

1

0

0

1

0

1

2

0

1

0

0

8

0

49

Grand Total

14,806

30,407

43,228

17,180

3,193

38,612

36,159

4,146

12,879

425,104

232,144

27,049

38,359

27,114

441,304

24,354

38,350

5,317

2,188

95,764

165,797

36,206

60,202

9,671

6,179

134,498

29,582

1,999,792

Important Notes about Action based on TIUC Terms of Service and Rules Violations:

  1. The “Other” category refers to cases of workflow exceptions and tooling inconsistencies that prevent further clarification of which TIUC Terms of Service and Rules policy was violated.
  2. User reports of illegal content which have been actioned under TIUC Terms of Service and Rules are displayed in the table "Actions Taken on Illegal Content".

*A data extraction limitation is impacting the availability of data ranging from Aug 28 to Sept 23. See the table "TIUC Terms of Service and Rules Content Removal Actions - Sep 5 to Sep 23” in the Appendix.

Orders received from Member States’ authorities including orders issued in accordance with Articles 9 (Removal Orders) and 10 (Information Requests)

REMOVAL ORDERS, Art. 9 DSA

Removal Orders Received - Aug 28 to Oct 20

| Illegal Content Category | France | Italy | Spain | Grand Total |
|---|---:|---:|---:|---:|
| Unsafe and/or Illegal Products | 1 | 0 | 0 | 1 |
| Illegal or Harmful Speech | 0 | 4 | 1 | 5 |
| Grand Total | 1 | 4 | 1 | 6 |

Removal Orders Median Handle Time (Hours) - Aug 28 to Oct 20

| Illegal Content Category | France | Italy | Spain |
|---|---:|---:|---:|
| Unsafe and/or Illegal Products | 32 | | 124 |
| Illegal or Harmful Speech | | 73 | |

Removal Orders Median Time to Acknowledge Receipt - Aug 28 to Oct 20

X provides an automated acknowledgement of receipt of removal orders submitted by law enforcement through our Legal Request submission portal. As a consequence of this immediate acknowledgement of receipt, the median time is zero.

Important Notes about Removal Orders:

  1. To improve clarity, we've omitted countries and violation types with no legal requests from the tables above.
  2. The table “Removal Orders Median Handle Time” shows the category which we considered to fit best and under which we handled the order. This category might deviate from the information provided by the authority when submitting the order via the X online submission platform.
  3. In the cases from France and Spain, we asked the submitting authority to fulfil Article 9 information requirements but did not receive responses in the reporting period.

INFORMATION REQUESTS, Art. 10 DSA

Information Requests Received - Aug 28 to Oct 20

Content Category

Austria

Belgium

Finland

France

Germany

Greece

Hungary

Ireland

Italy

Malta

Netherlands

Poland

Portugal

Spain

Grand Total

Data Protection and Privacy Violations

2

1

1

1

5

Illegal or Harmful Speech

4

3

1

43

623

4

1

6

9

6

700

Intellectual Property Infringements

2

1

3

Negative Effects on Civic Discourse or Elections

2

1

3

Non-Consensual Behaviour

3

23

1

1

28

Not Specified

1

4

7

1

3

4

20

Other

2

8

4

1

1

16

Pornography or Sexualized Content

13

13

Protection of Minors

7

1

31

2

1

1

43

Risk for Public Security

19

654

16

1

17

1

1

7

716

Scams and/or Fraud

1

1

2

7

1

1

1

1

15

Self-Harm

1

1

Unsafe and/or Illegal Products

4

2

6

Violence

1

71

61

1

2

6

8

7

1

1

159

Grand Total

6

32

2

787

795

9

1

4

33

1

9

22

3

24

1,728

Information Request Median Time to Acknowledge Receipt - Aug 28 to Oct 20

X provides an automated acknowledgement of receipt of information requests submitted by law enforcement through our Legal Request submission portal. As a consequence of this immediate acknowledgement of receipt, the median time is zero.

Information Request Median Handle Time (Hours) - Aug 28 to Oct 20

Content Category

Austria

Belgium

Finland

France

Germany

Greece

Hungary

Ireland

Italy

Malta

Netherlands

Poland

Portugal

Spain

Data Protection and Privacy Violations

152

141

146

173

Illegal or Harmful Speech

146

42

146

138

127

64

73

164

78

129

Intellectual Property Infringements

114

175

Negative Effects on Civic Discourse or Elections

183

20

Non-Consensual Behaviour

21

117

124

170

Not Specified

219

35

24

5

1

2

Other

170

152

149

2

165

Pornography or Sexualized Content

56

Protection of Minors

2

49

4

2

26

209

Risk for Public Security

5

8

47

18

149

43

126

74

Scams and/or Fraud

30

172

169

120

120

20

124

73

Self-Harm

194

Unsafe and/or Illegal Products

146

126

Violence

154

132

119

51

19

147

19

48

241

190

Important Notes about Information Requests:

  1. The content category for each request is determined by the information law enforcement provides while submitting such requests through the X online submission platform.
  2. The median handling time is the time between receiving the order and either: 1) disclosing information to law enforcement if the order is valid; or 2) pushing back due to legal issues. It does not include the extra time where X pushes back due to legal issues, later receives a valid order, and eventually makes a disclosure (a minimal sketch of this computation follows these notes).
  3. To improve clarity, we've omitted countries and violation types with zero legal requests from the tables above.
  4. The “Not Specified” category shows cases where the illegal content category could not be determined based on the information law enforcement provided during the submission process.
  5. The “Other” category here shows cases where law enforcement selects “Cybercrime” as the content category during the case submission process without providing more details to determine a more specific content category.
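For illustration, here is a minimal sketch of the median-handle-time computation described in note 2, using hypothetical timestamps:

```python
# Minimal sketch of the median-handle-time computation described in note 2:
# handle time runs from receipt to first resolution (disclosure or
# pushback); later rounds after a pushback are not counted. The timestamps
# below are illustrative, not real request data.
from datetime import datetime
from statistics import median

requests = [  # (received, first_resolved) -- hypothetical examples
    (datetime(2023, 9, 1, 8, 0), datetime(2023, 9, 3, 8, 0)),   # 48 h
    (datetime(2023, 9, 2, 10, 0), datetime(2023, 9, 2, 16, 0)), # 6 h
    (datetime(2023, 9, 5, 0, 0), datetime(2023, 9, 10, 0, 0)),  # 120 h
]

handle_hours = [(done - received).total_seconds() / 3600
                for received, done in requests]
print(median(handle_hours))  # -> 48.0
```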

Reports submitted in accordance with Article 16 (Illegal Content)

ACTIONS TAKEN ON ILLEGAL CONTENT:

ACTIONS TAKEN ON ACCOUNTS FOR POSTING ILLEGAL CONTENT: We suspended accounts in response to 855 reports of Intellectual Property Infringements. This was the only type of local-law violation that resulted in account suspension, because many other types of illegal behaviour are already addressed by our own policies (for example, accounts that post CSE are suspended under the X Rules). On our own initiative, we withheld 1 account for breaching local laws connected to unsafe and/or illegal products.

We also withheld 15 accounts, each in a single Member State, for the provision of illegal content.

REPORTS OF ILLEGAL CONTENT

Illegal Content Reports Received - Aug 28 to Oct 20

Content Category

Austria

Belgium

Bulgaria

Croatia

Cyprus

Czechia

Denmark

Estonia

EU

Finland

France

Germany

Greece

Hungary

Ireland

Italy

Latvia

Lithuania

Luxembourg

Malta

Netherlands

Poland

Portugal

Romania

Slovakia

Slovenia

Spain

Sweden

Grand Total

Animal Welfare

14

4

1

2

4

2

4

1

88

4

96

58

3

1

11

10

1

1

1

1

15

16

3

2

2

2

56

3

392

Data Protection & Privacy Violations

18

50

7

5

6

16

21

6

500

13

727

592

45

9

90

98

5

0

1

0

164

94

60

17

3

10

703

27

3,269

Illegal or Harmful Speech

397

448

46

34

32

205

175

46

5,258

133

9,499

11,265

198

32

335

1203

60

41

35

7

995

893

626

96

27

26

3088

203

35,006

Intellectual Property Infringements

16

19

4

14

17

8

14

2

0

35

737

872

19

7

64

185

7

29

5

4

701

601

262

52

0

0

835

21

4,531

Negative Effects on Civic Discourse or Elections

27

33

6

1

3

25

12

7

475

14

314

934

15

7

26

132

3

5

2

1

219

514

24

16

14

3

127

8

2,940

Non-Consensual Behaviour

15

16

2

4

4

2

11

4

179

9

196

143

7

15

35

36

0

2

0

0

34

17

16

2

1

0

186

22

943

Pornography or Sexualized Content

38

50

9

3

4

25

23

1

468

10

865

641

44

109

55

145

5

3

2

2

113

107

67

48

8

1

324

26

3,158

Protection of Minors

43

49

11

7

3

20

24

3

462

22

672

564

24

7

65

57

12

0

1

0

107

78

17

8

2

13

305

17

2,550

Risk for Public Security

39

105

8

4

4

46

13

8

414

24

981

950

17

8

22

59

9

5

1

0

120

111

35

9

3

4

181

22

3,163

Scams and/or Fraud

96

140

12

23

33

83

90

20

833

48

1292

749

46

42

233

356

8

65

34

1

520

300

177

79

7

9

743

70

6,013

Scope of Platform Service

3

2

1

0

0

0

0

0

53

1

31

35

0

0

3

9

4

0

0

0

10

4

8

0

0

0

28

0

189

Self-Harm

1

4

0

1

2

4

5

0

74

2

41

72

0

0

4

8

1

0

1

0

7

11

4

2

0

0

56

6

305

Unsafe and Illegal Products

5

20

0

0

2

6

2

2

126

5

600

179

4

0

21

18

8

3

0

1

55

21

19

5

1

4

105

17

1224

Violence

57

177

12

4

4

45

41

7

1095

47

2274

1448

37

10

78

219

8

3

5

11

182

135

94

16

5

6

743

64

6,770

Grand Total

769

1117

119

102

118

487

435

107

10,025

367

18,325

18,502

459

247

1042

2,536

131

157

88

28

3,242

2,902

1412

352

73

78

7,480

506

71,206

REPORTS RESOLVED BY ACTIONS TAKEN ON ILLEGAL CONTENT

Actions Taken on Illegal Content - Aug 28 to Oct 20

Enforcement Process

Action Type

Reason Code

Austria

Belgium

Bulgaria

Croatia

Cyprus

Czechia

Denmark

Estonia

EU

Finland

France

Germany

Greece

Hungary

Ireland

Italy

Latvia

Lithuania

Luxembourg

Malta

Netherlands

Poland

Portugal

Romania

Slovakia

Slovenia

Spain

Sweden

Grand Total

Automated Means

Global content deletion based on a violation of TIUC Terms of Service and Rules

Illegal or Harmful Speech

1

0

0

0

0

0

0

0

2

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

3

Non-Consensual Behaviour

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

8

0

8

Self-Harm

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

2

0

2

Violence

0

0

0

0

0

0

0

0

1

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

1

Country withheld Content

Data Protection & Privacy Violations

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

1

0

1

Illegal or Harmful Speech

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

1

0

0

0

0

0

0

0

1

No Violation Found

Animal Welfare

0

0

0

0

0

0

0

0

4

0

0

0

0

0

0

0

0

0

0

0

2

0

0

1

0

0

7

0

14

Data Protection & Privacy Violations

0

0

0

0

0

0

0

0

1

0

0

0

0

0

0

1

0

0

0

0

1

1

1

0

0

0

3

0

8

Illegal or Harmful Speech

5

2

0

0

0

0

0

0

202

0

0

0

0

0

3

3

0

0

0

0

1

0

1

0

0

0

13

0

230

Non-Consensual Behaviour

0

0

1

0

0

0

0

0

1

0

0

0

0

0

0

0

0

0

0

0

0

0

1

0

0

0

0

0

3

Pornography or Sexualized Content

0

0

0

0

0

0

1

0

2

0

0

0

1

5

0

2

0

0

0

0

0

0

0

0

0

0

7

1

19

Protection of Minors

1

0

0

0

0

0

0

0

9

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

8

0

18

Risk for Public Security

0

0

0

0

0

0

0

0

2

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

2

Scams and Fraud

8

5

1

0

2

0

4

1

33

3

0

0

0

1

5

4

0

0

3

0

35

3

1

12

0

0

15

5

141

Scope of Platform Service

0

0

0

0

0

0

0

0

7

0

0

0

0

0

0

0

0

0

0

0

0

1

0

0

0

0

1

0

9

Self-Harm

0

1

0

0

0

0

0

0

1

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

1

0

3

Unsafe and Illegal Products

0

0

0

0

1

0

0

0

1

0

0

0

0

0

0

0

0

0

0

0

3

0

0

0

0

0

1

0

6

Violence

0

1

0

0

0

0

2

0

9

0

0

0

0

0

1

1

0

0

0

0

1

3

0

0

0

0

10

0

28

Manual Closure

Global content deletion based on TIUC Terms of Service and Rules

Animal Welfare

0

0

0

0

0

0

0

0

14

0

11

8

0

0

0

0

0

0

0

0

3

1

0

0

1

0

0

0

38

Data Protection & Privacy Violations

1

2

0

0

0

3

3

0

15

1

15

33

4

0

3

4

0

0

0

0

5

7

7

1

0

1

15

3

123

Illegal or Harmful Speech

26

14

2

4

3

10

5

1

231

7

440

1,270

9

0

10

37

2

5

0

0

40

62

41

6

0

10

73

29

2,337

Negative Effects on Civic Discourse or Elections

0

2

0

0

0

0

0

0

2

0

3

8

0

1

2

0

0

0

0

0

0

1

0

0

0

0

0

1

20

Non-Consensual Behaviour

1

1

0

0

0

0

0

0

35

0

13

14

1

0

1

0

0

0

0

0

1

1

0

0

0

0

13

0

81

Pornography or Sexualized Content

16

1

0

1

0

3

8

1

80

4

55

108

4

0

6

11

1

0

0

0

19

13

4

9

1

0

30

6

381

Protection of Minors

5

8

3

1

0

3

5

1

211

12

152

308

6

2

17

5

1

0

0

0

51

23

5

3

0

1

98

7

928

Risk for Public Security

0

3

0

1

0

1

0

0

28

0

56

87

2

0

3

2

0

0

0

0

5

5

6

0

0

0

33

5

237

Scams and Fraud

2

0

0

0

0

1

0

0

12

0

3

1

0

0

0

0

0

0

0

0

27

1

0

0

0

0

2

0

49

Scope of Platform Service

0

0

0

0

0

0

0

0

4

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

4

Self-Harm

0

0

0

0

0

0

1

0

8

0

2

5

0

0

1

0

1

0

0

0

0

3

1

0

0

0

12

1

35

Unsafe and Illegal Products

1

0

0

0

0

0

1

0

3

1

69

19

0

0

7

0

0

0

0

0

3

0

0

1

0

0

0

2

107

Violence

11

7

0

0

1

4

3

0

120

9

215

192

4

1

8

26

0

1

0

0

34

25

12

2

0

1

48

17

741

Temporary suspension and global content deletion based on TIUC Terms of Service and Rules

Data Protection & Privacy Violations

0

0

0

0

0

0

1

0

0

0

0

1

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

2

Illegal or Harmful Speech

0

0

0

0

0

0

0

0

0

0

16

2

0

0

0

0

0

0

0

0

2

0

0

0

0

0

0

0

20

Pornography or Sexualized Content

0

0

0

0

0

0

0

0

0

0

4

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

4

Protection of Minors

0

0

0

0

0

0

0

0

0

0

1

1

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

2

Risk for Public Security

0

0

0

0

0

0

0

0

0

0

1

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

1

Scams and Fraud

0

0

0

0

0

0

0

0

0

0

2

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

2

Violence

0

0

0

0

0

0

0

0

0

0

28

1

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

29

Offer of help in case of self-harm and suicide concern based on TIUC Terms of Service and Rules

Protection of minors

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

1

0

1

Self-Harm

0

0

0

0

0

0

0

0

5

0

1

3

0

0

0

0

0

0

0

0

0

0

0

0

0

0

2

0

11

Content removed globally

Animal Welfare

0

0

0

0

0

0

0

0

0

0

0

0

0

0

1

0

0

0

0

0

0

0

0

0

0

0

0

0

1

Data Protection & Privacy Violations

0

0

0

0

1

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

1

Illegal or Harmful Speech

0

2

0

0

3

0

2

0

0

0

1

5

1

0

0

1

0

0

0

0

0

0

0

0

0

0

3

0

18

Non-Consensual Behaviour

0

3

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

3

Pornography or Sexualized Content

0

1

0

0

0

0

0

0

2

0

5

3

0

1

0

2

0

0

0

0

5

0

0

6

0

0

1

0

26

Protection of Minors

0

5

0

0

0

2

0

0

7

0

0

8

0

0

0

0

0

0

0

0

0

0

0

0

0

0

1

0

23

Risk for Public Security

0

0

0

0

0

0

0

0

6

0

3

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

9

Scams and Fraud

0

0

0

0

0

0

0

0

1

0

7

0

0

0

0

0

0

0

0

0

4

0

0

0

0

0

0

0

12

Self-Harm

0

0

0

0

0

0

0

0

0

0

0

1

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

1

Unsafe and Illegal Products

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

5

0

0

0

0

0

5

Country withheld Account

Illegal or Harmful Speech

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

2

0

0

0

0

0

0

0

0

0

0

0

0

2

Pornography or Sexualized Content

0

0

0

0

0

0

0

0

0

0

0

8

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

8

Scams and fraud

0

0

0

0

0

0

0

0

0

0

1

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

1

Violence

0

0

0

0

0

0

0

0

0

0

4

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

4

Country withheld Content

Animal Welfare

1

2

0

0

0

0

0

0

5

0

2

6

0

0

0

1

0

0

0

0

0

2

0

0

0

0

8

0

27

Data Protection & Privacy Violations

0

3

1

0

1

3

2

0

30

1

61

91

2

5

24

12

0

0

0

0

24

7

13

5

0

0

46

2

333

Illegal or Harmful Speech

84

92

2

9

2

39

32

4

1,131

21

2,433

2,962

32

5

90

315

9

3

6

0

209

204

138

16

12

2

698

49

8,599

Negative Effects on Civic Discourse or Elections

5

2

0

0

0

1

0

0

26

0

8

88

1

0

2

7

0

0

1

0

6

30

6

0

0

0

7

0

190

Non-Consensual Behaviour

0

2

0

0

0

0

0

2

36

1

29

32

1

0

4

2

0

1

0

0

7

3

0

1

0

0

14

0

135

Pornography or Sexualized Content

6

9

6

2

1

3

4

0

104

3

116

230

3

4

10

23

3

2

1

0

14

24

15

8

1

0

100

6

698

Protection of Minors

1

6

0

2

0

1

2

0

19

2

21

50

2

0

10

5

0

0

1

0

3

5

5

3

0

0

15

3

156

Risk for Public Security

2

8

0

0

1

2

0

0

19

0

46

89

3

0

0

4

1

0

0

0

3

12

2

0

0

0

2

3

197

Scams and Fraud

6

2

0

0

0

6

5

1

29

2

34

54

1

0

3

5

0

2

0

0

16

24

2

3

0

2

16

3

216

Scope of Platform Service

1

0

0

0

0

0

0

0

0

0

0

4

0

0

0

2

0

0

0

0

0

0

1

0

0

0

0

0

8

Self-Harm

0

0

1

0

1

0

0

0

6

0

5

2

0

0

0

0

0

0

0

0

0

0

0

0

0

0

2

2

19

Unsafe and Illegal Products

0

2

0

0

0

3

0

0

16

1

277

33

0

0

2

0

1

0

0

0

13

6

0

1

0

0

8

2

365

Violence

5

6

3

0

0

8

8

0

81

15

518

212

5

1

11

27

3

0

1

0

19

19

18

3

1

0

68

23

1,055

Globally withheld content

Intellectual Property Infringements

4

10

0

0

0

6

5

0

0

31

561

167

17

2

17

84

2

17

0

3

101

450

38

22

0

0

283

9

1,829

Account Suspension

Intellectual Property Infringements

1

1

0

6

5

0

1

0

0

1

28

498

0

0

7

22

1

7

0

0

27

39

88

19

0

0

104

0

855

No Violation Found

Animal Welfare

13

3

1

2

4

2

4

1

61

4

56

45

4

1

9

9

1

1

1

1

10

15

3

1

1

2

42

3

300

Data Protection & Privacy Violations

17

47

5

5

4

10

15

6

452

11

470

443

39

2

63

82

5

0

3

0

135

78

49

12

3

9

632

22

2,619

Illegal or Harmful Speech

280

340

44

20

25

165

133

40

3,729

109

5,460

6,887

154

27

234

872

52

35

33

7

721

645

456

80

16

16

2,356

125

23,061

Negative Effects on Civic Discourse or Elections

21

30

8

1

3

25

13

7

462

15

259

833

14

5

23

129

3

5

1

1

211

496

17

17

15

3

121

9

2,747

Non-Consensual Behaviour

13

10

1

5

2

2

12

2

94

8

102

87

5

15

27

34

0

1

0

0

27

13

15

1

1

0

147

21

645

Pornography or Sexualized Content

16

40

3

0

3

19

10

0

283

3

321

293

41

93

41

103

1

1

1

2

75

69

48

24

6

2

176

12

1,686

Protection of Minors

36

25

8

4

3

15

17

2

229

9

293

198

16

5

39

46

12

0

0

0

55

45

6

2

2

12

185

7

1,271

Risk for Public Security

37

90

8

3

3

42

14

9

363

24

729

756

12

6

19

53

9

5

1

0

115

97

28

9

3

4

148

12

2,599

Scams and Fraud

65

130

11

19

31

76

79

19

730

42

488

646

45

37

184

311

8

63

30

1

365

262

141

61

7

7

660

61

4,579

Scope of Platform Service

2

2

1

0

0

0

0

0

38

1

9

30

0

0

3

10

4

0

0

0

10

3

5

0

0

0

27

0

145

Self-Harm

1

3

0

1

1

4

4

0

57

2

28

61

0

0

3

12

0

0

3

0

7

8

4

2

0

0

37

3

241

Unsafe and Illegal Products

3

16

0

0

1

3

1

2

106

4

179

126

5

1

12

18

7

3

0

1

38

15

11

3

1

4

98

13

671

Violence

39

156

7

4

3

33

27

7

901

25

1,225

1,034

28

8

59

170

5

2

7

11

125

92

63

10

4

5

588

24

4,662

Grand Total

737

1,095

117

90

105

495

424

106

10,066

372

14,866

18,043

462

228

964

2,459

132

154

93

27

2,579

2,813

1,257

344

75

81

6,997

491

65,672

REPORTS OF ILLEGAL CONTENT MEDIAN HANDLE TIME

Reports of Illegal Content Median Handle Time (Hours) - Aug 28 to Oct 20

Enforcement Process

Action Type

Reason Code

Austria

Belgium

Bulgaria

Croatia

Cyprus

Czechia

Denmark

Estonia

EU

Finland

France

Germany

Greece

Hungary

Ireland

Italy

Latvia

Lithuania

Luxembourg

Malta

Netherlands

Poland

Portugal

Romania

Slovakia

Slovenia

Spain

Sweden

Automated Means

Global content deletion based on TIUC Terms of Service and Rules

Illegal or Harmful Speech

34.3

29.5

Non-Consensual Behaviour

92.8

Self-Harm

92.7

Violence

21.4

Country withheld Content

Data Protection & Privacy Violations

117.0

Illegal or Harmful Speech

106.3

No violation found

Animal Welfare

45.7

82.9

26.5

47.9

Data Protection & Privacy Violations

30.1

32.9

50.3

33.7

104.0

39.4

Illegal or Harmful Speech

34.4

33.1

36.6

24.3

27.7

106.3

79.0

43.0

Non-Consensual Behaviour

44.5

29.5

30.5

Pornography or Sexualized Content

25.7

22.6

69.5

48.7

89.2

32.1

22.3

Protection of Minors

22.1

27.0

24.0

Risk for Public Security

96.1

Scams and Fraud

75.6

73.5

102.2

202.8

26.1

91.4

44.3

28.7

73.4

57.8

35.1

223.8

45.2

34.1

52.3

75.4

44.0

27.2

Scope of Platform Service

42.5

85.2

36.0

Self-Harm

52.2

21.3

49.1

Unsafe and Illegal Products

35.4

40.8

29.8

19.2

Violence

70.6

31.0

115.2

47.8

24.1

20.5

28.2

42.8

Manual Closure

Global content deletion based on TIUC Terms of Service and Rules

Animal Welfare

3.0

8.6

8.4

91.0

13.2

Data Protection & Privacy Violations

10.7

0.5

17.2

13.7

12.3

37.7

6.4

3.2

42.3

11.1

14.8

16.0

158.7

3.1

1.8

0.9

50.0

2.2

Illegal or Harmful Speech

13.3

10.5

7.6

15.2

12.6

2.1

2.7

0.2

8.0

9.6

4.8

3.1

11.2

12.6

9.6

45.0

5.4

13.0

12.8

6.3

9.0

22.8

10.4

14.6

Negative Effects on Civic Discourse or Elections

37.3

178.3

13.0

6.3

0.1

5.8

22.5

11.2

Non-Consensual Behaviour

3.1

53.4

33.2

12.0

4.1

10.6

26.9

88.3

15.7

1.3

11.5

Pornography or Sexualized Content

10.8

10.4

11.2

15.3

9.6

3.0

10.8

3.3

7.9

5.1

8.3

11.0

2.8

1.2

19.4

2.4

10.3

14.3

13.6

13.3

11.3

Protection of Minors

9.6

12.4

15.9

75.5

13.8

15.4

20.5

7.7

4.8

3.9

3.8

3.3

28.0

9.7

11.0

8.6

2.1

10.0

14.8

16.5

9.9

12.0

Risk for Public Security

5.6

2.3

0.8

11.8

4.1

1.5

14.5

10.1

0.7

1.4

6.0

4.3

0.3

11.8

Scams and Fraud

2.8

1.2

62.2

1.3

1.4

63.0

180.9

36.7

Scope of Platform Service

62.4

Self-Harm

1.5

3.0

13.5

8.9

10.8

16.9

17.7

0.9

36.8

0.2

Unsafe and Illegal Products

1.2

21.5

13.1

5.2

4.1

1.3

5.3

12.6

1.5

31.7

Violence

11.7

6.3

0.0

22.8

0.5

5.2

12.3

4.3

3.1

2.8

6.8

51.1

14.6

15.7

16.2

11.8

11.3

6.1

0.3

17.6

11.3

1.7

Temporary suspension and global content deletion based on TIUC Terms of Service and Rules

Data Protection & Privacy Violations

4.5

1.1

Illegal or Harmful Speech

1.4

2.7

18.7

Pornography or Sexualized Content

4.6

Protection of Minors

14.4

17.1

Risk for Public Security

26.5

Scams and Fraud

14.1

Violence

4.5

1.3

Offer of help in case of self-harm and suicide concern based on TIUC Terms of Service and Rules

Protection of minors

22.2

Self-harm

12.9

4.5

3.9

8.4

Content removed globally

Animal Welfare

0.0

Data Protection & Privacy Violations

0.0

Illegal or Harmful Speech

74.8

0.4

11.7

34.0

0.1

0.0

24.4

155.3

Non-Consensual Behaviour

102.9

Pornography or Sexualized Content

52.2

88.1

4.0

16.2

0.4

160.4

146.5

23.8

13.3

Protection of Minors

2.0

8.9

23.9

17.2

27.6

Risk for Public Security

32.1

26.5

Scams and Fraud

502.2

19.6

101.8

Self-Harm

1.2

Unsafe and Illegal Products

18.6

Country withheld Account

Illegal or Harmful Speech

51.5

Pornography or Sexualized Content

31.0

Scams and Fraud

33.0

Violence

6.9

Country withheld Content

Animal Welfare

0.0

10.1

18.7

13.0

9.1

6.6

7.5

10.8

Data Protection & Privacy Violations

17.2

157.6

0.4

139.4

0.3

90.6

53.3

6.6

2.2

104.2

33.9

125.9

52.8

10.2

123.5

9.7

26.3

55.4

108.2

Illegal or Harmful Speech

12.9

3.6

4.1

12.4

22.9

14.3

10.3

72.7

126.2

13.3

8.2

3.0

2.7

1.2

11.2

5.8

11.2

21.3

0.3

17.6

11.8

9.9

10.2

1.5

7.4

49.9

11.4

Negative Effects on Civic Discourse or Elections

8.4

17.1

1.8

157.0

15.1

2.8

1.8

35.4

4.7

11.7

9.6

12.7

3.1

47.4

Non-Consensual Behaviour

12.7

110.7

128.0

5.0

10.9

3.0

134.9

38.2

43.0

16.2

5.0

121.6

12.5

46.8

Pornography or Sexualized Content

12.3

17.0

18.8

9.9

47.3

1.8

35.2

88.0

3.8

12.4

5.2

7.9

49.0

10.1

4.1

12.5

51.5

4.3

13.5

19.5

1.5

1.9

77.4

13.8

31.1

Protection of Minors

8.1

12.1

0.6

162.2

28.9

44.2

138.5

12.1

10.2

4.8

9.0

93.0

0.2

11.2

4.1

11.9

4.6

17.2

19.3

Risk for Public Security

96.3

2.8

167.6

73.5

141.0

3.1

6.8

2.1

143.2

7.2

13.0

146.6

27.0

6.6

19.2

2.4

Scams and Fraud

109.1

124.0

161.3

72.8

137.1

141.1

37.3

11.3

128.6

6.0

185.6

150.2

161.2

130.1

71.8

80.3

170.9

79.4

135.4

138.5

Scope of Platform Service

6.3

0.9

23.9

31.2

Self-Harm

1.8

67.2

9.6

0.3

27.8

14.8

Unsafe and Illegal Products

165.7

49.0

111.2

2.0

12.4

0.5

66.9

87.8

18.8

2.5

97.1

90.0

Violence

59.1

1.2

13.3

12.0

51.1

122.0

4.9

2.4

4.7

3.0

0.2

1.6

11.5

18.5

3.1

15.8

16.4

8.9

0.3

20.9

18.8

14.6

Globally withheld Content

Intellectual Property Infringements

5.6

3.1

5.6

6.3

2.8

0.6

2.6

3.6

0.4

2.8

1.5

53.2

0.4

7.6

2.6

0.5

3.8

1.8

2.5

1.6

Account Suspension

Intellectual Property Infringements

81.2

14.7

31.8

58.8

77.7

63.3

81.7

50.7

29.5

66.7

68.6

86.3

94.6

56.5

32.8

43.1

77.4

No violation found

Animal Welfare

53.6

16.2

20.5

14.4

18.6

10.3

9.3

20.5

18.3

16.3

17.2

1.0

40.3

20.4

0.0

20.4

20.4

20.4

20.4

20.4

5.1

13.5

10.4

20.4

20.3

16.5

16.4

20.5

Data Protection & Privacy Violations

3.1

11.1

73.6

8.7

3.3

5.1

24.2

7.6

13.0

9.7

13.5

2.4

1.9

59.8

47.1

15.2

2.1

31.3

12.9

17.8

13.5

12.1

0.4

13.3

16.2

8.3

Illegal or Harmful Speech

9.6

7.6

13.1

12.2

16.7

9.2

4.4

12.8

11.1

14.3

11.0

2.7

8.4

16.4

11.0

8.4

11.8

10.9

2.4

119.7

14.8

12.7

8.0

11.6

2.0

13.9

9.9

4.4

Intellectual property infringements

28.7

4.5

2.1

31.7

101.8

47.2

52.8

6.1

4.5

37.2

33.9

1.1

9.5

25.6

10.4

48.6

36.1

38.4

30.2

53.0

35.4

25.9

43.2

53.5

56.8

Negative Effects on Civic Discourse or Elections

9.2

8.8

14.4

4.4

0.2

2.2

2.0

10.1

7.7

14.3

19.2

2.1

4.3

14.4

10.4

2.8

6.1

12.2

259.9

63.1

10.9

5.9

10.2

27.5

2.3

2.7

11.1

0.7

Non-Consensual Behaviour

4.7

8.1

16.9

63.0

6.5

5.7

17.1

5.9

16.0

45.5

11.8

4.9

12.3

84.0

20.7

14.2

143.8

11.8

17.3

19.1

0.1

0.4

11.4

9.6

Pornography or Sexualized Content

5.4

13.3

13.8

68.7

15.0

4.4

12.9

4.1

11.0

10.6

21.7

15.6

11.8

17.1

12.3

15.5

49.4

2.4

16.3

11.0

14.2

11.2

54.1

11.7

13.7

11.3

Protection of Minors

11.2

8.6

8.6

14.6

8.6

9.0

13.5

8.8

13.7

6.3

12.5

3.7

6.6

11.3

10.2

11.6

11.1

14.0

13.1

13.4

2.2

9.8

17.7

14.2

12.1

Risk for Public Security

5.0

9.0

1.5

15.4

3.6

3.1

2.3

8.2

8.7

3.5

10.9

4.2

16.0

13.2

7.7

6.3

10.3

9.9

4.1

13.0

11.6

11.7

10.6

0.3

5.4

9.9

5.2

Scams and Fraud

17.5

18.7

3.6

3.4

12.3

12.6

14.8

1.5

18.7

88.6

16.0

5.4

20.6

16.8

12.7

20.2

21.4

108.5

3.6

14.0

17.2

13.2

24.2

4.7

16.6

48.6

13.7

75.0

Scope of Platform Service

114.5

73.2

7.5

9.4

4.6

5.9

7.3

15.2

18.5

15.6

4.5

15.8

27.0

23.6

Self-Harm

2.1

13.6

0.4

1.0

41.4

47.4

11.2

1.5

11.6

9.5

15.8

27.8

1.8

23.5

12.7

2.3

23.9

11.1

2.9

Unsafe and Illegal Products

12.5

13.4

77.1

10.7

7.8

6.8

13.4

12.6

18.7

7.4

15.5

12.5

142.9

7.0

13.9

1.7

4.2

4.2

22.5

1.0

10.9

17.1

10.7

20.2

Violence

11.9

9.8

13.1

32.8

89.8

10.3

49.9

83.4

10.8

8.5

10.4

3.7

13.2

7.7

10.4

10.7

2.5

0.9

1.6

2.1

11.0

12.9

10.6

10.3

12.5

1.7

10.9

9.1

Important Notes about Actions Taken on Illegal Content:

  1. Any disparity between reports received and reports handled is caused by cases still pending at the end of the reporting period.
  2. We only use automated means to close user reports of illegal content where: (i) the reported content is no longer accessible to the reporter through other means/workflows; or (ii) the reporter displays bad-actor patterns.
  3. The numbers for “Intellectual property infringements” reflect reports rather than individual items of content and accounts. Actions against intellectual property infringements are taken globally, meaning that media that infringes copyright and accounts that infringe trademarks are disabled globally.
  4. Action types: actions that do not reference the TIUC Terms of Service and Rules were taken on the basis of illegality.
  5. To improve clarity, we've omitted countries and violation types with zero reports from the tables above.
  6. The tables REPORTS RESOLVED BY ACTIONS TAKEN ON ILLEGAL CONTENT and REPORTS OF ILLEGAL CONTENT MEDIAN HANDLE TIME were updated on 13 November 2023 to replace the undefined description “reported content” with the relevant enforcement method, “manual closure”.
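
For illustration, here is a minimal sketch of how per-country median handle times like those reported above can be computed. It is not X's implementation: the record layout, the field names (country, reported_at, resolved_at), and the sample data are all hypothetical.

```python
from collections import defaultdict
from datetime import datetime
from statistics import median

# Hypothetical resolved-report records: (country, reported_at, resolved_at).
# Reports still pending at the end of the period carry no resolution time
# and are excluded, consistent with note 1 above.
reports = [
    ("Austria", datetime(2023, 9, 1, 8, 0), datetime(2023, 9, 1, 12, 0)),
    ("Austria", datetime(2023, 9, 2, 9, 0), datetime(2023, 9, 2, 11, 30)),
    ("France", datetime(2023, 9, 3, 10, 0), datetime(2023, 9, 3, 14, 45)),
]

# Collect handle times in hours, grouped by country.
handle_times = defaultdict(list)
for country, reported_at, resolved_at in reports:
    hours = (resolved_at - reported_at).total_seconds() / 3600
    handle_times[country].append(hours)

# Median handle time per country, rounded to one decimal as in the tables.
for country, hours in sorted(handle_times.items()):
    print(f"{country}: {round(median(hours), 1)}")
```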

Complaints received through our internal complaint-handling system.

COMPLAINTS OF ACTIONS TAKEN FOR ILLEGAL CONTENT RECEIVED

Illegal Content Complaints Received - Aug 28 to Oct 20

| Country | Complaints |
|---|---|
| Austria | 3 |
| Belgium | 8 |
| Bulgaria | 1 |
| Croatia | 1 |
| Cyprus | 1 |
| Czechia | 1 |
| Denmark | 5 |
| Estonia | 3 |
| EU | 33 |
| Finland | 1 |
| France | 52 |
| Germany | 33 |
| Greece | 1 |
| Ireland | 10 |
| Italy | 15 |
| Latvia | 2 |
| Luxembourg | 5 |
| Netherlands | 5 |
| Poland | 5 |
| Portugal | 6 |
| Slovenia | 1 |
| Spain | 14 |
| Sweden | 2 |
| Grand Total | 208 |

COMPLAINTS OF ACTIONS TAKEN FOR ILLEGAL CONTENT DECISIONS

Illegal Content Complaints Actioned - Aug 28 to Oct 20

| Country | Overturned Appeal | Rejected Appeal |
|---|---|---|
| Austria | 1 | 2 |
| Belgium | 3 | 5 |
| Bulgaria | 0 | 1 |
| Croatia | 0 | 1 |
| Cyprus | 0 | 1 |
| Czechia | 0 | 1 |
| Denmark | 0 | 5 |
| Estonia | 0 | 3 |
| EU | 2 | 31 |
| Finland | 0 | 1 |
| France | 3 | 49 |
| Germany | 13 | 20 |
| Greece | 0 | 1 |
| Ireland | 1 | 9 |
| Italy | 0 | 15 |
| Latvia | 1 | 1 |
| Luxembourg | 2 | 3 |
| Netherlands | 0 | 5 |
| Poland | 1 | 4 |
| Portugal | 3 | 3 |
| Slovenia | 0 | 1 |
| Spain | 4 | 10 |
| Sweden | 1 | 1 |
| Grand Total | 35 | 173 |

COMPLAINTS OF ACTIONS TAKEN FOR ILLEGAL CONTENT MEDIAN HANDLE TIME

Illegal Content Complaints Median Handle Time (Hours) - Aug 28 to Oct 20

| Country | Median Handle Time (Hours) |
|---|---|
| Austria | 3.8 |
| Belgium | 15.4 |
| Bulgaria | 329 |
| Croatia | 0.9 |
| Cyprus | 0 |
| Czechia | 131.6 |
| Denmark | 1.7 |
| Estonia | 168.4 |
| EU | 13.2 |
| Finland | 68 |
| France | 4.7 |
| Germany | 2 |
| Greece | 24.1 |
| Ireland | 3.4 |
| Italy | 16 |
| Latvia | 71.1 |
| Luxembourg | 0 |
| Netherlands | 4.3 |
| Poland | 8.2 |
| Portugal | 5.7 |
| Slovenia | 199.2 |
| Spain | 9.4 |
| Sweden | 8.4 |
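
Read together, the received and decisions tables above yield a per-country overturn rate: the share of decided complaints in which the original action was reversed. Here is a minimal sketch of that calculation; the dictionary simply restates a few (overturned, rejected) pairs from the decisions table.

```python
# (overturned, rejected) complaint decisions per country, taken from the
# COMPLAINTS OF ACTIONS TAKEN FOR ILLEGAL CONTENT DECISIONS table above.
decisions = {
    "Germany": (13, 20),
    "France": (3, 49),
    "Spain": (4, 10),
    "Italy": (0, 15),
}

for country, (overturned, rejected) in decisions.items():
    decided = overturned + rejected
    print(f"{country}: {overturned}/{decided} overturned ({overturned / decided:.0%})")
```

On these figures, overturn rates vary widely by country, from 0% (Italy) to roughly 39% (Germany).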

COMPLAINTS OF ACTIONS TAKEN FOR TIUC TERMS OF SERVICE AND RULES VIOLATIONS RECEIVED

TIUC Terms of Service and Rules Action Complaints - Aug 28 to Oct 20

| Country | Account Suspension | Content Action | Live Feature Action | Restricted Reach | Sensitive Media Action | Grand Total |
|---|---|---|---|---|---|---|
| Austria | 1,006 | 70 | 1 | 48 | 5 | 1,130 |
| Belgium | 1,758 | 149 | 4 | 86 | 11 | 2,008 |
| Bulgaria | 741 | 16 | 1 | 21 | 1 | 780 |
| Croatia | 407 | 19 | 2 | 35 | 4 | 467 |
| Cyprus | 242 | 15 | 0 | 10 | 3 | 270 |
| Czechia | 952 | 70 | 0 | 57 | 12 | 1,091 |
| Denmark | 1,010 | 54 | 1 | 50 | 4 | 1,119 |
| Estonia | 321 | 10 | 0 | 14 | 3 | 348 |
| Finland | 1,101 | 45 | 0 | 66 | 7 | 1,219 |
| France | 16,340 | 1,296 | 45 | 371 | 49 | 18,101 |
| Germany | 24,594 | 960 | 32 | 470 | 129 | 26,185 |
| Greece | 1,067 | 50 | 1 | 41 | 4 | 1,163 |
| Hungary | 874 | 27 | 1 | 17 | 3 | 922 |
| Ireland | 1,456 | 176 | 2 | 195 | 15 | 1,844 |
| Italy | 5,837 | 177 | 10 | 145 | 22 | 6,191 |
| Latvia | 318 | 10 | 1 | 10 | 0 | 339 |
| Lithuania | 535 | 17 | 1 | 8 | 0 | 561 |
| Luxembourg | 965 | 13 | 3 | 12 | 3 | 996 |
| Malta | 122 | 8 | 0 | 4 | 0 | 134 |
| Netherlands | 13,939 | 340 | 20 | 350 | 58 | 14,707 |
| Poland | 6,688 | 180 | 8 | 217 | 21 | 7,114 |
| Portugal | 2,457 | 135 | 6 | 65 | 3 | 2,666 |
| Romania | 1,501 | 68 | 4 | 38 | 6 | 1,617 |
| Slovakia | 333 | 22 | 1 | 15 | 3 | 374 |
| Slovenia | 208 | 12 | 0 | 30 | 1 | 251 |
| Spain | 12,365 | 1,068 | 7 | 454 | 23 | 13,917 |
| Sweden | 2,202 | 108 | 11 | 188 | 21 | 2,530 |
| Grand Total | 99,339 | 5,115 | 162 | 3,017 | 411 | 108,044 |

COMPLAINTS OF ACTIONS TAKEN FOR TIUC TERMS OF SERVICE AND RULES VIOLATIONS DECISIONS

Decisions by complaint category. Legend: AS = Account Suspension Complaints; CA = Content Action Complaints; LF = Live Feature Action Complaints; RR = Restricted Reach Complaints; SM = Sensitive Media Action Complaints. "No" means the complaint was rejected and the original action upheld; "Yes" means the complaint was upheld and the action overturned.

| Country | AS No | AS Yes | CA No | CA Yes | LF No | LF Yes | RR No | RR Yes | SM No | SM Yes | Total |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Austria | 928 | 74 | 60 | 8 | 1 | 0 | 24 | 24 | 2 | 3 | 1,124 |
| Belgium | 1,607 | 130 | 122 | 26 | 4 | 0 | 38 | 48 | 5 | 6 | 1,986 |
| Bulgaria | 699 | 37 | 15 | 1 | 1 | 0 | 12 | 9 | 0 | 1 | 775 |
| Croatia | 367 | 35 | 14 | 4 | 2 | 0 | 23 | 12 | 2 | 2 | 461 |
| Cyprus | 222 | 18 | 11 | 4 | 0 | 0 | 5 | 4 | 3 | 0 | 267 |
| Czechia | 847 | 92 | 58 | 11 | 0 | 0 | 20 | 37 | 12 | 0 | 1,077 |
| Denmark | 938 | 62 | 38 | 16 | 1 | 0 | 29 | 21 | 4 | 0 | 1,109 |
| Estonia | 306 | 14 | 8 | 2 | 0 | 0 | 5 | 9 | 3 | 0 | 347 |
| Finland | 1,013 | 74 | 36 | 8 | 0 | 0 | 25 | 40 | 3 | 4 | 1,203 |
| France | 15,047 | 1,114 | 968 | 287 | 44 | 1 | 203 | 168 | 32 | 13 | 17,877 |
| Germany | 23,503 | 948 | 782 | 162 | 30 | 1 | 232 | 235 | 72 | 50 | 26,015 |
| Greece | 969 | 84 | 43 | 6 | 1 | 0 | 18 | 23 | 3 | 1 | 1,148 |
| Hungary | 817 | 51 | 16 | 10 | 1 | 0 | 9 | 8 | 3 | 0 | 915 |
| Ireland | 1,333 | 97 | 133 | 42 | 2 | 0 | 96 | 99 | 11 | 3 | 1,816 |
| Italy | 5,382 | 368 | 136 | 40 | 10 | 0 | 82 | 63 | 9 | 13 | 6,103 |
| Latvia | 283 | 31 | 9 | 1 | 1 | 0 | 7 | 3 | 0 | 0 | 335 |
| Lithuania | 500 | 31 | 12 | 5 | 1 | 0 | 5 | 3 | 0 | 0 | 557 |
| Luxembourg | 942 | 19 | 10 | 2 | 3 | 0 | 7 | 5 | 2 | 0 | 990 |
| Malta | 108 | 13 | 5 | 3 | 0 | 0 | 1 | 3 | 0 | 0 | 133 |
| Netherlands | 13,426 | 443 | 265 | 68 | 20 | 0 | 176 | 171 | 37 | 16 | 14,622 |
| Poland | 6,289 | 352 | 152 | 28 | 7 | 1 | 92 | 125 | 15 | 6 | 7,067 |
| Portugal | 2,166 | 257 | 92 | 42 | 6 | 0 | 45 | 20 | 2 | 1 | 2,631 |
| Romania | 1,347 | 138 | 53 | 15 | 4 | 0 | 18 | 20 | 2 | 4 | 1,601 |
| Slovakia | 311 | 19 | 15 | 7 | 1 | 0 | 8 | 7 | 2 | 1 | 371 |
| Slovenia | 187 | 16 | 12 | 0 | 0 | 0 | 15 | 15 | 0 | 1 | 246 |
| Spain | 10,784 | 1,374 | 776 | 277 | 7 | 0 | 231 | 221 | 10 | 12 | 13,692 |
| Sweden | 2,055 | 131 | 77 | 29 | 11 | 0 | 95 | 93 | 9 | 11 | 2,511 |
| Grand Total | 92,376 | 6,022 | 3,918 | 1,104 | 158 | 3 | 1,521 | 1,486 | 243 | 148 | 106,979 |

COMPLAINTS OF ACTIONS TAKEN FOR TIUC TERMS OF SERVICE AND RULES VIOLATIONS MEDIAN HANDLE TIME

TIUC Terms of Service and Rules Complaints Median Handle Time (Hours)

| Country | Account Suspension | Content Action | Live Feature Action | Restricted Reach | Sensitive Media Action |
|---|---|---|---|---|---|
| Austria | 0.14 | 0.25 | 1.21 | 0.08 | 5.82 |
| Belgium | 0.07 | 0.33 | 9.11 | 0.05 | 0.13 |
| Bulgaria | 0.00 | 0.90 | 4.76 | 0.10 | 11.75 |
| Croatia | 0.17 | 0.16 | 3.71 | 0.17 | 0.27 |
| Cyprus | 0.16 | 0.48 | - | 0.04 | 2.78 |
| Czechia | 0.12 | 0.07 | - | 0.08 | 0.22 |
| Denmark | 0.12 | 0.05 | 12.87 | 0.03 | 4.80 |
| Estonia | 0.47 | 0.28 | - | 0.05 | 2.18 |
| Finland | 0.18 | 0.46 | - | 0.07 | 1.13 |
| France | 0.07 | 0.55 | 5.51 | 0.08 | 0.93 |
| Germany | 0.00 | 1.04 | 6.40 | 0.08 | 1.88 |
| Greece | 0.12 | 0.77 | 0.07 | 0.05 | 1.65 |
| Hungary | 0.08 | 0.48 | 4.51 | 0.05 | 3.70 |
| Ireland | 0.08 | 0.15 | 4.30 | 0.07 | 1.17 |
| Italy | 0.10 | 0.35 | 7.98 | 0.08 | 0.45 |
| Latvia | 0.23 | 0.13 | 8.86 | 0.05 | - |
| Lithuania | 0.32 | 0.30 | 3.58 | 0.17 | - |
| Luxembourg | 0.00 | 1.20 | 1.22 | 0.10 | 0.18 |
| Malta | 0.34 | 0.27 | - | 0.15 | - |
| Netherlands | 0.00 | 0.42 | 4.98 | 0.08 | 0.82 |
| Poland | 0.03 | 0.62 | 3.74 | 0.07 | 0.47 |
| Portugal | 0.08 | 0.30 | 6.31 | 0.07 | 0.68 |
| Romania | 0.14 | 0.57 | 11.72 | 0.05 | 2.68 |
| Slovakia | 0.38 | 0.35 | 1.72 | 0.05 | 0.98 |
| Slovenia | 0.25 | 0.30 | - | 0.08 | 4.08 |
| Spain | 0.10 | 0.43 | 2.35 | 0.08 | 1.30 |
| Sweden | 0.13 | 0.27 | 7.74 | 0.07 | 1.32 |

A dash indicates that no complaints in that category were received from the country during the period.

Important Notes about Complaints:

  1. Information on the basis of complaints is not provided because of the wide variety of underlying reasoning captured in the complaint form's open text field.
  2. To improve clarity, we've omitted countries and violation types with zero complaints from the tables above.
  3. The tables COMPLAINTS OF ACTIONS TAKEN FOR TIUC TERMS OF SERVICE AND RULES VIOLATIONS RECEIVED/DECISIONS were updated on 1 November 2023 to show additional data regarding complaints of actions taken based on the CSE policy that were not shown in the original version. The table COMPLAINTS OF ACTIONS TAKEN FOR TIUC TERMS OF SERVICE AND RULES VIOLATIONS MEDIAN HANDLE TIME was updated accordingly.

INDICATORS OF ACCURACY FOR CONTENT MODERATION

The possible rate of error of the automated means used in fulfilling those purposes, and any safeguards applied
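
One way to quantify the possible rate of error of automated enforcement is to treat overturned complaints against automated decisions as observed errors and attach a confidence interval to that proportion. Here is a minimal sketch, assuming such counts are available; the figures used (30 overturned out of 1,000 decided) are placeholders, not reported data.

```python
import math

def wilson_interval(errors: int, total: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for an observed error proportion."""
    if total == 0:
        return (0.0, 1.0)
    p = errors / total
    denom = 1 + z ** 2 / total
    centre = (p + z ** 2 / (2 * total)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / total + z ** 2 / (4 * total ** 2))
    return (max(0.0, centre - half), min(1.0, centre + half))

# Placeholder counts: complaints against automated actions that were
# overturned on review, out of all such complaints decided.
overturned, decided = 30, 1000
low, high = wilson_interval(overturned, decided)
print(f"Observed error rate {overturned / decided:.1%}, 95% CI [{low:.1%}, {high:.1%}]")
```

The interval matters because low complaint volumes in smaller markets make a raw percentage unstable.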

VISIBILITY FILTERING INDICATORS

TIUC Terms of Service and Rules Visibility Filtering Complaints Received - Aug 28 to Oct 20

Enforcement: Automated Means
Policy: Hateful Conduct

| Language | Complaints |
|---|---|
| Bulgarian | 1 |
| Croatian | 6 |
| Czech | 21 |
| Danish | 11 |
| Dutch | 84 |
| English | 1,098 |
| Finnish | 18 |
| French | 240 |
| German | 151 |
| Greek | 4 |
| Hungarian | 2 |
| Irish | 0 |
| Italian | 56 |
| Latvian | 1 |