Incident 268: Permanent Removal of Social Media Content via Automated Tools Allegedly Prevented Investigative Efforts
Description: Automated, permanent removal of violating social media content such as terrorism, violent extremism, and hate speech without archival allegedly prevented its potential use to investigate serious crimes and hampered criminal accountability efforts.
Entities
Alleged: Facebook, Twitter, and YouTube developed and deployed an AI system, which harmed International Criminal Court investigators, International Court of Justice investigators, investigative journalists, criminal investigators, and victims of crimes documented on social media.
Incident Stats
Incident ID
268
Report Count
2
Incident Date
2020-03-16
Editors
Khoa Lam
Incident Reports
Reports Timeline
hrw.org · 2020
Social media platforms are taking down online content they consider terrorist, violently extremist, or hateful in a way that prevents its potential use to investigate serious crimes, including war crimes, Human Rights Watch said in a report…
hrw.org · 2020
In recent years, social media platforms have been taking down online content more often and more quickly, often in response to the demands of governments, but in a way that prevents the use of that content to investigate people suspected of…
Variants
A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than indexing variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.