Incidents involved as both Developer and Deployer
Incident 141 · 2 Reports
California Police Turned on Music to Allegedly Trigger Instagram's DMCA Copyright Filter to Avoid Being Live-Streamed
2021-02-05
A police officer in Beverly Hills played copyrighted music from his phone upon realizing that his interactions were being recorded on a livestream, allegedly hoping that Instagram's automated copyright detection system would end or mute the stream.
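The tactic exploits how fingerprint-based copyright detection works: the system does not know who is playing the music or why, only that the stream's audio matches a catalog track. Below is a minimal illustrative sketch of such a matcher; Instagram's actual system is proprietary, and every name, constant, and threshold here is a hypothetical stand-in.

```python
# Illustrative sketch only: a crude audio-fingerprint matcher of the kind
# underlying automated copyright detection. All parameters are hypothetical.
import numpy as np

FRAME = 1024  # samples per analysis window (assumed)

def fingerprint(signal: np.ndarray) -> np.ndarray:
    """Reduce audio to the dominant frequency bin of each frame."""
    n_frames = len(signal) // FRAME
    frames = signal[: n_frames * FRAME].reshape(n_frames, FRAME)
    spectra = np.abs(np.fft.rfft(frames, axis=1))
    return spectra.argmax(axis=1)  # one peak bin per frame

def match_fraction(stream: np.ndarray, reference: np.ndarray) -> float:
    """Share of stream frames whose peak bin also occurs in the reference."""
    ref_bins = set(fingerprint(reference).tolist())
    return float(np.mean([b in ref_bins for b in fingerprint(stream)]))

def should_mute(stream, catalog, threshold=0.6) -> bool:
    # Music played near the phone raises the match fraction against the
    # corresponding catalog track, tripping the mute/takedown path --
    # regardless of what else is happening in the stream.
    return any(match_fraction(stream, ref) >= threshold for ref in catalog)
```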
Incident 331 · 2 Reports
Bug in Instagram’s “Related Hashtags” Algorithm Allegedly Caused Disproportionate Treatment of Political Hashtags
2020-08-05
An Instagram spokesperson reported that a bug prevented the algorithm from populating related hashtags for thousands of hashtags, resulting in alleged preferential treatment of some politically partisan hashtags.
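One common way to build a "related hashtags" feature is a precomputed co-occurrence table; the sketch below models the reported failure as entries of that table silently going unpopulated, which makes affected hashtags return nothing while others return full suggestion lists. This is only a plausible reconstruction under stated assumptions, not Instagram's actual implementation, and the corpus and tags are fabricated.

```python
# Illustrative sketch only: co-occurrence-based related hashtags, with the
# reported bug modeled as silently missing table entries.
from collections import Counter
from itertools import combinations

def build_related(posts: list[set[str]], k: int = 5) -> dict[str, list[str]]:
    """Precompute the top-k co-occurring hashtags for every hashtag seen."""
    co: dict[str, Counter] = {}
    for tags in posts:
        for a, b in combinations(sorted(tags), 2):
            co.setdefault(a, Counter())[b] += 1
            co.setdefault(b, Counter())[a] += 1
    return {tag: [t for t, _ in c.most_common(k)] for tag, c in co.items()}

posts = [  # toy corpus of hashtag sets
    {"#election", "#news", "#vote"},
    {"#election", "#debate"},
    {"#news", "#vote"},
]
table = build_related(posts)
table.pop("#election", None)  # the bug: entries silently unpopulated

table.get("#vote", [])      # -> ["#news", "#election"]: normal suggestions
table.get("#election", [])  # -> []: to users, indistinguishable from suppression
```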
Incident 343 · 2 Reports
Facebook, Instagram, and Twitter Failed to Proactively Remove Targeted Racist Remarks via Automated Systems
2021-07-11
Facebook's, Instagram's, and Twitter's automated content moderation systems failed to proactively remove racist remarks and posts directed at Black football players after a finals loss, with the platforms allegedly relying largely on user reports of harassment.
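The distinction at issue is reactive versus proactive moderation: in a reactive pipeline, abusive content stays visible until enough people report it, whereas a proactive pipeline scores every post at publish time. The sketch below illustrates that contrast only; the threshold, lexicon, and functions are hypothetical, and real systems use trained models rather than word lists.

```python
# Illustrative sketch only: reactive (report-driven) vs. proactive moderation.
REPORT_THRESHOLD = 3  # hypothetical: act only after this many user reports

ABUSE_TERMS = {"<redacted slur>"}  # placeholder lexicon, not a real model

def toy_classifier(text: str) -> float:
    """Stand-in for a trained abuse classifier."""
    return 1.0 if any(term in text.lower() for term in ABUSE_TERMS) else 0.0

def reactive_removal(post: dict) -> bool:
    # Reactive path: the abuse stays up until enough people flag it.
    return post["report_count"] >= REPORT_THRESHOLD

def proactive_removal(post: dict, threshold: float = 0.9) -> bool:
    # Proactive path: every post is scored before any report arrives.
    return toy_classifier(post["text"]) >= threshold
```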
Incident 394 · 2 Reports
Social Media's Automated Word-Flagging without Context Shifted Content Creators' Language Use
2017-03-15
TikTok's, YouTube's, Instagram's, and Twitch's use of algorithms to flag certain words without regard to context changed how content creators use everyday language or discuss certain topics, for fear of their content being mistakenly flagged or auto-demonetized.
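The failure mode is easiest to see in miniature: a context-free blocklist flags a harmless sentence while letting a coded substitute ("algospeak") pass, which is exactly the incentive that shifts creators' vocabulary. The word list and function below are hypothetical and not any platform's actual policy.

```python
# Illustrative sketch only: context-free keyword flagging.
FLAGGED = {"dead", "kill", "suicide"}  # hypothetical blocklist

def is_flagged(caption: str) -> bool:
    """Flag a caption if any blocklisted word appears, ignoring context."""
    return any(word.strip(".,!?") in FLAGGED
               for word in caption.lower().split())

is_flagged("my phone battery is dead")     # True: harmless context, flagged anyway
is_flagged("my phone battery is unalive")  # False: coded substitute slips through
```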
Incidents involved as Deployer
Incident 469 · 3 Reports
Automated Adult Content Detection Tools Showed Bias against Women's Bodies
2006-02-25
Automated content moderation tools for detecting sexual explicitness or "raciness" reportedly exhibited bias against women's bodies, suppressing the reach of images of women even when they did not break platform policies.
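Note that the reported harm is suppression of reach, not removal: a single classifier score gates distribution, so a biased score quietly down-ranks content that violates no policy. The sketch below shows that pattern with fabricated scores and a hypothetical threshold; the real classifiers are proprietary.

```python
# Illustrative sketch only: how a single "raciness" score can gate reach.
RACINESS_THRESHOLD = 0.7  # hypothetical cutoff above which reach is suppressed

def effective_reach(base_reach: int, raciness: float) -> int:
    """Down-rank, rather than remove, content the model scores as 'racy'."""
    return base_reach if raciness < RACINESS_THRESHOLD else base_reach // 10

# Comparable photos, but the model scores the one depicting a woman higher,
# so only her post loses distribution (scores fabricated for illustration).
effective_reach(10_000, raciness=0.81)  # -> 1000
effective_reach(10_000, raciness=0.32)  # -> 10000
```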
Related Entities
Incidents involved as both Developer and Deployer
- Incident 343 · 2 Reports
Facebook, Instagram, and Twitter Failed to Proactively Remove Targeted Racist Remarks via Automated Systems
- Incident 142 · 1 Report
Facebook’s Advertisement Moderation System Routinely Misidentified Adaptive Fashion Products as Medical Equipment and Blocked Their Sellers