Incidents involved as Developer and Deployer
Incident 141, 2 Reports
California Police Turned on Music to Allegedly Trigger Instagram’s DMCA Takedown to Avoid Being Live-Streamed
2021-02-05
A police officer in Beverly Hills played copyrighted music on his phone upon realizing that his interactions were being recorded on a livestream, allegedly hoping that Instagram's automated copyright detection system would end or mute the stream.
Incident 331, 2 Reports
Bug in Instagram’s “Related Hashtags” Algorithm Allegedly Caused Disproportionate Treatment of Political Hashtags
2020-08-05
A bug was reported by an Instagram spokesperson to have prevented an algorithm from populating related hashtags for thousands of hashtags, resulting in alleged preferential treatment for some politically partisan hashtags.
Incident 343, 2 Reports
Facebook, Instagram, and Twitter Failed to Proactively Remove Targeted Racist Remarks via Automated Systems
2021-07-11
Facebook's, Instagram's, and Twitter's automated content moderation failed to proactively remove racist remarks and posts directed at Black football players after a finals loss, allegedly relying largely on user reports of harassment.
Incident 394, 2 Reports
Social Media's Automated Word-Flagging without Context Shifted Content Creators' Language Use
2017-03-15
TikTok's, YouTube's, Instagram's, and Twitch's use of algorithms to flag certain words devoid of context changed content creators' use of everyday language and their discussion of certain topics, for fear of their content being flagged or auto-demonetized by mistake.
Incidents involved as Deployer
Incident 469, 3 Reports
Automated Adult Content Detection Tools Showed Bias against Women's Bodies
2006-02-25
Automated content moderation tools intended to detect sexual explicitness or "raciness" reportedly exhibited bias against women's bodies, resulting in suppressed reach despite the content not breaking platform policies.
Related Entities
Incidents involved as Developer and Deployer
- Incident 343, 2 Reports
Facebook, Instagram, and Twitter Failed to Proactively Remove Targeted Racist Remarks via Automated Systems
- Incident 142, 1 Report
Facebook’s Advertisement Moderation System Routinely Misidentified Adaptive Fashion Products as Medical Equipment and Blocked Their Sellers