Incidents involved as both Developer and Deployer

Incident 471 · 7 Reports
Facebook Allegedly Failed to Police Hate Speech Content That Contributed to Ethnic Violence in Ethiopia

2019-06-22

Facebook allegedly failed to adequately remove hate speech, some of it extremely violent and dehumanizing, from its platform, including through automated means, contributing to the violence faced by ethnic communities in Ethiopia.


Incident 169 · 5 Reports
Facebook Allegedly Failed to Police Anti-Rohingya Hate Speech Content That Contributed to Violence in Myanmar

2018-08-15

Facebook allegedly failed to adequately remove anti-Rohingya hate speech, some of it extremely violent and dehumanizing, from its platform, contributing to the violence faced by Rohingya communities in Myanmar.


Incident 399 · 4 Reports
Meta AI's Scientific Paper Generator Reportedly Produced Inaccurate and Harmful Content

2022-11-15

Meta AI trained and hosted a scientific paper generator that sometimes produced bad science and that prohibited queries on topics and groups likely to elicit offensive or harmful content.


Incident 278 · 3 Reports
Meta’s BlenderBot 3 Chatbot Demo Made Offensive Antisemitic Comments

2022-08-07

The publicly launched conversational AI demo BlenderBot 3, developed by Meta, was reported by its users, and acknowledged by its developers, to have "occasionally" made offensive and inconsistent remarks, such as invoking Jewish stereotypes.


Incidents Harmed By

Incident 399 · 4 Reports
Meta AI's Scientific Paper Generator Reportedly Produced Inaccurate and Harmful Content

2022-11-15

Meta AI trained and hosted a scientific paper generator that sometimes produced bad science and that prohibited queries on topics and groups likely to elicit offensive or harmful content.


Incidents involved as Deployer

Incident 469 · 3 Reports
Automated Adult Content Detection Tools Showed Bias against Women Bodies

2006-02-25

Automated content moderation tools for detecting sexual explicitness or "raciness" reportedly exhibited bias against women's bodies, suppressing the reach of images that did not break platform policies.


Related Entities