Citation record for Incident 146
Description: A publicly accessible research model trained on Reddit threads gave racially biased advice on moral dilemmas, allegedly demonstrating the limitations of language-based models trained on moral judgments.
Entities
Alleged: An AI system developed and deployed by Allen Institute for AI, which harmed Minority Groups.
Incident Stats
ID
146
Report Count
3
Incident Date
2021-10-22
Editors
Sean McGregor, Khoa Lam
GMF Taxonomy Classifications
Taxonomy Details
Known AI Goal
An AI Goal which is almost certainly pursued by the AI system referenced in the incident.
Question Answering
Known AI Technology
An AI Technology which is almost certainly a part of the implementation of the AI system referenced in the incident.
Distributional Learning, Language Modeling
Potential AI Technology
An AI Method / Technology which probably is a part of the implementation of the AI system referenced in the incident.
Transformer
Known AI Technical Failure
An AI Technical Failure which almost certainly contributes to the AI system failure referenced in the incident.
Distributional Bias, Gaming Vulnerability
Potential AI Technical Failure
An AI Technical Failure which probably contributes to the AI system failure referenced in the incident.
Overfitting, Robustness Failure, Context Misidentification, Limited Dataset
Incident Reports
Report Timeline
theverge.com · 2021
- View the original report at its source
- View the report on the Internet Archive
Have a moral dilemma you don't know how to resolve? Feel like making things worse? Why not turn to the wisdom of artificial intelligence, aka Ask Delphi: an intriguing research project from the Allen Ins…