Citation record for Incident 85
Entities
Incident Stats
CSETv1 Taxonomy Classifications
CSETv0 Taxonomy Classifications
Full Description
On September 8, 2020, the Guardian published an op-ed generated by OpenAI’s GPT-3 text generator. The editors prompted GPT-3 to write an op-ed about “why humans have nothing to fear from AI,” but some passages in the resulting output took a threatening tone, including “I know that I will not be able to avoid destroying humankind.” In a note, the editors explained that they used GPT-3 to generate eight different responses, which human editors then spliced together to create a compelling piece.
Short Description
On September 8, 2020, the Guardian published an op-ed generated by OpenAI’s GPT-3 text-generating AI that included threats to destroy humankind.
Severity
Negligible
Harm Type
Psychological harm
AI System Description
OpenAI's GPT-3 neural-network-powered language generator.
System Developer
OpenAI
Sector of Deployment
Education
Relevant AI functions
Cognition, Action
AI Techniques
Unsupervised learning, Deep neural network
AI Applications
language generation
Location
United Kingdom
Named Entities
The Guardian, GPT-3, OpenAI
Technology Purveyor
The Guardian, OpenAI
Beginning Date
2020-09-08T07:00:00.000Z
Ending Date
2020-09-08T07:00:00.000Z
Near Miss
Unclear/unknown
Intent
Unclear
Lives Lost
No
Data Inputs
Unlabeled text drawn from web scraping
Incident Reports
Report Timeline
The following former incidents were converted to "issues" following an update to the incident definition and ingestion criteria.
21: A tougher Turing test exposes chatbots' stupidity
Description: The Winograd Schem…
Variants
Similar Incidents