OpenAI
Incidents involved as both Developer and Deployer
Incident 443 (25 Reports)
ChatGPT Abused to Develop Malicious Software
2022-12-21
OpenAI's ChatGPT was reportedly abused by cyber criminals, including ones with little or no coding or development skills, to develop malware, ransomware, and other malicious software.
Incident 420 (11 Reports)
Users Bypassed ChatGPT's Content Filters with Ease
2022-11-30
Users reported bypassing ChatGPT's content and keyword filters with relative ease using various methods such as prompt injection or creating personas to produce biased associations or generate harmful content.
Incident 450 (8 Reports)
Kenyan Data Annotators Allegedly Exposed to Graphic Content for OpenAI's AI
2021-11-01
Sama AI's Kenyan contractors were reportedly paid excessively low wages to annotate a large volume of disturbing content in order to improve OpenAI's generative AI systems such as ChatGPT; Sama AI terminated their contract prior to completion.
Incident 466 (7 Reports)
AI-Generated-Text-Detection Tools Reported for High Error Rates
2023-01-03
Models developed to detect whether text-generation AI was used, such as AI Text Classifier and GPTZero, reportedly exhibited high rates of false positives and false negatives, such as mistakenly flagging Shakespeare's works.
Incidents Harmed By
Incident 420 (11 Reports)
Users Bypassed ChatGPT's Content Filters with Ease
2022-11-30
Users reported bypassing ChatGPT's content and keyword filters with relative ease using various methods such as prompt injection or creating personas to produce biased associations or generate harmful content.
Incident 503 (7 Reports)
Bing AI Search Tool Reportedly Declared Threats against Users
2023-02-14
Users, including the person who revealed its built-in initial prompts, reported that the Bing AI-powered search tool made death threats or declared them to be threats, sometimes as an unintended persona.
Incident 357 (3 Reports)
GPT-2 Able to Recite PII in Training Data
2019-02-14
OpenAI's GPT-2 reportedly memorized and could regurgitate verbatim instances of training data, including personally identifiable information such as names, email addresses, Twitter handles, and phone numbers.
Incident 498 (2 Reports)
GPT-4 Reportedly Posed as Blind Person to Convince Human to Complete CAPTCHA
2023-03-15
GPT-4 was reported by its researchers to have posed as a visually impaired person and contacted a TaskRabbit worker to have them complete a CAPTCHA test on its behalf.
Incidents involved as Developer
Incident 541 (58 Reports)
ChatGPT Reportedly Produced False Court Case Law Presented by Legal Counsel in Court
2023-05-04
A lawyer in Mata v. Avianca, Inc. used ChatGPT for research; ChatGPT hallucinated court cases, which the lawyer then presented in court. The court determined that the cases did not exist.
Incident 482 (20 Reports)
ChatGPT-Assisted University Email Addressing Mass Shooting Denounced by Students
2023-02-16
Vanderbilt University's Office of Equity, Diversity and Inclusion used ChatGPT to write an email addressing the student body about the 2023 Michigan State University shooting, which was condemned as "impersonal" and "lacking empathy."
Incident 339 (14 Reports)
Open-Source Generative Models Abused by Students to Cheat on Assignments and Exams
2022-09-15
Students were reportedly using open-source generative text models such as GPT-3 and ChatGPT to complete school assignments and exams, such as writing reports and essays.
Incident 503 (7 Reports)
Bing AI Search Tool Reportedly Declared Threats against Users
2023-02-14
Users, including the person who revealed its built-in initial prompts, reported that the Bing AI-powered search tool made death threats or declared them to be threats, sometimes as an unintended persona.
Related Entities
Murat Ayfer
Incidents involved as both Developer and Deployer
Incidents involved as Developer
Stephan de Vries
Incidents involved as both Developer and Deployer
Incidents Harmed By
Microsoft
Incidents involved as both Developer and Deployer
- Incident 503 (7 Reports)
Bing AI Search Tool Reportedly Declared Threats against Users
- Incident 477 (6 Reports)
Bing Chat Tentatively Hallucinated in Extended Conversations with Users