About

Why "AI Incidents"?

Intelligent systems are currently prone to unforeseen and often dangerous failures when they are deployed in the real world. Much like the transportation sector before it (e.g., the FAA and FARS) and, more recently, computer systems, intelligent systems require a repository of problems experienced in the real world so that future researchers and developers can mitigate or avoid repeated bad outcomes.

What is an Incident?

The initial set of more than 1,000 incident reports is intentionally broad in nature. Current examples include an autonomous car killing a pedestrian, a trading algorithm causing a market "flash crash," and a facial recognition system causing an innocent person to be arrested.

You are invited to explore the incidents collected to date, view the complete listing, and submit additional incident reports. Researchers are invited to review our working definition of AI incidents.

Current and Future Users

The database is a constantly evolving data product and collection of applications.

  • Current Users include system architects, industrial product developers, public relations managers, researchers, and public policy researchers. These users are invited to use the Discover application to proactively discover how recently deployed intelligent systems have produced unexpected outcomes in the real world. In so doing, they may avoid making similar mistakes in their development.
  • Future Uses will evolve through the code contributions of the open source community, including additional database summaries and taxonomies.

When Should You Report an Incident?

When in doubt about whether an event qualifies as an incident, please submit it! This project is intended to converge on a shared definition of "AI Incident" through exploration of the candidate incidents submitted by the broader community.

Board of Directors

The incident database is managed in a participatory manner by persons and organizations contributing code, research, and broader impacts. If you would like to participate in the governance of the project, please contact us and include your intended contribution to the AI Incident Database.

Voting Members

  • Helen Toner: Helen Toner is Director of Strategy at Georgetown’s Center for Security and Emerging Technology (CSET). She previously worked as a Senior Research Analyst at Open Philanthropy, where she advised policymakers and grantmakers on AI policy and strategy. Between working at Open Philanthropy and joining CSET, Helen lived in Beijing, studying the Chinese AI ecosystem as a Research Affiliate of Oxford University’s Center for the Governance of AI. Helen has written for Foreign Affairs and other outlets on the national security implications of AI and machine learning for China and the United States, and has testified before the U.S.-China Economic and Security Review Commission. She is a member of the board of directors for OpenAI. Helen holds an MA in Security Studies from Georgetown, as well as a BSc in Chemical Engineering and a Diploma in Languages from the University of Melbourne.
    Contributions: AI incident research and oversight of the CSET taxonomy.

  • Patrick Hall: Patrick is principal scientist at bnh.ai, a D.C.-based law firm specializing in AI and data analytics. Patrick also serves as visiting faculty at the George Washington University School of Business. Before co-founding bnh.ai, Patrick led responsible AI efforts at the machine learning software firm H2O.ai, where his work resulted in one of the world’s first commercial solutions for explainable and fair machine learning. Among other academic and technology media writing, Patrick is the primary author of popular e-books on explainable and responsible machine learning. Patrick studied computational chemistry at the University of Illinois before graduating from the Institute for Advanced Analytics at North Carolina State University.
    Contributions: Patrick is the leading contributor of incident reports to the AI Incident Database Project.

  • Sean McGregor: Sean McGregor founded the AI Incident Database project and recently left a position as machine learning architect at the neural accelerator startup Syntiant so he could focus on the assurance of intelligent systems full time. Dr. McGregor's work spans neural accelerators for energy-efficient inference, deep learning for speech and heliophysics, and reinforcement learning for wildfire suppression policy. Outside his paid work, Sean organized a series of workshops at major academic AI conferences on the topic of "AI for Good" and is currently developing an incentives-based approach to making AI safer through audits and insurance.
    Contributions: Sean volunteers as a project maintainer and editor of the AI Incident Database (AIID) project.

Non-Voting Members

  • Neama Dadkhahnikoo: Neama Dadkhahnikoo is an expert in artificial intelligence and entrepreneurship, with over 15 years of experience in technology development at startups, nonprofits, and large companies. He currently serves as a Product Manager for Vertex AI, Google Cloud’s unified platform for end-to-end machine learning. Previously, Mr. Dadkhahnikoo was the Director of AI and Data Operations at the XPRIZE Foundation, CTO of CaregiversDirect (AI startup for home care), co-founder and COO of Textpert (AI startup for mental health), and a startup consultant. He started his career as a Software Developer for The Boeing Company. Mr. Dadkhahnikoo holds an MBA from UCLA Anderson; an MS in Project Management from Stevens Institute of Technology; and a BA in Applied Mathematics and Computer Science, with a minor in Physics, from UC Berkeley.
    Contributions: Neama serves as Executive Director of the Responsible AI Collaborative outside his role as a product manager at Google.
  • Kit Harris: Kit leads grant investigations and researches promising fields at Longview Philanthropy. He also writes about Longview's recommendations and research for its client philanthropists. Prior to focusing on high-impact philanthropic work, Kit worked as a credit derivatives trader with J.P. Morgan. During that time, he donated the majority of his income to high-impact charities. Kit holds a first-class degree in mathematics from the University of Oxford.
    Contributions: Kit serves as board observer, provides strategic advice, and is RAIC's point of contact at Longview.

Collaborators

Open Source Contributors: People who have contributed more than one pull request, graphics, site copy, or bug report to the AI Incident Database.

Responsible AI Collaborative: People who serve the organization behind the AI Incident Database.

Incident Editors: People who resolve incident submissions to the database and maintain them.

Additionally, Zachary Arnold made significant contributions to the incident criteria.

Taxonomy Editors: Organizations or people that have contributed taxonomies to the database.

Partnership on AI staff members:
Jingying Yang and Dr. Christine Custis contributed significantly to the early stages of the AIID.

Incident Contributors: People who have contributed a large number of incidents to the database.

The following people have collected a large number of incidents that are pending ingestion.

  • Zachary Arnold, Helen Toner, Ingrid Dickinson, Thomas Giallella, and Nicolina Demakos (Center for Security and Emerging Technology, Georgetown)
  • Charlie Pownall via AI, algorithmic and automation incident and controversy repository (AIAAIC)
  • Lawrence Lee, Darlena Phuong Quyen Nguyen, Iftekhar Ahmed (UC Irvine)

There is a growing community of people concerned with the collection and characterization of AI incidents, and we encourage everyone to contribute to the development of this system.


The Responsible AI Collaborative

The AI Incident Database is a project of the Responsible AI Collaborative, an organization chartered to advance the AI Incident Database. The governance of the Collaborative is structured around participation in its impact programming. For more details, we invite you to read the founding report and learn more about our board and contributors.

View the Responsible AI Collaborative’s Form 990 and tax-exempt application.