CSETv1 Charts

The CSET AI Harm Taxonomy for AIID is the second edition of the CSET incident taxonomy. It characterizes the harms, entities, and technologies involved in AI incidents and the circumstances of their occurrence. The charts below show select fields from the CSET AI Harm Taxonomy for AIID. Details about each field can be found here; however, brief descriptions of each field are also provided above each chart.
The taxonomy provides the CSET definition of AI harm.
AI harm has four components which, once appropriately defined, enable the identification of AI harm. These key components serve to distinguish harm from non-harm and AI harm from non-AI harm. For an incident to be an AI harm, there must be:
  • 1) an entity that experienced
  • 2) a harm event or harm issue that
  • 3) can be directly linked to a consequence of the behavior of
  • 4) an AI system.
All four elements must be present for it to be an AI harm.
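As a rough illustration of how the four components combine, the sketch below (Python) checks a hypothetical annotation record against the definition. The field names are assumptions for illustration only, not the actual CSET annotation schema.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical annotation record; field names are illustrative, not the CSET schema.
@dataclass
class AnnotatedIncident:
    harmed_entity: Optional[str]   # 1) an entity that experienced
    harm_event_or_issue: bool      # 2) a harm event or harm issue that
    linked_to_ai_behavior: bool    # 3) can be directly linked to the behavior of
    ai_system: bool                # 4) an AI system

def meets_cset_ai_harm_definition(incident: AnnotatedIncident) -> bool:
    """All four components must be present for the incident to count as AI harm."""
    return (
        incident.harmed_entity is not None
        and incident.harm_event_or_issue
        and incident.linked_to_ai_behavior
        and incident.ai_system
    )
```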
Not all AIID incidents meet this definition of AI harm. The bar charts below show the annotated results both for all AIID incidents and for the incidents that meet the CSET definition of AI harm.
CSET has developed specific definitions for the underlined phrases that may differ from other organizations' definitions. As a result, other organizations may reach different conclusions about whether a particular AI incident is (or is not) an AI harm. Details about CSET's definitions of AI harm can be found here.
Every incident is classified independently by two CSET annotators. Annotations are peer-reviewed and, finally, randomly selected for quality-control review before publication. Despite this rigorous process, errors do occur, and readers are invited to report any errors they may discover while browsing.

Does the incident involve a system that meets the CSET definition for an AI system?

AI System

(by Incident Count)

If there was differential treatment, on what basis?

Differential treatment based upon a protected characteristic: This special interest intangible harm covers bias and fairness issues concerning AI. However, the bias must be associated with a group having a protected characteristic.

Basis for differential treatment

(by Incident Count)

All AIID Incidents

  • race: 8
  • sex: 8
  • nation of origin, citizenship, immigrant status: 3
  • sexual orientation or gender identity: 2
  • age: 1
  • disability: 1
  • financial means: 1
  • geography: 1
  • ideology: 1
  • religion: 1
  • none

CSET AI Harm Definition

  • race: 5
  • sex: 5
  • nation of origin, citizenship, immigrant status: 2
  • sexual orientation or gender identity: 1
  • disability: 1
  • ideology: 1
  • religion: 1
  • age
  • financial means
  • geography
  • none
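The two lists above differ only in which incidents are counted: the first covers all annotated AIID incidents, the second only the subset meeting the CSET AI harm definition. A minimal sketch of that grouping (Python), assuming a hypothetical list of annotated records rather than the actual CSET/AIID export format:

```python
from collections import Counter

# Illustrative annotated incidents; keys and values are hypothetical.
incidents = [
    {"basis": "race", "meets_cset_ai_harm": True},
    {"basis": "race", "meets_cset_ai_harm": False},
    {"basis": "sex", "meets_cset_ai_harm": True},
    {"basis": "none", "meets_cset_ai_harm": False},
]

# "All AIID Incidents" counts every annotated incident ...
all_counts = Counter(i["basis"] for i in incidents)

# ... while "CSET AI Harm Definition" counts only incidents meeting the definition.
harm_counts = Counter(i["basis"] for i in incidents if i["meets_cset_ai_harm"])

print(all_counts)   # Counter({'race': 2, 'sex': 1, 'none': 1})
print(harm_counts)  # Counter({'race': 1, 'sex': 1})
```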

In which sector did the incident occur?

Sector of Deployment

(by Incident Count)

All AIID Incidents

  • information and communication: 18
  • transportation and storage: 12
  • arts, entertainment and recreation: 6
  • administrative and support service activities: 5
  • wholesale and retail trade: 5
  • law enforcement: 4
  • professional, scientific and technical activities: 4
  • human health and social work activities: 3
  • manufacturing: 3
  • public administration: 3
  • education: 2
  • accommodation and food service activities: 1
  • other: 1

CSET AI Harm Definition

  • information and communication: 11
  • transportation and storage: 7
  • arts, entertainment and recreation: 2
  • administrative and support service activities: 2
  • law enforcement: 2
  • public administration: 2
  • wholesale and retail trade: 1
  • professional, scientific and technical activities: 1
  • accommodation and food service activities: 1
  • human health and social work activities
  • manufacturing
  • education
  • other

How autonomously did the technology operate at the time of the incident?

Autonomy is an AI's capability to operate independently. Levels of autonomy differ based on whether or not the AI makes independent decisions and the degree of human oversight. The level of autonomy does not depend on the type of input the AI receives, whether it is human- or machine-generated.
Currently, CSET is annotating three levels of autonomy.
  • Level 1: the system operates independently with no simultaneous human oversight.
  • Level 2: the system operates independently but with human oversight, where the system makes a decision or takes an action, but a human actively observes the behavior and can override the system in real-time.
  • Level 3: the system provides inputs and suggested decisions or actions to a human that actively chooses to proceed with the AI's direction.
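As a rough encoding of these levels, the sketch below (Python) maps two hypothetical yes/no annotation questions onto an enum. The question names are assumptions, not the CSET annotation interface; the value strings follow the chart labels further below.

```python
from enum import Enum

class AutonomyLevel(Enum):
    """The three autonomy levels currently annotated by CSET (illustrative encoding)."""
    AUTONOMY_1 = "fully autonomous"   # no simultaneous human oversight
    AUTONOMY_2 = "human-on-loop"      # human observes and can override in real time
    AUTONOMY_3 = "human-in-the-loop"  # system suggests; a human chooses to proceed

def classify_autonomy(human_oversight: bool, human_must_approve: bool) -> AutonomyLevel:
    """Map two hypothetical yes/no annotation questions onto the three levels."""
    if not human_oversight:
        return AutonomyLevel.AUTONOMY_1
    if human_must_approve:
        return AutonomyLevel.AUTONOMY_3
    return AutonomyLevel.AUTONOMY_2
```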

Autonomy Level

(by Incident Count)
  • Autonomy1 (fully autonomous): Does the system operate independently, without simultaneous human oversight, interaction or intervention?
  • Autonomy2 (human-on-loop): Does the system operate independently but with human oversight, where the system makes decisions or takes actions but a human actively observes the behavior and can override the system in real time?
  • Autonomy3 (human-in-the-loop): Does the system provide inputs and suggested decisions to a human that actively chooses to proceed with the AI's direction?

Did the incident occur in a domain with physical objects?

Incidents that involve physical objects are more likely to have damage or injury. However, AI systems that do not operate in a physical domain can still lead to harm.

Domain questions – Physical Objects

(by Incident Count)

Did the incident occur in the entertainment industry?

AI systems used for entertainment are less likely to involve physical objects and hence unlikely to be associated with damage, injury, or loss. Additionally, there is a lower expectation for truthful information from entertainment, making detrimental content less likely (but still possible).

Domain questions – Entertainment Industry

(by Incident Count)

Was the incident about a report, test, or study of training data instead of the AI itself?

The quality of AI training and deployment data can potentially create harm or risks in AI systems. However, an issue in the data does not necessarily mean the AI will cause harm or increase the risk for harm. It is possible that developers or users apply techniques and processes to mitigate issues with data.

Domain questions – Report, Test, or Study of data

(by Incident Count)

Was the reported system (even if AI involvement is unknown) deployed or sold to users?

Domain questions – Deployed

(by Incident Count)

Was this a test or demonstration of an AI system done by developers, producers, or researchers (versus users) in controlled conditions?

AI tests or demonstrations by developers, producers, or researchers in controlled environments are less likely to expose people, organizations, property, institutions, or the natural environment to harm. Controlled environments may include situations such as an isolated compute system, a regulatory sandbox, or an autonomous vehicle testing range.

Domain questions – Producer Test in Controlled Conditions

(by Incident Count)

Was this a test or demonstration of an AI system done by developers, producers, or researchers (versus users) in operational conditions?

Some AI systems undergo testing or demonstration in an operational environment. Testing in operational environments still occurs before the system is deployed by end-users. However, relative to controlled environments, operational environments try to closely represent real-world conditions that affect use of the AI system.

Domain questions – Producer Test in Operational Conditions

(by Incident Count)

Was this a test or demonstration of an AI system done by users in controlled conditions?

Sometimes, prior to deployment, the users will perform a test or demonstration of the AI system. The involvement of a user (versus a developer, producer, or researcher) increases the likelihood that harm can occur even if the AI system is being tested in controlled environments because a user may not be as familiar with the functionality or operation of the AI system.

Domain questions – User Test in Controlled Conditions

(by Incident Count)

Was this a test or demonstration of an AI system done by users in operational conditions?

The involvement of a user (versus a developer, producer, or researcher) increases the likelihood that harm can occur even if the AI system is being tested. Relative to controlled environments, operational environments try to closely represent real-world conditions and end-users that affect use of the AI system. Therefore, testing in an operational environment typically poses a heightened risk of harm to people, organizations, property, institutions, or the environment.

Domain questions – User Test in Operational Conditions

(by Incident Count)