CSETv1 Charts
The CSET AI Harm Taxonomy for AIID is the second edition of CSET's incident taxonomy. It characterizes the harms, the entities, and the technologies involved in AI incidents, and the circumstances of their occurrence. The charts below show selected fields from the CSET AI Harm Taxonomy for AIID. Details about each field can be found here. Brief descriptions of each field are also provided above each chart.
The taxonomy provides CSET's definition of AI harm.
AI harm has four elements which, once properly defined, enable the identification of AI harm. These key components serve to distinguish harm from non-harm and AI harm from non-AI harm. For there to be an AI harm, there must be:
- 1) an entity that experienced
- 2) a harm event or harm issue that
- 3) can be directly linked to a consequence of the behavior of
- 4) an AI system.
Not all incidents in AIID meet this definition of AI harm. The bar charts below show the annotated results both for all AIID incidents and for the incidents that meet CSET's definition of AI harm.
CSET has developed specific definitions for the underlined phrases that may differ from the definitions of other organizations. As a result, other organizations may make different assessments of whether any particular AI incident is (or is not) an AI harm. Details about CSET's definitions for AI harm can be found here.
Each incident is independently classified by two CSET annotators. Annotations are peer-reviewed and finally randomly selected for quality control before publication. Despite this rigorous process, errors do occur, and readers are invited to report any errors they may discover while browsing.
Does the incident involve a system that meets the CSET definition for an AI system?
AI System
(by Incident Count)

If there was differential treatment, on what basis?
Differential treatment based upon a protected characteristic: this special interest intangible harm covers bias and fairness issues concerning AI. However, the bias must be associated with a group having a protected characteristic.
Basis for differential treatment
(by Incident Count)

All AIID Incidents
Category | Count |
---|---|
race | 8 |
sex | 8 |
nation of origin, citizenship, immigrant status | 3 |
sexual orientation or gender identity | 2 |
age | 1 |
disability | 1 |
financial means | 1 |
geography | 1 |
ideology | 1 |
religion | 1 |
none | |
CSET AI Harm Definition
Category | Count |
---|---|
race | 5 |
sex | 5 |
nation of origin, citizenship, immigrant status | 2 |
sexual orientation or gender identity | 1 |
disability | 1 |
ideology | 1 |
religion | 1 |
age | |
financial means | |
geography | |
none | |
In which sector did the incident occur?
Sector of Deployment
(by Incident Count)

All AIID Incidents
Category | Count |
---|---|
information and communication | 18 |
transportation and storage | 12 |
arts, entertainment and recreation | 6 |
administrative and support service activities | 5 |
wholesale and retail trade | 5 |
law enforcement | 4 |
professional, scientific and technical activities | 4 |
human health and social work activities | 3 |
manufacturing | 3 |
public administration | 3 |
education | 2 |
accommodation and food service activities | 1 |
other | 1 |
CSET AI Harm Definition
Category | Count |
---|---|
information and communication | 11 |
transportation and storage | 7 |
arts, entertainment and recreation | 2 |
administrative and support service activities | 2 |
law enforcement | 2 |
public administration | 2 |
wholesale and retail trade | 1 |
professional, scientific and technical activities | 1 |
accommodation and food service activities | 1 |
human health and social work activities | |
manufacturing | |
education | |
other | |
How autonomously did the technology operate at the time of the incident?
Autonomy is an AI's capability to operate independently. Levels of autonomy differ based on whether or not the AI makes independent decisions and the degree of human oversight. The level of autonomy does not depend on the type of input the AI receives, whether it is human- or machine-generated.
Currently, CSET is annotating three levels of autonomy.
- Level 1: the system operates independently with no simultaneous human oversight.
- Level 2: the system operates independently but with human oversight, where the system makes a decision or takes an action, but a human actively observes the behavior and can override the system in real-time.
- Level 3: the system provides inputs and suggested decisions or actions to a human that actively chooses to proceed with the AI's direction.
Autonomy Level
(by Incident Count)

- Autonomy1 (fully autonomous): does the system operate independently, without simultaneous human oversight, interaction or intervention?
- Autonomy2 (human-on-loop): does the system operate independently but with human oversight, where the system makes decisions or takes actions but a human actively observes the behavior and can override the system in real time?
- Autonomy3 (human-in-the-loop): does the system provide inputs and suggested decisions to a human that actively chooses to proceed with the AI's direction?