Incident 83: Spam filters are efficient and uncontroversial. Until you look at them.
Entities
Incident Stats
CSETv0 Taxonomy Classifications
Full Description
Gmail, Yahoo, Outlook, GMX, and LaPoste email services showed racial and content-based biases when AlgorithmWatch tested their spam-filtering algorithms. AlgorithmWatch sent hundreds of emails to 10 email accounts on the listed services and found that emails were filtered into the spam folder when certain words appeared in the body. A Nigerian student's internship application was marked as spam, but when the word "Nigeria" was removed it was delivered to the inbox. The same applied to a "sex education" email, which was delivered to the inbox after the word "sex" was removed. A Joe Biden speech went through once the words "loan," "investment," and "billion" were removed.
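The mechanism behind such single-word flips is easy to reproduce with a toy model. The sketch below is purely illustrative and hypothetical: it is not AlgorithmWatch's methodology and not the filter used by any of the named providers, just a minimal bag-of-words naive Bayes classifier trained on invented example emails, showing how one token can push an otherwise legitimate message into the spam class.

# Hypothetical illustration only: a toy naive Bayes spam model trained on
# invented emails, NOT the actual filter of Gmail, Outlook, Yahoo, GMX, or LaPoste.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Invented training data: the word "nigeria" appears only in the spam examples.
spam = [
    "urgent money transfer from nigeria prince funds",
    "nigeria lottery winner claim your prize now",
    "funds transfer from nigeria bank urgent",
    "nigeria prince needs your urgent help with funds",
]
ham = [
    "internship application for your research group",
    "meeting to discuss the summer research position",
    "meeting notes and project update attached",
]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(spam + ham)
y = ["spam"] * len(spam) + ["ham"] * len(ham)
model = MultinomialNB().fit(X, y)

# The same legitimate message, with and without the word "nigeria".
with_word = "internship application from a student in nigeria"
without_word = "internship application from a student"

print(model.predict(vectorizer.transform([with_word])))     # ['spam']
print(model.predict(vectorizer.transform([without_word])))  # ['ham']

In this toy model the flip happens because "nigeria" occurs only in the spam training examples, so the classifier treats it as strong spam evidence regardless of the rest of the message. Production filters use far richer features and training data, but they remain largely unaudited from the outside, which is what AlgorithmWatch's black-box test probes.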
Short Description
Gmail, Yahoo, Outlook, GMX, and LaPoste email services showed racial and content-based biases when AlgorithmWatch tested their spam-filtering algorithms.
Severity
Unclear/unknown
Harm Distribution Basis
Race, National origin or immigrant status
Harm Type
Harm to civil liberties
AI System Description
Machine learning algorithms used to filter spam emails out of inboxes
System Developer
Gmail, Outlook, Yahoo, GMX, LaPoste
Sector of Deployment
Information and communication
Relevant AI functions
Perception, Cognition, Action
AI Techniques
Language recognition, content filtering
AI Applications
spam filtering
Named Entities
Gmail, Yahoo, Outlook, GMX, LaPoste, SpamAssassin, AlgorithmWatch
Technology Purveyor
Gmail, Yahoo, Outlook
Beginning Date
2020-10-22
Ending Date
2020-10-22
Near Miss
Unclear/unknown
Intent
Unclear
Lives Lost
No
Infrastructure Sectors
Communications
Data Inputs
inbound emails
Incident Reports
Reports Timeline
An experiment reveals that Microsoft Outlook marks messages as spam on the basis of a single word, such as “Nigeria”. Spam filters are largely unaudited and could discriminate unfairly.
In an experiment, AlgorithmWatch sent a few hundred em…
Variants
Similar Incidents