You are here
More "Racism in AI Models" News, Unfortunately.
Posted by Jim Craner on March 12, 2024
Well, unfortunately, we have a couple of new entries for the bias page of our AI resource site.
AI and Dialect Prejudice
First up, this paper from the Allen Institute for AI reveals that large language models (or "LLMs") have inherited human dialect prejudice against speakers of African-American English. Obviously this has implications whenever these sorts of AI tools are used: in HR and hiring, criminal justice, finance and banking, and other areas of society. In fact, one of the researchers posted a great rundown on Twitter, beginning with this pithy and terrifying tweet:
How does this happen? Remember that modern AIs are trained by exposing them to huge quantities of human language and images. The AI companies try to remove any material that might result in bias, but that is a tall order because we're human and just chock full of biases.
AI and Name-Based Bias
So, next up is this report from Bloomberg showing that AI models have inherited bias against people with names that are common to a certain racial group. A 2004 Harvard study found that people with these names are less likely to be hired than people with names associated with the majority race, provocatively titled "Are Emily and Greg More Employable than Lakisha and Jamal? A Field Experiment on Labor Market Discrimination" (Bertrand, Mullainathan 2004). Of course the original study was conducted in the ancient days of newspaper classified ads and paper resumes, but the new study just involved running an AI script over and over.
Bloomberg was able to replicate these findings by providing fake resumes with different race-associated names to ChatGPT and asking it to evaluate them for potential hiring situations. When this experiment was repeated enough times, a clear and measurable bias against Black and Hispanic people was found. There are also some interesting visualizations showing the randomized nature of the source of experiments.
Do you have any recent examples of AI bias to share? Get in touch and let us know, we will add it to our resource page