More "Racism in AI Models" News, Unfortunately.

We review new research showing that large language models (LLMs), like GPT-4, exhibit biases against dialects and names associated with certain racial groups. One study from the Allen Institute for AI found that these models have a prejudice against speakers of African-American English, potentially impacting decisions in areas like HR, criminal justice, and finance. Additionally, a Bloomberg report demonstrated a similar bias in AI evaluations of resumes with names common to Black and Hispanic people, suggesting that despite efforts to curb bias in AI, it remains a significant issue rooted in the data they're trained on.

Algorithms: Avoiding the Implementation of Institutional Biases

Computer algorithms, the logic and code that power automated decision-making programs, increasingly dominate many aspects of modern society. There are already many examples of institutional biases – including ideological bias, racism, sexism, ableism – being solidified in algorithms, causing harm to already underprivileged populations. This article explores library-specific and society-wide examples as well as efforts to prevent the implementation of these biases in the future.