Using AI to Determine the Political Point of View of Books

In this blog post, we discuss our experiments with AI, specifically ChatGPT, to determine the political leanings of books, with the goal of maintaining diversity in library collections without contributing to book bans. We share an initial test using both distinctly political and politically neutral titles to see whether the AI can accurately identify their perspectives, and the results are encouraging. Moving forward, we plan to challenge the AI with newer and less straightforward titles, exploring how this technology can help keep our libraries inclusive and welcoming.
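For readers who would rather script this kind of check than paste titles into the ChatGPT web interface one at a time, here is a minimal sketch using the OpenAI Python client. The model name, prompt wording, and labels are illustrative assumptions on our part, not the exact setup from our experiment.

```python
# Minimal sketch: ask an LLM to classify a book's political perspective.
# Assumes the `openai` Python package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

def classify_political_leaning(title: str, author: str) -> str:
    """Ask the model to describe and label a book's political perspective."""
    prompt = (
        f"Consider the book '{title}' by {author}. "
        "In one short paragraph, describe the political perspective it presents, "
        "then end with a single label: Left-leaning, Right-leaning, or Neutral."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption; any chat-capable model could be substituted
        messages=[
            {"role": "system",
             "content": "You are helping a librarian assess the diversity of a collection."},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Example call with a well-known political memoir
    print(classify_political_leaning("A Promised Land", "Barack Obama"))
```

A script like this makes it easy to run the same prompt over a whole list of titles and compare the labels side by side, though the results would still need human review before informing any collection decisions.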

More "Racism in AI Models" News, Unfortunately.

We review new research showing that large language models (LLMs), like GPT-4, exhibit biases against dialects and names associated with certain racial groups. One study from the Allen Institute for AI found that these models show prejudice against speakers of African-American English, potentially impacting decisions in areas like HR, criminal justice, and finance. Additionally, a Bloomberg report demonstrated a similar bias in AI evaluations of resumes bearing names common among Black and Hispanic people, suggesting that despite efforts to curb bias in AI, it remains a significant issue rooted in the data these models are trained on.

"AI Training for Library Patrons" Interest Group

Interested in delivering patron programming about AI?  From making fun images to serious discussions about misinformation, there is a lot to talk about!

We're forming a volunteer interest group for public library staff interested in collaborating on the development of a shared patron training curriculum about AI and related topics. The curriculum will be released under a Creative Commons license so it's freely reusable by libraries around the world.

Topics may include: