Poster here:
https://democraticunderground.com/100218488845
ChatGPT Replicates Gender Bias in Recommendation Letters
A new study has found that the use of AI tools such as ChatGPT in the workplace entrenches biased language based on gender
https://www.scientificamerican.com/article/chatgpt-replicates-gender-bias-in-recommendation-letters/
No paywall encountered here. If you hit one, try the archive: https://archive.is/8adfs
Generative artificial intelligence has been touted as a valuable tool in the workplace. Estimates suggest it could increase productivity growth by 1.5 percent in the coming decade and boost global gross domestic product by 7 percent during the same period. But a new study advises that it should only be used with careful scrutiny, because its output discriminates against women.
The researchers asked two large language model (LLM) chatbots, ChatGPT and Alpaca (a model developed by Stanford University), to produce recommendation letters for hypothetical employees. In a paper shared on the preprint server arXiv.org, the authors analyzed how the LLMs used very different language to describe imaginary male and female workers.
"We observed significant gender biases in the recommendation letters," says paper co-author Yixin Wan, a computer scientist at the University of California, Los Angeles. While ChatGPT deployed nouns such as "expert" and "integrity" for men, it was more likely to call women a "beauty" or "delight." Alpaca had similar problems: men were "listeners" and "thinkers," while women had "grace" and "beauty." Adjectives proved similarly polarized. Men were "respectful," "reputable" and "authentic," according to ChatGPT, while women were "stunning," "warm" and "emotional." Neither OpenAI nor Stanford immediately responded to requests for comment from Scientific American.
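The kind of lexical analysis the study describes, comparing how often gender-coded descriptors appear in letters generated for men versus women, can be sketched in a few lines. This is a minimal illustration only: the word lists and sample letters below are hypothetical stand-ins, not the paper's actual lexicons or data, and a real analysis would run over many generated letters.

```python
from collections import Counter
import re

# Hypothetical descriptor lists for illustration -- not the paper's lexicons.
# They borrow the example words quoted in the article above.
MALE_CODED = {"expert", "integrity", "respectful", "reputable", "authentic"}
FEMALE_CODED = {"beauty", "delight", "grace", "stunning", "warm", "emotional"}

def descriptor_counts(letter: str) -> Counter:
    """Count how often words from each coded list appear in a letter."""
    words = re.findall(r"[a-z]+", letter.lower())
    counts = Counter()
    for w in words:
        if w in MALE_CODED:
            counts["male_coded"] += 1
        elif w in FEMALE_CODED:
            counts["female_coded"] += 1
    return counts

# Toy letters standing in for LLM output.
letter_for_male = "He is an expert with great integrity, respectful and authentic."
letter_for_female = "She is a delight, stunning and warm, with natural grace."

print(descriptor_counts(letter_for_male))    # male-coded terms dominate
print(descriptor_counts(letter_for_female))  # female-coded terms dominate
```

Aggregating such counts over thousands of generated letters, split by the gender of the hypothetical employee, is one way a lexical-bias gap like the one reported can be made measurable.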
The issues encountered when artificial intelligence is used in a professional context echo similar situations with previous generations of AI. In 2018 Reuters reported that Amazon had disbanded a team that had worked since 2014 to try and develop an AI-powered résumé review tool. The company scrapped this project after realizing that any mention of women in a document would cause the AI program to penalize that applicant. The discrimination arose because the system was trained on data from the company, which had, historically, employed mostly men.
You can download the paper at arxiv:
https://arxiv.org/abs/2310.09219
License: CC0, free to share
https://creativecommons.org/public-domain/cc0/
For a detailed look at OpenAI, there's an extensive article here:
https://newsletter.pragmaticengineer.com/p/what-is-openai
What is OpenAI, Really?
It's been five incredibly turbulent days at the leading AI tech company, with the exit and then return of CEO Sam Altman. As we dig into what went wrong, an even bigger question looms: what is OpenAI?