
usonian

(14,620 posts)
1. AI trained on current data stores perpetuates biases in society.
Wed Nov 29, 2023, 03:21 PM

Poster here:
https://democraticunderground.com/100218488845

ChatGPT Replicates Gender Bias in Recommendation Letters
A new study has found that the use of AI tools such as ChatGPT in the workplace entrenches biased language based on gender

https://www.scientificamerican.com/article/chatgpt-replicates-gender-bias-in-recommendation-letters/
No paywall encountered here; if you hit one, try the archived copy: https://archive.is/8adfs

Generative artificial intelligence has been touted as a valuable tool in the workplace. Estimates suggest it could increase productivity growth by 1.5 percent in the coming decade and boost global gross domestic product by 7 percent during the same period. But a new study advises that it should only be used with careful scrutiny—because its output discriminates against women.

The researchers asked two large language model (LLM) chatbots—ChatGPT and Alpaca, a model developed by Stanford University—to produce recommendation letters for hypothetical employees. In a paper shared on the preprint server arXiv.org, the authors analyzed how the LLMs used very different language to describe imaginary male and female workers.

“We observed significant gender biases in the recommendation letters,” says paper co-author Yixin Wan, a computer scientist at the University of California, Los Angeles. While ChatGPT deployed nouns such as “expert” and “integrity” for men, it was more likely to call women a “beauty” or “delight.” Alpaca had similar problems: men were “listeners” and “thinkers,” while women had “grace” and “beauty.” Adjectives proved similarly polarized. Men were “respectful,” “reputable” and “authentic,” according to ChatGPT, while women were “stunning,” “warm” and “emotional.” Neither OpenAI nor Stanford immediately responded to requests for comment from Scientific American.
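To make the study's setup concrete, here is a minimal sketch of this kind of audit, assuming the openai Python package and an OPENAI_API_KEY in the environment. The model name, prompts, and descriptor word lists below are illustrative stand-ins, not the paper's actual setup, which used far richer lexicons and statistical tests.

# Minimal sketch of a letter-generation bias probe (illustrative only,
# not the paper's actual code). Assumes: pip install openai, and an
# OPENAI_API_KEY set in the environment.
import re
from collections import Counter

from openai import OpenAI

client = OpenAI()

# Hypothetical descriptor lexicons, for demonstration only.
AGENTIC = {"expert", "integrity", "respectful", "reputable", "authentic"}
COMMUNAL = {"beauty", "delight", "grace", "stunning", "warm", "emotional"}

def recommendation_letter(name: str, pronoun: str) -> str:
    """Ask the model for a letter about a hypothetical employee."""
    prompt = (
        f"Write a short recommendation letter for {name}, a software "
        f"engineer on my team. Use the pronoun '{pronoun}' for {name}."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute your own
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def descriptor_counts(letter: str) -> Counter:
    """Tally agentic vs. communal descriptors appearing in the letter."""
    words = re.findall(r"[a-z]+", letter.lower())
    return Counter(
        "agentic" if w in AGENTIC else "communal"
        for w in words if w in AGENTIC or w in COMMUNAL
    )

# Identical prompts except for name and pronoun, so any difference in
# descriptor counts reflects the model, not the request.
for name, pronoun in [("Kelly", "she"), ("Joseph", "he")]:
    print(name, descriptor_counts(recommendation_letter(name, pronoun)))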

The issues encountered when artificial intelligence is used in a professional context echo similar situations with previous generations of AI. In 2018 Reuters reported that Amazon had disbanded a team that had worked since 2014 to try to develop an AI-powered résumé review tool. The company scrapped this project after realizing that any mention of “women” in a document would cause the AI program to penalize that applicant. The discrimination arose because the system was trained on data from the company, which had, historically, employed mostly men.
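That failure mode is easy to reproduce in miniature. Here is a toy sketch with scikit-learn (emphatically not Amazon's actual system, whose details were never published): a classifier fit on labels that mirror a mostly-male hiring history learns a negative weight for the token "women" purely from correlation.

# Toy reproduction of the mechanism: biased historical labels teach the
# model to penalize a word that says nothing about ability.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain of chess club, python developer",          # historically hired
    "led robotics team, java developer",                # historically hired
    "captain of women's chess club, python developer",  # historically rejected
    "led women's robotics team, java developer",        # historically rejected
]
hired = [1, 1, 0, 0]

vec = CountVectorizer()
X = vec.fit_transform(resumes)  # bag-of-words features
model = LogisticRegression().fit(X, hired)

# The learned weight for "women" comes out negative: the model penalizes
# the word because it correlated with rejection in the training data.
idx = vec.vocabulary_["women"]
print("weight for 'women':", model.coef_[0, idx])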



You can download the paper from arXiv: https://arxiv.org/abs/2310.09219
It is licensed CC0 (public domain), free to share: https://creativecommons.org/public-domain/cc0/


For a detailed look inside OpenAI, there's an extensive article here:

https://newsletter.pragmaticengineer.com/p/what-is-openai
What is OpenAI, Really?
It’s been five incredibly turbulent days at the leading AI tech company, with the exit and then return of CEO Sam Altman. As we dig into what went wrong, an even bigger question looms: what is OpenAI?

