
Omaha Steve

(103,349 posts)
Tue Jul 18, 2023, 01:05 PM

News & Commentary July 13, 2023 (Part I)


https://onlabor.org/techwork-july-13-2023/

By Maddie Chang

Maddie Chang is a student at Harvard Law School.

In today’s Tech@Work, human workers behind Google’s AI chatbot call attention to poor labor conditions; and in a separate but related matter, workers who train OpenAI’s ChatGPT have filed a petition with Kenya’s National Assembly asking it to investigate OpenAI for labor abuses.

Google’s chatbot “Bard” is a generative artificial intelligence (AI) tool that produces answers to people’s questions in a conversational format, seemingly without human intervention. But behind the scenes, human workers help improve the chatbot’s answers by rating their helpfulness and flagging offensive content. Google contracts with outside companies like Appen and Accenture to provide these services. As Bloomberg reported this week, subcontracted workers are raising concerns about their working conditions and the nature of the tasks they are assigned. Bloomberg spoke with several workers who described unreasonably tight time frames and inadequate training for rating the coherence of information in chatbot responses.

Workers reported rating chatbot answers that contain high-stakes information, including dosage information for various medicines and information about state laws. “Raters,” who are paid as little as $14 per hour, noted that they lack background knowledge about whether the information presented is true and do not have enough time to check it. The guidelines workers have received state: “You do not need to perform a rigorous fact check” when evaluating the helpfulness of answers, and that ratings should be “based on your current knowledge or quick web search.” As reported in the Bloomberg piece, Google said: “Ratings are deliberately performed on a sliding scale to get more precise feedback to improve these models…such ratings don’t directly impact the output of our models and they are by no means the only way we promote accuracy.” The Alphabet Workers Union, which has organized Google employees and subcontracted workers at Appen and Accenture, condemned the way new AI-related tasks have made conditions for workers more difficult.

FULL story at link above.