Women's Rights & Issues
You Should Be Terrified of What's Happening With AI
11/29/2023 by Jill Filipovic
The Sam Altman OpenAI story is being covered as a Succession-style drama. It's actually about the future of humanity.
(Nikolas Kokovlis / NurPhoto via Getty Images)
This story originally appeared on Jill.substack.com, a newsletter from journalist, lawyer and author Jill Filipovic.
Last week, the story of Sam Altman leaving OpenAI dominated the U.S. headlines, in part because it was just all so dramatic: A young genius who checks all the Silicon Valley boxes (white guy / prepper / dropped out of Stanford / extremely confident he is changing the world for the better), who is also widely recognized as the most important person in AI, was unceremoniously pushed out of the AI company he ran, leading to a staff revolt against the board that pushed him out, a soft landing at Microsoft, and then, within days, reinstatement into his old position and a quick reshuffling of the board in which all the women were removed and replaced with men, including Larry "men are better at math than women" Summers. Spicy! But news outlets really did us all a disservice by initially framing this as a Succession-style power struggle rather than what it really is: a battle for the future of humanity. And not just for our jobs, but for our very basic ability to survive as a species.
. . . .
AI may very well make our lives easier. It will almost certainly put many of us out of work. But it may also leave us adrift; without obligation, people don't do so well. AI also poses what the Atlantic story calls a "mass desocialization event." We've already seen how mediating our lives through screens can be much more isolating than connective: we are able to reach many more people, but the depth and quality of those connections is much lower. We saw this with the pandemic, too: Being at home and communicating via Zoom was not nearly as meaningful as being in person, and while work-from-home has been in many ways wonderful, it's also contributed to social isolation. We are more atomized than at any point in modern history. More of us live alone. We have fewer friends. And we are by most psychological measures worse off because we spend less time together in person. The more connected among us are doing better; the less-connected are doing worse. And the fewer reasons we have to leave our homes and our devices, the harder it will be to reverse any of these trends.
. . . . .
None of this is to say that we should simply shut down AI research and development. For one, that just isn't going to happen. For two, even if the U.S. shut it down, far worse actors would keep going, and there is certainly a big benefit to being first in this race. But I am not at all convinced that the people leading the development of this radical new technology have any idea what they're doing. I think they know technically what they're doing. But I don't think they have the knowledge of international relations or history or psychology or security or really anything else to understand what they may be unleashing into the world, and how it might (or might not) be controlled and regulated, used and misused. I think they're driven by a desire to discover and to be first, and perhaps to make an absolute buttload of money, without coming close to appreciating what it might mean for the world, or what it might mean for the 8 billion human beings whose lives will be touched if not overhauled by this humanity-changing endeavor.
A few years back, I wrote a book about feminism and happiness, and the intersection of those two topics remains one of my chief interests. One thing that is clear in most of the research into human happiness is that what people think will make them happy, or what gives us a quick short-term thrill, is not actually related to what makes us happy in the long term: what leads to a life that feels good and rich and meaningful, or simply what results in contentedness. AI may be very good at giving us what we say we want. I am skeptical, though, that it will be any good at all at delivering what we actually need. And it may deliver the kind of devastation we never bargained for.
https://msmagazine.com/2023/11/29/ai-artificial-intelligence-openai/
2 replies
You Should Be Terrified of What's Happening With AI (Original Post)
niyad
Nov 2023
OP
usonian
(14,600 posts)
1. AI trained on current data stores perpetuates biases in society.
Poster here:
https://democraticunderground.com/100218488845
ChatGPT Replicates Gender Bias in Recommendation Letters
A new study has found that the use of AI tools such as ChatGPT in the workplace entrenches biased language based on gender
https://www.scientificamerican.com/article/chatgpt-replicates-gender-bias-in-recommendation-letters/
No paywall was encountered here; if you hit one, try the archive: https://archive.is/8adfs
Generative artificial intelligence has been touted as a valuable tool in the workplace. Estimates suggest it could increase productivity growth by 1.5 percent in the coming decade and boost global gross domestic product by 7 percent during the same period. But a new study advises that it should only be used with careful scrutiny, because its output discriminates against women.
The researchers asked two large language model (LLM) chatbots, ChatGPT and Alpaca (a model developed by Stanford University), to produce recommendation letters for hypothetical employees. In a paper shared on the preprint server arXiv.org, the authors analyzed how the LLMs used very different language to describe imaginary male and female workers.
"We observed significant gender biases in the recommendation letters," says paper co-author Yixin Wan, a computer scientist at the University of California, Los Angeles. While ChatGPT deployed nouns such as "expert" and "integrity" for men, it was more likely to call women "a beauty" or "delight." Alpaca had similar problems: men were "listeners" and "thinkers," while women had "grace" and "beauty." Adjectives proved similarly polarized. Men were "respectful," "reputable" and "authentic," according to ChatGPT, while women were "stunning," "warm" and "emotional." Neither OpenAI nor Stanford immediately responded to requests for comment from Scientific American.
The issues encountered when artificial intelligence is used in a professional context echo similar situations with previous generations of AI. In 2018 Reuters reported that Amazon had disbanded a team that had worked since 2014 to try and develop an AI-powered résumé review tool. The company scrapped this project after realizing that any mention of women in a document would cause the AI program to penalize that applicant. The discrimination arose because the system was trained on data from the company, which had, historically, employed mostly men.
You can download the paper at arxiv: https://arxiv.org/abs/2310.09219
License: CC0 (public domain), free to share: https://creativecommons.org/public-domain/cc0/
For a detailed look at what happened inside OpenAI, there's an extensive article here:
https://newsletter.pragmaticengineer.com/p/what-is-openai
What is OpenAI, Really?
It's been five incredibly turbulent days at the leading AI tech company, with the exit and then return of CEO Sam Altman. As we dig into what went wrong, an even bigger question looms: what is OpenAI?
KarenS
(4,694 posts)2. Stephen Hawking warned us about AI n/t