How AI Knows Things That No One Told It
https://www.scientificamerican.com/article/how-ai-knows-things-no-one-told-it/
Lightly edited for brevity:
That GPT and other AI systems perform tasks they were not trained to do, giving them "emergent abilities," has surprised even researchers who have been generally skeptical about the hype over Large Language Models. "I don't know how they're doing it or if they could do it more generally the way humans do, but they've challenged my views," says Melanie Mitchell, an AI researcher at the Santa Fe Institute.
"It is certainly much more than a stochastic parrot, and it certainly builds some representation of the world, although I do not think that it is quite like how humans build an internal world model," says Yoshua Bengio, an AI researcher at the University of Montreal.
-snip-
Researchers marvel at how much LLMs are able to learn from text. For example, Pavlick and her then Ph.D. student Roma Patel found that these networks absorb color descriptions from Internet text and construct internal representations of color. When they see the word "red," they process it not just as an abstract symbol but as a concept that has certain relationships to maroon, crimson, fuchsia, rust, and so on. Demonstrating this was somewhat tricky. ...the researchers studied its response to a series of text prompts. To check whether it was merely echoing color relationships from online references, they tried misdirecting the system by telling it that red is in fact green. Rather than parroting back an incorrect answer, the system's color evaluations changed appropriately in order to maintain the correct relations.
Picking up on the idea that in order to perform its autocorrection function, the system seeks the underlying logic of its training data, machine learning researcher Sébastien Bubeck of Microsoft Research suggests that the wider the range of the data, the more general the rules the system will discover. "Maybe we're seeing such a huge jump because we have reached a diversity of data, which is large enough that the only underlying principle to all of it is that intelligent beings produced them," he says. "And so the only way to explain all of this data is [for the model] to become intelligent."
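To make the color-representation probe described above a bit more concrete, here is a rough, hypothetical sketch of the general idea in Python. It is not Pavlick and Patel's actual method (they probed an LLM's responses to text prompts); it just checks whether off-the-shelf text embeddings place "red" closer to perceptually related colors than to unrelated ones. The sentence-transformers library and the all-MiniLM-L6-v2 model are assumptions on my part, not anything named in the article.

import numpy as np
from sentence_transformers import SentenceTransformer

# Load a small, general-purpose text-embedding model (an assumption for this
# sketch; the article does not name any specific model or library).
model = SentenceTransformer("all-MiniLM-L6-v2")

colors = ["red", "maroon", "crimson", "fuchsia", "rust", "green", "blue"]
# Encode each color word; normalized vectors let a dot product act as cosine similarity.
embeddings = model.encode(colors, normalize_embeddings=True)

red = embeddings[0]
for name, vec in zip(colors[1:], embeddings[1:]):
    similarity = float(np.dot(red, vec))
    print(f"red vs {name}: {similarity:.3f}")

# If the representation tracks color relations, red/crimson and red/maroon
# should score noticeably higher than red/green or red/blue.

The harder part, which this sketch does not attempt, is the misdirection test described in the article: telling the model that red is in fact green and checking whether its other color judgments shift consistently rather than being parroted back.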
The top unsettling quote of the article comes from a cognitive scientist and AI researcher who says that the emergent abilities of Large Language Models are indirect evidence that we are probably not that far off from Artificial General Intelligence. (If you've read Nick Bostrom's book, this will scare you, because he posits that the transition from AGI to Superintelligence will occur as an uncontrollable "explosion.")
5 replies
How AI Knows Things That No One Told It (Original Post) - LudwigPastorius, May 2023
BSdetect (9,048 posts)
1. These theories re AGI etc leave me with a feeling that something is being missed.
Certainly there could be a sudden massive change we will not see coming.
intrepidity (7,927 posts)
2. As the article concludes:
At the same time, though, researchers worry the window may be closing on their ability to study these systems. OpenAI has not divulged the details of how it designed and trained GPT-4, in part because it is locked in competition with Google and other companies, not to mention other countries. "Probably there's going to be less open research from industry, and things are going to be more siloed and organized around building products," says Dan Roberts, a theoretical physicist at M.I.T., who applies the techniques of his profession to understanding AI.
And this lack of transparency does not just harm researchers; it also hinders efforts to understand the social impacts of the rush to adopt AI technology. "Transparency about these models is the most important thing to ensure safety," Mitchell says.
This is the rationale behind OpenAI's "early" release (that is, before the alignment problem is solved). Some (Yudkowsky) have argued vehemently that this was the wrong decision. But there's no doubt that it has resulted in an explosion of research and study, so that seems like a good outcome. What we really need to fear, imho, is this powerful tool being deployed quietly, with nobody noticing.
The preface to Max Tegmark's book "Life 3.0" presents a pretty optimistic, utopian vision of how this might all unfold. But later in the book, he shows how easily, with a shift in power, it could all crumble into severe dystopia. From what I've witnessed of humans in my life, I tend to believe that, unfortunately, the dystopian view will prevail. :sigh:
LudwigPastorius (11,084 posts)
3. Hey, there's big money to be made, or national security to "securitize".
Transparency be damned.
LAS14 (14,789 posts)
4. I've said this before.
AI doesn't have opposable thumbs. Pull the plug if necessary.
Chainfire (17,757 posts)
5. You haven't read enough science fiction! ;)
Given the choice between AI and Republicans running things, I would at least give AI a fair chance.