Welcome to DU!
The truly grassroots left-of-center political community where regular people, not algorithms, drive the discussions and set the standards.
This might be of interest? OpenAI
My son, a tech geek, is at a big conference in New Orleans and just texted me this tidbit:
"OpenAI rolled out ChatGPT this week. It's been benchmarked on standard reasoning tests, and was assessed to have an IQ of 83, so on the lower side of average in the US. A computer that has absolutely no personal experience with anything, and has only read the writings of others' experiences and reasoning, is at least as capable at logical reasoning as the lower 37% of the American population. Let that sink in..."
Wow. If they can develop an open AI at the genius level, then we are in trouble. At what point does it begin to develop a "personality"? If the bot is at a genius level, it could figure out a way to override a failsafe mechanism.
Movies have been made about this.
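A quick sanity check on the percentile arithmetic in the quoted text (my own sketch, not from the thread): assuming the conventional IQ scale with mean 100 and standard deviation 15, a score of 83 lands near the 13th percentile, so the quoted "lower 37%" figure would have to come from some other scale or benchmark.

```python
from statistics import NormalDist

# Assumption: the conventional IQ scale (mean 100, standard deviation 15).
# The quoted post does not say which scale or benchmark was used.
iq = NormalDist(mu=100, sigma=15)

below_83 = iq.cdf(83)  # fraction of the population expected to score below 83
print(f"Share of population below IQ 83: {below_83:.1%}")  # about 13%
```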
5 replies, 5130 views
This might be of interest? OpenAI (Original Post)
Duppers
Dec 2022
OP
intrepidity (7,892 posts)
1. It is inevitable that AI will surpass us in intellect, imho
I've never heard a convincing argument to the contrary.
Duppers (28,246 posts)
3. Exactly!
It will happen and sooner than most think.
👍
(I realize this is an old post), but there are some major shortcomings:
AI is great at summarizing, but the "breakthroughs" are mostly interpolations within existing knowledge, not extrapolations beyond it.
ChatGPT's logic is very circular (see the note above about interpolations), which tends to produce an artificial and often incorrect belief model.
I think AI suffers from the 80/20 rule: the easy 80% is being met; the hard 20% is not.
keithbvadu2 (40,120 posts)
2. IQ of 83... Will it learn and become smarter?
usonian (13,836 posts)
4. Lawsuit Takes Aim at the Way A.I. Is Built
https://www.nytimes.com/2022/11/23/technology/copilot-microsoft-ai-lawsuit.html
Not only that, but instances of A.I. (references left to reader) or Machine Learning are trained on datasets that bake in prejudices of one sort or another.
It's "kind of like" building a Frankenstein monster by trial and error, not knowing if the brain came from Max Delbrück or (use your imagination).
P.S. there was a real Max Delbrück, Nobel Prize winner in 1969.
https://www.nobelprize.org/prizes/medicine/1969/delbruck/biographical/
Like many cutting-edge A.I. technologies, Copilot developed its skills by analyzing vast amounts of data. In this case, it relied on billions of lines of computer code posted to the internet. Mr. Butterick, 52, equates this process to piracy, because the system does not acknowledge its debt to existing work. His lawsuit claims that Microsoft and its collaborators violated the legal rights of millions of programmers who spent years writing the original code.
The suit is believed to be the first legal attack on a design technique called A.I. training, which is a way of building artificial intelligence that is poised to remake the tech industry. In recent years, many artists, writers, pundits and privacy activists have complained that companies are training their A.I. systems using data that does not belong to them.