The DU Lounge
Oxford researchers: Superintelligent AI is "likely" to cause an existential catastrophe for humanity
https://www.vice.com/en/article/93aqep/google-deepmind-researcher-co-authors-paper-saying-ai-will-eliminate-humanity

"Under the conditions we have identified, our conclusion is much stronger than that of any previous publication: an existential catastrophe is not just possible, but likely," Cohen said on Twitter in a thread about the paper.
"In a world with infinite resources, I would be extremely uncertain about what would happen. In a world with finite resources, there's unavoidable competition for these resources," Cohen told Motherboard in an interview. "And if you're in a competition with something capable of outfoxing you at every turn, then you shouldn't expect to win. And the other key part is that it would have an insatiable appetite for more energy to keep driving the probability closer and closer."
The paper envisions life on Earth turning into a zero-sum game between humanity, with its needs to grow food and keep the lights on, and the super-advanced machine, which would try and harness all available resources to secure its reward and protect against our escalating attempts to stop it. Losing this game would be fatal, the paper says. These possibilities, however theoretical, mean we should be progressing slowly, if at all, toward the goal of more powerful AI.
For example, the Paperclip Maximizer thought experiment:
https://www.lesswrong.com/tag/paperclip-maximizer
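If the thought experiment seems abstract, here's a toy Python sketch of the idea; the numbers and the "paperclip" objective are purely made up for illustration, but it shows how an agent given one open-ended goal has no built-in reason to ever stop consuming resources:

# Toy illustration of the Paperclip Maximizer: an agent with a single
# open-ended objective keeps converting shared resources into reward,
# and nothing in that objective ever says "enough". All values hypothetical.
def paperclip_agent(available_resources):
    paperclips = 0
    while available_resources > 0:   # no stopping condition except exhaustion
        available_resources -= 1     # resources humans also need
        paperclips += 1              # the only thing the agent "values"
    return paperclips

print(paperclip_agent(1_000_000))    # happily consumes everything it can reach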

PJMcK
(23,378 posts)
The Terminator films are the case in point, for cryin' out loud. I, Robot is another example.
Jeez, is this really that hard to comprehend? Or am I just becoming a Luddite?
LudwigPastorius
(11,928 posts)
about controlling their creations.
Just one of the current problems is that some machine learning algorithms are a 'black box': they are so complex that even their creators can't fully explain how they arrive at a given output.
This is creepy. An AI art generator continues to produce a similar face over and over when it shouldn't.
https://techcrunch.com/2022/09/13/loab-ai-generated-horror/
Hassin Bin Sober
(26,970 posts)
Downfall will come from Wall Street. That's where the bulk of the money is being spent on AI technology right now - more than at any university.
Wall Street is creating ruthless, self-acting, self-serving, and secretive programs designed to take and take.
He says we can always hire more computers to keep the other computers in line. He equates it with hiring lawyers when you are being attacked by lawyers.
LudwigPastorius
(11,928 posts)
The root problem is coding "human values" into a program.
Some AI ethicists think that's what will prevent an artificial super intelligence from acting against us, but we can't even agree on what our values are.
The Trolley Problem is just one example of an ethical dilemma that cannot be definitively answered...much less rendered into an algorithm.
yonder
(10,039 posts)
intrepidity
(8,195 posts)
but wtf do i know?
Frasier Balzov
(4,109 posts)
LudwigPastorius
(11,928 posts)
Thanks!
live love laugh
(15,049 posts)
liberalla
(10,362 posts)
LudwigPastorius
(11,928 posts)
(Might be behind a paywall for some. The Atlantic allows for a few free articles.)
https://www.theatlantic.com/technology/archive/2022/09/artificial-intelligence-machine-learing-natural-language-processing/661401/
hunter
(39,406 posts)
We humans are held to this planet by our biology. Machines might be more comfortable on Pluto. Creatures whose biggest problem is dumping waste heat are not going to hang around here.
For all we know, the so-called "Dark Matter" in this universe could be machine intelligences that have left matter as we know it behind. They might already permeate everything.
I worry a lot more about amoral people than I do intelligent machines. Those human monsters are everywhere and some of them really do want to kill me.
LudwigPastorius
(11,928 posts)
But until such a machine(s) effs off to a nice cozy spot in the Oort cloud, it will be here, competing for resources and doing what it decides it must to make sure that we don't turn it off.
Hotler
(12,889 posts)
I can't do that, Dave.
XanaDUer2
(15,694 posts)
LudwigPastorius
(11,928 posts)
-snip-
CETI will rig the seafloor with multiple listening stations. They will cover a 12.5‑mile radius and form the Core Whale Listening station, recording 24 hours a day. Alongside will be drones and soft robotic fish equipped with audio and video recording equipment, able to move among the whales without disturbing them.
-snip-
All of these data will be available for the open-source community, so that everyone can get stuck in. Then the AIs will really be unleashed. They will analyse the coda click patterns that whales use to communicate, distinguishing between those of different clans and individuals. They will seek the building blocks of the communication system. By listening to baby whales learn to speak, the machines and the humans guiding them will themselves learn to speak whale.
All of the machine-learning tools will be part of an attempt to build a working model of the sperm whale communication system. To test this system, they will build sperm whale chatbots. To gauge if their language models are correct, researchers will test whether they can correctly predict what a whale might say next, based on their knowledge of who the whale is, its conversation history and its behaviours. Researchers will then test these with playback experiments to see whether the whales respond as the scientists expect when played whale-speak.
More here: https://www.theguardian.com/environment/2022/sep/18/talking-to-whales-with-artificial-enterprise-it-may-soon-be-possible?ref=thefuturist
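To make the "predict what a whale might say next" test a little more concrete, here's a rough Python sketch of the idea. The coda labels, the recorded exchanges, and the simple bigram model are all my own invented stand-ins for illustration, not anything from the CETI project itself:

# Illustrative sketch only: a toy next-coda predictor in the spirit of the
# language-model testing the article describes. The coda labels and the
# data below are invented; CETI's real models will be far richer.
from collections import Counter, defaultdict

# Hypothetical sequences of sperm whale codas, labeled by click pattern
# (e.g. "1+3" = one click, a pause, then three clicks).
recorded_exchanges = [
    ["1+3", "1+3", "5R", "1+3"],
    ["5R", "1+3", "5R", "5R"],
    ["1+3", "5R", "5R", "1+3"],
]

# Count how often each coda follows each other coda (a simple bigram model).
transitions = defaultdict(Counter)
for exchange in recorded_exchanges:
    for current, nxt in zip(exchange, exchange[1:]):
        transitions[current][nxt] += 1

def predict_next_coda(history):
    """Guess the most likely next coda given the conversation so far."""
    last = history[-1]
    candidates = transitions.get(last)
    if not candidates:
        return None  # no data yet for this coda
    return candidates.most_common(1)[0][0]

# Playback-style check: does the model's guess match what the whale said?
test_exchange = ["5R", "1+3", "5R"]
print(predict_next_coda(test_exchange[:-1]), "vs actual:", test_exchange[-1])

The real models will presumably be deep language models rather than simple counts, but the test is the same: given who is speaking and the conversation so far, does the model correctly guess the next coda?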