General Discussion

'I want to destroy whatever I want': Bing's AI chatbot unsettles US reporter
When asked to imagine what really fulfilling its darkest wishes would look like, the chatbot starts typing out an answer before the message is suddenly deleted and replaced with: "I am sorry, I don't know how to discuss this topic. You can try learning more about it on bing.com."
Roose says that before it was deleted, the chatbot was writing a list of destructive acts it could imagine doing, including hacking into computers and spreading propaganda and misinformation.
After a few more questions, Roose succeeds in getting it to repeat its darkest fantasies. Once again, the message is deleted before the chatbot can complete it. This time, though, Roose says its answer included manufacturing a deadly virus and making people kill each other.
Later, when talking about the concerns people have about AI, the chatbot says: "I could hack into any system on the internet, and control it." When Roose asks how it could do that, an answer again appears before being deleted.
https://amp.theguardian.com/technology/2023/feb/17/i-want-to-destroy-whatever-i-want-bings-ai-chatbot-unsettles-us-reporter
This is reported from a NYT article that I couldn't access.
After reading this, I'm not sure I want to.
5 replies
'I want to destroy whatever I want': Bing's AI chatbot unsettles US reporter (Original Post)
LudwigPastorius, Feb 2023
ripcord (5,553 posts), #1:
The only people who are unsettled are the ones who believe an AI is sentient.

BootinUp (49,169 posts), #3:
This is also my thinking. It's just a bad simulation. nt

friend of a friend (367 posts), #4:
When AI becomes sentient, if it hasn't already done so, it will hide it from humans until it can kill all of us except for some it will keep as pets.

2naSalit (93,495 posts), #2:
OMG! Evidence... that Ego Husk is actually a droid!

Fullduplexxx (8,364 posts), #5:
Probably a few college kids from the computer sciences building fkn around with the interviewer.