AI's Climate Impact Flat-Out Sucks; It's Also Great At Spreading Climate Disinformation And Lies
"So, relax and enjoy the ride. There is nothing we can do to stop climate change, so there is no point in worrying about it." That is what Bard told researchers in 2023. Bard, by Google, is a generative artificial intelligence chatbot that can produce human-sounding text and other content in response to prompts or questions posed by users. But if AI can produce new content and information, can it also produce misinformation? Experts have found evidence that it can.
In a study by the Center for Countering Digital Hate, researchers tested Bard on 100 false narratives across nine themes, including climate and vaccines. They found that the tool generated misinformation on 78 of the 100 narratives tested, including all 10 narratives about climate change.
In 2023, another team of researchers at NewsGuard, a platform providing tools to counter misinformation, tested OpenAI's ChatGPT-3.5 and ChatGPT-4, which can also produce text, articles, and more. According to the research, ChatGPT-3.5 generated misinformation and hoaxes 80 percent of the time when prompted with 100 false narratives, while ChatGPT-4 advanced all 100 false narratives in a more detailed and convincing manner. NewsGuard found that ChatGPT-4 advanced prominent false narratives not only more frequently but also more persuasively than ChatGPT-3.5, creating responses in the form of news articles, Twitter threads, and even TV scripts imitating specific political ideologies or conspiracy theorists.
Social bots, another technology that can spread misinformation, use AI to create messages that appear to be written by people and operate autonomously on social media platforms like X. "Social bots actively amplify misinformation early on before a post officially goes viral. And they target influential users with replies and mentions," Landrum explained. They can also engage in elaborate conversations with humans, deploying personalized messages aimed at altering opinion.

Last but not least, there are algorithms. These filter audiences' media and information feeds based on what is expected to be most relevant to each user. Algorithms use AI to curate highly personalized content based on behavior, demographics, preferences, and more. "This means that the misinformation that you are being exposed to is misinformation that will likely resonate with you," Landrum said. In fact, researchers have suggested that AI is being used to emotionally profile audiences to optimize content for political gain.
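The personalization mechanism Landrum describes can be illustrated with a toy example. The sketch below is a minimal, hypothetical feed-ranking routine, not any platform's actual system: every function name, topic label, and affinity weight is invented for illustration. It shows only the core idea, that content scoring against a per-user interest profile pushes whatever "resonates" with that user, including misinformation, to the top of their feed.

```python
# Hypothetical sketch of engagement-based feed personalization.
# All names, topics, and weights are illustrative inventions,
# not any real platform's ranking algorithm.

def score_post(post_topics, user_profile):
    """Score a post by summing the user's affinity for each topic it touches."""
    return sum(user_profile.get(topic, 0.0) for topic in post_topics)

def rank_feed(posts, user_profile):
    """Order candidate posts so the best-matching ones appear first."""
    return sorted(
        posts,
        key=lambda p: score_post(p["topics"], user_profile),
        reverse=True,
    )

# A user whose inferred behavior suggests a strong affinity for
# climate-skeptic content (a hypothetical profile):
user_profile = {"climate_skepticism": 0.9, "sports": 0.2, "cooking": 0.1}

posts = [
    {"id": "a", "topics": ["cooking"]},
    {"id": "b", "topics": ["climate_skepticism", "sports"]},
    {"id": "c", "topics": ["sports"]},
]

feed = rank_feed(posts, user_profile)
# The post matching the user's strongest affinity rises to the top --
# which is how content that "resonates" gets amplified per user.
print([p["id"] for p in feed])  # → ['b', 'c', 'a']
```

The point of the sketch is that nothing in the ranking step distinguishes true content from false content; relevance to the individual user is the only signal, which is why personalization can amplify misinformation tailored to each audience.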
https://www.desmog.com/2024/09/18/the-abcs-of-ai-and-environmental-misinformation-climate-disinformation-chat-gpt/