
OpenAI Disrupts Covert Influence Operations by Russia, China, Iran, and Israel Using AI Tools

OpenAI, a leading artificial intelligence (AI) company, recently announced that it had disrupted five covert influence operations originating in Russia, China, Iran, and Israel. These operations were using OpenAI’s AI tools to manipulate public opinion and attempt to shape political outcomes online. Over the past three months, the actors behind them had used OpenAI’s tools to generate comments, produce articles, and create fake names and bios for social media accounts.

According to OpenAI, these covert influence operations focused on a range of ongoing issues, including criticism of the Chinese government, U.S. and European politics, Russia’s invasion of Ukraine, and the conflict in Gaza. Despite their efforts, however, none of the operations achieved a significant increase in audience engagement.

OpenAI identified several trends in how these actors used its AI tools. The actors relied on the tools for content generation, often mixing AI-generated material with older, manually produced content. They also faked engagement by generating replies to their own social media posts, and they used the tools for productivity gains, such as summarizing social media posts.

One of the disrupted operations was Spamouflage, a pro-Beijing disinformation and propaganda network based in China. This network used OpenAI’s models to seek advice about social media activity, research news and current events, and generate content in multiple languages, including Chinese, English, Japanese, and Korean. The content it produced often praised the Chinese communist regime, criticized the U.S. government, and targeted Chinese dissidents.

OpenAI also uncovered two Russian operations. One, known as Doppelganger, used OpenAI’s tools to generate comments in multiple languages and post them across various platforms. The other, dubbed Bad Grammar, operated primarily on Telegram and focused on Ukraine, Moldova, the Baltic States, and the United States. Bad Grammar used OpenAI’s tools to debug code for a Telegram bot that automatically posted content to the platform.
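To make the mechanics concrete: a Telegram bot of the kind described is typically a small script that pushes messages to a channel through Telegram’s public Bot API. The sketch below is purely illustrative and is not the code OpenAI reported; the token, channel name, and message queue are hypothetical placeholders.

```python
import time

import requests  # third-party: pip install requests

# Hypothetical placeholders -- not values from the reported operation.
BOT_TOKEN = "123456:ABC-EXAMPLE-TOKEN"  # issued by Telegram's @BotFather
CHANNEL = "@example_channel"            # channel the bot is an admin of

SEND_URL = f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage"


def post_message(text: str) -> None:
    """Post one text message to the channel via the Bot API's sendMessage method."""
    response = requests.post(SEND_URL, data={"chat_id": CHANNEL, "text": text})
    response.raise_for_status()  # surface HTTP errors instead of failing silently


if __name__ == "__main__":
    # A simple queue of messages posted on a fixed schedule.
    queue = ["First scheduled post", "Second scheduled post"]
    for message in queue:
        post_message(message)
        time.sleep(60)  # wait a minute between posts
```

In an operation like the one described, the message queue would presumably be filled with generated comments rather than fixed strings, which is where AI text-generation tools would come in.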

Additionally, OpenAI found an operation from Israel linked to STOIC, a Tel Aviv-based political marketing firm, as well as an operation from Iran. Both used OpenAI’s ChatGPT to generate content: the Iranian operation published articles on a website associated with the threat actor, while the Israeli operation posted comments across multiple platforms.

In a separate development, Meta, Facebook’s parent company, released a quarterly report revealing that it had disrupted six covert influence operations, including one from Iran and another from the Israeli firm STOIC. The report indicated that “likely AI-generated” deceptive content had been posted on the platform.

OpenAI gained global attention with the launch of its chatbot, ChatGPT, in November 2022. The chatbot quickly became popular, impressing users with its ability to answer questions and engage in conversations on a wide array of topics.

Last year, OpenAI faced controversy when its board fired CEO Sam Altman, leading to significant backlash and ultimately resulting in Altman’s reinstatement and the formation of a new board.

OpenAI’s disclosures highlight the growing sophistication and prevalence of AI-powered disinformation campaigns, and they show how readily AI tools can be turned toward manipulating public opinion and shaping political outcomes. By identifying and disrupting these operations, OpenAI is playing a significant role in safeguarding the integrity of online discourse and limiting the spread of misinformation.
