Press "Enter" to skip to content

OpenAI Thwarts Multiple Covert Influence Operations Exploiting Its AI Models (2024)


Overview

OpenAI announced on Thursday that it had disrupted five covert influence operations that misused its AI models for deceptive activity online. The operations, originating from Russia, China, Iran, and Israel, sought to manipulate public opinion and sway political outcomes under false pretenses. OpenAI emphasized that, as of May 2024, none of these campaigns had significantly increased their audience engagement or reach by using its services. The company said it worked with stakeholders across the tech industry, civil society, and governments to counter these actors.

Rising Concerns Amidst Global Elections

This disclosure from OpenAI comes at a time of heightened concern over the role of generative AI in upcoming elections worldwide, including those in the United States. The report highlights how these influence operations have harnessed generative AI to produce text and images at unprecedented volumes and to generate fake engagement through AI-created comments on social media posts.

Details of the Influence Operations

Ben Nimmo, the principal investigator on OpenAI’s Intelligence and Investigations team, discussed the findings with the media, as reported by Bloomberg. He noted the importance of addressing questions about the potential impact of AI on influence operations.

Russian Operations

One notable Russian operation, dubbed “Doppelganger,” used OpenAI’s models to create headlines, convert news articles into Facebook posts, and generate comments in multiple languages, all aimed at undermining support for Ukraine. Another Russian group used OpenAI’s models to debug code for a Telegram bot that posted brief political comments in English and Russian, targeting audiences in Ukraine, Moldova, the US, and the Baltic states.

Chinese Network

The Chinese network known as “Spamouflage” used OpenAI’s models to analyze social media activity and generate multilingual text-based content across various platforms, including Facebook and Instagram.

Iranian Efforts

The Iranian “International Union of Virtual Media” likewise used OpenAI’s models to produce content in multiple languages, underscoring the geographic spread and sophistication of these influence operations.

Industry-Wide Comparisons

OpenAI’s revelations are akin to periodic disclosures by other tech giants. For example, Meta recently reported on coordinated inauthentic behavior, detailing how an Israeli marketing firm employed fake Facebook accounts to conduct an influence campaign targeting individuals in the US and Canada.

By exposing these operations, OpenAI aims to shed light on the evolving tactics of influence operations and the critical role of AI in both enabling and combating such activities.
