OpenAI takes down covert operations tied to China and other countries
Briefly

According to OpenAI researchers, Chinese propagandists are leveraging ChatGPT to generate social media posts and internal documents to influence public opinion and conduct surveillance. Ben Nimmo noted a rise in covert operations: over the past three months, OpenAI disrupted 10 operations that used its AI tools maliciously, four of which were linked to China. These operations generated content in multiple languages on a wide range of subjects to create a misleading impression of organic engagement online. One notable operation, "Sneer Review," exemplified these tactics across various platforms, raising concerns about digital misinformation.
"What we're seeing from China is a growing range of covert operations using a growing range of tactics," said Ben Nimmo, highlighting the evolving strategies of influence operations.
"In the last three months, OpenAI says it disrupted 10 operations using its AI tools in malicious ways, and banned accounts connected to them," demonstrating proactive measures against misuse.
One Chinese operation, dubbed "Sneer Review," used ChatGPT to generate various posts across platforms like TikTok and Reddit, creating a false impression of organic engagement.
The operations targeted many different countries and topics, combining elements of influence operations, social engineering, and surveillance, a mix that illustrates the complexity of modern misinformation campaigns.
Read at www.npr.org