How global threat actors are weaponizing AI now, according to OpenAI
Briefly

As generative AI technology expands, so do concerns about its potential misuse. OpenAI's latest threat report illustrates the risks, detailing ten cases, including four suspected to originate from China, in which bad actors leveraged AI for misinformation and manipulation, notably around geostrategic topics. OpenAI emphasized that its ongoing investigations into these abuses help refine its defenses against such threats. The dual nature of AI as a tool for creativity and a vector for misinformation remains a critical discussion for policymakers and tech developers.
For each of the ten cases in the new report, OpenAI described how it detected and addressed the problem.
In one scheme, a 'main account' would publish a post, then other accounts would follow with comments, all designed to create an illusion of authentic human engagement.
Read at ZDNET