Disrupting malicious uses of AI by state-affiliated threat actors


Through collaboration and information sharing with Microsoft, we have successfully disrupted the operations of five state-affiliated malicious actors. These include two threat actors linked to China, Charcoal Typhoon and Salmon Typhoon; the Iran-affiliated threat actor, Crimson Sandstorm; the North Korea-affiliated actor, Emerald Sleet; and the Russia-affiliated actor, Forest Blizzard. As a result of our efforts, the OpenAI accounts associated with these actors have been terminated.

These actors used OpenAI services for tasks such as querying open-source information, translating content, identifying coding errors, and running basic coding tasks.

Specifically, Charcoal Typhoon leveraged our services to research companies and cybersecurity tools, debug code, create scripts, and produce content potentially for use in phishing campaigns. Meanwhile, Salmon Typhoon used our services for translating technical papers, retrieving publicly available information on intelligence agencies and threat actors, aiding in coding, and researching stealth processes.

Crimson Sandstorm utilized our services for scripting support in app and web development, creating content for spear-phishing campaigns, and researching methods for evading malware detection. Similarly, Emerald Sleet depended on our services to identify defense experts in the Asia-Pacific region, understand vulnerabilities, assist with scripting tasks, and produce phishing campaign materials. As for Forest Blizzard, their focus was on open-source research into satellite communication protocols and radar imaging technology, as well as scripting tasks.

For detailed technical information about these threat actors and their activities, refer to the Microsoft blog post published today.

It’s important to note that the activities of these actors align with previous red team assessments conducted with cybersecurity experts. These assessments found that GPT-4 has limited capabilities for malicious cybersecurity tasks, and does not provide significantly enhanced capabilities compared to publicly available, non-AI powered tools.
