NCSC Warns That AI is Already Being Used by Ransomware Gangs

By neub9

The UK’s National Cyber Security Centre (NCSC) has issued a warning in a newly published report about the impact of artificial intelligence on cyber threats. The report predicts that the volume and impact of attacks, including ransomware, will increase over the next two years as malicious actors take advantage of AI technology.

The NCSC, part of GCHQ, stated that AI has enabled unskilled hackers to carry out more effective access and information-gathering operations. This lowers the barrier to entry for novice cybercriminals, hackers-for-hire, and hacktivists.

In the past, scammers and cybercriminals were often betrayed by poor grammar and spelling mistakes in their emails and texts, particularly if they were not native speakers of the targeted language. However, publicly available AI tools have made it simple to generate believable text, images, audio, and even deepfake videos, increasing the risk of successful cyber attacks.

The report also warns that malicious hackers can use AI to identify high-value data for examination and exfiltration, amplifying the impact of security breaches. Furthermore, the NCSC predicts that by 2025, advances in generative AI and large language models (LLMs) will make it difficult for anyone to distinguish genuine emails or password reset requests from phishing, spoofing, or social engineering attempts.

Despite these risks, AI can also strengthen security resilience by improving the detection of threats such as malicious emails and phishing campaigns, making them easier to counteract.


Editor’s Note: The opinions expressed in this guest author article are solely those of the contributor and do not necessarily reflect those of Tripwire.
