
The Dark Side of Generative AI - Malware

In cybersecurity, a new player has entered the field: generative artificial intelligence (AI). This technology holds the potential to transform countless sectors, but it also poses a significant risk when misused by malicious individuals. This article delves into how generative AI is being misused by cybercriminals to create malware, the implications of this trend, and the challenges it presents to cybersecurity defenses.

Generative AI, especially large language models (LLMs) such as ChatGPT, can be misused by cybercriminals to amplify their harmful activities. The technology's capacity to produce human-like text and content has paved the way for attackers to craft sophisticated phishing lures, disseminate infected software, and generate malware that is more difficult to detect. If you've ever looked through your spam folder, you'll have noticed phishing emails riddled with poor grammar and spelling mistakes. It's tempting to assume the scammers simply don't know English well, and in many cases that is true. In other cases, though, the misspellings and awkward grammar are deliberate, intended to slip past spam filters. Either way, generative AI can produce both flawlessly written text and text with just the right amount of misspellings, jargon, and slang to appear authentic.

The creation of malware by threat actors exploiting generative AI is a rising concern. AI itself is not inherently malicious, but when misused it can aid in the development of new malware and the enhancement of existing strains, lowering the bar for less technically skilled attackers. Generative AI can craft more convincing phishing emails and more realistic deepfake videos, recordings, and images, and it enables malicious actors to subtly modify known attack code to evade detection.

In an experimental project, an AI-generated malware named BlackMamba was able to evade cybersecurity technologies such as industry-leading EDR (Endpoint Detection and Response). It's crucial to clarify that BlackMamba was only tested as a proof-of-concept and does not exist in the wild, but its existence signifies that the threat landscape for individuals and organizations will be irrevocably altered by the misuse of AI.

There have been numerous discussions and claims about the misuse of generative AI in cybercrime on the dark web. Hackers have posted about using generative AI to recreate malware strains from research publications, indicating a trend towards the adoption of this technology for malicious purposes.

The misuse of generative AI by cybercriminals presents a new set of challenges for cybersecurity teams. The technology enables adversaries to execute more sophisticated attacks, and defensive strategies must advance in step. Security teams should increase automation in investigation and response to reduce the overall risk to the organization.

The threat landscape is becoming more complex as threat actors exploit AI-enabled tools to automate their harmful activities. Ransomware attacks continue to grow, with newer strains like Mimic misusing legitimate file-search tools to identify and encrypt specific files for maximum impact.

On the defensive side, cybersecurity operations are starting to utilize generative AI to bolster their capabilities. Certain tools, such as cybersecurity assistants, are designed to automate repetitive tasks and help bridge skills gaps by decoding complex scripts, triaging and recommending actions, and explaining and contextualizing alerts for SecOps staff. However, this is not an endorsement of any specific tool, but rather an observation of the trend in the industry.
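To make the "triaging and recommending actions" idea concrete, here is a minimal sketch of the deterministic scoring step such an assistant might perform before handing an alert to an analyst. Everything here is hypothetical: the `Alert` fields, the category weights, and the score thresholds are illustrative inventions, not the schema or logic of any real SecOps product, and the LLM summarization step a real assistant would add is deliberately stubbed out.

```python
from dataclasses import dataclass

# Hypothetical alert record; real SIEM/EDR platforms expose far richer schemas.
@dataclass
class Alert:
    source: str             # e.g. "edr", "email-gateway"
    category: str           # e.g. "ransomware", "phishing", "recon"
    asset_criticality: int  # 1 (low) .. 5 (crown-jewel system)

# Illustrative weights, not derived from any real product.
CATEGORY_WEIGHT = {"ransomware": 50, "phishing": 30, "recon": 10}

def triage(alert: Alert) -> dict:
    """Score an alert and recommend a next action.

    A real AI assistant would also summarize the raw telemetry for the
    analyst (e.g. via an LLM call); that step is omitted here.
    """
    score = CATEGORY_WEIGHT.get(alert.category, 5) + alert.asset_criticality * 10
    if score >= 70:
        action = "isolate host and escalate to incident response"
    elif score >= 40:
        action = "open ticket for analyst review"
    else:
        action = "log and monitor"
    return {"score": score, "recommended_action": action}

print(triage(Alert("edr", "ransomware", 4)))
# → {'score': 90, 'recommended_action': 'isolate host and escalate to incident response'}
```

The point of the sketch is the division of labor: cheap, auditable rules handle prioritization, while the generative model is reserved for the explanation and contextualization work the paragraph above describes.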

The misuse of generative AI by malicious individuals to create malware represents a significant shift in the cybersecurity threat landscape. As generative AI continues to improve, it is crucial that cybersecurity providers and enterprises continually update their specialist knowledge and strategy to stay protected. The dual nature of generative AI as both a tool for defenders and a weapon for attackers underscores the need for a proactive and informed approach to cybersecurity in the age of AI.