Researchers at IBM develop AI-powered malware
Artificial intelligence has been touted as a solution for detecting and combating malware and for stopping cyber attacks before they can do damage. But the same technology can be leveraged by attackers to power a new generation of malware that evades the best security defenses and infects a network, or launches an attack only when a camera detects the intended target's face. To demonstrate this, researchers at IBM built DeepLocker, a new breed of highly targeted and evasive attack tool powered by AI, which conceals its intent until it reaches a specific victim.
According to the IBM researchers, DeepLocker stays under the radar and executes its malicious code only once its AI model identifies the target through indicators such as facial recognition, geolocation, or voice recognition, the opposite of the traditional "spray and pray" approach. The researchers believe this kind of stealthy, AI-powered malware is particularly dangerous because, like nation-state malware, it could infect millions of systems without being detected.
The malware can hide itself in "normal" applications, such as video-conferencing software, to evade detection by most antivirus and malware scanners until it reaches its specific target, which it identifies through voice recognition, facial recognition, geolocation, and other system-level features. DeepLocker differs from other malware in that it uses an AI model as its trigger condition, making the trigger almost impossible to reverse engineer: the malicious payload is unlocked only if the intended target is reached.
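IBM has not released DeepLocker's code, but the core idea of an AI-derived trigger can be sketched in a few lines of Python: the payload is shipped encrypted, and the decryption key is derived from a recognition model's output for the target, so the key itself never appears anywhere in the binary. Everything below is illustrative and assumed, not IBM's implementation: a hash stands in for a real face-recognition network, and a toy XOR cipher stands in for real encryption.

```python
import hashlib

def face_embedding(image_bytes: bytes) -> bytes:
    # Stand-in for a face-recognition model. A real attack would run a
    # DNN over a camera frame; here a hash simulates a deterministic
    # embedding so the sketch is self-contained.
    return hashlib.sha256(image_bytes).digest()

def derive_key(embedding: bytes) -> bytes:
    # The key is a function of the model's output, so static analysis of
    # the binary never finds it: only the target's face reproduces it.
    return hashlib.sha256(b"kdf-salt|" + embedding).digest()

def xor_stream(data: bytes, key: bytes) -> bytes:
    # Toy cipher for the demo (a real implementation would use AES).
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Attacker side: lock a (here benign) payload under the target's key.
target_photo = b"pixels-of-intended-target"
payload = b"benign-demo-payload"
locked = xor_stream(payload, derive_key(face_embedding(target_photo)))
payload_digest = hashlib.sha256(payload).hexdigest()

def try_unlock(locked: bytes, camera_frame: bytes):
    # Runs on every infected machine; only the true target unlocks it.
    candidate = xor_stream(locked, derive_key(face_embedding(camera_frame)))
    if hashlib.sha256(candidate).hexdigest() == payload_digest:
        return candidate   # correct target: payload decrypts
    return None            # anyone else: random-looking bytes, no match

assert try_unlock(locked, b"someone-else") is None
assert try_unlock(locked, target_photo) == payload
```

This is why the trigger is so hard to reverse engineer: an analyst inspecting the sample sees only the encrypted blob and a model, and would have to guess the exact input that makes the model produce the unlocking key.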
To demonstrate DeepLocker's abilities, the researchers built a proof of concept that camouflages the well-known WannaCry ransomware inside a video-conferencing app so that it remains undetected by security tools, including antivirus engines and malware sandboxes. DeepLocker did not unlock and execute the ransomware until it recognized the target's face, which it matched against publicly available photos. All DeepLocker needs is the victim's photo, which can easily be pulled from social media profiles on LinkedIn, Facebook, Twitter, Google+, or Instagram. To make matters worse, Trustwave recently open-sourced a facial recognition tool called Social Mapper, which can search for targets across multiple social networks at once. The IBM Research group will present more details and a live demonstration of its proof-of-concept implementation of DeepLocker at the Black Hat USA security conference in Las Vegas.