Is AI increasing the sophistication of cybercriminals?

Although AI brings significant benefits, the technology is increasingly being used by malicious actors to enhance the sophistication of cyberattacks

There is no doubt that the development of artificial intelligence (AI) has brought a huge range of benefits to businesses. Over the last decade, the significant increase in the adoption of AI and machine learning (ML) has enabled many organizations to successfully harness these technologies in their business practices, streamlining processes and improving productivity.

Today, businesses across almost every sector are looking for opportunities to incorporate AI in support of their operations. However, as with any powerful technology, the advancement of AI has brought with it a darker side.

According to cybersecurity firm BlueVoyant, there have been some significant changes in the cybercrime landscape over the past few months. The company reports that attackers are continuing to use increasingly sophisticated technologies to write malicious code and highly convincing phishing emails.

What trends are we seeing now?

Balázs Csendes, BlueVoyant’s Manager for Central and Eastern Europe, has drawn on findings from the company’s threat intelligence team to highlight the trends currently emerging at the intersection of AI and cybercrime.

AI tool passwords for sale

ChatGPT logins have become valuable and are being sold on the dark web, just like passwords for other online services. Cybercriminals typically use information-stealing malware to harvest login credentials and sensitive data from unprotected devices; such malware is easy to deploy, especially when a user is running an older operating system or has disabled automated protection. Many users register with OpenAI using a company email address, and this type of access data sells on the dark web at a slightly higher price than data associated with private email addresses, due to its more sensitive nature.

WormGPT and other open-source tools

While ChatGPT has been designed to prevent its use in unlawful activities, some AI tools of course operate without such constraints. One such example is WormGPT, a service that, according to its developers, is intended to help cybersecurity professionals test malware generated with the tool and enhance their ability to defend against threats.

Although the website carries a warning from the creators stating that the tool is not to be used for criminal purposes, the fact remains that it can be misused for illegal activities. BlueVoyant’s threat intelligence team has observed a variant of WormGPT developed for malicious use and available by subscription on the dark web. Capable of generating code in multiple programming languages, the variant can steal cookies and other valuable data from unsuspecting users’ devices.

WormGPT can also be used to support phishing campaigns. Attackers can write highly persuasive messages with sophisticated language and wording, making fraudulent emails even harder to identify. Additionally, it can be used to identify legitimate services that are then exploited for illicit purposes, such as SMS text messaging services for large-scale phishing campaigns.

Future trends that are expected to grow

BlueVoyant’s threat intelligence indicates that in the near future, other trends in cyberspace are likely to come about as AI continues to grow. Cybercriminals are likely to harness these new technologies to create AI-enhanced malware, enabling them to autonomously steal sensitive data and evade antivirus software.

Furthermore, AI is expected to aid cybercriminals in document forgery, making document verification increasingly crucial. Advanced AI tools that facilitate the forging of documents, making it easier for fakes to pass online verification checks, will enable illegal activities such as fraudulent bank account creation and money laundering.

When looking to the future, a common concern is that the adoption of AI across various industries will render jobs obsolete. In the field of cybercrime, however, BlueVoyant’s experts note that such concerns have yet to materialize. Instead, the broader shortage of IT skills has spilled over into this space, resulting in significant demand for individuals who are well trained in generative AI (GenAI), some of whom unfortunately apply their expertise to illicit activities.

As organizations all over the world continue to integrate AI tools into their everyday business practices, the “dark side” of AI is also experiencing a surge in activity. Security teams need to prepare for the coming rise in AI-driven cyber threats; at the same time, these threats are increasing the demand for cybersecurity professionals, which will ultimately benefit the business landscape in the long run.
