
Unveiling AI Abuse: How Cybercriminals Are Leveraging Language Models on the Dark Web

The growing popularity of technologies such as Large Language Models (LLMs) makes performing mundane tasks easier and information more accessible, but it also creates new risks for information security. It’s not only software developers and AI enthusiasts who are actively discussing ways to utilize language models—attackers are too.

Disclaimer

This research aims to shed light on the activities of the dark web community associated with the nefarious use of artificial intelligence (AI) tools. The examples provided in the text do not suggest inherent danger in chatbots and other tools but help to illustrate how cybercriminals can exploit them for malicious purposes. Staying informed about trends and discussions in the dark web equips companies to establish more effective defenses against the ever-evolving landscape of threats.

AI in Cybercrime: Understanding the Threat

Artificial Intelligence (AI) is revolutionizing the tech landscape, but it also poses significant risks. Cybercriminals are increasingly using AI to enhance their tactics, making it crucial for businesses to stay informed and prepared.

The chart below shows the number of posts on dark web forums and Telegram channels discussing the use of ChatGPT for illegal activities and other AI-based tools. Activity peaked in April 2023 and has been trending downward since; however, the increasing complexity and efficiency of the schemes being discussed point to a shift from quantity to quality in how these models are applied.

The Role of AI in Cybercrime

AI technologies, including LLMs like ChatGPT, are being used by cybercriminals for various illegal activities. These models enable attackers to automate tasks, generate malicious code, and lower the entry barrier for cybercrime. Here’s how:

  • Polymorphic Malware: Cybercriminals use AI to create polymorphic malware, which can modify its code while retaining its functionality. This makes detection and analysis more challenging.

  • Automated Attacks: AI tools help attackers execute complex tasks with minimal expertise. For instance, processing user data dumps can now be done with a single prompt, making it easier for less experienced criminals to operate.

Jailbreaking AI Models

Attackers use special prompts, known as jailbreaks, to bypass AI model restrictions and obtain responses related to criminal activity. These jailbreaks are shared on social platforms and forums, enabling cybercriminals to exploit AI capabilities for malicious purposes.

Snippet of a jailbreak prompt shared on a cybercrime forum

Observations on the Use of GPT on Cybercrime Forums

Between January and December 2023, discussions in cybercriminal forums revealed innovative ways to exploit AI technologies. Here's what we observed:

  • Polymorphic Malware Generation: One forum post suggested using GPT to generate polymorphic malware that modifies its code while retaining its functionality. Because the malicious code is requested at runtime from a legitimate domain such as openai.com, the activity can blend in with normal traffic and bypass standard security checks.

  • Simplifying Complex Tasks: AI tools enable criminals to solve tasks with a single prompt, lowering the entry threshold into cybercrime. For example, a user on a cybercrime forum used ChatGPT to resolve issues with processing user data dumps.

  • Team Recruitment for Carding: Forums are also used to recruit teams for illegal activities such as carding; in these posts, users mention using AI to help write the code that processes malware (stealer) log files.

The author of that post proposed using the OpenAI API to generate code with a specified functionality on demand. Malware built this way is considerably harder to detect and analyze than conventional samples.
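
For defenders, one practical takeaway is that malware relying on this trick has to reach the OpenAI API at runtime. The sketch below shows one way a blue team might surface that pattern in egress telemetry. It is a minimal illustration only: the event format, the process allowlist, and the process names are assumptions for the example, not anything taken from the forum posts described above.

```python
# Minimal sketch: flag unexpected processes contacting OpenAI endpoints.
# Assumes you already export egress events as (process_name, destination_host)
# pairs from proxy or EDR telemetry -- that format is an assumption here.

from typing import Iterable, List, Tuple

# Hypothetical allowlist of processes expected to call the OpenAI API.
ALLOWED_PROCESSES = {"chatgpt.exe", "python.exe"}        # adjust for your estate
OPENAI_HOSTS = {"api.openai.com", "openai.com"}

def flag_unexpected_ai_egress(
    events: Iterable[Tuple[str, str]],
) -> List[Tuple[str, str]]:
    """Return (process, host) pairs where a non-allowlisted process
    contacted an OpenAI endpoint."""
    hits = []
    for process, host in events:
        if host.lower() in OPENAI_HOSTS and process.lower() not in ALLOWED_PROCESSES:
            hits.append((process, host))
    return hits

if __name__ == "__main__":
    sample = [
        ("python.exe", "api.openai.com"),          # expected: internal automation
        ("invoice_viewer.exe", "api.openai.com"),  # unexpected: worth a closer look
        ("outlook.exe", "outlook.office365.com"),
    ]
    for process, host in flag_unexpected_ai_egress(sample):
        print(f"Review: {process} contacted {host}")
```

The detection idea matters more than the specific tooling: traffic to a legitimate AI endpoint from an unexpected binary is a signal worth triaging, not something to block outright.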

AI-Powered Forum Interactions

Some cybercriminal forums have integrated AI tools like ChatGPT to provide automatic responses to posts. This widespread use demonstrates the versatility of AI in facilitating various tasks, including illegal ones.

The Growing Threat of AI-Driven Attacks

The use of AI in cybercrime is increasing, with discussions on forums and Telegram channels revealing innovative ways to exploit these technologies. Here are some key observations:

  • AI-Powered Tools: As noted in the previous section, some cybercriminal forums have integrated ChatGPT-style bots to generate automatic responses to posts, which shows how readily AI slots into routine forum activity, including illegal activity.

  • Open-Source Tools: Open-source AI solutions are used by developers and cybercriminals alike to enhance their tooling. For example, an open-source utility designed to obfuscate PowerShell code can help attackers stay undetected; a simple indicator-based check for such obfuscation is sketched below.
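
As a rough illustration of the defensive side, the sketch below flags PowerShell command lines that show common obfuscation indicators (encoded commands, inline Base64 decoding, keyword-splitting backticks). The indicator list is illustrative rather than exhaustive, and the sample command lines are invented for the example; in practice, checks like this would run against your own process-creation logs.

```python
# Minimal sketch: surface PowerShell command lines with common obfuscation
# indicators. Indicator list and samples are illustrative assumptions.

import re
from typing import List

INDICATORS = [
    re.compile(r"-e(nc(odedcommand)?)?\b", re.IGNORECASE),  # -e / -enc / -EncodedCommand
    re.compile(r"frombase64string", re.IGNORECASE),         # inline Base64 decoding
    re.compile(r"-join\s*\(", re.IGNORECASE),               # char-array joins
    re.compile(r"\w`\w"),                                   # backticks splitting keywords, e.g. I`E`X
]

def obfuscation_indicators(cmdline: str) -> List[str]:
    """Return the regex patterns that match a given command line."""
    return [p.pattern for p in INDICATORS if p.search(cmdline)]

if __name__ == "__main__":
    samples = [
        "powershell.exe -enc SQBFAFgAIAAoAE4AZQB3AC0ATwBiAGoAZQBjAHQAKQA=",
        "powershell.exe Get-ChildItem C:\\Users",
    ]
    for cmd in samples:
        hits = obfuscation_indicators(cmd)
        if hits:
            print(f"Review: {cmd[:50]}... matched {hits}")
```

Indicator matching of this kind produces false positives, so it is best treated as a triage aid rather than a blocking rule.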

Unregulated AI Projects

Projects such as WormGPT, XXXGPT, and FraudGPT offer unrestricted AI capabilities, making them attractive to cybercriminals. Although some have been shut down after public backlash, projects of this kind continue to pose a threat, since they offer extended functionality without ethical safeguards.

Snippet of WormGPT for sale on the dark web

Snippet of unregulated ChatGPT-like projects

Stolen AI Accounts

Another risk for both users and companies is the growing market for stolen accounts to the paid version of ChatGPT. In one example, a forum member was distributing such accounts for free; the credentials had presumably been pulled from malware (stealer) log files collected from infected user devices. Two patterns stand out, and a simple exposure check is sketched after the list below:

  • Hacked Premium Accounts: Hacked premium ChatGPT accounts are sold on forums, and sellers advise buyers not to change the account details so that the legitimate owner does not notice the compromise.

  • Automated Account Creation: Attackers use automated tools to create free ChatGPT accounts in bulk, which are sold in bundles to evade bans.
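
Because the stolen accounts described above are typically harvested by stealer malware, the same log files often expose corporate credentials more broadly. As a minimal sketch of one way to monitor that exposure, the example below checks staff addresses against the public Have I Been Pwned v3 API; a paid API key is required, and the key, address list, and user-agent string here are placeholders rather than anything referenced in the forum posts.

```python
# Minimal sketch: check addresses against the Have I Been Pwned v3 breach API.
# The API key, address list, and user-agent below are placeholders.

import json
import time
import urllib.error
import urllib.parse
import urllib.request

HIBP_API_KEY = "YOUR-HIBP-API-KEY"                    # placeholder: paid key required
ADDRESSES = ["alice@example.com", "bob@example.com"]  # placeholder addresses

def breaches_for(address: str) -> list:
    """Return the breaches indexed for an address, or [] if none are found."""
    url = ("https://haveibeenpwned.com/api/v3/breachedaccount/"
           + urllib.parse.quote(address) + "?truncateResponse=true")
    req = urllib.request.Request(url, headers={
        "hibp-api-key": HIBP_API_KEY,
        "user-agent": "internal-exposure-check",
    })
    try:
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 404:  # address not present in any indexed breach
            return []
        raise

if __name__ == "__main__":
    for address in ADDRESSES:
        found = breaches_for(address)
        if found:
            print(f"{address}: appears in {len(found)} indexed breach(es)")
        time.sleep(7)  # stay under the API rate limit (depends on key tier)
```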

The chart below shows the volume of posts offering ChatGPT accounts for sale across dark web channels from January to December 2023. These accounts are typically obtained through theft or bulk registration and are advertised widely across multiple platforms. Activity again peaked in April, matching the earlier surge in forum interest.

Conclusion: Staying Vigilant in an AI-Driven World

The increasing popularity and use of AI by attackers are alarming. While AI simplifies many aspects of life, it also lowers the barrier for malicious actors. Although many discussed solutions may not pose immediate threats due to their simplicity, technology is evolving rapidly. The capabilities of language models may soon enable sophisticated attacks, underscoring the importance of staying informed and vigilant.

Stay Informed with Digital Dispatch

For more insights into the evolving landscape of cybersecurity, AI innovations, and the role of technology in our lives, visit our Digital Dispatch YouTube channel. Our latest video, How South African Companies Get Hacked: The Role of Reconnaissance, explores the initial phase of cyberattacks and how understanding reconnaissance can help protect your business.

Join Our Tech Community

Stay ahead of tech trends by subscribing to our channel. Your engagement supports us in bringing you valuable content that empowers and informs. Check out the video and see how these insights can help you strengthen your cybersecurity defenses. Don't forget to like, comment, and subscribe for more expert advice and tech news!

Watch Now: How South African Companies Get Hacked: The Role of Reconnaissance

Explore Our Threat Intelligence Offerings

In addition to the insights from our YouTube channel, explore the cutting-edge threat intelligence solutions available on our webstore. Visit the webstore to learn more about these offerings and how they can help safeguard your organization's digital assets.
