Update - Is ChatGPT Safe: How ChatGPT Can Turn Anyone into a Threat Actor

Artificial intelligence tools are becoming increasingly popular, generating everything from art to stories with a basic prompt. ChatGPT is the latest innovation in this space, allowing a user to hold human-like conversations with AI. There are growing concerns about the potential for AI-powered cyberattacks, and this article looks at whether AI technologies like ChatGPT can be used by threat actors to launch malicious campaigns.

ChatGPT is built around reinforcement learning, adapting with every use for purposes such as writing emails, creating scripts, assisting with code, and more. Currently, ChatGPT is free to anyone with an OpenAI account, which means anyone with a device and an email address can use it. With instant access and more than one million users, this opens the door to malicious actors as well.

What is ChatGPT?

ChatGPT is a large language model developed by OpenAI. It uses deep learning techniques to generate human-like text, allowing it to understand the context and meaning of the words it produces. The model has been trained on a massive dataset of text, allowing it to write on a wide range of topics with a high degree of accuracy. ChatGPT can be used for a variety of purposes, including generating marketing copy, powering chatbots, and more. However, it can also be used for malicious activities, making it important to be aware of its potential dangers.
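To illustrate how low the barrier to entry is, the sketch below shows how a few lines of Python can generate text programmatically. This is a minimal, illustrative example only, assuming the official openai Python package, an API key stored in the OPENAI_API_KEY environment variable, and a placeholder model name and prompt.

import os
from openai import OpenAI

# Minimal sketch: create a client using an API key from the environment.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name for illustration
    messages=[
        {"role": "user",
         "content": "Write a short product description for a ceramic coffee mug."},
    ],
)

# Print the generated text from the first completion choice.
print(response.choices[0].message.content)

A benign prompt like this produces usable copy in seconds; the concern raised in the rest of this article is that the same ease of use applies when the intent behind the prompt is malicious.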

What ChatGPT Is Capable Of 

The power of ChatGPT is well documented. New York City schools have banned ChatGPT out of fear of cheating, and Google worries it may be a direct competitor to its search engine. The most worrying aspect of ChatGPT, however, is the program being abused by malicious hackers: not just hackers with a strong technical background, but those with little to no experience at all. Researchers and cybersecurity analysts quickly began to stress-test the AI tool, pushing to see how far it could go. The Israeli cybersecurity firm Check Point found that when ChatGPT was combined with OpenAI’s code-writing system, it could produce a phishing email carrying a malicious payload. There are limitations: ChatGPT will not simply “create a phishing email” or comply with other openly malicious prompts, although, as Check Point demonstrated, it is not impossible to get around this. Rather than asking ChatGPT directly to write a phishing email, users found ways of twisting their wording to get what they want indirectly. This adds a layer of complexity, but with trial and error it is something anyone can do.

How Anyone Could Become a Threat Actor with ChatGPT

Alongside research and cybersecurity teams digging into ChatGPT’s capabilities, hidden forums across the web have begun their own testing, creating malicious content with ChatGPT. One user created an information stealer with ChatGPT that scans for images, PDFs, and Microsoft Office documents, then copies them and sends them over the Internet. Another user created an encryption tool that can automatically encrypt or decrypt files on a machine; with a few alterations, the code could be turned into ransomware. One last notable example was the use of the program for direct financial gain: using a combination of ChatGPT and other OpenAI tools, users are able to create believable products to sell on legitimate marketplaces like Etsy, possibly raking in thousands of dollars daily. For some of these users, it was the first time they had written any code. While ChatGPT will not answer direct prompts to create phishing emails or ransomware, it makes it possible for anyone to become a threat actor.

Protecting Against Attacks

Cybersecurity risks are on the rise, and ChatGPT’s contribution to malicious hacking certainly won’t help. While the way these attacks are developed is new territory, at the end of the day they are still phishing, ransomware, and other familiar cyberattacks.

Implementing email security best practices is critical in protecting against these threats. Beware of common malicious tactics and do not click links in messages from senders you do not recognize. Training employees, backing up important data, and using solid email encryption are the right steps toward protecting a business of any size.
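As one small illustration of what watching for common malicious tactics can look like in practice, the sketch below flags HTML links whose visible text names one domain but whose actual destination is somewhere else, a classic phishing trick. It is a hypothetical, minimal example using only the Python standard library, not a substitute for dedicated email security tooling.

from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    """Collects links whose displayed text looks like a domain that does
    not match the link's real destination."""

    def __init__(self):
        super().__init__()
        self._current_href = None
        self._text = []
        self.suspicious = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._current_href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._current_href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._current_href is not None:
            text = "".join(self._text).strip()
            href_domain = urlparse(self._current_href).netloc.lower()
            # If the visible text looks like a domain and does not match the
            # real destination, flag the link for review.
            if "." in text and href_domain and text.lower() not in href_domain:
                self.suspicious.append((text, self._current_href))
            self._current_href = None

auditor = LinkAuditor()
auditor.feed('<a href="http://malicious.example.net/login">yourbank.com</a>')
print(auditor.suspicious)  # [('yourbank.com', 'http://malicious.example.net/login')]

Checks like this are only one layer; sender authentication (SPF, DKIM, DMARC), attachment scanning, and user training remain the core defenses.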

Businesses can outsource email protection for additional security against these threats.

Final Thoughts

Cyberattacks are becoming increasingly common, and the way they are developed is ever-changing. Now, with tools like ChatGPT, anyone can teach themselves malicious tactics to carry out cyberattacks. Countries such as China and Russia have blocked ChatGPT outright, and schools across the United States have begun banning the program as well. ChatGPT may be a feat of technical innovation, but in its current state it is too young and has not yet been adapted to prevent the creation of malicious content. With the emergence of ChatGPT, staying on top of email security best practices has never been more important.