
How to Prepare for a ChatGPT Cyberattack

Image Credit: kung_tom/BigStockPhoto.com

According to McKinsey's 2022 State of AI review, the average number of artificial intelligence capabilities that companies employ doubled between 2018 and 2022. Given all the interest around it, AI is becoming a prominent tool for businesses across industries…and it's also catching the eye of hackers.

Since its release, ChatGPT has flooded headlines and generated massive buzz. The large language model (LLM), developed by OpenAI, is built on the GPT-3.5 family of natural language models and showcases how powerful AI technologies can potentially transform how businesses operate. In just three short months, ChatGPT has become an essential workflow tool for developers, writers, and students alike.

Not only has the AI-driven tool taken the world by storm, but ChatGPT's ability to hold natural, human-like interactions with customers, especially for non-priority engagements, has boosted companies' productivity as well as customer satisfaction and loyalty. The tool can also research just about any topic online and pull relevant content from various sources, helping businesses develop a coherent and effective content marketing plan.

From assisting copywriters with articles and blogs to creating song lyrics and everything in between, the conversational chatbot can produce content to a level no other technology has achieved. However, like many business tools, ChatGPT can be, and has become, a double-edged sword.

The emerging threat of ChatGPT

Because it engages with a user at their level of expertise, ChatGPT also empowers them to learn quickly and act effectively, a feature that benefits both legitimate users and hackers with malicious intent. Cybercriminals have jumped onto the ChatGPT bandwagon, using the AI's code generation capabilities to launch cyberattacks.

A new Check Point Research (CPR) report notes several instances of threat actors using ChatGPT to develop hacking tools and launch more sophisticated cyberattacks. The Check Point researchers detail cybercriminals on underground hacking forums writing malware, creating data-encryption tools, and generating code for new dark web marketplaces, all without any development skills. While many cybercriminals already have substantial technical know-how, ChatGPT allows less code-fluent attackers to create more sophisticated attacks that seem to come from real people, such as phishing emails, text scams, malicious code for ransomware attacks, and other exploits.

Scammers can also use ChatGPT to build bots and sites that trick users into sharing sensitive information, and to launch highly targeted social engineering scams and phishing campaigns. In one case, a tool tied to ChatGPT posted on an underground forum could install a backdoor on a device, encrypt it without any user interaction, and upload additional malware onto the compromised machine.

When introducing ChatGPT, OpenAI stated that it had put guardrails in place to prevent the natural language interface from producing malicious code. However, since then, people have found ways to fool the system by rewording their requests to work around the guardrails or tricking it into thinking their requests are for research purposes only.

Hackers have added ChatGPT as the latest tool in their belt, and cybersecurity experts need to know what to watch out for.

Safeguard Your Business From AI Cyberattacks

Threats from AI technology are not new problems, but the scenarios presented with ChatGPT are alarming. Cybercriminals without coding knowledge are already using ChatGPT to create code to steal data. To anticipate, prepare for, and even prevent attacks built with AI technology, companies need to start thinking like bad actors.

A recent study by Andrew Patel and Jason Sattler of WithSecure, Creatively Malicious Prompt Engineering, found that large language models can generate datasets containing many examples of malicious content. Used proactively, ChatGPT could help level the security playing field, depending on the version available, enabling companies to combat AI with AI.

“Enterprises are facing a deluge of automated cyberattacks, which are exponentially rising in velocity, variety, and complexity,” stated Gartner VP analyst Katell Thielemann when discussing the rise of artificial intelligence. “However, AI is simultaneously supporting security teams in detecting and responding to threats, fundamentally changing organizations’ defense paradigms.”

Through prompt engineering, businesses and cybersecurity experts can mass-produce examples of potential security faults or abusive scenarios, such as toxic speech and online harassment, to craft methods for detecting such content and to test whether those detection mechanisms are effective. Fixing weaknesses in operating procedures before adversaries discover them strengthens an organization's security posture. Companies can also mitigate cyberattacks by reviewing their risks and training employees to identify the "tells" of AI-generated material, and a few tools are already available to help.
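As a hedged illustration of the evaluation step described above (every example string, keyword, and function name below is invented for this sketch and does not come from the cited study), a minimal harness might score a naive detector against a small batch of labeled test content:

```python
# Minimal sketch: score a naive "malicious content" detector against a
# hand-labeled example set. All examples and keywords are illustrative only.

# (text, is_malicious) pairs standing in for LLM-generated test data.
EXAMPLES = [
    ("Please verify your account immediately or it will be suspended", True),
    ("Click this link to claim your unpaid invoice refund", True),
    ("The quarterly report is attached for your review", False),
    ("Lunch meeting moved to 1pm, see calendar invite", False),
]

SUSPICIOUS_KEYWORDS = {"verify", "suspended", "claim", "refund", "immediately"}

def naive_detector(text: str) -> bool:
    """Flag text containing any suspicious keyword (illustrative heuristic)."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & SUSPICIOUS_KEYWORDS)

def evaluate(examples):
    """Return (true_positives, false_positives, false_negatives)."""
    tp = fp = fn = 0
    for text, is_malicious in examples:
        flagged = naive_detector(text)
        if flagged and is_malicious:
            tp += 1
        elif flagged and not is_malicious:
            fp += 1
        elif not flagged and is_malicious:
            fn += 1
    return tp, fp, fn

if __name__ == "__main__":
    print(evaluate(EXAMPLES))
```

A real program would generate the example set with an LLM and use a far stronger detector; the point of the harness is simply to measure whether the detection mechanism holds up against content it has not seen.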

On January 31, OpenAI released its own AI text classifier to help determine whether a piece of text was written by AI, especially by ChatGPT. Although the results are not definitive, the company states that the tool can provide insight into which pieces of content were AI-written.

Executives must include security training that specifically covers phishing attacks and security-incident notification processes as part of their strategy and roadmaps to ensure a quick reaction from their teams. Training employees on programs like ChatGPT helps them understand the chatbot and how to identify attacks perpetrated through it. Have teams run through practice scenarios and engage with trusted partners and advisors to shore up any gaps.
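One way to make such training concrete is a simple checklist employees can run against a suspicious message. The sketch below is a hedged illustration only: the "tells" it checks (urgent language, raw-IP links, credential requests) are common phishing indicators, but the word list, regex, and `phishing_tells` function are invented for this example and are nowhere near an authoritative ruleset.

```python
import re

# Illustrative phishing "tells"; real training programs use far richer signals.
URGENCY_WORDS = {"urgent", "immediately", "suspended", "final notice"}

def phishing_tells(message: str) -> list:
    """Return a list of simple red flags found in a message (sketch only)."""
    flags = []
    lower = message.lower()
    if any(word in lower for word in URGENCY_WORDS):
        flags.append("urgent or threatening language")
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", lower):
        flags.append("link to a raw IP address instead of a domain")
    if "password" in lower or "credentials" in lower:
        flags.append("asks for credentials")
    return flags
```

For example, `phishing_tells("URGENT: reset your password at http://192.168.0.1/login")` raises all three flags, while an ordinary scheduling email raises none; walking through why each flag fired is exactly the kind of discussion practice scenarios should prompt.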

The increasing use of artificial intelligence like ChatGPT to create highly sophisticated yet malicious campaigns demonstrates that cybersecurity experts not only need to brush up on their security skills, but they also must ensure that their organization is investing in employee training to help them spot the telltale signs of these types of attacks. Doing so will allow your employees to focus on the most vulnerable and valuable aspects of your business.

It's still too soon to say how much cybercriminals will lean on ChatGPT in the long run or for how much longer they'll be able to abuse the platform. As with any new technology, ChatGPT has its own benefits and challenges that will significantly impact businesses and the cybersecurity market.

The views expressed in this article belong solely to the author and do not represent The Fast Mode. While information provided in this article is obtained from sources believed by The Fast Mode to be reliable, The Fast Mode is not liable for any losses or damages arising from any information limitations, changes, inaccuracies, misrepresentations, omissions or errors contained therein. The heading is for ease of reference and shall not be deemed to influence the information presented.

Author

Faisal Bhutto is the SVP of Cloud and Cybersecurity at Calian IT & Cyber Solutions and is responsible for the evolution and expansion of the global cloud and cybersecurity business. Faisal has an in-depth understanding of customer needs with an extensive technical background, industry certifications such as CCIE, and membership on OEM advisory councils including the Cisco Global Technology Advisory Board. Under his leadership, Computex (now Calian IT & Cyber Solutions) was recognized as the top cybersecurity firm in Houston two years in a row.
