ChatGPT: Understanding and Mitigating the Cybersecurity Risks

April 4, 2023 - 3 minutes read

Over the last few weeks, there’s been a lot of coverage of OpenAI’s release of ChatGPT. The technology is piquing the interest of many for the innovative possibilities it brings to all sorts of communications, from answering questions to providing learning resources to troubleshooting tech issues. Unfortunately, many in the cybersecurity industry are concerned that malicious actors will use this new tool for nefarious purposes, devising clever new ways to ensnare potential cybercrime victims.

Two Concerning ChatGPT Use Cases 

The first is the ability of threat actors to create legitimate-looking emails, with proper grammar and spelling, in languages they don’t speak. This makes phishing emails more challenging for recipients to identify, since poorly written messages are a common clue that something is amiss.

The second involves the creation of polymorphic malware, in which the original malicious code mutates using techniques such as obfuscation and encryption while retaining its functionality. This makes the malware more difficult to detect with traditional, signature-based security controls.
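To see why this defeats signature matching, here is a minimal, benign sketch; the XOR encoding, key size, and harmless "payload" string below are purely illustrative, not how any real malware family works. Each "variant" preserves the same functionality yet hashes differently, so a static signature written for one variant misses the next.

```python
import hashlib
import os

# Benign "payload": the behavior every variant should preserve.
PAYLOAD = b"print('hello from the same functionality')"

def make_variant(payload: bytes) -> tuple[bytes, bytes]:
    """Encode the payload with a random XOR key, mimicking how a
    polymorphic engine re-obfuscates the same code on each build."""
    key = os.urandom(16)
    encoded = bytes(b ^ key[i % len(key)] for i, b in enumerate(payload))
    return key, encoded

def decode(key: bytes, encoded: bytes) -> bytes:
    """Recover the original payload; the functionality is unchanged."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(encoded))

if __name__ == "__main__":
    for n in range(3):
        key, variant = make_variant(PAYLOAD)
        # Each variant has a different hash, so a static signature
        # written for one variant will not match the next...
        print(f"variant {n}: sha256={hashlib.sha256(variant).hexdigest()[:16]}...")
        # ...yet every variant decodes back to identical behavior.
        assert decode(key, variant) == PAYLOAD
```

Real polymorphic engines mutate code in far more sophisticated ways, but the defensive takeaway is the same: detection has to key on behavior rather than fixed byte patterns.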

How Did We Get Here? 

The purpose of considering the cyberattack scenarios possible with ChatGPT is not to instill fear—there is already enough of that happening online. Rather, it’s important to take a step back and look at the history of the threat landscape and how it’s evolved to understand the current situation. Since the early days of computing and the internet, threat actors have always excelled at taking advantage of innovative technologies to develop new techniques for infiltrating networks and systems. This prompts the industry to innovate further by enhancing the available detection technologies or developing new detection techniques. It’s a constant game of cat and mouse.

Phishing and polymorphic malware have been around since the 1990s and have evolved in the decades since. In that time, detection capabilities have changed alongside them, continuously becoming faster and more accurate through techniques such as heuristics, sandboxing, dynamic reputation scoring, and behavior analysis. Today, these are table stakes, but each was once a groundbreaking new technique created to address the emerging risks of its day.
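To make "heuristics" a little more concrete, here is a minimal sketch of weighted indicator scoring; the indicator names, weights, and threshold are hypothetical and illustrative only, not taken from any vendor's engine.

```python
# Hypothetical heuristic scorer: each observed indicator contributes a
# weight, and a message crossing the threshold is flagged for deeper
# analysis (e.g., sandboxing). Indicators and weights are illustrative.
INDICATORS = {
    "sender_domain_mismatch": 3,   # display name doesn't match sending domain
    "urgent_language": 2,          # "act now", "account suspended", etc.
    "shortened_link": 2,           # URL shortener hides the real destination
    "attachment_macro": 4,         # Office attachment containing macros
}
THRESHOLD = 5

def heuristic_score(observed: set[str]) -> int:
    """Sum the weights of the indicators observed in a message."""
    return sum(weight for name, weight in INDICATORS.items() if name in observed)

def is_suspicious(observed: set[str]) -> bool:
    return heuristic_score(observed) >= THRESHOLD

if __name__ == "__main__":
    sample = {"urgent_language", "shortened_link", "sender_domain_mismatch"}
    print(heuristic_score(sample), is_suspicious(sample))  # 7 True
```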

How to Address the Challenges of ChatGPT 

ChatGPT is simply expected to be the latest tool in the threat actor's arsenal, and security vendors will continue to evolve their detection engines in response. Organizations should continue to follow defense-in-depth strategies, invest in user awareness training programs, maintain a disciplined patch management program, and regularly test their defenses to uncover blind spots and harden those attack vectors.

Source: Fortra
