Everyone’s talking about artificial intelligence (AI) today, thanks to one app taking the world by storm. ChatGPT reached 100 million global users in just two months – faster than any other consumer app in history, according to analysts. Stories abound of its uncanny ability to mimic human text – in use cases as varied as job applications, school assignments and even jokes. But on the flipside, reports are also emerging of threat actors exploring how to use the tool to simplify and automate cyber-attacks and malware creation.
One thing is certain: AI tools like ChatGPT are here to stay. And while they offer many benefits to society, they'll also democratize fraud and cybercrime. That will force organizations to re-evaluate their cybersecurity posture and take a fresh look at data protection.
ChatGPT for good and bad
ChatGPT is based on developer OpenAI’s GPT-3 family of large language models. According to OpenAI its “dialogue format makes it possible for ChatGPT to answer follow up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.” It is this conversational, human-like way of interacting with users that has wowed many trying the AI tool for the first time.
Apart from the novelty value of ChatGPT, it could genuinely help organizations to enhance their customer service, free up staff to work on higher value tasks, and even improve productivity as an aid for content marketing, web coding and other tasks. However, its accessibility and powerful AI backend also open the door for more malignant uses.
Security experts have enlisted ChatGPT to help write ransomware and polymorphic malware capable of evading cyber-defenses with ease. Threat actors have also been discussing on dark web forums how to use the tool to develop information-stealers, multi-layer encryption tools and even dark web marketplace scripts. Although OpenAI designed some guardrails to prevent the tool from creating malicious content, these don't appear to be working as intended. Even controls designed to stop users in some countries, like Russia, from accessing the tool seem to have failed.
ChatGPT could also provide a boost for fraudsters looking to craft convincing, mistake-free phishing campaigns and other scams en masse. In the future, hackers may be able to feed it with data on individual users' writing styles and activities in order to launch convincing phishing, business email compromise (BEC), cyber-espionage and other targeted attacks.
In summary, ChatGPT may be bad news for network defenders for two reasons:
- It is democratizing the ability to launch cyber-attacks, extending it to a wider group of non-technical threat actors.
- For a fee of just $20 per month, it puts these capabilities in the hands of malicious users, giving them the power to automate cybercrime at scale for a relatively small cost.
Focus on what matters
A majority (51%) of cybersecurity leaders expect ChatGPT to enable a successful cyber-attack within a year, according to one recent study. So how can organizations mitigate its potential to cause chaos?
First, they will need to get better at spotting malicious content generated by AI in phishing, impersonation and malware attacks. And second, they must also recognize that as AI tools gain sophistication, these efforts may not always succeed. That’s why it will be increasingly important to combine detection with protection of their most important corporate data assets.
This is the promise of data-centric security. It means augmenting other security controls with strong protection applied to the data itself, wherever that data is located. In this way, comforte's Data Security Platform:
- Discovers and automatically classifies all sensitive enterprise data
- Searches locations like cloud data stores which are often hidden from view
- Offers multiple protection methods like tokenization, which preserve data utility for analytics and other use cases
- Integrates seamlessly with data flows and applications for rapid time-to-value
- Integrates with Kubernetes and VMware for enhanced DevSecOps
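To make the tokenization point above more concrete, here is a minimal, purely illustrative sketch of vault-based, format-preserving tokenization (the `TokenVault` class and its design are assumptions for illustration, not comforte's actual implementation). It swaps a card number for a random token with the same format and the same last four digits, so downstream analytics and support workflows keep working while the real value stays locked away:

```python
import secrets


class TokenVault:
    """Toy vault-based tokenizer -- illustrative only, NOT comforte's
    actual product. Each sensitive value maps to a random token with
    the same format; the last four digits are kept in the clear so
    the token remains useful for analytics and customer lookups."""

    def __init__(self):
        self._vault = {}   # token -> original value
        self._issued = {}  # original value -> token (consistent tokens)

    def tokenize(self, pan: str) -> str:
        # Return the same token for a repeated value, so joins and
        # deduplication still work on tokenized data.
        if pan in self._issued:
            return self._issued[pan]
        # Replace every digit except the last four with a random digit,
        # leaving separators (e.g. hyphens) in place.
        head = "".join(
            secrets.choice("0123456789") if ch.isdigit() else ch
            for ch in pan[:-4]
        )
        token = head + pan[-4:]
        self._vault[token] = pan
        self._issued[pan] = token
        return token

    def detokenize(self, token: str) -> str:
        # Only systems with access to the vault can recover the original.
        return self._vault[token]
```

A token such as `5829-0347-6110-1111` for the card `4111-1111-1111-1111` still looks and behaves like a card number to applications and reports, which is why tokenized data retains its utility. Real deployments would add access control, audit logging and collision handling, and often use vaultless or cryptographic tokenization instead.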
Comforte’s data discovery capabilities are also powered by AI. Increasingly, the fight against cybercrime will pit AI-based security tools like this against malicious AI.