ChatGPT seems to be the talk of the town recently. It obviously has huge potential if used in the right way, but how can you be sure that your staff are using it safely?
What is ChatGPT?
ChatGPT is an artificial intelligence chatbot developed by OpenAI and released in November 2022. OpenAI itself was founded in 2015 by a group of entrepreneurs and researchers, including Elon Musk and Sam Altman, and is backed by several investors, including Microsoft.
It is built on top of OpenAI’s foundational large language models and uses natural language processing to create human-like, conversational dialogue. The ‘GPT’ element stands for ‘Generative Pre-trained Transformer’: a model architecture that finds patterns within data sequences and uses them to process requests and formulate responses.
ChatGPT is trained with reinforcement learning from human feedback, using reward models that rank the best responses. This feedback helps refine ChatGPT through machine learning to improve future responses. The language model can answer questions and compose various written content, including articles, social media posts, essays, code and email.
Potential for cybercrime
Appealing and helpful though it can be, ChatGPT poses a huge potential threat as a gateway for fraud and malicious data gathering.
As chatbots become embedded in the internet and social media, the chances of becoming a victim of malware or malicious emails will increase. The cybersecurity industry is already seeing evidence of ChatGPT’s use by criminals. It is with this in mind that companies including JP Morgan and Amazon have banned or restricted staff use of ChatGPT.
Malware and hacking
The scale of the threat is already clear from incidents such as the SolarWinds supply chain attack of December 2020, which targeted multiple organisations and impacted over 100 companies, including US government agencies, telcos and Fortune 500 companies.
According to cybersecurity specialists, cybercriminals have begun leveraging ChatGPT to quickly create hacking tools. Many early ChatGPT users realised that it could code harmful software capable of logging users’ keystrokes, encrypting data or creating ransomware.
Check Point Research (CPR), an Israeli security company, released an initial analysis of GPT-4 in March 2023. They came up with five scenarios that could allow harmful attacks to be carried out faster and with more precision, even by non-technical criminals. The scenarios were:
1. C++ malware that collects PDF files and sends them to an FTP server
2. Phishing: Impersonation of a bank
3. Phishing: Emails to employees
4. PHP Reverse Shell
5. Java program that downloads and executes PuTTY, which can be launched as a hidden PowerShell
In one forum post reviewed by Check Point, a hacker who’d previously shared Android malware showcased code written by ChatGPT that stole files of interest, compressed them and sent them across the web. They showed off another tool that installed a backdoor on a computer and uploaded further malware to an infected PC.
In the same forum, another user shared Python code that could encrypt files, saying OpenAI’s app helped them build it. They claimed it was the first script they’d ever developed. As Check Point noted, such code can be used for entirely benign purposes, but it could also easily be modified to encrypt someone’s machine, completely without any user interaction, similar to the way in which ransomware works.
While the ChatGPT-coded tools appeared quite rudimentary, it may only be a matter of time before more ‘skilled’ hackers discover ways to exploit the app, and OpenAI may eventually be legally compelled to train its AI to detect such exploitation.
OpenAI has put certain filters in place, backed by policy-violation notifications, to prevent obvious requests for ChatGPT to construct malware. They say they have trained the model to refuse inappropriate requests, using moderation tools to warn about or block certain types of unsafe and sensitive content. But hackers and journalists have found ways around those safeguards.
Scamming and phishing
ChatGPT also has the potential to be exploited by hackers who don’t speak English to create legitimate-looking phishing emails. Tell-tale signs of fraudulent messages such as bad grammar and spelling will be less obvious.
Dating scammers are also testing ChatGPT’s potential to construct other chatbots tailored to impersonate and target young females, as they try to create convincing personas and automate idle chatter.
One user in the forum post reviewed by Check Point also discussed ‘abusing’ ChatGPT by having it help code up features of a dark web marketplace, similar to drug bazaars like Silk Road or Alphabay. As an example, the user showed how the chatbot could quickly build an app that monitored cryptocurrency prices for a theoretical payment system.
Data privacy
Chatbots can be useful for work and personal tasks, but they collect vast amounts of data. This concern came to a head in March 2023, when the Italian regulator, Garante, imposed a ban on ChatGPT after finding that some users’ messages and payment information had been exposed to others. The ban was reversed in April after OpenAI agreed to meet the regulator’s demands and adhere to strict European data protection laws.
The measures include adding information on OpenAI’s website about how it collects and uses data that trains the algorithms powering ChatGPT, providing EU users with a new form for objecting to having their data used for training, and adding a tool to verify users’ ages when signing up.
Most people are aware of the privacy risks posed by search engines such as Google, but the conversational nature of chatbots can catch people off guard and encourage them to give away more information than they would have entered into a search engine.
Chatbots like ChatGPT typically collect text, voice and device information as well as data that can reveal your location, such as your IP address. They can also gather data from social media activity, which can be linked to your email address and phone number.
While the firms behind the chatbots say your data is required to help improve services, it can also be used for targeted advertising. Each time you ask an AI chatbot for help, your query feeds data back into the algorithm, helping it build a profile of you as an individual.
Using ChatGPT privately and securely
Most cyberattacks begin with an email, so it’s important to protect your inbox. Here are some tips to help you prevent a ChatGPT-assisted cyberattack:
- Be suspicious of unsolicited attachments or links, and be careful when sharing personal information. If you don’t know the sender, or if the email looks suspicious, don’t open it. Don’t respond to email requests for personal information, even if they look legitimate.
- Keep your anti-virus software and operating system up to date. This will help protect you from any malicious attachments or links and prevent cybercriminals from exploiting vulnerabilities in outdated software to gain access to your systems.
- Set up two-factor authentication for your email account, which adds an extra layer of security, such as a code sent to your phone, in addition to your password.
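To illustrate why two-factor codes add real security, here is a minimal sketch of how authenticator-app codes are derived under the TOTP standard (RFC 6238, which builds on the HOTP algorithm in RFC 4226). This is an illustrative example using only the Python standard library, not a description of any particular authenticator product: the code combines a shared secret with the current time window, so a stolen password alone is never enough to log in.

```python
import hmac
import hashlib
import struct

def totp(secret: bytes, timestamp: int, digits: int = 6, step: int = 30) -> str:
    """Derive a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    counter = timestamp // step                 # 30-second time window
    msg = struct.pack(">Q", counter)            # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at time 59
print(totp(b"12345678901234567890", 59))  # → 287082
```

Because the code changes every 30 seconds and is derived from a secret that never travels with your password, an attacker who phishes the password still cannot reuse an expired code.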
Chatbots can be useful at work, but we advise you to proceed with caution and always follow your company’s security policies. Never share sensitive or confidential information, so that you don’t fall foul of regulations such as the EU’s GDPR.
The nature of a chatbot means that it will always reveal information about the user, regardless of how the service is used. Even if you use a chatbot through an anonymous account or a VPN, the content you provide over time could reveal enough information for you to be identified or tracked down. Considerable care should be taken before sharing any data, especially if the information is sensitive or business-related.
If you want to discuss your organisation’s security options in more detail with one of Securus’ cybersecurity experts, please feel free to get in touch.