The Cyber Security Risks Associated With ChatGPT

This article covers some of the dangers that come with ChatGPT, such as its ability to generate phishing emails that are more realistic and harder to detect.

Andrew Kedi
3 min read · Jan 28, 2023

Guardrails are said to be in place to prevent AI chatbots from being used for nefarious purposes. Yet ChatGPT, a tool released by OpenAI, is an AI-powered chatbot capable of generating convincing phishing emails and other malicious code.

Cybercriminals are increasingly using the tool to carry out and develop new cyber attacks: its natural language interface scales to threat actors with varying levels of expertise and coding knowledge. By simply entering a query, a user can get commands and suggestions from ChatGPT.
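
To make that interaction concrete, here is a minimal sketch of sending a single query through OpenAI's Python library (the pre-1.0 interface). The model name and the prompt are illustrative assumptions; the point is that the same one-query interface serves defenders and attackers alike.

```python
# Minimal sketch: one query to OpenAI's completion endpoint.
# Assumes the pre-1.0 `openai` library and an API key in the
# OPENAI_API_KEY environment variable; model choice is illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-003",  # illustrative model name
    prompt="List three ways to harden an SSH server.",
    max_tokens=150,
)
print(response["choices"][0]["text"].strip())
```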

Attractive as the tool is, it can also be turned into a powerful weapon by threat actors. Cyber security experts have warned about the risks associated with ChatGPT, such as the generation of convincing phishing emails that could lead to data theft or disruption of services.

Sophisticated phishing emails have become the top attack vector for cybercriminals, and ChatGPT’s ability to reproduce Python-based malware could be turned by threat actors to their advantage. Although many businesses have embraced chatbots, security researchers have already documented threat actors using ChatGPT for malicious purposes, including instances of ransomware development where the published research makes it clear the actor relied on ChatGPT.

While chatbots bring genuine excitement, security experts must remain vigilant and take extra precautions when implementing them. By regularly monitoring a network for suspicious activity or behavior by threat actors, experts can mitigate the threats associated with ChatGPT and other chatbot technologies.
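
As a concrete example of that kind of routine check, the sketch below counts failed SSH logins per source IP in an auth log and flags anything above a threshold. The log path and threshold are assumptions that vary by environment.

```python
# Hedged sketch: flag source IPs with many failed SSH logins.
# The log location and the cutoff below are assumptions.
import re
from collections import Counter

LOG_PATH = "/var/log/auth.log"  # assumed path; differs across distros
THRESHOLD = 10                  # assumed cutoff for "suspicious"

failed = Counter()
pattern = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

with open(LOG_PATH) as log:
    for line in log:
        match = pattern.search(line)
        if match:
            failed[match.group(1)] += 1

for ip, count in failed.most_common():
    if count >= THRESHOLD:
        print(f"Suspicious: {count} failed logins from {ip}")
```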

For example, given the rise of AI chatbots, experts should be aware that less experienced attackers may use chatbot tools to write their own exploits or malware code. To improve defenses, organizations should ensure that employees are adequately trained to identify potentially malicious activity and take appropriate steps to prevent it.
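
To give a flavour of what such training covers, here is a toy sketch that flags two classic phishing indicators: urgency cues in the text and links pointing at raw IP addresses. The keyword list and rules are simplified assumptions, not a production filter.

```python
# Toy illustration of common phishing indicators; the cue list
# and rules below are simplified assumptions, not a real filter.
import re

URGENCY_CUES = ["urgent", "verify your account", "password expires",
                "immediately", "suspended"]

def phishing_indicators(body: str) -> list[str]:
    """Return human-readable reasons a message looks suspicious."""
    reasons = []
    lowered = body.lower()
    for cue in URGENCY_CUES:
        if cue in lowered:
            reasons.append(f"urgency cue: {cue!r}")
    # Links to bare IP addresses are a classic red flag.
    if re.search(r"https?://\d+\.\d+\.\d+\.\d+", body):
        reasons.append("link to a raw IP address")
    return reasons

print(phishing_indicators(
    "URGENT: your password expires today, reset at http://203.0.113.7/reset"
))
```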

Experienced attackers, meanwhile, may use such tools to write more sophisticated malware and to automate parts of exploit development. It is therefore important for organizations to put guardrails in place against these threats, such as enforcing strong passwords and regularly monitoring the network for suspicious activity.
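
As a small example of one such guardrail, the snippet below generates strong random passwords with Python's standard-library secrets module. The length and character set are reasonable defaults, not a mandated policy.

```python
# Sketch: generate a strong random password with the stdlib.
# Length and alphabet are sensible defaults, not a fixed policy.
import secrets
import string

def strong_password(length: int = 20) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(strong_password())
```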

Organizations should also invest in threat intelligence solutions to monitor for malicious actors attempting to use AI chatbots like OpenAI's ChatGPT in cyber attacks. These chatbots can be used to write and build malware and to fool standard security controls, for example by producing convincing phishing emails and fake sites. Tools like ChatGPT can also be used to extract threat intelligence from dark web sources.
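
The sketch below illustrates the basic monitoring idea: compare indicators observed on your own network against a threat-intelligence blocklist. The feed URL is a hypothetical placeholder; a real deployment would consume a vendor or open-source feed.

```python
# Sketch: match locally observed IPs against a threat-intel feed.
# The feed URL is hypothetical; real systems use vendor feeds.
import urllib.request

FEED_URL = "https://example.com/ioc-feed.txt"   # hypothetical feed
observed_ips = {"198.51.100.23", "192.0.2.14"}  # e.g. from firewall logs

with urllib.request.urlopen(FEED_URL) as feed:
    known_bad = {line.strip() for line in feed.read().decode().splitlines()}

for ip in observed_ips & known_bad:
    print(f"Alert: traffic involving known-bad indicator {ip}")
```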

One of the cyber security risks associated with ChatGPT is the difficulty of detecting the sophisticated phishing emails it produces. Traditional detection methods such as spam filters and keyword filtering are no longer sufficient, because AI-generated attacks often lack the typical indicators of phishing. Since large language models like ChatGPT can generate text that is nearly indistinguishable from real human-written text, organizations must fight back with the same technology: combined with behavioral analysis, such models can help distinguish AI-generated messages from human-written ones even when traditional methods fail.
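
A minimal sketch of that detection idea, assuming a small labeled corpus, pairs a text representation with a simple classifier. The inline data is purely illustrative; a real system would need large labeled corpora plus behavioral signals such as send times and link patterns.

```python
# Sketch: score messages as phishing-like with TF-IDF + logistic
# regression. The four inline examples are toy data, nothing more.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Please verify your account immediately or it will be suspended",
    "Attached is the Q3 report we discussed in yesterday's meeting",
    "Your password expires today, click here to reset it now",
    "Lunch on Thursday works for me, see you then",
]
labels = [1, 0, 1, 0]  # 1 = phishing-like, 0 = benign (toy labels)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

# Probability that a new message is phishing-like.
print(model.predict_proba(
    ["Urgent: confirm your credentials to avoid suspension"]
)[:, 1])
```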

Good luck!


Andrew Kedi

MSc Information Security, Certified Linux Administrator (LPIC-1), CISSP. Passionate about cyber security and bug bounty.