ChatGPT, whose developer OpenAI is backed by Microsoft, is currently free for general use as part of a feedback-gathering period (paid subscriptions are expected later).
Cybersecurity firm Check Point Research (CPR) has witnessed attempts by Russian cybercriminals to circumvent OpenAI restrictions in order to use ChatGPT for malicious purposes.
Underground hacking forums discuss how hackers can bypass controls on IP addresses, payment cards, and phone numbers, all of which are required to access ChatGPT from Russia.
CPR shared screenshots of what it observed and warned of the rapidly growing interest among hackers looking to expand their malicious activity using ChatGPT.
“We are currently seeing Russian hackers discussing and confirming ways to get past geofencing in order to use ChatGPT for malicious purposes. We believe these hackers are most likely trying to implement and test ChatGPT in their day-to-day criminal operations,” warns Sergey Shykevich, Threat Intelligence Group Manager at Check Point.
Cybercriminals are taking a growing interest in ChatGPT because the AI technology behind it can make a hacker’s operations more cost-effective.
Just as ChatGPT can be used to help developers code, it can also be used for malicious purposes.
On December 29th, a thread named “ChatGPT – Advantages” appeared on a popular underground hacking forum.
The thread’s publisher revealed that they had been using ChatGPT to try to recreate malware strains and techniques described in research publications and in general write-ups about malware.
On December 21st, a threat actor posted a Python script, emphasizing that it was “the first script he made.”
When another cybercriminal commented that the style of the code resembled OpenAI output, the hacker confirmed that OpenAI had given him “a nice hand to finish the script to the right extent.”
This could mean that would-be cybercriminals with little or no development skill are able to leverage ChatGPT to develop malicious tools and become fully fledged attackers with real technical capability.
Another threat is that ChatGPT can be used to spread misinformation and fake news.
But OpenAI is already vigilant on this front.
Its researchers collaborated with Georgetown University’s Center for Security and Emerging Technology and the Stanford Internet Observatory in the United States to investigate how large language models could be exploited for disinformation purposes.
Improved generative language models open up new possibilities in various fields such as healthcare, law, education, and science.
“However, as with any new technology, it is worth considering how they can be exploited for secret or deceptive undertakings,” says the report that emerged from the workshop, which brought together 30 disinformation researchers, machine learning experts, and policy analysts.
“We believe it is important to analyze the threat of AI-enabled influence operations and outline steps that can be taken before language models are used in large-scale influence operations,” the report said.