New Delhi, January 8: Artificial intelligence (AI)-powered ChatGPT, which provides human-like answers to questions, is also being used by cybercriminals to develop malicious tools that can steal data, a report warns.
The first examples of cybercriminals using ChatGPT to create malicious code were discovered by researchers at Check Point Research (CPR).
In underground hacking forums, threat actors are creating “infostealers” and encryption tools, and facilitating fraudulent activity.
Researchers have warned that cybercriminals’ interest in ChatGPT is growing rapidly, as the tool can help them scale and spread their malicious activity.
“Cybercriminals are attracted to ChatGPT. In recent weeks, we have seen evidence that hackers have started using ChatGPT to write malicious code. ChatGPT provides a good starting point for hackers, and using it may speed up a hacker’s process,” said Sergey Shykevich, Threat Intelligence Group Manager at Check Point.
Just as ChatGPT can be used to help developers code, it can also be used for malicious purposes.
On December 29, a thread named “ChatGPT – Benefits of Malware” appeared on a popular underground hacking forum.
The thread’s publisher revealed that he was experimenting with ChatGPT to recreate malware strains and techniques described in research publications and write-ups about common malware.
“While this individual may be a tech-oriented threat actor, these posts seem to demonstrate, with a real-world example, how less technically competent cybercriminals can readily utilise ChatGPT for malicious purposes,” the report said.
On December 21st, a threat actor posted a Python script, highlighting that it was the first script he ever created.
When another cybercriminal commented that the style of the code resembled OpenAI’s, the hacker confirmed that OpenAI had given him “a nice hand to finish the script to the right extent.”
This could mean that potential cybercriminals with little or no development skills are able to leverage ChatGPT to develop malicious tools and become fully fledged cybercriminals with technical capabilities, the report warns.
“The tools we analysed are quite basic, but it’s only a matter of time until more sophisticated threat actors enhance the way they use AI-based tools,” said Shykevich.
ChatGPT developer OpenAI is reportedly looking to raise capital at a valuation of nearly $30 billion.
Microsoft invested $1 billion in OpenAI and is now pushing ChatGPT applications to solve real-world problems.
(The above article first appeared on LatestLY on January 8, 2023 at 12:06 PM IST. For news and updates on politics, the world, sports, entertainment and lifestyle, log on to latestly.com.)