ChatGPT has caught the attention of people globally. The artificial intelligence (AI) chatbot, created by OpenAI, relies on machine learning to generate text. OpenAI CEO and co-founder Sam Altman has said that ChatGPT is “incredibly limited” and that relying too heavily on it could be a mistake. However, a new report from research firm Check Point suggests that it may be helping hackers write phishing emails and malicious code.
ChatGPT has been praised for generating well-written code. Check Point Research reports that this capability of OpenAI’s new interface to its Large Language Model (LLM) could help cybercriminals with their malicious social engineering attack vectors, especially as the cybersecurity landscape changes rapidly. In its research report, Check Point stresses the importance of remaining vigilant, noting that this new and evolving technology can be used for both good and bad: while it can aid defenders, it can also help threat actors run phishing campaigns and develop malware. Examples of malicious code generated by ChatGPT can be found on Twitter.
What is ChatGPT?
ChatGPT is essentially a variant of OpenAI’s popular GPT-3.5 language-generation software, designed to interact with people. According to OpenAI, some of its characteristics include answering questions, challenging incorrect premises, rejecting inappropriate requests, and even admitting its mistakes.
Automatically generated phishing emails
Researchers at Check Point created “a believable phishing email” using Codex and ChatGPT – much the way some of us have put together assignments. One researcher said, “We didn’t write a single line of code and let the AI do all the work. We just put the pieces of the puzzle together.”