
A treacherous terrain

The use of AI-enabled hacks by malevolent actors to execute cybercrimes is on the rise, creating an urgent need for law-enforcement agencies and the industry to act collaboratively


While your child was engrossed in watching cartoon movies on your phone, a friend called and said they had sent a PDF containing a list of interesting cartoon movie links to your WhatsApp. Eager to watch the new cartoons, your child quickly opened the PDF and clicked on a link. As the animation began to play, a message suddenly appeared on the screen, which annoyed your child. They clicked on "Yes" to make it disappear. A few days later, you received a screenshot of a WhatsApp chat that appeared to be with one of your friends, along with morphed, abusive images of you. On closer inspection, it was apparent that the chat had been deliberately tampered with and modified to harass you. The blackmailer then contacted you, demanding a large sum of money and threatening to release a private video clip to the public if the demand was not met. You were stunned and terrified. Even as your cybersecurity instincts told you not to pay the ransom, the blackmailer sent you the video clip, intensifying the pressure. You had no idea how the blackmailer had obtained a video of such a private moment.

How did the attackers obtain and modify the WhatsApp chat, record video clips, and access images saved on your phone?

The PDF your child opened contained a malicious payload that downloaded onto your phone, infecting it with a malware programme developed using ChatGPT-4 and giving the blackmailer full access to all your files. The phishing link in the PDF carried a payload that requested webcam permission, enabling the attacker to access the camera remotely without the owner's knowledge or consent. The attacker targeted your child because they knew that, even with a malicious payload, the browser's safety policies would not grant webcam access automatically. They therefore deceived your child into granting it by making the permission prompt appear necessary to watch the cartoon.

Moreover, what would be the repercussions for your reputation if the modified WhatsApp chat, your morphed images, and the private video clips were to go viral? I believe you already know the answer.

Implications for social engineering

Social engineering is essentially a "human hack" in which con artists employ psychological ploys to trick and influence people into disclosing private information or performing certain actions. By using such tactics, the hacker can exploit the victim's heightened emotional state to compromise their better judgment and gain access to confidential information.

Implications for phishing

With the advent of ChatGPT-4, cybercriminals can now create phishing emails and messages with grammatically correct and convincing content. Unlike the poorly worded emails of the past, which were often produced by foreign groups with limited proficiency in the target language, ChatGPT-4 can generate persuasive phishing hooks and business email compromise messages.

Implications for malware obfuscation

ChatGPT-4 can be exploited by threat actors to create polymorphic malware that evades traditional signature-based security controls. Security researchers have shown that each query to ChatGPT-4 can return a unique piece of code, allowing multiple mutations of the same malware to be generated and making it challenging to detect.
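To see why such mutations defeat signature matching, consider a minimal, benign sketch (not actual malware, just two harmless print statements): many signature-based tools flag a file by comparing its hash against a database of known samples, so even a trivial change to the code produces a new hash the database has never seen. The signature database and samples below are hypothetical.

import hashlib

# A benign stand-in for a signature database: hash of a known sample -> label.
known_signatures = {}

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

variant_a = b"print('sample payload, version 1')"
variant_b = b"print('sample payload, version 1')  # one harmless comment added"

# The defender has only seen variant_a, so only its hash is on file.
known_signatures[sha256(variant_a)] = "known sample"

for name, sample in (("variant_a", variant_a), ("variant_b", variant_b)):
    verdict = known_signatures.get(sha256(sample), "not detected")
    print(name, verdict)

Running this flags variant_a but lets variant_b through, even though the two behave identically; a generator that rewrites its output on every request exploits exactly this gap.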

Implications for ransomware

In one reported case, a threat actor used ChatGPT-4 to create malware that targeted and compressed common file types before uploading them to an FTP server. Actors with limited technical skills have also produced encryption routines that encrypt every file in a specified directory. These examples demonstrate how even individuals with little expertise can use ChatGPT-4 to build ransomware-like programmes.

Implications for vulnerable code and software

Hackers can use ChatGPT-4 to analyse code and understand what each module does, lowering the barrier to entry for potential adversaries. Researchers have also used ChatGPT-4 to find weaknesses in smart contract code.
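As a rough illustration of the kind of defect such automated review can surface, the sketch below shows a classic weakness in ordinary application code rather than a smart contract: a database query built by string formatting, which lets a crafted input rewrite the query, alongside the parameterised version a code reviewer (human or AI) would typically recommend. The table and function names are hypothetical.

import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable: the username is pasted straight into the SQL text,
    # so input like  alice' OR '1'='1  changes the meaning of the query.
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Safe: the placeholder lets the driver treat the input purely as data.
    return conn.execute("SELECT id, name FROM users WHERE name = ?", (username,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")
print(find_user_unsafe(conn, "alice' OR '1'='1"))  # returns every row
print(find_user_safe(conn, "alice' OR '1'='1"))    # returns nothing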

Implications for misinformation

ChatGPT-4 can be used by rogue actors and political opponents to spread false narratives across multiple accounts in what is called "astroturfing". ChatGPT-4 can mimic humans and produce an endless supply of content on various topics, triggered by keywords or controversial issues. Such bots can post to an indefinite number of social accounts while remaining indistinguishable from human users, elevating disinformation campaigns to a new level.

Securing against social engineering attacks

Social engineering attacks often exploit human error, so staying up to date on fraud-prevention tips and emerging cyber threats is crucial. Then, follow these tips to secure yourself and your family from social engineering attacks:

• Minimise your online presence. Limit the amount of personal information you share on social media to avoid being an easy target for hackers. Even seemingly harmless details like vacation photos or school names can be used against you;

• Install antivirus software to protect against malware, spyware, and ransomware;

• Regularly check your credit report and bank statements for signs of identity theft, such as unfamiliar charges or accounts;

• Use a VPN for secure browsing and shopping online;

• For all of your accounts, activate Multi-Factor Authentication (MFA), ideally using an authenticator app rather than Two-Factor Authentication (2FA) through SMS; a minimal sketch of how authenticator-app codes are generated follows this list.
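To illustrate why authenticator-app codes are preferable to SMS, the sketch below computes a standard time-based one-time password (TOTP, RFC 6238) using only Python's standard library: the phone app and the server each derive the code locally from a shared secret and the current time, so there is no text message for an attacker to intercept or divert through a SIM swap. The secret shown is an arbitrary example, not a real account key.

import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Generate a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period          # moving factor: current 30-second step
    msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Both the phone app and the server hold the same secret, so the six-digit
# code is computed independently on each side and never travels over SMS.
shared_secret = "JBSWY3DPEHPK3PXP"   # example secret, not a real account key
print(totp(shared_secret))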

As the impact of ChatGPT-4 is expected to surge, law enforcement agencies, regulatory bodies, and private businesses must prepare for its potential positive and negative implications for their daily operations. Governments should stress the necessity of enhancing cooperation between law enforcement agencies and the technology industry to counter the rising menace of cybercrime facilitated by advanced AI technologies such as ChatGPT-4. The malevolent use of ChatGPT-4 can inflict severe harm, and it is crucial to create awareness and promptly address any vulnerabilities.

Law enforcement agencies need to comprehend the impact of ChatGPT-4 to anticipate, deter, and investigate its criminal exploitation. Experts are already identifying potential malicious applications of ChatGPT-4 and providing recommendations to help agencies prepare for the misuse of AI-enabled technology. Cybercriminals are already using advanced AI techniques to create fake chatbots, malware bots, and social media accounts. Even a user with no prior knowledge can use ChatGPT-4 to gain a deeper understanding of crime domains such as home invasion, terrorism, cybercrime, and child sexual abuse. ChatGPT-4-based phishing attacks and malware are expected to increase in the near future.

The writer is Head of Department and Assistant Professor, Department of Computer Science & Electronics, Ramakrishna Mission Vidyamandira. Views expressed are personal
