Email Security Firms Witness Over 10X Surge in Email Phishing Attacks Amid ChatGPT’s Emergence

March 2024 by Stocklytics.com

The emergence of generative AI has reshaped many aspects of daily life in a remarkably short period. As the technology spreads, however, email security firms have reported a staggering surge in phishing attacks. According to an analysis by Stocklytics.com, email phishing attacks have increased more than tenfold since the introduction of ChatGPT, with some firms reporting increases as high as 1,265%.

Beyond building AI tools such as WormGPT, Dark Bart, and FraudGPT, which generate malware and are spreading across the dark web, cybercriminals are also exploring ways to exploit OpenAI’s flagship AI chatbot.

Stocklytics financial analyst Edith Reads commented on the data:

Threat actors are using tools like ChatGPT to orchestrate schemes involving targeted email fraud and phishing attempts. These attacks often entice victims to click on deceitful links and disclose information, like usernames and passwords.
Stocklytics Financial Analyst, Edith Reads

AI-Powered Phishing Attacks

During the last quarter of 2022, phishing attacks surged, with cybercriminals sending out approximately 31,000 fraudulent emails daily. This surge represented a 967% increase in credential phishing attempts.

Interestingly, 70% of these phishing attacks were carried out through text-based business email compromise (BEC), while 39% of mobile-targeted attacks were SMS phishing (smishing). The perpetrators leveraged tools like ChatGPT to craft phishing messages that deceive individuals into revealing sensitive information.

In phishing attacks, cybercriminals typically send deceptive emails, texts, or social media messages that appear legitimate. These messages lure victims to fraudulent websites where they authorize transactions from their accounts, resulting in financial losses.
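The lure pattern described above (urgent-sounding text pointing at a deceptive link) is what basic email filters try to flag. As a minimal, illustrative sketch only, and not any vendor's actual product, the following Python snippet scores a message on two hypothetical heuristics: urgency keywords and suspicious-looking URLs. The keyword list, regular expression, and scoring weights are all assumptions chosen for illustration; real filters combine many more signals, such as sender reputation, URL reputation feeds, and trained classifiers.

```python
import re

# Illustrative, hand-picked signals only (assumptions, not a real filter's rules).
URGENCY_WORDS = {"urgent", "verify", "suspended", "immediately", "password"}
SUSPICIOUS_URL = re.compile(
    r"https?://(?:\d{1,3}(?:\.\d{1,3}){3}"  # raw IP address used as host
    r"|[^/\s]*(?:login|secure|account)[^/\s]*\.(?:xyz|top|info))",  # lure words on cheap TLDs
    re.IGNORECASE,
)

def phishing_score(message: str) -> int:
    """Crude risk score: +1 per urgency keyword, +2 per suspicious-looking URL."""
    text = message.lower()
    score = sum(1 for word in URGENCY_WORDS if word in text)
    score += 2 * len(SUSPICIOUS_URL.findall(message))
    return score

if __name__ == "__main__":
    lure = "URGENT: verify your password at http://secure-login.xyz/acct"
    benign = "Meeting notes attached; see you Thursday."
    print(phishing_score(lure), phishing_score(benign))
```

Keyword lists like this are exactly what generative AI helps attackers evade, since ChatGPT-written lures can avoid clichéd phrasing, which is why the experts quoted below argue defenses must also use AI rather than static rules.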

Addressing the Threats

Since cybercriminals leverage generative AI to perpetrate their schemes, Edith suggests that cybersecurity experts should proactively utilize AI technologies to combat these evolving threats.

She stated:

It is vital for companies to integrate AI directly into their security frameworks to monitor all communication channels and neutralize risks consistently.
Stocklytics Financial Analyst, Edith Reads

Despite efforts by AI developers such as OpenAI, Anthropic, and Midjourney to implement measures against the misuse of their platforms for malicious purposes, skilled individuals continue to find ways around these protective barriers.

Recent reports, including one from the RAND Corporation, have raised concerns about the potential misuse of generative AI chatbots by terrorists to learn about carrying out biological attacks. Additionally, researchers have demonstrated how exploiting less commonly tested languages can allow hackers to manipulate ChatGPT into providing instructions for criminal activities.

To tackle these issues, OpenAI has enlisted cybersecurity experts, known as red teams, to pinpoint security weaknesses in its AI systems.

