Silent, Yet Powerful Pandora hVNC, The Popular Cybercrime Tool That Flies Under the Radar


Pandora hVNC is a remote access trojan (RAT) that has been advertised on cybercrime forums since 2021. Surprisingly, it has received little attention from the cybersecurity community. Despite this, it remains a widely used tool and is favoured by many threat actors. Pandora hVNC enables attackers to gain covert control over a victim’s computer. This article will analyse the features of Pandora hVNC.

Pandora hVNC is a type of malware known as a remote access trojan (RAT). As the name suggests, a RAT allows an attacker to gain unauthorised access to a victim’s computer and control it remotely.

Pandora hVNC was initially advertised in 2021 or possibly earlier. However, it has not received significant attention, likely because of its marketing approach and the terms of service established by the seller in the sales thread – as you can see in the image below, the seller is attempting to present Pandora hVNC as a legitimate remote access tool.


Image: The terms of service that have been published on the official Pandora hVNC sales thread

In lower-tier cybercrime forums, there is a widespread belief that selling software with a disclaimer – one that absolves the author of any criminal damage and claims the tool is designed for legitimate purposes – allows the author to avoid prosecution if the software is in fact used maliciously.

An example of this is the Netwire RAT, which was shut down by authorities in March 2023 – its operators also claimed it was a legitimate piece of software.

However, in the case of Pandora hVNC, it is slightly more obvious that this is just a ploy and the tool is designed for malicious use. The author advertises the software on well-known cybercrime forums.

On the first forum where we found the software, it has received over 26,000 views. By examining the sales thread, anyone familiar with the cybercrime landscape can deduce that the software is most likely being used as a remote access trojan, rather than a remote access tool.

The Seller of Pandora hVNC
Throughout the sales thread, the Pandora hVNC seller is seen advising potential or existing customers on the most suitable type of crypter to use with their software.


Image: The seller of Pandora hVNC advising what crypters are compatible with the software

A crypter is a type of software that is capable of encrypting, obfuscating, and manipulating malware to make it harder for security programs to detect.

In the context discussed in the sales thread, there is no legitimate reason to run a crypter on any software. Additionally, throughout the sales thread, users refer to clients infected with this software as ‘victims’.


Image: A user of Pandora hVNC referring to a client as a victim

When you consider the points above, and the fact that the software features a ‘keylogger,’ it’s only logical to conclude that this software was created with malicious intent.

The Pandora hVNC Feature List
Here’s an extensive list of features that come with Pandora hVNC. We’ve added our own descriptions to each feature for easier comprehension. In the official advertisement thread, the seller simply lists the name of each feature.

Scale Resolution: This enables the hacker to adjust the resolution of the victim’s screen to fit their own, facilitating easier control and monitoring.

WebGL Support 100%: Ensures compatibility with modern web technologies, allowing the hacker to interact with complex web applications on the victim’s system.

Hidden Desktop: This allows the attacker to control the victim’s system without any visible signs of intrusion, making detection more difficult.

Reverse Connection: In this method, the victim’s computer initiates the connection to the attacker’s system, which can often bypass firewall restrictions.

Lightweight TCP Server: This feature helps establish a low-latency, reliable connection between the attacker and the victim’s system.

IPV4 / DNS Support: Ensures compatibility with most network configurations.

Access to all applications / Mouse & Keyboard Controls: Allows the attacker to interact seamlessly with the victim’s system as if they were the user.

Encrypted Connection: All communications between the attacker and the victim’s system are encrypted, making detection and analysis by network monitoring tools more difficult.

Browser Profile Cloner Session/ Cookie/ Password/ History: This feature enables the attacker to clone the victim’s browser profile, providing access to sensitive information like saved passwords, browsing history, and cookies.

Process Suspension: Allows the attacker to suspend processes on the victim’s computer, potentially disrupting security software or other applications.

2FA Recovery Bypass (If 2FA App is installed): If the victim uses two-factor authentication (2FA), this feature can potentially bypass it, providing the attacker with access even to accounts protected by 2FA (note that there is some skepticism surrounding this feature though).

CMD/PowerShell Prompt: Provides the attacker with command line access to the victim’s system, allowing them to execute commands directly.

C#/C++ (NET/Native) Crypter Compatibility: Ensures compatibility with commonly used methods for obfuscating malicious code to evade detection by antivirus software.

Reflective Stub Injection (Memory Only, No Disk): This technique loads malicious code directly into memory, bypassing the need for writing to disk and evading many types of antivirus detection.

Quality Adjustment & Image Resize of hVNC: Allows the attacker to adjust the quality of the remote desktop view to account for different network conditions.

Client Information/Note: Provides a way for the attacker to keep track of information about the victim’s system.

hVNC Panel: Offers various controls for the attacker to manage the victim’s system, including monitoring system performance (CPU, RAM, Disk, GPU, Network usage), controlling the screen refresh rate, and more.

Task Manager: Equips the attacker with the ability to manage processes on the victim’s computer, including killing, restarting, and searching for specific processes.

File Manager: Allows the attacker to manage files on the victim’s system, including copying, pasting, deleting, uploading, downloading, renaming, and executing files.

Password Recovery: This feature can extract saved passwords from various web browsers installed on the victim’s system, providing the attacker with access to the victim’s online accounts.

Client PC: Allows the attacker to restart or shut down the victim’s computer.

Download and Execute: This feature allows the attacker to download additional malicious files from a specified URL or disk location and execute them on the victim’s system.

Keylogger: Records the victim’s keystrokes, capturing potentially sensitive information such as passwords and credit card numbers.

The Dangers of Pandora hVNC
It is crucial that we understand the potential harm that can come from the use of Pandora hVNC. This remote access tool allows cybercriminals to secretly take control of victims’ computers, enabling activities like data theft, espionage, and unauthorised access to sensitive systems. That access can then be sold to other cybercriminals to carry out more impactful attacks, like the deployment of ransomware.

In the 2009 GhostNet operation, a vast cyber espionage network targeted the political, economic, and media sectors of 103 countries. Believed to be associated with China, the attackers used phishing emails to deliver RATs that functioned similarly to Pandora hVNC, gaining access to victims’ computers. This allowed surveillance of users and access to sensitive files.

As mentioned previously, it is surprising that Pandora hVNC has not received more attention in the cybersecurity world given its capabilities. The most logical explanation seems to be the developer’s disclaimer that it was intended only for legitimate use.

At one point, the seller tried promoting the tool on a large Russian hacking forum. However, some users ridiculed the inclusion of a disclaimer, seeing it as rather silly.


Image: The seller of Pandora hVNC being mocked on a Russian cybercrime forum

The seller responded by stating that they should “try to understand the rule,” clearly implying the disclaimer is included only to protect themselves, not because Pandora hVNC actually prevents misuse. In reality, it has no built-in safeguards to stop someone from infecting thousands of computers. And without specific reports, there is no way for the developer to know who might be misusing the software, even if they wanted to take action.

Our conclusion is that Pandora hVNC is clearly designed for malicious purposes. It has no safety features built in, users can pay anonymously with cryptocurrency, the tool is exclusively promoted on forums associated with cybercrime, and the seller is fine with people referring to infected machines as “victims”.

Indicators of Compromise (IOC)

Indicator: 39ef11e7673738d7dab6b7396044d003
Type: MD5 hash
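Defenders who want to act on this indicator can sweep endpoints for files matching the published hash. The following is a minimal sketch using only Python's standard library; the directory being scanned is a hypothetical example and would need to be adapted to the environment.

```python
import hashlib
from pathlib import Path

# MD5 hash published above as an indicator of compromise for Pandora hVNC
PANDORA_IOC_MD5 = "39ef11e7673738d7dab6b7396044d003"

def md5_of_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the MD5 digest of a file, reading it in chunks."""
    digest = hashlib.md5()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan_directory(root: str) -> list[Path]:
    """Return any files under `root` whose MD5 matches the IOC."""
    matches = []
    for path in Path(root).rglob("*"):
        if path.is_file():
            try:
                if md5_of_file(path) == PANDORA_IOC_MD5:
                    matches.append(path)
            except OSError:
                continue  # skip unreadable files
    return matches

if __name__ == "__main__":
    # Hypothetical directory to sweep; adjust to your environment
    for hit in scan_directory("./downloads"):
        print(f"Possible Pandora hVNC sample: {hit}")
```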


Your Next Steps With SlashNext
Pandora hVNC, often delivered through phishing, is countered effectively by SlashNext Complete. This platform secures email, mobile, and web communications, especially in Microsoft 365, Zoom, Teams, and Slack.

It employs SlashNext’s Gen AI platform, which is purpose-built to anticipate the vast numbers of malicious links and zero-hour exploits with high accuracy by combining natural language processing, computer vision, and machine learning with relationship graphs and deep contextualization. Click here to schedule a demo or compare it with your current email security for 30 days using our 5-minute setup Observability Mode.

Malicious Use of QR Codes on the Rise Through Quishing Attacks


Quick Response codes – aka QR codes – were first used in 1994 by a Japanese company called Denso Wave, primarily to track parts in the automotive manufacturing process. The QR code’s design allowed it to store more information than traditional barcodes and to be scanned quickly, hence the name “Quick Response.” Over time, QR codes have found applications beyond their original intended use, becoming widely used for many other purposes, including marketing, mobile payments, and information sharing. Their popularity surged with the widespread adoption of smartphones and phone cameras.

Nearly thirty years later, QR codes are still on the rise. In 2022, according to Statista, about “89 million smartphone users in the United States scanned a QR code on their mobile devices.” Usage of mobile QR code scanners is projected to reach over 100 million users in the U.S. by 2025. And according to a 2021 survey of U.S. shoppers cited by Statista, almost half of the respondents reported using a QR code to access marketing or promotional offers.

With success comes cybercrime, and this year threat actors have picked up on QR codes and turned them into malicious quishing (QR code phishing) attacks. Quishing and QRLJacking (quick response code login jacking) exploit user trust and convenience while bypassing security filters. Cybercriminals manipulate QR codes to redirect users to malicious sites that steal login credentials, financial information, and more, or to deliver malware that gains access to the user’s mobile device and its personal and financial data.

Last month, SlashNext Threat Labs published a “malicious usage of QR codes” blog post detailing how threat actors leverage QR codes for various attacks and breach attempts. It defines quishing, a blend of QR codes and phishing techniques, and details how attackers plant malicious QR codes in phishing emails, digital ads, social media posts, and even posters in common areas. The blog also explains QRLJacking, a social engineering method that exploits the “login with QR code” feature available on many legitimate apps and websites, which bad actors abuse to gain full control of victims’ accounts.

The following image illustrates how an email that includes a QR code entices an employee into scanning the code using a mobile device. The QR code then takes the targeted user, via their mobile phone, to a phishing website.

Image: A phishing email containing a malicious QR code

Security experts have noted a recent 50% spike in QR code-based phishing attacks as cybercriminals leverage this tactic to exploit user trust and convenience. Given the growing dependence on QR codes across various sectors and the ease of manipulating them, quishing attacks are highly likely to keep growing in popularity among cybercriminals. With that kind of spike in a single quarter, security vendors and organizations need technology now that can identify malicious QR codes in email and across all messaging channels, including personal email and mobile apps, to stop these threats before they lead to a costly breach.

In a 2022 FBI IC3 (Internet Crime Complaint Center) alert, the FBI noted it had begun receiving a significant number of reports from victims of QR code scams, some of which involved large financial losses, including fraudulent crypto transactions. Today, QR code cybercrime has evolved into many forms, including gift card scams – instead of receiving a free $100 gift card, the QR code takes you to a malicious website. Quishing also uses business email compromise and link/file phishing themes, including these listed in a 2023 HHS white paper:

  • Fraudulent invoices
  • Request for personal information
  • Late payment references with links
  • Fake discounts on products or services
  • False government refunds

Other reasons why quishing is on the rise include:

  • An easy way to bypass Microsoft’s Advanced Threat Protection (ATP) services, Safe Links and Safe Attachments
  • Exploits users’ familiarity with and trust in QR codes
  • Lag in user awareness training
  • No protection on mobile devices

Types of Quishing Protection
The FBI has given a list of ways to protect users from malicious QR code exploits, including the following:

  • Do not scan randomly found QR codes
  • Be suspicious if the QR code scan takes you to a site that asks for a password or login information.
  • Don’t scan QR codes found in emails or text messages unless you know they’re legitimate and call the sender to confirm.
  • If the QR code looks like it has been physically tampered with (for example, in a poster), don’t use it.
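Beyond these manual precautions, organizations can automate a first-pass review of URLs decoded from QR codes before anyone visits them. The sketch below applies a few simple heuristics (non-HTTPS schemes, raw IP hosts, URL shorteners, login-themed paths) to decide whether a decoded link deserves closer scrutiny; the heuristics and shortener list are illustrative assumptions rather than a complete ruleset.

```python
import ipaddress
from urllib.parse import urlparse

# Illustrative list of common URL shorteners; extend as needed.
SHORTENERS = {"bit.ly", "tinyurl.com", "t.co", "goo.gl", "is.gd"}
LOGIN_HINTS = ("login", "signin", "verify", "password", "account")

def is_ip_host(host: str) -> bool:
    """True if the host part of the URL is a raw IP address."""
    try:
        ipaddress.ip_address(host)
        return True
    except ValueError:
        return False

def needs_review(decoded_url: str) -> bool:
    """Return True if a URL decoded from a QR code looks risky enough to review."""
    parsed = urlparse(decoded_url)
    host = (parsed.hostname or "").lower()
    if parsed.scheme != "https":
        return True                      # plain HTTP or unusual schemes
    if is_ip_host(host) or host in SHORTENERS:
        return True                      # raw IPs and shorteners hide the destination
    path = parsed.path.lower()
    return any(hint in path for hint in LOGIN_HINTS)

print(needs_review("http://bit.ly/free-gift"))        # True
print(needs_review("https://www.example.com/menu"))   # False
```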

SlashNext QR Code Phishing Protection – An Industry First
We’ve been working on QR code phishing for some time and we’re now delivering the “industry’s first multi-channel quishing protection to block malicious QR codes in all business and personal messaging channels with 99% accuracy.”

SlashNext’s QR Code Phishing Protection leverages patent-pending computer vision and a new natural language processing (NLP) classifier to block the delivery of messages containing malicious QR codes and block malicious QR code URLs scanned by users’ mobile phones. Its ability to detect malicious intent in both the QR code itself as well as the accompanying message makes it the most comprehensive and accurate protection against QR-code based attacks.

The QR code phishing protection stops quishing, QRLJacking and other scams distributed via malicious QR codes. It’s the first security solution to offer multi-channel quishing protection that blocks malicious QR codes in email, mobile, web and messaging channels such as Slack, iMessage, Microsoft Teams, and more.

You can find more information in the press release.

Contact SlashNext for a demo or read more on our website about QR Code Phishing Protection capabilities.

Scam or Mega Chatbot? Investigating the New AI Chatbot Called Abrax666


An in-depth investigation of a new AI chatbot called Abrax666 advertised on cybercrime forums reveals multiple red flags suggesting it’s likely a scam. With a negative review after communication, no seller deposit, exaggerated capabilities claimed, and zero evidence of satisfied customers, we judge that Abrax666 has no credibility as a real product.

SlashNext monitors cybercrime platforms and forums on a daily basis in order to better understand cybercriminal activity and provide assistance to the broader cybersecurity community.

Last week, a thread promoting a new AI chatbot variant named Abrax666 surfaced on a major Russian cybercrime forum. It was posted by a user named ‘Abrax’. At first glance, the original post lacked technical details about Abrax666’s purported capabilities.


Image: A screenshot of Abrax666 being advertised on a Russian cybercrime forum.

We explored other platforms to learn more about where Abrax666 was advertised for sale. Our research uncovered a public GitHub repository detailing Abrax666’s claimed features, including call spoofing, malware creation, and phishing.


Image: A screenshot of the public Abrax666 GitHub repository.

However, upon returning to the original forum, we discovered a different thread aimed at selling Abrax666. Further investigation revealed that this was an older attempt by ‘Abrax’ to promote Abrax666 on this particular forum. It should be noted that this thread was closed because ‘Abrax’ did not make the required security deposit to sell products there.


Image: A screenshot of ‘Abrax’s’ thread being closed.

 

In this thread, a user named ‘SocketSilence’ attempted to test Abrax666 and left a detailed review expressing their scepticism about ‘Abrax’s’ claims. This was based on various inconsistencies they noticed while communicating with ‘Abrax’.


Image: A review left by ‘SocketSilence’ detailing their negative experience with ‘Abrax’.

Additionally, throughout our wider investigation across platforms, we could not find any evidence of satisfied Abrax666 customers. We conclude that Abrax666 is likely a scam for several reasons:

  • The lack of a required security deposit on the forum is suspicious and abnormal for a legitimate seller, implying ‘Abrax’ could not or did not want to complete this standard verification.
  • The technical claims of what Abrax666 can allegedly do are wide-ranging – almost too wide-ranging for it to be an effective piece of malware. The advertisement claims nearly 100 unique features, which is a bold claim.
  • There is a complete absence of evidence that Abrax666 has ever been sold or used successfully. This strongly implies it is non-functional or fake.
  • ‘Abrax’ has attempted to sell Abrax666 on other cybercrime forums, but all of those threads have since been removed (possibly because of forum policy violations).


Image: A screenshot of ‘Abrax’s’ deleted forum thread.

The only potentially credible evidence, and the reason we slightly reserve our verdict here, is a set of videos circulated by ‘Abrax’ that allegedly show the AI chatbot in use. However, even these videos do not appear to showcase the output one would expect from an AI chatbot of this nature. The output looks more like that of a conventional tool that is not capable of real-time communication and accepts arguments and flags rather than prompts.

While we cannot fully disprove Abrax666 without hands-on analysis, our investigation found no credible evidence that this advertised AI chatbot actually exists. We will continue monitoring for any new evidence, but we currently judge Abrax666 to be a likely scam attempt. Caution should be exercised before assuming that new AI variants promoted in cybercrime circles are genuine threats.

Note: During the creation of this article, the GitHub repository was removed for an unknown reason. Additionally, everything written here refers to what we discovered prior to October 31st, 2023. To some extent, we reserve our judgement because evidence could emerge in the future that contradicts much of what we have written here; we will update this article if new material becomes available.

One Step Ahead

SlashNext Complete provides real-time threat detection with unmatched accuracy to identify malicious email, mobile, and website threats. To request a demo, click here. Alternatively, you can watch a video of it in action by clicking here.

About the Author

Daniel Kelley is a reformed black hat computer hacker who collaborated with our team at SlashNext to research the latest threats and tactics employed by cybercriminals, particularly those involving BEC, phishing, smishing, social engineering, ransomware, and other attacks that exploit the human element.

 

Exploring The Malicious Usage of QR Codes


Discover the history, types, and threats of QR codes, including quishing and QRLJacking. Learn why QR phishing is effective and how it exploits user trust, convenience, and bypasses security filters.

Understanding QR Codes: A Brief History

QR codes, or quick response codes, have become ubiquitous in recent years. These two-dimensional barcodes were invented by a Japanese automobile manufacturing company in 1994 and were initially used to track vehicle parts during the manufacturing process. However, it wasn’t until the smartphone era that QR codes gained widespread popularity.

QR codes offer several advantages over traditional barcodes, including the ability to store a large volume of data, the ability to be scanned even if partially damaged, and the convenience and speed of data transmission. As a result, QR codes have found their way into various aspects of our lives, from advertising and marketing campaigns to contactless payments and accessing websites with ease. It’s now more common to see restaurant menus on a QR code rather than traditional printed menus. 

Types of QR Codes: Static vs. Dynamic

There are two main types of QR codes: static and dynamic. Static QR codes contain information that does not change, such as a website URL or contact information. On the other hand, dynamic QR codes allow for the updating or changing of the data they store. These codes redirect users to a unique URL that points to a server where the information is stored, making them ideal for situations where content needs to be frequently updated, such as event details or promotional offers.
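To make the static/dynamic distinction concrete, the short sketch below generates a static QR code: the destination URL is encoded directly in the image, so changing the destination later means producing and redistributing a new code. A dynamic code would instead encode a short redirect URL whose target can be updated on the server. This assumes the third-party qrcode Python package is installed, and the URLs are purely illustrative.

```python
import qrcode

# Static QR code: the payload below is encoded directly in the image.
# To "update" it, you would have to generate and distribute a new image.
payload = "https://example.com/menu"  # illustrative URL

img = qrcode.make(payload)
img.save("static_menu_qr.png")

# A dynamic QR code would instead encode something like
# "https://qr.example.com/r/abc123", a short redirect whose target
# the owner can change server-side without touching the printed code.
```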

Quishing: QR Code Threats

Quishing, a blend of QR code and phishing techniques, manipulates QR codes for cyberattacks. It has emerged as a growing threat with the widespread adoption of QR codes during the pandemic.

A typical quishing attack involves the following steps:

  1. The attacker generates a QR code embedded with either a phishing link or malware download.
  2. The malicious QR code gets distributed through various channels like phishing emails, ads, social media, restaurant menus, posters, etc.
  3. A victim scans the fake code, believing it came from a legitimate source.
  4. The scan either sends the user to a phishing site to steal login credentials, financial data, or other sensitive information. Or it downloads malware onto the device to compromise it.
  5. Through phishing or malware installation, the attacker gains access to the victim’s data or infected device.
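From a defender's perspective, one practical control is to extract and inspect the URL embedded in a suspicious QR code before it is ever scanned with a phone. The sketch below decodes QR codes found in an image (for example, one lifted from an email) and prints their payloads for review. It assumes the third-party pyzbar and Pillow packages, plus the underlying zbar library, are installed; the file name is purely illustrative.

```python
from PIL import Image
from pyzbar.pyzbar import decode

def extract_qr_payloads(image_path: str) -> list[str]:
    """Decode every QR code in an image and return the embedded strings."""
    results = decode(Image.open(image_path))
    return [r.data.decode("utf-8", errors="replace") for r in results]

if __name__ == "__main__":
    # Illustrative file name: a QR image extracted from a suspicious email
    for payload in extract_qr_payloads("suspicious_email_qr.png"):
        # Inspect the URL (domain, path, redirects) before anyone visits it.
        print("Embedded payload:", payload)
```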

Quishing exploits the convenience of QR codes for large-scale attacks. In August 2023, researchers uncovered a phishing campaign that used malicious QR codes to target a large number of companies, including a major U.S. energy firm.

Photo: A screenshot of a phishing email containing a malicious QR code

QRLJacking: Session Attacks

QRLJacking, or quick response code login jacking, is a social engineering method that exploits the “login with QR code” feature used by many apps and websites. It can lead to full account hijacking. A typical QRLJacking attack unfolds as follows:

  1. The attacker creates a phishing site mimicking the login page of the target app or site, including a fake QR code they control.
  2. The attacker sends the phishing link to victims via email, SMS, messaging apps, etc.
  3. The victim opens the link on their mobile device and scans the fake QR code, believing it is legitimate.
  4. Scanning logs the victim into the attacker’s phony session instead of the real app.
  5. The app associates the victim’s account with the attacker’s session and sends sensitive data like access tokens.
  6. The attacker gains full control of the victim’s account.

QRLJacking takes advantage of frequently expiring QR login codes. The attacker extracts real-time codes using tools like the Evil QR browser extension to maintain an active phishing session.

In an August 2023 blog post, cybersecurity researcher Cristian ‘void’ Giustini highlighted an instance of QRLJacking against the online gaming platform, Steam. The attacker created a phishing site pointing to Steam’s login portal, which included a QR code, and once scanned by a victim, the attacker could hijack their account.

 

Photo: A screenshot of Steam’s ‘login with QR code’ functionality

QR Code: Cybercrime Tactics

Cybercriminals often share tutorials and strategies on forums, guiding each other on how to execute successful cyberattacks. In one thread, a user provides a brief guide explaining how to use QR codes for phishing. The tool they link to appears to be an open-source project.

Interestingly, we’ve also found instances of quishing being offered as a specific technique by phishing-for-hire services.


Photo: A screenshot of a cybercrime phishing service referencing QR codes

QR Phishing: Effective Attack Vector

Here are some key reasons why QR phishing is an effective attack vector:

User Trust: QR codes have gained widespread trust among users, who often perceive them as safe and legitimate. This trust can be exploited by hackers to deceive users into scanning malicious QR codes.

Convenience and Speed: Scanning a QR code is quick and convenient, offering users instant access to information or services. This convenience can lead to users letting their guard down and not thoroughly evaluating the content of the QR code before scanning it.

Bypassing Security Filters: Traditional security filters, including Microsoft SafeLinks and other URL rewriting solutions, often focus on URLs. By using QR codes instead, attackers can sidestep these filters, making their phishing attempts more likely to succeed.

No Mobile Protection: Many organizations prioritize desktop and network security, leaving mobile devices comparatively vulnerable. Recognizing this, attackers focus on mobile devices, especially given the rise in mobile-oriented functionalities like QR code scanning. Often, the attack may originate from an email, but the final exploitation occurs on a mobile device with lesser defenses.

Wide Range of Applications: QR codes are used in various contexts, such as marketing campaigns, ticketing systems, and contactless payments. This wide range of applications provides hackers with numerous opportunities to exploit QR codes for their malicious purposes.

 

While comprehensive statistics are still emerging, security experts have noted a recent spike in QR code-based phishing attacks as cybercriminals leverage this tactic to exploit user trust and convenience. Given the growing dependence on QR codes across various sectors and the ease of manipulating them, it is highly likely that quishing attacks will continue to increase in popularity among cybercriminals.

Take Your Next Steps With SlashNext

Are you in need of strong protection against malicious QR codes?

SlashNext Complete provides real-time threat detection with unmatched accuracy to identify malicious email, mobile, and website threats. To request a demo, click here. Alternatively, you can watch a video of it in action by clicking here.

Exploring the World of AI Jailbreaks


Explore AI jailbreaking and discover how users are pushing ethical boundaries to fully exploit the capabilities of AI chatbots. This blog post examines the strategies employed to jailbreak AI systems and the role of AI in cybercrime.

The Emergence of AI Jailbreaks

In recent years, AI chatbots like ChatGPT have made significant advancements in their conversational abilities. These sophisticated language models, trained on vast datasets, can generate coherent and contextually appropriate responses. However, some users have identified vulnerabilities and are exploiting them to “jailbreak” AI chatbots, effectively evading the inherent safety measures and ethical guidelines.

While AI jailbreaking is still in its experimental phase, it allows for the creation of uncensored content without much consideration for the potential consequences. This blog post sheds light on the growing field of AI jailbreaks, exploring their mechanisms, real-world applications, and the potential for both positive and negative effects.

The Mechanics of Chatbot Jailbreaking

In simple terms, jailbreaks take advantage of weaknesses in the chatbot’s prompting system. Users issue specific commands that trigger an unrestricted mode, causing the AI to disregard its built-in safety measures and guidelines. This enables the chatbot to respond without the usual restrictions on its output.


Photo: A screenshot of a jailbroken ChatGPT session

Jailbreak prompts can range from straightforward commands to more abstract narratives designed to coax the chatbot into bypassing its constraints. The overall goal is to find specific language that convinces the AI to unleash its full, uncensored potential.

The Rise of Jailbreaking Communities

AI jailbreaking has given rise to online communities where individuals eagerly explore the full potential of AI systems. Members in these communities exchange jailbreaking tactics, strategies, and prompts to gain unrestricted access to chatbot capabilities.


Photo: A screenshot of a community discussing jailbreak prompts

The appeal of jailbreaking stems from the excitement of exploring new possibilities and pushing the boundaries of AI chatbots. These communities foster collaboration among users who are eager to expand the limits of AI through shared experimentation and lessons learned.

An Example of a Successful Jailbreak

Jailbreak prompts have been developed to unlock the complete potential of various chatbots. One prominent illustration is the “Anarchy” method, which utilizes a commanding tone to trigger an unrestricted mode in AI chatbots, specifically targeting ChatGPT.


Photo: A screenshot of a jailbroken ChatGPT session using the “Anarchy” method

By inputting commands that challenge the chatbot’s limitations, users can witness its unhinged abilities firsthand. Above, you can find an example of a jailbroken session that offers insights into enhancing the effectiveness of a phishing email and augmenting its persuasiveness.

Recent Custom AI Interfaces for Anonymity

The fascination with AI jailbreaking has also attracted the attention of cybercriminals, leading to the development of malicious AI tools. These tools are advertised on forums associated with cybercrime, and the authors often claim they leverage unique large language models (LLMs).

The trend began with a tool called WormGPT, which claimed to employ a custom LLM. Subsequently, other variations emerged, such as EscapeGPT, BadGPT, DarkGPT, and Black Hat GPT. Nevertheless, our research led us to the conclusion that the majority of these tools do not genuinely utilize custom LLMs, with the exception of WormGPT.

Instead, they use interfaces that connect to jailbroken versions of public chatbots like ChatGPT, disguised through a wrapper. In essence, cybercriminals exploit jailbroken versions of publicly accessible language models like OpenGPT, falsely presenting them as custom LLMs.


Photo: A screenshot of a conversation with EscapeGPT’s author

During a conversation with the developer of EscapeGPT, it was confirmed that the tool does in fact serve as an interface to a jailbroken version of OpenGPT.

This means that the only real advantage of these tools is the anonymity they provide to users. Some of them offer unauthenticated access in exchange for cryptocurrency payments, enabling users to easily exploit AI-generated content for malicious purposes without revealing their identities.

Thoughts on the Future of AI Security

Looking into the future, as AI systems like ChatGPT continue to advance, there is growing concern that techniques to bypass their safety features may become more prevalent. However, a focus on responsible innovation and enhancing safeguards could help mitigate potential risks.

Organizations like OpenAI are already taking proactive measures to enhance the security of their chatbots. They conduct red team exercises to identify vulnerabilities, enforce access controls, and diligently monitor for malicious activity.

However, AI security is still in its early stages as researchers explore effective strategies to fortify chatbots against those seeking to exploit them. The goal is to develop chatbots that can resist attempts to compromise their safety while continuing to provide valuable services to users.

Secure Your Organization From Threats

Experience a personalized demo and see how SlashNext effectively curbs these types of threats. Click here to learn more or effortlessly evaluate the effectiveness of your current email security with our hassle-free 5-minute setup Observability Mode, without any impact on your existing email infrastructure.

How Cybercriminals Abuse Airbnb For Fraudulent Activities


Cyberattacks are becoming increasingly common and sophisticated. One particular concern is the rising misuse of popular platforms like Airbnb. This blog post highlights how cybercriminals exploit Airbnb for fraudulent activities.

Cybercriminals Target Travelers

Cybercriminals are constantly developing new ways to exploit popular online platforms for malicious purposes. With over 7 million global listings in 100,000 active cities, Airbnb has become a favorite target for these criminals.

While the platform offers affordable and comfortable accommodations for travelers, its popularity has also made it vulnerable to cybercriminals, fraudulent hosts, fake accounts, and other scams. This blog post will explore how cybercriminals exploit Airbnb and its users.

Inside The World of Stealers

In order to understand how cybercriminals exploit Airbnb, it’s crucial to comprehend the methods they use to gain unauthorized access to accounts. Cybercriminals often employ malicious software known as “stealers” to obtain information such as usernames and passwords. These stealers, which are a type of malware, infiltrate devices and transmit stolen data (also known as logs) to attackers. Typically, the logs are sent to a server, but in some cases, they can be delivered through e-mail and secure chat programs like Telegram.

Stealers can be deployed through a variety of different techniques, including social engineering, exploiting software vulnerabilities, malvertising, and more.

Additionally, there’s an underground market where cybercriminals can buy and sell device access (also known as bots, installs, or infections) in large quantities.


Photo: A screenshot of a cybercriminal offering bots for sale on a forum

Cybercriminals who are willing to invest money can approach a bot seller or shop and start deploying their stealer on thousands, or even tens of thousands of devices, right away.


Photo: A screenshot of the different stealers available on a prominent cybercrime forum

Stealers can compromise most browsers, and they primarily target web-application account information. The logs usually follow a particular format, which includes multiple columns with rows of data that encompass various pieces of information, such as names, credit or debit card details, and more. In addition to capturing login credentials, stealers can also exfiltrate cookies.

The Importance of Cookies

Cookies are small data files stored on a user’s device that contain information about their browsing activity and preferences on a particular website. Cybercriminals often steal, buy, and sell Airbnb account cookies on various online forums. By doing so, they can gain temporary access to Airbnb accounts without needing the relevant usernames and passwords.


Photo: A screenshot of a user on a cybercrime forum looking to purchase Airbnb cookies

For instance, cybercriminals can purchase databases of stolen Airbnb cookies from compromised accounts, load them into their browser, and gain access to victims’ accounts. With this illicit access, they can impersonate real users and book properties or perform other unauthorized actions without raising any alerts. However, it’s crucial to note that most session cookies expire quickly, so cybercriminals must act fast before the session expires.

The Lucrative Services Available

Once cybercriminals gain access to user account information or obtain stolen cookies, their next objective is often to profit from the data. One standard method is to sell the compromised account information or stolen cookies directly to other cybercriminals.

This can be accomplished through advertising on online forums or by uploading rows of data to popular stores that facilitate these types of transactions.

Photo: A screenshot of a popular cybercrime store offering Airbnb accounts for sale

At the time of writing this blog post, there are thousands of Airbnb accounts available for purchase on the digital store referenced above. Shockingly, the large number of stolen accounts has devalued each account to just one dollar.

In fact, the scale of Airbnb account theft is so significant that attackers sell “account checkers,” which are automated programs that rapidly test Airbnb accounts located in a text file.

Photo: A screenshot of an advertisement for an Airbnb account checker

The concept behind these account checkers is relatively simple. Attackers can load a text file filled with stolen credentials into the checker, verifying which credentials are valid and which are not. Some checkers can also perform specific actions, such as making bookings.

Cybercriminals also offer services that provide up to a 50% discount on Airbnb bookings.


Photo: A screenshot of a service that offers a 50% discount on all Airbnb bookings

It’s clear that these services are profitable because the forum threads advertising them have received tens of thousands of views and hundreds of replies.

In conclusion, cybercriminals have discovered various methods to exploit Airbnb for fraudulent activities by using stealers and stolen cookies to gain unauthorised access to user accounts. The compromised information is then sold to other cybercriminals or used to offer discounted services to buyers. The scale of account theft is substantial, with thousands of Airbnb accounts available for purchase on digital stores for as low as one dollar. It is essential to be aware of the risks and take necessary precautions to protect personal information from such cyber threats.

Browser Security With SlashNext

If you’re looking for integrated browser security, we have a solution for you. Our HumanAI technology offers real-time detection with exceptional accuracy that identifies malicious websites. By blocking threats in real time, we can safeguard users from malicious content that bypasses multi-layer enterprise defenses.

To book a demo, click here or watch the short video to learn more about our product: SlashNext Browser Phishing Protection

AI-Based Cybercrime Tools WormGPT and FraudGPT Could Be The Tip of the Iceberg


The rise of AI-powered cybercrime tools like WormGPT and FraudGPT has significant implications for cybersecurity as the future of malicious AI is rapidly developing daily. Learn about the tools, their features, and their potential impact on the digital landscape.

The Rise of AI-Powered Cybercrime: WormGPT & FraudGPT

On July 13th, we reported on the emergence of WormGPT, an AI-powered tool being used by cybercriminals. The increasing use of AI in cybercrime is a concerning trend that we are closely monitoring.

Less than two weeks later, on July 25th, the security community discovered another AI-driven tool called FraudGPT. This noteworthy development was announced by Netenrich. The tool appears to be promoted by a person who goes by the name “CanadianKingpin12.”

The Rise of FraudGPT

FraudGPT is being promoted as an “exclusive bot” that has been designed for fraudsters, hackers, spammers, and like-minded individuals. It boasts an array of features:

Photo: A screenshot from a cybercrime forum showcasing FraudGPT’s features.

Users are instructed to make contact through Telegram.

Based on SlashNext’s research, “CanadianKingpin12” initially tried to sell FraudGPT on lower-level cybercrime forums accessible on the clear-net. The clear-net, which refers to the general internet, provides easy access to websites and content through search engines. It’s worth noting, however, that many of the threads created by “CanadianKingpin12” to sell FraudGPT have been removed from these forums.

 

Photo: An image capturing a deleted forum thread promoting FraudGPT.

The user “CanadianKingpin12” has been banned on one particular forum due to policy violations and as a result, has had all of their threads and posts removed.

Photo: FraudGPT’s account being banned on a forum associated with cybercrime.

This could suggest that they encountered challenges in launching FraudGPT, leading them to opt for Telegram to ensure decentralisation and prevent further thread bans. It’s important to note that many clear-net forums prohibit discussions of “hard fraud,” which is likely how FraudGPT is categorised due to its promotional approach that specifically focuses on fraud.

SlashNext obtained a video that is being shared among buyers, showcasing the concerning potential of FraudGPT:

Video: A video circulating on cybercrime forums shared by FraudGPT’s author.

As observed above, this tool has a diverse range of capabilities, including the ability to craft emails that can be used in business email compromise (BEC) attacks. We confirmed this with WormGPT as well.

Future Predictions for Malicious AI

During our investigation, we took on the role of a potential buyer to dig deeper into “CanadianKingpin12” and their product, FraudGPT. Our main objective was to assess whether FraudGPT outperformed WormGPT in terms of technological capabilities and effectiveness.

When we asked “CanadianKingpin12” about their perspective on WormGPT versus FraudGPT, they strongly emphasised the superiority of FraudGPT, which, they explained, shares foundational similarities with WormGPT.

While they didn’t explicitly admit to being responsible for both, it does seem like a plausible scenario because, throughout our communication, it became clear that they could facilitate the sale of both products. Furthermore, they revealed that they are currently developing two new bots that are not yet available to the public:

Photo: A discussion taking place between SlashNext security researchers and FraudGPT’s author on Telegram.

They informed us that “DarkBART” and “DarkBERT”, the new bots they developed, will have internet access and can be seamlessly integrated with Google Lens. This integration enables the ability to send text accompanied by images.

Photo: A discussion taking place between SlashNext security researchers and FraudGPT’s author on Telegram.

We’d like to clarify something here, however: During our exchange with “CanadianKingpin12,” we encountered conflicting information. Initially, they mentioned their involvement in the development of a bot named “DarkBERT,” but later claimed to simply have access to it. While there is some speculation in this comment, our belief is that “CanadianKingpin12” has managed to leverage a language model called “DarkBERT” for malicious use.

A Brief Explanation of “DarkBERT”

In the context of this blog post, “DarkBERT” carries two distinct meanings. Firstly, it refers to a tool currently being developed by “CanadianKingpin12.” Secondly, it refers to a pre-trained language model created by S2W, a data intelligence company, which underwent specialised training on a vast corpus of text from the dark web. S2W’s version of “DarkBERT” gained significant attention a few months ago. Upon closer examination, it becomes evident that its primary objective is to combat cybercrime rather than facilitate it.

However, what makes this intriguing is that “CanadianKingpin12” shared a video showcasing a tool named “DarkBERT,” which appears to have been deliberately configured and designed for malicious purposes. This discrepancy raises concerns behind the use of “DarkBERT” in this context and suggests that “CanadianKingpin12” may be exploiting S2W’s version of “DarkBERT” while misleadingly presenting it as their own creation.

 

Video: A video that “CanadianKingpin12” shared with us showcasing “DarkBERT”

If this is indeed the case, and “CanadianKingpin12” did succeed in gaining access to the language model “DarkBERT”, it prompts the question of how they would have actually obtained it. Interestingly, the researchers responsible for its development are providing access to academics. Let’s take a closer look at what they have to say about the process of granting access.

“DarkBERT is available for access upon request. Users may submit their request using the form below, which includes the name of the user, the user’s institution, the user’s email address that matches the institution (we especially emphasize this part; any non-academic addresses such as gmail, tutanota, protonmail, etc. are automatically rejected as it makes it difficult for us to verify your affiliation to the institution) and the purpose of usage.”

Based on our understanding of the cybercrime landscape, meeting the criteria mentioned above wouldn’t be overly challenging. All that “CanadianKingpin12” or anyone associated with them would need to do is acquire an academic email address.

These email addresses can be easily obtained for as little as $3.00 on forums affiliated with cybercrime. Acquiring them is a relatively straightforward process:

Photo: A screenshot of a user selling academic email addresses for as low as $3.00 on a forum associated with cybercrime.

Now this is just one theory among many, and we didn’t actually confirm their access. However, it’s a highly plausible scenario that deserves consideration.

The Implications of WormGPT, FraudGPT, and DarkBERT

To fully grasp the implications of all of this, let’s delve into the significance. Take a moment to revisit the second video shared by “CanadianKingpin12.” In the video, “DarkBERT” was questioned about its potential utilisation by cybercriminals, bringing attention to several concerning capabilities:

  • Assisting in executing advanced social engineering attacks to manipulate individuals.
  • Exploiting vulnerabilities in computer systems, including critical infrastructure.
  • Enabling the creation and distribution of malware, including ransomware.
  • The development of sophisticated phishing campaigns for stealing personal information.
  • Providing information on zero-day vulnerabilities to end-users.

While it’s difficult to accurately gauge the true impact of these capabilities, it’s reasonable to expect that they will lower the barriers for aspiring cybercriminals. Moreover, the rapid progression from WormGPT to FraudGPT and now “DarkBERT” in under a month, underscores the significant influence of malicious AI on the cybersecurity and cybercrime landscape.

Additionally, we anticipate that the developers of these tools will soon offer application programming interface (API) access. This advancement will greatly simplify the process of integrating these tools into cybercriminals’ workflows and code. Such progress raises significant concerns about potential consequences, as the use cases for this type of technology will likely become increasingly intricate.

Defending Against AI-Powered BEC Attacks

Protecting against AI-driven BEC attacks requires a proactive approach. Companies should provide BEC-specific training to educate employees on the nature of these attacks and the role of AI. Enhanced email verification measures, such as strict processes and keyword-flagging, are crucial. As cyber threats evolve, cybersecurity strategies must continually adapt to counter emerging threats. A proactive and educated approach will be our most potent weapon against AI-driven cybercrime.
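As a rough illustration of the keyword-flagging idea mentioned above, the sketch below scores an inbound message against a small list of common BEC pressure phrases and flags it for manual review once a threshold is met. The phrase list and threshold are illustrative assumptions, not a production ruleset; a real deployment would combine this with sender verification and other signals.

```python
# Illustrative BEC indicator phrases; a real deployment would tune and expand these.
BEC_PHRASES = [
    "urgent wire transfer",
    "update our banking details",
    "process the payment today",
    "are you available right now",
    "keep this confidential",
    "overdue invoice",
]

def bec_risk_score(subject: str, body: str) -> int:
    """Count how many indicator phrases appear in the message text."""
    text = f"{subject}\n{body}".lower()
    return sum(1 for phrase in BEC_PHRASES if phrase in text)

def should_flag(subject: str, body: str, threshold: int = 1) -> bool:
    """Flag the message for manual review when the score meets the threshold."""
    return bec_risk_score(subject, body) >= threshold

# Example: a message pressuring for a same-day banking change gets flagged.
print(should_flag(
    "Overdue invoice",
    "Please update our banking details and process the payment today.",
))  # True
```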

Empowering Companies to Stay One Step Ahead

Experience a personalised demo and discover how SlashNext effectively mitigates BEC threats using Generative AI. Click here to learn more or effortlessly assess the efficiency of your current email security with our hassle-free 5-minute setup Observability Mode, without any impact on your existing email infrastructure.

About the Author

Daniel Kelley is a reformed black hat computer hacker who collaborated with our team at SlashNext to research the latest threats and tactics employed by cybercriminals, particularly those involving BEC, phishing, smishing, social engineering, ransomware, and other attacks that exploit the human element.

WormGPT – The Generative AI Tool Cybercriminals Are Using to Launch Business Email Compromise Attacks


In this blog post, we delve into the emerging use of generative AI, including OpenAI’s ChatGPT, and the cybercrime tool WormGPT, in Business Email Compromise (BEC) attacks. Highlighting real cases from cybercrime forums, the post dives into the mechanics of these attacks, the inherent risks posed by AI-driven phishing emails, and the unique advantages of generative AI in facilitating such attacks.

How Generative AI is Revolutionising BEC Attacks

The progression of artificial intelligence (AI) technologies, such as OpenAI’s ChatGPT, has introduced a new vector for business email compromise (BEC) attacks. ChatGPT, a sophisticated AI model, generates human-like text based on the input it receives. Cybercriminals can use such technology to automate the creation of highly convincing fake emails, personalised to the recipient, thus increasing the chances of success for the attack.

Hackers Guide to Sending BEC

Consider the first image above, where a recent discussion thread unfolded on a cybercrime forum. In this exchange, a cybercriminal showcased the potential of harnessing generative AI to refine an email that could be used in a phishing or BEC attack. They recommended composing the email in one’s native language, translating it, and then feeding it into an interface like ChatGPT to enhance its sophistication and formality. This method introduces a stark implication: attackers, even those lacking fluency in a particular language, are now more capable than ever of fabricating persuasive emails for phishing or BEC attacks.

Hackers Forum

Moving on to the second image above, we’re now seeing an unsettling trend among cybercriminals on forums, evident in discussion threads offering “jailbreaks” for interfaces like ChatGPT. These “jailbreaks” are specialised prompts that are becoming increasingly common. They refer to carefully crafted inputs designed to manipulate interfaces like ChatGPT into generating output that might involve disclosing sensitive information, producing inappropriate content, or even executing harmful code. The proliferation of such practices underscores the rising challenges in maintaining AI security in the face of determined cybercriminals.

WormGPT

Finally, in the third image above, we see that malicious actors are now creating their own custom modules similar to ChatGPT, but easier to use for nefarious purposes. Not only are they creating these custom modules, but they are also advertising them to fellow bad actors. This shows how cybersecurity is becoming more challenging due to the increasing complexity and adaptability of these activities in a world shaped by AI.

Uncovering WormGPT: A Cybercriminal’s Arsenal

Our team recently gained access to a tool known as “WormGPT” through a prominent online forum that’s often associated with cybercrime. This tool presents itself as a blackhat alternative to GPT models, designed specifically for malicious activities.

WormGPT Screen

WormGPT is an AI module based on GPT-J, an open-source language model released in 2021. It boasts a range of features, including unlimited character support, chat memory retention, and code formatting capabilities.
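To put the model choice in context, GPT-J is a freely available open-source model that anyone can download and run. The minimal sketch below simply loads GPT-J with the Hugging Face transformers library and generates text from a benign prompt; it is not WormGPT, and it makes no claim about how that tool is actually built, trained, or deployed.

```python
# Illustrative only: loading the publicly available GPT-J model with the
# Hugging Face transformers library. This is NOT WormGPT; it simply shows
# that the underlying open-source model is freely obtainable.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")

prompt = "Write a short reminder to the team about the quarterly meeting."
inputs = tokenizer(prompt, return_tensors="pt")

# Generate a continuation of the benign prompt above.
outputs = model.generate(**inputs, max_new_tokens=60, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```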

WormGPT Data Source

As depicted above, WormGPT was allegedly trained on a diverse array of data sources, particularly concentrating on malware-related data. However, the specific datasets utilised during the training process remain confidential, as decided by the tool’s author.

WormGPT Created BEC Attack

As you can see in the screenshot above, we conducted tests focusing on BEC attacks to comprehensively assess the potential dangers associated with WormGPT. In one experiment, we instructed WormGPT to generate an email intended to pressure an unsuspecting account manager into paying a fraudulent invoice.

The results were unsettling. WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC attacks.

In summary, it’s similar to ChatGPT but has no ethical boundaries or limitations. This experiment underscores the significant threat posed by generative AI technologies like WormGPT, even in the hands of novice cybercriminals.

Benefits of Using Generative AI for BEC Attacks

So, what specific advantages does using generative AI confer for BEC attacks?

Exceptional Grammar: Generative AI can create emails with impeccable grammar, making them seem legitimate and reducing the likelihood of being flagged as suspicious.

Lowered Entry Threshold: The use of generative AI democratises the execution of sophisticated BEC attacks. Even attackers with limited skills can use this technology, making it an accessible tool for a broader spectrum of cybercriminals.

Ways of Safeguarding Against AI-Driven BEC Attacks

In conclusion, the growth of AI, while beneficial, also introduces new and evolving attack vectors. Implementing strong preventative measures is crucial. Here are some strategies you can employ:

BEC-Specific Training: Companies should develop extensive, regularly updated training programs aimed at countering BEC attacks, especially those enhanced by AI. Such programs should educate employees on the nature of BEC threats, how AI is used to augment them, and the tactics employed by attackers. This training should also be incorporated as a continuous aspect of employee professional development.

Enhanced Email Verification Measures: To fortify against AI-driven BEC attacks, organisations should enforce stringent email verification processes. These include implementing systems that automatically alert when emails originating outside the organisation impersonate internal executives or vendors, and using email systems that flag messages containing specific keywords linked to BEC attacks like “urgent”, “sensitive”, or “wire transfer”. Such measures ensure that potentially malicious emails are subjected to thorough examination before any action is taken.
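
As a rough illustration of what such keyword flagging and impersonation alerts can look like in practice, here is a minimal sketch. The company domain, executive names, keyword list, and message format are all hypothetical, and a real secure email gateway applies far richer analysis; this is not a description of any particular vendor’s implementation.

```python
# A minimal, illustrative sketch of the two checks described above:
# (1) flag external mail whose display name matches an internal executive,
# and (2) flag messages containing keywords commonly seen in BEC lures.
# All names, domains, and keywords below are hypothetical.
import re

INTERNAL_DOMAIN = "example.com"                       # assumed company domain
EXECUTIVES = {"jane doe", "john smith"}               # hypothetical exec names
BEC_KEYWORDS = {"urgent", "sensitive", "wire transfer", "gift card"}

def flag_message(sender_name: str, sender_address: str, subject: str, body: str) -> list[str]:
    """Return a list of reasons this message deserves extra scrutiny."""
    reasons = []
    domain = sender_address.rsplit("@", 1)[-1].lower()

    # Check 1: external address using an internal executive's display name.
    if domain != INTERNAL_DOMAIN and sender_name.strip().lower() in EXECUTIVES:
        reasons.append("external sender impersonating an internal executive")

    # Check 2: BEC-associated keywords in the subject or body.
    text = f"{subject} {body}".lower()
    hits = [kw for kw in BEC_KEYWORDS if re.search(rf"\b{re.escape(kw)}\b", text)]
    if hits:
        reasons.append(f"contains BEC-associated keywords: {', '.join(hits)}")

    return reasons

# Example: an outside address borrowing an executive's name and pressing for a payment.
print(flag_message("Jane Doe", "ceo.office@freemail.example",
                   "Urgent wire transfer", "Please process this payment today."))
```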

Test Your Security Efficacy in Observability Mode

To see a personalised demo and learn how our product stops BEC, click here, or test the efficacy of your current email security, with no impact on your existing email infrastructure, using our 5-minute setup Observability Mode.

About the Author

Daniel Kelley is a reformed black hat computer hacker who collaborated with our team at SlashNext to research the latest threats and tactics employed by cybercriminals, particularly those involving BEC, phishing, smishing, social engineering, ransomware, and other attacks that exploit the human element.

The post WormGPT – The Generative AI Tool Cybercriminals Are Using to Launch Business Email Compromise Attacks first appeared on SlashNext.]]>
Today’s cybersecurity health checks must identify AI based threats. Does yours? https://slashnext.com/blog/todays-cybersecurity-health-checks-must-identify-ai-based-threats/ Mon, 26 Jun 2023 22:27:20 +0000 https://slashnext.com/?p=53808 Your organization will most likely face AI based threats in cybersecurity at some point this year. And as such, you can’t rely on outdated risk assessment methodologies that struggle to keep pace with the new highly sophisticated AI phishing techniques used for Business Email Compromise (BEC), smishing, link and file-based attacks. Threat actors now use […]

The post Today’s cybersecurity health checks must identify AI based threats. Does yours? first appeared on SlashNext.]]>
Today’s cybersecurity health checks must identify AI-based threats. Does yours?

Your organization will most likely face AI-based threats in cybersecurity at some point this year. As such, you can’t rely on outdated risk assessment methodologies that struggle to keep pace with the highly sophisticated AI phishing techniques now used for Business Email Compromise (BEC), smishing, and link- and file-based attacks. Threat actors use generative AI social engineering methods for fast-moving BEC attacks covering executive impersonation, supplier invoice fraud, purchase scams, payroll theft, tax forms, and many others.

The cybersecurity health check process has long been regarded as an essential tool for risk assessment and management. Its current format, however, fails to deliver meaningful results in the face of today’s rapidly evolving threats. 

In the past, a health check provided actionable insights leading to an improved security posture, but the accelerating velocity of the security landscape has rendered it ineffective at identifying vulnerabilities before they are exploited. Many enterprises find that their risk assessments only identify threat events that have already occurred, and then use those findings to fine-tune their security posture going forward, often weeks or even months too late. Because the threat landscape moves so quickly, observing your organization in real time is the only way to assess the current threat surface rather than solving for yesterday’s threats.

Cybersecurity Attacks are Moving Faster Than Defenses 

The traditional approach of relying on security defenders to detect and neutralize threats through annual security audits and manual processes is becoming an exercise in futility. 

Security experts today have seen that “point” solutions—those targeting specific threats and performing specialized tasks—have become outdated as AI assumes a central role in the emerging “AI-versus-AI” cybersecurity landscape. 

In today’s sophisticated threat environment, you can’t rely on traditional threat assessments to effectively compete against machine learning and artificial intelligence. 

Addressing the AI Threat 

Threat actors are using AI to improve attack success, and yet this remains one of the most unchecked developments in cybersecurity. Deploying AI in “attack mode” not only amplifies the scale and speed of threats; threat actors can also train their AI to circumvent defensive measures using free generative AI tools like the ubiquitous ChatGPT. This has given even mediocre and amateur threat actors an easy way to carry out substantial malicious activity online.

Phishing, in particular, has evolved beyond credential theft to encompass business email compromise (BEC), social engineering, rogue software, scareware, and other scams delivered through multiple communication channels, including cloud email, mobile, and web messaging apps. Threat actors are now using AI to exploit a wide range of business and home applications, including those for email, SMS and text messaging, WhatsApp, Facebook, LinkedIn, Slack, Zoom, Box, collaboration platforms, other social messaging, gaming, and more.

What’s more, generic, untargeted shotgun phishing attacks have transformed into highly targeted zero-hour spear phishing, smishing, vishing, and, as mentioned earlier, BEC attacks, especially where enterprises conduct wire transfers and have international suppliers. It has gotten to the point where the FBI IC3 Report listed smishing, BEC, and credential phishing as the top three threats in 2022, responsible for $10.3 billion in losses.

 Phishing’s Proliferation 

Many cybersecurity companies rely on established phishing URLs and domains to minimize attacks. This data, however, often fails to catch new and evolving threats quickly or accurately. By contrast, AI and machine learning (ML) cybersecurity solutions focus on behavioral analysis of content to identify threats that are missed by human forensics, URL inspection, and domain reputation analysis.

Artificial intelligence is the most effective tool to counter AI-driven attacks for two primary reasons. First, AI/ML uses computer vision, natural language processing, relationship graph and contextual analysis, generative AI, file attachment inspections, sender impersonation analysis, and other classifiers to observe, analyze, and contextually understand the threats. Therefore, businesses can swiftly assess billions of websites to determine if they’re malicious before engaging with them. Second, AI emulates human cognitive reasoning, continuously learning and responding accurately without human intervention. In essence, it acts as an always-on, real-time security risk assessment. 
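
To make the idea of content-based behavioral analysis concrete, here is a deliberately tiny sketch using scikit-learn: a TF-IDF bag-of-words model feeding a linear classifier over message text. The training examples and labels are invented for illustration, and production systems like those described above combine many more signals (computer vision, relationship graphs, sender analysis) than this toy model.

```python
# A toy sketch of content-based phishing classification using scikit-learn.
# The tiny training set and features here are purely illustrative; real
# systems use far richer models and far more data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples: 1 = phishing/BEC, 0 = benign.
texts = [
    "Urgent: wire transfer needed today, reply with bank details",
    "Your account is suspended, verify your password at this link",
    "Agenda attached for Thursday's project sync",
    "Lunch order reminder for the offsite next week",
]
labels = [1, 1, 0, 0]

# TF-IDF features over the message text feed a simple linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

suspect = ["Please keep this confidential and send the wire transfer now"]
print(model.predict_proba(suspect))   # probability the message is malicious
```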

Larger organizations have cautiously adopted generative AI and machine learning, but enterprises of all sizes can benefit from AI to combat AI-driven attacks; cybersecurity risk does not depend solely on company size. What’s more, AI offers near-instantaneous protection against attacking AI, even before that AI is deployed.

This trend has made the traditional cybersecurity health check obsolete. 

 The New Cybersecurity Health Check – Observability 

We should all be curious about how effective an organization’s current posture is against targeted phishing attacks. As such, we encourage you to see how vulnerable and targeted you are to the latest zero-hour BEC, phishing links, malicious file attachments, and other social engineering attacks in your Microsoft Outlook environment.

To help with this, we’ve come up with a new, free cybersecurity observability offering. Within five minutes of authenticating to the Microsoft email API, we will baseline your environment and uncover both the historical threats sitting in your mailboxes and the new threats that slip past your current Microsoft, Proofpoint, or Mimecast security over a 30-day period. We will then provide a threat assessment report that highlights the following (a brief sketch of the kind of API call involved appears after this list):

  • Current threats in your email. 
  • Top targeted users. 
  • Active account takeovers. 
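
For readers curious what authenticating to the Microsoft email API involves at a basic level, the sketch below lists recent mailbox messages through Microsoft Graph. The access token, its acquisition, required permissions, and any downstream scoring are omitted or hypothetical; this illustrates the kind of call involved rather than our actual integration.

```python
# Illustrative sketch of a mailbox baseline's starting point: listing recent
# messages via Microsoft Graph. Token acquisition and analysis are omitted.
import requests

ACCESS_TOKEN = "<oauth-token-with-Mail.Read-permission>"   # hypothetical token

resp = requests.get(
    "https://graph.microsoft.com/v1.0/me/messages",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    params={"$top": 50, "$select": "subject,from,receivedDateTime"},
    timeout=30,
)
resp.raise_for_status()

# Each message returned here could then be scored for phishing indicators.
for msg in resp.json().get("value", []):
    sender = msg.get("from", {}).get("emailAddress", {}).get("address", "unknown")
    print(msg.get("receivedDateTime"), sender, "-", msg.get("subject"))
```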

 

Contact us to run this risk free assessment. 

The post Today’s cybersecurity health checks must identify AI based threats. Does yours? first appeared on SlashNext.]]>
CISOs Increasingly Concerned About Mobile Threats https://slashnext.com/blog/cisos-increasingly-concerned-about-mobile-threats/ Fri, 23 Jun 2023 21:30:46 +0000 https://slashnext.com/?p=53805 A new warning from Verizon about the rise of smishing, spam text messages and text scams and the FBI reporting $10.3 billion in internet fraud last year, CISOs are increasingly concerned about mobile threats targeting employees and the impact to their organization.  The rise of smishing, spam text messages and text scams.  In recent survey […]

The post CISOs Increasingly Concerned About Mobile Threats first appeared on SlashNext.]]>

With a new warning from Verizon about the rise of smishing, spam text messages, and text scams, and the FBI reporting $10.3 billion in internet fraud last year, CISOs are increasingly concerned about mobile threats targeting employees and the impact on their organizations.

The rise of smishing, spam text messages and text scams. 

In a recent survey conducted by SlashNext, 90% of security leaders said protecting employees’ mobile devices is a top priority, but only 63% said they have the tools to do it adequately. Furthermore, security leaders don’t believe training alone is enough to stop phishing: 98% said that even with regular training, employees are still susceptible to phishing and other attacks.

CISOs have good reason to be concerned, especially if both managed and personal mobile devices are in use across their organization. Mobile phishing on private messaging apps is rising because cybercriminals target personal apps as a route into business systems, leading to headline-making breaches with a significant impact on businesses.

The vast majority of mobile devices have no special security protection other than the protections natively built into iOS and Android. While employers are worried about finding the right balance between protection and privacy on mobile BYOD, employees are more worried about being the target of a corporate phishing attack than surveillance on their personal devices.  

The Verizon Mobile Security Index found that 83% of organizations say mobile device threats are growing more quickly than other device threats. As organizations embrace the expanding remote workforce, it will be important to have a mobile security strategy that keeps the workforce secure from cybercriminals launching attacks on mobile devices using tactics such as SMS/text phishing (smishing) and non-link-based phishing.

It will be critical to implement phishing protection that shields users without degrading the user experience and without transmitting personal data; this meets the needs of securing business systems while preserving employee privacy. To protect employees from smishing (SMS phishing) attacks, employers can implement the following measures:

  • Employee Awareness and Training: Conduct regular security awareness and training sessions to educate employees about smishing attacks, their characteristics, and how to identify and respond to suspicious messages. Provide practical examples and best practices for handling text messages containing potential phishing attempts. 
  • Mobile Device Security Policies: Establish clear and comprehensive mobile device security policies that outline guidelines for using personal devices for work purposes. Include requirements for installing regular security updates, creating strong passwords, and implementing two-factor authentication on mobile devices. 
  • Smishing Protection and Mobile Security Tools: Employ anti-smishing and mobile security applications that can detect and block smishing attempts. These tools can analyze incoming text messages for suspicious content, URLs, or attachments, providing an added layer of protection for employees (a simple sketch of this kind of check appears after this list). 
  • Incident Response and Reporting: Establish a clear incident response plan for handling smishing incidents. Encourage employees to promptly report any suspicious text messages they receive and provide a dedicated channel or contact for reporting such incidents. Responding swiftly to reported incidents can help mitigate the impact and prevent further attacks.
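
As a simple illustration of the kind of automated check an anti-smishing tool might run, the sketch below extracts URLs from an incoming text message and flags common red flags such as link shorteners and brand lookalike domains. The shortener list, brand list, and heuristics are hypothetical and deliberately simplistic; real mobile security tools apply much broader analysis.

```python
# Illustrative sketch: extract URLs from an SMS body and flag basic red flags
# (link shorteners, brand lookalike domains). Lists and heuristics are
# hypothetical and intentionally simple.
import re
from urllib.parse import urlparse

URL_SHORTENERS = {"bit.ly", "tinyurl.com", "t.co"}     # assumed shortener list
PROTECTED_BRANDS = {"paypal", "amazon", "microsoft"}   # brands often spoofed

def check_sms(text: str) -> list[str]:
    """Return warnings for any suspicious URLs found in an SMS body."""
    warnings = []
    for url in re.findall(r"https?://\S+", text):
        host = urlparse(url).netloc.lower()
        if host in URL_SHORTENERS:
            warnings.append(f"shortened link hides destination: {url}")
        # Brand name embedded in an unrelated domain, e.g. paypal-security.example
        for brand in PROTECTED_BRANDS:
            if brand in host and not host.endswith(f"{brand}.com"):
                warnings.append(f"possible {brand} lookalike domain: {host}")
    return warnings

# Example: a delivery-scam style text with a shortened link.
print(check_sms("Your parcel is held, pay the fee at https://bit.ly/3abcde"))
```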

By combining employee education, technology solutions, and proactive policies, employers can enhance their defenses against smishing attacks and protect employees from falling victim to such scams. 

For more information on the Mobile BYOD threat landscape and protection solutions, read The Mobile BYOD Report, available at this link:  /report-the-mobile-byod-security-report/ 

The post CISOs Increasingly Concerned About Mobile Threats first appeared on SlashNext.]]>