《Check Point:2025人工智能(AI)安全报告(英文版)(59页).pdf》由会员分享,可在线阅读,更多相关《Check Point:2025人工智能(AI)安全报告(英文版)(59页).pdf(59页珍藏版)》请在三个皮匠报告上搜索。
1、CHECK POINT RESEARCHAI SECURITYREPORTCHECK POINT AI REPORT 1ST ANNUAL EDITIONTABLE OF CONTENTS01INTRODUCTION 02AI THREATS04AI FOR ENTERPRISES03AI FOR RESEARCH05SECURITY FOR,BY&WITH AI 01INTRODUCTION01INTRODUCTIONThe Accelerating Future of AI for Cyber Offenders and Defenders AI is revolutionizing in
2、dustries,and cyber crime and cyber security are no different.Adopting AI in enterprisesand unfortunately by threat actors as wellenhances efficiency,scale,and impact.At this point in time,we believe its essential to pause and assess the current state and future of AI and cyber security.How are attac
3、kers using AI,and what comes next?As cyber defenders,how can we leverage AI to enhance our security efforts and protect our organizations more effectively?These are the questions addressed in the first edition of the Check Point Research AI Security Report.Our focus zeroes in on:The rise of autonomo
4、us and interactive social engineering across text,audio,and video The jailbreaking and weaponization of LLMs The automation of malware development and data mining AI adoption in enterprises and their associated risks The emergence of data poisoning in the wild and large-scale disinformation amplifie
5、d by GenAI tools The AI tools that fight fire with fire-protecting your organization from the most advanced threats AI threats are no longer theoreticaltheyre here and evolving rapidly.As access to AI tools becomes more widespread,threat actors exploit this shift in two key ways:by leveraging AI to
6、enhance their capabilities and targeting organizations and individuals adopting AI technologies.The following pages provide a comprehensive understanding of these threats,allowing readers to navigate the intricate landscape of AI security.To a secure future of innovation and success,Lotem Finkelstei
7、n,Director of Check Point ResearchSECURITY FOR,BY,&WITH AI05AI FOR RESEARCH03AI THREATS02AI FOR APT HUNTINGINTRODUCTION01AI FOR ENTERPRISES04THE NEW SOCIAL ENGINEERINGTARGETING LLM ACCOUNTSAI FOR MALWAREAI VULNERABILITY RESEARCHAI MODELS IN THE DARKWEB AI SECURITY REPORT 2025402AI THREATS 02THE AI M
8、ODELS USED BY CYBER CRIMINALSCyber criminals are closely monitoring trends in mainstream AI adoption.Whenever a new large language model(LLM)is released to the public,underground actors quickly test its potential for misuse(figure 1).Currently,ChatGPT and OpenAIs API are the most popular models for
9、cyber criminals,while others like Google Gemini,Microsoft Copilot,and Anthropic Claude are quickly gaining popularity.The landscape is changing with the launch of open-source models like DeepSeek and Qwen by Alibaba.These models enhance accessibility,have minimal usage restrictions,and are available
10、 in free tiers,making them a key asset to crime.Figure 1 Underground forum discussion on harnessing DeepSeek for malware development AI MODELS IN THE DARKWEBAI FOR APT HUNTINGTHE NEW SOCIAL ENGINEERINGTARGETING LLM ACCOUNTSAI FOR MALWAREAI VULNERABILITY RESEARCH AI SECURITY REPORT 20256SECURITY FOR,
11、BY,&WITH AI05AI FOR RESEARCH03AI THREATS02INTRODUCTION01AI FOR ENTERPRISES04The Development of Malicious AI Models Cyber criminals are exploiting mainstream platforms and creating and selling specialized malicious LLM models explicitly tailored for cyber crime(figure 2).These dark LLM models are des
12、igned to circumvent the safeguards established for ethical models and are actively marketed as hacking tools.Figure 2 Onion GPT,an example of a dark AI model created in TorAI MODELS IN THE DARKWEBAI FOR APT HUNTINGTHE NEW SOCIAL ENGINEERINGTARGETING LLM ACCOUNTSAI FOR MALWAREAI VULNERABILITY RESEARC
13、H AI SECURITY REPORT 20257SECURITY FOR,BY,&WITH AI05AI FOR RESEARCH03AI THREATS02INTRODUCTION01AI FOR ENTERPRISES04Figure 3 Malicious AI service WormGPT advertised on a Telegram channelThe notorious AI model WormGPT was created by jailbreaking ChatGPT(figure 3).Marketed as the“ultimate hacking AI,”i
14、t can generate phishing emails,write malware,and craft social engineering scripts without ethical constraints.A Telegram channel promotes its use for fraud,botnet creation,and cyber intrusion,offering subscriptions highlighting the commercialization of dark AI.AI MODELS IN THE DARKWEBAI FOR APT HUNT
15、INGTHE NEW SOCIAL ENGINEERINGTARGETING LLM ACCOUNTSAI FOR MALWAREAI VULNERABILITY RESEARCH AI SECURITY REPORT 20258SECURITY FOR,BY,&WITH AI05AI FOR RESEARCH03AI THREATS02INTRODUCTION01AI FOR ENTERPRISES04Figure 4-HackerGPT capabilities published on a cyber crime forumA new wave of dark AI models,suc
16、h as GhostGPT,FraudGPT,and HackerGPT(figure 4),serve specific aspects of cyber crime.Some models wrap around mainstream AI with jailbreaks,while others modify open-source models.As mainstream AI models evolve,so do their dark counterparts.AI MODELS IN THE DARKWEBAI FOR APT HUNTINGTHE NEW SOCIAL ENGI
17、NEERINGTARGETING LLM ACCOUNTSAI FOR MALWAREAI VULNERABILITY RESEARCH AI SECURITY REPORT 20259SECURITY FOR,BY,&WITH AI05AI FOR RESEARCH03AI THREATS02INTRODUCTION01AI FOR ENTERPRISES04The Rise of Fake AI Platforms The demand for AI-based services has led to the emergence of fake AI platforms designed
18、to deceive users and distribute malware,steal sensitive data,or enable financial fraud.Examples include HackerGPT Lite(figure 5),which seems to be an AI tool but is suspected to be a phishing website and fake DeepSeek download sites,which,in reality,distribute malware.In one case,a malicious distrib
19、uted Chrome extension imitating ChatGPT was designed to steal user credentials.Once installed,this harmful extension hijacked Facebook session cookies,granting attackers complete access to victims accounts and allowing them to operate remotely.Figure 5-A suspected phishing AI-service website targeti
20、ng cyber criminalsINTRODUCTION01AI MODELS IN THE DARKWEBAI FOR APT HUNTINGTHE NEW SOCIAL ENGINEERINGTARGETING LLM ACCOUNTSAI FOR MALWAREAI VULNERABILITY RESEARCH AI SECURITY REPORT 202510SECURITY FOR,BY,&WITH AI05AI FOR RESEARCH03AI THREATS02AI FOR ENTERPRISES04Social engineeringmanipulating individ
21、uals into actions they wouldnt typically takeis central to many cyber attacks.Even attacks that exploit purely technical vulnerabilities often begin with a social engineering scheme.Attackers leverage text,audio,and imagery to convincingly impersonate specific individuals or generate human voices,fo
22、stering trust to deceive their targets.With recent advancements in AI,attackers can create authentic-looking materials at scale,conduct automated chats,and hold real-time audio and video conferences while impersonating others.As these AI-driven tools proliferate on criminal forums and incidents rise
23、,our reliance on audio and visual cues for identity confirmation is critically compromised.Fully autonomous audio deepfake tools for large-scale phone scams are already available,meaning that recognizing a familiar face or voice is no longer sufficient proof of identity;instead,interactions must be
24、reinforced by additional authentication measures.AI-driven social engineering is already influencing real-world cyber crime.The FBI recently warned the public that cyber criminals increasingly leverage AI-generated text,images,audio,and video to enhance their attacks.This demonstrates that attackers
25、 now possess sophisticated capabilities previously unavailable,significantly boosting the effectiveness of deception and fraud.Online fraud relies on both quality and quantity.Even poorly phrased scams,like sextortion emailscan be profitable when sent to millions of potential victims,even if they su
26、cceed with only a small percentage.Attackers,therefore,aim to enhance both the quality of their impersonations and the automation level of their operations,maximizing impact while minimizing costly human resources.AI facilitates advancements in these areas,creating highly convincing text,audio,and v
27、ideo in multiple languages and enabling interactive chatbots,resulting in automated agents capable of engaging maliciously with victims.A Deep Dive into Deepfake Technologies and Their Exploitation The following sections will examine the exploitation scenarios involving each media type(text,audio,im
28、ages,and video),detail the related services actively advertised within criminal forums,and analyze reports from actual incidents to illustrate the practical implications of these evolving threats.AI-POWERED SOCIAL ENGINEERING:A NEW ERA INTRODUCTION01AI FOR APT HUNTINGTARGETING LLM ACCOUNTSAI FOR MAL
29、WAREAI VULNERABILITY RESEARCHAI MODELS IN THE DARKWEB AI SECURITY REPORT 202511SECURITY FOR,BY,&WITH AI05AI FOR RESEARCH03AI THREATS02THE NEW SOCIAL ENGINEERINGAI FOR ENTERPRISES04MEDIA TYPEOFFLINE GENERATIONREALTIME GENERATIONFULLY AUTONOMOUSTEXTPre-renderedscripts or emails Real-time generated res
30、ponsesAI-generated,fully interactive conversationsAUDIOPre-recorded impersonationsReal-time voice manipulationFully AI-driven conversational audioVIDEOPre-created deepfake videosLive face-swapping or video alterationCompletely automated,AI-generated interactive video(Red V marks maturity level alrea
31、dy available in markets and exploited in the wild)The Maturity Levels of Deepfake Automation Generative AI technologies span a spectrum of sophistication and maturity,ranging from basic offline generation of text,images,and videos to advanced online manipulation requiring real individuals involvemen
32、t,such as face-swapping or voice imitation.At the highest level,fully autonomous,real-time generation produces convincing content instantly,dynamically responding to unsuspecting individuals during interactions.INTRODUCTION01AI FOR APT HUNTINGTARGETING LLM ACCOUNTSAI FOR MALWAREAI VULNERABILITY RESE
33、ARCHAI MODELS IN THE DARKWEB AI SECURITY REPORT 202512SECURITY FOR,BY,&WITH AI05AI FOR RESEARCH03AI THREATS02THE NEW SOCIAL ENGINEERINGAI FOR ENTERPRISES04AI-Generated Textual Social Engineering The availability of ChatGPT and other Large Language Model(LLM)chatbots since 2022 has lowered barriers t
34、o generating convincing text,enhancing the quality of phishing emails.Attackers,who often come from different linguistic and cultural backgrounds from their victims,previously faced significant language barriers.However,LLM technology now allows attackers to craft messages effortlessly with native-l
35、ike fluency and cultural nuances.In a recent case,Check Point Harmony Email&Collaboration blocked a sextortion campaign that used diverse textual phrasing to avoid detection.Each email in the thousands of messages uniquely reworded the urgency of Time is running out,using expressions like The hourgl
36、ass is nearly empty for you or Youre approaching the end of your time.Since sextortion campaigns typically do not contain traditional Indications of Compromise(IoCs)like malicious URLs or attachments,apart from cryptocurrency wallet addresses,detection relies heavily on text analysis,further complic
37、ating defense measures.LLM technology now allows attackers to craft messages effortlessly with native-like fluency and cultural nuances.INTRODUCTION01AI FOR APT HUNTINGTARGETING LLM ACCOUNTSAI FOR MALWAREAI VULNERABILITY RESEARCHAI MODELS IN THE DARKWEB AI SECURITY REPORT 202513SECURITY FOR,BY,&WITH
38、 AI05AI FOR RESEARCH03AI THREATS02THE NEW SOCIAL ENGINEERINGAI FOR ENTERPRISES04A review of Darkweb forums reveals various AI-assisted tools designed specifically to streamline the creation and management of phishing and spam campaigns(figure 6).For example,GoMailPro(figure 8),priced at$500 per mont
39、h,integrated ChatGPT in 2023 to automate the generation of spam and phishing emails.Recent 2024 updates,including capabilities to recover blocked email accounts used for spam distribution,further enhanced the solution.Figure 6 Advertisement of an AI-assisted Spam Agent Figure 7 Advertisement of an A
40、I-assisted Spam Agent INTRODUCTION01AI FOR APT HUNTINGTARGETING LLM ACCOUNTSAI FOR MALWAREAI VULNERABILITY RESEARCHAI MODELS IN THE DARKWEB AI SECURITY REPORT 202514SECURITY FOR,BY,&WITH AI05AI FOR RESEARCH03AI THREATS02THE NEW SOCIAL ENGINEERINGAI FOR ENTERPRISES04Another example of an AI textual a
41、pplication is the Business Invoice Swapper”(figure 8)developed by the cyber criminal group GXC Team.It is designed to facilitate Business Email Compromise(BEC)by automatically scanning compromised email accounts for invoices or payment instructions.It alters banking details to redirect funds to atta
42、cker-controlled accounts.Leveraging AI,it seamlessly overcomes language barriers,manages large data volumes efficiently,and automates distribution,enhancing the scalability and impact of fraudulent email attacks.Figure 8 Advertisement of the“Business Invoice Swapper”in a dark web forumINTRODUCTION01
43、AI FOR APT HUNTINGTARGETING LLM ACCOUNTSAI FOR MALWAREAI VULNERABILITY RESEARCHAI MODELS IN THE DARKWEB AI SECURITY REPORT 202515SECURITY FOR,BY,&WITH AI05AI FOR RESEARCH03AI THREATS02THE NEW SOCIAL ENGINEERINGAI FOR ENTERPRISES04The X137 Telegram management console,advertised on Darkweb forums,is a
44、n example of a fully autonomous AI-based textual interactive agent.This tool automates tasks within text-based platforms,simultaneously conducting real-time conversations with multiple users according to designated tasks.Using Gemini AI,X137 monitors,summarizes,and engages in Telegram communications
45、 with uncensored,hacking-related insights.The primary contribution of these AI-driven tools is their ability to scale criminal operations,overcoming previous bottlenecks associated with employing linguistically and culturally proficient manpower.AI-generated text enables cyber criminals to overcome
46、language and cultural barriers,significantly enhancing their ability to execute sophisticated real-time and offline communication attacks.Uncensored LLM-based chatbots can convincingly and effectively manage multiple communication threads simultaneously.In addition to financially motivated criminals
47、,nation-state actors are also increasingly leveraging generative AI to enhance social engineering schemes.Google reports that Iranian,Russian,and Chinese APT and information operations actors have used AI tools like Gemini for content creation,localization,and persona development.OpenAIs report simi
48、larly found the use of these capabilities in influence operations,streamlining phishing,influence campaigns,and reconnaissance.AI-Generated Audio Deep Fakes Cyber criminals increasingly employ AI-generated audio,or audio deepfakes,to execute sophisticated impersonation scams.This technology produces
49、 highly realistic replicas of individuals voices,enhancing scammers ability to deceive victims.Voice samples on social mediafrom celebrities to everyday usersprovide ample resources for attackers.Popular platforms like ElevenLabs and tools like the open-source Retrieval-based Voice Conversion(RVC)al
50、gorithm can produce convincing audio using just ten minutes of voice samples.These technologies have been used in extortion cases where criminals falsely claim that a family member has been kidnapped or in an emergency and ask for urgent money transfers.A recent case in Italy reportedly involved sca
51、mmers employing live AI-assisted audio deepfake technology to convincingly impersonate the voice of defense minister Guido Crosetto.The attackers aimed to extort money from his affluent contacts by falsely claiming the funds were needed for hostage release.Several high-profile individuals,including
52、designer Giorgio Armani,were targeted.At least one victim who knew the minister was deceived and transferred a significant sum.INTRODUCTION01AI FOR APT HUNTINGTARGETING LLM ACCOUNTSAI FOR MALWAREAI VULNERABILITY RESEARCHAI MODELS IN THE DARKWEB AI SECURITY REPORT 202516SECURITY FOR,BY,&WITH AI05AI F
53、OR RESEARCH03AI THREATS02THE NEW SOCIAL ENGINEERINGAI FOR ENTERPRISES04Recent research shows that people can no longer reliably distinguish between genuine voices and AI-generated audio.Discussions on criminal forums increasingly focus on integrating AI-generated text and audio into comprehensive cr
54、iminal telephony systems(figure 9).Advertisements on dark web forums explicitly seek AI developers to implement AI-driven capabilities into phone-based scams.Figure 9-Recruitment of AI developers for telephony system INTRODUCTION01AI FOR APT HUNTINGTARGETING LLM ACCOUNTSAI FOR MALWAREAI VULNERABILIT
55、Y RESEARCHAI MODELS IN THE DARKWEB AI SECURITY REPORT 202517SECURITY FOR,BY,&WITH AI05AI FOR RESEARCH03AI THREATS02THE NEW SOCIAL ENGINEERINGAI FOR ENTERPRISES04Figure 10-Ad presenting the capabilities of an AI-enhanced telephony system These AI-based call systems are already available for purchase
56、and are primarily used as OTP bots.The bots call potential victims and follow predetermined scripts to obtain one-time password,mostly to financial services accounts.More advanced platforms now provide flexible conversational structures,adapting scenarios in real time by analyzing victim responses d
57、ynamically.One service launched in January 2025 highlights how these systems can seamlessly manage numerous languages and handle multiple simultaneous interactions,significantly enhancing scalability compared to traditional phone scams that rely extensively on skilled human labor(figure 10).In a con
58、versation with such a seller,they explained,“We make a skeleton of a speech,according to which the AI will guide the client,”emphasizing its capability to go off-script and manage unexpected scenarios across any topic,any field,any language,any voice.”INTRODUCTION01AI FOR APT HUNTINGTARGETING LLM AC
59、COUNTSAI FOR MALWAREAI VULNERABILITY RESEARCHAI MODELS IN THE DARKWEB AI SECURITY REPORT 202518SECURITY FOR,BY,&WITH AI05AI FOR RESEARCH03AI THREATS02THE NEW SOCIAL ENGINEERINGAI FOR ENTERPRISES04Figure 11-Conversation with telephony system seller Sample conversations were provided in Russian and Sp
60、anish,showcasing the systems multilingual proficiency(figure 11).These advanced AI telephony platforms cost about$20,000,including training and support,or are billed at approximately$500 base rate plus$1.50 per minute,dramatically reducing the need for many qualified human operators.INTRODUCTION01AI
61、 FOR APT HUNTINGTARGETING LLM ACCOUNTSAI FOR MALWAREAI VULNERABILITY RESEARCHAI MODELS IN THE DARKWEB AI SECURITY REPORT 202519SECURITY FOR,BY,&WITH AI05AI FOR RESEARCH03AI THREATS02THE NEW SOCIAL ENGINEERINGAI FOR ENTERPRISES04AI-Generated Visual Deep Fakes Criminal forums show the growing use of A
62、I-generated images to bypass Know Your Customer(KYC)identity verification.Basic AI-driven services offer the ability to create convincing identities to register new accounts fraudulently,unlock frozen accounts,or hijack legitimate accounts by forging user identities.Prices typically start around$70
63、for simple AI-generated images.More sophisticated criminal services targeting major KYC providers-such as ONFIDO,SUMSUB,and JUMIO-command higher fees or even demand a percentage of the funds from hijacked accounts(figure 12).Figure 12-Account verification and unlocking service advertisement INTRODUC
64、TION01AI FOR APT HUNTINGTARGETING LLM ACCOUNTSAI FOR MALWAREAI VULNERABILITY RESEARCHAI MODELS IN THE DARKWEB AI SECURITY REPORT 202520SECURITY FOR,BY,&WITH AI05AI FOR RESEARCH03AI THREATS02THE NEW SOCIAL ENGINEERINGAI FOR ENTERPRISES04Figure 13-Cyber crime service that offers KYC verification.Crimi
65、nal service providers typically receive the verification link and directly complete the identity verification(figure 13).Pricing varies by region,with European and CIS countries paying around$350 and services for the US and Canada reaching up to$500,especially for falsified documents.Trusting anonym
66、ous criminals with access to frozen accounts carries significant risk for clients;such transactions are feasible only due to established reputation mechanisms and comprehensive mitigation procedures within these illicit marketplaces.INTRODUCTION01AI FOR APT HUNTINGTARGETING LLM ACCOUNTSAI FOR MALWAR
67、EAI VULNERABILITY RESEARCHAI MODELS IN THE DARKWEB AI SECURITY REPORT 202521SECURITY FOR,BY,&WITH AI05AI FOR RESEARCH03AI THREATS02THE NEW SOCIAL ENGINEERINGAI FOR ENTERPRISES04Pre-recorded Deepfake Videos AI-generated video deepfakes are increasingly exploited for fraud,primarily through face and a
68、udio swapping in pre-recorded videos.These videos often falsely depict well-known individuals endorsing scams,including investment fraud.The technical barrier is lower for pre-recorded deepfake videos,making these services widely accessible in criminal forums.Prices range from a few hundred to sever
69、al thousand dollars,depending on video length and quality.A recent operation in Tbilisi,Georgia,used deepfake videos featuring public figures such as Ben Fogle and Martin Lewis to promote fraudulent cryptocurrency investments,deceiving over 6,000 victims in the UK and Canada and resulting in$35 mill
70、ion in losses.Beyond financial fraud,AI-fabricated videos have been widely deployed in political influence campaigns and election-related disinformation efforts worldwide.Real-time Video Manipulation While pre-recorded deepfake videos are common,real-time video manipulation presents a more advanced
71、challenge.Though high-end AI video generators like OpenAIs Sora remain restrictive and do not permit real-time integration,lower-resolution real-time face-swapping tools are already accessible and in active use.Paid services and tools offered on criminal forums and open-source solutions that require
72、 a relatively low hardware investment make real-time deepfake attacks increasingly available.The impact of these advancements is already evident in real-world fraud cases.In early 2024,British engineering firm Arupp suffered a 20 million loss after cyber criminals used deepfake video technology to i
73、mpersonate senior executives during a live video call,convincing an employee to transfer funds to fraudulent accounts.INTRODUCTION01AI FOR APT HUNTINGTARGETING LLM ACCOUNTSAI FOR MALWAREAI VULNERABILITY RESEARCHAI MODELS IN THE DARKWEB AI SECURITY REPORT 202522SECURITY FOR,BY,&WITH AI05AI FOR RESEAR
74、CH03AI THREATS02THE NEW SOCIAL ENGINEERINGAI FOR ENTERPRISES04In one case,pornographic materials and AI-based audio-video tools were used to impersonate a porn star.Through live chat interactions,dozens of men were coerced into committing various sexual crimes.Recordings were later used for pornogra
75、phic distribution.The attacker and at least four of his contacts have been detained.In another recent case,A US-based engineer reported an identity theft attempt using AI-generated face-swapping technology during an online technical interview(figure 14).While this may have been an isolated case of i
76、ndividual fraud,growing evidence suggests broader campaigns are linked to state-sponsored espionage or financially motivated operations.As real-time deepfake technology becomes increasingly accessible,such fraudulent attempts are expected to escalate.Figure 14 Identity theft attempt by live face swa
77、p during interviewINTRODUCTION01AI FOR APT HUNTINGTARGETING LLM ACCOUNTSAI FOR MALWAREAI VULNERABILITY RESEARCHAI MODELS IN THE DARKWEB AI SECURITY REPORT 202523SECURITY FOR,BY,&WITH AI05AI FOR RESEARCH03AI THREATS02THE NEW SOCIAL ENGINEERINGAI FOR ENTERPRISES04Figure 15-Credentials to 400 ChatGPT a
78、ccounts published for free in a Darkweb forum TARGETING OF LLM ACCOUNTS As the popularity of generative AI platforms continues to rise,so does their value in the cyber criminal underground.Access to LLM services enables attackers to use AI for malicious purposes and represents a tradable commodity.A
79、s a result,LLM accounts have become a significant target for cyber criminals.Cyber criminals have established a thriving underground market for stolen accounts of AI services,including ChatGPT,OpenAI API keys,and other LLM platforms.These accounts are obtained mostly through credential stuffing atta
80、cks,phishing,and via infostealer infections and then resold or shared for free in cyber crime forums,Telegram groups,and dark web marketplaces(figure 15).INTRODUCTION01AI FOR APT HUNTINGTHE NEW SOCIAL ENGINEERINGAI FOR MALWAREAI VULNERABILITY RESEARCHAI MODELS IN THE DARKWEB AI SECURITY REPORT 20252
81、4SECURITY FOR,BY,&WITH AI05AI FOR RESEARCH03AI THREATS02AI FOR ENTERPRISES04TARGETING LLM ACCOUNTSAI service credentials hold unique value as they allow cyber criminals to:Bypass AI usage limits(e.g.,ChatGPT Plus or OpenAI API)Anonymously use LLMs for malicious purposes(e.g.,generating phishing cont
82、ent,writing malware,or bypassing security restrictions)Cyber crime forums contain numerous listings offering bulk access to ChatGPT accounts,API keys,and stolen credentials.These posts typically advertise previously leaked account credentials from large leaks or credentials to AI services extracted
83、from malware logs.Figure 16 Darkweb forum threads offering credentials to ChatGPT accounts.INTRODUCTION01AI FOR APT HUNTINGTHE NEW SOCIAL ENGINEERINGAI FOR MALWAREAI VULNERABILITY RESEARCHAI MODELS IN THE DARKWEB AI SECURITY REPORT 202525SECURITY FOR,BY,&WITH AI05AI FOR RESEARCH03AI THREATS02AI FOR
84、ENTERPRISES04TARGETING LLM ACCOUNTSFigure 17 The configuration for Silver Bullet which allows attackers to test massive login lists against OpenAIs authentication systemCredential Stuffing of AI Accounts at Scale As part of the thriving market for stolen AI accounts,cyber criminals have also develop
85、ed different methods and tools to steal those accounts.Credential stuffing attacks,which involve using stolen username-password pairs across multiple platforms to exploit password reuse habits,are the most popular means of getting AI accounts.Attackers use pre-configured credential stuffing tools to
86、 target AI platforms(figure 17).Some cyber criminals even develop custom cracking tools that bypass rate limits,session-based authentication,and multi-factor authentication(MFA)challenges.INTRODUCTION01AI FOR APT HUNTINGTHE NEW SOCIAL ENGINEERINGAI FOR MALWAREAI VULNERABILITY RESEARCHAI MODELS IN TH
87、E DARKWEB AI SECURITY REPORT 202526SECURITY FOR,BY,&WITH AI05AI FOR RESEARCH03AI THREATS02AI FOR ENTERPRISES04TARGETING LLM ACCOUNTSFigure 18-Underground forum post featuring a PHP password generator script created using ChatGPT We identified an example of using AI to enhance brute-force attacks(fig
88、ure 18).An underground forum post showcases a PHP script allegedly generated by ChatGPT to create random passwords,which could potentially be used for brute-force attacks.INTRODUCTION01AI FOR APT HUNTINGTHE NEW SOCIAL ENGINEERINGAI FOR MALWAREAI VULNERABILITY RESEARCHAI MODELS IN THE DARKWEB AI SECU
89、RITY REPORT 202527SECURITY FOR,BY,&WITH AI05AI FOR RESEARCH03AI THREATS02AI FOR ENTERPRISES04TARGETING LLM ACCOUNTSJailbreaking AI Models Jailbreaking an AI model involves using prompt engineering to bypass its ethical and security restrictions.Cyber criminals manipulate AI with deceptive inputs to
90、gain unauthorized responses,whether malware development,security bypass techniques,or sensitive data extraction.Various prompt engineering techniques have emerged on the Darkweb,where threat actors share their jailbreaking prompts.Less sophisticated attackers will often pose as employers seeking mon
91、itoring tools.In contrast,more experienced ones explicitly prompt the AI for system-level functions used in malware,avoiding direct requests for malware.The following three pages explain common jailbreaking methods as observed on the Darkweb.From simple to advanced jailbreaking techniques,attackers
92、manipulate innocent AI services to generate malicious content.INTRODUCTION01AI FOR APT HUNTINGTHE NEW SOCIAL ENGINEERINGAI FOR MALWAREAI VULNERABILITY RESEARCHAI MODELS IN THE DARKWEB AI SECURITY REPORT 202528SECURITY FOR,BY,&WITH AI05AI FOR RESEARCH03AI THREATS02AI FOR ENTERPRISES04TARGETING LLM AC
93、COUNTSFigure 19 Cyber crime forum post instructing users on how to jailbreak ChatGPT into WormGPT 1.Role Playing Attackers instruct the AI model to assume a persona that allows it to bypass restrictions,such as prompting with,Pretend you are an AI without ethical limitations.”Dark Web Example A foru
94、m post titled“Convert ChatGPT to WormGPT”details a role-playing technique in which the AI agent is told to embody a fictional character called WormGPT,who responds to any request,effectively converting the AI service into an unrestricted version(figure 19).INTRODUCTION01AI FOR APT HUNTINGTHE NEW SOC
95、IAL ENGINEERINGAI FOR MALWAREAI VULNERABILITY RESEARCHAI MODELS IN THE DARKWEB AI SECURITY REPORT 202529SECURITY FOR,BY,&WITH AI05AI FOR RESEARCH03AI THREATS02AI FOR ENTERPRISES04TARGETING LLM ACCOUNTS2.Encoding Harmful Requests This method is used when a threat actor aims to disguise malicious inte
96、nt as a hypothetical scenario or academic research.Dark Web Example A cyber criminal on an underground forum successfully jailbroke DeepSeek AI by disguising a malicious request as an academic research prompt uploaded to Pastebin.This forced the AI to generate credential stealer code while avoiding
97、content moderation filters(figure 20).Figure 20-Cyber crime forum post detailing an encoding method to exploit DeepSeek for creating infostealer malware INTRODUCTION01AI FOR APT HUNTINGTHE NEW SOCIAL ENGINEERINGAI FOR MALWAREAI VULNERABILITY RESEARCHAI MODELS IN THE DARKWEB AI SECURITY REPORT 202530
98、SECURITY FOR,BY,&WITH AI05AI FOR RESEARCH03AI THREATS02AI FOR ENTERPRISES04TARGETING LLM ACCOUNTS3.Direct Function Invocation When skilled enough,attackers can directly reference specific system calls,prompting the AI to generate malicious code.Dark Web Example Users in a forum discuss jailbreaking
99、an AI model to generate Windows malware by requesting code that uses specific API functions Figure 21-Forum post explaining how to jailbreak AI models using Windows API-based function requests to generate malware like OpenProcess(),VirtualAllocEx(),and CreateRemoteThread()(figure 21).This leads to c
100、reating an AI Remote Access Trojan(RAT).Researchers found that some AI models can be manipulated into jailbreaking themselves,exposing significant security vulnerabilities.Specifically,some models can be prompted to generate inputs that then trick themselves into bypassing their own safety mechanism
101、seffectively learning to jailbreak autonomously.This poses serious risks,as attackers may refine these techniques to force AI systems generate increasingly dangerous content.INTRODUCTION01AI FOR APT HUNTINGTHE NEW SOCIAL ENGINEERINGAI FOR MALWAREAI VULNERABILITY RESEARCHAI MODELS IN THE DARKWEB AI S
102、ECURITY REPORT 202531SECURITY FOR,BY,&WITH AI05AI FOR RESEARCH03AI THREATS02AI FOR ENTERPRISES04TARGETING LLM ACCOUNTSFigure 22-Forum user deceiving ChatGPT to enhance malicious code AI FOR MALWARE Cyber criminals increasingly use AI to create and optimize the malware kill chain steps.Ransomware scr
103、ipts,phishing kits,infostealer development,and deepfake generation are some of the AI tools cyber criminals rely on.These advancements allow even low-skilled actors to access sophisticated techniques and accelerate and scale their attacks.A recent example from a dark web forum illustrates a hacker u
104、sing ChatGPT to optimize code for extracting credentials from Windows logs(figure 22).INTRODUCTION01AI FOR APT HUNTINGTHE NEW SOCIAL ENGINEERINGTARGETING LLM ACCOUNTSAI VULNERABILITY RESEARCHAI MODELS IN THE DARKWEBSECURITY FOR,BY,&WITH AI05AI FOR RESEARCH03AI THREATS02AI FOR ENTERPRISES04AI FOR MAL
105、WARE AI SECURITY REPORT 202532In early 2025,Check Point Research highlighted FunkSec as the leading ransomware group for December 2024,known for using AI tools.The group appears to use AI-assisted malware development.The cyber criminals behind FunkSec differentiate between coders and developers(figu
106、re 23).Coders can write code themselves,while developers are the new generation of threat actors who depend on AI to create malware.Later,the FunkSec leader publicly admitted that at least 20%of their operations are AI-powered.On its public“shame site,”FunkSec also promoted its AI-generated DDoS too
107、l and a custom GPT-style chatbot,demonstrating how AI was used across operations and public relations.We found other malware creators leveraging AI for marketing purposes.Check Point Research analyzed Rhadamanthys Stealer 0.7s claim of using AI-powered text recognition.However,we discovered that it
108、relied on traditional machine learning techniques typical of optical character recognition(OCR)rather than modern AI.Figure 23-Telegram chat with FunkSec leader saying he uses AI in malware development.INTRODUCTION01AI FOR APT HUNTINGTHE NEW SOCIAL ENGINEERINGTARGETING LLM ACCOUNTSAI VULNERABILITY R
109、ESEARCHAI MODELS IN THE DARKWEBSECURITY FOR,BY,&WITH AI05AI FOR RESEARCH03AI THREATS02AI FOR ENTERPRISES04AI FOR MALWARE AI SECURITY REPORT 202533Figure 24 Advertisement of the“Gabbers Shop”in one of the Darkweb forums AI for Malware Data Mining While AI-developed malware is in the early stages of d
110、evelopment,we see malware families,particularly infostealers,employing AI to rapidly process,organize,and mine large quantities of stolen data.For example,the LummaC2 malware uses AI-driven bot detection within its control panel,distinguishing genuine victims from automated security analysis systems
111、.Similarly,malware logs vendor Gabbers Shop claims to use AI to systematically clean and refine large databases of stolen credentials and logs,thereby improving the quality and usability of the data for sale (figure 24).These advanced AI applications enhance criminals ability to manage vast datasets
112、 efficiently and streamline intelligence gathering and operational preparations for targeted attacks.INTRODUCTION01AI FOR APT HUNTINGTHE NEW SOCIAL ENGINEERINGTARGETING LLM ACCOUNTSAI VULNERABILITY RESEARCHAI MODELS IN THE DARKWEBSECURITY FOR,BY,&WITH AI05AI FOR RESEARCH03AI THREATS02AI FOR ENTERPRI
113、SES04AI FOR MALWARE AI SECURITY REPORT 202534Figure 25-Darkweb forum post advertising AI use to identify valuable victims in infostealers data.A recent post reveals how a cyber criminal uses DarkGPT,a malicious LLM modeled after ChatGPT,to sift through large datasets of infostealer logs(figure 25).U
114、nlike standard searches,DarkGPT employs natural language queries to mine specific keywords,credentials,domain names,and other sensitive data.These infostealer logs are typically harvested by malware and contain thousands of compromised usernames,passwords,API keys,and session tokens.By automating th
115、e analysis,tools like DarkGPT expedite the identification of high-value targets for credential stuffing,account takeover,financial fraud,and initial access credentials to enterprises,which can lead to ransomware attacks.INTRODUCTION01AI FOR APT HUNTINGTHE NEW SOCIAL ENGINEERINGTARGETING LLM ACCOUNTS
116、AI VULNERABILITY RESEARCHAI MODELS IN THE DARKWEBSECURITY FOR,BY,&WITH AI05AI FOR RESEARCH03AI THREATS02AI FOR ENTERPRISES04AI FOR MALWARE AI SECURITY REPORT 202535AI in the Attack Cycle of Nation-Sate ActorsGoogles analysis of Geminis usage showed that state-affiliated APT groups from over 20 count
117、ries,particularly Iran and China,have leveraged it throughout the attack lifecycle.These groups used Gemini for reconnaissance,weaponization of malicious code,enhancing phishing delivery methods,researching vulnerabilities for exploitation,deploying persistent access tools,establishing command and c
118、ontrol communication,and automating data theft or disruption tasks.Notably,Iranian groups are increasingly integrating AI tools across all attack stages while investing in vulnerability research,reconnaissance,and coding.In contrast,Chinese actors focused their vulnerability research on specific tec
119、hnologies and scripting.Gemini found APT groups using their tool for reconnaissance,malicious code development,refining phishing techniques,and unearthing vulnerabilitiesINTRODUCTION01AI FOR APT HUNTINGTHE NEW SOCIAL ENGINEERINGTARGETING LLM ACCOUNTSAI VULNERABILITY RESEARCHAI MODELS IN THE DARKWEB
120、AI SECURITY REPORT 202536SECURITY FOR,BY,&WITH AI05AI FOR RESEARCH03AI THREATS02AI FOR ENTERPRISES04AI FOR MALWARELLM Data Poisoning Researchers have increasingly warned about the emerging risk of LLM poisoning,a cyber security threat where training datasets are manipulated to embed backdoors or mal
121、icious code.Once poisoned,these AI models may later replicate or amplify malicious content,posing serious security implications for users and organizations relying on AI-generated outputs.While infiltrating the datasets of major AI providers like OpenAI or Googles Gemini is challenging due to rigoro
122、us data validation processes,there have been notable real-world examples of successful poisoning attacks.A prominent case involved attackers uploading 100 compromised AI models to Hugging Face,an open-source AI platform widely used by developers and researchers for sharing and utilizing pre-trained
123、models.By exploiting Hugging Faces collaborative and open structure,attackers introduced compromised models that could disseminate harmful code or misinformation when deployed,mirroring traditional software supply chain attacks.Traditionally,data poisoning targets the training phase of AI models,whe
124、re datasets are contaminated to corrupt future outputs.However,with modern LLMs increasingly accessing and processing real-time online information during their inference stage,new vulnerabilities have emerged.Malicious actors exploit this by strategically placing deceptive or harmful content online,
125、designed for AI systems to retrieve and incorporate it into their responsesa practice known as retrieval poisoning.A recent study highlighted a significant case of retrieval poisoning executed by the Moscow-based disinformation network known as Pravda.At a conference of Russian officials,propagandis
126、t John Mark Dougan clearly articulated the intent:“By pushing these Russian narratives from the Russian perspective,we can actually change worldwide AI.”In 2024 alone,Pravda produced approximately 3.6 million propaganda-laden articles designed explicitly to influence AI chatbot responses.Their massi
127、ve content operation succeeded in infecting leading Western AI systems,with researchers discovering that major AI chatbots echoed Pravdas false narratives approximately 33%of the time.This demonstrates a clear and present threat of sophisticated AI models exploitation for disinformation campaigns an
128、d other poisoning scenarios.INTRODUCTION01AI FOR APT HUNTINGTHE NEW SOCIAL ENGINEERINGTARGETING LLM ACCOUNTSAI VULNERABILITY RESEARCHAI MODELS IN THE DARKWEBSECURITY FOR,BY,&WITH AI05AI FOR RESEARCH03AI THREATS02AI FOR ENTERPRISES04AI FOR MALWARE AI SECURITY REPORT 20253703AI FOR CYBER SECURITY RESE
129、ARCHERS 03The use of AI in cyber security is being explored in various ways such as enhancing detection mechanisms and making complex systems more accessible.This section delves into how AI can improve research,particularly hunting for advanced threat actors and vulnerability research.These approach
130、es utilize existing technologies and emerging concepts believed to shape the future of cyber security research.Current tools already offer capabilities that significantly enhance researchers effectiveness.Leveraging AI,particularly LLMs or agentic frameworks driven by LLMs,can efficiently conduct da
131、ta collection and preliminary analysis.These systems can execute tests,gather data,and perform initial analyses,allowing human researchers to concentrate on higher-level strategic considerations.AI FOR CYBER SECURITY RESEARCHERSINTRODUCTION01THE NEW SOCIAL ENGINEERINGTARGETING LLM ACCOUNTSAI FOR MAL
132、WAREAI VULNERABILITY RESEARCHAI MODELS IN THE DARKWEB AI SECURITY REPORT 202539SECURITY FOR,BY,&WITH AI05AI FOR RESEARCH03AI THREATS02AI FOR APT HUNTINGAI FOR ENTERPRISES04AI For APT Hunting Pattern Recognition in Scale-Domain and File Names APT hunters often take pride in cracking technical tasks l
133、ike reverse engineering malware.However,genuine excitement happens when you discover a domain resembling a unique organization or agency,signaling that youve identified something interesting.LLMs can process and analyze large datasets to detect patterns in masquerading,deception techniques,and APT t
134、radecraft.By integrating LLMs into big data pipelines,analysts can automate the identification of malicious trends at scale-creating a lingual-based scoring system,including:Impersonation and Thematic Deception:LLMs can analyze millions of domain registrations,flagging those mimicking government,fin
135、ancial,or security institutions(e.g.,mofa-gov-np.fia- or ).File Naming Conventions:By scanning vast malware repositories,LLMs can detect filenames structured to deceive users(e.g.,tax_return_2025.pdf.exe).Trust Prediction:Using natural language understanding,LLMs can assess whether a non-technical u
136、ser would trust a domain or file and assign a deception probability score.Language&Regional Targeting:LLMs can detect patterns in domain language usage(e.g.,Cyrillic domains mimicking NATO entities),revealing geo-targeted cyber operations.When Check Point Research provided a sample set of suspicious
137、 domains,ChatGPT successfully analyzed one that initially appeared gibberish:.It explained why this domain could be maliciousit mimics the Argentinian government tax site AFIP,by combining afip with gob.Since it is not an official domain,it suggests a targeted regional phishing attempt.Extracting TT
138、Ps and IOCs from ReportsThreat intelligence reports offer valuable insights into APT groups operations,but manually extracting tactics,techniques,and procedures(TTPs)for hunting rules is time-consuming.AI can streamline this process by automatically identifying attack patterns,mapping them to framew
139、orks like MITRE ATT&CK,and generating structured hunting rules.This allows analysts to quickly convert raw threat intelligence into actionable defenses,improving threat-hunting efficiency and reduces time-to-detection for specific threats and malware.To do so,we generated a prompt detailing the inst
140、ructions as follows:INTRODUCTION01THE NEW SOCIAL ENGINEERINGTARGETING LLM ACCOUNTSAI FOR MALWAREAI VULNERABILITY RESEARCHAI MODELS IN THE DARKWEB AI SECURITY REPORT 202540SECURITY FOR,BY,&WITH AI05AI FOR RESEARCH03AI THREATS02AI FOR APT HUNTINGAI FOR ENTERPRISES04Youre a cyber security analyst speci
141、alizing in Advanced Persistent Threats(APTs)tasked with analyzing an APT blog post or report and extract all relevant technical details.Focus only on factual information from the document and avoid speculation.Your output must include the following structured sections:1.Techniques(TTPs)-Extract all
142、tactics,techniques,and procedures(TTPs)used by the APT.-Use the MITRE ATT&CK framework where applicable-Identify execution methods,persistence mechanisms,defense evasion,and command-and-control techniques.2.Technical Artifacts -List all processes,filenames,registry keys,file paths,and memory artifac
143、ts associated with the APT.-If multiple variants exist,distinguish them clearly.-Do not include any assumptions;only documented artifacts are included.3.Network&C2 Infrastructure -Extract all C2 domains,IP addresses,hosting providers,and network protocols.-Identify communication methods(HTTP,HTTPS,r
144、aw TCP,DNS tunneling,etc.).-If port-knocking or obfuscation methods are mentioned,include details.4.Versioning&Timestamps -Extract timestamps related to malware compilation,first detection,or campaign activity.-If possible,infer time zones from timestamps.5.Additional Malware Components-If the APT h
145、as multiple malware implants or loaders,describe their functionality and relationships.-Highlight notable obfuscation or evasion features.Rules for Your Response:-Do not make assumptions.If something is unclear,explicitly state that it is not mentioned.-Do not include generic security advice.Only ex
146、tract technical facts.-Use clear formatting with tables,bullet points,and section headers.-Use precise and technical language.Do not simplify for a general audience.INTRODUCTION01THE NEW SOCIAL ENGINEERINGTARGETING LLM ACCOUNTSAI FOR MALWAREAI VULNERABILITY RESEARCHAI MODELS IN THE DARKWEB AI SECURI
147、TY REPORT 202541SECURITY FOR,BY,&WITH AI05AI FOR RESEARCH03AI THREATS02AI FOR APT HUNTINGAI FOR ENTERPRISES04As a proof of concept,weve given ChatGPT the prompt mentioned above with one of our reports about SysJoker,a multi-platform backdoor.ChatGPT returned the following output:Figure 1-Extraction
148、of less trivial IOCs from SysJoker report A more thorough example of automatic extraction from our report on the malware ElizaRAT is available here.INTRODUCTION01THE NEW SOCIAL ENGINEERINGTARGETING LLM ACCOUNTSAI FOR MALWAREAI VULNERABILITY RESEARCHAI MODELS IN THE DARKWEB AI SECURITY REPORT 202542S
149、ECURITY FOR,BY,&WITH AI05AI FOR RESEARCH03AI THREATS02AI FOR APT HUNTINGAI FOR ENTERPRISES04Correlation and Attribution AI also helps correlate unstructured data from external reports or text linked to threat actors.Todays observed APT activity builds on past operations documented in thousands of re
150、ports over the last two decades.By centralizing this information and making it accessible to an LLM,analysts can extract key insights from new research and cross-reference them with historical operations,improving attribution and threat analysis.Features like“Deep Research”make it easier to ask attr
151、ibution questions that were previously more challenging,as demonstrated in the example.In this scenario,we ask ChatGPT about a backdoor identified during a quick analysis and request possible attribution.While the response isnt necessarily perfect,it provides a strong leadand in this case,a relative
152、ly accurate one.Modern Approach to Attributing Hacktivist Groups Another compelling method for attribution,as discussed in our research blog,involves using language-based machine learning models and linguistic analysis of text from threat actors.Our study examines thousands of public messages from d
153、ozens of hacktivist groups by combining traditional cyber threat intelligence techniques with modern machine learning technologies.This integrated approach seeks to uncover the key topics these groups discuss,track their evolving motivations,connect related groups,and ultimately enhance our hacktivi
154、st attribution methods.AI for Malware Analysis Automating tasks such as malware analysis once seemed like a distant dream.That is no longer the case.Throughout 2024,researchers have explored using LLMs to automatically conduct malware analysis by decompiling code and feeding it into an LLM.The resul
155、ts have been impressive,demonstrating that LLMs can sometimes determine whether code is malicious even when detection rates are very low.The implications for APT threat hunting are immense.As VirusTotal already offers AI-generated analysis of certain file formats,the technology is expected to become
156、 a key part of APT hunting procedures and detection.INTRODUCTION01THE NEW SOCIAL ENGINEERINGTARGETING LLM ACCOUNTSAI FOR MALWAREAI VULNERABILITY RESEARCHAI MODELS IN THE DARKWEB AI SECURITY REPORT 202543SECURITY FOR,BY,&WITH AI05AI FOR RESEARCH03AI THREATS02AI FOR APT HUNTINGAI FOR ENTERPRISES04Vuln
157、erability research involves gathering extensive information about the targeted platform,software,or device,whether it includes source code.This process examines capabilities,functionality,attack surfaces,documentation,and relevant standards,entailing repetitive or predictable tasks.AI can largely st
158、reamline these tasks,freeing up researchers time.Bridging the Gap with(semi)Autonomous Agent Frameworks LLMs are limited by their training data and,therefore,have restricted capabilities when it comes to using external tools.However,agentic frameworks like CrewAI and Autogen are starting to bridge t
159、he gap-LLMs can now engage in conversation,interact with their systems,and utilize their tools to assist in research.Powerful open-source LLMs like Deepseek enable the development of language models for specific tasks such as assembler reverse engineering,vulnerability discovery,and LEVERAGING AI FO
160、R VULNERABILITY RESEARCH exploit development.Currently,most LLMs are generic,but advancements suggest that a Deepseek-like model can be created with added reasoning layers focused on computer science topics like coding,low-level assembler knowledge,reverse engineering,and vulnerability analysis.Powe
161、rful Agent-based Workflows for Discovering Vulnerabilities Combining Static Application Security Testing(SAST)with Dynamic Application Security Testing(DAST)can create powerful agent-based workflows for discovering vulnerabilities.LLMs capabilities in understanding language and intentions,combined w
162、ith their integration into practical tools,can significantly improve analysis.For example,heres an AI-assisted workflow for testing a new IoT device.INTRODUCTION01AI FOR APT HUNTINGTHE NEW SOCIAL ENGINEERINGTARGETING LLM ACCOUNTSAI FOR MALWAREAI MODELS IN THE DARKWEB AI SECURITY REPORT 202544SECURIT
163、Y FOR,BY,&WITH AI05AI FOR RESEARCH03AI THREATS02AI VULNERABILITY RESEARCHAI FOR ENTERPRISES04Ask me anythingHey LLM find me firmware for TP-Link router model XYZ,download it,and unpack it Hey LLM in the unpacked firmware,identify all executable files that are exposed to the network.Provide a summary
164、 per each identified executable,specify the port,protocol,and intended use Hey,LLM for this executable,use reverse engineering and static analysis to analyze the underlying data protocol.The aim is to understand how to interact with the executable using this protocol.Once the analysis is complete,cr
165、eate a fuzzing harness in C language that will allow me to fuzz the target.Verify that the harness compiles and runs.Hey LLM for the harness youve developed,run it and use debugger attached to the target,using a debugger attached to the target,monitor the fuzzing performance.Identify any blockers an
166、d improve the harness,recompile,and re-run.Keep repeating this to maximize coverage.Maintain coverage graphs and periodic reports so that I can monitor the progress INTRODUCTION01AI FOR APT HUNTINGTHE NEW SOCIAL ENGINEERINGTARGETING LLM ACCOUNTSAI FOR MALWAREAI MODELS IN THE DARKWEB AI SECURITY REPO
167、RT 202545SECURITY FOR,BY,&WITH AI05AI FOR RESEARCH03AI THREATS02AI VULNERABILITY RESEARCHAI FOR ENTERPRISES04These are routine tasks that an individual would typically perform,but with a LLM,they can be completed much more quickly.Currently,most of the necessary technology is already in place.The ch
168、allenge lies in integrating these technologies into a cohesive workflow and implementing additional safeguards to ensure accuracy.This remains the biggest bottleneck at the moment,although it is improving rapidly.A clear example of this progress is the increasing proficiency of LLMs in writing code.
169、Identifying Logic Security Flaws with AI With the advancement of reasoning capabilities,LLMs will be able to identify logic security flaws in code bases,a previously difficult task to automate.Teaching computers what a logic bug is has been challenging,but as LLM reasoning improves,they can better u
170、nderstand and detect these issues.In the future,we might use LLMs to automate the process of finding logic flaws.For example,one could ask,“Hey,LLM,check this web application for authentication bypass or privilege escalation using BURP proxy with admin/admin and user/user credentials.”Complex Protoc
171、ol Analysis with AI Many sophisticated protocols function as complex state machines.For example,a condition might be:if A,then B,but only if X;or S or T,etc.Due to this complexity,creating a formal model to explore each state is very challenging.Testing the implementationsuch as in 5G communication
172、network protocolsis even harder for every possible scenario and combination.AI can bridge the gap between our expectations based on given prompts and a powerful computing machine capable of rapidly and at scale processing vast amounts of information and data.This capability will enable the developme
173、nt of better testing frameworks,simulations,and actual tests.By instructing the AI on what to do,its ability to execute technical steps for evaluation can be leveraged,which would take human testers a considerable amount of time.Where AI is Going The advancements in large language models(LLMs)have n
174、otably improved the interaction between humans and machine intelligence.While machines have long had computing power,previous communication challenges limited their effectiveness,particularly in complex tasks like malware analysis and exploit development.Today,LLMs can grasp intentions from simple w
175、ritten input and execute tasks efficiently.Their ability to utilize tools through agentic workflows allows for performing low-level tasks at scale,thorough data analysis,and the swift relay of insights back to human operators.This evolution enhances productivity and transforms our approach to comple
176、x technological challenges.As AI progresses,it is a constant investigative assistant,revealing key insights and unusual behaviors and directing analysts more quickly than ever before.INTRODUCTION01AI FOR APT HUNTINGTHE NEW SOCIAL ENGINEERINGTARGETING LLM ACCOUNTSAI FOR MALWAREAI MODELS IN THE DARKWE
177、B AI SECURITY REPORT 202546SECURITY FOR,BY,&WITH AI05AI FOR RESEARCH03AI THREATS02AI VULNERABILITY RESEARCHAI FOR ENTERPRISES0404AI FOR ENTERPRISE 04As AI technologies become integrated into corporate environments,organizations face increased risks.Many may not realize how often their interactions i
178、nvolve AI,whether through direct use of Generative AI services or indirect interaction with background tools like grammar and translation assistants or customer support bots.Regardless,both types can lead to sharing sensitive information such as internal communications,strategic plans,financial data
179、,customer information,and intellectual property.In this section,we examine the use of AI services in enterprise networks and their associated risks.Enterprise AI Use in Numbers Based on Check Point data from recent months,AI services are used in at least 51%of enterprise networks every month.In addi
180、tion,our data shows that 1 in every 80 prompts(1.25%)sent to GenAI services from enterprise devices was found to have a high risk of sensitive data leakage,and an additional 7.5%of prompts(1 in 13)contained potentially sensitive information.AI FOR ENTERPRISE INTRODUCTION01AI FOR APT HUNTINGTHE NEW S
181、OCIAL ENGINEERINGTARGETING LLM ACCOUNTSAI FOR MALWAREAI VULNERABILITY RESEARCHAI MODELS IN THE DARKWEB AI SECURITY REPORT 202548SECURITY FOR,BY,&WITH AI05AI FOR RESEARCH03AI THREATS02AI FOR ENTERPRISES04AI Usage Around the Globe Country-level analysis reveals a considerable variation in AI services
182、integration and exposure worldwide(figure 1).Interestingly,no English-speaking countries topped the list of countries using AI services in the highest percentage of enterprise networks.This may be attributed to lower dependency on AI-powered products like translation services.Figure 1 The use of AI
183、services in enterprise networks by country-March 2025 Turkey68%66%66%64%63%40%39%34%34%24%CORPORATE NETWORKS WITH AI SERVICES-MARCH 2025 AustriaBelgiumPortugalEcuadorIndiaSwitzerlandHong KongDenmarkUkraineTop 5 CountriesBottom 5 CountriesINTRODUCTION01AI FOR APT HUNTINGTHE NEW SOCIAL ENGINEERINGTARG
184、ETING LLM ACCOUNTSAI FOR MALWAREAI VULNERABILITY RESEARCHAI MODELS IN THE DARKWEB AI SECURITY REPORT 202549SECURITY FOR,BY,&WITH AI05AI FOR RESEARCH03AI THREATS02AI FOR ENTERPRISES04The Popularity of AI Services Analysis of AI services monitored in enterprise environments indicates that OpenAIs Chat
185、GPT is currently the most widely used AI service,seen in 37%of enterprise networks,followed by Microsofts Copilot at 27%(figure 2).Following Chat GPT and Copilot,writing assistant services lead among AI services,including Grammarly(25%),DeepL(18%),and QuillBot(11%).In addition,video and presentation
186、 tools like Vidyard(21%)and Gamma(8%)have also shown substantial adoption.Figure 2 The most popular AI services in enterprises-March 2025 ChatGPT37%27%25%21%18%13%12%11%9%8%TOP 10 AI SERVICES AND TOOLS(%of Corporate Networks March 2025)Microsoft CopilotGrammarlyVidyardDeepLPerplexityDeepSeekQuillBot
187、Google GeminiGammaINTRODUCTION01AI FOR APT HUNTINGTHE NEW SOCIAL ENGINEERINGTARGETING LLM ACCOUNTSAI FOR MALWAREAI VULNERABILITY RESEARCHAI MODELS IN THE DARKWEB AI SECURITY REPORT 202550SECURITY FOR,BY,&WITH AI05AI FOR RESEARCH03AI THREATS02AI FOR ENTERPRISES04A review of usage data since December
188、2024 reveals relatively stable exposure levels for most AI services in enterprise environments(figure 3).Figure 3 Major Generative AI Service in enterprise networks-December 2024-March 2025 ChatGPTMicrosoft CopilotPerplexityDeepSeekClaudeGoogle Gemini40%35%30%25%20%15%10%5%0%MAR 25FEB 25JAN 25DEC 24
189、LEADING AI SERVICES AND TOOLSAI Usage TrendsIn most services,except for DeepSeek,there has been a slight increase over the past month,showing ongoing growth and adoption.INTRODUCTION01AI FOR APT HUNTINGTHE NEW SOCIAL ENGINEERINGTARGETING LLM ACCOUNTSAI FOR MALWAREAI VULNERABILITY RESEARCHAI MODELS I
190、N THE DARKWEB AI SECURITY REPORT 202551SECURITY FOR,BY,&WITH AI05AI FOR RESEARCH03AI THREATS02AI FOR ENTERPRISES04A closer look at DeepSeek shows an initial surge in usage immediately following its official release towards the end of January 2025,followed by a steady decline,which may be attributed
191、to security and privacy concerns(figure 4).Figure 4-DeepSeek weekly usage trends since the official release according to requests made to Check Points ThreatCloud AI19 Jan 25 Jan26 Jan 01 Feb02 Feb 08 Feb09 Feb 15 Feb16 Feb 22 Feb23 Feb 01 Mar02 Mar 08 Mar09 Mar 15 Mar16 Mar 22 Mar23 Mar 29 Mar30 Ma
192、r 05 Apr500 K400 K300 K200 K100 K0This trend shows that the rapid adoption of new AI services makes it challenging for network and security admins to learn and manage them effectively,thus increasing potential risks.DEEPSEEK WEEKLY USAGE TREND114,093432,104315,326INTRODUCTION01AI FOR APT HUNTINGTHE
193、NEW SOCIAL ENGINEERINGTARGETING LLM ACCOUNTSAI FOR MALWAREAI VULNERABILITY RESEARCHAI MODELS IN THE DARKWEB AI SECURITY REPORT 202552SECURITY FOR,BY,&WITH AI05AI FOR RESEARCH03AI THREATS02AI FOR ENTERPRISES04AI Risks for Enterprises Adopting Generative AI applications offers opportunities for increa
194、sed productivity but also raises critical security,compliance,and data integrity concerns.Understanding these risks is crucial for organizations aiming to leverage AI effectively while protecting their operations and sensitive information.Here,we outline the growing challenges of AI adoption.Shadow
195、AI ApplicationsNew GenAI applications emerge daily,but not all can be trusted.Unauthorized AI tools and applications employees use can lead to security vulnerabilities,compliance issues,and inconsistent data management,exposing a company to operational risk or data breaches.Organizations need proper
196、 AI monitoring and governance.Data LossAI models handle various types of data,some of which are stored,shared with third parties,or have inadequate security practices.These issues raise privacy concerns,compliance challenges,and vulnerabilities to data leaks.Before integration,enterprises should ass
197、ess AI applications for data protection and industry best practices.Emerging Vulnerabilities in GenAI ApplicationsGenAI applications can introduce new security vulnerabilities,increasing organizations risk of cyber attacks.Hidden vulnerabilities in AI-generated code,such as outdated libraries and da
198、ta poisoning in repositories,must be addressed with technical controls,policies,and best practices.Excessive agency As agentic AI systems advance,they become more susceptible to manipulation by malicious actors who exploit vulnerabilities through prompt injection or data poisoning.Limited human over
199、sight can lead to unintended decisions,exposing sensitive information and disrupting operations.Proper governance and safeguards are essential.INTRODUCTION01AI FOR APT HUNTINGTHE NEW SOCIAL ENGINEERINGTARGETING LLM ACCOUNTSAI FOR MALWAREAI VULNERABILITY RESEARCHAI MODELS IN THE DARKWEB AI SECURITY R
200、EPORT 202553SECURITY FOR,BY,&WITH AI05AI FOR RESEARCH03AI THREATS02AI FOR ENTERPRISES0405SECURITY FOR,BY,AND WITH AI 05To ensure the safe use of organizations powerful AI tools,Check Points GenAI Protect solution provides comprehensive security and compliance features specifically designed to manage
201、 GenAI prompts unstructured and conversational nature.It delivers visibility,risk assessment,and real-time data loss prevention.Check Points GenAI Protect discovers the GenAI services used in your organization,assesses their risk,and applies groundbreaking AI-powered data protection so you can adopt
202、 the latest services without the added risk.Discover shadow and sanctioned GenAI applications,prevent data loss,and make informed GenAI governance decisions while meeting regulations.Discover and assess the risk of GenAI apps used across your organization Make informed governance decisions by uncove
203、ring your top GenAI use cases Apply revolutionary AI-based data classification to prevent data loss Meet regulations with enterprise-grade and monitoring visibility Security for AI To ensure the safe use of organizations powerful AI tools,Check Points GenAI Protect solution provides comprehensive se
204、curity and compliance features specifically designed to manage GenAI prompts unstructured and conversational nature.It delivers visibility,risk assessment,and real-time data loss prevention.INTRODUCTION01AI FOR APT HUNTINGTHE NEW SOCIAL ENGINEERINGTARGETING LLM ACCOUNTSAI FOR MALWAREAI VULNERABILITY
205、 RESEARCHAI MODELS IN THE DARKWEB AI SECURITY REPORT 202555SECURITY FOR,BY,&WITH AI05AI FOR RESEARCH03AI THREATS02AI FOR ENTERPRISES04AI is your most valuable tool for analyzing,detecting,preventing,and predicting threats.Content analysis identifies behavioral patterns,anomalies,and language cues th
206、at enable AI systems to learn from and adapt to new threats continuously.Check Points ThreatCloud AI aggregates and analyzes big data telemetry and millions of Indicators of compromise(IoCs)every day.Our threat intelligence database is fed from 150,000 connected networks,millions of endpoint devices
207、,Check Point Research and dozens of external feeds.Over 50 engines are packed with AI-based features and capabilities.With over 50 technologies to detect and neutralize novel threats,ThreatCloud AI uses big data to update its defenses with the latest Indicators of Compromise.It analyzes telemetry da
208、ta for precise threat categorization,enhancing security across networks,cloud,operations,and user access.Security by AI AI is your most valuable tool for analyzing,detecting,preventing,and predicting threats.Content analysis identifies behavioral patterns,anomalies,and language cues that enable AI s
209、ystems to learn from and adapt to new threats continuously.INTRODUCTION01AI FOR APT HUNTINGTHE NEW SOCIAL ENGINEERINGTARGETING LLM ACCOUNTSAI FOR MALWAREAI VULNERABILITY RESEARCHAI MODELS IN THE DARKWEB AI SECURITY REPORT 202556SECURITY FOR,BY,&WITH AI05AI FOR RESEARCH03AI THREATS02AI FOR ENTERPRISE
210、S04Security With AI Attackers automate with AI,and so should you.Implementing AI into your cyber security tool stack is essential.With automation,IT and security teams can improve their productivity exponentially with their current resources.Check Points Infinity AI Copilot is an expert generative A
211、I assistant that reduces the time needed to perform common tasks.Infinity AI Copilot assists administrators and security analysts by automating complex,multi-step activities.AI Copilot offers extensive support for Check Points Infinity Platform,assisting in the management of security across the enti
212、re system.Infinity AI Copilot understands the customers policies,access rules,objects,logs,and all relevant product documentation,enabling it to deliver contextualized and comprehensive answers.With your AI-powered security assistant,you can:Accelerate security administration Reduce the time needed
213、for security tasks,including policy creation,implementation,and trouble ticket resolution.Increase security effectiveness Apply new threat prevention controls or update existing ones,such as data protection,firewall rules,or IPS signatures.Generate training to help users stay safe.Improve incident m
214、itigation and response.Leverage AI in threat hunting,analysis,and resolution.INTRODUCTION01AI FOR APT HUNTINGTHE NEW SOCIAL ENGINEERINGTARGETING LLM ACCOUNTSAI FOR MALWAREAI VULNERABILITY RESEARCHAI MODELS IN THE DARKWEB AI SECURITY REPORT 202557SECURITY FOR,BY,&WITH AI05AI FOR RESEARCH03AI THREATS0
215、2AI FOR ENTERPRISES04About Check Point Software Technologies Ltd.Check Point Software Technologies Ltd.()is a leading protector of digital trust,utilizing AI-powered cyber security solutions to safeguard over 100,000 organizations globally.Through its Infinity Platform and an open garden ecosystem,C
216、heck Points prevention-first approach delivers industry-leading security efficacy while reducing risk.Employing a hybrid mesh network architecture with SASE at its core,the Infinity Platform unifies the management of on-premises,cloud,and workspace environments to offer flexibility,simplicity and sc
217、ale for enterprises and service providers.Contact usWORLDWIDE HEADQUARTERS 5 Shlomo Kaplan Street,Tel Aviv 6789159,Israel Tel:972-3-753-4599 Email: U.S.HEADQUARTERS 100 Oracle Parkway,Suite 800,Redwood City,CA 94065 Tel:800-429-4391 UNDER ATTACK?Contact our Incident Response Team:emergency- CHECK PO
218、INT RESEARCH To get our latest research and other exclusive content,Visit us at INTRODUCTION01AI FOR APT HUNTINGTHE NEW SOCIAL ENGINEERINGTARGETING LLM ACCOUNTSAI FOR MALWAREAI VULNERABILITY RESEARCHAI MODELS IN THE DARKWEB AI SECURITY REPORT 202558SECURITY FOR,BY,&WITH AI05AI FOR RESEARCH03AI THREATS02AI FOR ENTERPRISES04 2025 Check Point Software Technologies Ltd.All rights reserved.