Google Warns of AI-Powered North Korean Malware Campaign Targeting Crypto, DeFi

Source: Decrypt

Published: 16:21 UTC

BTC Price: $69,317

#CryptoSecurity #AIThreats #DeFi

Analysis

Price Impact

Medium

The continuous and escalating threat from sophisticated North Korean hacking groups, now employing AI deepfakes and advanced social engineering, represents a significant ongoing risk to the crypto and DeFi ecosystem. While not a direct market-crash trigger, the theft of more than $2 billion in 2025 represents a substantial capital drain and can erode investor trust, potentially deterring new entrants or institutional adoption amid heightened security concerns and regulatory scrutiny.

Trustworthiness

High

The warning comes directly from Google's Mandiant security team, a highly reputable cybersecurity firm, and is corroborated by blockchain analytics firm Chainalysis, lending strong credibility to the reported threats and stolen amounts.

Price Direction

Neutral

While the news underscores systemic risks and potential FUD (fear, uncertainty, and doubt), the broader crypto market has historically shown resilience to hacking reports unless a major, systemic vulnerability is exploited across multiple projects. However, the report fosters an environment of caution and highlights the need for improved security practices, which could indirectly temper bullish sentiment or attract more regulatory attention.

Time Effect

Long

The integration of AI into these attacks marks a new, more sophisticated era of cyber threats that will require long-term, evolving security measures and vigilance from individuals and institutions within the crypto space. This isn't a one-time event but an ongoing arms race between attackers and defenders.

Original Article:

In brief:

- North Korean actors are targeting the crypto industry with phishing attacks using AI deepfakes and fake Zoom meetings, Google warned.
- More than $2 billion in crypto was stolen by DPRK hackers in 2025.
- Experts warn that trusted digital identities are becoming the weakest link.

Google's security team at Mandiant has warned that North Korean hackers are incorporating artificial intelligence–generated deepfakes into fake video meetings as part of increasingly sophisticated attacks against crypto companies, according to a report released Monday.

Mandiant said it recently investigated an intrusion at a fintech company that it attributes to UNC1069, or "CryptoCore", a threat actor linked with high confidence to North Korea. The attack used a compromised Telegram account, a spoofed Zoom meeting, and a so-called ClickFix technique to trick the victim into running malicious commands. Investigators also found evidence that AI-generated video was used to deceive the target during the fake meeting.

In a post on X, Mandiant (part of Google Cloud, @Mandiant, February 9, 2026) wrote: "North Korean actor UNC1069 is targeting the crypto sector with AI-enabled social engineering, deepfakes, and 7 new malware families. Get the details on their TTPs and tooling, as well as IOCs to detect and hunt for the activity detailed in our post 👇 https://t.co/t2qIB35stt"

"Mandiant has observed UNC1069 employing these techniques to target both corporate entities and individuals within the cryptocurrency industry, including software firms and their developers, as well as venture capital firms and their employees or executives," the report said.

North Korea's crypto theft campaign

The warning comes as North Korea's cryptocurrency thefts continue to grow in scale. In mid-December, blockchain analytics firm Chainalysis said North Korean hackers stole $2.02 billion in cryptocurrency in 2025, a 51% increase from the year before. The total amount stolen by DPRK-linked actors now stands at roughly $6.75 billion, even as the number of attacks has declined.

The findings highlight a broader shift in how state-linked cybercriminals operate. Rather than relying on mass phishing campaigns, CryptoCore and similar groups are focusing on highly tailored attacks that exploit trust in routine digital interactions, such as calendar invites and video calls. In this way, North Korea is achieving larger thefts through fewer, more targeted incidents.

According to Mandiant, the attack began when the victim was contacted on Telegram by what appeared to be a known cryptocurrency executive whose account had already been compromised. After building rapport, the attacker sent a Calendly link for a 30-minute meeting that directed the victim to a fake Zoom call hosted on the group's own infrastructure. During the call, the victim reported seeing what appeared to be a deepfake video of a well-known crypto CEO.

Once the meeting began, the attackers claimed there were audio problems and instructed the victim to run "troubleshooting" commands, a ClickFix technique that ultimately triggered the malware infection. Forensic analysis later identified seven distinct malware families on the victim's system, deployed in an apparent attempt to harvest credentials, browser data, and session tokens for financial theft and future impersonation.
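ClickFix lures depend on the victim pasting an attacker-supplied "fix" command into a terminal or Run dialog, which means even a crude lexical filter can flag many of them before execution. The sketch below is a minimal, hypothetical illustration of that idea in Python; the patterns and the looks_like_clickfix helper are assumptions drawn from publicly documented ClickFix tradecraft, not the indicators published in Mandiant's report.

```python
import re

# Hypothetical heuristic patterns seen in publicly documented
# ClickFix-style lures; NOT the IOCs from Mandiant's report.
SUSPICIOUS_PATTERNS = [
    r"powershell\s+.*-enc",        # encoded PowerShell payload
    r"mshta\s+https?://",          # remote HTA execution
    r"curl\s+.*\|\s*(sh|bash)",    # pipe-to-shell download
    r"certutil\s+.*-urlcache",     # LOLBin download trick
    r"irm\s+https?://.*\|\s*iex",  # Invoke-RestMethod piped to Invoke-Expression
]

def looks_like_clickfix(command: str) -> bool:
    """Return True if a pasted 'troubleshooting' command matches a known lure pattern."""
    return any(re.search(p, command, re.IGNORECASE) for p in SUSPICIOUS_PATTERNS)

if __name__ == "__main__":
    # Example of what a victim might be told to paste to "fix their audio".
    pasted = "powershell -NoProfile -enc SQBFAFgA..."
    if looks_like_clickfix(pasted):
        print("Blocked: command matches a ClickFix-style lure pattern.")
```

A filter like this is easy to evade and would only be one layer of defense; the point is that the "run this command to fix your audio" step is the most mechanically detectable link in an otherwise social attack chain.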
Deepfake impersonation

Fraser Edwards, co-founder and CEO of decentralized identity firm cheqd, said the attack reflects a pattern he is seeing repeatedly against people whose jobs depend on remote meetings and rapid coordination.

"The effectiveness of this approach comes from how little has to look unusual," Edwards said. "The sender is familiar. The meeting format is routine. There is no malware attachment or obvious exploit. Trust is leveraged before any technical defence has a chance to intervene."

Edwards said deepfake video is typically introduced at escalation points, such as live calls, where seeing a familiar face can override doubts created by unexpected requests or technical issues.

"Seeing what appears to be a real person on camera is often enough to override doubt created by an unexpected request or technical issue. The goal is not prolonged interaction, but just enough realism to move the victim to the next step," he said.

He added that AI is now being used to support impersonation outside of live calls. "It is used to draft messages, correct tone of voice, and mirror the way someone normally communicates with colleagues or friends. That makes routine messages harder to question and reduces the chance that a recipient pauses long enough to verify the interaction," he explained.

Edwards warned the risk will increase as AI agents are introduced into everyday communication and decision-making. "Agents can send messages, schedule calls, and act on behalf of users at machine speed. If those systems are abused or compromised, deepfake audio or video can be deployed automatically, turning impersonation from a manual effort into a scalable process," he said.

It's "unrealistic" to expect most users to know how to spot a deepfake, Edwards said, adding that, "The answer is not asking users to pay closer attention, but building systems that protect them by default. That means improving how authenticity is signalled and verified, so users can quickly understand whether content is real, synthetic, or unverified without relying on instinct, familiarity, or manual investigation."
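Edwards's point about verification by default can be made concrete: if routine artifacts such as meeting invites carried a cryptographic signature bound to the sender's known identity, a client could check authenticity automatically instead of asking the recipient to trust a familiar face. The following is a minimal sketch of that idea using Ed25519 signatures via the Python cryptography package; the invite format and the verify_invite function are illustrative assumptions, not cheqd's actual protocol.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Illustrative only: a real deployment would bind the public key to a
# verified identity (e.g., a DID document), not generate it ad hoc.
sender_key = Ed25519PrivateKey.generate()
sender_pub = sender_key.public_key()

# A hypothetical invite payload the sender signs before it is delivered.
invite = b"zoom-meeting:2026-02-09T16:00Z:host=known-crypto-exec"
signature = sender_key.sign(invite)

def verify_invite(pub, invite_bytes: bytes, sig: bytes) -> bool:
    """Accept the invite only if it was signed by the sender's known key."""
    try:
        pub.verify(sig, invite_bytes)
        return True
    except InvalidSignature:
        return False

print(verify_invite(sender_pub, invite, signature))             # True: authentic
print(verify_invite(sender_pub, b"tampered invite", signature)) # False: rejected
```

In this model the client, not the user, decides whether an invite is authentic, which is the "protect them by default" property Edwards describes: a deepfake on the call cannot compensate for an invite that fails signature verification.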