North Korean Group Targets South With Military ID Deepfakes

Summary

The North Korea-linked APT group Kimsuky has been using generative AI, including ChatGPT, to create deepfaked South Korean military ID documents as part of tailored social‑engineering campaigns aimed at journalists, researchers and human‑rights activists. Genians, a South Korean cybersecurity firm, analysed the campaign and found that the lure asked recipients to review draft ID documents and download a linked zip archive containing a malicious LNK loader.

The campaign illustrates a growing trend: nation‑state actors mixing realistic synthetic identities with context‑aware lures to boost engagement and deliver malware. Recorded Future and other researchers have observed similar AI use by North Korean clusters (PurpleDelta, PurpleBravo) for code generation, translation and persona crafting. OpenAI and Anthropic have also reported misuse of their models by threat actors.

Key Points

  • Kimsuky used ChatGPT and other AI tools to generate realistic images of South Korean military ID documents to support phishing lures.
  • Targets included defence‑related institutions, journalists, researchers and human‑rights activists with personalised, topic‑relevant content.
  • The attack lure asked recipients to download a zip file; opening an included LNK file would execute the malicious loader.
  • Genians linked the campaign to Kimsuky via infrastructure, malware indicators and metadata showing AI generation.
  • Other North Korean groups (PurpleDelta, PurpleBravo) are also leveraging AI for persona building, code and translation to improve deception.
  • Vendors such as OpenAI and Anthropic have documented model misuse by nation‑state actors for offensive operations.
  • Adding believable photo IDs increases perceived authority and the likelihood that victims will open attachments or links.

Content Summary

Genians’ analysis (published 15 Sept) describes tailored phishing emails that used deepfake military IDs to establish credibility and relevance. The emails referenced sensitive topics — North Korea research, national defence and political issues — to entice recipients to download an archive. The archive contained a malicious LNK file which, if opened, delivered a loader to compromise systems.

Researchers tied the campaign to Kimsuky using IPs, malware signatures and other indicators. The incident fits a broader pattern of North Korean operators using generative AI to scale and refine social engineering, create synthetic recruiters or identities, and automate parts of their operations.

Context and Relevance

This story matters because it shows how generative AI is being weaponised by nation‑state groups to make phishing far more convincing. Organisations working on defence, media, human‑rights and research are particularly exposed because attackers now tailor both imagery and content to a target’s professional context. The technique shortens the time and skill needed to produce credible fakes, raising the baseline risk for sectors handling sensitive geopolitical information.

For security teams, this trend reinforces the need for stronger email defences, file‑execution policies (block LNKs from untrusted sources), multi‑factor authentication, enhanced user awareness training that covers AI‑enabled deception, and threat‑intelligence sharing to detect and block known indicators.
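One of those file‑execution controls can be applied before delivery: since the lure in this campaign hid an LNK loader inside a zip attachment, a mail gateway or attachment scanner can flag archives that contain shortcut files at all. A minimal sketch, assuming a hypothetical `suspicious_entries` helper and an extension list of your own choosing (none of these names come from the Genians report):

```python
import zipfile

# Hypothetical policy: Windows shortcut (.lnk) files inside an emailed
# archive are almost never legitimate, so treat them as quarantine-worthy.
# Other common loader formats are included for illustration.
SUSPECT_EXTENSIONS = (".lnk", ".hta", ".js", ".vbs")

def suspicious_entries(zip_path: str) -> list[str]:
    """Return archive entries whose extension suggests an executable lure."""
    with zipfile.ZipFile(zip_path) as archive:
        return [
            name for name in archive.namelist()
            if name.lower().endswith(SUSPECT_EXTENSIONS)
        ]

def should_quarantine(zip_path: str) -> bool:
    """Quarantine any attachment archive that carries a suspect entry."""
    return bool(suspicious_entries(zip_path))
```

A gateway would call `should_quarantine` on each zip attachment and hold flagged messages for review; this catches the delivery mechanism regardless of how convincing the accompanying deepfake imagery is.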

Why should I read this?

Short answer: because this isn’t sci‑fi — it’s a practical, low‑effort trick that makes phishing scarier and quicker to roll out. If you work in defence, media, research or human‑rights, this is directly aimed at you. Read it so you know what the lure looks like, why it works, and what to lock down before someone clicks.

Source

Source: https://www.darkreading.com/cyberattacks-data-breaches/north-korean-group-south-military-id-deepfakes