When the insider is the adversary: North Korea’s remote work espionage campaign


In a revelation that should concern every security leader, the U.S. Justice Department (DOJ) recently disclosed that over 300 companies, including tech giants and at least one defense contractor, unknowingly hired North Korean operatives posing as remote IT workers.

These individuals infiltrated corporate networks not by breaching firewalls or exploiting zero-days, but by landing jobs through video interviews, onboarding processes, and legitimate access credentials. Once inside, they stole sensitive data and funneled millions in earnings back to the Kim regime, fueling its sanctioned weapons programs.

The campaign is one of the most aggressive, large-scale examples of an insider threat - a category of risk that arises when individuals within an organization, whether employees, contractors, or partners, abuse their authorized access to cause harm.

Unlike external threats that, at least in theory, can be detected and stopped through technical signatures or perimeter defenses, insider threats operate from within, often undetected, with full access to sensitive systems and data.

This North Korean operation wasn’t improvised. It was calculated, professional, and deeply strategic. And it signals a shift in how adversaries operate: not just breaking in, but blending in.

Ariel Parnes

Co-Founder and Chief Operating Officer at Mitiga.

The Threat You Can’t Patch

Unlike external attackers, insider threats - especially those who enter through the hiring process - don’t trigger alerts at the door. They have keys. They follow protocols. They attend standups. They do the work, or just enough of it, while quietly collecting access and evading scrutiny.

That’s what makes this threat so difficult to detect and so devastating when successful. These operatives didn’t brute-force credentials. They weren’t scraping dark corners of the internet. They passed interviews by using stolen or fabricated identities. According to the DOJ, they often relied on American citizens’ identities stolen through job boards or phishing. Many even went as far as using AI-generated content and deepfakes to pass interviews.

Once employed, they didn’t need to act suspiciously to gain access. They simply did what everyone else did: logged in via VPN, accessed the codebase, reviewed Jira tickets, joined Slack channels. They weren’t intruders. They were team members.

How Remote Work and AI Changed the Game

What enabled this campaign was a unique combination of evolving workplace dynamics and readily available AI tools. First, the normalization of remote work made it plausible to have employees who would never be physically seen or meet a manager face to face. What might have once been considered an unusual hire became completely normal in the post-pandemic world.

Second, generative AI gave attackers the tools to mimic fluency, build impressive resumes, and even generate convincing interview responses. Some operatives used synthetic video and audio to complete interviews or handle technical screenings, masking language fluency gaps or cultural tells.

Then came the infrastructure. In some cases, U.S.-based collaborators helped maintain “laptop farms” - stacks of employer-issued machines in a single location controlled by the operatives using KVM switches and VPNs. This setup ensured that access appeared to originate from within the United States, helping them slip past geofencing and fraud detection systems.
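
A geofencing control of the kind this setup defeats is often little more than an IP-to-country check at login. A minimal sketch, assuming a toy lookup table in place of a real GeoIP database (the addresses and policy are illustrative, not from any reported case):

```python
# Minimal sketch of an IP-geolocation login check of the kind a
# US-based laptop farm defeats: the employer-issued machine's IP
# genuinely is domestic, so the check passes.
ALLOWED_COUNTRIES = {"US"}  # illustrative policy

# Hypothetical lookup table standing in for a GeoIP database.
GEOIP = {
    "203.0.113.7": "US",   # laptop farm inside the US -> passes
    "198.51.100.9": "KP",  # direct foreign login -> blocked
}

def login_allowed(ip: str) -> bool:
    """Return True if the source IP resolves to an allowed country."""
    return GEOIP.get(ip, "unknown") in ALLOWED_COUNTRIES
```

Because the laptop farm’s traffic genuinely originates from a US address, a check like this passes - which is why source location alone is a weak signal against this kind of adversary.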

These weren’t lone actors. They were part of a coordinated state-sponsored effort with global infrastructure, deep operational discipline, and a clear strategic mission: extract value from Western companies to fund North Korea’s sanctioned economy and military ambitions.

A Blind Spot in Detection

The alarming success of this campaign highlights a gap that many organizations still haven’t addressed: detecting adversaries who look legitimate on paper, behave within expected parameters, and don’t trip alarms.

Traditional security tools are tuned for external anomalies: port scans, malware signatures, brute-force attempts. But an insider who joins a company through standard hiring, logs in during work hours, and accesses systems they're authorized to use won’t trigger those alerts. They aren’t acting maliciously in a technical sense - until they are.

What’s needed is not only tighter hiring practices, but also better visibility into user behavior and environment-wide activity patterns. Security teams need to be able to distinguish between normal and anomalous behavior even among valid users.

That means collecting and retaining forensic-grade data - logs from cloud applications, identity systems, endpoint activity, and remote access infrastructure - and making it searchable and analyzable at scale. Without a way to retrospectively investigate how access was used, organizations are flying blind. They will only know they’ve been compromised once the data is gone, the money is missing, or law enforcement shows up.
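
One way to make that retrospective investigation possible is to normalize events from every source into a single queryable schema. A minimal sketch, with illustrative field names rather than any particular product’s format:

```python
from dataclasses import dataclass
from datetime import datetime

# A single normalized schema across cloud, identity, endpoint, and
# remote-access logs makes retrospective questions ("what did this
# account touch, and when?") answerable with a simple filter.
@dataclass
class AccessEvent:
    timestamp: datetime
    user: str
    source: str      # e.g. "okta", "aws", "vpn" (illustrative names)
    resource: str
    action: str

def events_for_user(events: list, user: str) -> list:
    """Retrospective query: everything one identity did, in time order."""
    return sorted((e for e in events if e.user == user),
                  key=lambda e: e.timestamp)
```

Once events share a schema, the harder questions - unusual hours, unexpected systems - become filters over the same store rather than one-off log archaeology.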

From Reactive to Proactive: How to Get Ahead of the Next Campaign

Defending against insider threats like this starts before the first alert. It requires rethinking onboarding, monitoring, and response.

Companies need to layer behavioral analytics on top of access logs, looking for subtle indicators: unusual access times, lateral movement into unexpected systems, usage patterns that don’t match the rest of the team. This type of detection requires models trained on real-world behavior, tuned not for raw volume but for suspicious variance.
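
As a toy illustration of variance-based detection, a leave-one-out z-score compares each user’s activity against a baseline built from their peers and flags only sharp deviations. Real systems combine many more signals, and the threshold here is an arbitrary assumption:

```python
from statistics import mean, stdev

def variance_flags(volume_by_user: dict, threshold: float = 3.0) -> dict:
    """Flag users whose activity volume deviates sharply from peers.

    Leave-one-out z-score: each user is compared against a baseline
    computed from everyone else, so one extreme user cannot hide by
    inflating the team average. Toy model - production systems blend
    many signals (hours, systems touched, data moved).
    """
    flags = {}
    for user, value in volume_by_user.items():
        peers = [v for u, v in volume_by_user.items() if u != user]
        mu, sigma = mean(peers), stdev(peers)
        if sigma > 0 and abs(value - mu) / sigma > threshold:
            flags[user] = (value - mu) / sigma
    return flags
```

For example, `variance_flags({"ana": 100, "ben": 110, "cam": 95, "dev": 105, "eve": 5000})` flags only `eve` - the one account whose volume is wildly out of line with the team.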

It also means proactively hunting - not waiting for an alert, but actively asking: which access looks unusual? Where are we seeing employees access systems they typically don’t use? Why is a new hire downloading a volume of data typically accessed only by team leads? These questions can’t be answered without proper instrumentation. And they can’t be answered late.
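
A hunt like that last question can be expressed as a simple query over normalized access data. A hypothetical sketch, where the 90-day window and the team-lead-median baseline are illustrative assumptions to be tuned per environment:

```python
from datetime import date

def hunt_new_hire_exfil(start_dates: dict, downloads_mb: dict,
                        lead_median_mb: float, today: date) -> list:
    """Hypothetical hunt: new hires (< 90 days in) whose download
    volume exceeds the median for team leads. Window and baseline
    are illustrative assumptions, not established thresholds."""
    return sorted(
        user for user, start in start_dates.items()
        if (today - start).days < 90
        and downloads_mb.get(user, 0) > lead_median_mb
    )
```

Run periodically over the same event store, a query like this turns “we’d only know after the data is gone” into a question asked every day.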

No Industry Is Immune

This campaign didn’t target one sector. It was less about where the operatives landed and more about how many places they could get into. That’s the hallmark of a campaign focused on widespread infiltration, long-term persistence, and maximum value extraction.

The companies that were affected weren’t necessarily careless. They were operating in a threat landscape that had shifted beneath them. The attackers just moved faster.

What This Means Going Forward

The remote workforce isn't going away. Neither is AI. Together, they’ve created unprecedented flexibility - and unprecedented opportunity for adversaries. Companies need to adapt.

Insider threats are no longer just about disgruntled employees or careless contractors. They’re adversaries with time, resources, and state backing, who understand our systems, processes, and blind spots better than we’d like to admit.

Protecting against this threat means investing not just in prevention, but in detection and investigation as well. Because the next adversary isn’t knocking at your firewall. They’re already logged in.


This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
