The initial reaction to supply chain breaches is often panic.

  • Are we compromised?
  • How many systems are affected?
  • Was patient data exposed?
  • Are clinical systems at risk?

These are the right questions. But in a health system, they can’t be answered with speculation. They can only be answered with data.

Between June and December 2025, Chinese government-linked hackers hijacked Notepad++’s update system to deliver malware to targeted organizations in government, telecom, aviation, and critical infrastructure sectors. The attack went undetected for months.

While healthcare was not the reported target, that detail is almost beside the point. Health systems rely on thousands of third-party components and auto-update mechanisms across clinical, financial, research, and operational systems. The same conditions that made this attack possible exist inside nearly every modern hospital environment.

What makes this particularly concerning:

Notepad++ has been around for more than 20 years. But longevity and popularity don’t equal security. And open source does not mean immune. In this case, the developer’s shared hosting setup became the weak link that compromised the entire update chain.

Software updates are meant to make us more secure. When they become the attack vector, we lose one of our most fundamental security controls.

Unlike mass malware campaigns, this attack delivered payloads only to specific victims. Traditional security tools that look for widespread patterns often miss these surgical strikes. And healthcare organizations, with valuable data and complex vendor ecosystems, are precisely the kind of environments that could be selectively targeted.

If you are running software with auto-update capabilities — and you are — you need visibility into what is actually being delivered to your endpoints, not just confidence in the vendor’s security posture.
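That visibility can be made concrete as a reconciliation step: compare what endpoints report is executing against the manifest of what was approved. A minimal sketch in Python, where the digest strings are illustrative placeholders (not real Notepad++ hashes) and the endpoint-reporting mechanism is assumed to exist:

```python
# Minimal sketch: reconcile binaries observed on endpoints against what was
# approved. The digest values below are illustrative placeholders.

# Digests recorded when packages were vetted and approved for deployment.
approved_manifest = {
    "sha256:aa11-notepadpp-8.6.2",
    "sha256:bb22-ehr-agent-1.4.0",
}

# Digests reported by a hypothetical endpoint agent during tonight's sweep.
observed_on_endpoints = {
    "sha256:aa11-notepadpp-8.6.2",
    "sha256:cc33-unknown-updater",
}

# Anything executing that was never approved is a finding, not a guess.
unexpected = observed_on_endpoints - approved_manifest
for digest in sorted(unexpected):
    print(f"INVESTIGATE: {digest} is executing but was never approved")
```

The point of the set difference is that it turns "we trust the vendor" into a checkable statement: either the observed digest appears in your approval records, or it becomes an investigation item.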

The question is not, “Do we trust our vendors?” The question is, “How do we verify what is actually running inside our environment?”

You may feel confident about your approved software list. You likely have governance, procurement review, and vendor security questionnaires in place. But can you verify — at a code level — that what is executing in your environment matches what should be there?

Confidence is not control. Approval is not verification.

Health systems operate in an environment where uptime protects patient safety, regulatory scrutiny is constant, and trust is non-negotiable. “We believed the update was legitimate” will not satisfy a board, a regulator, or a patient.

This requires verifiable visibility into your actual codebase and runtime environment — not assumptions about what should be there.

Three Actions to Take Now

1. Inventory your update mechanisms.

Know which applications — enterprise, departmental, clinical, and developer tools — have auto-update enabled. Understand where updates originate, how they are delivered, and what level of control you have over timing and validation.
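One way to make that inventory actionable is to record each update channel with the same few attributes and then sort by risk. A sketch, assuming a hand-built starting list (a real inventory would be generated from endpoint and CMDB data); the field names and entries are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class UpdateChannel:
    application: str   # e.g. "Notepad++" or an imaging workstation agent
    origin: str        # where updates come from (vendor CDN, internal mirror, ...)
    delivery: str      # "auto", "managed", or "manual"
    validated: bool    # is the package checked before install?
    deferrable: bool   # do we control the timing?

# Illustrative entries, not a real inventory.
inventory = [
    UpdateChannel("Notepad++", "vendor update server", "auto", False, False),
    UpdateChannel("EHR integration engine", "internal mirror", "managed", True, True),
]

# Surface the worst combination first: auto-updating software that is
# neither validated before install nor deferrable in timing.
high_risk = [c for c in inventory
             if c.delivery == "auto" and not c.validated and not c.deferrable]
for c in high_risk:
    print(f"REVIEW: {c.application} auto-updates from {c.origin} with no validation or deferral")
```

Even this crude triage separates channels you merely use from channels you actually control, which is the distinction the inventory exists to expose.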

2. Establish verification processes.

Do not rely solely on signed updates. Signing mechanisms can be compromised. Implement processes to verify the integrity and provenance of what is being deployed before it reaches production systems.
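One such process, sketched below under two assumptions: the update is staged to disk before installation, and a known-good digest was recorded independently (for example, during procurement review) rather than taken from the update channel itself. The function names are hypothetical:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256; avoids loading large installers into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_before_deploy(path: str, approved_digest: str) -> bool:
    """Gate deployment on a digest recorded at approval time, not on whatever
    metadata the update channel delivers alongside the package."""
    return sha256_of(path) == approved_digest.lower()
```

The design choice matters more than the code: a signature check proves who signed the package, while this digest comparison proves it is the exact package you vetted. When the signing infrastructure itself is compromised, as in update-chain attacks, only the second check fails loudly.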

3. Bridge the gap between security and operations.

DevOps and clinical engineering teams need to understand supply chain risk. Security teams need to understand operational dependencies and patient-care implications. Governance must connect both perspectives. Supply chain risk is not just an IT issue — it is an enterprise resilience issue.

This is not about adding more tools or assigning blame. Supply chain security is about building systems that can verify trust, even when threat actors target the very mechanisms we rely on to stay secure.

The Notepad++ attack succeeded because organizations trusted a legitimate update mechanism from a respected open-source project. That trust was exploited.

Health systems cannot eliminate trust from their ecosystems. But they can operationalize verification.

When your software updates run tonight — across your EHR integrations, research platforms, imaging systems, and developer environments — will you know exactly what is being deployed?

And more importantly, will you be able to prove it?

When Trusted Updates Become the Attack Vector: What the Notepad++ Breach Means for Health Systems