NHS Open Source Pullback: Security Measures or Misguided Policy?

From Eatin3d, the free encyclopedia of technology

Introduction

The UK's National Health Service (NHS) has announced plans to close nearly all of its open-source repositories, a move that has sparked significant debate within the tech and healthcare communities. The decision comes in response to the increasing sophistication of Large Language Model (LLM) tools, particularly Anthropic's Mythos, which are now capable of identifying security vulnerabilities in publicly available code. However, critics argue that this blanket approach is both unnecessary and counterproductive, potentially undermining the principles of transparency and collaboration that open source embodies.

Source: lwn.net

Background: NHS and Open Source

Open source has long been a cornerstone of NHS digital strategy. During the COVID-19 pandemic, NHSX—the digital transformation arm of the NHS—actively embraced open source, releasing the code for the COVID contact-tracing app to the public as soon as it was available. The move was lauded for its transparency and allowed independent security researchers to scrutinize the code. Despite being installed on millions of devices and facing intense scrutiny from hostile actors, the app's openly published code led to zero security incidents. This example shows that open source can coexist with robust security practices.

Today, the NHS maintains hundreds of repositories on platforms like GitHub. These include data sets, internal tools, guidance documents, research tools, and front-end design elements. The vast majority are not mission-critical security systems. According to Terence Eden, a former NHSX employee, these repositories are "not meaningfully affected by any advance in security scanning" and contain "nothing … which could realistically lead to a security incident."

The Threat from LLM Tools

The NHS justification for the repository closure hinges on the capabilities of LLM tools like Anthropic's Mythos. These AI systems can automatically scan codebases for vulnerabilities, potentially exposing flaws that malicious actors could exploit. While this is a legitimate concern for active software projects handling sensitive data, the NHS's existing repositories are largely dormant or low-risk.

How Advanced Scanning Works

LLM-based security scanners use natural language understanding to analyze code contextually. They can identify patterns that traditional static analysis tools might miss. However, the threats they pose are primarily relevant to production-grade software with active security implications. For static datasets or documentation, the risk is minimal. The NHS decision ignores this distinction, applying a one-size-fits-all policy that penalizes even harmless public research.
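To make the distinction concrete, here is a minimal sketch (in Python; the rule names and sample code are invented for illustration) of the regex-based pattern matching that traditional static analysis relies on. LLM-based scanners reason about surrounding context instead of matching fixed patterns, which is how they can catch flaws these rules miss and skip harmless matches:

```python
import re

# Toy static-analysis rules: fixed regex patterns flagging common
# vulnerability smells. Traditional scanners work roughly like this;
# LLM-based scanners analyze the code's context rather than its shape.
RULES = {
    "hardcoded-secret": re.compile(
        r"(password|secret|api_key)\s*=\s*['\"]\w+['\"]", re.I
    ),
    "sql-string-concat": re.compile(r"execute\(\s*['\"].*\+"),
}

def scan(source: str) -> list:
    """Return (line_number, rule_name) for every line matching a rule."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in RULES.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings

sample = (
    'api_key = "abc123"\n'
    'cursor.execute("SELECT * FROM users WHERE id=" + uid)\n'
)
print(scan(sample))  # → [(1, 'hardcoded-secret'), (2, 'sql-string-concat')]
```

The limitation is visible in the design: a pattern either fires or it does not, with no sense of whether the flagged line is reachable, exploitable, or sitting in a dormant documentation repository—the distinction the NHS policy fails to draw.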

The Controversy

Terence Eden's Perspective

Eden, who played a key role in the NHS's open-source efforts, has been vocal in his opposition. He points out that the new guidance directly contradicts the UK's Technology Code of Practice, specifically point 3: "Be open and use open source." The code requires public sector organizations to make code open by default unless there are clear security or commercial reasons to keep it closed. The NHS's blanket closure appears to violate this principle.

Eden further argues that the NHS is overreacting. The repositories in question are not high-risk; they are mostly data, research, and internal tools. Closing them all sends a chilling message to the developer community and hampers the kind of collaborative innovation that has driven NHS digital progress.

Contradiction with Open Government Policy

The UK government has long championed open source. The Technology Code of Practice is designed to encourage transparency, reuse, and community contributions, and the NHS decision undermines that commitment. If the NHS truly wants to mitigate LLM-based risks, a more nuanced approach—selectively removing only repositories with actionable vulnerabilities, improving code review processes, or commissioning proactive security audits—would be more effective and would not sacrifice openness.
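A proportionate policy could be expressed as a simple triage rule rather than a blanket closure. The sketch below (Python; the category names and example repositories are invented for illustration and are not actual NHS classifications) keeps clearly low-risk content public and flags everything else for review:

```python
from dataclasses import dataclass

# Illustrative triage rule: the categories and repository names here are
# assumptions for this sketch, not real NHS classifications.
LOW_RISK = {"dataset", "documentation", "design-assets", "research"}

@dataclass
class Repo:
    name: str
    kind: str

def triage(repos):
    """Keep clearly low-risk repositories public; review everything else."""
    keep_public, review = [], []
    for repo in repos:
        (keep_public if repo.kind in LOW_RISK else review).append(repo.name)
    return keep_public, review

repos = [
    Repo("covid-dashboard-data", "dataset"),
    Repo("login-gateway", "auth-component"),
    Repo("design-system", "design-assets"),
]
public, flagged = triage(repos)
print(public, flagged)
```

Defaulting any unrecognized category to review keeps the rule conservative while still leaving the hundreds of plainly harmless repositories open—the distinction Eden argues the blanket policy ignores.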

Implications for Public Health Tech

The closure of these repositories could have far-reaching consequences. Researchers, developers, and other health institutions rely on NHS open-source data for their work. Projects that depend on NHS APIs or libraries may face disruptions. Moreover, the move could erode public trust, which is essential for health technology adoption. As Eden notes, the COVID contact-tracing app's open-source strategy built trust through transparency. Reversing course now may signal that the NHS is retreating from that commitment.

Furthermore, the decision sets a precedent for other public sector organizations. If the NHS—one of the world's largest healthcare providers—retreats from open source, others may follow, throttling innovation in health IT. The challenge of LLM-driven security scanning is real, but it demands targeted responses, not wholesale abandonment of open source principles.

Conclusion

The NHS's decision to close open-source repositories in response to LLM tool advancements is a defensive maneuver that may do more harm than good. While protecting against security vulnerabilities is imperative, the current policy overlooks the minimal risk of most NHS repositories and contradicts established government guidelines. As Terence Eden's critique makes clear, a more balanced approach is needed—one that preserves openness while intelligently managing risk. The NHS must find a way to navigate the evolving threat landscape without sacrificing the transparency and collaboration that open source provides.