Anthropic Urges Aggressive US Action to Maintain AI Lead Over China by 2028

Anthropic Warns: US Must Enforce Export Controls and Curb AI Distillation or Risk Losing Edge to China

Anthropic, the artificial intelligence safety company, has released a new policy paper urging the United States and its allies to enforce stricter export controls, combat model distillation attacks, and expand overseas deployment of American AI systems. The goal: to preserve a decisive technological lead over China by 2028.

“Without immediate, coordinated action, China could narrow the gap through illicit distillation of our frontier models,” an Anthropic spokesperson told reporters. “We are calling on policymakers to treat AI chips and model weights as strategic assets—just like nuclear technology.”

The paper, titled Securing the AI Frontier: A Policy Roadmap for 2025–2028, outlines three concrete steps: tightening semiconductor export controls, investing in physical and cyber defenses against distillation, and accelerating the export of US-built AI platforms to allied nations.

Background

The US–China AI race has intensified dramatically since 2022, when export controls on advanced chips were first introduced. Despite those measures, Chinese firms have increasingly turned to “model distillation”—a technique that extracts knowledge from a proprietary AI model by querying it extensively and training a cheaper copy.

These distillation attacks allow adversaries to replicate frontier capabilities without paying for the original training compute. Anthropic’s paper warns that current controls do not adequately address this threat. “Distillation is the back door that could let China leapfrog years of US investment,” the paper states.
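To make the mechanics concrete, here is a minimal, purely illustrative sketch of how a distillation attack works in principle. The "teacher" below is a stand-in function, not any real model API, and all names are hypothetical; a real attack would issue millions of API queries and fine-tune a neural network on the responses.

```python
# Illustrative sketch: an attacker queries a proprietary "teacher" model,
# logs its outputs, and trains a cheap "student" copy on the pairs --
# replicating behavior without paying the original training cost.

def teacher(prompt):
    """Stand-in for a proprietary frontier model behind an API."""
    # Toy behavior: classify prompts by a keyword.
    return "positive" if "good" in prompt else "negative"

def harvest(prompts):
    """Step 1: query the teacher extensively and record its outputs."""
    return [(p, teacher(p)) for p in prompts]

class Student:
    """Step 2: a far cheaper model trained only on the harvested pairs."""
    def __init__(self):
        self.memory = {}

    def fit(self, pairs):
        for prompt, label in pairs:
            self.memory[prompt] = label

    def predict(self, prompt):
        # Return the learned label; unseen prompts get a default.
        return self.memory.get(prompt, "unknown")

prompts = ["good movie", "bad service", "good food", "awful day"]
student = Student()
student.fit(harvest(prompts))
```

The point of the sketch is that the attacker never needs the teacher's weights or training data: observed input/output behavior alone is enough to clone much of its capability, which is why access controls rather than weight security alone are at issue.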

The company also points out that many US AI companies voluntarily restrict access to their models, but argues that a fragmented, industry-by-industry response is insufficient. “Without a unified government framework, individual companies become the weak link,” the spokesperson added.

What This Means

If implemented, Anthropic’s proposals would fundamentally reshape how AI technology is shared globally. Exporting US AI systems to allies—rather than open-sourcing them—could create a trusted bloc that excludes China while reinforcing international AI norms.

However, critics warn that overly aggressive controls could backfire, pushing China to accelerate domestic development and potentially fragment the global AI ecosystem. The paper acknowledges these risks but argues that the cost of inaction is even higher.

“We have a narrow window—roughly three to four years—to entrench US leadership before China’s indigenous innovation catches up,” the paper concludes. “The choices made today will determine whether AI is governed by democratic values or authoritarian control.”

Anthropic’s call comes as the Biden administration prepares to finalize new rules on AI chip exports and as Congress debates broader AI oversight legislation. The company has briefed staff at both the National Security Council and the Department of Commerce.

Key Recommendations

  • Strengthen semiconductor export controls – Expand restrictions to cover memory bandwidth and packaging technologies, not just compute capacity.
  • Mandate distillation-proof APIs – Require frontier AI providers to deploy rate limits, output watermarking, and behavioral auditing to detect and prevent model theft.
  • Promote “AI diplomacy” – Create a US-led export program that deploys safe, certified AI systems to allies in exchange for non-compete agreements on frontier research.
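As a rough illustration of the second recommendation, the sketch below shows one of the controls it names: a per-client sliding-window rate limiter that flags query volumes consistent with large-scale harvesting. The class name, thresholds, and interface are assumptions for illustration, not Anthropic's actual design.

```python
# Minimal sketch of an anti-distillation control: a sliding-window
# rate limiter that caps each client's query budget. Clients that hit
# the cap would be candidates for behavioral auditing.

import time
from collections import deque

class DistillationGuard:
    def __init__(self, max_queries, window_seconds):
        self.max_queries = max_queries      # queries allowed per window
        self.window = window_seconds        # window length in seconds
        self.history = {}                   # client_id -> deque of timestamps

    def allow(self, client_id, now=None):
        """Return True if this query fits within the client's budget."""
        now = time.monotonic() if now is None else now
        q = self.history.setdefault(client_id, deque())
        # Drop timestamps that have aged out of the sliding window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_queries:
            return False  # Budget exhausted: flag for audit.
        q.append(now)
        return True

guard = DistillationGuard(max_queries=3, window_seconds=60)
```

Rate limiting alone cannot stop a patient or distributed attacker, which is why the recommendation pairs it with output watermarking and behavioral auditing to detect harvesting spread across many accounts.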

The full policy paper is available on Anthropic’s website. The company says it will engage with lawmakers in both parties to ensure the recommendations are considered in upcoming legislative packages.
