
OpenAI Accuses DeepSeek of Distilling US Models for Advantage

Date: February 12, 2026
Company: DeepSeek
Category: Policy, Business & Society

Narrative

According to the memo, DeepSeek is using distillation techniques and obfuscation methods to extract outputs from leading US frontier models (including OpenAI's) to train its next-generation systems, as part of "ongoing efforts to free-ride on the capabilities developed by OpenAI and other US frontier labs." OpenAI says it detected new programmatic access attempts by DeepSeek-linked accounts that bypassed safeguards and used third-party routers to hide the activity.

OpenAI Memo to US House Select Committee on China (reported via Bloomberg/Reuters)

Reality

Memo sent February 12, 2026; widely reported February 13–14. No public response from DeepSeek yet. The accusation focuses on preparations for DeepSeek's next model (likely V4). DeepSeek recently expanded its context window to over 1M tokens (from 128k) and updated its knowledge cutoff to May 2025 (from July 2024) in ongoing V3 iterations, fueling speculation about V4 readiness.

Implication

Escalates US-China AI tensions and the "free-riding" debate amid export controls and chip restrictions. Highlights distillation as a growing competitive threat to US labs' moats (high R&D and compute costs vs. low-cost replication). Note: while OpenAI characterizes this as "intellectual property theft," the AI research community has long used distillation for efficiency; the legal question centers on whether DeepSeek violated OpenAI's Terms of Service (ToS) by using model outputs to train a competing commercial product. Could accelerate policy responses (e.g., tighter API safeguards, further GPU export limits). Adds pressure on DeepSeek ahead of the anticipated V4 release (mid-February, coding-focused), potentially amplifying market reactions if V4 lands strong despite the controversy.
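
For context on the technique at issue: output-based ("black-box") distillation requires only API access to a teacher model, not its weights, which is why ToS terms and API-level safeguards are the main levers available to US labs. Below is a minimal, hypothetical sketch of the general pattern, assuming the openai Python client; the prompts, file name, and teacher model choice are illustrative placeholders, and nothing here depicts DeepSeek's actual methods.

```python
# Minimal sketch of output-based ("black-box") distillation, for illustration only:
# collect a teacher model's responses to prompts and save them as training pairs
# for a student model. Assumes the `openai` Python client and an API key in the
# environment; prompts, file name, and teacher model are hypothetical.
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompts = [
    "Explain the difference between a process and a thread.",
    "Write a Python function that reverses a linked list.",
]

with open("distill_pairs.jsonl", "w", encoding="utf-8") as f:
    for prompt in prompts:
        # Query the teacher model and record its answer.
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # teacher model (placeholder)
            messages=[{"role": "user", "content": prompt}],
        )
        answer = response.choices[0].message.content
        # Each line becomes one supervised fine-tuning example for the student.
        f.write(json.dumps({"prompt": prompt, "completion": answer}) + "\n")
```

The resulting prompt-completion pairs would then feed a standard supervised fine-tuning run for a student model; the low cost of this loop relative to frontier-scale pretraining is what drives the "free-riding" concern above.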

Tags

  • deepseek
  • openai
  • regulation
  • chinese-ai