OpenAI Publishes o1 Safety Research

Date: January 15, 2025
Company: OpenAI
Category: Models & Research

Narrative

Chain-of-thought reasoning enables better alignment: models can deliberate on safety policies before responding. A new "deliberative alignment" paradigm reduces jailbreak success rates.

OpenAI Research

Reality

Safety improvements were documented across benchmarks. However, DeepSeek's R1 release in the same month showed that reasoning capability is attainable without a deliberative-alignment safety layer, raising questions about whether safety work provides a durable moat.

Implication

The research introduced the deliberative alignment concept. But rapid open-source reasoning development complicated the safety narrative: there is no clear path to preventing the proliferation of reasoning capabilities.

Tags

  • openai
  • safety
  • reasoning
  • research