Key Points
- Researchers from OpenAI, Google DeepMind, Anthropic, and other organizations are calling for deeper investigation into techniques for monitoring AI reasoning models
- Chain-of-thought (CoT) monitoring could be a key method for understanding how AI models make decisions
- The researchers argue that CoT monitoring could be a valuable addition to safety measures for frontier AI
- AI model developers are encouraged to study what makes CoTs ‘monitorable’ and to track monitorability over time
- The research community is urged to make the best use of CoT monitorability and to study how it can be preserved

Introduction to AI Reasoning Models
In a position paper, AI researchers from OpenAI, Google DeepMind, Anthropic, and other companies are urging the tech industry to monitor the ‘thoughts’ of AI reasoning models. The paper highlights the importance of understanding these models, which are a core technology powering AI agents.
A key feature of AI reasoning models is the chain-of-thought, or CoT: an externalized process in which a model works through a problem step by step. The researchers argue that CoT monitoring could be a valuable addition to safety measures for frontier AI, offering a rare glimpse into how AI agents make decisions.
The Need for CoT Monitoring
The researchers note that while AI labs have excelled at improving AI performance, relatively little is understood about how AI reasoning models actually reach their conclusions. They argue that CoT monitoring could be a key method for understanding AI decision-making, but caution that this visibility could be fragile: interventions during training or deployment could reduce the transparency or reliability of CoTs.
The paper’s authors call on AI model developers to study what makes CoTs ‘monitorable’ and to track CoT monitorability over time. They also encourage the research community to make the best use of CoT monitorability and to study how it can be preserved.
Industry Response
The position paper has been signed by notable figures, including OpenAI chief research officer Mark Chen, Safe Superintelligence CEO Ilya Sutskever, and Nobel laureate Geoffrey Hinton. It marks a rare moment of unity among AI industry leaders in an effort to boost research into AI safety.
Source: techcrunch.com