Anthropic Expands Data Use Policy for Claude Consumers

Key Points

  • Anthropic will train Claude models on new or resumed chat and coding data unless users opt out.
  • Data retention is extended to five years for users who do not opt out.
  • All Claude consumer tiers (Free, Pro, Max, Claude Code) are covered; commercial tiers and API users are excluded.
  • Users must make a decision by September 28 via a pop‑up or sign‑up toggle.
  • The default setting for data sharing is turned on; users can toggle off to opt out.
  • Opt‑out can be changed later in Settings → Privacy → “Help improve Claude”.
  • Anthropic filters sensitive data and does not sell user information to third parties.

Policy Overview

Anthropic disclosed that it will start using new or resumed chat transcripts and coding sessions from Claude users to train and improve its AI models, unless users actively opt out. The company is also lengthening its data‑retention window to five years for users who do not choose to opt out.

Scope of the Change

The updated policy applies to every consumer subscription tier of Claude, including the Free, Pro, and Max plans as well as Claude Code. It does not extend to Anthropic’s commercial tiers, such as Claude Gov, Claude for Work, and Claude for Education, or to API customers and users accessing Claude through third‑party platforms like Amazon Bedrock and Google Cloud’s Vertex AI.

User Decision Process

New users will encounter a preference toggle during the Claude sign‑up flow, while existing users will see a pop‑up prompting them to accept the revised terms. The notice informs users that they must make a choice by September 28. A “Not now” button allows users to defer the decision, but the deadline remains firm.

The pop‑up prominently displays the heading “Updates to Consumer Terms and Policies” and offers a large “Accept” button. Beneath this, a smaller toggle labeled “Allow the use of your chats and coding sessions to train and improve Anthropic AI models” is set to “On” by default. Users who click “Accept” without changing the toggle consent to data use and the five‑year retention period.

Opt‑Out Mechanism

Anyone wishing to decline can flip the toggle to “Off” within the pop‑up. Those who have already accepted can later change their setting by navigating to Settings → Privacy → Privacy Settings and turning off the “Help improve Claude” option. Anthropic notes that any change only affects future data; data already used for training cannot be retracted.

Data Handling Practices

Anthropic emphasizes that it uses a combination of tools and automated processes to filter or obfuscate sensitive information before it is used to train its models. The company also states unequivocally that it does not sell user data to third parties.

Implications for Users

The default‑on setting means many users may inadvertently grant Anthropic permission to train on their interactions, especially if they accept without reviewing the toggle. While the policy offers a clear path to opt out, the requirement to decide by a specific date adds urgency. Users on commercial plans remain unaffected, preserving the existing data handling arrangements for those segments.

Source: theverge.com