Users of Anthropic Must Decide – Share Conversations or Opt Out

Anthropic has updated its policies, requiring Claude users to decide by September 28 whether their conversations can be used for AI training. Previously, user prompts and outputs were deleted within 30 days, or retained for up to two years if flagged for policy violations. The new system extends retention to five years for users who opt in, while business users of Claude for Work, Claude Gov, and API services remain exempt.

The company emphasizes user choice, stating that shared data helps improve model safety and strengthens Claude's abilities in coding, analysis, and reasoning. The change also gives Anthropic access to large-scale conversational data critical for competing with AI leaders like OpenAI and Google, improving model performance and yielding valuable insights from real-world interactions.

These changes highlight wider industry challenges. OpenAI faces a court mandate to retain all ChatGPT conversations, raising questions about privacy and user consent. Meanwhile, many AI users remain unaware of policy shifts, often consenting without fully realizing the consequences, which complicates the pursuit of transparent and ethical AI practices.

Anthropic’s update presents new users with a consent screen and existing users with a pop-up featuring a large “Accept” button and a smaller toggle for training permissions that is switched on by default. Experts warn this design may lead users to share data unintentionally, raising ethical concerns and inviting regulatory scrutiny. The policy change underscores the ongoing tension between AI innovation, data collection, and user privacy.