Anthropic wants to use user conversation data to train AI models

Technology company Anthropic, which previously did not use consumer conversation data to train its artificial intelligence (AI) models, now wants to train its AI systems on users' conversations and coding sessions.

According to a TechCrunch report on Thursday (August 28th), Anthropic is making a major change in how it handles user data, requiring Claude users to choose by September 28th, 2025 whether their conversations may be used to train AI models.

Additionally, the company stated that it will extend the data retention period to five years for users who do not opt out.

This new policy is different from the old rules.

Previously, Anthropic automatically deleted consumer product user data within 30 days, unless it was legally or by policy required to retain the data longer, or the user's input was flagged as a policy violation.

If flagged as a policy violation, user input and output data may be retained for up to two years.

The changes to the data handling policy apply to Claude Free, Pro, and Max users, including users of the Claude Code service.

Customers of business services such as Claude Gov, Claude for Work, and Claude for Education, as well as API users, are not affected.

Data from companies using Anthropic's business services will still not be used to train AI models.
