Meta has announced new safety measures, changing how its artificial intelligence (AI) products are trained in order to prioritize the safety of teenagers who use its chatbots.
According to a TechCrunch report published Sunday, the company will now train its chatbots to avoid engaging teens on sensitive topics such as self-harm, suicide, and eating disorders, as well as potentially inappropriate romantic conversations, and will limit underage users to a select set of AI characters.
“As we continue to refine our systems, we are adding more restrictions as an additional precaution, including training our AI to not discuss these topics with teens, but instead direct them to expert resources,” said Meta spokesperson Stephanie Otway.