Meta launches a new tool that allows parents to monitor their children's conversations with artificial intelligence


Meta, the parent company of Facebook and Instagram, has announced a new tool that gives parents access to the topics of their children's conversations with its smart chatbots.

Parents already received alerts when their children discussed serious topics such as suicide or self-harm; the new tool provides a more comprehensive and detailed view of those conversations.

Launched on April 23, this service allows parents using parental control tools on Facebook, Messenger, and Instagram to access a new tab called Insights, which includes an option titled "Their Interactions with AI." This option displays a list of topics their children have discussed with Meta bots over the past seven days.

The topics include broad main categories such as school, travel, writing, entertainment, lifestyle, and health and wellness, as well as subtopics under each category. For example, wellness subtopics include mental and physical health, while lifestyle includes topics such as fashion and food.

However, to use this feature, children must be using "Teen accounts" available on Meta platforms, according to PC Mag. The tool will initially be available to parents in the United States, the United Kingdom, Australia, Canada, and Brazil, with a global version to be released in the coming weeks.

The launch of this tool comes shortly after Meta was found liable in a lawsuit and ordered to pay $375 million for failing to prevent the exploitation of children on its apps.

To further enhance teen safety, Meta also announced the formation of an "AI Wellbeing Expert Council," a group of specialists who will provide ongoing feedback on teen AI experiences to ensure they remain safe and age-appropriate. Meta employees working on AI projects are expected to hold regular meetings with the council to discuss feature updates and gather product feedback.

It is worth noting that children's safety on social media has become an increasingly prominent issue in recent months. Last March, a California court awarded a woman $6 million in damages after she argued that Meta's and Google's (YouTube) apps caused her depression and anxiety, claiming their products were designed to be addictive and had kept her hooked since childhood. The ruling marks the first time social media companies have been held liable for the harmful effects of their products on individuals, particularly children and teenagers; the jury found that the apps lacked adequate measures to protect younger users.


